Wednesday, August 29, 2007

What's in an OS X Package (.pkg) file?

I recently had occasion to figure out how packages work on OS X. This isn't an extensive treatise on the subject, and I didn't bother to look for any docs. This is just some poking around I did in the Finder and mostly in Terminal.

The original motivation was that I wanted to uninstall a package that I'd installed. The package file is Subversion-1.4.4.pkg. The first thing I learned is that packages are another case where a directory (aka folder) shows up as a single file in the Finder. You can see the contents in the Finder by control-clicking on the package file and selecting Show Package Contents. The Finder opens up a new window (just like any other folder).

All I see in my package is a folder called Contents. Within that folder is a file called Archive.pax.gz, which is a compressed pax archive. Pax is an archive program like tar and cpio. (Actually, it's an experiment in genetic engineering - to "solve" the tar vs. cpio wars, the two were merged and called pax, Latin for peace.) The pax archive was subsequently compressed with gzip - hence the .gz extension.

To see what's in the archive, we need to decompress it and get pax to print a listing of the contents. I did this as follows, although there are other ways to skin this cat:

cd /tmp
gzip -d < Subversion-1.4.4.pkg/Contents/Archive.pax.gz | pax -v

(Pax has an option to do the decompression itself, but I'm too old-fashioned.) Note that I redirect the compressed archive into gzip. I did this because I wanted to leave the archive compressed. I could have decompressed the archive and then run pax on that, but I wanted to leave the entire package unharmed. The first bit of the output looks like:


-rwxr-xr-x 1 root wheel 1664424 Jun 22 23:57 ./usr/local/bin/svnadmin
-rwxr-xr-x 1 root wheel 1571744 Jun 22 23:57 ./usr/local/bin/svndumpfilter
-rwxr-xr-x 1 root wheel 1664612 Jun 22 23:57 ./usr/local/bin/svnlook


So, we can see that these are the files that got installed (in /usr/local). One nice thing to note is that the path names are all relative - they begin with "./" rather than just "/". This means the files can be installed anywhere - change into a directory, extract the archive, and the files will show up in a directory called usr/local below your current directory.
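You can see both tricks - streaming the compressed archive through a pipe, and relative-path extraction - with a self-contained toy run. This sketch uses tar instead of pax (pax isn't installed everywhere, but tar treats "./" paths the same way), and every file name in it is made up; nothing outside a scratch directory is touched:

```shell
# Everything happens in a scratch directory; nothing real is touched.
tmpdir=$(mktemp -d)
cd "$tmpdir"

# Build a toy ./usr/local tree and make a compressed archive of it,
# using relative ("./") path names just like the package does.
mkdir -p usr/local/bin
echo 'fake binary' > usr/local/bin/svnadmin
tar -cf - ./usr | gzip > archive.tar.gz

# List the contents by streaming the decompressed bytes into tar;
# the .gz file on disk is never modified.
gzip -d < archive.tar.gz | tar -tf -

# Extract somewhere else entirely: because the paths are relative,
# the files land below the current directory, not in the real /usr.
mkdir elsewhere && cd elsewhere
gzip -d < ../archive.tar.gz | tar -xf -
cat usr/local/bin/svnadmin
```

The same shape with `pax -v` (list) and `pax -r` (read/extract) in place of `tar -tf -` and `tar -xf -` is what the commands above and below do.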

How can I use all of this to uninstall these files? Again, there are many ways to skin the cat, but here's what I did. I extracted the archive to /tmp. ("I thought you wanted to remove the files. Why are you extracting them again?" Patience, my friend.)

cd /tmp
gzip -d < Subversion-1.4.4.pkg/Contents/Archive.pax.gz | pax -r

This creates all of the files, but under /tmp - e.g., /tmp/usr/local/bin/svnadmin. I can now use find to get me a list of just the files:

find usr -type f -print > /tmp/fff
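As a sanity check on the -type f filter, here is a toy run in a scratch directory (the file names are made up): it shows that find lists only the regular files, not the directories containing them.

```shell
# Scratch directory with a fake extracted tree.
tmpdir=$(mktemp -d)
cd "$tmpdir"
mkdir -p usr/local/bin usr/local/lib
touch usr/local/bin/svnadmin usr/local/lib/libsvn.a

# -type f matches regular files only, so the directories themselves
# are left out of the list - exactly what you want to feed to rm.
find usr -type f -print | sort
# prints:
# usr/local/bin/svnadmin
# usr/local/lib/libsvn.a
```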

I then looked through the file names in /tmp/fff, and they made sense, so I removed them all. One subtlety: the paths in /tmp/fff are relative (usr/...), so to delete the installed copies rather than the ones sitting under /tmp, run the rm from the root directory:

cd /
sudo rm -i `cat /tmp/fff`

The sudo command was needed because the files were not owned by my user ID. The package installer asked for the administrator password and installed the files owned by root (as I recall). Of course, I could have avoided "installing" the files in /tmp by running the output of pax -v through some awk or perl, but the archive was small, and I knew the options to find off the top of my head - I would have had to look up some awk or perl, since I don't use them that often any more.

enjoy,
Charles.


P.S. This post almost never was: Blogger mangled the fonts repeatedly, and I almost gave up on it. Stupid JavaScript HTML editors!

Wednesday, August 22, 2007

Everything I Know About Business I Learned from My Mama

Everything I Know About Business I Learned from My Mama by Tim Knox is nominally a business book. However, it is certainly not your typical business book. The author doesn't tell you what corporate structure (e.g., LLC vs. S-Corp) is best, how to keep the books, or how to manage a Fortune 500 company. Rather, this is a bigger picture view of going into business for yourself. Tim Knox is all about entrepreneurship.

Although this book isn't really a how-to manual with lots of nuts-and-bolts details, it would be a really good book for someone who is thinking of going into business for himself. It even begins with a bunch of reasons why someone shouldn't go into business. If, after reading the book, someone still wanted to go into business, he should probably read a few more books before jumping in. (The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It is next on my list to read.)

The book is pretty short and humorous, and the chapters are nice little bite-sized chunks. Here's an attempt at his sense of humor: this would be a great book to keep next to the toilet - each chapter is about one visit long. (I have a special place in my heart for books like that.) I suspect the book was easy for Tim to write since he's been writing newspaper articles for some time. I can easily imagine that many of the chapters are extensions of articles he wrote for the paper, bless his heart.

One minor nit I could pick with the title is that there isn't all that much mention of stuff his mama told him. Rather, there's a fair amount of common sense that one's mother might impart.

I bought this book because I've been listening to Tim on Dan Miller's radio show - see www.48days.com. Dan is a career coach and wrote the book 48 Days to the Work You Love. The two of them spend a lot of time telling people to quit jobs they hate and move towards work they love. The radio show recently ended, and they've switched to an Internet format. The radio shows and the new Internet shows are available as podcasts.

enjoy,
Charles.

Roomba Rants

I recently started using my (comparatively old) Roomba again, and I've been reminded of my chief complaint about Roomba: it gets dirty and needs to be cleaned.

Our house is a regular petting zoo, so stuff accumulates pretty quickly. This fills up the bin, which isn't so bad because emptying the bin is quick and easy. But the human hair and pet fur fouls the brush, and that takes some serious effort to clean. I suppose the Roomba for pets (which appears to be discontinued) addresses this with a brush that is comparatively easy to clean.

And then last night I discovered a new place to clean: the side sweeper brush. That thing was totally fouled (not surprising considering I just discovered that I needed to clean it - "read all the words"), and it took me 10 to 15 minutes with my pocket knife and tweezers.

I find it ironic that a cleaning tool requires a non-trivial amount of cleaning. Perhaps if I kept up with regular vacuuming, Roomba wouldn't get so overwhelmed by crap.

Finally, the reason I'm just getting back to using Roomba is that I just got a new battery after killing the old one some time ago. It turns out managing batteries with Roomba is a bit different than with other devices I've used. Typically, I run devices down and recharge them. If I don't have an immediate use, I'd leave the device uncharged.

Well, Roomba keeps discharging the battery even if it's not in use. And if you do that to a battery that's already low, it deeply discharges the battery - definitely not A Good Thing. I noticed the discharging thing: sometimes when I'd charge the battery and not use Roomba immediately, the battery would be low when I went to use it. So, Roomba prefers being left on the charger, even if you're not using it, which is something I tend to avoid with other devices.

Anyway, the result was that I only got ~50 cycles out of the original battery - not good. The recording on Roomba's tech support line tells you that if you're not using the unit anytime soon, you should take the battery out. Fortunately, it's easy to install and remove the battery - e.g., no screws required.

Don't get me wrong: Roomba is still fun to use, and I did just put down more money for a new battery. So, I will continue to use it. It's just not as carefree and easy as I'd like to see it.

Later,
Charles.

Friday, August 17, 2007

New iMac

Even though I was wishing for a Core2 Mini, when it was (finally) released (somewhat silently), I opted instead for one of the new iMacs, and I'm loving it.

My original plan was to get a Mini and use a KVM along with the PC boat-anchor that I keep around for a client. However, after using a KVM for a while with other PCs, I realized that KVMs can be kinda hokey. For example, my KVM makes a keyboard look generic - i.e., it hides any special keys. Also, although I have a cool 22" Viewsonic LCD, it's not as nice as an Apple display.

I opted for the 20" 2.4GHz model with the stock 1GB of memory. I was very pleased to see how easy it will be to upgrade the memory when I get a few extra dollars. The 24" was pretty tempting (the price is quite reasonable), but I figured if I ever need a larger display, I can use the external video connected to my (somewhat inferior) Viewsonic for a ~40" display experience.

The CPU is hella fast, although I admit I'm comparing it to an 800 MHz G4 iMac which was significantly taxed (~20% CPU) running Firefox and Gmail. To date, the only time I've maxed out the two cores was converting audio files into MP3 with iTunes. My Internet is still only 128K ISDN, which makes for slow Gmail, but Steve Jobs can't be expected to fix that.

The new, thin keyboard is something that cannot be appreciated in a store standing over it. You have to sit in front of it with a real chair in a real working position to appreciate the nice ergonomics. My one gripe is that my thumb drive is too fat to fit in the USB sockets.

And finally, the Apple educational discount and promotions were sweet. After the rebates, I'll have a free Nano and printer, both of which I gave to the wife - score!

Anyway, Joe Bob says, 5 stars (on a scale of 4) - check it out!

enjoy,
Charles

Wednesday, June 13, 2007

Google Docs for Family Support

I talked before about using Google Docs as a small-time wiki, but I've recently come up with another, slightly less geeky use for Google Docs. My sister and I live in Oregon. Our mother lives down in Orange County, CA. She was recently diagnosed with stage 4 lung cancer and was temporarily hospitalized and then released.

My sister and I have been in the difficult position of trying to coordinate care from far. We both work full-time, and so we tag-team on calling people. As we call various professionals and friends, we learn new names and phone numbers of more people to call. (Sounds like Amway, but it isn't. :-)

Anyway, I set up a Google spreadsheet with the names and numbers I was working from and shared it with my sister. She added the information she had collected. We also have columns for email addresses (where relevant) and notes about the person (e.g., a friend from church, the social worker from the hospice, etc.).

The collaborative feature of Google Docs (and spreadsheets) is a life saver. And, if you're away from the Internet, just print it out and take it with you - how 20th century. Joe Bob says, four stars - check it out!

enjoy,
Charles.

Thursday, March 01, 2007

Book: Windows Forensics and Incident Recovery

Windows Forensics and Incident Recovery
by Harlan Carvey - ISBN 0-321-20098-5

This is a great book because I learned more than I thought I would from it. Coming from a command-line Unix background, I tend to view Windows as excessively GUI-centric (maybe that's why it's called Windows?) and full of opaque Microsoft voodoo. This book showed me that there are plenty of things to be learned from the Windows command line, and there are lots of transparent, open-source tools to expose the inner workings of Windows.

There are really three types of information in this book: how Windows works, tools to collect information about Windows, and the bigger task of forensic information extraction and processing. There is a lot of information about basic operating systems concepts (files, processes, etc.) and how they are implemented in Windows. I especially liked the presentation of user privileges - we typically only hear about those in the context of administrator versus non-administrator, but there is a listing of each of the individual privileges and what they mean. The tools that the author presents are primarily command-line tools, and many of them are written in Perl - very approachable for an old Unix hack. (A second edition of this book would benefit from a treatment of Microsoft's WMIC tool.) With the basic groundwork laid, the author presents a bigger picture of how to use all of the tools in a forensic investigation. He presents a series of dreams, which are a bit corny, but they serve as a sequence of case studies. He also provides a "forensic server" for storing all the little bits of information that get collected - a bit like "real" tools such as EnCase.

Like the book File System Forensic Analysis, one of my favorite aspects of this book is that it provides a lot of practical information about applied operating systems - Windows. The author provides links to a lot of tools and web pages, so this book serves as an excellent starting point to learn a lot more about Windows and forensic data recovery. The text includes complete source code for the Perl tools, so a code-oriented reader can really see what the information is and where it comes from.

If I had to criticize something in this book, I'd say that Chapter 9 on scanners and sniffers drifts a bit from the central theme of the book, but then I've found that to be pretty common in security books because so many of the topics are interrelated; you start pulling one thread on the sweater, and the next thing you know, you've unraveled the whole thing.

All in all, this is a great starting point for learning about forensic data acquisition on the Windows platform.

Enjoy,
Charles.


Tuesday, January 09, 2007

Google Docs - Quick and Dirty Wiki

I've been using wikis for collaborative software development for at least five years. I've typically used TWiki. It's a pretty cool tool, but the set-up is non-trivial, especially for a small project. Also, the last wiki I set up got spammed big time - it was on the Internet rather than a private intranet.

Google recently bought out JotSpot before I discovered it. It sounds like a cool idea - they set it up for you, and I guess it could be configured to be totally private, which would eliminate the spam. I'll be very curious to see what it looks like when it comes back online.

In the meantime, I've discovered Google Docs and Spreadsheets as a collaborative tool. At first I thought, free Word and Excel - who cares? I've already paid The Evil Empire for my software. But when you add the Internet storage and collaboration features, it becomes very cool. A shared document becomes a wiki page!

I see two really nice features with Google documents compared to run-of-the-mill wikis: the formatting and editing is done in a word processor (implemented in Ajax), not another markup language, and the sharing is on a document-by-document, user-by-user basis. In a small team, it's nice to allow Bob to see the document and Jane to edit it. On another document, they can both edit it. Of course, in a large team, configuring this one-by-one would suck.

Another nice feature is that you can upload documents (from Word, OpenOffice, and others), so someone can begin something in a word processor (maybe while s/he is offline) and convert it trivially into a pseudo-wiki page. Sure, you can cut and paste from Word into a blog or wiki, but I hate the way some characters get mangled in the process.

The Google spreadsheets are nice for collaborative project management. I've been using Voo2do for managing simple task lists. It's nice, but the collaboration options are limited: you can share a password-protected, read-only view, but to allow someone else to edit, as near as I can tell, you have to grant them access to your whole account - not just the one task list you wanted to share.

With Google spreadsheets, all you have to do is create a spreadsheet with the tasks (a la Joel on Software) and share it, either read-only or read-write.

I'm currently working on a small project with two other people, and I've just gotten into this. So far, it's great. More news when it happens.

Enjoy,
Charles.

Saturday, November 11, 2006

Core2 Mac Mini

There has been a fair amount of speculation about when/if Apple will release a version of the Mac Mini using the Core 2 Duo processor. For example:

Apple hints at Core 2 Duo Mac mini?

How Soon Will the Mac mini Go Core2?


Personally, I'm dying to see a Core2 Mini. At the moment, I'm stuck with an 800 MHz G4 iMac that just doesn't cut it any more, but I can't afford a 20" Core2 Duo iMac.

For the sake of argument, why would Apple not update the Mini to the Core2? Part of that depends where Apple really sees the Mini. When the (G4) Mini first came out, one of the pitches was for PC developers to use it in addition to a PC via a KVM switch. In this scenario, it made sense for Apple to make it beefy.

When the first Intel Macs came out, the Mini had a Core processor (I'll call the Core processor the "Core1" just to be extra clear) just like the other Macs, in particular the iMac. They wanted the Core architecture, but there weren't many of those processors, so the Mini and the iMac were pretty similar. Then the Core2 came out and Apple bumped the iMac to a Core2 but left the Mini at a Core1.

I take this to indicate that Apple is trying to create some diversity in the product line - i.e., the need to differentiate the Mini from the iMac. Thus, they need to keep the Mini crippled, and they're no longer pitching the Mini as a developer machine. This is (unfortunately) like the IBM PC Jr back in the day. Now, they have bumped both Mini models to a Core1 Duo, but it's still a Core1 not a Core2 - i.e., still crippled.

The other possible motive (for Apple) of keeping the Mini at Core1 is profit margin. When the Core2 came out, Intel slashed the price of the Core1, but Apple has not dropped the price of the Mini. If Apple had a big stock of Core1 processors (especially Core1 Duos) when the Core2 came out (not likely given how shrewd Apple often is), this gives them a way to flush their Core1 inventory. More likely, Apple is just making bank on the reduced cost of the inputs.

So, I can see a few reasons why Apple may not bump the Mini to a Core2 very soon. I hope I'm wrong, because I'd really like a Core2 Mini. The MacWorld release timeframe that is being rumored would fit my budget very nicely.

enjoy,
Charles.

Thursday, August 24, 2006

Flex: Debug Flash Player on OS X

As noted in "Building and Deploying Flex2 Applications" (Live Docs), the output of trace statements can be configured in the standalone debug version of the Flash player. The documentation describes the location of the mm.cfg file on Mac OS X using Mac-style (non-Unix) path notation with colons to separate directories.

I set up a config file, but for the life of me, I couldn't find the output file. Then, I found an old, pre-Flex2 blog post by Christian Cantrell which fills in two missing parts of the documentation. First of all, if no TraceOutputFileName directive is given, the trace output will go to flashlog.txt in the same directory as the config file - fair enough. Once I removed the directive, I found the output file.

Also, if a path name is given, the path must be specified in the old Mac-style notation (with colons) rather than Unix-style (with slashes). I suppose that's why they specified the location of the config file in that notation, but the significance of the colons was lost on me. One totally trivial tidbit correctly noted in Christian's blog post is that the directory name is "Macromedia" not "macromedia", as indicated by the docs. (Since the Mac's HFS+ file system is basically case insensitive, it doesn't really matter.)
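Putting that together, a minimal mm.cfg might look like the following. (These are the standard debug-player directives from that era; per the above, the simplest setup omits TraceOutputFileName entirely so the log lands in flashlog.txt next to mm.cfg. The commented-out path is a made-up example, shown only to illustrate the colon-separated Mac-style notation.)

```
ErrorReportingEnable=1
TraceOutputFileEnable=1
# TraceOutputFileName=Macintosh HD:Users:charles:Library:flashlog.txt
```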

Once I figured that stuff out, I could get my trace output and get on to fixing my bugs.

I do have (at least) one gripe with the debug Flash player on the Mac compared to the PC - it doesn't keep a history of recent URLs. So, every time you go to open a URL, you have to type it in again (or paste it from a browser, which does keep history). Even if you keep the player running, it still doesn't remember the last URL it opened.

Enjoy,
Charles.

Wednesday, July 19, 2006

Flex: Sending Messages from Server-Side Java Code

I've been trying to figure out how to send messages from the depths of my Java application out to Flex clients. I'm not talking about messages that originate on a client, pass through the Java server, and then head back to the clients - that stuff is documented in the manual. I mean there is a process going on in the server that needs to send messages to the clients.

Of course, JMS and the JMS adapter would work, but I've been really leery of the complexity of JMS since it came out years ago. It has the advantage that it would allow my application to be separated from the Flex server, which would work well in terms of scaling. I may yet try JMS, but for the moment, I'm looking to avoid JMS.

So, I was thinking that writing a message service adapter would be the key. Adobe has a brief example of creating a custom Message Service adapter. It seems to be a complete example, but it left me with lots of questions. For example, Message is an interface - which concrete class do we use? How do we construct it? What all can we do with the MessageService? What value are we returning from invoke?

After some digging around in the sample code, I came across a couple of Java classes associated with the dashboard sample application. So, I've taken the code from that example, pared it down a bit, and wrote a very simple, Soviet-style (butt-ass ugly but functional) Flex interface for it.

Below is the server-side Java code.

import java.util.*;

import flex.messaging.MessageBroker;
import flex.messaging.messages.AsyncMessage;
import flex.messaging.util.UUIDUtils;

public class FmsDateFeed
{
    private static FeedThread thread;

    public FmsDateFeed()
    {
        System.out.println("FmsDateFeed.ctor: " + this);
    }

    public static void main(String[] args)
    {
        FmsDateFeed feed = new FmsDateFeed();
        feed.toggle();
    }

    public void toggle()
    {
        if (thread == null)
        {
            System.out.println("FmsDateFeed.toggle: starting...");
            thread = new FeedThread();
            thread.start();
        }
        else
        {
            System.out.println("FmsDateFeed.toggle: stopping...");
            thread.running = false;
            thread = null;
        }
    }

    public static class FeedThread extends Thread
    {
        // volatile so the feed thread reliably sees the stop flag set by toggle()
        public volatile boolean running = true;

        public void run()
        {
            MessageBroker msgBroker = MessageBroker.getMessageBroker(null);
            String clientID = UUIDUtils.createUUID(false);
            System.out.println("FmsDateFeed.run: msgBroker=" + msgBroker + ", clientID=" + clientID);

            Random random = new Random();
            while (running)
            {
                Date now = new Date();

                AsyncMessage msg = new AsyncMessage();
                msg.setDestination("time_msgs");
                msg.setClientId(clientID);
                msg.setMessageId(UUIDUtils.createUUID(false));
                msg.setTimestamp(System.currentTimeMillis());
                HashMap body = new HashMap();
                body.put("userId", "daemon");
                body.put("msg", now.toString());
                msg.setBody(body);
                System.out.println("sending message: " + body);
                msgBroker.routeMessageToService(msg, null);
                try
                {
                    long wait = random.nextInt(8000) + 3000;
                    System.out.println("sleeping: " + wait);
                    Thread.sleep(wait);
                }
                catch (InterruptedException e) { }
            }
        }
    }
}


The toggle method is called from the client to turn the message stream on or off. When messages are turned on, a thread is created and the run method is called. It just loops creating Java Date objects and putting the String representation of the date into Flex messages - instances of AsyncMessage. The key for me was the routeMessageToService method on the MessageBroker class. When you have the class compiled, install the Java class file in samples/WEB-INF/classes.

The client is pretty simple, by comparison. It has two features: a button for turning the message stream on and off (via a RemoteObject call) and an event handler to process the incoming messages.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
    backgroundColor="#FFFFFF"
    creationComplete="initComp()">

    <mx:Script>
        <![CDATA[
            import mx.rpc.events.ResultEvent;
            import mx.messaging.events.MessageEvent;

            public function initComp():void {
                consumer.subscribe();
            }

            public function toggle():void {
                feeder.toggle();
            }

            public function messageHandler(event:MessageEvent):void {
                var body:Object = event.message.body;
                output.text += body.userId + ": " + body.msg + "\n";
            }
        ]]>
    </mx:Script>

    <mx:RemoteObject id="feeder" destination="time_msgs">
        <mx:method name="toggle"/>
    </mx:RemoteObject>
    <mx:Consumer id="consumer" destination="time_msgs" message="messageHandler(event)"/>

    <mx:Form>
        <mx:FormItem>
            <mx:Button label="Toggle Date Feed" click="toggle()"/>
        </mx:FormItem>
        <mx:FormItem label="Output Message">
            <mx:TextArea id="output" width="250" height="150" />
        </mx:FormItem>
    </mx:Form>
</mx:Application>

The interesting features are the configuration information for the RemoteObject and the message Consumer. These correspond to destinations configured on the server (see below). When the toggle button is pressed, we just call the toggle ActionScript method, which makes a call on our RemoteObject, feeder; we don't care about a return value, which simplifies things.

We configure the Consumer to call our messageHandler method when new messages arrive. It just gets the message body and appends it to our text box. Install this file somewhere in the samples directory - I created my own subdirectory for dumb, little programs like this.

To configure the server, edit messaging-config.xml in samples/WEB-INF/flex and clone the destination for "dashboard_chat" and call the new version "time_msgs". You could change the Java and MXML to use the existing dashboard_chat destination (which is what I did initially), but a) you could end up with a conflict with the Flex demos, and b) writing to messaging-config.xml (or any of those other config files) is a convenient way to get the server to restart and see your new Java class.

Oh yeah, you also need to edit remoting-config.xml to contain a RemoteObject destination also called "time_msgs". (You could use separate names for the two destinations, and it would probably be less confusing, but I was too unimaginative.)
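For illustration, the two destinations might look roughly like this. (This is a sketch, not copied from a real install: the messaging entry should be whatever your cloned dashboard_chat destination contains with only the id changed, so the adapter and channel refs below are placeholders that must match your existing config; the remoting entry maps the destination to the Java class via its source.)

```xml
<!-- messaging-config.xml: cloned from the dashboard_chat destination -->
<destination id="time_msgs">
    <adapter ref="actionscript"/>
    <channels>
        <channel ref="my-rtmp"/>
    </channels>
</destination>

<!-- remoting-config.xml: maps the RemoteObject to the Java class -->
<destination id="time_msgs">
    <properties>
        <source>FmsDateFeed</source>
    </properties>
</destination>
```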

If all goes well, you should get messages with the current time showing up randomly in the Flash client.


Enjoy,
Charles.