Sunday, 30 October, 2005

Memory leaks and reference counting

It's not that I have nothing to say, but rather I don't have time to say it.  I've been hard at work these last few weeks on the project, trying to make sure that everything we need for the end of month deliverable is ready.  And oh, boy, did I run into a fun one.

Back on the 19th I mentioned that I was having trouble with memory leaks.  That turned out to be just the start of my problems.  Memory management is hard--really hard--and even really good programmers have trouble with it in relatively simple programs.  In complex programs, where several subsystems need references to a single object, it gets harder still.  A very common problem is dereferencing an object that has already been destroyed.  Imagine this:  Subsystem A creates an object.  Subsystem B obtains a reference to the object.  After a while, Subsystem A destroys the object.  Bad things happen when Subsystem B then tries to access the destroyed object.  This is what I call "pulling the rug out from under me."

A common solution to this problem is reference counting.  When Subsystem A creates the object, the object's reference count is set to 1.  When Subsystem B obtains a reference, the reference count is increased to 2.  Now, when Subsystem A "destroys" the object, the reference count is set back to 1 but the object isn't actually destroyed.  It will only be destroyed when the reference count goes to zero (i.e., when Subsystem B releases the reference).
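To make the bookkeeping concrete, here's a minimal sketch of the scheme described above.  It assumes a single-threaded program (a production version would use an atomic or interlocked counter), and the class and method names are my own invention, not from any particular library:

```cpp
#include <cassert>

// Minimal sketch of intrusive reference counting, single-threaded.
class RefCounted {
public:
    RefCounted() : refs_(1) {}        // the creator holds the first reference
    void AddRef() { ++refs_; }        // another subsystem takes a reference
    void Release() {                  // "destroy" just drops one reference...
        if (--refs_ == 0)
            delete this;              // ...only the last drop actually deletes
    }
    int RefCount() const { return refs_; }
protected:
    virtual ~RefCounted() {}          // force destruction to go through Release()
private:
    int refs_;
};

class Widget : public RefCounted {};
```

With this in place, Subsystem A's "destroy" becomes a call to Release(), and the object quietly outlives it until Subsystem B calls Release() as well.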

Reference counting works if everybody follows the rules.  But what happens all too often is that somebody doesn't follow the rules (surprise, surprise), and all of a sudden you're back to memory leak land except that it's more difficult to track down who actually is the culprit.  Who didn't release the object?  Oh, it do get ugly in a hurry.

The really frightening part is that people sometimes implement reference counting in order to eliminate memory leaks.  There's a good idea!  Solve a complicated problem (memory management) by putting an even more complicated problem (reference counting) on top of it.  Whoever thought that was a good idea should have his fingers glued together so that he can't do any more programming.

Tracking down memory leaks and mismatched object reference counts is why you haven't been hearing much from me lately.

Sunday, 23 October, 2005

Fighting with Amazon

I just spent a very frustrating 10 minutes fighting with Amazon's Web site.  I don't use Amazon very often, which might be part of the problem.  Still, I'm no dummy (at least, I don't think I am), and when a Web site is difficult for me to figure out I often wonder how less computer-savvy people manage to navigate it.  Amazon's programmers and Web designers need to delete their cookies and try operating the site without cookies enabled, or with expired cookies.  Or am I the only person in the world who doesn't somehow have his computer automatically log in to Amazon at startup?

Amazon is a great site for finding and buying all kinds of things.  But their site is horribly difficult to navigate.  Sometimes if I click on "Add to Cart," the resulting screen shows my cart and supplies a button that will let me return to the item or list of items I was viewing.  Other times, not.  Same with the one-click ordering:  sometimes I can get back to where I was and sometimes I have to search again to rebuild the list.  The pages are filled with so many order buttons, add-to-cart and one-click-order links, buy-new-and-used offers, recommended items, specials, and everything else that I'm overwhelmed.  All I wanted was a repair manual for my truck!  I didn't need to fight through the promotions, ads, recommendations, and assorted other trash just to find that the wrong item ended up in my cart.  I finally had to close my browser and start over.  Amazon and I just weren't getting along in that browser session.

Buying from Amazon is like walking into a bookstore looking for a specific item or items, and being continually interrupted by salespeople who jump into my path and scream "Look!  Buy this now!" while gesturing wildly with a book that's peripherally related to something I bought in the store five years ago.  Maybe I'll start shopping at Barnes & Noble's online site.

Wednesday, 19 October, 2005

Hunting Memory Leaks

One of the things I like best about the .NET Framework is the automatic memory manager.  Not having to worry about freeing memory is a huge load off a programmer's mind.  Old-time C and C++ programmers scoff at the idea of garbage collection, for two reasons.  The first is the curmudgeon's:  "I don't need no fancy garbage collector.  By golly, if a programmer can't remember to free the memory he's allocated then he shouldn't be programming."

It'd be nice to live in such a perfect world.  The fact is that all too often programmers don't free the memory they've allocated.  Even really good programmers forget, especially when they're faced with an API that doesn't make it clear who is responsible for freeing memory.  All too many programmers write APIs with the screwiest side effects:  it's often difficult to tell which functions allocate memory and, if they do, when or if it's safe to free it.  In addition, maintenance programmers often end up allocating memory in an object's constructor and conveniently forget to free it in the destructor.  There are tools that will help you identify memory leaks, but many programmers either don't know the tools exist or don't understand that memory leaks are almost inevitable, especially once a project goes into maintenance mode.
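Here's a contrived sketch of that constructor/destructor pattern.  The Buffer::live counter isn't something you'd ship; it's only there so the leak can be observed, and all the names are hypothetical:

```cpp
#include <cassert>

// A helper whose instances we can count, purely for illustration.
struct Buffer {
    static int live;                  // number of Buffers not yet freed
    char* data;
    Buffer()  : data(new char[4096]) { ++live; }
    ~Buffer() { delete [] data; --live; }
};
int Buffer::live = 0;

// A maintenance programmer added the allocation to the constructor...
class LeakyReport {
public:
    LeakyReport() : buf_(new Buffer) {}
    ~LeakyReport() { /* ...but the matching "delete buf_;" never made it here. */ }
private:
    Buffer* buf_;
};

// The corrected class releases what it allocated.
class FixedReport {
public:
    FixedReport() : buf_(new Buffer) {}
    ~FixedReport() { delete buf_; }
private:
    Buffer* buf_;
};
```

The leaky version compiles cleanly and runs fine; nothing complains until somebody counts allocations, which is exactly why this bug survives for years.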

The other reason that C++ programmers don't like the .NET garbage collector is something called "nondeterministic finalization."  In short, there's no way to tell exactly when an object's destructor will be called.  This can be a problem in some relatively rare circumstances, but there are ways around it.  Mostly, programmers fixate on the nondeterministic finalization argument because it's the only solid argument against garbage collection.  Never mind that it's a non-issue in all but a handful of cases that most programmers will never encounter.

I'm in the process of removing memory leaks from a couple hundred thousand lines of old C and C++ code that I inherited.  There are some very common patterns to the way that memory is leaked in this code.  Most of the bugs fall into two categories:  memory that's allocated when an object is created and not released when the object is destroyed, and memory that's allocated as a side-effect of calling a function and not released by the caller.  The first category is usually caused by maintenance programmers.  The second is the result of non-obvious side effects.
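The second category looks something like this hypothetical function.  Nothing in its signature tells the caller that it now owns the returned memory:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Hypothetical API illustrating allocation as a side effect.
// The function mallocs a buffer and hands it back; the caller
// must free() it -- but how would they know?
char* BuildGreeting(const char* name) {
    size_t len = strlen(name) + 8;          // "Hello, " (7) + name + '\0'
    char* msg = (char*)malloc(len);
    snprintf(msg, len, "Hello, %s", name);
    return msg;                             // ownership silently transfers here
}
```

Every caller who doesn't read the implementation (or unusually good documentation) will leak one buffer per call, and the leak shows up far from the code that caused it.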

This code, by the way, is not just one isolated system that suffers from memory leaks.  Every C, C++, and Delphi project that I've been called in to work on has had memory leak problems.  Proper memory management is very difficult.  The .NET Framework's use of garbage collection to avoid memory leaks is a Good Thing.

Tuesday, 18 October, 2005

Old Calculating Devices

Last year I mentioned the Datamath Calculator Museum when talking about my old Radio Shack EC-4000 programmable calculator.  We were discussing slide rules, old calculators, and old technology in general on my ham radio group's email list recently and somebody mentioned The Old Calculators Web Museum.  That site focuses on early electronic and electromechanical desktop calculating machines--a technological step back from the handhelds.  It has pictures and very detailed descriptions of many old machines, and links to many similar sites that cover calculators, early office equipment, and old computers.  There's even a Calculator Collector Web Ring, if you can believe it.

If you're interested in slide rules, by the way, a good place to start is Eric's Slide Rule Site.  This attractively done site has basic information about the history of slide rules, how to use them, and the different scales that you'll find on some of the devices.  There are downloadable instruction manuals, a great "how to clean a slide rule" section, and an annotated links page that will probably get you more information than you ever wanted to know about slide rules.

Slide Rule Universe is apparently the slide rule resource site.  It's also one of the ugliest Web sites I've seen in a very long time.  I'm reminded of a fortune cookie I received several years ago:  "Remember that elegance and good taste are usually a matter of less, not more."  There is an incredible amount of information on that site, but the page design (I use that term lightly) is painful.

Friday, 14 October, 2005

The Death of a Truck

My truck died today.  Its death came as no big surprise, as I have known for a couple of months that it was on its last legs.  It was kind of interesting to see it go.

Back in June, the "Check Gauges" light started coming on when the engine was warm and idling.  A quick glance at the dashboard showed that the oil pressure was in the red--registering less than 10 PSI.  The guys at the shop I took it to said, in effect, "No problem.  These older model GM trucks do that sometimes.  Run thicker oil."  I'm no mechanic, but that didn't sound right to me.  I asked around a bit, talked to my brother-in-law who owns an auto repair shop, and then had the dealer look at it and confirm what I suspected:  the engine was worn out.  Oil just wasn't circulating well for some reason.  We did determine that it wasn't the oil pump causing the problem, and it was too expensive to do any more diagnostics to find the cause.

The nice thing about having a worn engine that still mostly works is that there's nothing to lose by driving it.  Short of rebuilding the engine, there was nothing I could do to make things better, and continuing to drive it wasn't going to make the eventual repair any more expensive.  So that's what I did:  I kept driving it.  For three months I drove the thing around town, revving the engine at stop lights to keep the oil pressure out of the red, hoping every time that I could make it to my destination.  I would not have done this if I were commuting to work every day, but with me working at home most of my trips were non-critical.

On my way home today I noticed a light knocking sound that was new, and the oil pressure started falling very near the red line even at higher RPM.  By the time I was about five miles from home, the knocking was very loud and the engine was developing very little power.  I knew that this would be the truck's last trip on that engine.  The engine had "spun a bearing":  one (or more) of the bearings on the piston rods comes loose from the crankshaft and slaps around, so the affected cylinders fire when the piston is not at top dead center.  It makes for some very nasty exhaust in addition to the noise.

I'm somewhat disappointed that the engine only lasted 120,000 miles.  The rest of the truck is in very good shape:  the body is straight, the transmission and the rest of the drive train are good, and everything else checks out okay.  The tires will probably last another 50,000 miles and I just had the brakes redone so they should last at least as long.  I want to keep the truck because it's paid for, in good condition, and because it's been very reliable over the years.  But I don't know quite how to evaluate the risks and rewards of replacing the engine.

The dealer gave me a quote of $5,600 to completely replace the engine.  It's always good to get the dealer price, as that gives you a high figure from which to work.  I'm waiting on quotes from some mechanics for a "long block," which consists of a new engine block, pistons, rings, etc.:  everything from the head gasket down.  I suspect that will be less than half of the dealer's "new engine" price.  Allowing for the unexpected, I figure that I can have the truck operational again for about $4,000.  Kelley Blue Book tells me that the truck, with a good engine, is worth between $2,500 and $3,000 on the used market if I sell it myself.  Here in Central Texas, where small trucks are at a premium, it's probably worth a bit more, but $4,000 would be pushing it.  So by replacing the engine I'd end up spending more on the truck than the truck is worth.

That's one way of looking at the vehicle's value.  But its value to me is a more complicated issue.  A new truck would cost $20,000.  That's a lot of money for a vehicle that I'll drive maybe 400 miles per month.  A comparable truck on the used market will run me between $4,000 and $6,000.  I can probably get $1,500 or $2,000 for my existing truck if I find a buyer who's willing to do the work himself to put in a new engine.  I understand that there are many such people out there.  Buying a used truck and selling the existing one would probably end up costing me the same $4,000 out of pocket by the time I'm done.  The advantage of keeping what I have is that I know the vehicle's history.

The final thing to consider is what I call the aggravation factor.  At some point the mental pain of futzing with something becomes overwhelming and I am willing to spend money to make it go away.  A brand new truck has an aggravation factor of almost zero.  That makes it quite attractive.  It looks to me as though the aggravation factor for buying a used truck versus replacing the engine is about the same.  On one hand I'd have to deal with the inevitable minor glitches that usually accompany replacing an engine.  On the other hand I'd have the problem of getting used to whatever idiosyncrasies a used vehicle will have.  It sounds like a wash to me.

At the moment I'm leaning towards replacing the engine.  If you have any insight on the matter I would appreciate hearing it.

Wednesday, 12 October, 2005

A bug solved and one lurking

My primary task on the graphics project is to update our 3D world editor to work with the new graphics engine.  We've been delivering monthly milestones to the customer so that they can see our progress and test the product.  We try to make fixing bugs the highest priority.  Fortunately, we're working with a mature tool that has its idiosyncrasies but is in most cases reasonably solid.  The drawback to working with such a tool is that, when a problem does crop up, it's usually tough to track down and repair.  I ran into such a problem today.

The Timeline Editor is a subsystem that lets you script motions of objects or cameras in the world.  It's a pretty simple concept:  set the object's positions at particular times and the underlying animation infrastructure will move the object smoothly between those positions.  The dialog looks like this:

The numbers are seconds.  In this example, I have set the camera's position at start (0 seconds) and at several other points in time.  This particular example is a 30 second level fly-through that returns to the starting point and then repeats.  The long vertical bar at the 4 second mark is the current position in the timeline.  You can use the mouse to move the time bar to any position and the objects being controlled will move to the proper position.

The important things to note here are the existence of a horizontal scrollbar and zoom capability.  In theory, the timeline can be any length.  You can use the scrollbar to scroll through the existing time marks, or drag the current time bar to the right in order to set a time mark far into the future.  This all works great, and I'm able to script some very long animations.  But I'm getting ahead of myself.

The problem I ran into today was that if I attempted to drag the time bar to the left of the zero mark, it would jump forward a few seconds.  And if I moved the mouse outside the window on the left, the thing would start scrolling forward in a big hurry.  It took a while to track down, but I finally traced the problem to this line of code in the WM_MOUSEMOVE handler:

int MouseX = LOWORD(lparam);

Makes sense, right?  WM_MOUSEMOVE passes the current mouse position as two unsigned words in the lparam:  horizontal position in the low word and vertical position in the high word.  This is standard Windows API stuff.  The problem is that the mouse position is actually a signed quantity.  If you drag the mouse outside the left border of the window, the horizontal position is -1.  But converting an unsigned 16-bit word to a 32-bit integer zero-extends it:  the result can never be negative, so a horizontal position of -1 arrives as 65535, a point far to the right of the window.  The code above worked as expected in Windows 3.1, when an int was 16 bits long and the assignment simply reinterpreted the bit pattern.  Once I figured out what was happening, it was easy enough to change the code to this:

int MouseX = (short)(LOWORD(lparam));

That forces a signed conversion and then a sign extension.  Problem solved.
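The same conversion can be re-created outside of Windows.  In this sketch I define LOWORD and the word types locally so the example stands alone (real Windows code would get them from windows.h):

```cpp
#include <cassert>

// Stand-ins for the Windows types and macro, so this compiles anywhere.
typedef unsigned int   DWORD;
typedef unsigned short WORD;
#define LOWORD(l) ((WORD)((l) & 0xFFFF))

// The buggy extraction: unsigned 16-bit word to int zero-extends.
inline int MouseXBuggy(DWORD lparam) { return LOWORD(lparam); }

// The fixed extraction: reinterpret as signed 16 bits, then sign-extend.
inline int MouseXFixed(DWORD lparam) { return (short)(LOWORD(lparam)); }
```

For an in-window position like x = 100 the two versions agree, which is exactly why the bug hides until the mouse crosses the left edge of the window.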

Except that there might be a much bigger problem lurking in that code.  If the scrollbar code is using the old WM_HSCROLL notifications, the scroll position is a 16-bit value, so the virtual window is limited to 32,768 pixels--about 655 seconds of timeline at 50 pixels per second.  The zoom capability limits that further, cutting the maximum timeline length in half for each zoom level.  If I limit the zoom to three levels, giving the user a resolution of about 2.5 milliseconds, then the timeline is limited to only about 82 seconds.  That's not long enough.

There are two possible solutions:  ignore the values passed with WM_HSCROLL and use the GetScrollPos function to read the scroll position, or get rid of the scrollbars and use a world-to-screen transformation.  For this application, a 32-bit scroll position would be plenty (giving me about 43 million seconds or roughly 500 days).  For other applications, even a 32-bit scroll position is inadequate.
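A world-to-screen transformation might look something like this sketch.  The struct, the member names, and the 64-bit millisecond representation are my own assumptions for illustration, not the editor's actual code:

```cpp
#include <cassert>

// Baseline scale from the timeline description above.
const long long PIXELS_PER_SECOND = 50;

// Keep the scroll position in world coordinates (64-bit milliseconds)
// and map world time to client-area pixels on demand.  The scrollbar,
// if kept at all, becomes purely cosmetic.
struct TimelineView {
    long long origin_ms;   // world time at the left edge of the window
    int zoom;              // each zoom level doubles pixels per second

    // world time -> client-area x coordinate
    int WorldToScreen(long long time_ms) const {
        return (int)((time_ms - origin_ms) * (PIXELS_PER_SECOND << zoom) / 1000);
    }

    // client-area x coordinate -> world time
    long long ScreenToWorld(int x) const {
        return origin_ms + (long long)x * 1000 / (PIXELS_PER_SECOND << zoom);
    }
};
```

With 64-bit world coordinates the 16-bit (or even 32-bit) scrollbar limit disappears entirely; only the window's own client area ever needs to fit in an int.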

These kinds of bugs keep programmers up nights.

Sunday, 09 October, 2005

USB FlashDrives The Next PC?

Slashdot ran an article this evening, USB FlashDrives The New PC?, linking to the AP article Flash Drives Make Any Computer 'Personal'.  The idea behind the article?  Your flash drive is your PC.  Rather than lugging around a laptop, just carry the flash drive and plug it into any PC.  The flash drive contains your operating system and data, making a truly portable computer.

The idea isn't new.  I mentioned the possibility in my Random Notes entry on August 29, 2003.  We have the flash drives and the ability to boot from them (although I don't know if it's possible to boot Windows from them).  What we don't have is a large installed base of computers designed to be used in this manner.  It's a great idea and I'd love to see it happen.  But I'm not going to hold my breath.  For now the idea will be limited to niches like colleges and some companies.  Somebody will have to make a compelling case for major hotel chains to implement this.

Friday, 07 October, 2005

Five Years of Random Notes

Today marks five years that I've been writing Jim's Random Notes.  I'm very surprised that I've been able to keep at it this long.  I never promised to write every day, but I have managed over 1,300 entries in the past five years.

No great revelations on this anniversary.  I just wanted to congratulate myself on staying with it.

Tuesday, 04 October, 2005

Fun with glue and Velcro

I dislike drilling holes in my truck.  As you can imagine, that makes it somewhat difficult to mount a radio in the cab.  My solution to the problem is this industrial strength Velcro that I picked up at Home Depot a couple of years ago.  It's plenty strong enough to support a two pound radio.  At least, the hook and loop part is.  The adhesive on the back is another matter entirely.

The radio hangs from a bracket that's designed to be bolted to a surface.  The top of the bracket is flat, making it an excellent place to stick on a piece of Velcro.  I used sandpaper to rough up the painted surface, cleaned the surfaces with the recommended alcohol-based cleaner, and attached the Velcro pads.  Like magic, the radio was stuck under the dashboard.  Until I parked the truck outside in the sun.  I came back to find the radio on the floorboard, the Velcro pieces still stuck to the dashboard.  The adhesive backing had peeled away from the mounting bracket.

Never one to give up easily, I pulled out my trusty tube of handyman's all-purpose adhesive and glued the Velcro strip to the bracket.  I let it sit overnight and the next morning installed it in the truck.  And that's the way it's worked for about a year.  Recently I noticed that the adhesive was starting to peel away again and over the weekend I found the radio hanging by a very thin strip.  This was an interesting failure mode.  My glue didn't come off.  The adhesive backing peeled away from the Velcro strip.

That all-purpose adhesive is some sticky stuff and hard to remove from whatever surface you've placed it on.  I soaked the bracket and the Velcro strip in a little Orange TKO overnight.  This morning the adhesive came right off.  The laundry room smells like orange oil, but that's a small price to pay.

My problem now is figuring out how to mount the darned radio.  I still don't like drilling holes.  I'm tempted, now that the adhesive backing is all gone, to try again with the all-purpose adhesive.  But I'm open to suggestions.  Anybody have ideas?

Sunday, 02 October, 2005

Ham radio happenings

I was busy most of last week in the final push to get the September milestone deliverable out to the client.  As a result I've not been doing much outside of normal work and home life.  About the only non-work activities I've managed recently involve ham radio.

  • I usually take my lunch break at home and try to take a full hour.  It's good to get away from what I'm working on, to let my brain rest and let the current problem filter through the back of my mind.  While I'm preparing (usually waiting for the microwave to finish) and eating my lunch I've been reading about radio wave propagation and studying for my Extra Class license.  It sure beats the techno-thrillers I've been reading recently, which seem to be the same old story with the same old characters doing the same old things with new little twists.  But that's a separate rant.
  • I've made it a habit each evening to turn on the HF rig and scan the bands.  I try to work at least one contact every night, not only to get an idea of how my antenna performs but also to get more proficient at operating, more familiar with generally accepted practices, and to develop an ear for the way people say things on the air.  Anybody can work strong signals.  It takes practice to work weak signals and carry on a conversation with somebody for whom English is a second or third language.  I still do more listening than talking.
  • The semi-annual Belton ham fest (think of a flea market for radio gear) was this Saturday.  My friend Steve and I went up to walk around and peer at the cool stuff and to pick up a few things that are difficult to find locally.  The place was deserted compared to the last few times I've been there.  More than half of the table spaces inside were empty and it wasn't crowded at all.  I think Katrina and Rita had something to do with that, as Belton is usually well attended by people from Louisiana and east Texas.  The price of gas probably didn't help much, either.
  • The California QSO Party was held this weekend, with stations in California trying to make as many contacts as possible and stations in other states trying to contact as many California stations as they could.  Band conditions were about what you'd expect at the bottom of the solar cycle (which is to say not very good) but I still managed 50 contacts (all on 20 meters) in about four hours of operating on my modest little station.  Another skill that requires lots of practice is making oneself heard in a pileup.  I can't compete on signal strength with stations that have 500 watt amplifiers and directional antennas.  It takes patience, but eventually I get through.
  • Next week I start practicing Morse Code again.  At least, that's my plan.  For some reason, the idea of using Morse Code interests me more than voice.  We'll see how I feel about that once I get proficient and actually make some CW contacts.

Maybe I've gone a bit nuts with the radio, but I've found that the only way I learn something is by immersing myself in it and working on it a little every day.  It sure beats playing FreeCell or watching TV.