
For the past couple of days, Paul and I have been working on reducing the memory footprint of our application. One of the things that caught me a little off guard was the size of some of our HashMaps that were caching data from a database for quick access.

The map is keyed on GUIDs from the table, which are 16 bytes each. Each GUID is wrapped in an object, which adds another 8 bytes of header for a total of 24 bytes per key. (The Eclipse memory analyzer confirms this number.)
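
To make the arithmetic concrete, picture a key class along these lines. The name and fields here are hypothetical (our actual class is different), but any flat 16-byte representation comes out the same:

```java
// Hypothetical key class: a 16-byte GUID held as two longs.
// (equals/hashCode omitted for brevity)
final class Guid {
    final long hi; // 8 bytes
    final long lo; // 8 bytes
    // 16 bytes of fields + ~8 bytes of object header ≈ 24 bytes per key

    Guid(long hi, long lo) {
        this.hi = hi;
        this.lo = lo;
    }
}
```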

The problem is that, while we had about 20 megabytes of data, our memory analyzer told us the map was holding on to well over 100 megabytes. I expected overhead for the map, but not a fivefold increase. So what gives?

Here’s the breakdown of memory overhead for a HashMap.Entry:

  • 4 bytes for the pointer to the key
  • 4 bytes for the pointer to the value
  • 4 bytes for the hashCode
  • 4 bytes for the next pointer (collisions are resolved by chaining colliding elements into a linked list)
  • 8 bytes for the Entry object itself

Total: 24 bytes
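
For reference, those fields correspond to something like the following sketch of the Entry class. This is paraphrased, not the actual JDK source, and the 4-byte reference sizes assume a 32-bit JVM or a 64-bit JVM with compressed oops:

```java
// Roughly the shape of java.util.HashMap.Entry.
static class Entry<K, V> {
    K key;            // 4 bytes: reference to the key object
    V value;          // 4 bytes: reference to the value object
    Entry<K, V> next; // 4 bytes: next entry in the bucket's collision chain
    int hash;         // 4 bytes: cached hash code of the key
}                     // + ~8 bytes of object header = 24 bytes per Entry
```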

So, in my example, keys are 16-byte GUIDs wrapped in objects with 8 bytes of overhead each, for a total of 24 bytes per key. When I put one into the map, I get another 24 bytes of overhead for the HashMap.Entry object as detailed above, for a total of 48 bytes per key. Doing the math, \( \frac{48}{16} = 3 \), so there’s a threefold increase in the size of the stored data even before counting the space needed to hold the map’s values. (Adding in the value objects accounted for the rest of the space.)
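
Spelled out per key:

\[
\underbrace{16}_{\text{GUID}} + \underbrace{8}_{\text{key object header}} + \underbrace{24}_{\text{HashMap.Entry}} = 48 \text{ bytes}, \qquad \frac{48}{16} = 3.
\]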

For small maps or maps with large keys, the overhead is negligible and not worth worrying about. And every mapping scheme is going to have some overhead for pointers and the like. But when you have a ton of keys at only 16 bytes each, the 8-byte object headers (and even the 4-byte hash, which doesn’t strictly need to be stored in the Entry) become pretty substantial.

I tried a couple of alternate routes, including writing my own open-addressing hash map with quadratic probing, backed by a plain byte array. However, since I can’t store object references directly in a byte array (meaning the value objects still carry their own heap overhead), it didn’t end up saving as much space as I had hoped. The byte array stored 4-byte indices into a value object array, which in turn held 4-byte references to the value objects themselves. Plus, the byte array is pretty sparse – quadratic probing means a load factor of around 0.5 at most – and making a memory-efficient sparse array is about as memory-intensive as the original HashMap. In the end, the byte-array-backed map approach only brought me down from 48 to around 29 bytes per entry on average.
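
This isn’t the code from our app, but here’s a minimal sketch of the approach, with the class name, sizing policy, and probing details made up for illustration. Each slot packs the 16-byte key next to a 4-byte index into a value array, and quadratic probing keeps the table at most half full; resizing and removal are omitted to keep it short:

```java
import java.util.Arrays;

/**
 * Sketch of an open-addressing map from 16-byte keys to value objects, backed
 * by a single byte[]. Each slot is 20 bytes: the 16-byte key followed by a
 * 4-byte index into a parallel Object[] of values (-1 marks an empty slot).
 * Collisions are resolved with quadratic probing, so the table is kept at
 * most half full. Resizing and removal are left out.
 */
public class ByteArrayGuidMap<V> {
    private static final int KEY_BYTES = 16;
    private static final int SLOT_BYTES = KEY_BYTES + 4; // key + value index
    private static final int EMPTY = -1;

    private final byte[] table;    // capacity slots of SLOT_BYTES bytes each
    private final Object[] values; // value objects, stored densely
    private final int capacity;    // number of slots, always a power of two
    private int size;

    public ByteArrayGuidMap(int expectedEntries) {
        int cap = 16;
        while (cap < expectedEntries * 2) cap <<= 1; // keep load factor <= 0.5
        capacity = cap;
        table = new byte[capacity * SLOT_BYTES];
        values = new Object[capacity / 2];
        for (int slot = 0; slot < capacity; slot++) {
            writeIndex(slot, EMPTY); // mark every slot as unused
        }
    }

    public void put(byte[] key, V value) {
        if (key.length != KEY_BYTES) throw new IllegalArgumentException("need a 16-byte key");
        int slot = findSlot(key);
        int idx = readIndex(slot);
        if (idx == EMPTY) { // new key: copy it into the slot and claim a value cell
            if (size * 2 >= capacity) throw new IllegalStateException("map is full (no resizing in this sketch)");
            System.arraycopy(key, 0, table, slot * SLOT_BYTES, KEY_BYTES);
            idx = size++;
            writeIndex(slot, idx);
        }
        values[idx] = value;
    }

    @SuppressWarnings("unchecked")
    public V get(byte[] key) {
        int idx = readIndex(findSlot(key));
        return idx == EMPTY ? null : (V) values[idx];
    }

    /** Probe with triangular increments (1, 3, 6, 10, ...) until we hit the key or an empty slot. */
    private int findSlot(byte[] key) {
        int mask = capacity - 1;
        int slot = Arrays.hashCode(key) & mask;
        for (int i = 1; readIndex(slot) != EMPTY && !keyEquals(slot, key); i++) {
            slot = (slot + i) & mask;
        }
        return slot;
    }

    private boolean keyEquals(int slot, byte[] key) {
        int base = slot * SLOT_BYTES;
        for (int i = 0; i < KEY_BYTES; i++) {
            if (table[base + i] != key[i]) return false;
        }
        return true;
    }

    // The 4-byte value index is stored big-endian right after the key bytes.
    private int readIndex(int slot) {
        int p = slot * SLOT_BYTES + KEY_BYTES;
        return (table[p] & 0xFF) << 24 | (table[p + 1] & 0xFF) << 16
             | (table[p + 2] & 0xFF) << 8 | (table[p + 3] & 0xFF);
    }

    private void writeIndex(int slot, int idx) {
        int p = slot * SLOT_BYTES + KEY_BYTES;
        table[p] = (byte) (idx >>> 24);
        table[p + 1] = (byte) (idx >>> 16);
        table[p + 2] = (byte) (idx >>> 8);
        table[p + 3] = (byte) idx;
    }
}
```

Note how the value objects themselves are untouched by this scheme: they still live on the heap with their own headers, which is a big part of why the savings topped out where they did.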

So how do you handle a large amount of data split into small chunks with small keys, without getting killed by the overhead? Is this just something you have to chalk up to the cost of a virtual machine?