Most programmers understand garbage collection, but very few know that memory can become fragmented and riddled with holes just like a hard drive, with far more serious consequences.


All languages, interpreted or compiled, give programmers the ability to allocate and release spans of memory. Objects, structures or simple blobs of addressable space can be created, used and returned to the memory pool once they are no longer needed. But there's a catch:

Even with the most efficient memory manager, even with the best-in-class garbage collection algorithm, there is no guarantee that after a piece of code has done its thing the memory will have the same capacity to hold data as before. Let this sink in for a second: you write your code to the best of your ability, your debugger and profilers tell you there is plenty of memory to go around, and yet your program crashes because it ran out of memory, and there is little you can do about it at that point.

Consider, for example, a piece of code that takes a string already stored in memory, and simply adds an extra character at the end. Regardless of the language, and except in very special circumstances, the program will need to allocate a new chunk of memory to hold the new string, copy the data over (adding the character at the end) and then free the old memory block.
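In C, those steps look roughly like the sketch below (`append_char` is a hypothetical helper, not a standard function):

```c
#include <stdlib.h>
#include <string.h>

/* Append one character to a heap-allocated string. A new, larger block
 * is allocated, the old contents are copied over, and the old block is
 * freed -- leaving behind a hole of the old block's size. */
char *append_char(char *old, char c) {
    size_t len = strlen(old);
    char *bigger = malloc(len + 2);   /* old chars + new char + NUL */
    if (!bigger) return NULL;
    memcpy(bigger, old, len);         /* copy the existing data */
    bigger[len] = c;                  /* add the character at the end */
    bigger[len + 1] = '\0';
    free(old);                        /* the old block becomes a hole */
    return bigger;
}
```

Every call leaves a freed block behind that is one byte too small to hold the grown string, which is exactly how the holes accumulate.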


Rinse and repeat. A million times. Ten million times. Across days, weeks or months. The memory space of any non-trivial program becomes a series of holes where new data may not fit. Granted, with today's computers and heap sizes, a condition like this is unlikely on server-class hardware, but on low-end devices it is a real possibility.
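A toy model makes the effect concrete. The sketch below (all names are hypothetical) manages a 64-unit heap with a first-fit allocator; after freeing every other 8-unit block, half the heap is free, yet a request for 16 contiguous units fails because no single hole is larger than 8 units:

```c
#include <stdbool.h>
#include <string.h>

#define HEAP_UNITS 64

/* Toy heap model: each cell is one allocation unit, true = in use. */
static bool heap[HEAP_UNITS];

/* First-fit: find n contiguous free units, mark them used and return
 * the start index, or -1 if no hole is big enough. */
static int toy_alloc(int n) {
    int run = 0;
    for (int i = 0; i < HEAP_UNITS; i++) {
        run = heap[i] ? 0 : run + 1;
        if (run == n) {
            for (int j = i - n + 1; j <= i; j++) heap[j] = true;
            return i - n + 1;
        }
    }
    return -1;
}

static void toy_free(int start, int n) {
    for (int i = start; i < start + n; i++) heap[i] = false;
}

static int free_units(void) {
    int total = 0;
    for (int i = 0; i < HEAP_UNITS; i++) total += !heap[i];
    return total;
}

/* Fill the heap with eight 8-unit blocks, then free every other one.
 * 32 units are free, but the largest hole is only 8 units, so a
 * 16-unit request fails even though twice that much memory is free. */
static bool demo_fragmentation(void) {
    memset(heap, 0, sizeof heap);
    int blocks[8];
    for (int i = 0; i < 8; i++) blocks[i] = toy_alloc(8);
    for (int i = 0; i < 8; i += 2) toy_free(blocks[i], 8);
    return free_units() == 32 && toy_alloc(16) == -1;
}
```

The profiler would report 50% of this heap as free, and the allocation would still fail.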

The solution to this problem is to run a process called memory compaction that physically relocates all objects in the application's heap and rewrites references and pointers so that all free memory becomes a single block again:
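A minimal sketch of the idea, assuming a toy heap where every object is reached through a handle table (real collectors walk object graphs and use forwarding pointers, but the principle is the same; all names here are hypothetical):

```c
#include <string.h>

#define HEAP_SIZE 64
#define MAX_OBJS  8

/* Toy compacting heap: objects are addressed through a handle table,
 * so compaction can move them and fix up one "pointer" per object. */
static unsigned char heap[HEAP_SIZE];
static struct { int offset; int size; int live; } handles[MAX_OBJS];

/* Slide every live object toward the start of the heap and update its
 * handle; afterwards all free space is one contiguous block at the end.
 * Assumes handles are stored in address order, as a real compactor
 * walks the heap linearly. */
static int compact(void) {
    int dst = 0;
    for (int h = 0; h < MAX_OBJS; h++) {
        if (!handles[h].live) continue;
        memmove(heap + dst, heap + handles[h].offset, handles[h].size);
        handles[h].offset = dst;          /* rewrite the "pointer" */
        dst += handles[h].size;
    }
    return dst;  /* first free byte: everything past here is one hole */
}

/* Three 16-byte objects back to back; free the middle one, compact,
 * and verify the survivors are adjacent with 32 free bytes at the end. */
static int demo_compact(void) {
    for (int h = 0; h < 3; h++) {
        handles[h].offset = h * 16;
        handles[h].size = 16;
        handles[h].live = 1;
        memset(heap + h * 16, 'A' + h, 16);
    }
    handles[1].live = 0;                  /* a 16-byte hole in the middle */
    int top = compact();
    return top == 32 && heap[16] == 'C' && handles[2].offset == 16;
}
```

The expensive part in a real runtime is the pointer rewriting: every reference to every moved object must be found and updated, which is why compaction is usually bundled into the garbage collector's stop-the-world phase.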


Now, not all (in fact, very few) runtimes do this. The grand list of languages and runtimes that compact memory is:

  1. The JVM: Java, Scala, Groovy, etc.
  2. The .NET CLR: C#, F#, Visual Basic and others
  3. Lisp (in most modern implementations)

The only viable alternative to memory compaction is not to use dynamic memory allocation at all, relying only on statically defined variables and stack-local variables. As you can imagine, this reduces the flexibility of the algorithms that can be implemented, but it is the only method that guarantees the program will never run out of space to store objects, and it lets you calculate in advance the amount of memory the program will need.
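A common shape for this in C is a statically sized pool, sketched below (all names are hypothetical). Because every slot has a fixed size, reuse can never fragment the pool, and the total footprint is known at compile time:

```c
#include <string.h>

/* All storage is fixed at compile time: a pool of message slots.
 * No malloc, no fragmentation, and the total footprint is exactly
 * MAX_MSGS * MSG_LEN bytes, known before the program ever runs. */
#define MAX_MSGS 16
#define MSG_LEN  64

static char msg_pool[MAX_MSGS][MSG_LEN];
static int  msg_used[MAX_MSGS];

/* "Allocate" a slot from the static pool; returns -1 when full. */
static int msg_acquire(const char *text) {
    for (int i = 0; i < MAX_MSGS; i++) {
        if (!msg_used[i]) {
            msg_used[i] = 1;
            strncpy(msg_pool[i], text, MSG_LEN - 1);
            msg_pool[i][MSG_LEN - 1] = '\0';
            return i;
        }
    }
    return -1;  /* out of slots -- a failure mode you can plan for */
}

static void msg_release(int slot) { msg_used[slot] = 0; }
```

Running out of slots is still possible, but it is a deterministic, testable condition rather than a fragmentation failure that appears after months of uptime.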

Not surprisingly, static memory management is the preferred method for systems implemented on microcontrollers and other devices with very limited amounts of memory. In practice it is only feasible in assembly, C and (to a certain degree) C++.

A plea for sanity in memory management

With this in mind, I'd like to end this article by asking you to please stop using scripting languages like Python, Ruby or PHP for projects that must run for months or years at a time, even if it's not on limited hardware. Just stop.

Use real languages with a real runtime that will guarantee your program will run for as long as it needs to, or take matters into your own hands and do your own memory management in C. All other options will be problematic in the long run.