I am by no means an expert on lock-free programming. Only once in my professional career have I ever converted a lock-based approach to a lock-free one (and yes, it was both frustrating and exhilarating). But I've read a bit about it and played with some of what I've read out of curiosity.
At one level, lock-free algorithms seem to trade generality for speed (and, often, memory), in the same way that a general-purpose comparison sort can't do better than O(n log n) but specialized sorts (e.g., radix sort, postman's sort) can (some even run in linear time).
The specialized sorts take advantage of the structure of the thing being sorted (e.g., the keys must be integral), hardware limitations (word size and maximum addressable memory), uniqueness, etc. While it's true that these sorts can sometimes be adapted to a different class of elements (e.g., by tagging each element with a key the specialized sort can handle), doing so comes at the cost of increased memory (the width of the tag) and increased complexity, and the adaptation still depends on the underlying constraints of the sort to achieve the performance gain.
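To make the sort trade-off concrete, here's a minimal counting-sort sketch (a close cousin of radix sort). The function name and signature are my own illustration: it only works for small non-negative integer keys, but within that constraint it runs in O(n + k) time, which a general-purpose comparison sort cannot match.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: a counting sort for values known to lie in [0, max_value].
// It exploits the constraint (small integral keys with a known bound) to run in
// O(n + k) time, where a general-purpose comparison sort is bound by O(n log n).
std::vector<int> counting_sort(const std::vector<int>& input, int max_value) {
    std::vector<std::size_t> counts(static_cast<std::size_t>(max_value) + 1, 0);
    for (int v : input)
        counts[static_cast<std::size_t>(v)]++;      // tally each key

    std::vector<int> output;
    output.reserve(input.size());
    for (int v = 0; v <= max_value; ++v)            // emit keys in ascending order
        output.insert(output.end(), counts[static_cast<std::size_t>(v)], v);
    return output;
}
```

Note how the "generality for speed" trade shows up directly: the `max_value` parameter is the constraint, and the auxiliary `counts` array is the extra memory paid for the linear running time.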
Similarly, lock-free algorithms tend to come with constraints of their own: the expectation of a certain word size, uniqueness of index, or uniqueness via algorithm (e.g., a GUID). Serializing (lock-based) access to a resource is conceptually quite simple. Limiting access to that resource to consumers that intrinsically can't collide (either by giving each one a unique index or by generating a statistically unique GUID) makes exclusive access more complex, at a cost of increased memory and/or increased computation (e.g., generating the GUID, indexing the consumer). But it also makes for much faster access, since consumers no longer need to queue up waiting for a lock.
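As a toy illustration of the unique-index idea (the names and structure here are my own, not from any particular library): each worker claims a unique slot with an atomic fetch-and-add, then updates only that slot. Because no two workers ever share a slot, they intrinsically can't collide, and no lock is needed on the hot path.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Illustrative sketch: lock-free access via unique per-consumer indices.
// Each worker claims a unique slot index with an atomic fetch_add (its
// "ticket"), then increments only its own slot. Distinct slots mean the
// workers can't collide, so the hot loop needs no lock at all.
long count_without_locks(int workers, long increments_per_worker) {
    std::atomic<int> next_slot{0};
    std::vector<long> slots(static_cast<std::size_t>(workers), 0);

    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i) {
        pool.emplace_back([&] {
            // One atomic operation up front buys lock-freedom afterwards.
            int my_slot = next_slot.fetch_add(1);       // unique index: no collision
            for (long n = 0; n < increments_per_worker; ++n)
                slots[static_cast<std::size_t>(my_slot)]++;  // private slot: no lock
        });
    }
    for (auto& t : pool) t.join();

    long total = 0;                                     // safe to read after join()
    for (long s : slots) total += s;
    return total;
}
```

The costs the paragraph above describes are visible here too: extra memory (one slot per consumer rather than one shared counter) and extra computation (handing out the index) are traded for the removal of the lock.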
The more I think about it, the more I think lock-free programming can be characterized by two trade-offs:
- Trading Generality for Speed.
- Trading Simplicity for Complexity.