I was thinking about how code evolves, since I found myself saying to Ari today, "It's about composition, not derivation. Everybody wants their own widget these days." Out of context that doesn't mean much, but let me explain.
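If I had to sketch the difference in code, it'd look something like this. All the widget names here are made up for illustration, not from any real toolkit:

```python
# Derivation would mean one new Widget subclass for every combination
# of behaviors. Composition means small independent pieces that stack.

class Widget:
    def draw(self):
        return "widget"

class Bordered:
    """Wraps any drawable thing and adds a border around it."""
    def __init__(self, inner):
        self.inner = inner
    def draw(self):
        return "border(" + self.inner.draw() + ")"

class Scrollable:
    """Wraps any drawable thing and adds a scrollbar to it."""
    def __init__(self, inner):
        self.inner = inner
    def draw(self):
        return "scroll(" + self.inner.draw() + ")"

# Nobody needed to write a ScrollableBorderedWidget subclass:
fancy = Scrollable(Bordered(Widget()))
print(fancy.draw())  # scroll(border(widget))
```

The point is that everybody can have "their own widget" by plugging the pieces together, without anyone adding a new branch to a class hierarchy.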
Code gets chunky over time. It gets monolithic and intertwined, and it's inefficient to fight that everywhere. Some code will only get used for one thing, ever, and it doesn't make sense to refactor those 100 lines of really spaghetti code into 5 functions.
But at the same time, for some code (more than you think), it does make sense: you refactor to make things just a little less intertwined, just a little more general, and eventually you can use it somewhere else entirely.
Why the caution? Why not do this all the time? All the popular programming books say to...
Well I've met Java programmers who make this sort of activity into an art form, with 75 interfaces and design patterns for a tiny simple function, so that their abstraction makes useful work impossible. Understanding their code requires days of reading. Basically, it's a way for them to show off, and that's not so good. (I don't know why this kind of thing centers around the Java part of the universe, but it seems to.)
In short, refactoring in the extreme makes for complexity, not simplicity. And simplicity is really the goal.
Some code will stay chunky, and that's fine.
Somewhat related to this discussion is one of the coding laws I kinda tripped over a number of years ago, and which I consider mostly magical:
If you write code that doesn't have huge dependencies, or has well thought-out ones, you'll find that you can reuse it, that you can use it for another purpose or in a whole other application, and it's not hard.
If you write code with huge dependencies, this won't happen. I really believe in the "build the tools, then build the application" idea, and while you should feel great if you finish a feature that real people can use, you should feel even better if you make a feature and a bunch of reusable technologies at the same time.
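To make the dependency point concrete, here's a hypothetical contrast. The `app` object and its attributes are invented stand-ins for "the whole application":

```python
# Tangled: this version can't move to another program without
# dragging the entire app, its database, and its config along.
#
#   def format_username(app):
#       user = app.db.lookup_user(app.config.user_id)
#       return user.first + " " + user.last

# Reusable: it depends only on the values it actually needs, so it
# works in any application that has a first and last name handy.
def format_username(first, last):
    return first.strip() + " " + last.strip()

print(format_username("Ada ", " Lovelace"))  # Ada Lovelace
```

The second version is a tool; the first is a piece of one application.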