It’s a natural observation, but it doesn’t address the floating point problem. I think the author should have said “fast or would accumulate floating point error” instead of “fast and would accumulate floating point error”.
You could compute in the reverse direction, starting from 1/n instead of 1; that gives a numerically stable floating-point sum, but the method is slow.
Edit: Of course, for very large n, 1/n underflows to zero in floating point (and well before that, it drops below the precision of the running sum).
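To make the trade-off concrete, here's a minimal Python sketch (doubles throughout) comparing the fast forward sum against the reverse sum, using math.fsum as a high-accuracy reference:

```python
import math

def harmonic_forward(n):
    """Sum 1/k for k = 1..n. Fast and extendable to n+1 in O(1),
    but small late terms are absorbed into a large accumulator,
    so rounding error accumulates."""
    h = 0.0
    for k in range(1, n + 1):
        h += 1.0 / k
    return h

def harmonic_reverse(n):
    """Sum 1/k for k = n..1. Adding small terms first keeps the
    partial sums accurate, but all n terms must be redone to get
    the next harmonic number -- hence "slow"."""
    h = 0.0
    for k in range(n, 0, -1):
        h += 1.0 / k
    return h

n = 10**6
reference = math.fsum(1.0 / k for k in range(1, n + 1))
print(abs(harmonic_forward(n) - reference))  # forward error
print(abs(harmonic_reverse(n) - reference))  # reverse error (typically smaller)
```

For n around 10^6 both errors are tiny in doubles, but the reverse sum's is typically smaller, and the gap widens as n grows.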
Interesting follow-up question: what is the distance between the set of harmonic numbers and the integers? That is, is there a lower bound on the difference between a given integer and its closest harmonic number, and if so, for which integer is it achieved?
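One way to explore this is with exact rational arithmetic. A brute-force sketch using Python's fractions module (helper names here are mine, not from any library):

```python
from fractions import Fraction

def harmonic_exact(n):
    """Exact n-th harmonic number as a rational."""
    h = Fraction(0)
    for k in range(1, n + 1):
        h += Fraction(1, k)
    return h

def dist_to_nearest_int(q):
    """Distance from the rational q to the nearest integer."""
    frac = q - (q.numerator // q.denominator)  # fractional part in [0, 1)
    return min(frac, 1 - frac)

# H_1 = 1 is an integer; a classical result (Theisinger) says no other
# harmonic number is, so the distance is strictly positive for n > 1.
best = min((dist_to_nearest_int(harmonic_exact(n)), n) for n in range(2, 100))
print(best)  # smallest gap found in this range, and the n achieving it
```

Note that since H_n increases in steps of 1/n, each time it crosses an integer it comes within roughly 1/(2n) of it, so over all n the infimum of the distance is 0: a positive lower bound only exists for a fixed range of n.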
is there a reason the direct definition would be slow, if we cache the prior harmonic number to calculate the next?
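Caching the prior value does make each step O(1): the recurrence is H_n = H_{n-1} + 1/n. But that is exactly forward summation, so it keeps the error-accumulation behavior, and there's no cheap incremental version of the more accurate reverse sum. A minimal sketch:

```python
def harmonic_stream(n):
    """Yield H_1, H_2, ..., H_n by caching the previous value:
    H_k = H_{k-1} + 1/k. O(1) per new term, but this is forward
    summation, so floating-point error accumulates for large n."""
    h = 0.0
    for k in range(1, n + 1):
        h += 1.0 / k
        yield h

print(list(harmonic_stream(5))[-1])  # H_5 = 137/60 exactly
```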