I think Qiaochu Yuan was probably joking, but is actually on the right track, in the sense that the answer to the question really depends on what theoretical, physical, or practical model of computation you have in mind.
It's not unreasonable to imagine models in which a computation can take 0 time. For example, you could have a machine made of gears and levers where you set up the gears in a certain position as an input. To get the output, you turn a crank until, after k revolutions, it freezes up and won't turn anymore. The number k is both the output and the running time. It's possible that k is zero, so the running time is zero. Of course this doesn't count the time it takes to walk into the room, set up the inputs, grasp the crank, etc. But it's not completely unreasonable to imagine a useful mathematical model where you don't count those things as running time.
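To make the crank model concrete, here is a minimal sketch in Python (the function itself is a toy I'm inventing purely for illustration, not anything from the question): the running time is, by construction, exactly the output k, and k can be 0.

```python
def crank_machine(n: int) -> int:
    """Simulate the gears-and-levers machine.

    Each pass through the loop is one revolution of the crank, and the
    machine 'jams' when the loop exits.  The number of revolutions k is
    both the output and the running time.  As a toy example, this machine
    happens to count how many times n can be halved evenly.
    """
    k = 0
    while n > 0 and n % 2 == 0:
        n //= 2
        k += 1  # one crank revolution
    return k

print(crank_machine(7))   # 0 -- the crank jams immediately: zero running time
print(crank_machine(40))  # 3 -- three revolutions, output 3
```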
It's also not unreasonable to imagine that you could have faster and faster information processing as n grows. Sometimes more information makes your task easier. If I'm a book editor and my job is to take a pile of submissions, select the best one, and then write the author a request for the necessary revisions, my job could get easier when the pile is taller, because I can afford to be more picky, I'll probably get a better manuscript, and it won't require as many revisions.
However, I think there is a more fundamental issue that prevents anything from being faster than O(1), which is that we're talking about digital computing, not analog. Whether it's a Turing machine, a desktop computer, or the system of gears and levers described above, a digital computer typically operates in discrete cycles. You can get a result in 0 cycles or in 1 cycle, but you can't have half a cycle. If you really want O(1/n) performance, then there has to be some $n_0$ such that for all $n \ge n_0$ the running time is no more than half a cycle. This means that for $n \ge n_0$ the computation always terminates in 0 cycles. The example of the gears and levers shows that it is possible for a computation to terminate in 0 cycles, but there can't be any such thing as a nontrivial computation that always terminates in 0 cycles for large $n$. For a computation like that, you simply wouldn't need to use the machine at all once $n \ge n_0$. Another way of putting it: if your computation always terminates in 0 cycles for $n \ge n_0$, then it isn't just faster than O(1), it's O(0), which means it's trivial.
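To spell that step out in symbols (a small formalization, taking the running time $T(n)$ to be a whole number of cycles):

$$T(n) \in O(1/n) \quad\Longrightarrow\quad \exists\, c > 0,\ n_0 \ \text{ such that }\ T(n) \le \frac{c}{n} \ \text{ for all } n \ge n_0.$$

For any $n \ge \max(n_0, 2c)$ this gives $T(n) \le \tfrac{1}{2}$, and since $T(n)$ is a nonnegative integer count of cycles, $T(n) = 0$ from that point on.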
So I think the answer is that performance faster than O(1) is not reasonable for a digital computer, and since big-O notation is really designed for digital computers, it's not obvious how to pose the question in the case of analog computers.