something i’m thinking about this morning is the surprising number of low-level computer bits where it would be useful to be able to send information backwards in time.

in cs theory, it’s been proven that this capability makes P = PSPACE, which may give you some idea of the potential, but in terms pretty far removed from actual computers.

But what if I told you that, if we could put a wire on a chip that sent one bit exactly four cycles backwards in time (in practice you probably need several such wires and a variety of negative delays), then branch prediction would become unnecessary?

One microscopic closed timelike curve would eliminate thousands of transistors and I don’t even know how to quantify the reduction in design complexity. No time paradoxes are involved; it’s “just” a matter of providing access to information that isn’t computed until a few cycles after it’s needed.
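
To make that concrete, here's a toy cycle-count model. Nothing in it reflects a real microarchitecture; the 4-cycle flush penalty, the 95% predictor accuracy, and the branch count are just numbers I picked for illustration.

```python
# Toy model: a pipeline that guesses branches (and pays a flush penalty
# when it guesses wrong) versus one whose branch outcomes arrive "from
# the future" exactly when fetch needs them.  All numbers are invented.

import random

FLUSH_PENALTY = 4          # cycles lost per misprediction (hypothetical)
PREDICTOR_ACCURACY = 0.95  # a decent but imperfect predictor (hypothetical)
N_BRANCHES = 1_000_000

random.seed(0)

def cycles_with_predictor():
    cycles = 0
    for _ in range(N_BRANCHES):
        cycles += 1                               # the branch itself
        if random.random() > PREDICTOR_ACCURACY:  # mispredicted
            cycles += FLUSH_PENALTY               # refill the pipeline
    return cycles

def cycles_with_time_travel():
    # The outcome bit is sent FLUSH_PENALTY cycles backwards in time, so
    # fetch always steers down the correct path: no predictor, no
    # speculation, no flushes.
    return N_BRANCHES

print("predictor:  ", cycles_with_predictor())
print("time travel:", cycles_with_time_travel())
```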

All forms of “speculative execution” are workarounds for information not being available until after the point where it’s needed, and they can all be eliminated in the same way.

Perhaps less obviously, memory caching might be unnecessary as well. Caches work around the time it takes to send a signal from the CPU to the RAM and get a response. Well, what if the response travels backwards in time, arriving immediately after the request is sent?

(This is a practical illustration of the idea that time travel is equivalent to FTL travel—the *reason* it takes thousands of cycles to get a message over to the RAM and back is, in fact, the speed of light.)
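
Same kind of toy arithmetic for the cache point above. The hit rate and latencies are invented numbers, not measurements of any real machine.

```python
# Toy latency model: a cache hierarchy versus a memory whose replies
# travel backwards in time to arrive one cycle after the request.

import random

HIT_LATENCY  = 4      # cycles for a cache hit (hypothetical)
MISS_LATENCY = 300    # cycles for a trip out to RAM and back (hypothetical)
HIT_RATE     = 0.97
N_ACCESSES   = 1_000_000

random.seed(0)

def cycles_with_cache():
    cycles = 0
    for _ in range(N_ACCESSES):
        cycles += HIT_LATENCY if random.random() < HIT_RATE else MISS_LATENCY
    return cycles

def cycles_with_time_travel():
    # Every reply is sent backwards in time so it shows up the cycle
    # after the request leaves the CPU: no hierarchy, no hit/miss logic.
    return N_ACCESSES

print("cache hierarchy:", cycles_with_cache())
print("time travel:    ", cycles_with_time_travel())
```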

(A bunch of Asimov’s stories have powerful computers built partially or entirely in hyperspace. He never explains why, as far as I can remember, and it *could* have just been technobabble like the positronic brains, but I like to think he knew that it really would be useful for internal signaling to run faster than light.)

Knowledge of the future is also useful at the software level, but in ways that skate closer to the line of temporal paradox.

If you know what a program’s next several memory allocations will be, you can make better decisions about where to place the *current* allocation in memory. But this may involve predicting events *outside the computer* (if the program’s allocation choices depend on whether a network packet arrives before or after the next time someone moves the mouse, for instance), and that strikes me as beyond the bounds of plausible extensions to physics.
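
Here's a sketch of what that future knowledge would buy an allocator, assuming it somehow knows when each object will be freed. The request list, the epoch bucketing, and the hole measurement are all made up for the sketch.

```python
# Toy placement comparison.  Each request is (size, free_time).  A normal
# allocator has to interleave short- and long-lived objects as they arrive;
# a clairvoyant one can group objects that die together, so whole regions
# free at once instead of leaving scattered holes.

from collections import defaultdict

requests = [
    (16, 5), (64, 90), (16, 6), (64, 91), (16, 7), (64, 92),
]

def naive_layout(reqs):
    """Bump-allocate in arrival order: lifetimes end up interleaved."""
    addr, layout = 0, []
    for size, dies_at in reqs:
        layout.append((addr, size, dies_at))
        addr += size
    return layout

def clairvoyant_layout(reqs):
    """Group objects by (known) death time, coarsely, before placing them."""
    buckets = defaultdict(list)
    for size, dies_at in reqs:
        buckets[dies_at // 10].append((size, dies_at))  # bucket by epoch
    addr, layout = 0, []
    for epoch in sorted(buckets):
        for size, dies_at in buckets[epoch]:
            layout.append((addr, size, dies_at))
            addr += size
    return layout

def largest_hole_after(layout, t):
    """Largest contiguous run of bytes already freed at time t."""
    freed = sorted((a, s) for a, s, d in layout if d < t)
    best = run = 0
    prev_end = None
    for a, s in freed:
        run = run + s if a == prev_end else s
        prev_end = a + s
        best = max(best, run)
    return best

print("largest reusable hole at t=10, naive:      ",
      largest_hole_after(naive_layout(requests), 10))
print("largest reusable hole at t=10, clairvoyant:",
      largest_hole_after(clairvoyant_layout(requests), 10))
```

With the interleaved layout the freed bytes are three scattered 16-byte holes; with the clairvoyant layout they form one contiguous 48-byte region that can be reused for anything.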
