As long as you are stealing time, there will be D(A)emons.



Is this finally the "game over" vulnerability everyone's afraid of?

@theblacksquid Until we stop using speculative execution... yes.

@thegibson You really need a way to rewind all effects, including cache invalidation/eviction effects, of any speculated operation, to fix this…

I am not a processor designer, but I’m thinking the best way to do this is something like:

  • Reserve a portion of every cache for speculative effects, and only allow speculation to touch that portion. (Note that depending on how this is sized, it may cause cache stalls more frequently. You might even be able to dynamically size the speculative/real cache partitioning for performance, but you have to be careful that the dynamic sizing itself doesn’t reintroduce a Spectre-class vulnerability; maybe make it a system-wide boot-time parameter (so a VM can’t tweak it) and allow the OS to profile the performance impact for tuning on the next reboot.)
  • If speculation succeeds, every cache line affected by the speculative event is committed to the “real” cache, and the same number of cache lines are evicted from the “real” cache (and then allocated to the speculative pool), causing an identical effect to successful speculation on a current processor.
  • If speculation fails, then because the speculation’s cache effects happened in cache lines reserved specifically for speculation… you just evict the lines the failed speculation touched.

Therefore, all cache effects of a failed speculation are guaranteed to be reversed, and Spectre is killed.
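The scheme above can be sketched as a toy simulation. (This is a Python sketch of the idea as described in the thread; the class name, the simple FIFO eviction policy, and the partition sizes are all illustrative assumptions, not any real microarchitecture.)

```python
class PartitionedCache:
    """Toy model: a cache split into a committed ("real") partition
    and a reserved partition that speculation alone may fill."""

    def __init__(self, real_size, spec_size):
        self.real = []           # committed cache lines (oldest first)
        self.spec = []           # lines touched only by speculation
        self.real_size = real_size
        self.spec_size = spec_size

    def speculative_access(self, line):
        # Speculative loads may only allocate into the reserved partition.
        if line in self.real or line in self.spec:
            return
        if len(self.spec) >= self.spec_size:
            self.spec.pop(0)     # evict the oldest speculative line
        self.spec.append(line)

    def commit(self):
        # Speculation succeeded: promote the speculative lines into the
        # real partition, evicting real lines as needed, so the visible
        # effect matches successful speculation on a current processor.
        for line in self.spec:
            if len(self.real) >= self.real_size:
                self.real.pop(0)
            self.real.append(line)
        self.spec.clear()

    def squash(self):
        # Speculation failed: drop every speculative line, so no
        # cache-timing trace of the failed speculation remains.
        self.spec.clear()
```

The key property is in `squash()`: because a mispredicted path could only ever have touched `spec`, clearing it reverses every cache effect of the failed speculation.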

@bhtooefr Just another speculative mechanism to get abused, I think.

The big caveat you've got here is "be careful here".

but I could be pleasantly surprised.

@thegibson I mean, there’s also just making it a fixed partition size, which entirely eliminates that risk

or even using two separate copies of the cache, and on every successful speculation, using the new copy and copying it to the other copy for the next speculation
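That two-copy variant can be sketched the same way. (Again a hypothetical Python model, not a real design: speculation mutates a working copy, success makes it authoritative, and failure restores from the untouched copy.)

```python
import copy

class DoubleBufferedCache:
    """Toy model of two full copies of the cache state: one clean
    committed copy, and one working copy that speculation may touch."""

    def __init__(self, size):
        self.size = size
        self.clean = []      # state as of the last successful commit
        self.working = []    # copy that speculation is allowed to mutate

    def speculative_access(self, line):
        # Speculation only ever touches the working copy.
        if line not in self.working:
            if len(self.working) >= self.size:
                self.working.pop(0)
            self.working.append(line)

    def commit(self):
        # Success: the working copy becomes the committed state, and is
        # cloned so the next speculation starts from a fresh scratch copy.
        self.clean = copy.copy(self.working)

    def squash(self):
        # Failure: discard the working copy, restoring the state from
        # before speculation began.
        self.working = copy.copy(self.clean)
```

The trade-off the thread goes on to discuss is visible here: you pay for a full second copy of the cache state to get constant-time rollback.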

@bhtooefr And so the reason I am skeptical is: why haven't processors using such a scheme appeared?

3 years in... solutions should be in the pipeline by now.

@thegibson I’m suspecting it’s considered too much of a performance/power/die space hit

that’s the real catch

if you want the same size of usable cache, this idea needs more cache, and that means slower cache, more power-hungry cache, and physically larger cache

@bhtooefr right.

so ultimately it's not profitable to do it until there is a big mass market exploit of this...

Same as it's ever been, they won't do it until someone makes them.

@thegibson mind you, my idea still performs significantly better than simply deleting all caches and running on whatever you can manually shove into registers straight from DRAM, as if it’s still the 1980s, or deleting all speculative execution and branch prediction, as if it’s still the early 1990s

@thegibson @bhtooefr i think people are addicted to the performance gains that you get by cheating like this.

@TheGibson I'm failing to find anything substantial on this; not the paper they wrote, no flashy web site, no CVE, nothing; until then I'm a bit skeptical.
