There's a model of computation called 'interaction nets' (with an associated 'interaction calculus') that reduces in a more physically meaningful, local, topologically smooth way.
I.e. you can see from these animations that LC reductions have some "jumping" parts. And that does reflect the nature of LC, since a single reduction 'updates' many places at once.
IN basically fixes this problem. And this locality can enable parallelism. And there's an easy way to translate LC to IN, as far as I understand.
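As a rough sketch of that "easy" translation: in the standard encoding, both lambdas and applications become a binary constructor node, and a variable occurring k times is fanned out through k-1 duplicator nodes. The names below ('con'/'dup', the tuple term format) are illustrative choices, not any particular library's API; the sketch only counts the nodes produced, ignoring the wiring:

```python
# Hypothetical sketch: lambda terms as nested tuples
#   ('var', name) | ('lam', name, body) | ('app', fun, arg)
# translated into a multiset of interaction-net node labels.

def count_occurrences(term, name):
    """How many times a variable occurs free in a term."""
    tag = term[0]
    if tag == 'var':
        return 1 if term[1] == name else 0
    if tag == 'lam':
        # an inner binder of the same name shadows the outer one
        return 0 if term[1] == name else count_occurrences(term[2], name)
    if tag == 'app':
        return count_occurrences(term[1], name) + count_occurrences(term[2], name)

def translate(term):
    """Return the list of IN node labels a term compiles to."""
    tag = term[0]
    if tag == 'var':
        return []  # variables become wires, not nodes
    if tag == 'lam':
        _, name, body = term
        k = count_occurrences(body, name)
        dups = ['dup'] * max(k - 1, 0)  # fan out a shared variable
        return ['con'] + dups + translate(body)
    if tag == 'app':
        return ['con'] + translate(term[1]) + translate(term[2])

# Church numeral 2 = λf.λx. f (f x): f is used twice, so one dup node.
two = ('lam', 'f', ('lam', 'x',
       ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
nodes = translate(two)
# 2 'con' for the lambdas, 2 'con' for the applications, 1 'dup' for f
```

Locality comes from the fact that each rewrite rule only ever touches two adjacent nodes of this graph.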
I'm a noob, but I feel like INs are severely underrated. I don't know of any good interaction net animations. The only person I know of doing serious R&D with interaction nets is Victor Taelin.
tromp 1 day ago
> there's an easy way to translate LC to IN
While easy, it sadly doesn't preserve semantics: when you duplicate a term that ends up duplicating itself, results will diverge.
There exist more involved semantics-preserving translations, using so-called croissants and brackets, or the recent rephrased approach of [1].
Speaking of Victor Taelin, what's the latest on https://higherorderco.com/ ? His work is really inspiring and amazing.
killerstorm 5 hours ago
He shares progress on Twitter quite often. In the last year they shifted focus away from raw performance (beating existing runtimes is rather daunting) and toward more unique work on code synthesis, perhaps relevant to formal verification of vibe-coded code, etc.
tromp 2 days ago
You can enter (λn.n(λc.λa.λb.cb(λf.λx.f(afx)))Fn0)7 to apply the function Col' from [1] to 7, resulting in (3*7+1)/2 = 11. Unfortunately, this visualization is much less insightful than showing the 7 successive succ&swap operations:
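For reference, the odd case above matches the "shortcut" Collatz step; here is a plain-Python sanity check of that arithmetic (the even branch n/2 is my assumption about Col' — only the odd branch (3n+1)/2 is stated above):

```python
def col_shortcut(n: int) -> int:
    """Shortcut Collatz step: halve evens, map odd n to (3n+1)/2 in one step.
    The even branch is an assumption; the odd branch matches the comment above."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

print(col_shortcut(7))  # (3*7+1)/2 = 11
```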
[1] https://arxiv.org/abs/2505.20314