Could somebody provide a bit of context on what exactly this is? It seems interesting, but I have no idea what I am looking at.
joshmoody24 20 minutes ago [-]
Lambda calculus is a model of computation. A pretty lightweight model that can still do everything that other programming languages can. Combinators are an even simpler model of computation that is still equally powerful. That simplicity / power ratio is what makes them cool.
A while back I built all the way up to FizzBuzz from just S and K combinators. Took days of doing all the math by hand, lol.
Here's my write up of doing that. I did it in JavaScript because most combinator articles online were prohibitively academic for my layman mind. https://joshmoody.org/blog/programming-with-less-than-nothin...
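S and K are easy to play with in JavaScript. Here's a minimal sketch (mine, not the original derivation): S and K as curried functions, the identity derived from them, and Church-style booleans, which are the kind of building block you need on the way to something like FizzBuzz.

```javascript
// S and K as curried JavaScript functions.
const K = x => y => x;               // K x y = x
const S = f => g => x => f(x)(g(x)); // S f g x = f x (g x)

// The identity combinator falls out: S K K x = K x (K x) = x.
const I = S(K)(K);

// Church-style booleans: TRUE selects its first argument, FALSE its second.
const TRUE = K;     // K a b = a
const FALSE = K(I); // K I a b = I b = b

// A "conditional" is then just application.
console.log(TRUE("fizz")("buzz"));  // "fizz"
console.log(FALSE("fizz")("buzz")); // "buzz"
```

(In a strict language like JS, FALSE is written K(I) rather than S(K), so the unused branch is never actually called.)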
To complement leethomp's answer, combinatory logic is a branch of mathematics that was started in the 1920s by a mathematician called Moses Schönfinkel which deals with "functions that do stuff and return other functions".
This was developed further by some names that may be more familiar (Haskell Curry, Alan Turing, Kurt Gödel, Bertrand Russell). It was proved to be equivalent in power to both the lambda calculus and the Turing machine and became part of the basis of modern computing.
What we see here are some of those key building blocks that were studied in the 20s and 30s and have been now applied to modern programming languages.
Functional languages use them a lot because you can express a lot of things as just combinations and compositions of other functions. Array languages often take this to an extreme by expressing complex numeric algorithms with only a few symbols.
What you see above is the logic/processing order of how those functions fit together. For example you can express a mean as something like `(+/#)` - a 5-character anonymous function that can be applied to an array - because all the applications and combinations are implicit in the structure of the language, as denoted in the link.
BQN, another array language, has a page of documentation describing the same concept with a bit more explanation for the combinator newcomer: https://mlochbaum.github.io/BQN/tutorial/combinator.html
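The "implicit combination" trick can be imitated in an ordinary language. Here's a hedged JavaScript sketch of the fork pattern behind that mean (the helper names sum/div/count are mine, just for illustration):

```javascript
// A "fork" (often called the phi combinator): phi f g h x = g (f x) (h x).
// Array languages apply this implicitly when three functions sit side by side.
const phi = f => g => h => x => g(f(x))(h(x));

const sum = xs => xs.reduce((a, b) => a + b, 0);
const div = a => b => a / b;
const count = xs => xs.length;

// mean = sum divided-by count, written point-free as a fork.
const mean = phi(sum)(div)(count);

console.log(mean([1, 2, 3, 4])); // 2.5
```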
leethomp 6 hours ago [-]
Many primitives in array languages match the behaviour of certain combinators in combinatory logic. The page shows, left to right: the symbol for a combinator; its effective operation in APL syntax, where x and y are the left and right arguments (APL operators are either infix or single-parameter prefix) and F and G are likewise the left and right function arguments; the 'bird', a sort of colloquial name for that combinator; 'TinyAPL', the operator that matches the combinator in the author's APL implementation; and a diagram that explains visually how the combinator works.
Can we solve for x and y? All I see is algebra here, is my intuition wrong?
travisjungroth 2 hours ago [-]
More directly than the other comments: No you can’t solve for x and y here and yes your intuition is wrong.
These are functions. I don’t know your level of knowledge in math or programming and what that would mean to you. Here’s an example.
double(x) -> x*2
So, double(3) = 6. You can’t solve for x because x doesn't have a value. It’s a placeholder for whatever you put in.
These combinators are functions that take other functions and return them unmodified. “Unmodified” is a little misleading because it can do things like drop inputs.
seanhunter 4 hours ago [-]
The intuition here is that combinators are higher order functions which take functions and combine them together in various ways. So for a simple example "fix" is a combinator in regular maths where
Fix f = {x in the domain of f : f(x) = x}
So if f is a function or a group action or whatever, the fixed-point set of f is all points x in the domain of f such that f(x) = x, i.e. the points which are unchanged by f. So if f is a reflection, it's the points which sit on the axis of reflection.
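Over a finite domain you can compute the fixed-point set directly. A small JavaScript illustration (my own example):

```javascript
// Fixed points of f over a finite domain: the x with f(x) === x.
const fixedPoints = (f, domain) => domain.filter(x => f(x) === x);

// Squaring fixes exactly 0 and 1 among the small naturals.
const square = x => x * x;
console.log(fixedPoints(square, [0, 1, 2, 3, 4])); // [0, 1]

// A reflection about 0 on the number line: only the axis point is fixed.
const reflect = x => -x;
console.log(fixedPoints(reflect, [-2, -1, 0, 1, 2])); // [0]
```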
The fixed-point combinator is of particular relevance to this site because it's often called the y combinator.
travisjungroth 2 hours ago [-]
No one who would ask that question would be able to understand your answer.
general_reveal 49 minutes ago [-]
I’m going to frame this comment.
seanhunter 55 minutes ago [-]
Hehe. Sorry. Yes perhaps you’re right. Wasn’t trying to be obtuse but I didn’t express that particularly clearly.
Sharlin 4 minutes ago [-]
Your explanation was about five years worth of math studies beyond what GP was asking.
Zhyl 5 hours ago [-]
It's more like a recipe (for functions).
The first example, I, is an identity function. It takes y and returns y.
The second, K, is a constant function which takes x and y and returns x.
This gets more complicated as you go along. The idea is that you get rid of a lot of the syntax for composition and have it all be implicit by what you put next to each other (given APL programs are usually one long line of a bunch of different symbols all representing functions).
skydhash 4 hours ago [-]
Combinators can seem a bit silly for values. The usefulness comes when you use them as a meta language for functions.
observationist 4 hours ago [-]
Combinators are math, and a little like Lisp - building functions from primitives and operations with the ability to apply them, where even the notion of a variable is a function - functions all the way down.
The y combinator is this: λf.(λx.x x)(λx.f(x x))
Lambda diagrams get you visualizations like this:
https://tromp.github.io/cl/diagrams.html
When considering logic and functions, when thinking in the space of combinators, you can ask questions like "What is Plus times Plus" and have a sensible result.
https://www.youtube.com/watch?v=RcVA8Nj6HEo
Combinators are awesome.
The site linked by OP is a specific collection of combinators with bird names, riffing on the "To Mock a Mockingbird" puzzle book and subsequent meme of giving combinators bird names.
momentoftop 2 hours ago [-]
Or better yet, the y combinator is this: W S (Q (S I I))
The whole point is that we don't need no stinking variables.
laszlokorte 3 hours ago [-]
Based on other existing material on the topic (like the excellent code_report youtube channel) I once wrote an introduction to combinators and lambda calculus targeted at javascript developers (mostly at my younger self) [1]
In short a combinator is a pure function that accesses only identifiers that are provided as arguments.
length(x, y) { sqrt(x*x + y*y) } is not a combinator because it relies on global definitions of plus, times and sqrt.
But foo(x, y, b, u, v) { v(b(u(x), u(y))) } is a combinator because it only composes functions that are given as arguments.
foo(3, 5, +, square, sqrt) would result in the same value as length(3, 5), so foo can be regarded as capturing the compositional structure of the euclidean distance calculation.
[1]: https://static.laszlokorte.de/combinators/
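That example runs almost verbatim in JavaScript (the concrete helpers add and square are my stand-ins for + and square):

```javascript
// foo only applies its function arguments; it names no globals,
// so it is a combinator capturing the shape v(b(u(x), u(y))).
const foo = (x, y, b, u, v) => v(b(u(x), u(y)));

const add = (a, b) => a + b;
const square = n => n * n;

// Supplying concrete functions recovers the euclidean length:
// sqrt(square(3) + square(5)) = sqrt(34) ≈ 5.831
console.log(foo(3, 5, add, square, Math.sqrt));
console.log(Math.hypot(3, 5)); // same value
```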
This site is actually named after one of the most popular and widely used Combinators in lisp.
roadside_picnic 3 hours ago [-]
> in lisp.
Technically you cannot implement a proper Y-combinator in Lisp (well, I'm sure in Common Lisp and Racket there is some way) because the classic Y-combinator relies on lazy, not strict, evaluation. Most of the "Y-combinators" people have implemented in Lisp/Scheme/JavaScript/etc. are more accurately described as the "applicative-order Y-combinator" (also known as the Z-combinator).
Funnily enough, you also cannot* implement the Y-combinator in Haskell (probably the most popular language with lazy evaluation) because the type system will not be happy with you (the Y-combinator, by its nature, is untyped).
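For reference, here's the applicative-order variant (the Z combinator) sketched in JavaScript; the eta-expansion (the extra v => ...) is what keeps a strict evaluator from looping forever:

```javascript
// Z combinator: like Y, but the self-application is wrapped in a lambda
// so it is only unfolded when actually called.
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

// A non-recursive "step" function; Z ties the recursive knot for us.
const factStep = self => n => (n === 0 ? 1 : n * self(n - 1));
const fact = Z(factStep);

console.log(fact(5)); // 120
```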
cryptonector 3 hours ago [-]
Specifically the Y combinator enables recursion in a language that otherwise does not support recursion but does support closures.
momentoftop 2 hours ago [-]
Combinators were an attempt to do logic (and computation falls out) without having to mess around with variables and variable substitution, which is annoying and inelegant because you have to worry about syntax issues like variable capture. The goal was a much simpler and cleaner syntax.
So combinator logic starts with a really simple language, based on a small alphabet of primitive combinators. You can see a bunch listed on the webpage:
I, K, W, C, B, Q, ....
These are the primitive bits of syntax. The only other feature in the language is the ability to apply one combinator to another combinator. You write an application of a combinator "x" to another combinator "y" as "x y", and for convenience, you treat these applications as left associative, so "x y z" means "(x y) z": that is, first apply x to y, and then apply the result to z.
Two typical combinators are K and S, with which you can form more complex combinators like
K K
S K
K K K
K (K K)
K (S K)
...
Combinators generally come with simplification rules, and the ones for K and S are:
K x y = x
S f g x = f x (g x)
With these, we can start doing interesting reductions like:
S K K x = K x (K x) = x
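These rules can be checked directly by treating K and S as curried functions; a quick JavaScript sketch:

```javascript
// The two simplification rules, as executable definitions.
const K = x => y => x;               // K x y = x
const S = f => g => x => f(x)(g(x)); // S f g x = f x (g x)

// S K K x  reduces to  K x (K x)  reduces to  x:
console.log(S(K)(K)(42));         // 42
console.log(S(K)(K)("anything")); // "anything"
```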
Now the weird fact: we're suddenly Turing Complete. It turns out that every possible computation is expressible just by building a big combinator out of K and S and applying those two simplification rules. No other machinery is needed.
K and S are not the only combinators with this property; other small sets of combinators also form an adequate, Turing-complete basis.
If you've heard of the Curry-Howard correspondence (Curry was responsible for combinatory logic), then combinators provide probably the simplest example of it, since if you give combinators types, you realise you are working with what's called a "Hilbert style" deduction system for propositional logic, which is the simplest sort of formal logical system. Indeed:
1. Hilbert's first two axioms for his version of the calculus are exactly the types of K and S: p → (q → p) for K, and (p → (q → r)) → ((p → q) → (p → r)) for S
2. K and S are invocations of these axioms
3. Application is modus ponens
4. The combinator S K K above corresponds to the proof that p → p.
5. The simplification of S K K x is proof normalisation (if you ever see the proof S K K x for some proof x, you should simplify it to just the proof x).
gregfjohnson 2 hours ago [-]
(Show HN?) There is a deep and lovely connection between the Y-combinator and the classic version of Godel's incompleteness theorem: https://gregfjohnson.com/incompleteness/
worldsayshi 30 minutes ago [-]
Does anyone have a grasp of how this relates to interaction nets?
rdevilla 3 hours ago [-]
I wish universal and eternal patterns like this were studied more often in software engineering. Perhaps we would have a chance in hell of finding canonical representations of common structures and even programs instead of basket weaving majors fucking reinventing the wheel every 5 minutes with yet another half-baked poorly understood Python or JavaScript instantiation of a common pattern. Imagine still writing for loops and munging indices instead of expressing things in terms of higher order functions like folds or maps...
Eh, I don't need to imagine; we're still stuck at that same level of infantilism. Instead of actually graduating to higher order atoms and primitives of thought though, we can just have the AI slop out another 100k LOC. Then the system will have so much incidental complexity that it becomes impossible to distill out its essence, because there no longer is one.
abeppu 3 hours ago [-]
While I agree that we keep reinventing stuff, in CS doesn't the ease of creating isomorphisms between different ways of doing things mean that canonicalization will always be a matter of some community choosing their favorite form, perhaps based on aesthetic or cultural reasons, rather than anything "universal and eternal"?
rdevilla 3 hours ago [-]
We can still speak of equivalence classes under said isomorphisms and choose a representative out of them, up to the aesthetic preferences of the implementor. We are nowhere near finding equivalence classes or isomorphisms between representations because the things being compared are probably not equal, thanks to all the burrs and rough corners of incidental (non essential) complexity.
pklausler 3 hours ago [-]
The logical combinators that I know all have definitions in the untyped lambda calculus. Is there a typed variant of logical combinators?
momentoftop 1 hours ago [-]
Most of them have simple types and are easy to define in ML or Haskell.
I : a -> a
I x = x
K : a -> b -> a
K x y = x
W : (a -> a -> b) -> a -> b
W f x = f x x
C : (a -> b -> c) -> b -> a -> c
C f x y = f y x
B : (a -> b) -> (c -> a) -> c -> b
B f g x = f (g x)
Q : (a -> b) -> (b -> c) -> a -> c
Q f g x = g (f x)
There are, however, combinators that do self-application (crucially used in the definition of the Y combinator) and these do not have simple types.
cat-whisperer 1 hours ago [-]
omega and y are missing buddy! I was so looking forward to having them represented in APL
ux266478 5 hours ago [-]
A bit of an aside: I wonder how much array-oriented languages like APL and J would benefit from being implemented on top of an interaction net machine?
superlopuh 3 hours ago [-]
I raised this in person to a number of array language implementors (and Connor Hoekstra) last year and they weren't familiar with interaction nets. I'm not sure that I was successful in convincing them that this was worth looking into, partially because I'm not yet personally convinced that this is worth looking into.
hrmtst93837 6 hours ago [-]
The y-combinator is widely regarded as the best combinator :)
actionfromafar 2 hours ago [-]
The y-combinator is widely regarded as the widest combinator.