So after Haskell, and helping with LINQ, this is what Erik Meijer has been focusing on.
duskwuff 12 hours ago [-]
This article feels extremely imprecise. The syntax of the "language" changes from example to example, control structures like conditionals are expressed in English prose, and some examples are solved by "do all the work for me" functions, like the "toPdf()" example...
This whole thing feels like an elaborate LLM fantasy. Is there any real, usable language behind these examples, or is the author just role-playing with ChatGPT?
YeGoblynQueenne 55 minutes ago [-]
More to the point, is ChatGPT role-playing the author?
rvasa 3 hours ago [-]
Great to start off ... then we will end up reinventing/re-specifying functions for reusability, modules/packages for higher-level grouping, types/classes, state machines, control flows [with the nuances for edge cases and exit conditions]; then we will need error control, exceptions; sooner or later concurrency, parallelism, data structures, recursion [let's throw in monads for the Haskellians among us]; who knows .. we may even end up with GOTOs peppered all over the English sentences [with global labels] & wake up to scoping and parameter passing. We can have a whole lot of new fights if we need object-oriented programming; figure out new design patterns with special "Token Factory Factories".
We took a few decades to figure out how to specify & evolve code to solve a certain class of problems [nothing is perfect .. but it seems to work at scale, with trade-offs]. Shall watch this from a distance with popcorn.
AIPedant 12 hours ago [-]
I know ACM Queue is a non-peer-reviewed magazine for practitioners, but this still feels like too much of an advertisement, without any attempt whatsoever to discuss downsides or limitations. This really doesn't inspire confidence:
While this may seem like a whimsical example, it is not intrinsically easier or harder for an AI model compared to solving a real-world problem from a human perspective. The model processes both simple and complex problems using the same underlying mechanism. To lessen the cognitive load for the human reader, however, we will stick to simple targeted examples in this article.
For LLMs this is blatantly false - in fact asking about "used textbooks" instead of "apples" is measurably more likely to result in an error! Maybe the (deterministic, Prolog-style) Universalis language mitigates this. But since Automind (an LLM, I think) is responsible for pre/post validation, naively I would expect it to sometimes output incorrect Universalis code and incorrectly claim an assertion holds when it does not.
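To make the concern concrete: the whole point of a deterministic layer is that an assertion gets checked by execution, not by the model's say-so. A toy Prolog sketch of what that would look like (my own predicate names, nothing to do with the real Universalis runtime):

    % Accept a generated goal only if the postcondition actually
    % succeeds when run, regardless of what the model claims.
    validated(Goal, Post) :- call(Goal), call(Post).

    % The model emits total/3 for the apples word problem...
    total(Start, Bought, Total) :- Total is Start + Bought.

    % ...and the checker runs it against an independent assertion:
    % ?- validated(total(3, 2, N), N =:= 5).
    % N = 5.

If the assertion-checking is instead delegated back to an LLM, that guarantee evaporates, which is exactly what worries me.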
Maybe I am making a mountain out of a molehill but this bit about "lessen the cognitive load of the human reader" is kind of obnoxious. Show me how this handles a slightly nontrivial problem, don't assume I'm too stupid to understand it by trying to impress me with the happy path.
tannhaeuser 3 hours ago [-]
Prolog does indeed work very well as a target for LLM generation, provided the input problems are limited and similar enough in nature to a given class of templated in-context examples; so well, in fact, that the lack of a succinct, exhaustive text description of your problem becomes the issue. At that point you can specify your problem in Prolog directly (Prolog was, after all, also invented to model natural-language parsing, not just to solve constraint/logic problems), or you can employ ILP techniques to learn or optimize Prolog solvers from existing problem solutions rather than from text descriptions. See [1].
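As a toy illustration of the kind of translation involved (my own example, not from [1]):

    % Facts an LLM can be prompted to emit from prose like
    % "Ann presents before Bob, and Bob before Cid":
    before(ann, bob).
    before(bob, cid).

    % Transitive closure: X happens earlier than Y.
    earlier(X, Y) :- before(X, Y).
    earlier(X, Y) :- before(X, Z), earlier(Z, Y).

    % ?- earlier(ann, cid).
    % true.

The hard part is not emitting clauses like these; it is that the prose has to be precise enough that you might as well have written the Prolog yourself.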
>> Since we're not doing original research, but rather intend to demonstrate a port of the Aleph ILP package to ISO Prolog running on Quantum Prolog, we cite the problem's definition in full from the original paper (ilp09):
Aleph? In 2025. That's just lazy, now. At the very least they should try Metagol or Popper, both with dozens of recent publications (and I'm not even promoting my own work).
sys13 17 hours ago [-]
Glad to see focus being put on keeping humans in the driver's seat, democratizing coding with the help of AI. The syntax is probably still too verbose to be easily accessible, but I like the overall approach.
ethanwillis 6 hours ago [-]
> Universalis ensures that even those with minimal experience in programming can perform advanced data manipulations.
Is it a good thing to make this easier? We're drowning in garbage already.
[1]: https://quantumprolog.sgml.net