Traversing nested data-structures in various languages (github.com)
11235813213455 1111 days ago [-]
Just want to say that all the JS solutions are quite 'bad' except the simple for-of, because they combine pure array methods (.map, .reduce) with side effects (mutating a variable external to the callback), not even talking about modifying Array.prototype https://github.com/josevalim/nested-data-structure-traversal... Never do this: don't touch objects you don't own, which others access and don't expect to be modified!

I'll make a PR to try to contribute, instead of just complaining like that, hehe

Btw I'm surprised the structure isn't deeply nested (requiring recursion to process it), it'd be more interesting

saiojd 1111 days ago [-]
For the sake of argument, there's really nothing wrong with combining map/reduce with side-effects, especially for things like counters, where the alternative of threading the variables everywhere can become much more complex very quickly.
zdragnar 1111 days ago [-]
For the sake of argument, use `forEach` if you want to introduce side effects. `map` and `reduce` are names with meaning, and passing impure functions to them is more confusing than passing some procedural function to `forEach` or using `for of`.

It is something of a nit-picky argument to be sure, but it is a good mental habit to have.

Edit: I think being strict with the semantic building blocks is a good mental habit, not necessarily making nit-picky arguments.
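The same distinction is easy to demonstrate in Python (data invented for illustration): a counter mutated from inside `map` works, but the mutation is hidden where a reader expects purity, whereas a plain loop announces the statefulness up front.

```python
# Side effect smuggled into a "pure" array method:
counter = 0

def bad_label(item):
    global counter  # hidden state mutation inside map's callback
    counter += 1
    return f"{counter}. {item}"

labels = list(map(bad_label, ["intro", "setup", "usage"]))

# The explicit loop makes the statefulness obvious at a glance:
labels2 = []
position = 0
for item in ["intro", "setup", "usage"]:
    position += 1
    labels2.append(f"{position}. {item}")

assert labels == labels2 == ["1. intro", "2. setup", "3. usage"]
```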

the-smug-one 1111 days ago [-]
Being disciplined is part of good engineering. To have clearly defined idioms is important, so I want to reinforce that this is not nit-picky.

You can argue that anything is nit-picky, under the guise of subverting good engineering in exchange for getting to be lazy.

ghufran_syed 1111 days ago [-]
Interesting how different some of the approaches are in the same language, never mind different languages. In clojure, the “atom” version is 5 lines (not including data or tests), while the “zipper” version is 47 lines and the “reduce” version is 31 lines
Nihilartikel 1111 days ago [-]
Clojure also has some third party libraries like Specter [1] that make nested traversal and mutation very succinct and performant.

The author of Specter goes so far as to label traversal/update of deep immutable structures as "Clojure's Missing Piece".

[1] https://github.com/redplanetlabs/specter

lilactown 1110 days ago [-]
The Clojure reduce solution is pretty low quality. Here's my 15 line proposal: https://github.com/josevalim/nested-data-structure-traversal...
raspasov 1111 days ago [-]
Yes, the atom is a hack and discouraged but when you're stuck with a poorly defined data structure, it can be an escape hatch.
Hurtak 1110 days ago [-]
The whole problem seems to be bizarrely defined when it comes to data structures.

Basically the input is `data+config` merged into one object and the return value is `data+config+traversal_properties` merged into one object, with lots of solutions mutating the original input data.

argvargc 1110 days ago [-]
Hence, a revealing and useful challenge.
fermigier 1111 days ago [-]
What are the OP's conclusions from this experiment?

I was expecting the FP-language solutions to be exceptionally short and elegant, but after a quick glance it seems to me that Python and the Python-like languages (e.g. Nim, Zig, etc.) are "winning" (YMMV of course).

OskarS 1111 days ago [-]
The lesson for me is that this particular problem is exceptionally well suited for imperative code, and makes functional code look terrible in comparison. It's not just the "Python-likes" that do well: both the C and C++ code (C++ in particular) are very straightforward, simple and very readable, but any functional language solution that uses reduce or whatever looks terrible in comparison.

The striking example is the JavaScript one: the simple imperative solution (for_of.js) is perfectly fine and lovely, but the other ones (map_with_state.js in particular) are total nightmares. Most of them are (almost certainly) significantly less performant than the simple imperative ones.

Not all problems are like this, of course, there are many examples where functional style code shines. But certainly there are many problems like this one, where imperative code is just the superior paradigm.

Scarbutt 1110 days ago [-]
> The lesson for me is that this particular problem is exceptionally well suited for imperative code, and makes functional code look terrible in comparison.

This is the case for 80% of real-world code, IME.

higerordermap 1109 days ago [-]
As long as there aren't too many variables keeping state, imperative should be fine.
zelphirkalt 1111 days ago [-]
I think this is perhaps only a repository for keeping things or approaches, which has surfaced here, because of the fame of the author.

The results and code might be rather individual preferences or ad-hoc solutions to typical problems. The data in the example does not require a lot of recursion to be elegantly processed. It is too simple an example for that.

The second aspect is that the example code might have different priorities than the viewer's. If I want to process a tree concurrently on multiple cores, then perhaps I should not be using much mutable state. However, if my priority is not concurrency and multi-core, then perhaps I can write it in very little code in Python and similar languages. When my priorities shift to concurrency and multi-core and, let's say, I've got a deeply nested tree, I might have to adapt all these examples, and suddenly the FP languages shine.

lukashrb 1111 days ago [-]
The author is aware of the fact that mutability makes this kind of problem easier [0]

[0] https://twitter.com/josevalim/status/1379771275627921409?s=1...

_old_dude_ 1111 days ago [-]
I'm not sure the problem reflects anything: while updating a data structure like this is fairly standard, resetting the lesson counter globally is not a common operation.

For me the problem is too artificial to be meaningful.

nickjj 1111 days ago [-]
> For me the problem is too artificial to be meaningful.

The problem came from implementing a real feature into a course platform.

The data structure contains a list of sections and lessons to build a table of contents. The position stores the order they are being displayed in.

In the course viewing page lessons are annotated with a "#N" where N is the position and it goes from 1 all the way until the last lesson's count. This way folks could be watching the course and be like "Hey, I'm on lesson #56 and have a question".

But on the course description page the table of contents was displayed in such a way where the position / lesson count was reset for each section.

In the real-life implementation, though, this code is wrapped in a function and reset_lesson_position is supplied as a boolean function argument that controls whether or not the lesson's position gets reset for every section. We added it as an attribute on the section directly to keep the example scoped to the core idea of the problem.
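A Python sketch of that annotation logic (the data here is invented, but shaped like the repo's sections/lessons example): number sections 1..N, number lessons continuously, and restart the lesson counter whenever a section asks for a reset.

```python
sections = [
    {"title": "Getting started", "reset_lesson_position": False,
     "lessons": [{"name": "Welcome"}, {"name": "Installation"}]},
    {"title": "Basic operator", "reset_lesson_position": False,
     "lessons": [{"name": "Addition"}, {"name": "Multiplication"}]},
    {"title": "Advanced topics", "reset_lesson_position": True,
     "lessons": [{"name": "Mutability"}, {"name": "Immutability"}]},
]

lesson_counter = 0
for section_position, section in enumerate(sections, start=1):
    if section["reset_lesson_position"]:
        lesson_counter = 0          # restart numbering for this section
    section["position"] = section_position
    for lesson in section["lessons"]:
        lesson_counter += 1         # otherwise counts across sections
        lesson["position"] = lesson_counter
```

After running, the third section's lessons are numbered 1 and 2 again, while the second section's continue from the first (3 and 4).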

_old_dude_ 1111 days ago [-]
That makes a lot more sense.

`resetLessonPosition` is not a member of the data structure but a parameter of the function that generates the positions and it has an effect on all sections.

yakubin 1111 days ago [-]
The Scheme example is shorter than any of the Python ones.
buro9 1111 days ago [-]
Why do I see things like this and still think that XML and XSLT made a lot of this stuff very simple and portable, and that we lost a good bit of progress by choosing JSON, YAML, etc., after we already had great validation of schemas, translation, streaming and DOM parsing, and so on?
rwmj 1111 days ago [-]
XPath is the real killer feature for parsing XML. I don't think it's possible to use it in this particular example, but in the more generally useful cases where you want to pull (eg) all subnodes with key matching a particular string, XPath is great.

Here's it being used in real code (search for "xpath_"):

https://github.com/libguestfs/virt-v2v/blob/master/v2v/parse...

https://github.com/libguestfs/virt-v2v/blob/master/v2v/parse...
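For a self-contained taste of the idea, Python's stdlib `xml.etree.ElementTree` supports a small XPath subset (the document here is invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<sections>
  <section title="Basics">
    <lesson name="Welcome"/>
    <lesson name="Installation"/>
  </section>
  <section title="Advanced">
    <lesson name="Mutability"/>
  </section>
</sections>
""")

# Pull every lesson node anywhere in the tree with one path
# expression, instead of hand-writing the nested loops:
names = [el.get("name") for el in doc.findall(".//lesson")]
assert names == ["Welcome", "Installation", "Mutability"]

# Predicates work too: select sections that contain a lesson child.
titles = [s.get("title") for s in doc.findall(".//section[lesson]")]
assert titles == ["Basics", "Advanced"]
```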

giobox 1110 days ago [-]
JSONPath confers pretty much the same concept onto JSON, with much the same syntax. There are JSONPath implementations for most languages now.

> https://github.com/json-path/JsonPath

kstrauser 1110 days ago [-]
JMESPath is similar: https://jmespath.org
lmm 1111 days ago [-]
I'm guessing it's because you haven't tried to write an XSLT recently. Yes, it's a way to declaratively express transformations, which has a lot of elegant theoretical properties. But that doesn't matter when it's so, so painfully tedious to actually write.
saurik 1111 days ago [-]
XSL/T just kept getting better; like, I agree with exactly what you said for 1.0--which sadly is all libxslt (and thereby browsers that relied on it) ever supported--but XSL/T 2.0 was pretty epic, and there is now even a 3.0 as of a few years ago.
emmanueloga_ 1111 days ago [-]
Agreed, XSLT 3.0 is amazing... and XQuery provides a more palatable syntax, with similar semantics.

Kinda ironic that, given the problem definition, no solution has been provided in XQuery or XSLT yet, the widely used domain-specific languages that best suit the problem :-).

I believe there's a lot of inspiration to be had from the X-family of languages. The "secret sauce" is XPath. We need a new language drawing inspiration from the tree processing of the X* languages but with more modern syntax and a less byzantine type system (read: anything but XML Schema :-). Fewer angle brackets would help too!

---

Bonus points: add tuples and prolog like queries to such language (thinking of SPARQL) ... now that would be something really powerful.

bmn__ 1111 days ago [-]
> We need a new language drawing inspiration

This exists (optics/lenses), but most programmers don't know about it. I expect it to hit mainstream programming languages in a few years.

I don't know of a simple, jargon-free explanation of the concepts. WP is useless, sadly. http://enwp.org/Bidirectional_transformation

Also see: https://news.ycombinator.com/item?id=24710565

josevalim 1111 days ago [-]
Thanks for sharing! Solutions to this problem in the vein of jq and optics were precisely why I started the repo above.
bradrn 1111 days ago [-]
> This exists (optics/lenses), but most programmers don't know about it. … I don't know of a simple, jargon-free explanation of the concepts.

Let me have a go! I’d explain optics/lenses as giving field access as a data structure. That is, a lens is an object which describes how to access an element from a ‘larger’ value. For example, you can define _1 and _2 as lenses for accessing the first and second element respectively of a tuple:

    _1 :: Lens' (a,b) a
    _2 :: Lens' (a,b) b
(Using Haskell syntax here as that’s what I know best, though I’ve simplified the types slightly.)

The great advantage of having these as first-class values is that you can now manipulate them. Of course, the main things you’ll want to do with these are get and set things:

    > (1,2) ^. _1
    1

    > (1,2) & _1 .~ 3
    (3,2)
More interestingly, you can also use a function to modify the value to which a lens is pointing:

    > (1,2) & _1 %~ negate
    (-1,2)
Most usefully, you can also compose lenses. For instance, to get the second element of the first element of a nested tuple, you can compose _2 and _1 using dot notation:

    _1._2 :: Lens' ((a,b),c) b
Or, for a more realistic example, here’s a lens which accesses the ‘numSeen’ field of the value corresponding to key "keyToIncrement":

    (ix "keyToIncrement")._numSeen :: Lens' (Map String MyStructure) Int
(Well, strictly speaking, this is actually a traversal or a prism or something, because "keyToIncrement" might not exist, but let’s ignore that for this really simple example…)

And now that you have it, you can use this new lens to do anything you want with this value: you can get it, set it, increment it, decrement it, print it, etc.

Additionally, you don’t have to restrict yourself to just lenses. For example, ‘traversals’ are a generalisation of lenses which allow you to access 0, 1 or more elements, rather than restricting access to just one element at a time. For instance, you can use ‘both’ to transform, well, both elements of a tuple:

    > (1,2) & both %~ negate
    (-1,-2)
Or you can use ‘traversed’ to access each element of, say, a list:

    > [1,2,3,4,5] & traversed .~ 0
    [0,0,0,0,0]
Of course, there are other ways of doing each of these. If you happen to be working with mutable values, you can sometimes use dot notation to access deeply nested fields. Some use cases of ‘traversed’ can be replaced by a map. However, there really is no comparable substitute for lenses when working with deeply nested immutable structures, especially ones of unknown shape.
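For readers without Haskell, here's a toy Python rendering of the same get/set-pair idea (all names here are invented; real optics libraries are far more general):

```python
from collections import namedtuple

# A lens is just field access reified as a value: a getter plus a
# copy-on-write setter.
Lens = namedtuple("Lens", ["get", "set"])

def field(name):
    """Lens focusing on one key of a dict, copying on update."""
    return Lens(get=lambda s: s[name],
                set=lambda s, v: {**s, name: v})

def compose(outer, inner):
    """Lens into the inner part of the outer part."""
    return Lens(get=lambda s: inner.get(outer.get(s)),
                set=lambda s, v: outer.set(s, inner.set(outer.get(s), v)))

def over(lens, f, s):
    """Modify the focused value with a function (Haskell's %~)."""
    return lens.set(s, f(lens.get(s)))

stats_num_seen = compose(field("stats"), field("numSeen"))

d = {"stats": {"numSeen": 41, "other": 1}}
d2 = over(stats_num_seen, lambda n: n + 1, d)
assert d2 == {"stats": {"numSeen": 42, "other": 1}}
assert d == {"stats": {"numSeen": 41, "other": 1}}  # original untouched
```

The payoff is the `compose` step: deep immutable updates become one expression instead of nested dict-copying by hand.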
gugagore 1111 days ago [-]
> you can define _1 and _2 as lenses for accessing the first and second element respectively of a tuple:

    _1 :: Lens' (a,b) a
    _2 :: Lens' (a,b) b
From the looks of it, this is to access the first and second element specifically of a 2-tuple. I suppose if you had to generalize it, you would actually get a traversal or a prism or something, because those elements might not exist.
bradrn 1110 days ago [-]
> From the looks of it, this is to access the first and second element specifically of a 2-tuple.

Yes, this is correct. (I don’t know about other languages, but it’s worth noting that in Haskell, tuples are of fixed size — 2-tuples, 3-tuples etc. are considered completely different types. A result of this is that ‘tuple of any length’ is not a valid concept in Haskell; you have to write a separate definition for each tuple size.)

> I suppose if you had to generalize it, you would actually get a traversal or a prism or something, because those elements might not exist.

Well, it depends how you generalise it. Haskell lens libraries, for instance, use a more general type which allows _1 and _2 to work with 2-tuples and larger, _3 to work with 3-tuples and larger, _4 to work with 4-tuples and larger, and so on. Thus these remain lenses, as they always access exactly one value.

Another generalisation is to define a lens allowing access of an arbitrary element of a list. This may not return a value, and thus is indeed a traversal.

iudqnolq 1110 days ago [-]
This sounds essentially impossible without some kind of garbage collection. Am I missing something?
bradrn 1110 days ago [-]
Well, Haskell does have garbage collection, but why would lack of garbage collection make this problematic? (There seems to be a Rust crate for lenses: https://lib.rs/crates/lenses, though I can’t assess it as I’ve never used Rust.)
iudqnolq 1110 days ago [-]
That's interesting. That crate provides immutable access only and requires the fields to be trivially copyable. For example, I think there's no way for you to mutate one field if you've handed out a lens to a different field.

The core problem would be you can't write something like this

  fn return_part() -> &Part {
    let whole = get_whole();
    &whole.part
  }
because whole is "dropped" at the end of the function call.

If you want to hand out parts you have to put the whole somewhere where it will stick around until you're done with the parts, and doing that nicely pretty much requires GC (you could also use lenses only in very restricted scopes or just leak memory).

alehander42 1111 days ago [-]
however: why those instead of iterators or functions? how is a `lens` not, in general, a function (or a yielding iterator) which returns some kind of view?
Twisol 1110 days ago [-]
In the face of mutation and dynamic typing, yes, an iterator providing mutable references to the focused piece of the subject would give you something basically like a traversal (a generalization of a lens over list-like things). However, consider the case of a struct with five fields, each of a different type -- you can't just for-each over those. And you don't quite have the black-box composability benefits of lenses, since you need a value to iterate over.

A lens is a (1) functional (2) bi-directional (3) first-class and (4) composable generalization of (5) struct field names in most traditional languages.

1) "Functional" meaning we eschew mutation and implicit context. Mutation is a very powerful feature of a language, and much can be encoded using it. When we remove mutation as an option, many of the things we would have just used mutation for separate into distinct and rich solution spaces.

2) "Bi-directional" meaning we can use the same widget for both accessing a sub-part of a whole, and for replacing that sub-part within a whole. I personally call these operations "extract" and "replace", since "get" and "set" feel like they raise the specter of mutability.

3) "First-class" means that the lenses themselves are manipulable constructs within the language. In a language like C or Java, I can have "obj.a.b.c", but I cannot have ".a.b.c" itself on its own. Even if you consider method references (like "Obj::a" in Java), you've only obtained the ability to extract the "a" field from an Obj -- any particular method or function is only going to go in one direction.

4) "Composable" means that you can take two lenses and fit them together to get a new lens. Conceptually, I think this is understood -- we intuitively understand how to navigate an object graph when writing down an expression like "obj.a.b.c" -- but having a composable first-class abstraction lets us do the same thing programmatically. (It need not even be at "runtime" -- imagine a compile-time macro system that programmatically generates nested accesses from some base rules you define. I think you've got the rudiments of a dependency injection system here.)

5) "Struct field" means that for the given type, the field is always accessible. "List element" (or "tree leaf", ...) would give you traversals instead of lenses, and "tagged union variant" / "subclass" would give you prisms.

As for potential applications of the approach, there's the dependency injection graph idea outlined above. I've also used ideas from lenses/prisms in serialization/deserialization and web service middleware. (Middleware compose really well as functional optics!)

With profunctor optics, you can even construct self-describing operations. For instance, imagine a JSON parser that can be directly interrogated to obtain the schema it accepts, or a middleware stack that can be interrogated to determine which headers it looks at, and which routes it accepts. Personally, this is the area I'm most excited about with functional optics.

bradrn 1110 days ago [-]
Excellent explanation! I think this is far better than my post at explaining why optics are so useful.

> With profunctor optics, you can even construct self-describing operations. For instance, imagine a JSON parser that can be directly interrogated to obtain the schema it accepts, or a middleware stack that can be interrogated to determine which headers it looks at, and which routes it accepts. Personally, this is the area I'm most excited about with functional optics.

I’m not sure this has anything to do with profunctor optics per se; you can do it with any structure which is polymorphic enough. (A neat example I saw recently: Build Systems à la Carte [0] allow you to get a list of dependencies from a build task, as long as that task is Applicative i.e. does not have dynamic dependencies.)

[0] https://www.microsoft.com/en-us/research/uploads/prod/2018/0...

Twisol 1110 days ago [-]
> Excellent explanation! I think this is far better than my post at explaining why optics are so useful.

Thank you!

> I'm not sure this has anything to do with profunctor optics per se

Yes, fair! I didn't mean to imply that only profunctor constructions can be self-descriptive. Rather, I meant that optics based solely on the function arrow don't really give you anywhere to annotate with description in the first place. By assuming less about the operations they're lifting, profunctor optics give you more freedom to model a self-descriptive profunctorial operation.

In the `(a -> f b) -> (s -> f t)` formulation, the lens takes a function and returns an adapted function. There's nowhere to put additional information, and even if you could stash it on the `f`, you'd have to feed it an `s` just to get at it.

In the `p a b -> p s t` formulation, the function arrow (if it's used in your chosen `p` at all!) doesn't have to be the top-level type you return; you could have something like `(a -> f b, Metadata)` instead. You can add combinators to your concrete `p` that manage the metadata and let you combine multiple such operations, and intermix them freely with profunctor-generic lenses that don't care what concrete profunctor you're using. In the end, you get a `p a b` with the metadata immediately available and not necessarily hidden under a function application.

This distinction only comes up because lenses are polymorphic in both the input and output types. The example of a JSON parser is only polymorphic in its output type (we can assume the input is always some String), so you can easily get a `Parser b` with all the metadata at the surface. Since Profunctors have two type parameters, one covariant and one contravariant, we can build models that vary over the input type while retaining the ability to be self-descriptive.

I read a blog post recently [0] that showed some examples of using Profunctors in this way. It also shows the relationship between Profunctor and Arrow; the latter is basically a Strong Profunctor (you can turn a `p a b` into a `p (c, a) (c, b)` that threads a `c` through unaltered) where you can also compose Profunctor elements directly (`p a b -> p b c -> p a c`). In the end, you get an entire operation `p a z` in your Profunctor, with any statically observable metadata ready to be extracted from its structure.

[0] https://elvishjerricco.github.io/2017/03/10/profunctors-arro...

saagarjha 1110 days ago [-]
Swift has some of this with keypaths.
alehander42 1111 days ago [-]
thanks! i think i now understood the idea
emmanueloga_ 1110 days ago [-]
optics/lenses solve this problem in a framework that is jargon-heavy and difficult to understand, and most implementations I've seen use languages with very strict type systems.

The XSLT+XPath approach is more dynamic, easier to grasp (imho) and can probably be learned in an afternoon.

Tradeoffs, tradeoffs... I wouldn't want to write most programs in Haskell but you could write pretty much any program in, say, JavaScript ... :-)

lifthrasiir 1111 days ago [-]
I find the very idea of XSLT simply pointless. It is fundamentally structured around templates and doesn't accept anything beyond XML (even in XSLT 3.0, where you can have an initial template but can't get rid of it), making it cumbersome for general programming. It is also very verbose, so it is not suitable for scripting either. For XSLT to be useful you need a complex enough transformation (so you actually want templates) and nothing else (so you can avoid general programming); that use case is pretty rare.
nerdponx 1111 days ago [-]
Sounds like declarative pattern matching which sounds just great to me [0].

I won't deny that XML sucks to write. But I'm not yet ready to throw the baby out with the bathwater.

Maybe we keep using XML as a machine-readable serialization format (and XSLT as serialization of pattern-matching data transformations!) and people use native data structures in their own programming languages. Or we all switch to S-expressions.

0: https://github.com/noprompt/meander

lifthrasiir 1111 days ago [-]
Pattern matching is great, but probably not if everything is pattern matching. Also I'm okay with XML as long as it doesn't power an urge to make everything XML and everything connected via the semantic web (lol). XSLT is a remnant of that urge and should have been a non-XML-based language. (The same criticism applies to, e.g. RDF vs. Notation3.)
nerdponx 1111 days ago [-]
Very good point. Sexprs for all!
quotemstr 1111 days ago [-]
It's interesting to me how weighty and important technical decisions get made on the basis of the most superficial aesthetic considerations. XML's data model is beautiful, but on a syntactic level, it's kind of ugly, and I think it's mostly for this reason that a certain segment of the industry eschews it even today.

Honestly, if XML had kept SGML-style implicit closing tags, we might have never seen the rise of alternative data markup systems.

That is, instead of

    <foo>some_value</foo>
you should be able to write (as you can write in SGML)

    <foo>some_value</>
There. Now, in exchange for a slight increase in parser complexity, you address a big syntactic wart (the repetition) that drives people away from XML. Seems like a win to me.
mands 1111 days ago [-]
Couldn't agree more. Using the XML toolchain (via lxml in Python) for a recent project couldn't have gone better: you can get really far quickly with XPath, XSLT, and Relax-NG schemas to ensure correctness. Writing XSLT extensions in Python in particular is really powerful.
arethuza 1111 days ago [-]
I did a lot of work back in the day with XML, XML Schemas, XSLT etc.

I much prefer JSON, JSON Schema even JSON Path (although not used the latter quite as much as the first two).

Edit: XSLT could be kind of neat in some contexts if used with a degree of restraint, but it usually wasn't.

raspasov 1111 days ago [-]
Non-idiomatic ClojureScript, 10 lines:

https://gist.github.com/raspasov/0521706c3f963a9209152bfb2be...

IMO The problem is poorly defined. Data structures should avoid control flow instructions like "reset_lesson_position"

js8 1111 days ago [-]
What if we rephrased it as having the lists sorted by additional field "category", and you are supposed to reset the counter in a new category?
josevalim 1111 days ago [-]
Correct. In the real-life example this came from, a separate parameter would control how to reset the lesson counter but it was moved into a boolean property to keep the problem focused.

Your "category" idea would be a nice addition to the problem though. The logic of when to reset would still be fairly simple and folks would have to explore how to represent the initial category value before traversal starts. I assume most would solve it with nil and maybe/option types.

leontrolski 1111 days ago [-]
In python/javascript, I'm a big fan of using yield - https://leontrolski.github.io/recursing-with-yield.html

It just feels nice to me.
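A sketch of what that looks like for recursive traversal (tree shape invented here; `yield from` does the recursion):

```python
def iter_names(node):
    """Recursively yield every 'name' in an arbitrarily nested tree."""
    if "name" in node:
        yield node["name"]
    for child in node.get("children", []):
        yield from iter_names(child)  # recursion without accumulator plumbing

tree = {"name": "root", "children": [
    {"name": "a", "children": [{"name": "a1"}]},
    {"name": "b"},
]}
assert list(iter_names(tree)) == ["root", "a", "a1", "b"]
```

The callee never builds a result list; the caller decides whether to collect, filter, or stop early.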

dsego 1111 days ago [-]
What are the benefits over the classic way?
seanhunter 1111 days ago [-]
It can be a very elegant way to encapsulate annoying complexity in an interface that is easy for callers to use correctly.

Typical example: say you need to connect to an API that has pagination. You call it and you get a chunk which has a number of things and a counter of remaining chunks. Normally this kind of thing requires the caller to keep a lot of messy state.

With yield you can make an interface that the caller can just iterate over: it will give them one thing at a time and fetch the next chunk if the current one is exhausted and there are chunks remaining. The state of the "current chunk", "chunks remaining", index, etc. can all be enclosed, so the caller doesn't need to know anything about them.
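A sketch of that pattern in Python; `fetch_page` here is a made-up stand-in for the real API call:

```python
def fetch_page(page):
    """Stand-in for a paginated API: returns (items, has_more)."""
    data = [[1, 2, 3], [4, 5], [6]]
    return data[page], page + 1 < len(data)

def all_items():
    """Generator hiding all the pagination state from the caller."""
    page, has_more = 0, True
    while has_more:
        items, has_more = fetch_page(page)
        yield from items
        page += 1

# The caller just iterates; chunk bookkeeping is invisible.
assert list(all_items()) == [1, 2, 3, 4, 5, 6]
```

Note the next page is only fetched lazily, when the caller actually pulls past the current chunk.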

gugagore 1111 days ago [-]
You could require the caller to keep an opaque handle, and have all the state encapsulated on that handle. The caller won't know if this state is messy or not, and I think this does a good job of encapsulating annoying complexity.

But it doesn't prevent the caller from passing the handle to another function, or copying the handle, which could mess things up, and I suppose `yield` helps with that.

But most of all, I'd say it makes it easier to write the callee code, not the caller code, since the callee does not need to "reify" the state of "current chunk" and "chunks remaining" into an object, nor (more crucially) the state of loops and other control flow.

iudqnolq 1110 days ago [-]
Absolutely. It's like how async can be replaced with a struct and a giant switch statement, but that's a PITA to write manually.
citrin_ru 1110 days ago [-]
TIL about postfix dereference in Perl: https://perldoc.perl.org/perlref#Postfix-Dereference-Syntax

One of the advantages of Perl, to me, is its stable syntax: usually I can open code someone wrote recently, after not using Perl for a while, and not encounter any unknown language features. It turns out that even in Perl there are new features you need to learn before you can read the code, though compared to other languages it's still easy to keep up with them all.

jedisct1 1111 days ago [-]
The Rust and Haskell versions are complicated to understand for such a simple task.
Chris_Newton 1111 days ago [-]
IMHO, the Haskell version using mapAccumL[1] is actually quite a nice demonstration of the power of having lots of computational patterns available immediately from the standard library.

The biggest problem in Haskell that a language like JS doesn’t have is the types: you’re converting both the outer type for sections and the inner type for lessons into something else, so in any language with explicit static types you’re going to have the overhead of specifying those, or you have to hack around it a bit as we see here.

However, if you ignore the type boilerplate and the definition of the input data, the substance of the algorithm here would be around the same length as the imperative JS for-of version, particularly with less verbose names for the local variables as idiomatic Haskell tends to use.

With all four types explicitly defined and using another map-accumulate instead of the zipWith so updating the lessons with their positions was done similarly to updating the sections, I think it could be quite readable as well, even with Haskell’s limited support for record types.

[Edit: Having actually tried this now, even with DuplicateRecordFields, my Haskell code still seems unnecessarily verbose because of the record manipulations, but I still quite like the use of mapAccumL for making the pattern of computation explicit.]

[1] https://github.com/josevalim/nested-data-structure-traversal...

momentoftop 1110 days ago [-]
I've always liked `mapAccumL`, and nowadays it amuses me that it is implemented by mapping with a state monad (and thus works over arbitrary Traversables).

With that precedent, I'm happy to implement the given problem using State and traverse, so it's structured much the same as the obvious imperative version:

    incr :: (MonadState s m, Num s) => m s
    incr = get <* modify (+1)

    annotate :: [Section x] -> [Section Int]
    annotate ss = flip evalState 1 . forM (zip [1..] ss) $ \(i,Section title reset lessons _) -> do

      when reset (put 1)

      lessons <- (traverse . traverse) (const incr) lessons

      pure $ Section title reset lessons i
jkelleyrtp 1111 days ago [-]
That seems to be a choice of the author; a new approach has been uploaded that uses the exact same logic as the Python approach and is similarly succinct.

https://github.com/josevalim/nested-data-structure-traversal...

alpaca128 1111 days ago [-]
The map-mutable variant in Rust doesn't seem all too complicated to me; but I have the advantage of being used to the iterator+closure syntax which admittedly can look unnecessarily confusing sometimes.

I personally often avoid long chained iterator operations, especially if they involve `for_each()`. Usually they're barely shorter than a for loop but seem to be harder to figure out for the type & ownership checker, leading to annoying issues with less helpful error messages. And just like with pipes in Bash I often run into situations where I wish I had multiple "channels" through which I can pass data, and a loop doesn't have any limitation like that.

Luckily there are multiple neat ways to solve problems in the language.

taeric 1110 days ago [-]
The Common Lisp solutions should include at least one LOOP example, since that would almost certainly be directly comparable to the Python example.
taeric 1110 days ago [-]
Amusingly, I think I will lose this if I don't link it here. https://github.com/josevalim/nested-data-structure-traversal... is my attempt at getting a LOOP solution in.

As noted in the message, I originally did this in elisp, which felt a bit more natural, for some reason.

marvel_boy 1111 days ago [-]
Simplicity wins. Zig wins again. One more time.
dastx 1111 days ago [-]
I don't know about you, but having to prefix every line of what I'm assuming is something similar to a heredoc with `\\` doesn't seem simple to me. It's as bad as having to suffix each line with `\` to escape newlines. That's an odd design choice and horrible UX.

> section.Object.get("reset_lesson_position").?.Bool

Also, requiring you to provide the type every time you retrieve an element from a JSON string seems odd. JSON already has the data type; why do I need to provide the type every time I retrieve the data?

yakubin 1111 days ago [-]
> Also, requiring you to provide the type every time you retrieve an element from a JSON string seems odd. JSON already has the data type; why do I need to provide the type every time I retrieve the data?

Because JSON is a serialization format, and as such it is external to your code. One day an element in the JSON you receive is a boolean, but another day it's a string, because another system had a bug and started producing junk data. Now in your code it's good to 1. document what type you expect, and 2. trigger an error if the types don't match. It also has the benefit that all types can be checked statically, without providing any surrogate input. A schema integrated somehow with the language could be a more optimal solution, though. But no schema and no type annotations in the code is a no-no from me.

UPDATE: Now that I think about it, I wonder why JSON was used in the Zig example at all, instead of built-in data types. Seems like a really odd choice.
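That "document the expected type and fail on mismatch" discipline looks like this in Python (the `get_bool` helper is invented for illustration):

```python
import json

def get_bool(obj, key):
    """Fetch a key, failing loudly if the type is not what we expect."""
    value = obj[key]
    if not isinstance(value, bool):
        raise TypeError(f"{key!r}: expected bool, got {type(value).__name__}")
    return value

section = json.loads('{"reset_lesson_position": true, "title": "Basics"}')
assert get_bool(section, "reset_lesson_position") is True

# Junk data from another system triggers an error at the boundary
# instead of silently flowing through the program:
bad = json.loads('{"reset_lesson_position": "yes"}')
try:
    get_bool(bad, "reset_lesson_position")
except TypeError as err:
    print(err)  # 'reset_lesson_position': expected bool, got str
```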
