How to make a fast dynamic language interpreter (zef-lang.dev)
tiffanyh 2 hours ago [-]
I see Lua was included, wish LuaJIT was as well.
pizlonator 2 hours ago [-]
I bet LuaJIT crushes Zef! Or rather, I would hope that it does, given how much more engineering went into it

There are many runtimes that I could have included but didn’t.

Also, it’s quite impressive how much faster PUC Lua is than QuickJS and Python

raincole 58 minutes ago [-]
Because QuickJS is really slow. Don't be fooled by the name. It's almost an order of magnitude slower than node/v8.

(I suppose the "quick" in QuickJS means "quick for a pure interpreter without JIT compilation" or something...)

pizlonator 53 minutes ago [-]
Based on this data, it’s probably slower than JSC’s or V8’s interpreter.

So like that’s wild

zephen 1 hour ago [-]
> it’s quite impressive how much faster PUC Lua is than QuickJS and Python

Python's execution time is mostly spent looking things up. I don't think Lua is quite as dynamic.

pizlonator 1 hour ago [-]
Lua is way more dynamic
zephen 45 minutes ago [-]
I suppose it depends on where you look for dynamicity. In some ways, Lua is much more laissez-faire, of course.

But in Python, everything is an object, which is why, as I said, it spends much of its time looking things up. And things like bindings for closures are late, so that's more lookups as well.

In Lua, many things aren't objects, and, for example, you can add two numbers without looking anything up. Another wrinkle, of course, when you do that, is that you could conceivably overflow an integer, which can't happen in Python, since its integers are arbitrary-precision.

The Python interpreter has some fast paths for specific object types, but it is really limited in the optimizations it can do, because there simply aren't any unboxed types.
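To make the lookup point concrete, here is a small Python sketch (the names `f`, `g`, and `add` are illustrative, not from the thread): globals are resolved at call time rather than at definition time, even a plain `a + b` compiles to a generic opcode that dispatches on the runtime types of boxed operands, and integers never overflow because they are arbitrary-precision.

```python
import dis

# Global names are bound late: every call to f() looks 'g' up again.
def f():
    return g()

def g():
    return 1

print(f())     # 1
g = lambda: 2  # rebinding 'g' changes what f() calls
print(f())     # 2

# A plain addition compiles to a generic opcode (BINARY_OP on
# CPython 3.11+, BINARY_ADD on older versions) that must dispatch
# on the runtime types of its boxed operands.
def add(a, b):
    return a + b

ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)

# Integers are arbitrary-precision, so "overflow" never happens.
print(2 ** 64 + 1)  # 18446744073709551617
```

Running `dis` on real hot loops shows the same pattern at scale: almost every bytecode either looks a name up or dispatches on a boxed object's type, which is exactly where CPython's time goes.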

grg0 4 hours ago [-]
Interesting, thanks for sharing. It is a topic I'd like to explore in detail at some point.

I also like how, according to Github, the repo is 99.7% HTML and 0.3% C++. A testament to the interpreter's size, I guess?

pizlonator 4 hours ago [-]
I committed the statically generated site, which is wastefully large because of how I generate the code browser.

But yeah the interpreter is very small

boulos 2 hours ago [-]
How's your experience with Fil-C been? Is it materially useful to you in practice?
pizlonator 2 hours ago [-]
I’m biased since I’m the Fil.

It was materially useful in this project.

- Caught multiple memory safety issues in a nice deterministic way, so designing the object model was easier than it would have been otherwise.

- C++ with accurate GC is a really great programming model. I feel like it speeds me up by 1.5x relative to normal C++, and maybe like 1.2x relative to other GC’d languages (because C++’s APIs are so rich and the lambdas/templates and class system is so mature).

But I’m biased in multiple ways

- I made Fil-C++

- I’ve been programming in C++ for like 35ish years now

vlovich123 54 minutes ago [-]
I’m curious. Given the overheads of Fil-C++, does it actually make sense to use it for greenfield projects? I like that Fil-C fills a gap in securing old legacy codebases, I’m just not sure I understand it for greenfield projects like this other than you happen to know C++ really well.
pizlonator 49 minutes ago [-]
It made sense because I was able to move very quickly, and once perf became a problem I could move to Yolo-C++ without a full rewrite.

> happen to know C++ really well

That’s my bias, yeah. But C++ is good for more than just perf. If you need access to low-level APIs, or libraries that happen to be exposed as C/C++ APIs, or good support for dynamic linking and separate compilation, then C++ (or C) is a great choice.
