Author of the post here - happy to answer any questions.
beltranaceves 4 hours ago [-]
Great job! I'm working on a similar blog post, and it was fun seeing how you approached it.
I was surprised the WASM implementation is fast enough; I was even considering writing WebGPU compute shaders for my solver.
philzook 5 hours ago [-]
Beautiful stuff, great post!
gxcode 5 hours ago [-]
Thank you, really appreciate that.
mjburgess 6 hours ago [-]
I'd be interested, if anyone has suggestions, in MPC applied to ML/AI systems -- it seems to be an underserved technique/concern in MLEng, and I'd expect to see more on it.
lagrange77 1 hour ago [-]
There is a big overlap between Optimal Control and Reinforcement Learning, in case you didn't know.
There's a lot of work in the broad area; most of it doesn't engage with the classical control theory literature (arguably it should).
Some keywords to search for recent hot research would be "world model", "decision transformer", "active inference", "control as inference", "model-based RL".
I hacked it together using MPPI, and it only works on the cartpole model so I didn't have to dwell in JavaScript too long; just click the 'MPPI Controller' button and you can perturb the model and see it recover.
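In case it's useful, the core of an MPPI update looks roughly like the sketch below (TypeScript, heavily simplified: the dynamics are the textbook cartpole equations, and the parameters, cost weights, and names are illustrative rather than the demo's actual code). Each frame you'd call mppiControl(currentState, plan) and apply the returned force to the cart.

    // Simplified MPPI (Model Predictive Path Integral) sketch for a cartpole.
    // Illustrative only: constants and cost weights are made up.
    type State = [number, number, number, number]; // [x, xDot, theta, thetaDot]

    const DT = 0.02, HORIZON = 40, SAMPLES = 256, LAMBDA = 1.0, NOISE_STD = 5.0;
    const G = 9.81, M_CART = 1.0, M_POLE = 0.1, L = 0.5;

    // One Euler step of the textbook cartpole dynamics under force u.
    function step([x, xDot, theta, thetaDot]: State, u: number): State {
      const sinT = Math.sin(theta), cosT = Math.cos(theta);
      const total = M_CART + M_POLE;
      const temp = (u + M_POLE * L * thetaDot * thetaDot * sinT) / total;
      const thetaAcc =
        (G * sinT - cosT * temp) /
        (L * (4 / 3 - (M_POLE * cosT * cosT) / total));
      const xAcc = temp - (M_POLE * L * thetaAcc * cosT) / total;
      return [x + DT * xDot, xDot + DT * xAcc,
              theta + DT * thetaDot, thetaDot + DT * thetaAcc];
    }

    // Running cost: penalise the pole falling over and the cart drifting.
    function cost([x, xDot, theta, thetaDot]: State): number {
      return 10 * theta * theta + 0.1 * thetaDot * thetaDot
           + 0.5 * x * x + 0.1 * xDot * xDot;
    }

    // Standard normal sample via the Box-Muller transform.
    function gaussian(): number {
      const u1 = Math.random() || 1e-12, u2 = Math.random();
      return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    }

    // One MPPI update: sample noisy control sequences, roll them out through
    // the dynamics, and combine them with a softmax weighting on cost.
    // Returns the first control of the updated plan and shifts the plan.
    function mppiControl(state: State, nominal: number[]): number {
      const noises: number[][] = [], costs: number[] = [];
      for (let k = 0; k < SAMPLES; k++) {
        const noise = nominal.map(() => NOISE_STD * gaussian());
        let s = state, c = 0;
        for (let t = 0; t < HORIZON; t++) {
          s = step(s, nominal[t] + noise[t]);
          c += cost(s);
        }
        noises.push(noise);
        costs.push(c);
      }
      const minCost = Math.min(...costs);
      const weights = costs.map((c) => Math.exp(-(c - minCost) / LAMBDA));
      const wSum = weights.reduce((a, b) => a + b, 0);
      for (let t = 0; t < HORIZON; t++) {
        let du = 0;
        for (let k = 0; k < SAMPLES; k++) du += weights[k] * noises[k][t];
        nominal[t] += du / wSum;
      }
      const u = nominal[0];
      nominal.push(0);   // shift the plan one step forward for the next call
      nominal.shift();
      return u;
    }

This leaves out the control-cost term and covariance handling from the full MPPI formulation, but it's enough to see why it's so easy to bolt onto an existing simulator: all it needs is a forward dynamics step and a cost function.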
tantalor 6 hours ago [-]
I love this kind of stuff because it seems like a roughly equal blend of art, science, and engineering.
Also, Steve Brunton does a lot of work on the interface between control theory and ML on his channel: https://www.youtube.com/channel/UCm5mt-A4w61lknZ9lCsZtBw/pla...