> This dichotomy gels really well with the way my brain works. I’m able to channel short bursts of creative energy into precisely mapping the domain or getting type scaffolding set up. And then I’m able to sustain long coding sessions to actually implement the feature because the scaffolding means I rarely have to think too hard.
It's why I keep begging my team, every time there's a new codebase (or even a new feature), to stop throwing `any` onto everything more complicated than a primitive. It is exhausting. It forces me to waste energy on the shitty, tedious parts. It forces me to debug working code just to find out how it works before I can start my work.
They tend to take the quickest solution to everything -- which means everyone else has to do the same work over and over again until someone (me, invariably) sits down and makes a permanent record of it.
In doing this they ensure that I can't trust any of their code, which is counterproductive for what should be obvious reasons. Every time I work on established, untyped (or poorly typed) code, it's like I'm writing new code with hidden, legacy dependencies.
stouset 2 days ago [-]
The longer I’m in this industry, the more I find that there are two types of programmers: those who default to writing every program procedurally and those who default to doing so declaratively.
The former like to brag about how quickly they can go from zero to a working solution. The latter brag about how their solutions have fewer bugs, need less maintenance, and are easier to refactor.
I am squarely in the latter camp. I like strong and capable type systems that constrain the space so much that—like you say—the implementation is usually rote. I like DSLs that allow you to describe the solution and have the details implemented for you.
I personally think it’s crazy how much of the industry tends toward the former. Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements. But so much more of our time and energy is spent maintaining code than writing it in the first place that upfront work like defining and relating types rapidly pays dividends.
I have multiple products in production at $JOB that have survived nearly a decade without requiring active maintenance other than updating dependencies for vulnerabilities. They have had a new version deployed maybe 3-5 times in their service lives and will likely stay around for another five years to come. Being able to build something once and not having to constantly fix it is a superpower.
dasil003 2 days ago [-]
> Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements
I agree with your observations, but I'd suggest it's not so much about domain (though I see where you're coming from and don't disagree), but about volatility and the business lifecycle in your particular codebase.
Early on in a startup you definitely need to optimize for speed of finding product-market fit. But if you are successful then you are saddled with maintenance, and when that happens you want a more constrained code base that is easier to reason about. The code base has to survive across that transition, so what do you do?
Personally, I think overly restrictive approaches will kill you before you have traction. The scrappy shoot-from-the-hip startup on Rails will beat the Haskell code craftsmen 99 out of 100 times. What happens next though? If you go from 10 to 100 to 1000 engineers with the same approach, legibility and development velocity will fall off a cliff really quickly. At some point (pretty quickly) stability and maintainability become critical factors that impact speed of delivery. This is where maturity comes in: it's not about some ideal engineering approach, it's about recognizing that software exists to serve a real-world goal, and how you optimize for that depends not only on the state of your code base but also on the state of your customers and the business conditions you are operating in. A lot of us became software engineers because we appreciate the concreteness of technical concerns and wanted to avoid the messiness of human considerations and social dynamics, but ultimately those are where the value is delivered, and we can't justify our paychecks without recognizing that.
stouset 2 days ago [-]
Sure it’s important for startups to find market traction. But startups aren’t the majority of software, and even startups frequently have to build supporting services that have pretty well-known requirements by the time they’re being built.
We way overindex on the first month or even week of development and pay the cost of it for years and years thereafter.
k1musab1 23 hours ago [-]
Well said. This summarizes my experience quite succinctly. Many an engineer fails to understand the importance of distinguishing between different tempos and between immediate and long-term goals.
sfn42 1 days ago [-]
I'm not convinced that this argument holds at all. Writing good code doesn't take much more time than writing crap code, it might not take any more time at all when you account for debugging and such. It might be flat out faster.
If you always maintain a high standard you get better and faster at doing it right and it stops making sense to think of doing it differently as a worthwhile tradeoff.
mithametacs 1 days ago [-]
That's the hard part of project management.
Is it worth spending a bit more time up-front, hoping to prevent refactoring later, or is it better to build a buggy version then improve it?
I like thinking with pen-and-paper diagrams; I don't enjoy the mechanics of code editing. So I lean toward upfront planning.
I think you're right but it's hard to know for sure. Has anyone studied software methodologies for time taken to build $X? That seems like a beast of an experimental design, but I'd love to see.
sfn42 1 days ago [-]
I personally don't actually see it as a project management issue so much as a developer issue. Maybe I'm lucky, but in the projects I've worked on, a project manager generally doesn't get involved in how I do my job. Maybe a tech lead or something lays down some ground rules like test requirements etc, but at the end of the day it's a team effort: we review each other's code and help each other maintain high quality.
I think you'd be hard pressed to find a team that lacks this kind of cooperation and maintains consistently high quality, regardless of what some nontechnical project manager says or does.
It's also an individual effort to build the knowledge and skill required to produce quality code, especially when nobody else takes responsibility for the architectural structure of a codebase, as is often the case in my experience.
I think that in order to keep a codebase clean you have to have a person who takes ownership of the code as a whole, has plans for how it should evolve etc. API surfaces as well as lower level implementation details. You either have a head chef or you have too many cooks, there's not a lot of middle ground in my opinion.
dasil003 23 hours ago [-]
I hear you, and agree there’s not much overhead in basic quality, but it’s a bit of a strawman rebuttal to my point. The fact is that the best code is code that is fit for purpose and requirements. But what happens when requirements change? If you can anticipate those changes then you can make implementation decisions that make those changes easier, but if you guess wrong then you may actually make things worse by over-engineering.
To make things more complicated, programmers need practice to become fluent and efficient with any particular best practice. So you need investment in those practices in order for the cost to be acceptable. But some of those things are context dependent. You wouldn’t want to run consumer app development the way you run NASA rover development because in the former case the customer feedback loop is far more important than being completely bug free.
layer8 2 days ago [-]
What does strong typing have to do with procedural vs. declarative? IMO strong typing is beneficial regardless.
whilenot-dev 2 days ago [-]
Not OP, but I'd answer...
A strong type system is your knowledge about the world, or more precisely, your modeled knowledge about what this world is or contains. The focus is on data structures and data types, and that's about as declarative as it gets in a programming language(?). I'd also call it holistic.
A procedural approach focuses more on how this world should be transformed, through the use of conditional branching and algorithms. The focus feels less like the circumstances of this world and more like the temporary conditions of micro-states (if that makes any sense). I'd call it reductionistic.
mmis1000 8 hours ago [-]
It's the difference between "how?" and "what?". A procedural approach describes the steps you take to do something, but not what problem you want to solve or why those steps solve it. A declarative approach, on the other hand, describes the goal and intended solution first, then tries to derive a proper procedure to achieve that goal.
The two approaches have their own pros and cons, but they aren't mutually exclusive. Sometimes the goal and solution aren't that clear, so you work procedurally until you find a POC (proof of concept) that may actually solve the problem, then refine it in a declarative way.
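For illustration, the "how?" vs. "what?" contrast might be sketched like this (hypothetical helper names, not from the thread):

```typescript
// Imperative ("how?"): spell out each step of summing the even numbers.
function sumEvensImperative(xs: number[]): number {
  let total = 0;
  for (const x of xs) {
    if (x % 2 === 0) {
      total += x;
    }
  }
  return total;
}

// Declarative ("what?"): state what you want -- the even ones, summed --
// and let filter/reduce supply the stepping.
function sumEvensDeclarative(xs: number[]): number {
  return xs.filter((x) => x % 2 === 0).reduce((acc, x) => acc + x, 0);
}

console.log(sumEvensImperative([1, 2, 3, 4])); // 6
console.log(sumEvensDeclarative([1, 2, 3, 4])); // 6
```

Both produce the same result; the difference is whether the stepping is written by hand or implied by the description.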
mithametacs 2 days ago [-]
Grandparent was such a good post otherwise!
I love strong types. I love for loops. I love stacks.
GP! Try Rust. Imperative programming isn’t orthogonal to types. You can go hard in Rust. (I loved experimenting with it but I like GC)
GP! Try data driven design. Imperative programming isn’t orthogonal to declarative.
Real talk, show me any declarative game engine that’s worth looking at. The best ones are all imperative and data driven design is popular. Clearly imperative code has something going for it.
And the advantages aren’t strictly speed of development; imperative can be clearer. It just depends.
stouset 2 days ago [-]
I adore Rust. My point isn’t that you can’t have both, but that the two types of programmers have different default approaches to problem solving. One prefers to model the boundaries of domain as best they can (define what it should look like before implementing how it works), one prefers to do things procedurally (implement how it works and let “what it looks like” emerge as a natural result).
Neither is strongly wrong or right, better or worse. They have different strengths in different problem areas, though I do think we’ve swung far too hard toward the procedural approach in the last decade.
mithametacs 1 days ago [-]
I agree with the distinction in approaches. In other words, it sounds like you're distinguishing agile and waterfall.
I just find it odd to analogize:
> agile : waterfall :: imperative : declarative
stouset 1 days ago [-]
Either you misread something or I communicated poorly.
RadiozRadioz 23 hours ago [-]
I think TypeScript is part of the problem here. It's a thin layer atop a dynamically typed language with giant escape hatches and holes. I think it's great if you're stuck in JS, it's so much better than JS, but I can't think why anyone would choose it compared to a "real" statically typed language.
rendaw 2 days ago [-]
I think GP's point is that they haven't gone from zero to a working solution; they've gone from zero to N% of a working solution and then slowed everyone else down. Maybe for the most trivial programs they can actually reach a solution.
You can't write a program without knowing that x is a string or a number, your only choice is whether you document that or not.
mithametacs 2 days ago [-]
Yes you can, you handle every case equally. You don’t even need the reflection mechanisms to be visible to the user with a good type system. A good type system participates in codegen.
For a really simple example: languages that allow narrowing a numeric to a float, but also let you interpolate either into a string without knowing which you have.
A statically typed Console.log in JS/TS would be an unnecessary annoyance.
jfwuasdfs 2 days ago [-]
Agreed 100%. Other benefit is fearless refactoring since API changes appear as type errors.
With no type-checking you need to gamble testing all codepaths.
throwaway2037 2 days ago [-]
> those who default to writing every program procedurally and those who default to doing so declaratively.
What is the difference between programming procedurally and programming declaratively? I haven't seen these terms used this way before.
Mikhail_Edoshin 2 days ago [-]
It is actually a rather hard question. There is a web page somewhere where the author asks it, lists possible answers, and gets amazed by some of the definitions, such as "declarative is parallelizable". Cannot find it now, unfortunately.
I would say that imperative is the style that does computation in steps, so that at each step one can decide what to do next. Declarative normally lacks this step-like quality. There are non-languages that consist solely of steps (e.g. macros in some tools that let you record a sequence of steps), and while this is indeed imperative, it is not programming.
stouset 2 days ago [-]
Here’s my personal attempt at a definition.
One side cares more about how the solution is implemented. They put a lot of focus on the stuff inside functions: this happens, then that happens, then the next thing happens.
The other side cares more about the outside of functions. The function declarations themselves. The types they invoke and how they relate to one another. The way data flows between parts of the program, and the constraints at each of those phases.
Obviously a program must contain both. Some languages only let you do so much in the type system and everything else needs to be done procedurally. Some languages let you encode so much into the structure of the program that by the time you go to write the implementations they’re trivial.
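As an illustration of that last point, here is a sketch in TypeScript of encoding constraints into the structure so the implementation becomes rote (the shape and names are invented for the example):

```typescript
// A hypothetical request result where invalid states (e.g. "loaded" with
// no data) are unrepresentable by construction.
type RequestState =
  | { kind: "loading" }
  | { kind: "loaded"; data: string }
  | { kind: "failed"; error: string };

// With the states constrained up front, the implementation is trivial:
// the compiler forces each branch to be handled, and `data` is only
// accessible in the branch where it actually exists.
function describe(state: RequestState): string {
  switch (state.kind) {
    case "loading":
      return "still waiting";
    case "loaded":
      return `got ${state.data.length} bytes`;
    case "failed":
      return `error: ${state.error}`;
  }
}

console.log(describe({ kind: "loaded", data: "hello" })); // "got 5 bytes"
```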
mithametacs 1 days ago [-]
Sounds like a page worth another search if you don't mind. I'll give you Internet Points in return!
-- my attempt:
Imperative defines the order and semantics of each step.
Declarative defines the prerequisites and intent of each step.
I read that and the followup. It’s good writing but I slightly disagree. Imperative is actually a closer mathematical formalism for some things.
I find imperative better for expressing state machines. I find declarative better for backtracking.
You can write a state machine with just a loop, an assignable, and conditions. Writing state in prolog is irritating.
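That loop-plus-assignable-plus-conditions shape might look like this (a toy recognizer invented for illustration, in TypeScript for consistency with the rest of the thread):

```typescript
// A three-state machine recognizing strings of the form a+b+,
// written purely imperatively: one loop, one assignable, conditions.
type State = "start" | "inA" | "inB" | "reject";

function accepts(input: string): boolean {
  let state: State = "start";
  for (const ch of input) {
    if (state === "start") {
      state = ch === "a" ? "inA" : "reject";
    } else if (state === "inA") {
      state = ch === "a" ? "inA" : ch === "b" ? "inB" : "reject";
    } else if (state === "inB") {
      state = ch === "b" ? "inB" : "reject";
    }
    // once in "reject", no branch matches, so the machine stays rejected
  }
  return state === "inB";
}

console.log(accepts("aab")); // true
console.log(accepts("aba")); // false
```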
Mikhail_Edoshin 14 hours ago [-]
You don't even need a loop. Steps, conditions, and a 'goto'. Loops are actually a design mistake: they try to bound 'goto' by making it structured. They are declarative, by the way. As a special case or even as a common case they are fine, but not when they try to completely banish 'goto'. They are strictly secondary.
Similarly declarative programming is strictly secondary to imperative. It is a limited form of imperative that codifies some good patterns and turns them into a structure. But it also makes it hard or impossible not to use these patterns.
Mikhail_Edoshin 14 hours ago [-]
(I would also say that state machine is a foundational model for programming.)
DontchaKnowit 2 days ago [-]
I am also squarely declarative, but currently use a language for work that forces me to be procedural pretty much always, and it kinda sucks. My code always feels bad to me and the cognitive load is always super high.
eyelidlessness 1 days ago [-]
Is it the language that forces procedural code? In my experience it’s usually the stdlib, but the language itself is capable of declarative constructs outside of existing APIs. If that’s the case, an approach like “functional core, imperative shell” is often a good one. You can treat the stdlib like it’s any other external API, and wall it off as such.
ninetyninenine 2 days ago [-]
It’s not crazy: you have a million instructions, and are you going to write all of that out as a single declaration or as a list of procedures?
First off, the declaration is better as it’s less error-prone, but it comes at the cost of being harder to write and harder to interpret.
Imagine if we communicated with one declarative run on sentence. No paragraphs at all…
Procedures are the default and come more easily to our nature as humans.
Mikhail_Edoshin 2 days ago [-]
Declarative programming is essentially programming through a parameter. The declaration is that parameter that will be passed to some instruction. In small doses declarative programming occurs with every function call. In declarative programming the parameter is essentially the whole program and the instruction is implicit; we know more or less how it works, but generally assume it just exists or even forget about it and take it as the way things work.
Of course declarative programming is simpler and less error prone. But it is also essentially inflexible. The implicit instruction is finite and will inevitably run into a situation when the baked execution logic does not quite fit. It will be either inefficient or require a verbose and repetitive parameter, or just flat out incapable of doing what is desired. In this case declarative programming fails; it is impossible to fix unless we rewrite the underlying instruction.
E.g. 'printf' is a small example of declarative programming. It works rather well, especially when the compiler is smart about type checks, but once you want to vary the text conditionally it fails. (The things that replace 'printf' are template engines, which basically reimplement the same logic and control statements you already have in any language, with the engine working as an interpreter of that logic. The logic is rather crude and limited, and the finer details of formatting are left to callbacks that are mostly procedural.) For example, how do I format a list so that I get "A" for 1, "A and A" for 2, and "A, A, and A" for more? Or how do I format a number so that the thousand separator appears only if the number is greater than 9999? Or what do I do if I have UTF-8 output, but some strings I need to handle are UTF-16? The existing declarative way did not foresee these cases, and adding them to the current model would complicate it substantially. But if I have a simple writer that writes basically numbers and strings, I can very quickly write procedures for these specific cases.
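The list case really is trivial as a procedure. A sketch (hypothetical helper, TypeScript for consistency with the rest of the thread):

```typescript
// "A" for one item, "A and A" for two, "A, A, and A" for three or more:
// easy as a hand-written procedure, awkward to bake into a format string.
function formatList(items: string[]): string {
  if (items.length === 0) return "";
  if (items.length === 1) return items[0];
  if (items.length === 2) return `${items[0]} and ${items[1]}`;
  // Oxford-comma style for three or more items.
  return `${items.slice(0, -1).join(", ")}, and ${items[items.length - 1]}`;
}

console.log(formatList(["A"])); // "A"
console.log(formatList(["A", "A"])); // "A and A"
console.log(formatList(["A", "A", "A"])); // "A, A, and A"
```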
Instructions are primary by their nature. A piece of data on its own cannot do anything. It always has an implicit instruction that will handle it. So instructions are the things we have to master.
dismalaf 2 days ago [-]
> I personally think it’s crazy how much of the industry tends toward the former.
It's because most people who use technology literally don't care how it works. They have real, physical problems in the real world that need to be solved and they only care if the piece of technology they use gives them the right answer. That's it. Literally everything programmers care about means nothing to the average person. They just want the answer. They might care about performance if they have to click the same button enough times, and maybe care about bugs if it's something that is constantly in their face. But just working is enough...
nine_k 2 days ago [-]
> they only care if the piece of technology they use gives them the right answer.
A poorly typed program often would give a wrong answer, or, in a less dangerous case, crash.
Same for physical parts: you want them to solve some business problem, but often you won't go for the absolute cheapest, because they may work poorly.
dismalaf 2 days ago [-]
I'm thinking more along the lines of how scripting languages are often used in, say, scientific domains (Python, R, etc...). Or how JavaScript and Ruby are more popular than, say, Rust and Haskell for startups.
"Poorly typed" means different things to different people, in the context of this article and thread it would probably mean weakly typed or dynamically typed? Which has nothing to do at all with the correctness of a formula or what output a program will produce.
tyingq 2 days ago [-]
And often, the faster they have some MVP in their hands, the better. Maybe not actually better, but politically better or whatever.
Quality, maintainability, etc. are less important in that moment. And, like many things in companies, short-term desires dominate.
ervine 2 days ago [-]
Why is `any` allowed at all? Enable strict mode, set up your linter, don't allow any implicit or explicit `any` anywhere.
Without this, Typescript is next to useless. Not knowing if the types are good is worse than no types at all.
tshaddox 2 days ago [-]
FYI, TypeScript strict mode does not prevent explicit `any` (only implicit `any`).
For sure, linting is just as necessary as typescript for a sane codebase.
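For reference, the combination being described looks roughly like this (a sketch assuming typescript-eslint, not a complete config):

```jsonc
// tsconfig.json -- "strict" turns on noImplicitAny (among other checks),
// which catches *implicit* any only.
{
  "compilerOptions": {
    "strict": true
  }
}
```

Explicit `any` then needs the linter on top, e.g. enabling the `@typescript-eslint/no-explicit-any` rule in your ESLint config.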
phyrex 2 days ago [-]
Progressive typing of an untyped code base. Types that are too complex to represent in that type system.
ervine 2 days ago [-]
Yep, adopting strict after the fact is a different conversation, but one that has been talked about a bunch and there is even tooling to support progressive adoption.
Types that are too complex... hmmmm - I'm sure this exists in domains other than the bullshit CRUD apps I write. So yeah, I guess I don't know what I don't know here. I've written some pretty crazy types though, not sure what TypeScript is unable to represent.
mikepurvis 2 days ago [-]
Progressive code QA in general is IMO an underexplored space. Thankfully linters have now largely given way to opinionated formatters (xfmt, black, clang-format) but in the olden days I wished there was a way to check in a parallel exemptions file that could be periodically revised downward but would otherwise function as a line in the sand to at least prevent new violations from passing the check.
I'd be interested in similar capabilities for higher-level tools like static analyzers and so on. The point is not to carry violations long term, but to be able to burn down the violations over time in parallel to new development work.
nemetroid 1 days ago [-]
This is how we introduced and work with clang-tidy. We enabled everything we eventually want to have, then individually disabled currently failing checks. Every once in a while, someone fixes an item and removes it from the exclusion list. The list is currently at about half the length we started out with.
prmph 1 days ago [-]
> not sure what TypeScript is unable to represent.
I want a type that represents a string of negative prime numbers alternating with palindromes, separated by commas.
ervine 1 days ago [-]
Oh yeah, you have to get into branded types for this I think, which means a parsing step. Fair point.
sesm 2 days ago [-]
Important note: TS doesn't let you enable strict mode on a per-file basis. Flow allowed that.
t-writescode 2 days ago [-]
* Common functions such as parsing functions in languages that don't support function overloading
* "equals" and other global functions.
shepherdjerred 2 days ago [-]
TypeScript has solutions for both of those problems: conditional types and generics.
You could try to craft your own type to match Google's schema or hunt down 3rd party types, but just doing `(window as any)["__grecaptcha_cfg"]` gets the job done much faster, and it's fairly isolated from the rest of the code so it doesn't matter too much.
swatcoder 2 days ago [-]
You don't have to provide complete types. If you know what you need to access, and what type to expect (you darn well should!), you only have to tell TypeScript about those specific properties and values.
Generally, the conveniences of allowing any are swamped by the mess it accumulates in a typical team.
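A sketch of that approach for the `__grecaptcha_cfg` example upthread -- the property shape here is assumed for illustration, not Google's actual schema:

```typescript
// Type only the slice you actually touch, instead of `(window as any)`.
// GrecaptchaCfg's shape is hypothetical.
interface GrecaptchaCfg {
  clients: Record<string, unknown>;
}

// Taking the host object as a parameter keeps this testable; in the
// browser you'd pass `window` (cast to this narrow shape).
function getClientIds(source: { __grecaptcha_cfg?: GrecaptchaCfg }): string[] {
  const cfg = source.__grecaptcha_cfg;
  return cfg ? Object.keys(cfg.clients) : [];
}

console.log(getClientIds({ __grecaptcha_cfg: { clients: { "0": {} } } })); // ["0"]
```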
digging 2 days ago [-]
Agreed; with 3rd party APIs I type down to the level of the properties I actually need. And when I use a property but don't care what the type is, I use `unknown`. That will throw an error if the type matters, in which case, you can probably figure out what's needed. Although I agree with the article that sometimes fudging the rules is acceptable, it's extremely rare that a 3rd party API is so difficult it's worthwhile. And enforcing `any` as an error means you have to be very intentional in disabling the linter rule for a particular line if that really is the best option.
wesselbindt 2 days ago [-]
When you quarantine third party code with an adapter (which you should probably be doing anyway), you can make your adapter well-typed. This is not hard to do, and it pays dividends.
tom_ 2 days ago [-]
TypeScript has "unknown" for this, forcing you to cast it, possibly to any, every time you use it. A much better type for your variables of unknown type!
ervine 2 days ago [-]
Yeah, those are few and far between, generally there will be a DefinitelyTyped for anything popular, and you start choosing libs that are written in TypeScript over ones that aren't.
But for your own handwritten application code, there is no excuse to use `any`.
neverartful 2 days ago [-]
For large code bases, the team has to pay the piper one way or the other. Pay up front with static typing or pay later with nearly infinite test cases to prove that it all works. To be sure, just because you're using a statically typed language does not mean that the code is bug free. It just means that it should all be correct with respect to types.
Buttons840 2 days ago [-]
That phrase "debug working code" paints a nice picture of the unproductive part of dynamic types.
mystified5016 4 hours ago [-]
This is why I hate duck typing. I have to deconstruct the entire goddamn program to figure out what kind of object is being manipulated.
shepherdjerred 2 days ago [-]
Zod [0] is my favorite TypeScript library. It really helps ensure that all the little nooks and crannies of your application can be properly typed.
An example is receiving an API response with `fetch`. Normally you'd cast the response to the expected type, but it's not uncommon to misunderstand what the API can return, or for the API to be changed/updated. Zod lets you verify your types at runtime, so that if your API returns something unexpected you can act on it.
This is really useful anytime you're interacting with I/O or user input. For example, I've used Zod for: loading from JSON files, reading from local storage, parsing URL params, or validating form input.
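For anyone unfamiliar, here is a hand-rolled sketch of the pattern Zod automates: validating an unknown payload at runtime so the static type is actually earned. The `User` shape is hypothetical, not from the thread:

```typescript
interface User {
  id: number;
  name: string;
}

// Check the payload's shape at runtime; only return it typed as User
// once the checks have passed, otherwise fail loudly.
function parseUser(data: unknown): User {
  if (
    typeof data === "object" &&
    data !== null &&
    typeof (data as Record<string, unknown>).id === "number" &&
    typeof (data as Record<string, unknown>).name === "string"
  ) {
    return data as User;
  }
  throw new Error("payload does not match User");
}

// Typical use with fetch would be: parseUser(await response.json())
console.log(parseUser({ id: 1, name: "Ada" }).name); // "Ada"
```

Zod generates this kind of guard (and far more) from a declared schema, so the checks and the static type can't drift apart.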
I'd also say check out Valibot. It has a nice composable pipe API, so a small handful of atoms is expressive enough for a lot of type needs. It's also more petite and performant.
I've long been a fan of strongly typed languages, but have settled into using dynamic types a bit mostly for ecosystem reasons. Recently I've gotten into embedded programming and initially my thought was that I would have a much easier time getting into it by learning C and C++ simply because those are pretty well standardized on embedded devices. And it has been so fucking painful. The build systems absolutely suck...cryptic, finicky, and archaic. Small variations in developer environments cause so many errors. The package management is basically non-existent. Maybe you use Boost or a couple other libraries, but mostly you avoid them because the package management is terrible and build systems are even worse. You're basically writing everything from scratch. The thing I overlooked the most though was the type systems. C and C++ are statically typed, but weakly so. And therefore, the thought of using the type system for safety guard rails doesn't exist. You specify types so that programs will compile, and that's it.
In comes Rust. I know the memory management is the thing that sells it, but in an embedded project I'm simply not doing any dynamic memory allocation, so I didn't think I'd see much benefit. I mostly tried it because Cargo is an amazing build system. But the thing that has sold me on it is the type system. Within my first 5 minutes of porting a magnetic encoder driver, the type system caught an error in my ported code: I used the wrong pin for my SPI MOSI connection to my driver. It absolutely blows me away that the type system knew I was using the wrong pin. Turns out the code I could never get to work in C was broken because I was referencing the wrong pin, and I never knew why. Fifteen minutes later, it caught another error: by sharing the SPI bus, it could identify that there were more than two devices connected, because there were two CS pins declared, and because the device wasn't exclusive, I had to wrap the bus in a RefCell for memory safety. Absolutely amazing.
I never thought that embedded programming could be fun, and strong types were what changed my mind.
sinuhe69 2 days ago [-]
Yeah, I see the same problem with C and C++ in embedded, too. My guess is that people in embedded were originally working in low-level languages like assembly, where types don't exist (at least that's what I did with the PIC and older AVR microcontrollers). So when higher-level languages such as C were introduced, they were used in an assembly-like fashion and not truly in the spirit of high-level languages. For example, macros are still extensively used for defining a lot of things, including IO pins, and they undergo no type checking. C is not a strongly typed language anyway, and this mode of thinking and programming continued to spill over when C++ was introduced. Type is used merely to limit memory allocation and ensure memory alignment, not truly in its conceptual sense.
Of course, the resource constraints and the efficiency of the compiled code play a significant role here. So unless a new memory-efficient, strictly typed language/compiler comes along with a modern mindset, the power of types will not come into play.
moshegramovsky 2 days ago [-]
C++ guy here. I love being able to make a change and watch the compiler tell me what's broken. In the past year, I was able to justify large scale changes to a big codebase because I could say with confidence that the type system would reveal all. And it did.
Measter 2 days ago [-]
Yeah same, though with Rust. I once did a refactor of my compiler project where I completely rewrote how the AST was represented, then spent the next three or four hours fixing compiler errors.
Worked perfectly the first time, because the type system allowed the compiler to tell me everything that was broken.
hansvm 2 days ago [-]
I can't look at a conversation about types without thinking of hexing the technical interview [0]. It's not quite the sort of thing TFA is talking about, but how does everyone here feel about sneaking parsers and other code that normally exists at runtime or in the build system into the type system instead?
It's always fun getting to see people experience the positives of type systems. So many of the most popular and "easy" / "user-friendly" languages drop types in favor of friendliness and speed. The most vocally popular web languages - Python, Ruby and Javascript - all seem to either ignore or not have types at all.
People make huge projects in them and start learning new techniques and then the weight of the choices they made begin to grow.
Enter: Types, a frequent savior. Not always the best choice for everyone, but a very good and useful thing.
I welcome this person on their journey!
stavros 2 days ago [-]
> The most vocally popular web languages - Python, Ruby and Javascript - all seem to either ignore or not have types at all.
All three of those languages have ways to use typing, so this statement is only true in the sense of types not being mandatory, which is also the case in any language that has the Any type.
pavel_lishin 2 days ago [-]
But Python, Ruby and Javascript effectively encourage you to skip using types out of the box. (Elixir, too - it supports @spec, but doesn't require it.)
If you want to skip out on types in Typescript, you have to be explicit about it.
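A tiny sketch of what "being explicit about it" means in practice (the function names here are made up for illustration):

```typescript
// Opting out is explicit: you must write `any` to silence the checker.
function firstCharLoose(value: any): string {
  return value.charAt(0); // compiles even if `value` is a number at runtime
}

// The typed version rejects `firstChar(42)` at compile time.
function firstChar(value: string): string {
  return value.charAt(0);
}

console.log(firstChar("typescript")); // "t"
```

In plain JavaScript, the loose version is the only option; in TypeScript it becomes a visible, greppable choice.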
axelthegerman 2 days ago [-]
And:
> How types make easy problems hard
Sure, that's sometimes an issue with the type system itself (looking at you, Sorbet), but types can also prevent the programmer from taking advantage of the flexibility, expressiveness and elegance the language itself may add (yes, Ruby).
But even Typescript, which is arguably one of the better type system/language pairings out there, often causes more headache than it's worth.
umvi 2 days ago [-]
I've seen some truly insane TS types. Ones that validate SQL queries and stuff.
The problem with complex TS types is that there's no debug tooling. You can't set a breakpoint, you can't step through the type propagation. If there's an issue you have to just do trial and error, which can be very tedious.
One such bug was so hard to fix that I found it easier to just rewrite the entirety of the library, using code generation instead of ultra-complex TS types to accomplish the same outcome.
incrudible 2 days ago [-]
If you see programming primarily as a creative outlet, maybe Typescript is not for you. Otherwise, I can vouch for the techniques described in the article, they really keep the code manageable and understandable. They guide you towards working on specific things rather than premature generalizations, but if the specifics change (as your understanding changes), they will also help you change the code without fear. The escape hatch (any) is always there if you need it.
motoboi 2 days ago [-]
It’s funny and sad to have people talking about types while I have Java and an IDE that basically writes code itself.
JavaScript was a bad trip, guys.
langsoul-com 2 days ago [-]
One thing the author neglected to mention is just how time-consuming making everything typed is.
Sure, primitive types are fine, but anything more and it's a massive pain.
Actually, types are best when someone else already did all that work.
pkoird 2 days ago [-]
Looks like a job for LLMs?
TZubiri 2 days ago [-]
Looking forward to the response article on how types can make easy problems hard.
shepherdjerred 2 days ago [-]
I don’t think it takes much creativity to write such an article. Type systems can be frustrating to work with and throw arcane errors that take a lot of experience to decipher.
The real question IMO is are the benefits worth the cost.
TZubiri 1 day ago [-]
I think the article is too easy. I would like to see actual experiments.
Group A writes a couple of tasks without typing. Group B writes it with typing. Compare development times, execution times, quality, security, etc...
moshegramovsky 2 days ago [-]
In C++ you can make all kinds of easy problems much harder with C-style casts or static_cast.
neverartful 2 days ago [-]
True, but C and C++ are not the only statically typed languages available (thankfully!).
digging 2 days ago [-]
Honestly, yes, I'm curious to hear that perspective. The negative responses to "TypeScript makes JS programming fun and easy" are always pretty ill-formed, and I really want to know if there's a genuine argument against it in any complex application. (My suspicion is that no, there is not, but I'm trying to be generous and curious.)
shepherdjerred 2 days ago [-]
The biggest con is that you have to do all of the legwork of learning how static typing works, and types in TS can be fairly complex.
When you have a team of engineers, this means your entire team needs to either learn or lean on an expert when tougher situations arise.
turbojet1321 2 days ago [-]
The thing is, you have to implicitly understand the "types" of javascript objects anyway, otherwise you can't use them.
All the type system does is make that implicit knowledge explicit, and along the way, stops you from doing things that are likely to cause issues.
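A minimal sketch of that point (the names here are invented for the example): the JS caller already has to know this shape to use the function at all, so the interface only writes the knowledge down.

```typescript
// The implicit contract every JS caller already carries in their head:
interface RetryConfig {
  url: string;
  attempts: number;
}

function describeRetry(config: RetryConfig): string {
  return `${config.url} x${config.attempts}`;
}

// Misspelling `attempts` or passing a string count is now a compile error
// instead of a runtime surprise.
console.log(describeRetry({ url: "https://example.com", attempts: 3 }));
```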
shepherdjerred 1 day ago [-]
Yes, but properly typing your JS objects can still be quite hard
akdor1154 2 days ago [-]
Less charitably, it means you need a competent team?
shepherdjerred 2 days ago [-]
Competent in types, yes. Just like you’d want a team competent in functional programming before starting a project in Haskell.
It would be unfair to consider your team incompetent just because they are experts with another set of tools. It’s also unreasonable to expect these things to be quickly learned (TypeScript types are not friendly). But I think it’s reasonable to explain the benefits of this approach and to help your ramp up and learn the skill.
But, anyway, I understand the frustration. I’m usually the one trying to get my team to understand the value of modeling problems in type systems.
tubthumper8 2 days ago [-]
If complex situations arise, they can slap `any` on it, at least it would be explicit, and marker to revisit in the future.
Is there really that much legwork otherwise? Adding ": string" to a function parameter assumes they know what a string is (which should already be the case), adding an object type assumes knowing what an object is, etc.
shepherdjerred 2 days ago [-]
There is a big difference between typing your application (e.g. changing (arg) => {} to (arg: string): void => {}) and modeling your application in the type system.
Simply adding types is usually not too difficult and it is still quite beneficial. It does eliminate certain kinds of bugs.
Modeling your application in a type system means making invalid states unrepresentable and being as precise as possible. This is a lot more work, but again it eliminates more kinds of bugs that can occur.
An example of this being complex: earlier this week I wrote a generic React component that allows users to define which columns of a table are sortable. I wanted to prevent any invalid configurations from being passed in. This is what it looks like: https://tinyurl.com/bdh6xbp6
It's a bit complex but the compiler can guarantee that you're using the component correctly. This is more important and useful when it comes to business logic.
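A common illustration of "making invalid states unrepresentable" (this is a generic sketch, not the linked component): a discriminated union makes it impossible to construct a state where, say, both `data` and `error` are set.

```typescript
// Each variant carries only the fields that are valid for it.
type FetchState<T> =
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: string };

function render(state: FetchState<string>): string {
  switch (state.status) {
    case "loading":
      return "loading";
    case "success":
      return state.data; // `data` only exists on the success branch
    case "error":
      return `failed: ${state.error}`;
  }
}

console.log(render({ status: "success", data: "hello" })); // "hello"
```

A bag of optional fields (`data?`, `error?`, `loading?`) would type-check the nonsense states too; the union rules them out at compile time.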
digging 1 day ago [-]
Yes, but, I'd argue it's trading one type of endless, tedious work for a different type of concrete, meaningful work.
shepherdjerred 1 day ago [-]
I agree! It’s still not an easy sell though.
crm9125 2 days ago [-]
I'm not trained as a programmer/software engineer, but this was ChatGPT's response:
1. Added Boilerplate and Ceremony:
Simple tasks may require extra type declarations and structures, adding “ceremony” that feels unnecessary for quick one-off solutions.
2. Rigid Type Constraints:
Combining different data types or working with unclear data shapes can force complex type solutions, even for simple logic, due to strict compilation rules.
3. Complex Type Definitions for Simple Data:
Handling semi-structured data (like JSON) requires elaborate type definitions and parsing, where dynamically typed languages let you manipulate data directly.
4. Refactoring Overhead:
Small changes in data types can cause widespread refactoring, turning minor edits into larger efforts compared to flexible, dynamically typed environments.
5. Complexity of Advanced Type Systems:
Powerful type features can overwhelm trivial tasks, making a few lines of code in a dynamic language balloon into complex type arguments and compiler hints.
JoeAltmaier 2 days ago [-]
All of those come down to "Let the compiler guess about my data, and it may produce correct results in some of the cases."
A risk is, unexpected data (empty field instead of zero; a real number introduced in untested corner cases where only an integer will actually work etc) can cause issues after deployment.
Those 'complex' requirements mean, if you want a reliably correct program well then you'll have to put in this much work. But go ahead, that 'trivial task' may become something less trivial when your task fails during Christmas sales season or whatever.
shreddit 2 days ago [-]
I also recently switched from JavaScript to TypeScript and noticed a clear improvement in my speed writing code. Before, I had to constantly switch between files to check what exactly I passed to a function.
But I knew C# before, so a typed language is nothing new for me.
When I started with JavaScript, exactly this “untyped” quality felt like something good; it felt like much less of a burden to think about the code beforehand.
Now I look at dozens of projects which have to be converted to TypeScript, because I simply cannot deal with this typelessness anymore…
yawaramin 2 days ago [-]
Fun fact: TypeScript can typecheck JSON files directly. I use this ability to define all my translation keys in my code as a type and enforce that my translated messages JSON files all have the correct keys.
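A rough sketch of the idea (the key names and `Messages` type are illustrative, not from the comment). With `resolveJsonModule` enabled in tsconfig, an imported JSON file gets a real type; the same constraint is shown here inline with a `Record`:

```typescript
// The set of keys the code is allowed to use:
type MessageKey = "greeting" | "farewell";
type Messages = Record<MessageKey, string>;

// Fails to compile if a key is missing, extra, or misspelled:
const en: Messages = { greeting: "Hello", farewell: "Goodbye" };
const de: Messages = { greeting: "Hallo", farewell: "Tschüss" };

console.log(en.greeting, de.greeting);
```

Each translation file then either matches the key set or fails the build, rather than failing at render time.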
smj-edison 2 days ago [-]
I see proponents of type systems often mention that types make refactoring easier and safer, but wasn't the first refactoring browser made for and in Smalltalk? How did Smalltalk maintain the types when performing large changes?
E.g.: “In a reflective environment such as Smalltalk, any change to the system can be detected by the system. Therefore, it is possible to write programs that depend on the objects being a particular size, or that call methods by getting a string from the user and calling perform: with it. Therefore, it is impossible to have totally correct, nontrivial refactorings. However, the refactorings in our system handle most Smalltalk programs, but if a system uses reflective techniques, the refactorings will be incorrect.”
“To correctly rename a method, all calls to that method must be renamed. This is difficult in an environment that uses polymorphism to the extent that Smalltalk does. Smalltalk also allows dynamically created messages to be sent via the perform: message. If an application uses this approach, any automatic renaming process has the potential of failure. Under these conditions, guaranteeing the safety of a rename is impossible.”
“The Refactoring Browser uses method wrappers to collect runtime information. […] Whenever a call to the old method is detected, the method wrapper suspends execution of the program, goes up the call stack to the sender and changes the source code to refer to the new, renamed method. Therefore, as the program is exercised, it converges towards a correctly refactored program. […] The major drawback to this style of refactoring is that the analysis is only as good as your test suite. If there are pieces of code that are not executed, they will never be analyzed, and the refactoring will not be completed for that particular section of code.”
smj-edison 2 days ago [-]
Interesting, thanks for the link and quotes! I've been trying to find more information on Smalltalk in the 90s, with XP, GoF, and other interesting developments—I wasn't alive when all that was happening and it's been fun to rediscover.
"A very large Smalltalk application was developed at Cargill to support the operation of grain elevators and the associated commodity trading activities. The Smalltalk client application has 385 windows and over 5,000 classes. About 2,000 classes in this application interacted with an early (circa 1993) data access framework. The framework dynamically performed a mapping of object attributes to data table columns.
Analysis showed that although dynamic look up consumed 40% of the client execution time, it was unnecessary.
A new data layer interface was developed that required the business class to provide the object attribute to column mapping in an explicitly coded method. Testing showed that this interface was orders of magnitude faster. The issue was how to change the 2,100 business class users of the data layer.
A large application under development cannot freeze code while a transformation of an interface is constructed and tested. We had to construct and test the transformations in a parallel branch of the code repository from the main development stream. When the transformation was fully tested, then it was applied to the main code stream in a single operation.
Less than 35 bugs were found in the 17,100 changes. All of the bugs were quickly resolved in a three-week period.
If the changes were done manually we estimate that it would have taken 8,500 hours, compared with 235 hours to develop the transformation rules.
The task was completed in 3% of the expected time by using Rewrite Rules. This is an improvement by a factor of 36."
from “Transformation of an application data layer” Will Loew-Blosser OOPSLA 2002
> Changing our database schema should cause us to see errors in our frontend code.
I shuddered.
shepherdjerred 2 days ago [-]
Why is that bad?
Kuraj 2 days ago [-]
This seems like a good thing if front-end fragmentation is not an issue, ie. if it's hosted on a server and kept in sync with the backend through deployment.
As a mobile app? Maybe not.
I'm honestly a bit puzzled which scenario the original article is envisioning when arguing this, because it mentions mobile, but then also argues for monorepos, which are kind of at odds with each other, unless you somehow force your mobile users to always be using the version that matches your back-end.
shepherdjerred 2 days ago [-]
Oh, I see. Yeah, you'd definitely need to be very careful in this case.
To be fair though, this is a general problem when clients and servers might not agree on data formats. You can still safely do what the author is describing since type checking occurs at compile-time and not runtime.
You would, of course, need to be sure your app handles whatever the API/db returns at runtime though. But, again, this is a general problem.
If that's true, then how is a type like this (non-empty array) useful at all, if it can't be relied on at runtime?
TS should say, you can't pop() this array, unless TS can infer it has >1 elements. Otherwise it can enter a state at runtime which doesn't conform to the type. That seems bad!
bruce343434 2 days ago [-]
Could also be that pop() can now error out at runtime
tantalor 1 day ago [-]
Interestingly, [].pop() does not throw an error.
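The gap both commenters are pointing at can be shown directly: TypeScript's types are erased before the program runs, so nothing enforces the "non-empty" promise at runtime.

```typescript
type NonEmptyArray<T> = [T, ...T[]];

const xs: NonEmptyArray<number> = [1, 2, 3];
xs.pop();
xs.pop();
xs.pop(); // still type-checks; the runtime value no longer matches its type

console.log(xs.length);  // 0
console.log([].pop());   // undefined — no error thrown, as noted above
```

This is why the "parse at the boundary" style pairs the type with a runtime check at construction time: the type is only as trustworthy as the code that produced the value.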
throwaway2037 2 days ago [-]
> type NonEmptyArray<T> = [T, ...T[]];
I never used TypeScript before, but this looks very useful. Is that possible in C++ templates or Java generics or C# generics?
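Whether the same guarantee fits into Java or C# generics is worth checking separately; in TypeScript, the tuple-with-rest syntax reads as "one T, then zero or more Ts", and the payoff is functions that need no undefined checks (a minimal sketch):

```typescript
type NonEmptyArray<T> = [T, ...T[]];

// No runtime guard needed: the type guarantees index 0 exists.
function head<T>(xs: NonEmptyArray<T>): T {
  return xs[0];
}

head([1, 2, 3]);   // ok
// head([]);       // compile error: source has 0 elements but target requires 1

console.log(head(["a", "b"])); // "a"
```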
cjr 2 days ago [-]
nice post :)
I’m surprised there’s not been any mention of effect (http://effect.website/) yet, as it is kind of the next level up if you really want to model things like errors, dependencies and side effects in the type system, using functional concepts borrowed from more pure functional languages.
It would be a bit of a risk adopting this into a shared code base depending on your team and the kinds of devs you’re looking to hire, but it could be useful to some folk that feel like they want even more type safety.
tinthedev 2 days ago [-]
Not to devalue the author, or their findings/learnings... but I could see this was a JavaScript/Typescript coder (very likely self-taught) learning about typing paradigms.
A lot of people, especially people from the same background, would benefit from branching out and learning some other programming languages/paradigms.
Some... statically typed and actually compiled languages. Maybe to take an entry course in CS.
It's very popular to hate on formal education, especially in software, but all these lessons would have been learned in the first semester or two.
tlf 2 days ago [-]
That's interesting to hear. I started out with a formal CS education learning Java & C in school. I've found that traditional CS education doesn't really take this approach. A lot of what I was exposed to was very OOP-heavy practices that emphasized data modeling via class hierarchy. To me, the expressiveness of the Typescript system (being able to do things like sum types or branded types) is what unlocked a lot of potential despite not being a compiled language.
tpoacher 2 days ago [-]
Your experience of Java likely relates to Java 8, or possibly even prior versions.
Modern Java is an entirely different beast, which feels very functional these days.
(and yes, I say this as someone who teaches these things as part of an undergrad java course)
TexanFeller 2 days ago [-]
After using languages like Scala, Java (I use 17) feels like a joke in terms of type system expressiveness and ability to use functional patterns. I currently have to switch between the two, and the kindest thing I can say about Java is that it’s getting better (very slowly).
Even though the language is getting less painful, frameworks like Spring that do things at runtime instead of compile time, including rewriting bytecode on startup to inject code, make the ecosystem quite hostile to folks who want to work in a stricter, safer manner that’s easier to reason about (expressed in the language, not in some annotation-based metalanguage with no principles and whose implementation changes randomly).
We need to stop defending Java and move on to something actually modern and good. Scala has fallen from favor, so maybe Rust is the next thing I’ll try.
Groxx 2 days ago [-]
A lot of the runtime fiddling is indeed a plague (the limited reflection is one of my favorite parts of Go, it means I can trust function call boundaries FAR more), but Java does do some nice things. E.g. I wish every language had as powerful of a compile time system as Java does - annotation processors and compile-time byte-code weaving enable magic "best of all worlds" stuff like Lombok, and it integrates with IDEs transparently. And hprof -> MAT is absolutely incredible compared to the memory-profiling capabilities of most languages.
TexanFeller 1 day ago [-]
The debugging and profiling features are definitely better than most, but other languages running on the JVM benefit from that too.
I think most of what people use Lombok for though are features that should be part of the core language by now, or would be better as library methods instead of annotations. Like generating constructors, equals, and hashCode methods - case classes and data classes in Scala and Kotlin respectively handled that within the language spec many years ago. I need to try Java’s new Records, perhaps they handle that stuff now. Lombok and friends also include features that change language semantics like @SneakyThrows.
Byte code injection sometimes also changes language semantics. Early in my career I spent a few hours perplexed by why my code was encountering null when the code path I was examining used only non-nullable primitives. Turned out injection and rewriting had turned my primitive long into a nullable Long. I don’t like not being able to understand my code from just reading the code. The magic means I have to be aware of spooky action at a distance mechanisms and review their documentation. I also need to open the debugger more regularly to inspect what’s actually happening at runtime instead of just mentally compiling my code.
ninetyninenine 2 days ago [-]
The irony is that before types became popular with interpreted languages and modern languages like golang or rust, untyped languages became MORE popular because of formal education.
The reason why is that most formal education curriculums teach C++, which ironically is more error prone and contains error conditions far harder to debug than untyped interpreted languages like Python or JavaScript or Ruby, which were coming into popularity at the time. This is of course despite the fact that C++ has a type system with generics.
Because of this, a lot of people tended to associate typing with something more error prone and harder to work with. It wasn’t until the advent of typescript, golang and rust when people started to get the difference.
neverartful 2 days ago [-]
Semi-pedantic point -- Python is not untyped. In fact, it's strongly typed. It's not statically typed, but rather dynamically typed.
ninetyninenine 1 day ago [-]
Is it? Then why can’t parameters be checked by default when called in a function? Anything passes through with zero runtime checks. Any type checks, you need to implement yourself.
neverartful 1 day ago [-]
Because Python isn't a statically typed language. Many will argue that this is a huge benefit of Python, since you don't have to declare types. There are newer developments like mypy that allow you to add types as annotations, but the data types that you declare with the annotations are not enforced.
ninetyninenine 20 hours ago [-]
Right and I asked you a question and it wasn’t answered. If it’s dynamically typed how come I don’t get type checking at runtime for functions I defined?
motorest 2 days ago [-]
> A lot of people, especially people from the same background, would benefit from branching out and learning some other programming languages/paradigms.
I completely agree. I started reading the article expecting to read something interesting or smart about functional programming, but it turns out the blogger is just very vocal at telling the world their excitement over reinventing the wheel, completely oblivious to what are very basic topics in any intro to software engineering course.
shepherdjerred 2 days ago [-]
What? I have never heard of a school teaching the importance of static typing esp when it comes to engineering practices
jfwuasdfs 2 days ago [-]
Very true. You need to go to a school that specializes in type theory [1].
> What? I have never heard of a school teaching the importance of static typing esp when it comes to engineering practices
The blogger is not really talking about static typing. The blogger is waxing lyrical over designing a domain model and then writing an application around it. You know, what others call basic software architecture.
Wait until the blogger learns of the existence of Domain-Driven design.
tugu77 2 days ago [-]
100% this. For a C++ or Rust programmer this reads so weird.
Don't get me wrong, I'm not hating on JS here, and I have lots of beef with C++, but I fully agree with your take that TS barely scratches the surface of the statically typed world.
shepherdjerred 2 days ago [-]
Typescript has one of the most advanced type systems of any of the common languages
chamomeal 2 days ago [-]
Nawww I’d say something like python or php is “scratching the surface”. Typescript’s type system is phenomenal and pretty deep.
Honestly I think it’s the most interesting one to work with, too. Which is not always a good thing, but it is fun.
The only type systems I’ve seen that are similarly expressive are Rust’s and Haskell’s. Even Go doesn’t come anywhere close.
FractalHQ 2 days ago [-]
This isn’t true, is it? I’ve only ever heard that TypeScript has one of the most advanced type systems of any mainstream language.. but I don’t have enough experience with other languages to know how true that is.
2 days ago [-]
throwuxiytayq 2 days ago [-]
Based on my small amount of work done in TS, it seemed like one of the more advanced type systems out there. To the detriment, even. The language was just huge, and that was years ago.
eyelidlessness 2 days ago [-]
And it’s so advanced because it was/is designed to represent the types of real world dynamic JavaScript. More often than not, when people complain about the complexity of the types they encounter in the TS type system, they’re really complaining about the types of the underlying JS (which are the same whether they’re expressed statically or not).
wk_end 2 days ago [-]
There's a cultural problem in the TypeScript ecosystem, I find, where people are impressed (with both themselves and others) when complex interfaces can be expressed in the type system, and tend to embrace that instead of settling for simpler (and often admittedly more verbose) ones. Maybe that's because they're an ex-JS programmer who wants to use the exact same interface they'd use in JS with no compromise, or maybe it's just because they think it's cool. Either way I think it's really detrimental to TypeScript as a whole.
gejose 2 days ago [-]
> Maybe that's because they're an ex-JS programmer who wants to use the exact same interface they'd use in JS with no compromise, or maybe it's just because they think it's cool
That sounds a little reductive and gate-keepy. Maybe an advanced type system allowing for complex types to be expressed easily actually allows you to write simpler, more effective code.
Curious if you have any specific examples though.
TheHegemon 2 days ago [-]
Do you have some examples for that?
Most cases I've seen with more complex interfaces is due to the fact that it is what the interface truly expects. Usually making it simpler tends to mean it's actually wrong or incomplete.
wk_end 2 days ago [-]
This is hand-wavey, but that can't be true: less complex type systems manage to express all kinds of interfaces correctly all the time (sometimes at the cost of verbosity, but the point is that that’s usually a good trade-off).
You're asking me to tell on my coworkers, and I'm too loyal to throw them under the bus :)
Well, OK, here's one, but I'll keep it as blameless as possible. We had a thing where we wanted to register some event handlers. The primary use of these event handlers was to run a selector, and if the selected data changed, trigger an update, passing the selected data along. The initial implementation used existential types to store a list of callbacks, each returning different selected data. The "driver" then did the equality checking and update triggering. We later changed this, so that the callbacks - as far as the driver was concerned - all returned `void`, eliminating the need for an existential type. We just had to move the equality checking and update triggering to inside the callbacks.
Some features are straightforward translations: anywhere you have overloading and/or optional arguments you can (and often should) simplify by refactoring into multiple functions.
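A small, invented example of that refactoring (the `format*` names are ours): the overloaded signature does double duty, and splitting it yields two functions whose types need no explanation.

```typescript
// Before: one function, two overload signatures, conditional logic inside.
function format(value: number, currency: string): string;
function format(value: number): string;
function format(value: number, currency?: string): string {
  return currency ? `${value.toFixed(2)} ${currency}` : value.toFixed(2);
}

// After: two plainly-typed functions, slightly more verbose at the call site.
function formatNumber(value: number): string {
  return value.toFixed(2);
}
function formatCurrency(value: number, currency: string): string {
  return `${formatNumber(value)} ${currency}`;
}

console.log(formatCurrency(9.5, "USD")); // "9.50 USD"
```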
For a concrete, public example... well, I remember the Uppy library had a lot of stuff like this. A lot of work goes into making its "Plugin" interface look the way it does (start at [1] and keep reading, I guess), and while I haven't sat down and re-engineered it, I don't think it needs to be this way, if you're willing to give up some of the slickness of the interface.
I think there’s a difference between ideal library code and ideal business logic code.
The more you lean into crazy ass generics in your library, the simpler and more error-free the user can make their biz logic code. Really nicely typed libraries almost give you zero chances to fuck things up, it’s amazing.
But then again most of your devs wont be able to understand all those generics, so you need to keep your biz logic types relatively simple.
shepherdjerred 2 days ago [-]
Adding static typing only shows you how bad your code already is
TheTaytay 2 days ago [-]
I didn't get the impression that this was a self-taught or newbie coder. I think their audience is not assumed to be a CS grad though.
I found it a good and well-reasoned explanation of _why_ he enjoys types in a large codebase. He does take time to explain different type concepts, but I assumed that was because he doesn't assume his audience is familiar with all of them. Considering that the opinion "types are good and helpful in a codebase" is not universally held, even by very experienced/productive coders (see https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701... or basically any ruby codebase), I think articles like this have a definite place.
2 days ago [-]
throwuxiytayq 2 days ago [-]
In my experience, university is one of the least efficient ways to learn CS. The actually useful classes are few and far between, dwarfed by useless outdated courses, courses that aren’t very relevant to the job, and classes that are sadly led by incompetent burnouts who don’t know what they’re teaching, come terribly unprepared, and in general seem to hate their job. Most of the people there have theoretical experience in writing software. But maybe that’s just my shitty university. I dunno. Supposedly one of the better ones.
karaterobot 2 days ago [-]
On the other hand, as an English Lit major who taught himself programming from zero, and worked as a programmer for over a decade, all my experience writing software was practical, and I don't think that's the right way to go either. I wish I'd had any level of theoretical education that might have exposed me to fundamental concepts you (with yer fancy book-learnin') probably take for granted. If someone just learns on the job, or just learns as they go, they don't learn stuff until they need to. They learn it in a hurry, and on a deadline. That's not the best way to get a firm handle on tricky subjects, and maybe as a consequence, I always felt a couple steps behind my peers.
digging 2 days ago [-]
As someone in a similar position, may I take a tangent? I'm curious what you transitioned into out of programming. The stress of feeling "always behind" is taking its toll on me, and I wonder about another career change often.
karaterobot 2 days ago [-]
I joined the software industry at a small consultancy that needed me to do a lot of different things, including both programming and design. So I got experience doing both of those. When I left the consultancy world in 2016, I had to decide whether to sell myself to employers as either a programmer or a designer—normal companies want you to pick a single lane—so I just focused on my design experience, and started doing that as a day job. I went from a fancy title to a much less fancy title for my first job as a designer, but more or less worked back up from there. I think for most programmers, their fork in the road would be to stay as an individual contributor or become a manager, but I don't want to be a manager, and was lucky to have a different path to fall back on.
CaptainNegative 2 days ago [-]
I think the problem is in going for a Computer Science degree when you really meant to study Software Engineering.
n4r9 2 days ago [-]
I imagine quite a bit of a Computer Science degree is relevant if you plan to be a computer scientist.
goatlover 2 days ago [-]
A computer science degree is for the science of computing, not whatever is the latest in the workplace. You learn that on the job or a boot camp. Computer science is much more than the current popular framework and tools. It's the principles for how software works.
throwuxiytayq 2 days ago [-]
Yes, I wish I finished my degree having learned literally any of that.
kazinator 2 days ago [-]
If the code writes itself due to type declarations, it must be mindless drivel, not something containing "hard problems".
For instance, if we just declare some data structures for computational geometry, like points, line segments and whatnot, code for, say, intersecting two meshes is not effing going to write itself!
The author is living in some CRUD world of pulling things from one API or database, converting to a different data model, and stuffing them into another API, with maybe some HTML generation sprinkled on top.
revskill 2 days ago [-]
Structural typing is the key here.
2 days ago [-]
Barrin92 2 days ago [-]
>Using the same language everywhere. Naturally, if we want to share type information as much as possible, we need to be using the same language
This goes to the heart of what's not great about this, types impose global semantics on a piece of software, they introduce coupling. (It's why Alan Kay used to stress "late binding of all things") as a feature of managing complexity.
In fact one result of this kind of programming were microservices. What do they do? Reintroduce runtime dynamism. It's not often framed that way but there's a reason you see more statically typed microservices than Lisp or Erlang ones. It's because they're an attempt to get away from the coupling imposed by type driven programming and towards more independence of each service. Which is already baked into message based, dynamic languages.
And there's also a fundamental misunderstanding about data and types in the article.
> Making our types represent the “truth”
Types can't represent truth. Real world data doesn't have types. It changes incrementally however it wants, and all the time. You can use types to not let something you don't want into your program, but you can never represent arbitrary real world data by matching types onto them.
tikhonj 2 days ago [-]
> Types can't represent truth. Real world data doesn't have types. It changes incrementally however it wants, and all the time. You can use types to not let something you don't want into your program, but you can never represent arbitrary real world data by matching types onto them.
That's true for static types... but it's just as true for the constructs in your dynamically typed code! Nothing about dynamic typing makes your logic or data representation any more adaptable, it just makes the rigid models inherent to your code implicit rather than explicit.
pavel_lishin 2 days ago [-]
> Types can't represent truth. Real world data doesn't have types. It changes incrementally however it wants, and all the time.
That's like saying "Maps can't represent truth". They're a model that works well enough, just like types do, if you do it right.
Garlef 2 days ago [-]
> you can never represent arbitrary real world data by matching types onto them.
I think that's why the article references the 'parse, don't validate' article: Real world data is messy and so you ingest it into your business logic at the system boundaries via parsing.
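A minimal sketch of that "parse, don't validate" idea in TypeScript, using a hypothetical `User` shape (the shape and function names are illustrative, not from the article): messy input is converted into a trusted type exactly once, at the system boundary, and the rest of the program never re-checks it.

```typescript
// Hypothetical domain type; invariants are established once, at the boundary.
type User = { name: string; age: number };

// Parse untrusted input into a User, or throw. After this, the compiler
// guarantees every User in the program satisfies these checks.
function parseUser(input: unknown): User {
  if (typeof input !== "object" || input === null) throw new Error("not an object");
  const record = input as Record<string, unknown>;
  if (typeof record.name !== "string") throw new Error("bad name");
  if (typeof record.age !== "number" || !Number.isInteger(record.age)) throw new Error("bad age");
  return { name: record.name, age: record.age };
}

const user = parseUser(JSON.parse('{"name":"Ada","age":36}'));
console.log(user.name); // Ada
```

Business logic downstream takes a `User`, not an `unknown`, so the messiness of real-world data stays confined to the boundary.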
esafak 2 days ago [-]
> Types can't represent truth. Real world data doesn't have types. It changes incrementally however it wants, and all the time.
What do you mean? If a variable is a date/time or an integer etc. in the real world it should stay that way and be represented as such in software.
leeeeeepw 2 days ago [-]
AI chatbot example: you're chatting, and then there could be some Markdown, LaTeX, images, maybe base64-encoded WebP, audio files. We can type all this stuff, do all the validations and such, and understand it better, and there are some gains to be had doing that. Then likely someone like Facebook comes along with a giant byte-transformer expert system with all of the data-specific optimisations; that's kind of the bitter lesson. But I guess the main point is that the problem itself of how to best communicate with AI is not necessarily solved, so you can get bogged down typing the best possible ways to do it, but it's a moving target.
Same with a lot of systems, like, say, a search system that tracks data using the best embedding: we don't know what that best embedding is in terms of price, performance, and encoded knowledge; it's just a moving target.
Or take the system that tracks important metrics affecting the stock market, or a weather system, etc. The sensors are all updating, and then an entirely new thing comes along, like Starlink, that helps us track the weather in totally new ways.
msanlop 2 days ago [-]
> get bogged down typing the best possible ways to do it but it's a moving target
As someone who is still learning, this is a huge reason why I've come to love dynamic languages. Any project I do involves a lot of rewriting as the code evolves; I've found that trying to predict ahead of time what the structures and types will be is mostly a waste of time.
The best middle ground for me so far has been using python with type hints. It allows for quick iteration, I can experiment and only then update the types to match what I have, so that I can still get LSP help and all that.
But I could see this being less relevant with more experience
ervine 2 days ago [-]
Not if your types represent something you don't control, like an API response.
DangitBobby 2 days ago [-]
Having runtime type validation utilities such as Typebox help here. Even in the absence of type guarantees at I/O boundaries (where you have to just "lie" about the type you really have, which would be `unknown` if we were honest), I'd certainly rather my code at least specify what types it expects from the server than attempt to access arbitrary attributes in various places.
ervine 1 day ago [-]
Oh yeah definitely, I write the types that I expect from the API - the point the original comment is making is that the state of reality is not your types, it's what the actual API returns.
But yeah if you use something like Zod you can at least say "this is what I'm pretty sure the API should return" but also define what should happen if things change / don't meet your types.
turbojet1321 2 days ago [-]
The quote from the article is incorrect, though. We have a C# backend and a TS front end, with the shared types generated for TS using nswag. I'm sure there are many other variations on this, too.
akira2501 2 days ago [-]
> The default return type for posthog.getFeatureFlag is string | boolean | undefined.
You could just as well say "how bad APIs make simple problems complicated and how you might strain at a type system to pretend this bad design is worth keeping."
I mean, "string or boolean or undefined," is not at all a "type." This is a poorly specified contract for an overloaded interface with terrible semantics built on the abuse of language grammar.
It's why I think a language with the core semantics of JavaScript plus a goofy type system are never going to produce anything worth actually having. The two sides of the system are constantly at odds with each other. People are mostly using the type system to paper over bad design semantics.
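Whatever one thinks of the design, TypeScript does at least force callers to confront that union before using the value. A sketch of narrowing a `string | boolean | undefined` return type, with `getFlag` as a hypothetical stand-in for the quoted `posthog.getFeatureFlag` (not the real client API):

```typescript
type FlagValue = string | boolean | undefined;

// Hypothetical stub: a real feature-flag client would look this up remotely.
function getFlag(name: string): FlagValue {
  const flags: Record<string, string | boolean> = { "new-ui": true, "variant": "b" };
  return name in flags ? flags[name] : undefined;
}

// The compiler rejects any use of the value until every union member is handled.
function isEnabled(name: string): boolean {
  const value = getFlag(name);
  if (value === undefined) return false;       // flag not found
  if (typeof value === "boolean") return value; // simple on/off flag
  return value !== "control";                   // multivariate: non-control counts as on
}

console.log(isEnabled("new-ui"));  // true
console.log(isEnabled("missing")); // false
```

The narrowing boilerplate is real, which is arguably the parent's point: the type system is being used to paper over an overloaded return contract.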
wryoak 2 days ago [-]
The first point lost me because it pushed responsibility for functioning software onto the user. If your entire application ecosystem has to be recompiled because you conceived of a new clever way to conceptualize the data that doesn't actually affect the user experience, what happens is I have to update my damn bank app every time you think you've done something smart, even though you've changed nothing in regards to my checking my balance.
I have multiple products in production at $JOB that have survived nearly a decade without requiring active maintenance other than updating dependencies for vulnerabilities. They have had a new version deployed maybe 3-5 times in their service lives and will likely stay around for another five years to come. Being able to build something once and not having to constantly fix it is a superpower.
I agree with your observations, but I'd suggest it's not so much about domain (though I see where you're coming from and don't disagree), but about volatility and the business lifecycle in your particular codebase.
Early on in a startup you definitely need to optimize for speed of finding product-market fit. But if you are successful then you are saddled with maintenance, and when that happens you want a more constrained code base that is easier to reason about. The code base has to survive across that transition, so what do you do?
Personally, I think overly restrictive approaches will kill you before you have traction. The scrappy shoot-from-the-hip startup on Rails will beat the Haskell code craftsmen 99 out of 100 times. What happens next though? If you go from 10 to 100 to 1000 engineers with the same approach, legibility and development velocity will fall off a cliff really quickly. At some point (pretty quickly) stability and maintainability become critical factors that impact speed of delivery. This is where maturity comes in—it's not about some ideal engineering approach, it's about recognition that software exists to serve a real world goal, and how you optimize that depends not only on the state of your code base but also the state of your customers and the business conditions that you are operating in. A lot of us became software engineers because we appreciate the concreteness of technical concerns and wanted to avoid the messiness of human considerations and social dynamics, but ultimately those are where the value is delivered, and we can't justify our paychecks without recognizing that.
We way overindex on the first month or even week of development and pay the cost of it for years and years thereafter.
If you always maintain a high standard you get better and faster at doing it right and it stops making sense to think of doing it differently as a worthwhile tradeoff.
Is it worth spending a bit more time up-front, hoping to prevent refactoring later, or is it better to build a buggy version then improve it?
I like thinking with pen-and-paper diagrams; I don't enjoy the mechanics of code editing. So I lean toward upfront planning.
I think you're right but it's hard to know for sure. Has anyone studied software methodologies for time taken to build $X? That seems like a beast of an experimental design, but I'd love to see.
I think you'd be hard pressed to find a team that lacks this kind of cooperation and maintains consistently high quality, regardless of what some nontechnical project manager says or does.
It's also an individual effort to build the knowledge and skill required to produce quality code, especially when nobody else takes responsibility of the architectural structure of a codebase, as is often the case in my experience.
I think that in order to keep a codebase clean you have to have a person who takes ownership of the code as a whole, has plans for how it should evolve etc. API surfaces as well as lower level implementation details. You either have a head chef or you have too many cooks, there's not a lot of middle ground in my opinion.
To make things more complicated, programmers need practice to become fluent and efficient with any particular best practice. So you need investment in those practices in order for the cost to be acceptable. But some of those things are context dependent. You wouldn’t want to run consumer app development the way you run NASA rover development because in the former case the customer feedback loop is far more important than being completely bug free.
A strong type system is your knowledge about the world, or more precisely, your modeled knowledge about what this world is or contains. The focus is more on data structures and data types, and that's about as declarative as it gets with programming languages(?). I'd also call it holistic.
A procedural approach focuses more on how this world should be transformed, through the use of conditional branching and algorithms. The focus feels less on the circumstances of this world and more on temporary conditions of micro-states (if that makes any sense). I'd call it reductionistic.
The two approaches have their own pros and cons, but they aren't mutually exclusive. Sometimes the goal and solution aren't that clear, so you do it procedurally until you find a POC (proof of concept) that may actually solve the problem, and then refine it in a declarative way.
I love strong types. I love for loops. I love stacks.
GP! Try Rust. Imperative programming isn’t orthogonal to types. You can go hard in Rust. (I loved experimenting with it but I like GC)
GP! Try data driven design. Imperative programming isn’t orthogonal to declarative.
Real talk, show me any declarative game engine that’s worth looking at. The best ones are all imperative and data driven design is popular. Clearly imperative code has something going for it.
and the advantages aren’t strictly speed of development, but imperative can be clearer. It just depends.
Neither is strongly wrong or right, better or worse. They have different strengths in different problem areas, though I do think we’ve swung far too hard toward the procedural approach in the last decade.
I just find it odd to analogize:
> agile : waterfall :: imperative : declarative
You can't write a program without knowing that x is a string or a number, your only choice is whether you document that or not.
for a really simple example: languages which allow narrowing a numeric to a float, but also let you interpolate either into a string without knowing which you have.
A statically typed Console.log in JS/TS would be an unnecessary annoyance.
With no type-checking you need to gamble testing all codepaths.
I would say that imperative is the one that does computation in steps so that one can at each step decide what to do next. Declarative normally lacks this step-like quality. There are non-languages that consist solely of steps (e.g. macros in some tools that allow to record a sequence of steps), but while this is indeed imperative, this is not programming.
One side cares more about how the solution is implemented. They put a lot of focus on the stuff inside functions: this happens, then that happens, then the next thing happens.
The other side cares more about the outside of functions. The function declarations themselves. The types they invoke and how they relate to one another. The way data flows between parts of the program, and the constraints at each of those phases.
Obviously a program must contain both. Some languages only let you do so much in the type system and everything else needs to be done procedurally. Some languages let you encode so much into the structure of the program that by the time you go to write the implementations they’re trivial.
-- my attempt:
Imperative defines the order and semantics of each step.
Declarative defines the prerequisites and intent of each step.
The algorithms each can implement are equivalent.
https://existentialtype.wordpress.com/2013/07/18/what-if-any...
Semantic is another word that is hard to define.
I find imperative better for expressing state machines. I find declarative better for backtracking.
You can write a state machine with just a loop, an assignable, and conditions. Writing state in prolog is irritating.
Similarly declarative programming is strictly secondary to imperative. It is a limited form of imperative that codifies some good patterns and turns them into a structure. But it also makes it hard or impossible not to use these patterns.
First off, the declarative form is better as it's less error prone, but it comes at the cost of being harder to write and harder to interpret.
Imagine if we communicated with one declarative run on sentence. No paragraphs at all…
Procedures are default and easier to our nature as humans.
Of course declarative programming is simpler and less error prone. But it is also essentially inflexible. The implicit instruction is finite and will inevitably run into a situation when the baked execution logic does not quite fit. It will be either inefficient or require a verbose and repetitive parameter, or just flat out incapable of doing what is desired. In this case declarative programming fails; it is impossible to fix unless we rewrite the underlying instruction.
E.g. 'printf' is a small example of declarative programming. It does work rather well, especially when the compiler is smart about type checks, but once you want to vary the text conditionally it fails. (The thing that replaces 'printf' are template engines that basically reimplement same logic and control statements you already have in any language and the engine works as an interpreter of that logic. The logic is rather crude and limited and the finer details of formatting are left to callbacks that are mostly procedural.) For example, how do I format a list so that I get "A" for 1, "A and A" for 2, and "A, A, and A" for more? Or how I format a number so that the thousand separator appears only if the number is greater than 9999? Or what to do if I have an UTF-8 output, but some strings I need to handle are UTF-16? The existing declarative way did not foresee these cases and to add them to the current model would complicate it substantially. But if I have a simple writer that writes basically numbers and strings I can very quickly write procedures for these specific cases.
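The list-formatting case the comment describes really is trivial to write procedurally. A sketch in TypeScript (the function names are mine, purely illustrative):

```typescript
// "A" for 1 item, "A and A" for 2, "A, A, and A" for 3 or more.
function formatList(items: string[]): string {
  if (items.length === 0) return "";
  if (items.length === 1) return items[0];
  if (items.length === 2) return `${items[0]} and ${items[1]}`;
  // Oxford-comma style for three or more entries.
  return `${items.slice(0, -1).join(", ")}, and ${items[items.length - 1]}`;
}

// Thousand separators only when the number exceeds 9999, per the comment.
function formatNumber(n: number): string {
  return n > 9999 ? n.toLocaleString("en-US") : String(n);
}

console.log(formatList(["A", "A", "A"])); // A, A, and A
console.log(formatNumber(1234));          // 1234
console.log(formatNumber(12345));         // 12,345
```

No format-string mini-language would anticipate these rules; a few lines of ordinary procedure handle them directly, which is the comment's point.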
Instructions are primary by their nature. A piece of data on its own cannot do anything. It always has an implicit instruction that will handle it. So instructions are the things we have to master.
It's because most people who use technology literally don't care how it works. They have real, physical problems in the real world that need to be solved and they only care if the piece of technology they use gives them the right answer. That's it. Literally everything programmers care about means nothing to the average person. They just want the answer. They might care about performance if they have to click the same button enough times, and maybe care about bugs if it's something that is constantly in their face. But just working is enough...
A poorly typed program would often give a wrong answer, or, in a less dangerous case, crash.
Same for physical parts: you want them to solve some business problem, but often you won't go for the absolute cheapest, because they may work poorly.
"Poorly typed" means different things to different people, in the context of this article and thread it would probably mean weakly typed or dynamically typed? Which has nothing to do at all with the correctness of a formula or what output a program will produce.
Quality, maintainability, etc, is less important in that moment. And, like many things in companies, short term desires dominate.
Without this, Typescript is next to useless. Not knowing if the types are good is worse than no types at all.
You'd need to reach for something like https://typescript-eslint.io/rules/no-explicit-any/
Types that are too complex... hmmmm - I'm sure this exists in domains other than the bullshit CRUD apps I write. So yeah, I guess I don't know what I don't know here. I've written some pretty crazy types though, not sure what TypeScript is unable to represent.
I'd be interested in similar capabilities for higher-level tools like static analyzers and so on. The point is not to carry violations long term, but to be able to burn down the violations over time in parallel to new development work.
I want a type that represents a string of negative prime numbers alternating with palindromes, separated by commas.
You could try to craft your own type to match google's schema or hunt down 3rd party types, but just doing `(window as any)["__grecaptcha_cfg"]` gets the job done much faster and it's fairly isolated from the rest of the code so it doesn't matter too much.
Generally, the conveniences of allowing any are swamped by the mess it accumulates in a typical team.
But for your own handwritten application code, there is no excuse to use `any`.
An example is receiving API response with `fetch`. Normally you'd cast the response to the expected type, but it's not uncommon for you to misunderstand what the API can return or for the API to be changed/updated. Zod lets you verify your types at runtime so that if your API returns something unexpected then you can act on it.
This is really useful anytime you're interacting with I/O or user input. For example, I've used Zod for: loading from JSON files, reading from local storage, parsing URL params, or validating form input.
[0]: https://zod.dev/
In comes Rust. I know the memory management is the thing that sells it, but in an embedded project, I'm simply not doing any dynamic memory allocation, so I didn't think I'd see much benefit. I mostly tried it because Cargo is an amazing build system. But the thing that has sold me on it is the type system. Within my first 5 minutes of porting a magnetic encoder driver, the type system caught an error in my ported code. I used the wrong pin for my SPI MOSI connection to my driver. It absolutely blows me away that the type system knew I was using the wrong pin. Turns out the code I could never get to work in C was broken because I was referencing the wrong pin, and I never knew why. Fifteen minutes later, it caught another error: by sharing the SPI bus, it could identify that there were more than two devices connected, because there were two CS pins declared, and because the device wasn't exclusive, I had to wrap the bus in a refcell for memory safety. Absolutely amazing.
I never thought that embedded programming could be fun, and strong types were what changed my mind.
Of course, the resource constraints and efficiency of the compiled code play a significant role here. So unless a new memory-efficient, strictly typed language/compiler comes along with a modern mindset, the power of types will not come into play.
Worked perfectly first time, because the type system allowed compiler to tell me everything that was broken.
[0] https://aphyr.com/posts/342-typing-the-technical-interview
People make huge projects in them and start learning new techniques and then the weight of the choices they made begin to grow.
Enter: Types, a frequent savior. Not always the best choice for everyone, but a very good and useful thing.
I welcome this person on their journey!
All three of those languages have ways to use typing, so this statement is only true in the sense of types not being mandatory, which is also the case in any language that has the Any type.
If you want to skip out on types in Typescript, you have to be explicit about it.
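That explicitness is the difference between `any` and `unknown`. A generic sketch of the contrast (function names are mine, for illustration):

```typescript
// With `any`, the compiler stops checking entirely: this compiles, but the
// declared `number` return is a lie for inputs without a .length property
// (it yields undefined for numbers, and throws for null).
function lengthOfAny(x: any): number {
  return x.length;
}

// With `unknown`, the compiler rejects `x.length` until we narrow the type,
// so the "I don't know what this is yet" is explicit and handled.
function lengthOfUnknown(x: unknown): number {
  if (typeof x === "string" || Array.isArray(x)) return x.length;
  return 0;
}

console.log(lengthOfUnknown("abc")); // 3
console.log(lengthOfUnknown(42));    // 0
```

`unknown` is the honest type for unvalidated data; `any` is the explicit opt-out.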
> How types make easy problems hard
Sure, that's sometimes an issue with the type system itself (looking at you, Sorbet), but it can also prevent the programmer from taking advantage of the flexibility, expressiveness, and elegance the language itself may add (yes, Ruby).
But even Typescript, which is arguably one of the better type system/language pairings out there, often causes more headache than it's worth.
The problem with complex TS types is that there's no debug tooling. You can't set a breakpoint, you can't step through the type propagation. If there's an issue you have to just do trial and error, which can be very tedious.
Here's a super complex type I ran into in the wild that has a bug that is extremely difficult to fix unless you burn hours on trial and error: https://github.com/openapi-ts/openapi-typescript/blob/main/p...
(the bug in question, still unfixed): https://github.com/openapi-ts/openapi-typescript/issues/1769
It was so hard to fix this bug that I found it easier to just rewrite the entirety of the library but using code generation instead of ultra complex TS types to accomplish the same outcome.
JavaScript was a bad trip, guys.
Sure, primitive types are fine, but anything more and it's a massive pain.
Actually, types are best when someone else already did all that work.
The real question IMO is are the benefits worth the cost.
Group A writes a couple of tasks without typing. Group B writes it with typing. Compare development times, execution times, quality, security, etc...
When you have a team of engineers, this means your entire team needs to either learn or lean on an expert when tougher situations arise.
All the type system does is make that implicit knowledge explicit, and a long the way, stops you from doing things that are likely to cause issues.
It would be unfair to consider your team incompetent just because they are experts with another set of tools. It’s also unreasonable to expect these things to be quickly learned (TypeScript types are not friendly). But I think it’s reasonable to explain the benefits of this approach and to help your ramp up and learn the skill.
But, anyway, I understand the frustration. I’m usually the one trying to get my team to understand the value of modeling problems in type systems.
Is there really that much legwork otherwise? Adding ": string" to a function parameter assumes they know what a string is (which should already be the case), adding an object type assumes knowing what an object is, etc.
Simply adding types is usually not too difficult and it is still quite beneficial. It does eliminate certain kinds of bugs.
Modeling your application in a type system means making invalid states unrepresentable and being as precise as possible. This is a lot more work, but again, it eliminates more kinds of bugs that can occur.
An example of this being complex: earlier this week I wrote a generic React component that allows users to define which columns of a table are sortable. I wanted to prevent any invalid configurations from being passed in. This is what it looks like: https://tinyurl.com/bdh6xbp6
It's a bit complex but the compiler can guarantee that you're using the component correctly. This is more important and useful when it comes to business logic.
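The linked component isn't reproduced here, but the underlying "invalid states unrepresentable" idea can be sketched generically with a discriminated union (this is my own illustrative example, not the commenter's code): a request can never simultaneously carry data and an error, because no such state exists in the type.

```typescript
// Each state carries only the fields that are valid for it.
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

function describe<T>(state: RequestState<T>): string {
  switch (state.status) {
    case "idle":    return "not started";
    case "loading": return "loading...";
    case "success": return `got ${JSON.stringify(state.data)}`; // data exists only here
    case "error":   return `failed: ${state.message}`;          // message exists only here
  }
}

console.log(describe({ status: "success", data: [1, 2, 3] })); // got [1,2,3]
console.log(describe({ status: "error", message: "timeout" })); // failed: timeout
```

Passing `{ status: "success", message: "oops" }` simply fails to compile, which is the guarantee the commenter is describing.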
1. Added Boilerplate and Ceremony: Simple tasks may require extra type declarations and structures, adding “ceremony” that feels unnecessary for quick one-off solutions.
2. Rigid Type Constraints: Combining different data types or working with unclear data shapes can force complex type solutions, even for simple logic, due to strict compilation rules.
3. Complex Type Definitions for Simple Data: Handling semi-structured data (like JSON) requires elaborate type definitions and parsing, where dynamically typed languages let you manipulate data directly.
4. Refactoring Overhead: Small changes in data types can cause widespread refactoring, turning minor edits into larger efforts compared to flexible, dynamically typed environments.
5. Complexity of Advanced Type Systems: Powerful type features can overwhelm trivial tasks, making a few lines of code in a dynamic language balloon into complex type arguments and compiler hints.
A risk is, unexpected data (an empty field instead of zero; a real number introduced in untested corner cases where only an integer will actually work, etc.) can cause issues after deployment.
Those 'complex' requirements mean, if you want a reliably correct program well then you'll have to put in this much work. But go ahead, that 'trivial task' may become something less trivial when your task fails during Christmas sales season or whatever.
E.g.: “In a reflective environment such as Smalltalk, any change to the system can be detected by the system. Therefore, it is possible to write programs that depend on the objects being a particular size, or that call methods by getting a string from the user and calling perform: with it. Therefore, it is impossible to have totally correct, nontrivial refactorings. However, the refactorings in our system handle most Smalltalk programs, but if a system uses reflective techniques, the refactorings will be incorrect.”
“To correctly rename a method, all calls to that method must be renamed. This is difficult in an environment that uses polymorphism to the extent that Smalltalk does. Smalltalk also allows dynamically created messages to be sent via the perform: message. If an application uses this approach, any automatic renaming process has the potential of failure. Under these conditions, guaranteeing the safety of a rename is impossible.
“The Refactoring Browser uses method wrappers to collect runtime information. […] Whenever a call to the old method is detected, the method wrapper suspends execution of the program, goes up the call stack to the sender and changes the source code to refer to the new, renamed method. Therefore, as the program is exercised, it converges towards a correctly refactored program. […] The major drawback to this style of refactoring is that the analysis is only as good as your test suite. If there are pieces of code that are not executed, they will never be analyzed, and the refactoring will not be completed for that particular section of code.”
http://www.laputan.org/pub/papers/opdyke-thesis.pdf
As-for "performing large changes" --
"A very large Smalltalk application was developed at Cargill to support the operation of grain elevators and the associated commodity trading activities. The Smalltalk client application has 385 windows and over 5,000 classes. About 2,000 classes in this application interacted with an early (circa 1993) data access framework. The framework dynamically performed a mapping of object attributes to data table columns.
Analysis showed that although dynamic look up consumed 40% of the client execution time, it was unnecessary.
A new data layer interface was developed that required the business class to provide the object attribute to column mapping in an explicitly coded method. Testing showed that this interface was orders of magnitude faster. The issue was how to change the 2,100 business class users of the data layer.
A large application under development cannot freeze code while a transformation of an interface is constructed and tested. We had to construct and test the transformations in a parallel branch of the code repository from the main development stream. When the transformation was fully tested, then it was applied to the main code stream in a single operation.
Less than 35 bugs were found in the 17,100 changes. All of the bugs were quickly resolved in a three-week period.
If the changes were done manually we estimate that it would have taken 8,500 hours, compared with 235 hours to develop the transformation rules.
The task was completed in 3% of the expected time by using Rewrite Rules. This is an improvement by a factor of 36."
from “Transformation of an application data layer” Will Loew-Blosser OOPSLA 2002
https://dl.acm.org/doi/10.1145/604251.604258
I shuddered.
As a mobile app? Maybe not.
I'm honestly a bit puzzled which scenario the original article is envisioning when arguing this, because it mentions mobile, but then also argues for monorepos, which are kind of at odds with each other, unless you somehow force your mobile users to always be using the version that matches your back-end.
To be fair though, this is a general problem when clients and servers might not agree on data formats. You can still safely do what the author is describing since type checking occurs at compile-time and not runtime.
You would, of course, need to be sure your app handles whatever the API/db returns at runtime though. But, again, this is a general problem.
Like, the compiler should throw a type error if you try to pop() from an array with one element.
TS should say you can't pop() this array unless TS can infer it has at least one element. Otherwise it can enter a state at runtime which doesn't conform to the type. That seems bad!
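TypeScript can already encode part of this with tuple rest elements. A sketch (names are mine) of a `NonEmptyArray` type, so an element access that the built-in `pop()` would type as `T | undefined` can instead return a plain `T`:

```typescript
// "At least one element" as a type: one required slot plus a rest.
type NonEmptyArray<T> = [T, ...T[]];

// Safe last-element access: the tuple type guarantees length >= 1,
// so no `T | undefined` leaks out.
function last<T>(xs: NonEmptyArray<T>): T {
  return xs[xs.length - 1];
}

const xs: NonEmptyArray<number> = [1, 2, 3];
console.log(last(xs)); // 3

// last([]) fails to compile: [] is not assignable to NonEmptyArray<number>.
```

What TS can't easily do is track that a mutating `pop()` shrinks the array, which is why the built-in signature stays pessimistic.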
I’m surprised there’s not been any mention of effect (http://effect.website/) yet, as it is kind of the next level up if you really want to model things like errors, dependencies and side effects in the type system, using functional concepts borrowed from more pure functional languages.
It would be a bit of a risk adopting this into a shared code base depending on your team and the kinds of devs you’re looking to hire, but it could be useful to some folk that feel like they want even more type safety.
A lot of people, especially people from the same background, would benefit from branching out and learning some other programming languages/paradigms.
Some... statically typed and actually compiled languages. Maybe to take an entry course in CS.
It's very popular to hate on formal education, especially in software, but all these lessons would have been learned in the first semester or two.
Modern Java is an entirely different beast, which feels very functional these days.
(and yes, I say this as someone who teaches these things as part of an undergrad java course)
Even though the language is getting less painful, frameworks like Spring that do things at runtime instead of compile time (including rewriting bytecode on startup to inject code) make the ecosystem quite hostile to folks who want to work in a stricter, safer manner that's easier to reason about, expressed in the language itself rather than in an annotation-based metalanguage with no principles and an implementation that changes at random.
We need to stop defending Java and move on to something actually modern and good. Scala has fallen from favor, so maybe Rust is the next thing I’ll try.
I think most of what people use Lombok for though are features that should be part of the core language by now, or would be better as library methods instead of annotations. Like generating constructors, equals, and hashCode methods - case classes and data classes in Scala and Kotlin respectively handled that within the language spec many years ago. I need to try Java’s new Records, perhaps they handle that stuff now. Lombok and friends also include features that change language semantics like @SneakyThrows.
Byte code injection sometimes also changes language semantics. Early in my career I spent a few hours perplexed by why my code was encountering null when the code path I was examining used only non-nullable primitives. Turned out injection and rewriting had turned my primitive long into a nullable Long. I don’t like not being able to understand my code from just reading the code. The magic means I have to be aware of spooky action at a distance mechanisms and review their documentation. I also need to open the debugger more regularly to inspect what’s actually happening at runtime instead of just mentally compiling my code.
The reason is that most formal education curricula teach C++, which, ironically, is more error-prone and has failure conditions far harder to debug than the untyped interpreted languages (Python, JavaScript, Ruby) that were coming into popularity at the time. This is despite the fact that C++ has a type system with generics.
Because of this, a lot of people came to associate typing with something more error-prone and harder to work with. It wasn't until the advent of TypeScript, Go, and Rust that people started to appreciate the difference.
I completely agree. I started reading the article expecting something interesting or smart about functional programming, but it turns out the blogger is just very vocal about their excitement over reinventing the wheel, completely oblivious to the fact that this is very basic material in any intro software engineering course.
The blogger is not really talking about static typing. The blogger is waxing lyrical over designing a domain model and then writing an application around it. You know, what others call basic software architecture.
Wait until the blogger learns of the existence of Domain-Driven design.
Don't get me wrong, I'm not hating on JS here, and I have lots of beef with C++, but I fully agree with your take that TS barely scratches the surface of the statically typed world.
Honestly I think it’s the most interesting one to work with, too. Which is not always a good thing, but it is fun.
The only type systems I've seen that are similarly expressive are Rust's and Haskell's. Even Go doesn't come anywhere close.
That sounds a little reductive and gate-keepy. Maybe an advanced type system allowing for complex types to be expressed easily actually allows you to write simpler, more effective code.
Curious if you have any specific examples though.
Most cases I've seen with more complex interfaces are due to the fact that this is what the interface truly expects. Usually making it simpler means it's actually wrong or incomplete.
You're asking me to tell on my coworkers, and I'm too loyal to throw them under the bus :)
Well, OK, here's one, but I'll keep it as blameless as possible. We had a thing where we wanted to register some event handlers. The primary use of these event handlers was to run a selector, and if the selected data changed, trigger an update, passing the selected data along. The initial implementation used existential types to store a list of callbacks, each returning different selected data. The "driver" then did the equality checking and update triggering. We later changed this, so that the callbacks - as far as the driver was concerned - all returned `void`, eliminating the need for an existential type. We just had to move the equality checking and update triggering to inside the callbacks.
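A hypothetical reconstruction of that refactor (all names invented; the real code is private). Moving the equality check inside each callback means the driver only ever sees `(state) => void`, so no existential-style type erasure is needed:

```typescript
type State = { count: number; name: string };

// Each handler closes over its own selector and previous value, so
// differently-typed selections never leak into the driver's list.
function makeHandler<T>(
  select: (state: State) => T,
  onChange: (value: T) => void
): (state: State) => void {
  let prev: T | undefined;
  return (state) => {
    const next = select(state);
    if (next !== prev) {          // equality check lives inside the callback
      prev = next;
      onChange(next);
    }
  };
}

// The driver stores a homogeneous list of void-returning callbacks and
// never learns the selected types.
const handlers: Array<(state: State) => void> = [
  makeHandler(s => s.count, c => console.log("count changed:", c)),
  makeHandler(s => s.name, n => console.log("name changed:", n)),
];

function drive(state: State) {
  for (const h of handlers) h(state);
}
```

In the "before" version, `Array<Handler<???>>` has no honest element type in TypeScript, so you end up erasing to `unknown` and casting inside the driver; pushing the work into the closure makes the existential disappear.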
Some features are straightforward translations: anywhere you have overloading and/or optional arguments you can (and often should) simplify by refactoring into multiple functions.
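A quick sketch of that translation (the `findUser` names and `User` shape are made up for illustration):

```typescript
type User = { id: number; email: string };
const users: User[] = [{ id: 1, email: "a@example.com" }];

// Overloaded: callers must read all the signatures to know what's legal.
function findUser(id: number): User | undefined;
function findUser(email: string): User | undefined;
function findUser(key: number | string): User | undefined {
  return typeof key === "number"
    ? users.find(u => u.id === key)
    : users.find(u => u.email === key);
}

// Split: simpler types, no runtime dispatch, and each name documents itself.
const findUserById = (id: number) => users.find(u => u.id === id);
const findUserByEmail = (email: string) => users.find(u => u.email === email);
```

The overloaded form also hides a subtle hazard: the implementation signature is wider than any overload, so the dispatch logic itself is only loosely checked.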
For a concrete, public example... well, I remember the Uppy library had a lot of stuff like this. A lot of work goes into making its "Plugin" interface look the way it does (start at [1] and keep reading, I guess), and while I haven't sat down and re-engineered it, I don't think it needs to be this way if you're willing to give up some of the slickness of the interface.
[1] https://github.com/transloadit/uppy/blob/main/packages/%40up...
The more you lean into crazy ass generics in your library, the simpler and more error-free the user can make their biz logic code. Really nicely typed libraries almost give you zero chances to fuck things up, it’s amazing.
But then again most of your devs won't be able to understand all those generics, so you need to keep your biz logic types relatively simple.
I found it a good and well-reasoned explanation of _why_ he enjoys types in a large codebase. He does take time to explain different type concepts, but I assumed that was because he doesn't assume his audience is familiar with all of them. Considering that the opinion "types are good and helpful in a codebase" is not universally held, even by very experienced/productive coders (see https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701... or basically any ruby codebase), I think articles like this have a definite place.
For instance, if we just declare some data structures for computational geometry, like points, line segments and whatnot, code for, say, intersecting two meshes is not effing going to write itself!
The author is living in some CRUD world of pulling things from one API or database, converting to a different data model, and stuffing them into another API, with maybe some HTML generation sprinkled on top.
This goes to the heart of what's not great about this: types impose global semantics on a piece of software; they introduce coupling. (It's why Alan Kay used to stress "late binding of all things" as a feature of managing complexity.)
In fact, one result of this kind of programming was microservices. What do they do? Reintroduce runtime dynamism. It's not often framed that way, but there's a reason you see more statically typed microservices than Lisp or Erlang ones: they're an attempt to get away from the coupling imposed by type-driven programming and toward more independence for each service, which is already baked into message-based, dynamic languages.
And there's also a fundamental misunderstanding about data and types in the article.
> Making our types represent the “truth”
Types can't represent truth. Real-world data doesn't have types; it changes incrementally, however it wants, all the time. You can use types to keep things you don't want out of your program, but you can never represent arbitrary real-world data by matching types onto it.
That's true for static types... but it's just as true for the constructs in your dynamically typed code! Nothing about dynamic typing makes your logic or data representation any more adaptable; it just makes the rigid models inherent to your code implicit rather than explicit.
That's like saying "Maps can't represent truth". They're a model that works well enough, just like types do, if you do it right.
I think that's why the article references the 'parse, don't validate' article: Real world data is messy and so you ingest it into your business logic at the system boundaries via parsing.
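The distinction in "Parse, don't validate" is that messy input gets converted once, at the boundary, into a value whose type guarantees the checks already happened. A minimal dependency-free sketch (libraries like Zod automate this; the `User` shape here is invented):

```typescript
type User = { name: string; age: number };

// Parse unknown input from the boundary into a typed value, or fail loudly.
function parseUser(data: unknown): User {
  if (typeof data !== "object" || data === null) {
    throw new Error("expected an object");
  }
  const { name, age } = data as Record<string, unknown>;
  if (typeof name !== "string") throw new Error("name must be a string");
  if (typeof age !== "number" || !Number.isInteger(age) || age < 0) {
    throw new Error("age must be a non-negative integer");
  }
  // From here on the rest of the program works with User, not unknown.
  return { name, age };
}

const user = parseUser(JSON.parse('{"name":"Ada","age":36}'));
console.log(user.name); // "Ada"
```

Everything past the boundary can then trust the `User` type; the mess of the real world is contained in one place.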
What do you mean? If a variable is a date/time or an integer etc. in the real world it should stay that way and be represented as such in software.
Same with a lot of systems. Take a search system that tracks data using the best available embedding: we don't know what the best embedding is in terms of price, performance, and encoded knowledge; it's a moving target.
Or take a system that tracks important metrics affecting the stock market, or a weather system: the sensors are all updating, and then an entirely new thing like Starlink comes along that helps us track the weather in totally new ways.
As someone who is still learning, this is a huge reason why I've come to love dynamic languages. Any project I do involves a lot of rewriting as the code evolves, and I've found that trying to predict the structures and types ahead of time is mostly a waste of time. The best middle ground for me so far has been Python with type hints: it allows quick iteration, and I can experiment and only then update the types to match what I have, so I still get LSP help and all that.
But I could see this being less relevant with more experience
But yeah, if you use something like Zod you can at least say "this is what I'm pretty sure the API should return," and also define what should happen if things change or don't meet your types.
You could just as well say "how bad APIs make simple problems complicated and how you might strain at a type system to pretend this bad design is worth keeping."
I mean, "string or boolean or undefined" is not a "type" at all. It is a poorly specified contract for an overloaded interface with terrible semantics, built on abuse of the language's grammar.
It's why I think a language with the core semantics of JavaScript plus a goofy type system are never going to produce anything worth actually having. The two sides of the system are constantly at odds with each other. People are mostly using the type system to paper over bad design semantics.