> A piece of advice I've given junior engineers is to write everything twice. Solve the problem. Stash your code onto a branch. Then write all the code again. I discovered this method by accident after the laptop containing a few days of work died. Rewriting the solution took only 25% of the time of the initial implementation, and the result was much better.
This is true, and I've discovered it myself by losing a branch.
However, who the hell has time to write everything twice? There are 1,000 things waiting to be written once.
Tier3r 9 days ago [-]
I fear not the man who has written 10,000 features once, but the man who has written 1 feature 10,000 times
Cthulhu_ 8 days ago [-]
You'd think after a decade of writing forms and form validation on the internets I'd get better at it, but nope, have to reinvent the wheel every time for every framework/form library because nothing is ever finished or good enough. I had hopes HTML5 would fix it but it didn't.
gwynforthewyn 8 days ago [-]
You sound much more experienced as a web developer than me, so you probably know this exists, but just trying to be helpful I want to be sure you know about client side form validation in html5 https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_va...
I appreciate this doesn't cover anything more than the basics, so something like the normal behaviour of comparing two password fields for the same content doesn't work, but I find these controls are useful for getting something simple up and running.
marcosdumay 8 days ago [-]
This is full of issues. There has been a recent discussion about it here¹; you may be interested in that.
Interesting adaptation of the phrase. I think I may fear the 10k-feature person more than the 1-feature one.
yen223 9 days ago [-]
My <TagComponent /> to render tags can kill a person
hackable_sand 8 days ago [-]
I'll bring the portal gun.
swyx 8 days ago [-]
this is unironically Ryan Florence and React routers
dcuthbertson 8 days ago [-]
> However, who the hell has time to write everything twice?
Nobody. The article even says, "N.B. Obviously, don't write literally everything twice. It's a heuristic. Apply intelligently."
swyx 8 days ago [-]
how are we going to have a robust internet forum if we are not allowed to interpret quotes to the extremes of their meaning in order to have an easy punching bag for our comments?
winternewt 7 days ago [-]
That's the part that made the least sense to me. If he doesn't actually mean to rewrite it twice, what does he _mean_? Copy and paste the old code? Rewrite half of it? Typically (my "intelligent application") I will read the code I wrote yesterday and make improvements; is that sufficient?
spiffytech 9 days ago [-]
I think the mitigating factor is "slow is smooth, smooth is fast".
If you raise the quality of your codebase, you can implement those 1,000 things faster, and you reduce the odds that they'll have to be reworked.
netdevnet 8 days ago [-]
In the world of professional software development, economic value is king. This rule has declining marginal utility after some point, and in some cases it is just not worth it: think prototyping, tight deadlines, etc. In an ideal world, taking some additional time pays off; in other worlds, it doesn't and gets you a PIP. inb4 you say that's not the kind of company you want to work in: most companies are like this, and not everyone can afford to apply and get a place at the 2 companies in your country where code quality is actually valued.
zusammen 8 days ago [-]
Sadly, this is true. Even in big companies there are managers who don’t recognize an internal sales culture as a problem.
This game, as played at high levels, is mostly about selling work, not doing it. By the time the results of good or bad “doing” draw notice, the adept salespeople and managers have already been promoted and it’s no longer their problem.
animal_spirits 9 days ago [-]
This only works if you are in charge of the timelines or have a manager who values quality over quantity
ajmurmann 9 days ago [-]
And/or an organization that has the faith and politics to wait till smooth starts paying back and becomes fast. Frequently you also have everyone benefiting from smoothness but not everyone contributing to it, which results in the folks who pump out tickets as fast as possible looking like they are just strictly better. Easy to adapt to and game as an IC, but I find it frustrating as a manager.
netdevnet 8 days ago [-]
> wait till smooth starts paying back and becomes fast
By that time, you will have probably left the company and someone with less care about code quality will come and undo your work.
Also, like security, it is hard to show a manager data and graphs demonstrating how much we are saving.
ajmurmann 2 days ago [-]
Yes, very true. You can only make it work if everyone is bought in, and even then it's liable to regress the moment someone skips steps and gets praise from the business while other engineers are fixing their stuff.
KronisLV 8 days ago [-]
> This only works if you are in charge of the timelines or have a manager who values quality over quantity
And also only if you're not in a bad starting position: you could have a system that is very hard and slow to work with, meaning that you can't feasibly introduce much easy-to-use functionality, especially if the domain logic is tightly coupled.
pif 8 days ago [-]
> smooth is fast
Such reasoning is based on a flawed assumption: that the value of time is constant over time.
While smooth is indeed fast in the long run, it is slower in the short run, and time before the release date is much, much, much more valuable than time after the deadline.
Shipping functionality now and bugfixes later is what pays your salary. Waiting for the perfect code makes your customer seek comfort with your competition.
tokinonagare 8 days ago [-]
Most deadlines are fake.
pif 8 days ago [-]
Yes and no.
External deadlines are often meaningless, but customers are not the only users of the application. Once you release your part and keep developing/debugging/polishing your code, your colleagues can move on with their job and so on and so on.
As unfortunate and unnatural as this may seem to us programmers, shipping _is_ a feature in professional software development; and, in the quality/time continuum, "something now" beats "all of it tomorrow" in every scenario.
PS: I do get that the decision on releasing a product to the public has different constraints in the medical or aeronautical industry than a photo sharing website, still enabling the rest of the organization to move on with their tasks is too often underrated.
netdevnet 8 days ago [-]
Tell that to the customer, as a web/mobile agency, when they ask you to contractually commit to a date for the release of their web app
yamazakiwi 8 days ago [-]
Or when your boss sets a goal for 0 missed deadlines and lets people go for not hitting them.
fmbb 8 days ago [-]
Yes indeed.
But you still have to put your stuff out there to test it.
All code is a cost. Features are what users pay for. There is scarcely objectively good code. One person’s smooth is another person’s rough.
Deliver stuff. Act on user feedback.
If you are not developing commercial software, the above is invalid advice.
raister 9 days ago [-]
I once thought I had lost a paper I'd written a while back (before Overleaf and the cloud), and I rewrote it, only to find the original again: when I compared the versions, the second one had better sentences altogether. The core framing was there, but the quality was higher.
magicalhippo 8 days ago [-]
It doesn't get really good until the third rewrite, in my experience.
That said, I think it's worth noting the advice is aimed at junior engineers. These days I feel most of my code is good enough most of the time not to warrant a rewrite, as I can usually catch myself before doing something too stupid.
marcosdumay 8 days ago [-]
> as I can usually catch my self before doing something too stupid
About me, I dunno. I can usually catch myself doing something stupid. But fixing it in the act often takes time away from finding the stupidity hidden in the more complex corners (usually in how the software interacts with other things).
Nowadays, I tend to stop and fix visible issues if other people are going to interact with the software. But if not, I find it much more valuable to get a bad version out there fast, so all the problems become known.
gavmor 8 days ago [-]
> who the hell has time to write everything twice?
If writing everything twice results in more maintainable code (e.g. higher cohesion, lower coupling, less verbosity, more self-explanatory code), then it's possible to get massive returns over the life of the module.
golergka 9 days ago [-]
> However, who the hell has time to write everything twice?
The developer who's going to receive pager alerts about the feature, have to fix its bugs, track user analytics for the feature, suggest and implement improvements to those metrics, write documentation, and educate and support other team members on it.
RangerScience 9 days ago [-]
There’s a thousand things waiting to be written once… because they’re each coping with something else that was written once.
Write it twice, then you can use it a dozen ways, and now you’ve got a hundred things to write instead of a thousand.
progmetaldev 9 days ago [-]
Assuming you've written it correctly the second time. If not, you now have a deeper problem, because the code is most likely more complex. If things are waiting on and coping with something that needs to be written once again, how have you used your time to decide what needs to be rewritten, versus something else that will probably be used more than you expected and needs a serious rewrite?
If you write code and don't think it needs to be rewritten, you are either an expert in your domain or believe you have written code that fits your problem perfectly. Again, if you are not an expert in your domain, then what you have written is at best a solution that works without a second thought, but more likely could use another rewrite. Most software does not need a rewrite; but if we're talking about ways to reuse code in a thousand ways rather than a hundred, you need to have the luxury of rewriting code that is used in so many places that a rewrite is almost required.
flir 8 days ago [-]
This is "build one to throw away" writ small.
Kinrany 8 days ago [-]
> Rewriting the solution took only 25% of the time
misnome 8 days ago [-]
So 125% time total
phrenq 8 days ago [-]
Yes, which the article mentions:
“So you get maybe 2x higher quality code for 1.25x the time — this trade is usually a good one to make on projects you'll have to maintain for a long time.”
Yeah, I'm seeing something of a fallacy in these comments: that writing something once necessarily means it's written badly and will break/page people/etc. Sometimes you gotta get a lot of stuff done that has low complexity (e.g. in a startup), and writing it twice really is a waste of time.
A few basic sanity checks (some unit tests, a little discipline avoiding the abstraction high, whatever your flavour may be) are fine in many scenarios. Not all features require tons of monitoring and documentation. Everything in our line of work is a trade-off!
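To make the "basic sanity checks" level of rigor concrete: for a low-complexity, write-once helper, a couple of asserts is often all the discipline needed. A toy sketch in Python (the function and its behaviour are made up for illustration):

```python
# Toy write-once helper plus the minimal sanity checks that make it
# reasonable to ship without a rewrite. Names and behaviour are made up.

def normalize_tags(raw: str) -> list[str]:
    """Split a comma-separated tag string; trim, lowercase, drop blanks/dupes."""
    seen = []
    for tag in raw.split(","):
        tag = tag.strip().lower()
        if tag and tag not in seen:
            seen.append(tag)
    return seen

# A handful of asserts is often enough discipline for low-complexity code.
assert normalize_tags("Python, web , python,") == ["python", "web"]
assert normalize_tags("") == []
assert normalize_tags(" , ,") == []
```

Checks like these won't replace monitoring, but they are cheap enough that even a write-once feature can carry them.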
Kinrany 8 days ago [-]
A heuristic becomes a fallacy only when applied without understanding the reasons behind it.
Every rule has exceptions, this isn't worth mentioning.
bubblyworld 8 days ago [-]
That is exactly what I'm seeing - heuristics applied lazily. If it wasn't worth mentioning it was at least worth replying to, right? ;)
airtonix 8 days ago [-]
[dead]
w10-1 9 days ago [-]
People learn to work better by reflecting on work. So any framework for self-observation is better than none.
I suspect that algorithms as a framework demonstrate the structural aspects (e.g., how some searches are more extensive), but might hide the driving factors. Indeed, the article's examples were almost all about hacking personality, not technical or process solutions.
E.g., most over-engineered solutions are driven by fear, often itself driven by critical or competitive environments. Conversely, much of the power of senior/staff engineers comes from the license to cut corners afforded their experience. Or people use the tools they know.
You can't get to great by holding onto good. It's easy to go from bad to good, but it takes some courage to toss good and start over for great. The lesson there is that we stand (hide?) behind our work, and we need to let go of it to improve it.
A meta-lesson is that managers need to deeply understand the personal space of each developer and the social dynamics of the team before they can do their job effectively, and a key part of that is likely checking in with developers in a way that enhances their courage and self-observation instead of making them fearful and paranoid.
trhway 9 days ago [-]
> much of the power of senior/staff engineers comes from the license to cut corners afforded their experience.
Absolutely. Jira is equally slow for everyone except the ones who are allowed to skip it.
At the places where I managed to become a star developer, it was by hitting hard early and achieving great results; after that you'd get a lot of slack cut for you, which allows you to continue delivering at the star level with relatively light effort, and definitely much more easily than, say, the mediocre grind I produce at the current place, where "Jira" is truly slow for us.
shiroiushi 9 days ago [-]
So you were a "star developer" at some previous workplaces, but now you're not, and just produce at a "mediocre grind"? Why is that? And why work there now and not at the previous places?
I suspect this is the common tale of poor compensation: despite being a star developer, employers won't pay properly for it, just a mediocre annual raise, so you move on after a couple of years to another place, which offers you a huge raise (the new starting salary compared to the previous job's salary). And the new place, while giving you a poor environment that hampers your productivity, still pays much better than the previous places where your productivity was much higher. And because of this common workplace dynamic, it's usually not worth it to put in much effort to be a star employee, unless you find the rare employer that actually rewards you for it. Is my guess correct?
mywittyname 8 days ago [-]
> Is my guess correct?
My self-observation is that my performance is relative to how rushed I am to complete something. So I tend to perform much better at places that develop features on a 1-3 month cadence rather than somewhere that expects development progress to occur on a day/week basis, even when the overall amount of time spent on development is the same.
I think having to show my unfinished work to people takes its toll on my confidence. I know it's crap because it's not finished, and I spend so much time talking about what's missing/poorly designed that I come away from demos thinking, "I just showed everyone how terrible I am at this."
My goal is always to be a star employee. Since I benefit from doing a great job just as much as my employers do.
trhway 9 days ago [-]
you've been around, you're absolutely right.
>not worth it to put in much effort to be a star employee
It isn't even possible at places like the one I'm currently in, at least for me. I mean, I'm still rated as a high-performing employee, just a regular one at that. You do your components in a very large platform product, make sure that they aren't a source of pain, and just coast under the radar.
ebiester 8 days ago [-]
There are a few intersecting problems:
The difference between high performance in a trusted environment and high performance in an environment that lacks trust is significant. The problem isn't Jira - it's peer reviews and QA and gates to show both that the process is followed and that the quality is sufficient. The "high performer" is granted the trusted environment -- or at least the organization structures itself to minimize the pain to the "high performer."
So, someone can do the same amount of work but not achieve the same results because of the system.
So, then the logical answer is to throw the processes out! Get rid of Jira and PR and don't let quality block release! That works in a small environment, but the moment your company needs to generate SOC audits you will fail and you won't be able to sell. Or you will never be able to go public because your SOX auditor won't pass you. (And we won't even talk about PCI compliance!)
So, then, you end up in a push/pull. One of the part time jobs of management is monitoring the system and balancing the constraints. Here are some ways people have tried to balance the constraints:
* Pair programming. You reduce the theoretical output by 50%, but you fix that by putting QA and PR into the process. (You also have a feedback loop from two people reducing blocks and can stop interruptions by only interrupting half of the pair.)
* Mob programming. You reduce the theoretical output by 25% (assuming a team of 4), but you drop QA and PR and you always have someone available to unblock. If you have a team of specializing generalists, you always have the experience on hand to solve the problem.
* Surgical team. You have a series of junior developers who support a staff engineer. In this model, the staff engineer figures out the hard parts and feeds the rest to a series of junior engineers. Junior engineers are also available to write tools and tests and qa. It seems like you have fewer people working on the key problems, but one person having it all in their head and unblocked continuously can compensate for the lack of theoretical output. (In this model, it is hard to hire seniors because one bad senior can tank the entire team.)
* Team of seniors. You still have the reduced output of the process, but since they work in a high trust environment, PRs can be glossed over and QAs are just there to cover the release management. (One new developer can slow everyone down until they're up to speed, and a bad performer can tank the whole team as it devolves into mistrust as processes have to be applied to all.)
* Hierarchical teams: A combination of seniors and juniors where the seniors take harder problems and juniors support seniors and take less complicated problems. Mid-levels are senior-light. These teams look like they're working closer to the theoretical maximum but end up being slowed down by the processes of mistrust.
* Scrum Master/Project manager. For complex projects requiring a lot of interaction and a large number of people, take the money allotted to developers and give it to non-developers who take care of those parts. People complain about this here because it's often misapplied.
* Star Developer autonomy. For people who have proven themselves, the organization warps around them. It's like the surgical team, except these people are often tied to multiple teams. Instead, the other teams work being unblocked by these star developers or clean up the messes/maintain the work of the star developers after the fact.
All of these can be valuable. Some of them are not "agile." But the bigger the organization, the more difficult the problems get because they're problems of interaction and trust.
And none of them are about the amount of work, but rather the amount of perceived productivity by a given person.
MillironX 9 days ago [-]
I fundamentally disagree with the "gun to the head" strategy.
One of the major projects I worked on was a virus genome analysis pipeline. Our initial funding was for single-segment virus analysis, but about a year into the project, our grant collaborators needed to demonstrate multi-segment virus support within two weeks. My PI and I agreed on a quick and dirty method that would produce the analyses needed and could be done in the allotted time. That piece of code was fundamental to how the whole pipeline worked, however, and so as the pipeline grew, it took on the shape of the "gun to the head" decision we had made to the point where other, more essential features of the pipeline had to be delayed so I could come up with more workarounds.
There were clearly other issues at play here (scope creep and lack of separation of concerns were huge). My time at the lab came to a close, but if I were to have a chance to continue that project, I would start over from scratch rather than deal with the baggage that that one "gun to the head" moment created.
I understand that it's a heuristic and not meant to be taken as 100% truth for every situation. I also understand that it's trying to avoid that "paralysis by analysis" that's so easy to fall into. I just question how useful it truly is as a heuristic, especially since it seems to go against the "write everything twice" algorithms presented in the rest of the piece.
tikhonj 9 days ago [-]
I'd say it goes with the "write everything twice" heuristic! If you're in an environment where you can write things twice—you have trust and autonomy and aren't encumbered by process—then writing an initial version as fast as possible gets you started faster, lets you play with something concrete and leaves you more room for your second version.
My best projects were like that. I'd figure out something quick—some combination of reducing scope and doing "things that don't scale"—then spend time refining the conceptual design and interfaces, and finally rewrite the initial piece based on that new design. This can absolutely work better than just trying to write something "good" the first time around, but it looks wasteful to somebody superficially tracking individual "tasks" you're working on.
qsort 9 days ago [-]
> I just question how useful it truly is as a heuristic
I think the author is presenting them as analytical tools that might or might not be useful depending on the situation.
Very often when faced with a difficult problem it's hard to know where to start attacking. Any idea, even if simplistic and wrong, can be useful to start gaining insight on what is going to work and why; even just refuting the original idea with a clear counterargument might suggest alternative avenues.
OT: this is IMO part of the reason why people like LLMs so much. Maybe the answer is trash, but articulating why it's trash gets you unstuck.
MajimasEyepatch 9 days ago [-]
As the saying goes, there's nothing more permanent than a temporary solution. If you're going to do this, you either have to explicitly plan for the cost of the rewrite to do it the "right" way after doing it the "wrong" way, or you have to accept that you're probably not going to revisit the "temporary" solution for a long time.
ilidur 9 days ago [-]
Review: An anonymous "distinguished CEO and engineer" suggests if you can't complete a feature in a day, delete your progress (except for tests) and start again the next day.
The author then recounts advice he gives to juniors, which is to stash the work and rewrite it, claiming that the next day the work will be rewritten in 25% of the time and at 2x the quality. This is unsubstantiated, though. For juniors, this suggests it will help them develop their capability to reason about implementations of problems without needing to face a large number of them.
The author then gives another piece of advice, which is to ask for a solution to a problem and then, after the initial proposal, ask for a 24-hour solution. This is meant to generate "the real solution". He likens it to the heuristic of a pathfinding algorithm for reaching your goal quicker.
Overall the methods are not well discussed in terms of pros and cons, nor substantiated with experiments.
Opinion: I think they may help some juniors who need to build up experience and may become stuck in development patterns. But they would rarely be useful for developing someone into a senior, if all they do is chase fast implementations. In a way the post gives conflicting advice: write twice and write better, but also think twice and think about the fastest way to achieve the goal instead of engineering the problem.
The author hasn't really convinced me of these approaches, and especially the last one smells of eXtreme Go Horse.
guappa 8 days ago [-]
The scientific approach would be to try this yourself.
ilidur 8 days ago [-]
I would say that's called an anecdote.
arzke 8 days ago [-]
Exactly. The "it works for me so it works" type of thinking is what makes people mistakenly conclude that homoeopathy works any better than a placebo.
guappa 6 days ago [-]
So how do you go from 0 experiments to 10000000 experiments without passing through 1 experiment?
Care to explain?
Perhaps if you're at 0 experiments you can't really have an opinion? Which is the situation of the person I replied to.
guappa 8 days ago [-]
And deciding whether it works or not without trying it even on yourself? What's that called?
neilv 9 days ago [-]
These are useful mental tools to have in your mental toolbox; apply them as they seem appropriate, but don't make a religion of any of them.
A related old idea about writing/reworking software three times: "Do it. Do it right. Do it fast."
Startups provide ample opportunities to get experience with what the article calls "gun to your head heuristic". You have to decide what and how to compromise.
And, if you want your startup to be successful (not just hit your metrics/appearances, and then job hop), you can't just do it like school homework (where the only goal is to slip something past a grader, and forget about it), but you have to creatively come up with a holistically good compromise solution, given all the factors. If you do this well, it's creative magic that can't be taught, but can be learned, through experience and will.
n4r9 9 days ago [-]
> A related old idea about writing/reworking software three times: "Do it. Do it right. Do it fast."
The version I've heard is "Make it correct. Make it readable. Make it performant."
rokob 9 days ago [-]
These are all variants of Kent Beck’s saying: “Make it work, make it right, make it fast.”
And this is not about writing anything thrice at all; it's about picking the right priorities/order.
bn-l 8 days ago [-]
> Start working on the feature at the beginning of the day. If you don't finish by the end of the day, delete it all and start over the next day.
How experienced is this “CEO and engineer”?
booleandilemma 8 days ago [-]
I know, I laughed out loud and stopped reading after that sentence.
anonyfox 8 days ago [-]
A thing that took me several years to accept is that almost everything can be coded within 24 hours if really needed: a gun-to-the-head situation. Will it be perfect/efficient/beautiful/...? Probably not. But it should roughly work like it's supposed to.
If something cannot be coded within those 24 hours, something else is odd, not the feature. Having transitioned from SWE to DevOps and then leadership roles, most of my day is actually spent on all the reasons/excuses why "it cannot be done", trying to eliminate them. My developers probably hate me for it, but I always push hard for an immediate first solution instead of days of soul-searching first. Over time we encounter and solve enough roadblocks (technical, social, educational, ...) that more often than not we end up with surprisingly fast (and good enough) solutions. That speed is a quality in itself, since it frees up time to come back to things and clean up messes without the shipping pressure mounting up over days/weeks; something working is already there on day two.
The trick is of course to _not_ sell this 24 hour solution to upper management ever, or else it will become a hell of a mess fast once this becomes the outsider expectation.
lunarcave 9 days ago [-]
Although I disagree with the arbitrary limit of “one day per feature”, I agree with the broader point that useful constraints drive creativity.
082349872349872 9 days ago [-]
The generalisation I would make is: if you think, not just of a specific solution, but of the space of possible solutions, it makes it easier to find a point in that space which best "fits" (space, time, architecture) the program you're working on.
Rather than trying paths in the dark, first look at a map, then try a few paths.
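The map metaphor has a literal algorithmic counterpart: in A* search, an optimistic lower-bound estimate (an admissible heuristic) is what lets you "look at the map" and rank paths before fully exploring them, much like the article's lower-bound thought experiment. A minimal sketch (the 5x5 grid and wall are made up for illustration):

```python
import heapq

# Minimal A* on a small grid. The Manhattan-distance heuristic is an
# optimistic lower bound on the remaining cost: it never overestimates,
# so the search can prioritize promising paths before committing to them.

def astar(blocked, start, goal, size=5):
    """blocked: set of (row, col) cells; returns shortest path cost or None."""
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (estimated total, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if nxt in blocked or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if cost + 1 < best.get(nxt, float("inf")):
                best[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # goal unreachable

wall = {(1, 1), (2, 1), (3, 1)}  # a wall forcing a detour
print(astar(wall, (2, 0), (2, 4)))  # → 8 (the direct distance is 4)
```

The heuristic never proposes the final route by itself; it just keeps the real search honest about how good a route could possibly be, which is the role the "gun to the head" estimate plays for a design.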
vincnetas 8 days ago [-]
"Gun to your head" gives you quick working solutions. But then when you avoided the headshot you run away and leave that quick-fix-mess to someone else. So this should not be a long time strategy. Sure if your runway is approaching the end, do whatever is needed to stay alive.
phrenq 8 days ago [-]
Generously, I think the article is presenting it as a thought experiment to make sure you’ve thoroughly explored the solution space, while not literally recommending that you necessarily implement the “gun to your head” solution.
jasfi 8 days ago [-]
If you write a design doc first, you can save yourself a lot of time. It's very quick to write a new design, and designs are also much faster to revise than code is.
dakiol 9 days ago [-]
I spend 90% of the time reading and trying to understand the business logic behind the code in the place where I think I need to put my feature. So far, no methodology helps much with this part of software engineering.
The remaining 10% is rather straightforward.
perrygeo 8 days ago [-]
The "write it twice" strategy should be more common.
The alternative is "write it once and be stuck with it". You want tech debt? Because that's how you get tech debt.
By definition, the first time you do something, you will learn a ton. So you've gained two things: 1) hard-won technical knowledge and 2) a bunch of sub-optimal code written before you obtained that knowledge. The way I see it, keeping that proof-of-concept code around forever is a terrible tradeoff - giving up #1 to save #2 at any cost. Code isn't that special, especially on the first pass.
ww520 9 days ago [-]
For "write everything twice" I would say write some parts many times. You don't have to write everything twice. Most code are optimal or good enough already, but you should be willing to re-write some parts over and over again.
marhee 8 days ago [-]
Side note: “write everything twice” (WET) was originally coined as a reaction to DRY (“don't repeat yourself”). I think the author is using the credo here for something else (which is still good advice).
wiremine 8 days ago [-]
I agree 100% with this approach, and I'd love to see some research to back it up.
I think a lot of software development is "way finding": experimentation to figure out an architecture, an implementation, a performance improvement, etc.
We often a) don't call them experiments, and b) don't constrain them well. I.e., we use the scope of the feature to bound the experiment; instead of taking a step back to figure out the right approach, we dive into implementation.
I'm curious if there's a more formal way to think about this all?
albrewer 7 days ago [-]
> "gun to the head" heuristic
Never underestimate the power of hiring a new employee and training them how to do what the software would do, thereby writing zero lines of code.
That $250k in initial feature development costs + maintenance might only be $75k in personnel costs a year, which is a break-even of ~4 years. Depending on the problem, that might be the best option.
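As a quick sanity check on that arithmetic (all figures are the hypotheticals from the comment above, plus an assumed annual maintenance cost; without maintenance the break-even would be closer to 3.3 years):

```python
# Back-of-envelope break-even: build the feature vs. staff the task.
# All numbers are hypothetical: $250k and $75k/yr come from the comment
# above; the yearly software maintenance cost is an assumption.

build_cost = 250_000           # one-time feature development
maintenance_per_year = 12_500  # assumed ongoing upkeep of the software
personnel_per_year = 75_000    # cost of a person doing the job manually

# Break-even year t solves: build_cost + maintenance * t = personnel * t
breakeven_years = build_cost / (personnel_per_year - maintenance_per_year)
print(f"break-even after {breakeven_years:.1f} years")  # → 4.0 years
```

Note the model only breaks even at all while personnel costs exceed maintenance; if upkeep creeps toward the personnel figure, hiring wins indefinitely.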
Is the title a play on the "Algorithms to Live By" book?
Sounds clever :)
OutOfHere 9 days ago [-]
It is the worst advice I have read. If the goal is to divide everything to fit in a day, it would be better to use a static analyzer to enforce certain rules.
whatnotests2 8 days ago [-]
Nice play on words from Lakoff's "Metaphors We Live By"
ayewo 8 days ago [-]
Surprised you were the only one to pick up on the pun on the title of George Lakoff’s seminal book from 40+ years ago :)
mattlondon 9 days ago [-]
Gun to the head solutions: no tests, no quality, no future planning, hacks on top of hacks, tech debt.
As a manager, this "thought exercise" is dangerous. You think it is a fun and harmless exercise to get your reports to really focus; your reports at best think you don't trust them, and at worst think you are threatening them with violence (hypothetical or otherwise) unless they tell you what they think you want them to say. Psychological safety will be at rock bottom pretty quickly.
Nice job you have there, would be a reaaaaalll shame if anything hypothetical happened to it huh? Now estimate again, and this time don't make me angry <imitates pulling a trigger of an invisible gun at someone's head>.
Absolutely terrible behaviour.
me-vs-cat 8 days ago [-]
It does need a better name.
> The purpose of the thought experiment isn't to generate the real solution. It's meant to put a lower bound on the solution. Then you think of a real solution with that lower bound in eyesight, and you'll find it's often better than your original solution.
He does not say to actually implement the quickest hack you can imagine, nor to skip the mundane steps that avoid tech debt. He says spend 10 minutes imagining a quick hack after laying out your initial solution, and incorporate anything you learned into your actual work. To me, it sounds very similar to a startup chipping away at their original idea to get a MVP.
> The purpose here is to break their frame and their anchoring bias. If you've just said something will take a month, doing it in a day must require a radically different solution.
Also, it doesn't do much.
1 - https://news.ycombinator.com/item?id=41976529
Nobody. The article even says, "N.B. Obviously, don't write literally everything twice. It's a heuristic. Apply intelligently."
If you raise the quality of your codebase, you can implement those 1,000 things faster, and you reduce the odds that they'll have to be reworked.
This game, as played at high levels, is mostly about selling work, not doing it. By the time the results of good or bad “doing” draw notice, the adept salespeople and managers have already been promoted and it’s no longer their problem.
By that time, you will have probably left the company and someone with less care about code quality will come and undo your work.
Also, like security, it is hard to show a manager data and graphs of how much we are saving.
And that's assuming you're not in a bad starting position: you could have a system that is very hard and slow to work with, meaning that you can't feasibly introduce much easy-to-use functionality, especially if the domain logic is tightly coupled.
Such reasoning is based on a flawed assumption: that the value of time is constant in time.
While smooth is indeed fast in the long run, it is slower in the short run, and time before the release date is much, much, much more valuable than time after the deadline.
Shipping functionalities now and bugfixes later is what pays your salary. Waiting for the perfect code makes your customer seek comfort at your competition.
External deadlines are often meaningless, but customers are not the only users of the application. Once you release your part and keep developing/debugging/polishing your code, your colleagues can move on with their job and so on and so on.
As unfortunate and unnatural as this may seem to us programmers, shipping _is_ a feature in professional software development; and, in the quality/time continuum, "something now" beats "all of it tomorrow" in every scenario.
PS: I do get that the decision on releasing a product to the public has different constraints in the medical or aeronautical industry than a photo sharing website, still enabling the rest of the organization to move on with their tasks is too often underrated.
But you still have to put your stuff out there to test it.
All code is a cost. Features are what users pay for. There is scarcely objectively good code. One person’s smooth is another person’s rough.
Deliver stuff. Act on user feedback.
If you are not developing commercial software, the above is invalid advice.
That said I think it's worth noting the advice is to junior engineers. These days I feel most of my code is good enough most the time to not warrant a rewrite, as I can usually catch my self before doing something too stupid.
About me, I dunno. I can usually catch myself doing something stupid. But fixing it in the act often takes time away from finding the stupidity hidden in the more complex corners (usually in how the software interacts with other things).
Nowadays, I tend to stop to fix visible issues if other people are going to interact with the software. But if not, I find it much more valuable to get a bad thing there fast, so all the problems become known.
If writing everything twice results in more maintainable code (eg higher cohesion, lower coupling, less verbose, more self-explanatory) then it's possible to get massive returns over the life of the module.
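A contrived sketch of the kind of cleanup a second pass tends to produce (the function names and the CSV-ish format are invented for illustration, not taken from the thread):

```python
# First pass: works, but mixes parsing, filtering, and formatting in one loop.
def report_v1(lines):
    out = []
    for l in lines:
        p = l.split(",")
        if len(p) == 2 and p[1].strip().isdigit():
            if int(p[1]) > 0:
                out.append(p[0].strip() + ": " + p[1].strip())
    return "\n".join(out)

# Second pass: same behaviour, but each concern is its own function,
# so the pieces can be tested and reused independently (higher
# cohesion, less coupling between parsing and presentation).
def parse(line):
    """Return (name, count) for a well-formed 'name,count' line, else None."""
    parts = [p.strip() for p in line.split(",")]
    if len(parts) == 2 and parts[1].isdigit():
        return parts[0], int(parts[1])
    return None

def report_v2(lines):
    out = []
    for line in lines:
        record = parse(line)
        if record and record[1] > 0:
            out.append(f"{record[0]}: {record[1]}")
    return "\n".join(out)
```

The rewrite is not cleverer than the original; it just separates decisions that the first draft made all at once, which is exactly the kind of improvement a second pass tends to surface.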
The developer who's going to receive pager-duty alerts about the feature will have to fix its bugs, track user analytics for the feature, suggest and implement improvements to those metrics, write documentation, and educate and support other team members on it.
Write it twice, then you can use it a dozen ways, and now you’ve got a hundred things to write instead of a thousand.
If you write code and don't think it needs to be rewritten, you are either an expert in your domain, or believe you have written code that fits your problem perfectly. Again, if you are not an expert in your domain, then what you have written is at best a solution that works without a second thought, but more likely could use another rewrite. Most software does not need a rewrite, but if we're talking about ways to reuse in a thousand ways, rather than a hundred ways, you need to have the luxury to rewrite code that is used in so many places that it's almost required.
“So you get maybe 2x higher quality code for 1.25x the time — this trade is usually a good one to make on projects you'll have to maintain for a long time.
N.B. Obviously, don't write literally everything twice. It's a heuristic. Apply intelligently.”
A few basic sanity checks (some unit tests, a little discipline avoiding the abstraction high, whatever your flavour may be) is fine in many scenarios. Not all features require tons of monitoring and documentation. Everything in our line of work is a trade-off!
Every rule has exceptions, this isn't worth mentioning.
I suspect that algorithms as a framework demonstrates the structural aspects (e.g., how some searches are more extensive), but might hide the driving factors. Indeed, the article examples were almost all hacking personality, not technical or process solutions.
E.g., most over-engineered solutions are driven by fear, often itself driven by critical or competitive environments. Conversely, much of the power of senior/staff engineers comes from the license to cut corners afforded their experience. Or people use the tools they know.
You can't get to great by holding onto good. It's easy to go from bad to good, but takes some courage to toss good to start over for great. The lesson there is that we stand (hide?) behind our work, and we need to let go of it to improve it.
A meta-lesson is that managers need to deeply understand personal space of each developer and the social dynamics of the team before they can do their job effectively, and a key part of that is likely checking in with developers in a way that enhances their courage and self-observation instead of making them fearful and paranoid.
Absolutely. Jira is equally slow for everyone except for the ones who are allowed to skip it.
At the places where i managed to become star developer it was by hitting hard early and achieving great results, and after that you’d get a lot of slack cut for you which allows you to continue deliver at the star level with relatively light effort, and definitely much easier than say the mediocre grind i produce at the current place where “Jira” is truely slow with us.
I suspect this is the common tale of poor compensation: despite being a star developer, employers won't pay properly for this, just a mediocre annual raise, so you move on after a couple of years to another place, which offers you a huge raise (new starting salary compared to the previous job's salary). And the new place, while giving you a poor environment that hampers your productivity, still pays much better than the previous place where your productivity was much higher. And because of this common workplace dynamic, it's usually not worth it to put in much effort to be a star employee, unless you find the rare employer that actually rewards you for it. Is my guess correct?
My self observation is my performance is relative to how rushed I am to complete something. So I tend to perform much better at places that develop features on a 1-3 month cadence rather than somewhere that expects development progress to occur on a day/week basis. Even when the overall amount of time spent on development is the same.
I think having to show my unfinished work to people takes its toll on my confidence. I know it's crap because it's not finished, and I spend so much time talking about what's missing/poorly designed that I come away from demos thinking, "I just showed everyone how terrible I am at this."
My goal is always to be a star employee. Since I benefit from doing a great job just as much as my employers do.
>not worth it to put in much effort to be a star employee
it isn't even possible at the places like i'm currently in, at least for me. I mean, i'm still rated as high performing employee, just a regular one at that. You do your components in a very large platform product, make sure that they aren't source of pain, and just coast under the radar.
The difference between high performance in a trusted environment and high performance in an environment that lacks trust is significant. The problem isn't Jira - it's peer reviews and QA and gates to show both that the process is followed and that the quality is sufficient. The "high performer" is granted the trusted environment -- or at least the organization structures itself to minimize the pain to the "high performer."
So, someone can do the same amount of work but not achieve the same results because of the system.
So, then the logical answer is to throw the processes out! Get rid of Jira and PR and don't let quality block release! That works in a small environment, but the moment your company needs to generate SOC audits you will fail and you won't be able to sell. Or you will never be able to go public because your SOX auditor won't pass you. (And we won't even talk about PCI compliance!)
So, then, you end up in a push/pull. One of the part time jobs of management is monitoring the system and balancing the constraints. Here are some ways people have tried to balance the constraints:
* Pair programming. You reduce the theoretical output by 50%, but you fix that by putting QA and PR into the process. (You also have a feedback loop from two people reducing blocks and can stop interruptions by only interrupting half of the pair.)
* Mob programming. You reduce the theoretical output by 25% (assuming a team of 4) but you drop QA and PR and you always have someone available to unblock. If you have a team of specializing generalists, you always have the experience to solve the problem.
* Surgical team. You have a series of junior developers who support a staff engineer. In this model, the staff engineer figures out the hard parts and feeds the rest to a series of junior engineers. Junior engineers are also available to write tools and tests and qa. It seems like you have fewer people working on the key problems, but one person having it all in their head and unblocked continuously can compensate for the lack of theoretical output. (In this model, it is hard to hire seniors because one bad senior can tank the entire team.)
* Team of seniors. You still have the reduced output of the process, but since they work in a high trust environment, PRs can be glossed over and QAs are just there to cover the release management. (One new developer can slow everyone down until they're up to speed, and a bad performer can tank the whole team as it devolves into mistrust as processes have to be applied to all.)
* Hierarchical teams: A combination of seniors and juniors where the seniors take harder problems and juniors support seniors and take less complicated problems. mid-level are senior-light. These teams look like they're working closer to theoretical maximum but end up being slowed down by the processes of mistrust.
* Scrum Master/Project manager. For complex projects requiring a lot of interaction and a large number of people, take the money allotted to developers and give it to non-developers who take care of those parts. People complain about this here because it's often misapplied.
* Star Developer autonomy. For people who have proven themselves, the organization warps around them. It's like the surgical team, except these people are often tied to multiple teams. Instead, the other teams work being unblocked by these star developers or clean up the messes/maintain the work of the star developers after the fact.
All of these can be valuable. Some of them are not "agile." But the bigger the organization, the more difficult the problems get because they're problems of interaction and trust.
And none of them are about the amount of work, but rather the amount of perceived productivity by a given person.
One of the major projects I worked on was a virus genome analysis pipeline. Our initial funding was for single-segment virus analysis, but about a year into the project, our grant collaborators needed to demonstrate multi-segment virus support within two weeks. My PI and I agreed on a quick and dirty method that would produce the analyses needed and could be done in the allotted time. That piece of code was fundamental to how the whole pipeline worked, however, and so as the pipeline grew, it took on the shape of the "gun to the head" decision we had made to the point where other, more essential features of the pipeline had to be delayed so I could come up with more workarounds.
There were clearly other issues at play here (scope creep and lack of separation of concerns were huge). My time at the lab came to a close, but if I were to have a chance to continue that project, I would start over from scratch rather than deal with the baggage that that one "gun to the head" moment created.
I understand that it's a heuristic and not meant to be taken as 100% truth for every situation. I also understand that it's trying to avoid that "paralysis by analysis" that's so easy to fall into. I just question how useful it truly is as a heuristic, especially since it seems to go against the "write everything twice" algorithms presented in the rest of the piece.
My best projects were like that. I'd figure out something quick—some combination of reducing scope and doing "things that don't scale"—then spend time refining the conceptual design and interfaces, and finally rewrite the initial piece based on that new design. This can absolutely work better than just trying to write something "good" the first time around, but it looks wasteful to somebody superficially tracking individual "tasks" you're working on.
I think the author is presenting them as analytical tools that might or might not be useful depending on the situation.
Very often when faced with a difficult problem it's hard to know where to start attacking. Any idea, even if simplistic and wrong, can be useful to start gaining insight on what is going to work and why; even just refuting the original idea with a clear counterargument might suggest alternative avenues.
OT: this is IMO part of the reason why people like LLMs so much. Maybe the answer is trash, but articulating why it's trash gets you unstuck.
The author then recounts advice he gives to juniors, which is to stash the work and rewrite it, claiming that the rewrite will take 25% of the time and be 2x the quality. This is unsubstantiated, though. For juniors, this suggests it will help them develop their capability to reason about implementations of problems without needing to face a large number of them.
The author then gives another piece of advice, which is to ask for a solution to a problem and then, after the initial proposal, ask for a 24h solution. This is meant to generate "the real solution". He likens it to a pathfinding heuristic for reaching your goal quicker.
Overall the methods are not well discussed in terms of pros and cons, nor substantiated with experiments.
Opinion: I think they may help some juniors who need to build up experience and may become stuck in development patterns. But they would rarely be useful to develop someone to be a senior, if all they do is chase fast implementations. In a way the post gives conflicting advice: write twice and write better, and think twice and think about the fastest way to achieve the goal, instead of engineering a problem.
The author hasn't really convinced me of these approaches, and especially the last one smells of eXtreme Go Horse.
Care to explain?
Perhaps if you're at 0 experiments you can't really have an opinion? Which is the situation of the person I replied to.
A related old idea about writing/reworking software three times: "Do it. Do it right. Do it fast."
Startups provide ample opportunities to get experience with what the article calls "gun to your head heuristic". You have to decide what and how to compromise.
And, if you want your startup to be successful (not just hit your metrics/appearances, and then job hop), you can't just do it like school homework (where the only goal is to slip something past a grader, and forget about it), but you have to creatively come up with a holistically good compromise solution, given all the factors. If you do this well, it's creative magic that can't be taught, but can be learned, through experience and will.
The version I've heard is "Make it correct. Make it readable. Make it performant."
https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast
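A toy illustration of the three passes (Fibonacci is used purely as an example; none of this is from the linked page):

```python
# Pass 1 -- make it work: a direct transcription of the definition.
# Correct for valid input, but exponential time and no input checking.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Pass 2 -- make it right: same logic, but guard against bad input
# so failures are loud and early.
def fib_checked(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    return n if n < 2 else fib_checked(n - 1) + fib_checked(n - 2)

# Pass 3 -- make it fast: O(n) iteration, behaviour otherwise unchanged.
def fib_fast(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The point of the ordering is that each pass has a reference implementation to test against, so "fast" never has to be taken on faith.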
How experienced is this “CEO and engineer”?
If something cannot be coded within those 24 hours, something else is odd, not the feature. Having transitioned from SWE to DevOps and then leadership roles, most of my day is actually spent hearing all the reasons/excuses why "it cannot be done" and trying to eliminate them. My developers probably hate me for it, but I always push hard for an immediate first solution instead of days of soul-searching first, and over time we encounter and solve enough roadblocks (technical, social, educational, ...) that we end up with surprisingly fast (and good enough) solutions more often than not. That speed is a quality in itself, since it frees up time to come back to things and clean up messes without the shipping pressure mounting up over days/weeks - something working is already there on day two.
The trick is of course to _not_ sell this 24 hour solution to upper management ever, or else it will become a hell of a mess fast once this becomes the outsider expectation.
Rather than trying paths in the dark, first look at a map, then try a few paths.
The remaining 10% is rather straightforward.
The alternative is "write it once and be stuck with it". You want tech debt, cause that's how you get tech debt.
By definition, the first time you do something, you will learn a ton. So you've gained two things: 1) hard-won technical knowledge and 2) a bunch of sub-optimal code written before you obtained that knowledge. The way I see it, keeping that proof-of-concept code around forever is a terrible tradeoff - giving up #1 to save #2 at any cost. Code isn't that special, especially on the first pass.
Algorithms we develop software by (18.08.2024)
https://news.ycombinator.com/item?id=41284409