I'm not sure I follow. So you failed to measure software productivity in lines of code, therefore it follows that "There's No Such Thing as Software Productivity"? Don't you think that giving up after n=1 attempts at measuring software productivity might be a tad too fast to draw a generalized claim of impossibility? I might argue the real lesson learned is "Lines of Code are Not a Measure of Productivity in an Isolated, Toy Example".
I suspect this sort of thing gets promulgated because it kind of massages our ego, like yes, they can measure other sorts of productivity, but not ours, oh no, we're too complex and intelligent, there's no way to measure the deep sorts of work that we do! Which, yes, OK, we're not exactly bricklayers, but surely, if you had to, you could do better.
n_ary 3 days ago [-]
> it kind of massages our ego, like yes, they can measure other sorts of productivity, but not ours, oh no, we're too complex and intelligent, there's no way to measure the deep sorts of work that we do! Which, yes, OK, we're not exactly bricklayers, but surely, if you had to, you could do better.
Productivity is not a cut-and-dried stat for any kind of knowledge work.
For an admin, we could say they were more productive if they handled more cases than last month. But this does not account for tricky or ambiguous cases, which might have taken 2x the time to sort out; on raw numbers it looks like they were less productive.
For us, there are no specific goals (except arbitrary imaginary deadlines from clueless management, and sprint points) against which to correctly measure productivity. A junior engineer may spend 4h each day blasting out many lines of code; a mid/senior may spend more hours in reviews or meetings. A 5-line PR with test coverage is faster to review than an 800+ LOC PR with additional tests that has a significant risk of breaking something, so review count is also not a good indicator. I need to coordinate with a minimum of 12 people to get any new credentials or access to a specific env owned by another team, which doesn't count as productive but is unavoidable. How about that garbage meeting where I was just yawning because some new tech lead believes a modular monolith system would be better rewritten as event-based? Productivity measurement is hard.
To take your envy (I sense it in the quoted remark) somewhere else: think about how you would go about measuring the productivity of a CEO, PO, PM, Scrum Master, Agilist/Agile-Specialist, etc.
theamk 3 days ago [-]
Sure, productivity isn't a cut-and-dried stat, but it's not completely impossible to measure either. For a team, there are very clear goals: to make a product that customers like, and to keep improving it for a long time. The last part is especially hard to measure, so people make all sorts of approximations, some very bad.
For an individual, productivity is measured on multiple axes - new long-term features, throw-away code and prototypes, code maintenance, fire-fighting, code reviews, high-level design, outside-of-team communications, intra-team communications (for example helping teammates), archeology, specific parts of your project, probably others. In an ideal team, everyone is strong on at least one or two axes, so no person is worse than the others. In real teams, I've seen people who were weak on every single axis and did not seem to improve with time. I've also seen people who are great at everything, and it's extremely nice when such people exist - but they are very rare.
(Btw, re your specific example: if you weren't productive because you were waiting for others to approve your access then you were not productive. There is nothing complex about this, there were plenty of times when I've said, "I did nothing for project S last week because they still could not give me access")
kitd 3 days ago [-]
> For a team, there are very clear goals: to make a product that customers like, ...
With respect, that goal is far from clear. How do you measure how much they like the product? Have we achieved more "likes" this sprint than last?
Customers like a product because it removes their problems. QED.
feoren 2 days ago [-]
> With respect, that goal is far from clear. How do you measure how much they like the product?
Completely agree: focus groups, A/B testing, Net Promoter Score, 5-stars, like/dislike -- all of those systems are notorious for being unreliable in various ways, and optimizing for them is the cause of many sub-optimal decisions and Prisoner's Dilemmas in many industries.
> Customers like a product because it removes their problems. QED.
Your first point was "actually the world is really complicated" (strong agree) and your second was "actually the world is super simple" (strong disagree). Customers like products for wild reasons, unknown even to themselves. I've heard some slot-machine addicts get irritated if they win the jackpot, because it breaks them out of their state of flow. You could argue "well clearly that lack of flow was the problem slot machines are solving" but then you're just using circular reasoning: if a customer likes a product, it must be solving the problem for them that they previously did not have that product. I don't think you could have looked at the slot-machine addict a year before they started gambling and deduced that they had some sort of problem that slot machines needed to fill. There's often no clear link from a-priori diagnosable "problems" that people have to the products that they buy. In the universe populated by logical econs that do act this way, store shelves and advertising look very different.
johnfn 3 days ago [-]
> To take your envy (I sense it in the quoted remark) somewhere else: think about how you would go about measuring the productivity of a CEO, PO, PM, Scrum Master, Agilist/Agile-Specialist, etc.
I have no envy; I'm a software engineer as well. I just don't seem to struggle, as others claim to, in measuring productivity. I find it fairly straightforward to see that some people are more productive than others, at least in my workplace where we can hold most things constant (e.g. meeting count, managerial ability, etc, etc). Yes, there may be external factors, and that's unfortunate. Yes, it's not fair that some people are more productive than others despite putting in half the effort. But I'm not going to stick my head in the sand and pretend it's invisible.
WgaqPdNr7PGLGVW 3 days ago [-]
> I just don't seem to struggle, as others claim to, in measuring productivity.
Because you are measuring at a very broad and basic level.
Steve is more productive than Susan.
Great. How much more productive? Can you turn it into a number?
Can you still do it consistently when Steve and Susan are in different teams in different parts of the organisation trying to achieve different goals?
I've done DB upgrades that took 10 minutes and I've done DB upgrades that took 3-4 months. What changed was not my productivity but the nature of the problem. Yet from the outside they were both just DB upgrades.
If Susan had done the DB upgrade in 12 weeks could we confidently claim that Steve could have done it in 11 weeks? Steve hasn't even done a DB upgrade since he joined the company. Perhaps Steve could have done it in 10 minutes?
theamk 3 days ago [-]
I don't think anyone can get numbers, but partial ordering is much easier.
If Steve and Susan are in different parts of the organization, the answer is "cannot compare". If they are doing different jobs, the answer is the same.
But every once in a while there is a scenario where you can compare people easily.
There is a weekly rotation to be the support person for another team. During his week, John always answers questions quickly and to the team's satisfaction. Meanwhile James struggles to answer them and cannot troubleshoot the product his team is writing. This has been going on for multiple months and hundreds of questions each, so it's not a "bad week" or an unlucky fluke. We now know who is better at answering questions about the product.
John and James are doing DB migrations; they have done many dozens of them. The migrations are assigned randomly. But John usually finishes his migrations with no problems, while James often causes outages or missing data. A few times James took over two months to migrate, so the task was taken from him and given to John, who had to discard everything James did and migrate everything from scratch. Now there is a migration for a very important client and the CEO is fed up with random assignment... whom is he going to choose?
WgaqPdNr7PGLGVW 2 days ago [-]
> I don't think anyone can get numbers, but partial ordering is much easier.
Agreed.
> If Steve and Susan are in different parts of the organization, the answer is "cannot compare". If they are doing different jobs, the answer is the same.
These are the situations where we would get the most value from the metrics though.
The team level already has an Engineering Manager or Tech Lead who can directly deal with team level problems.
gregors 2 days ago [-]
What your scenario doesn't address is that while John finished his migrations on time, James designed the flagship order-processing pipeline, something that John could never pull off.
Or maybe while John is technically adept, he's also a huge jerk and belittles people at standup, while James is the quintessential communicator with jr devs, etc.
Real life is messy. I've seen more people get replaced due to attitude or teamfit issues than specifically due to technical incompetence.
theamk 2 days ago [-]
Your first scenario: possible, but quite unlikely. If James cannot even perform migrations without causing outages or dropping data, chances are that the "flagship order processing pipeline" he made is similarly bad, and even if it works, it likely has outages and missing data. I've never seen a developer who can do hard tasks well but is genuinely bad at simple tasks. (They may refuse to do the simple ones, but if they start on them they'll do them well.)
Your second scenario is unfortunately very likely: people are jerks, and if they are also high performers (or high bullshitters) then they can get away with it.
Either way, I fully agree no one should be firing/promoting people based on a single metric, even if that metric is very relevant to the job description. That doesn't mean that "there is no such thing", or that if you really need to get that DB migration done, you want to choose the "quintessential communicator with jr devs".
rob74 3 days ago [-]
I would argue that here you are talking less about productivity and more about basic competence?
johnfn 3 days ago [-]
> Great. How much more productive? Can you turn it into a number?
This is moving goalposts. OP's argument was "There's No Such Thing as Software Productivity", not "You Can't Convert Software Productivity into a Floating Point Number With 3 Decimals of Accuracy."
travisgriggs 3 days ago [-]
"There's no real dependable/reproducible single linear measure of software productivity."
Would that be a fairer assertion?
gizmo 3 days ago [-]
By that measure there also isn't a real dependable/reproducible scalar that measures athleticism. Nonetheless some people are clearly more athletic than others. Also within a single sport we can easily see that some players are better than others. Here too you could object that people who play offense should not be compared to players in defense. Or that it's not individual players that matters but the team as a whole. And yet, we can still figure out easily who the star players are.
WgaqPdNr7PGLGVW 2 days ago [-]
> This is moving goalposts.
Then I shall move the goalposts. Can you address my shifted goalpost?
I personally did not interpret the author as literally meaning there is no such thing as software productivity but I agree the way he wrote it was confusing and could be interpreted that way.
Even in his toy example he clearly stated Peter did a better job than Frank.
InsideOutSanta 3 days ago [-]
"What changed was not my productivity but the nature of the problem"
I think that's the source of the problem: it's impossible to measure the "work" required to solve most software problems. If you tell me to carry a stone up a hill, I can put that in a formula and know exactly how much work I'll have to do. But if you give me a ticket to do a DB upgrade, I can, at best, make an educated guess.
So by the time I close the ticket, how much work have I done, and how do I know whether the time I've spent is proportional to the work I've done?
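(To make the contrast concrete: for the stone there really is a one-line formula. Assuming, say, a 20 kg stone carried up a 50 m hill:

    W = m·g·h ≈ 20 kg × 9.81 m/s² × 50 m ≈ 9.8 kJ

There is no analogous formula that takes a ticket description and returns how much work the DB upgrade will require.)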
nradov 3 days ago [-]
I've been on all sides of this issue and I guarantee that as an individual contributor you lack the perspective and visibility into all of the factors that make up net productivity for your colleagues. Once you get a little more maturity and experience you'll probably understand this better.
hoppp 2 days ago [-]
There is a difference between observing productivity and measuring it. To measure it, it must be quantized into units, which is hard, but you can observe it as is and come to a good conclusion too, without measurement.
ipaddr 3 days ago [-]
The ability to look productive is a different skill. The ability to create any impression is a skill some have. Creating fires and putting them out like a hero is another skill.
What are your eyes measuring? Are you being fooled?
theamk 3 days ago [-]
If you are an IC, it's pretty easy to see whether your teammates are "looking productive" vs "really productive", as you know all the code and how it works.
For other teams yeah, it's harder, or maybe even impossible if the teams are isolated.
InsideOutSanta 3 days ago [-]
> If you are an IC, it's pretty easy to see whether your teammates are "looking productive" vs "really productive"
I'm not sure how true this really is. We all like to think that we're good at estimating other people's ability, particularly if we work closely with them, but how can you validate that impression?
Something that happened to me at one point in my career was that I joined a team that was half new hires with little experience, and half long-term employees with a lot of experience. Because the long-term employees were typically extremely busy, I ended up answering a lot of questions from the entry-level devs. So I spent time documenting the system, doing trainings, doing 1-on-1 code reviews, and so on.
During the performance review, feedback from the entry-level devs was that I was highly productive, and feedback from the long-term employees was that I was a slacker.
jchw 3 days ago [-]
No offense but reading this comment with the utmost attempt at interpreting it in good faith, it kind of sounds like you're just going off of vibes and can't actually quantify productivity any more than anyone else can. Otherwise... How exactly are you quantifying it? If you had to show your work, could you?
johnfn 3 days ago [-]
I dunno, the opposite side seems equally absurd to me. Do you work at a software company? Do you have coworkers? Are you honestly telling me that, gun to your head, you couldn't say that some are more productive and some are less? That if you had to choose a co-founder for your next startup, that you would have NO idea where to begin, that any of them would be equally as good as the next?
jchw 3 days ago [-]
I mean look, some people seem to be extremely productive, and sure, I could say that those people are almost surely more productive than others. It's still genuinely very hard to quantify if those people are actually vastly more productive, or if it just looks like that because they produce more obvious artifacts of their productivity. Hell, what is productivity, is it more productive to fix 1000 bugs or to be the tech lead on a product that reaches 1 million ARR? Does it matter what bugs?
I stand by what I said, whether people like it or not.
Including the part about going off vibes. Please explain how this "you can just kind of tell" mentality is not literally just going off of vibes?
(And I wouldn't choose a cofounder only based on productivity, anyways. I'm not even sure that would be the main criteria.)
johnfn 3 days ago [-]
How would you choose a cofounder? I would call whatever metric you use to determine "would that person be a good cofounder" productivity. (OK, sure, let's narrow it down and say that you were tasked with choosing a purely technical cofounder; perhaps in this hypothetical you're helping your non-technical friend find one?) Do you still believe you couldn't do better than chance?
jchw 3 days ago [-]
Well, a lot of the factors explicitly have nothing to do with general productivity at all, such as alignment. The goal with regards to selecting for productivity would be "someone who seems productive enough" but getting hung up on exactly how productive feels like a mistake, as aside from being unquantifiable, that isn't going to be the only or probably even main factor that decides the fate of the endeavor. Productivity is not the most challenging problem in software development. In my opinion, as far as factors for success for a technology endeavor goes, even overall technical competence winds up factoring in fairly low at the end of the day. What's important is meeting the threshold you need, not getting as high of a score as possible.
I should clarify that I actually agree with you if what you're saying is that the best measure of productivity we have is literally just going off your gut, but my argument is that this gut feeling is terrible. This is in large part because humans are biased, our gut feelings are swayed by things that simply shouldn't factor in. We tend to have more positive opinions of people that we think are like us and we sometimes wind up having negative opinions of someone based on stupid things like disagreeing with them on something that is ultimately irrelevant to whether or not they are productive.
And to me, the ultimate nail in the coffin is really in the question of what really constitutes productivity. Productivity is supposed to measure the efficiency of producing something, which already has pitfalls in and of itself when dealing with things that do have discrete, measurable indicators of progress, but programming doesn't, it's not even always obvious if progress is forward or backward sometimes. What's most important, performance optimizations, disaster planning, features, robustness, minimizing resource utilization? The best answer you can generally say is "It depends," and moreover, everyone will have a different set of competencies they're best at, so a person doesn't really have some single "productivity value" you could summarize them with. Yet, all of those things are pretty important for any serious technology organization, so you would want people with a variety of different affinities.
At that point, it starts to beg the question of whether or not attempting to directly measure software productivity is a worthwhile endeavor. I'd argue not.
robertlagrant 3 days ago [-]
I suppose the question is: can you supply a word you would use when deciding on who to hire for a technical role? Does anyone off the street contribute the same amount of that word, or would you be selective?
jchw 2 days ago [-]
Isn't it obvious? Hiring people who are actually competent is hard. It's not just me, major corporations have found the same issue.
I do believe that you can use basic tests to determine whether someone has more technical competence than some random guy off the street, but it only works up to a point. If you try to test deeper and deeper knowledge, you might create a mirage where someone who isn't very competent appears competent because you just happen to hit on strong spots on their very sparse experience. (My imposter syndrome reasons that this is why I passed the Google interview so easily some years ago.)
But for example, interviews don't even really bother trying to determine any direct proxies for productivity. Usually, they just stick to trying to determine technical competence, communication, ability to work on a team, and evaluate their history of technical accomplishments. A list of accomplishments is evidence of productivity, to a degree, but not having a long list is not evidence of a lack of productivity, and neither will tell you what will actually happen when you hire the person. References will at least give you someone else's gut feelings (or lies) regarding someone's productivity, but any reasonably competent person is going to have people who can vouch for them even if their productivity is actually not very impressive. It's not like there's some huge punishment for embellishing someone when you're being interviewed as a reference for them.
In the context of hiring people, gut feelings are probably the best thing we have, but they're subject to horrendous bias. Even if you are highly enlightened and can recognize your own biases with a great degree of humility, this is not generally the case for most people. Because of that, Google's interviews have a lot of layers of abstraction designed to eliminate bias from the process, but then again, they also wound up doing a study where they hired people who were ultimately turned down for the job and found that those people had around the same chance at succeeding at Google as the people who were hired. (Can't find the source for this because Google Search is useless nowadays. Maybe their hiring process is to blame.) And yet, there's no doubt that even with this in mind bias will still impact the interviews, because the interviewer does ultimately have to transcribe the interview and they can choose to omit or paraphrase things in a way that makes it look worse to the committee overseeing things; likewise, you can "correct" what the person is saying if you felt it was "close enough", or omit entire segments that looked weaker. Sure, you're not supposed to, but I would bet you 10:1 that even people not intending to be biased wind up doing this. Maybe they're second-guessing themselves when they do it: was it my fault they didn't answer better? Were they saying it right all along and I just wasn't understanding?
I was actually involved in a lot of interviewing and hiring especially early on in my career. I still believe gut feeling was the best instinct I had, but there was a time when I didn't agree with a hire and was proven horrifically wrong very soon after. Granted, that mostly comes down to how you evaluate someone's technical competence, not necessarily productivity, but I think the point stands either way.
robertlagrant 2 days ago [-]
Sorry, that's a bit of a wall of text. I think the most salient thing to the topic I can pick out, without creating a corresponding wall of my own, is this:
> there was a time when I didn't agree with a hire and was proven horrifically wrong very soon after
Based on what? Did they turn out to be unproductive, but in a good way? What was that way?
jchw 2 days ago [-]
I mean, not much to say. They turned out to be productive, just fine. They had no problems grasping the codebase and what they didn't know they were able to learn. I had to admit to them that I didn't vouch for them during the interview process and was simply incorrect.
WgaqPdNr7PGLGVW 3 days ago [-]
> Don't you think that giving up after n=1 attempts at measuring software productivity might be a tad too fast to draw a generalized claim of impossibility?
Software developers should look at anyone claiming they can measure software productivity as a snake oil salesman.
We have seen hundreds of attempts over the years and they have all "failed". More accurately they all have large error bars and biases.
Researchers can and should continue looking into how to measure software development productivity. It is likely over the next few decades we will start to understand how to measure it appropriately.
michaelmrose 3 days ago [-]
Bricklayers have measures that are usable by people who know little to nothing about the profession and are incapable of doing the work. This is what people want: an objective measure that can be applied by people functionally incapable of actually doing the work.
Where the work isn't a repetition of existing work, no such measure should be expected to exist.
Any creative work is going to suffer from the same problem.
If you asked a bricklayer to develop a better workflow for his fellows to work more effectively in a particular situation, it would be the same. Are you going to measure words per minute?
gregors 2 days ago [-]
The problem is that corporate types don't see programming as creative work. Type faster!!!! If you push the buttons faster the product gets built faster!
Also imagine that every other week, just as the bricklayer has half a wall built, a PM comes out and says, "tear this all down and move it 3 feet to the north", and then the week after, "now move it east 1 foot, and why is it taking you so long?"
perrygeo 2 days ago [-]
> "Lines of Code are Not a Measure of Productivity in an Isolated, Toy Example"
Calling this phenomenon a toy example is a bit out of touch. I've seen this every single day of my 25 year career. Value is produced by solving problems, not writing code. The solution with fewer lines of code, fewer new abstractions, lower complexity (yes, complexity IS objective and quantifiable) is invariably the best solution. Solutions that involve no code at all are the gold standard.
Adding/subtracting the right code produces value. But adding the wrong code decreases value. The logic of "productivity as code" contains a fatal flaw - it ignores this and assumes all code is right. Reality: everything hinges on the right vs wrong distinction which is inherently subjective. Code is too often a net liability - you must account for the risk!
By contrast, the "productivity as solving problems" approach is entirely objective. You can observe the problem, test hypotheses about the cause, and measure the situation after the intervention. A well stated problem has no ambiguity.
I don't see any support whatsoever for the "productivity as code" idea. It's empirically false and lacks logical consistency.
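Since I claimed complexity is quantifiable, here is a minimal sketch of the kind of thing I mean - counting branch points per function with Python's ast module, roughly what cyclomatic-complexity tools do. The node list and names below are my own simplification for illustration, not any particular tool's exact definition.

    import ast

    # Node types that add a decision point (a rough approximation of
    # cyclomatic complexity; real tools count a few more cases).
    BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)

    def complexity(source: str) -> dict:
        """Return {function name: 1 + number of branch points} for a module."""
        scores = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                branches = sum(isinstance(n, BRANCHES) for n in ast.walk(node))
                scores[node.name] = 1 + branches
        return scores

    print(complexity("def f(x):\n    return 0 if x < 0 else x\n"))  # {'f': 2}

A number like this (alongside lines added and new abstractions introduced) is crude, but it is objective: two candidate solutions to the same problem can be compared on it without anyone's opinion entering the picture.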
strulovich 3 days ago [-]
Ok, but then what is a way to do it?
The text gives an example of the core problem, and to argue differently you need to engage with it.
In practice, I've seen many attempts at measuring productivity, but once you dig into them, you see they are just abstraction layers over something similar to lines of code.
I have yet to see an idea that sidesteps the core issue described in this post. Also, it applies to many types of work, and software is not unique in any way.
bruce511 3 days ago [-]
Happy Customers.
All other measures are a proxy for happy customers.
Actually, happy customers is also a proxy (the real measure is profits) but measuring profits directly (in the short term) can lead to decisions that have adverse long term effects. It's too easy to increase profits in the short term by avoiding long-term expenses.
So, if you're in the business of software, the goal is happy customers. (And I use the word Customers carefully here. Not just Users who pay nothing, but Customers who spend money.)
In a business context, it's really the only thing that matters. But, of course, it can be hard to measure (are they Happy?) and relies on multiple disciplines. Production (coding), Marketing, Sales, Support, Documentation, Training - all need to be working well to make it work.
Ultimately if the big picture doesn't lead to Happy Customers (again, I stress, in a businesses context) then no-one is "being productive."
Cthulhu_ 2 days ago [-]
While customer satisfaction is a decent (albeit murky) metric, value generated / profit is ultimately a better one. Of course, measuring developer productivity is a means to that end: how much did or will it cost to reach this value generated?
Anyway there's this adage that once a metric (like productivity) becomes a target it ceases to be a useful metric. But this doesn't seem to apply for value / revenue much, so I suppose it's good to keep an eye on this vague productivity metric.
hammock 3 days ago [-]
It’s demand. How much demand is there for your product
As the other commenter pointed out, happy customers means nothing if they aren’t actively paying you
n4r9 3 days ago [-]
Arguably the business is aiming for Paying Customers, not necessarily Happy ones :)
bruce511 2 days ago [-]
I chose happy for a reason :)
So the business is chasing profits. In the short term that means customers paying money - any will do (happy or unhappy).
But in the long term, happy is the key. Happy customers are the single biggest marketing tool you have. Happy customers promote and recommend you. Unhappy customers do the opposite (and are more effective at doing so.)
So, if the metric stops at Customers then you are greatly missing the long-term value. Since a good business is planning for the long term, not just right now, Happy Customers is the correct metric.
Remember, you ultimately get what you measure (no more).
Cthulhu_ 2 days ago [-]
> Happy customers are the single biggest marketing tool you have. Happy customers promote and recommend you. Unhappy customers do the opposite (and are more effective at doing so.)
And yet, there are some irrational counterexamples: there are video games that have huge detractors while having huge financial success. Negative reviews on Steam, "4000 hours played". The metrics say they aren't happy with the product... but they still play it, talk about it, may have pulled in friends to play it, and spend money on it.
People can be unhappy about a product but still pay for it and promote it, counter-intuitively. Of course, the unhappiness is what they say; their behaviour says otherwise, so for the sake of the metrics they would be considered happy, I suppose?
virgilp 2 days ago [-]
When a measure becomes the objective, it ceases to be a good measure.
Even if you could perfectly measure customer happiness (very hard, as you note) - it's relatively easy to make customers happy by giving them more value than what they pay for. Sure, that may cost your business more money than what it makes with said customers, but hey, who cares, "profit" was not the metric...
(and as you note, if you make "profit" the metric, that has its own set of challenges - e.g. the optimization towards short-term profit in detriment of the long-term sanity, which is what we observe in a lot of corporations).
bruce511 2 days ago [-]
>> When a measure becomes the objective, it ceases to be a good measure.
Yes and no in this case. Yes, you can make customers happier with more value and more overhead (i.e. more support staff and so on).
Yes, in the short term this might reduce profit. If you go too far down this road you might go bankrupt. No measure works if you don't use the "can we afford it" metric.
But nothing turbo-charges profits (in the short and, more importantly, the long term) like happy customers. Ultimately they pay more, they pay more often, they encourage others to pay.
If you optimize for happy customers, and stay solvent, you have the foundation for a solid long-term business.
I will add that starting with Happy Users (who get stuff for free) and turning them into Happy Customers later is really really hard. Simply giving the thing away (or charging so little it amounts to the same thing) is not what I'm suggesting. You can start with a lower price, yes, but regular price hikes are part of the process until you find your natural price level.
The interesting corollary to this approach seems to be that productivity barriers are largely external.
A potential risk seems to be feedback systems where job satisfaction is determined by high or low pay.
eacapeisfutuile 3 days ago [-]
As already mentioned, people measure value return by revenue gain. It is irrelevant to attribute it to some construct like a line of code.
nemomarx 3 days ago [-]
profit generated I think is the high level one, and then you want to dig from there into how much the software development contributed to this.
nomel 3 days ago [-]
That's a very tricky thing to quantify, especially with "unsung heroes". If my work is in preventing problems, the guy who fixes problems will be seen as the one who contributes more to profit, since his impact is directly observed/measured.
This is something that one of the orgs I worked for eventually realized. The people f'ing up, and then fixing their mistakes, were the ones getting promotions/bonuses/raises, because they were the ones interacting with all the execs.
foobiekr 3 days ago [-]
Revenue per employee $ spent.
feoren 2 days ago [-]
> they can measure other sorts of productivity
Spoiler: they can't do that either. How do you measure the productivity of a bridge engineer designing a new suspension bridge? Number of struts placed on the blueprint? Even with more manual labor, it's very hard. Who's the most productive out of these three:
1. A carpenter who builds a house from a blueprint in 6 weeks. It collapses after 3 years.
2. A carpenter who builds a house from the same blueprint in 12 weeks. It stays up, but has visible defects that affect its value.
3. A carpenter who builds a house from the same blueprint in 20 weeks. It stays up for over a century and eventually becomes a historical landmark due to its lasting beauty and careful construction.
Which one was more productive?
nradov 3 days ago [-]
Function point analysis works well enough for measuring software productivity in most domains, provided that quality is held constant. Like other software productivity metrics it's only meaningful at the level of a complete product team and worse than useless for evaluating individual team members.
So you can measure productivity with a reasonable level of accuracy and consistency, but then what? For most organizations it's not actionable so calculating that metric ends up as another form of waste.
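For anyone who hasn't seen it, the arithmetic is roughly the following - a toy sketch using the average-complexity IFPUG weights as I remember them; a real count distinguishes low/average/high complexity per item and applies a value adjustment factor on top, so treat the numbers purely as illustration.

    # Rough average-complexity weights per counted item (illustrative only).
    WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    def unadjusted_function_points(counts):
        return sum(WEIGHTS[kind] * n for kind, n in counts.items())

    # Hypothetical release: 12 input screens, 8 reports, 5 queries,
    # 4 internal files, 2 external interfaces, delivered in 10 person-months.
    fp = unadjusted_function_points({
        "external_inputs": 12,
        "external_outputs": 8,
        "external_inquiries": 5,
        "internal_logical_files": 4,
        "external_interface_files": 2,
    })
    print(fp, fp / 10)  # 162 function points, 16.2 FP per person-month

That last number is the team-level productivity figure.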
robertlagrant 3 days ago [-]
> Function point analysis works well enough for measuring software productivity in most domains, provided that quality is held constant
Well enough for what? And how do you measure quality?
rob74 3 days ago [-]
This is not an isolated toy example, it's reductio ad absurdum (https://en.wikipedia.org/wiki/Reductio_ad_absurdum) - and to this day I haven't seen a way to measure software productivity (this might also apply to other knowledge work) that is resistant to it...
EarthBlues 3 days ago [-]
I think people get hung up on wanting to collapse developer productivity into a single dimension, usually for stack-ranking purposes. This, I think, is always going to punish good engineers and reward bad ones to some degree.
Measuring developer productivity should, in my opinion, have one dimension for speed, one for quality, and one for user impact. LOC can be fine as a measurement for speed, you just don’t want to look at it in isolation. You would want to also measure, for example, escape rate and usage for the features the developer worked on, and be willing to change or refine these if circumstances require it.
You also need to look for different profiles based on the developer’s level of seniority. A senior dev probably shouldn’t be writing as much code as a contributor, but their user impact should be high, and escape rate low. Analyzing differences between teams is important, as well. A team that has a lot of escapes or little user impact probably has issues that need management attention, and may not have anything at all to do with individual developer productivity or ability.
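To make that concrete, here is a purely hypothetical sketch of the per-developer rollup I have in mind - the field names and numbers are made up for illustration, not a tool or threshold I'm recommending:

    from dataclasses import dataclass

    @dataclass
    class QuarterlyProfile:
        """One row per developer; each axis is read separately, never collapsed into a rank."""
        loc_merged: int       # speed proxy (only meaningful alongside the other two)
        escape_rate: float    # quality: bugs that reached production / all bugs found
        feature_usage: float  # user impact: share of active users touching their features

    junior = QuarterlyProfile(loc_merged=4200, escape_rate=0.06, feature_usage=0.15)
    senior = QuarterlyProfile(loc_merged=1100, escape_rate=0.02, feature_usage=0.60)
    # Different profiles are expected at different seniority levels, per the paragraph above.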
In brief, the numbers are there to help you make better management decisions, not to relieve you of having to make them.
JimDabell 2 days ago [-]
I agree. I know from looking at a story that if Alice picks it up it’ll take her about a day and be decent quality, but if Bob picks it up it’ll take him about a week and be worse quality. And I know that this will be pretty consistent week-in, week-out.
Developer productivity exists and I am totally comfortable describing Alice as being more productive than Bob. The fact that there isn’t a good, generic way to distill this into particular values does not mean that the phenomenon doesn’t exist.
We’ve all worked with Alices and Bobs. Claiming there’s no such thing as developer productivity is denying what we’ve all experienced. We aren’t special snowflakes whose work is beyond the ken of mortal men. Our work has value and sometimes we produce a lot of value in a particular time period and sometimes we produce a little. This is productivity.
fumeux_fume 3 days ago [-]
You certainly did fail to see the idea being conveyed by a simple, but poignant anecdote. If this is all you have to say, I might argue the real ego being massaged is your own; by yourself.
voidhorse 3 days ago [-]
You can say this about any discipline. The root of the issue is that productivity for productivity's sake is meaningless, and it makes no sense to measure productivity as a general property when outputs vary.
A tire factory has a distinct, singular goal: produce tires. It does this continually. Productivity is meaningful, but only in relation to a target that is typically specified by externalities (e.g. amount of demand)
A software company is usually not in the business of producing consumable commodities so this kind of measurement does not make sense. It can make sense to measure productivity during a period for delivering a particular piece of software within a given time bound, but once it's delivered, productivity becomes meaningless. You always need to understand productivity in relation to some purpose and I don't know how these knuckleheads who think this abstract idea is basically like a concrete measurable essence, like mass, or liquid, got leadership positions.
thuanao 3 days ago [-]
Seems to me you’re talking about measuring software. Once we pick a measure though, we can calculate output/input (productivity).
Imagine two people tasked with producing the same software, or software satisfying the same requirements or test suite. What would you call the person who produces it faster? More productive?
Measuring software in financial terms or lines of code might not be the right measurement of software in all situations, but surely we can measure time and cost to produce software or software that satisfies equivalent requirements.
hammock 3 days ago [-]
You’re responding to the headline, not the article. The article has a plainly stated thesis: “Even if it could be measured, productivity in software does not approximate business value in any meaningful way”
theamk 3 days ago [-]
The article contains this sentence, yes, but the rest of the text does not support it.
The article argues that lines-of-code is a bad metric, and no one disputes this.
thuanao proposes measuring "number of completed projects", and I think it's a pretty good metric in some situations which don't involve long-term maintenance.
One example I can come up with is data science: two engineers are given identical requirements to ingest the data (once) and calculate certain metrics. If the assignments are identical, we can calculate and compare their productivity, in "projects/month"
lucianbr 3 days ago [-]
Long-term maintenance is a confounding factor of course. People sometimes write software faster with the downside that it will be harder to maintain.
So remove the confounding factor (at the same time removing most projects we would like to apply this to) and it becomes an easy problem?
But there are many confounding factors. Your data science project results will be different (different metric values output) and you don't know which one is correct, or closer to correct. Now how does the "projects/month" number look?
Or maybe one of the engineers uses way more compute-hours for getting the job done. Their projects/month number is better, but is that really a better outcome? Compute is not free.
By the time you remove most confounding factors, the productivity measure will only apply to an insignificant number of projects.
taberiand 3 days ago [-]
It depends on which, in the coming months and years, turns out to be more stable, maintainable and extensible.
Buttons840 3 days ago [-]
Isn't the output, the singular goal, always money? (As sad as that may be.)
Rury 3 days ago [-]
Money is just a means to obtain goods and services. So no... money isn't really the goal.
fellowniusmonk 1 days ago [-]
Money isn't the goal.
It's much more important.
It's the measure, the scale, the environment, and the terms of exchange.
It's the simplest globally agreed on proxy for the transmission of everything that exists.
Thus money is a proxy for everything, no matter how abstract or concrete: freedom, self-determination, the ability to bring complex things in your imagination to fruition by enlisting the help of others.
Money is nothing but...
If I was a fish, money would be my water and my gills.
Language & Communication itself is even less than money, it's just air vibrating or scribbles on paper, it has no purpose or meaning but what we give it.
It's also the fundamental operating system and protocol of both individuals and humanity.
Money is the knot that cannot be untied.
Goodhart's law is everywhere and in all things.
Buttons840 3 days ago [-]
Yes, but people lose sight of this.
I suppose in the context of this thread, we can look at the productivity of a system that includes money or does not.
A small system, like a tire production line, can measure its productivity in terms of money, which is outside the system. But a large system, like society, includes money, and cannot measure its own productivity using internal things.
deepsun 3 days ago [-]
Money is the measure of usefulness.
By the way, why is the money=bad sentiment suddenly so popular? I thought the USSR example loudly showed us what happens when people think that money is evil.
bovermyer 2 days ago [-]
"Money is the measure of usefulness?"
Good lord. There's so much wrong with this statement that I doubt I can meaningfully respond to it. I'll try, though.
If we have to reduce measures of usefulness to a single metric, then why not percentage of users who respond favorably?
I daresay that people find air quite useful, and for the most part, air is free.
deepsun 2 days ago [-]
Easy -- users often don't respond favorably to MAJOR spending categories that are absolutely required. For example -- there's big negative sentiment about healthcare and its costs, but I rarely hear favorable responses like "thank you, country, for the healthcare we have".
Or "thank you military that we haven't got invaded yet" (we are all users of the military). That doesn't depend on the country/region.
"Users" are too short-sighted to invest in major spending categories. Only the countries who can reasonably push their population towards long-term investments still exist.
bovermyer 2 days ago [-]
Your obsession with framing everything in terms of money is quite remarkable.
deepsun 2 days ago [-]
Money is a bad measure of usefulness, but it's better than any other working measure.
bovermyer 2 days ago [-]
Fascinating.
I disagree completely, of course. I imagine our backgrounds are wildly different.
deepsun 2 days ago [-]
Yes, seems like it. I grew up in USSR, and now appreciate capitalism a lot.
Same thing with democracy by the way: yes it's a bad and stupid system, but I've seen the alternative. Many people expect democracy to make ideal and fair decisions, while I see it as just a protection against the most egregious and blatant violations. E.g. the US has "Deficient democracy" rating (only two parties), but it's still infinitely better than any dictatorship.
bovermyer 2 days ago [-]
I grew up in a combination of the USA and New Zealand, so I was raised in democracy of multiple varieties. The US's version is nowhere near as good as NZ's.
Political systems don't define my worldview, though; they largely exist in the background for me. This is probably because I've never had to worry about a political system killing me before. Trump's ascendancy may change that, but for the time being, it's still just noise.
Capitalism and whatever variation of communism that the USSR had are not the only ways to organize society. Even the USA doesn't have pure capitalism; if it did, there would be no government regulations whatsoever.
Given that, I'm curious about why you focus on currency as such a central driving metric for everything. I don't believe it's just because of political background. What else might make money so important for you?
Buttons840 3 days ago [-]
Suddenly? Who first said money was the root of all evil?
deepsun 2 days ago [-]
> The phrase "money is the root of all evil" is a common saying that originates from the Bible, specifically 1 Timothy 6:10, which actually states "For the love of money is the root of all evil," meaning it's not money itself that is evil, but the excessive desire for it and the actions people might take to acquire wealth that can lead to negative consequences like greed and unethical behavior.
We in our company don't do anything unethical nor unlawful, but love money very much, and are very glad other people find our work useful.
PS: Just remembered that many people who respect the Bible, for example members of The Church of Jesus Christ of Latter-day Saints, don't have any problems with money.
fellowniusmonk 1 days ago [-]
1 Timothy 6:10.
Kinds. All Kinds.
musicale 3 days ago [-]
When companies focus on profit at the expense of all other goals, the result is typically disadvantageous to customers, employees, and society at large.
See also: crapification; financialization; parasitic private equity
I can't speak for HN, but I imagine some readers are more interested in the benefits of cool technology beyond putting more money in the pockets of investors and management.
deepsun 2 days ago [-]
> When companies focus on profit at the expense of all other goals, the result is typically disadvantageous to customers, employees, and society at large.
I grew up in USSR and saw with my own eyes what happens when personal profit is not the #1 focus.
Yes, _commercial_ companies should focus on everything else as well, but profit must be #1. And society steers businesses towards good things by implementing laws that businesses must follow.
Commercial corporations _exist_ to make money for shareholders. It's written in laws and company bylaws. That's exactly what society decided commercial companies should do. If a _commercial_ company puts something else before profits, it can be sued by shareholders (lawsuits follow laws, aka boundaries set by society).
And by the way, we already have a framework for your idea: companies can register as "non-profit" or "public benefit". So your suggestion is easily implemented today by forbidding commercial companies and only allowing non-profits or public-benefit companies.
lanstin 2 days ago [-]
That works out OK as long as they are one of a large number of producers, competing equally, without monopoly power or excessive physical power, for a large number of consumers free to pursue their own best interest; a certain amount of information symmetry is also a necessary axiom.
If instead you have a small number of monopolies with legislative capture, then you need them to be virtuous.
csb6 3 days ago [-]
> I would argue that what good software developers do is remove problems. The opposite, in fact, of production
But something is being produced - it is version 2.0 of the software. This is an artifact that is then shipped to users or deployed to a server. Peter’s solution fixed the issue and did not (seemingly) create further maintenance burden, which would have taken attention away from other tasks, i.e. reduced future productivity.
I agree that metrics for programmer productivity are often useless (e.g. using lines of code is a bad idea for obvious reasons), but it seems silly to claim that the entire concept of productivity does not apply to the production of software.
foobiekr 3 days ago [-]
Silly and convenient. No one takes the claim that you can’t measure software productivity seriously and everyone simultaneously agrees that simple scalar metrics often fail to show the big picture in any and all disciplines.
The rest is just usual software guy hubris and lack of awareness of the discipline.
musicale 3 days ago [-]
> No one takes the claim that you can’t measure software productivity seriously
At least three people do, and they are mentioned in the article: the author, his colleague, and Martin Fowler.
I suspect that software productivity (like the productivity of scientists, artists, composers, etc.) on any non-trivial project may be measurable on a scale of years, but not necessarily months or calendar quarters. A larger issue is that productivity is often due to external factors as much as it is to individual effort, and overnight success is often the result of extended periods of limited progress or even repeated failure.
impure 3 days ago [-]
Let's say that you have two runners running the same marathon. The first one, Frank, sprints at full speed and eventually tires out and slows down. The second runner, Peter, takes a nap first and then finishes the marathon at the exact same time. Which of these two runners was faster in the race? The answer is: It doesn't matter. And therefore there is no such thing as running speed.
There is such a thing as productivity in programming. If you could measure it, it would likely be some combination of peer review and an analysis of the impact that implemented features and fixes had. Some companies actually have programmers rate each other. I don't know how well it works and I think it can lead to perverse scenarios. But you can come up with metrics that are positively correlated with productivity.
robertlagrant 3 days ago [-]
The answer is neither is faster, because they finished at the same time.
dools 3 days ago [-]
I think this is just a semantic quibble based on a narrow interpretation of the word "produce".
Because it also means:
"cause (a particular result or situation) to happen or exist."
default-kramer 3 days ago [-]
Yeah, the comments on the post already said it all. First Isaac says
> You seem to be playing a definition word game. [...] Obviously productivity does approximate business value in a very meaningful way when it is defined in terms of delivered business value.
Then the author responds
> Martin asserts "any true measure of software development productivity must be based on delivered business value". I agree, and I propose there is no such thing. We're better off dropping metaphors altogether and just talking about what programmers do: Solve problems.
So I guess the author wants to replace the term "software productivity" with something like "problem-solving productivity, measured in terms of business value"...? Kind of silly IMO, but to each his own.
rootedbox 3 days ago [-]
Sure there is. It's a ratio of inputs to outputs.. even in the example the inputs and outputs are measurable.
The only thing this article gets at is that engineers may not know how to calculate their own productivity; but it doesn't mean it's not calculable.
halfcat 3 days ago [-]
> It's a ratio of inputs to outputs.. even in the example the inputs and outputs are measurable.
But reality is never that clear cut. How’s the ratio look when:
- Peter goes to the park and the breakthrough doesn’t come?
- Or it comes 3 weeks later?
- Or he deletes 100 lines of code and introduces a new bug?
pixl97 3 days ago [-]
>It's a ratio of inputs to outputs.. even in the example the inputs and outputs are measurable.
So more lines of code is better!
Um, we know this doesn't work that way as a good measure.
This is like comparing algorithms that do the same thing to algorithms that do different things. You're not going to get good valid comparisons. Metrics for one thing may not work at all for another.
cjensen 3 days ago [-]
The output is measured with dollars, not lines of code. So are the inputs.
It's a perfectly cromulent measure so long as we understand the limitations of the measure. For example, trying to measure the productivity of a day or a sprint? That's silly. Measure the output of a team which does not produce an entire product? Won't work because you'd have to figure out how to apportion the productivity.
"productivity, in economics, the ratio of what is produced to what is required to produce it. Usually this ratio is in the form of an average, expressing the total output of some category of goods divided by the total input of, say, labour or raw materials."
cjensen 2 days ago [-]
It's implied in the definition. Consider the units: a ratio should not have units. Lines of code per programmer per day would have weird units, for example, and could not be compared against the number of windows installed per day for a car window installer. The only way for productivity to be useful is to normalize the inputs and outputs into money.
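A toy worked example with made-up numbers: a product team with a fully loaded cost of $1.5M/year whose product can be credited with $3.75M/year of revenue gives

    productivity = output / input = $3.75M / $1.5M = 2.5

a unitless ratio you could at least compare against another whole-product team - which, per the above, is the only level where this works at all.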
lucianbr 3 days ago [-]
It's a perfectly good measure except it does not help us at all.
The whole reason for this discussion is situations like Microsoft having 200k employees and making $240B in a year. Which employees, teams, or even departments are more productive? They want to know.
And even if that didn't matter, this year's expenses likely influence income over multiple future years, so for which periods do you compare dollars in / dollars out?
lordnacho 3 days ago [-]
Isn't this whole debate a repeat of the old talking point about what is value? Labour theory of value, that kind of thing?
This is simply never going to end if carried on along the lines of this article, or along the lines of most of the comments here.
There's no way to objectively and reasonably put a value on something.
All we have is a theory of subjective value, which does a bit of handwaving about utility and works out some ways where we can come to a price, regardless of the fact that the values in peoples' heads are subjective.
Thus the distinction between knowledge work and "tangible work" like bricklaying is actually a moot point. Yes, you can measure "productivity" of bricklaying in metres per day, but ultimately you care about value, not amounts of wall.
The arguments about one guy definitely being more productive than another are similar. One person values speed, another values maintenance costs downstream. It is subjective what ought to be more important.
irjustin 3 days ago [-]
I realized why this post rubs me the wrong way.
It complains but doesn't offer a solution. It simply criticizes and says "all engineers cannot and thus should not be measured".
The ironic thing is, the blog post is implicitly measuring by not explicitly measuring. The measurement is the bug ticket itself and whatever value attached to it.
But to this end, I generally agree. There are qualitative and quantitative measurements. Quantitative is the value of the ticket commonly ascribed by the team (scrum? agile? whatever). Qualitative should come up in review.
Qualitative is SO HARD. Top down? Team 360? Mixed? But it must be undertaken and refined by the team at each level of the org. Otherwise you will run into the exact situation described by the blog post and you won't know how to judge left from right, good from bad. Maybe the blog post's example isn't that great - too much information is missing to make a solid judgement - but you need to decide whom to reward via promotion and annual raises, whom to reprimand, and whom to leave as they are.
Still, all systems are terrible, but you must pick one lest one be picked for you.
fcantournet 3 days ago [-]
Why do you need to evaluate people constantly and pit them against each other?
Why not give a raise to everyone and see what happens?
It's how we sent rockets to the moon, it seemed to work ok.
ebiester 2 days ago [-]
Because the alternative is based on your manager's impressions. You get fired because "your manager doesn't like you." You get a promotion because "your manager likes you." Your coworker sits around doing very little and nobody notices for a few months.
Your company and boss get sued because someone says their firing was based on discrimination, and you can't prove it was for performance.
The truth is that managers can suss out 90% of the problems without a number. However, we are asked to document the hell out of it if we want to do something about it. And twice in my career I've been wrong: I mistook someone who was quiet for not doing much until I dove into the work. I trust my gut, but I confirm with numbers.
irjustin 2 days ago [-]
> It's how we sent rockets to the moon, it seemed to work ok.
There are so many problems rooted in this statement. Government program, presidential mandate (i.e. unlimited budget), no competition.
SpaceX is clearly better than NASA - except maybe they don't push hard enough or evaluate their engineers, so you have a nice stable job if you do nothing.
pjs_ 3 days ago [-]
In the scenario described in the article, Peter and Paul both achieve the same outcome in the same wall clock time. Obviously they are equivalently productive despite different working styles, by construction.
But this doesn't account for the more realistic examples of Prakash, who completely fails to deliver a working solution, or delivers half a solution, and Percy, who gets it done two weeks late. I'm pretty sure you can define a shit_done/time_elapsed productivity metric for those two guys that is worse than that of Peter and Paul.
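A rough sketch of that ratio with made-up numbers (the four developers and their delivery fractions are hypotheticals, not data from the article):

    # Hypothetical "work delivered / time elapsed" ratio; all figures invented.
    delivered = {"Peter": 1.0, "Paul": 1.0, "Prakash": 0.5, "Percy": 1.0}
    weeks     = {"Peter": 2,   "Paul": 2,   "Prakash": 2,   "Percy": 4}

    ratio = {name: delivered[name] / weeks[name] for name in delivered}
    # Peter and Paul come out at 0.5/week; Prakash and Percy at 0.25/week.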
Maybe I am a cynic but I suspect that some people are upvoting this because the framing makes them feel OK about getting paid six figures to work three hours a day...
jFriedensreich 3 days ago [-]
That's why bugs and chores have 0 points by default in Pivotal Tracker. That's also why there was a push to force us to say "as a user I..." at the beginning of story descriptions, to make sure each story creates user value. With these guards in place I don't follow the argument that productivity is not measurable: if a team builds features that solve an expressed user need, and let's even say they go through a final user acceptance check, that is a productive process and it is also very measurable!
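A minimal sketch of that counting rule, using invented story data rather than Pivotal Tracker's actual API: only accepted user stories earn points, and bugs/chores contribute zero.

    stories = [
        {"kind": "feature", "points": 3, "accepted": True},
        {"kind": "bug",     "points": 0, "accepted": True},
        {"kind": "chore",   "points": 0, "accepted": False},
        {"kind": "feature", "points": 5, "accepted": False},
    ]
    velocity = sum(s["points"] for s in stories
                   if s["kind"] == "feature" and s["accepted"])
    print(velocity)  # 3 -- unaccepted or zero-point work adds nothing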
toss1 2 days ago [-]
YUP
My first rule of software programming is: Work Hard to Avoid Writing Code
— Code is habitat for bugs; more code equals more bugs, and interactions between separate parts of code can harbor even more "interesting" varieties.
— Code takes time to run. Any NoOp is faster than even your best hand-tuned assembly module.
Obviously, this is not absolute in any sense — it breaks down as soon as you need the system to actually do something, at which point you must write some code. But it should be the minimal amount to get the job done, and nothing more.
Obligatory car analogy: While doing some amateur sportscar racing, a coach asked me
"What are the things you do that slow the car down?".
I thought for a moment and started saying "when I start into a corner, if I do a bit too much...",
he interrupted saying "NO, no, what are the BIG things you do that slow the car down?".
"Oh, like braking and turning and lifting off the throttle?".
"Right. So what that means is that you should always avoid doing those things. Obviously, you will certainly have to so some of them as soon as you approach the end of the front straight after the start, but make sure you understand your car, the track, and your skills to the point where you do only the absolute minimum."
Both the software and sportscar versions are deceptively simple — they take a LOT more thinking than it seems at first glance. And that thinking is totally worth it.
manmal 3 days ago [-]
Solving problems is productivity. That also means solving them in a way that the solution doesn't spawn new problems down the road, which is why we should follow the best practices. I can't say whether deleting 100 lines is better than adding 1K lines, because before passing that judgement I would need to see what exactly those 100 and 1K lines are. OP is arguing too black and white IMO, and draws the conclusion before I'm sold on the premise.
cushychicken 2 days ago [-]
Impact is the only useful metric.
It is staggeringly hard to measure.
Output is a weak proxy for impact. But it’s the one that makes intuitive sense to people. Doesn’t make it right or useful. I’m sure you all can envision a parable about your subfield of expertise that showcases how a seemingly light touch has a huge positive impact.
virgilp 2 days ago [-]
> It is staggeringly hard to measure.
It's straight up impossible. Best we can do is observe and attempt to measure proxies.
Take a hypothetical situation: VPs A & B debate a business decision. Let's say A wins the argument and their solution leads to a revenue increase of $10M, and let's say we can confidently state that this outcome was driven primarily by them winning the argument. Is the impact a net growth in business of $10M? In a sense, yes; but in some other sense, perhaps if they had gone with B's solution, the revenue growth would have been $15M? There's probably no way to know for sure unless you try both approaches, which is often impossible...
cushychicken 2 days ago [-]
My theory is that impact is heavily context dependent.
If you can solve a problem quickly, at the right time, with buy-in from your org, that's positive impact.
That general case differs wildly when you descend to the particulars.
mattmcknight 3 days ago [-]
So if you have a developer who writes no code and closes no tickets, who essentially does nothing except come to meetings, they are just as productive as a developer who writes some code and closes some tickets? Obviously there is a difference there. Negative code is still doing something as well. Back in the 90s we used Personal Software Process from the SEI, and we measured lines added, deleted, and changed. (as well as defects removed and added)
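For what it's worth, that PSP-style line counting is easy to approximate from version control; here's a rough sketch (assumes a git checkout; git's numstat reports only additions and deletions, so "changed" lines are not separated out):

    import subprocess
    from collections import defaultdict

    def added_deleted_by_author(repo="."):
        # One author-name line per commit (via --format=%aN), followed by
        # numstat lines of the form "<added>\t<deleted>\t<path>".
        out = subprocess.run(
            ["git", "log", "--numstat", "--format=%aN"],
            cwd=repo, capture_output=True, text=True, check=True,
        ).stdout
        totals = defaultdict(lambda: [0, 0])  # author -> [added, deleted]
        author = None
        for line in out.splitlines():
            parts = line.split("\t")
            if len(parts) == 3 and author and parts[0] != "-":  # "-" marks binary files
                totals[author][0] += int(parts[0])
                totals[author][1] += int(parts[1])
            elif line.strip() and len(parts) != 3:
                author = line.strip()
        return totals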
It becomes clear that simple quantity of code and tickets is not enough, but it's also not nothing. Part of what is missing is quality and an assessment of task complexity. Part of what is missing is the other parts of the job, like design and code reviews.
I don't think it's hopeless, and it can at least be used to look into why some people don't seem to produce much at all.
fuzzfactor 3 days ago [-]
You could always look at things in a more abstract way ;)
From one point of view, the users of the software are supposed to enjoy so much of a productivity increase that it's not supposed to matter if the coders are as productive as they could be or not.
Give or take a few hundred percent at least.
I realize that most people who've been to business school still aren't going to develop the needed acumen to handle a situation like this.
Too many times the only training retained is a knee-jerk over-reaction to a fraction of a percent :\
Ever see one of these "leaders" have a cow and it was as stupid as it could have possibly been?
I'm confident there are still some natural leaders that can thrive without worrying about every ounce of nose to the grindstone for their staff.
Some things you just can't fix.
fulafel 3 days ago [-]
Productivity in economics refers to how many units of output you can generate with a given amount of input; it doesn't take into account quality, usefulness, etc. Complaining "lines-of-code is not a sensible productivity measure" is too kind to the concept of productivity.
Using productivity as a metric leads to the same non sequitur stuff in many, many fields.
It works somewhat for industrial output when you're working with commodities. Or it can work in a few more fields as an ancillary measure if you pair it with some other measure of quality, customer satisfaction, outcomes, etc. But usually you don't want to maximize work while getting good outcomes, you just want the good outcomes.
whoisthemachine 3 days ago [-]
Asking a software engineer to be more productive is akin to asking a mechanical engineer to be more productive. What does that end up looking like? More useless blueprints? It turns out when you ask software engineers to just crank out code, you just get lots of code.
wpwpwpw 3 days ago [-]
Good productivity is about being able to do all tasks as planned. That includes management, whose task is to assign tasks. And whoever creates the tasks. And whoever defines the objectives those tasks should answer to.
musicale 3 days ago [-]
Optimizing task completion is different from optimizing task benefit.
jes5199 3 days ago [-]
the problem is that software does not directly produce value, it is a way to place a bet on what will be valuable
antupis 3 days ago [-]
Pretty spot on, and that is why devops stuff and automatic tests are so important. Those just let you place your bets faster.
simpaticoder 3 days ago [-]
The difficulty of measuring productivity is particularly felt by senior developers. They save time and effort in ways that are non-obvious, which might be measured by dependencies that were not added, design patterns that were rejected, or processes that they pushed back on. Just like with living things, unchecked growth is unhealthy for an organism but the actions required are difficult to measure. One could start an attempt with a counter-factual narrative, but this does not map cleanly to KPIs.
robwwilliams 3 days ago [-]
Ok, but ironically the article does explain, by comparing two alternatives, just how to measure productivity: solving problems most efficiently (with the least amount of maintainable code).
jascha_eng 3 days ago [-]
You can very well at least count the problems that were solved (or deleted). You can also probably measure the value those solutions have in revenue or another metric.
It's still true that measuring lines of code, time spent coding, commits or anything else is at best a proxy of productivity. It's also true that without any code changes problems more often than not don't get solved or we at least can't call the activity software development.
mprast 3 days ago [-]
I do appreciate the little coda at the end - nice that the author was self-aware enough to realize where the cruft was and cut it (and courteous to the reader!)
sanitycheck 3 days ago [-]
Also, there's no such thing as sculpture productivity, because all sculptors do is remove material.
jschrf 3 days ago [-]
Time spent on "incidental complexity" (new features, key fixes, performance) versus time spent on "accidental complexity" (anything and everything else).
Easy metric to understand, easy metric to teach, just remember that it applies to teams and not individuals.
See: Out of the Tar Pit.
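A rough sketch of that ratio over hypothetical time-tracking entries tagged by the team (entry names and hours are invented):

    entries = [
        ("new checkout flow",   "essential",  16.0),  # hours
        ("perf fix in search",  "essential",   6.0),
        ("fight flaky CI",      "accidental",  9.0),
        ("upgrade build tool",  "accidental",  5.0),
    ]
    essential = sum(h for _, kind, h in entries if kind == "essential")
    total = sum(h for _, _, h in entries)
    print(f"{essential / total:.0%} of tracked time on essential work")  # 61%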
popcorncowboy 2 days ago [-]
Clicked for the bait, stayed for the background tile.
tbrownaw 3 days ago [-]
And yet somehow, software developers are doing something that's worth paying them a salary. And it's not always impossible to tell if someone's being held back at a lower salary band than they should be, or was promoted into a salary band that's higher than they're able to keep up with.
Analemma_ 3 days ago [-]
Yeah, these complaints are always so duplicitous. Somehow, it’s impossible to measure programmer productivity when it’s time for accountability or professional standards, but not when it’s time to assign annual bonuses.
liontwist 3 days ago [-]
Yep it’s just nonsense. If it’s hard to define or measure I guess it doesn’t exist?
akst 3 days ago [-]
In an economic sense there is absolutely software productivity, when it results in more outputs from the same economic inputs.
eacapeisfutuile 3 days ago [-]
This is fallacious, btw; by that logic you could delete all the code, deploy nothing, have all bugs fixed, and achieve infinite productivity.
timeforcomputer 3 days ago [-]
Yay Ben Rady. He has a great podcast with Matt Godbolt (who made Compiler Explorer) called Two's Complement.
revskill 3 days ago [-]
Productivity to me is about choosing the best tradeoff while minimizing the costs of going ahead.
wcfrobert 3 days ago [-]
Wholeheartedly agree. SWEs are not factory workers. They don't punch in at 6am, put on their uniforms, make sure they meet their daily "lines of code" quota before clocking out at 10pm. We should not measure software productivity the same way we measure ball bearings production.
Using lines of code to track productivity is absurd (do people really believe it, or is it just a strawman at this point?). I'm reminded of that midwit meme where a junior has very few lines of code written because they don't know the code base well enough, the midwit writes up a whole framework, and the senior engineer has a net negative lines-of-code contribution.
There's also a fundamental difference between creating and maintaining code. Something like 10 guys wrote VisiCalc. Does that mean they were contributing millions of dollars in profit per hour? What about the maintenance to keep it going? Bug fixes? Patches? On-call infra guys? What about the opportunity cost of putting engineers on dead-end projects?
My point is that tracking productivity in software dev - maybe all knowledge work, for that matter - is complicated. Maybe that's why there's so much "busywork" (emails, slack, tickets, meetings, etc.). Everyone wants to look productive but no one knows what it means.
0xbadcafebee 3 days ago [-]
"Put another way, productivity has no applicability as a metric in software.
"How much did we create today?" is not a relevant question to ask. Even if it
could be measured, productivity in software does not approximate business value
in any meaningful way. This is because software development is not an activity
that necessarily produces anything.
This is ridiculous, irrelevant, and wrong. Of course software development produces things. It produces software.
Which of these two developers was more "productive" today? The answer is: It doesn't
matter. What matters the that Peter solved the problem, while simultaneously reducing
long term maintenance costs for the team. Frank also solved the problem, but he
increased maintenance costs by producing code, and so (all other things being equal)
his solution is inferior. To call Peter more "productive" is to torture the metaphor
beyond any possible point of utility.
Ohhhhhhhhh. I get it. The author doesn't know what the word productivity means.
Productivity does not mean "increases business value while decreasing maintenance costs [and having no net negative impact in any way]". It doesn't even mean "solving a problem".
Productivity just means "to make something", or more specifically the rate at which something is made. That's all. You can make 10x more of something, and it can be garbage quality, but you did make it, and you did make more of it, so your productivity increased.
If you produce 10x more grain than you did yesterday, you are more productive. The grain might now be full of heavy metals, pesticides and toxins. But you did in fact produce more grain. If you were trying to measure productivity of usable, healthy, high-quality grain, that is a different measurement than just productivity of grain. You may assume everybody knows what you mean when you say "productive", but you'd be wrong.
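Toy numbers to make that distinction concrete (all figures invented): raw output per labor-hour rises 10x, while quality-adjusted output rises far less.

    yesterday = {"bushels": 100,  "usable_fraction": 1.0, "labor_hours": 10}
    today     = {"bushels": 1000, "usable_fraction": 0.3, "labor_hours": 10}

    def productivity(day, quality_adjusted=False):
        output = day["bushels"] * (day["usable_fraction"] if quality_adjusted else 1.0)
        return output / day["labor_hours"]

    print(productivity(today) / productivity(yesterday))              # 10.0x grain
    print(productivity(today, True) / productivity(yesterday, True))  #  3.0x usable grain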
fermigier 3 days ago [-]
Strawman argument. It would have been (semi-)interesting if the title were "Software productivity can't be measured by lines of code" (but that has already been very eloquently stated in https://folklore.org/Negative_2000_Lines_Of_Code.html).
lazyant 2 days ago [-]
"There is no such thing as software productivity." != "There is no such thing as a simple, objective software productivity metric".
vonnik 3 days ago [-]
And yet … there are some engineers who manage to solve hard problems to get something new working and debug it, however long it takes, and many others who don’t and/or can’t.
danielmarkbruce 3 days ago [-]
And yet, Jeff Dean is significantly more productive than I am.