NHacker Next
So where are all the AI apps? (answer.ai)
paxys 43 minutes ago [-]
It is incredibly easy now to get an idea to the prototype stage, but making it production-ready still needs boring old software engineering skills. I know countless people who followed the "I'll vibe code my own business" trend, and a few of them did get pretty far, but ultimately not a single one actually launched. Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.
TeMPOraL 10 minutes ago [-]
> It is incredibly easy now to get an idea to the prototype stage

Yup. And for most purposes, that's enough. An app does not have to be productized and shipped to general audience to be useful. In fact, if your goal is to solve some specific problem for yourself, your friends/family, community or your team, then the "last step" you mention - the one that "takes majority of time and effort" - is entirely unnecessary, irrelevant, and a waste of time.

The productivity boost is there, but it's not measured, because people are looking for the wrong thing. Products on the market are not solutions to problems; they're tools to make money. The two are correlated, for a bunch of obvious reasons (people need money, solving a problem costs money, people are happy to pay for solutions, etc.), but they're still distinct. AI is dropping the cost of the "solving the problem" part much more than that of "making a product", so it's not useful to use the lack of the latter as evidence of the lack of the former.

danans 6 minutes ago [-]
> Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.

That's true, but even the "last step" is being accelerated. The 10% that takes 90% of the time has itself been cut in half.

An example is turning debug logs and bug reports into bugfixes, and performance stats into infrastructure migrations.

The time required to analyze, implement, and deploy those has been reduced by a large amount.

It still needs to be coupled with software engineering skills (to decide between multiple solutions generated by an LLM, for instance), but the acceleration is significant.

adriand 21 minutes ago [-]
> Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.

This is true, and I bet there are thousands of people who are in this stage right now - having gotten there far faster than they would have without Claude Code - which makes me predict that the point made in the article will not age well. I think it’s just a matter of a bit more time before the deluge starts, something on the order of six more months.

lebuin 16 minutes ago [-]
I'd argue that LLMs are not yet capable of the last step, and because most sufficiently large AI-generated codebases are an unmaintainable mess, it's also very hard for a human developer to take over and go the last mile.
Cthulhu_ 18 minutes ago [-]
Exactly, there have been loads of tools over time to make software development easier - like Dreamweaver and FrontPage to build websites without coding, or low/no-code platforms to click and drag software together, or all frameworks ever, or libraries that solve issues that often take time - and I'm sure they've had a cumulative effect on developer productivity and/or software quality.

But there's not one tool there that triggered a major boost in output or number of apps / libraries / products created - unless I missed something.

Sure, total output has increased, especially since the early 2010s, thanks to both GitHub becoming the social network of software development and (arguably) Node/JS becoming one of the most popular languages/runtimes out there, attracting a lot of developers to publish a lot of tools. But that's not down to productivity- or output-boosting developments.

prhn 20 minutes ago [-]
Even beyond the engineering there are 100 other things to do.

I launched a vibe coded product a few months ago. I spent the majority of my time

- making sure the copy / presentation was effective on product website

- getting signing certificates (this part SUCKS and is expensive)

- managing release version binaries without a CDN (stupid)

- setting up LLC, website, domain, email, google search indexing, etc, etc

ryandrake 4 minutes ago [-]
Exactly. The "writing code" part is literally the easiest part of building a software business. And that was even before LLM assisted coding. Now it's pretty much trivial to just spew slop code until something works. The hard parts are still: making the right thing, making it good, getting feedback and idea validation, and the really hard part is turning it into a business.
zdc1 13 minutes ago [-]
Even if you have the app, you get to start the fun adventure of marketing it and actually trying to grow the damn thing
TeMPOraL 9 minutes ago [-]
Right. Which is something you neither need nor want if you just wanted to have an app.
dominotw 42 minutes ago [-]
All they did was annoy their friends and family by sharing their vibeslop app and asking for "feedback".

I really don't know how to respond to these requests. I am going to hide out and not talk to anyone till this fad passes.

Reminds me of the trend where everyone was a DJ wanting you to listen to the mix they made in Ableton Live

highstep 21 minutes ago [-]
Is it really that big of a deal to help/encourage a friend or family member in these simple ways?? Do you have no time in life to smell the flowers?
bogwog 29 minutes ago [-]
When someone sends me an AI generated project or proposal, I just send them an AI generated reply I know they're not going to bother reading either.
101008 39 minutes ago [-]
"I think it's great, you should deploy it! Let me know when it's in production"
calvinmorrison 34 minutes ago [-]
It's helping with that part too. I was able to configure a grafana stack with the help of claude for our ansible scripts.
skeeter2020 32 minutes ago [-]
That's nowhere near the end stage of launching a business.
stronglikedan 31 minutes ago [-]
I used it to design my business cards!
ertgbnm 1 minutes ago [-]
Does the data not support a 2X increase in packages?

Pre-ChatGPT, in ~2020, there were about 5,000 new packages per month. Starting in 2025 (the year agents actually took off), there is a clear uptick to a consistent ~10,000 per month, or 2X the pre-ChatGPT rate.

In general, the rate of increase is on a clear exponential. So while we might not see a step change in productivity, there comes a point where the average developer is in fact 10X as productive as before. It just doesn't feel so crazy because it came about in discrete 5% boosts.
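The compounding claim checks out on a napkin: repeated small gains cross 10X after a few dozen steps. A quick sketch (the 5% figure is just the commenter's illustrative number, not a measured value):

```python
import math

def boosts_to_reach(multiplier: float, boost: float = 0.05) -> int:
    """How many compounding `boost`-sized gains it takes to reach `multiplier`x."""
    return math.ceil(math.log(multiplier) / math.log(1.0 + boost))

# 48 compounding 5% boosts make a developer ~10X as productive,
# and just 15 of them already double output.
```

So under this toy model a 10X developer is only ~48 small, individually unremarkable improvements away, which is why no single step feels like a revolution.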

I also disagree with the dataset being a good indicator of productivity. I wouldn't actually expect the number of packages or the frequency of updates to track closely with productivity. My first-order guess would be that AI is actually deflationary here. Why spend the time open-sourcing something that AI can gen up for anyone on a case-by-case basis, specific to the project? It takes a certain level of dedication and passion for a person to open-source a project, and if the AI just made it for them, then they haven't actually made the investment of time and effort that would make them feel justified in publishing the package.

The metrics I would expect to go up are actually the size of codebases, the number of forks of projects that create hyper customized versions of tools and libraries, and other metrics like that.

Overall, I'd predict AI is deflationary on the number of products that exist. If AI removes the friction involved with just making a custom solution, then the amount of demand for middleman software should actually fall as products vertically integrate and reduce dependencies.

hombre_fatal 4 minutes ago [-]
Maybe the top 15,000 PyPi packages isn't the best way to measure this?

Apparently new iOS app submissions jumped by 24% last year:

> According to Appfigures Explorer, Apple's App Store saw 557K new app submissions in 2025, a whopping 24% increase from 2024, and the first meaningful increase since 2016's all-time high of 1M apps.

The chart shows stagnant new iOS app submissions until AI.

Here's a month by month bar chart from 2019 to Feb 2026: https://www.statista.com/statistics/1020964/apple-app-store-...

turlockmike 49 minutes ago [-]
I deleted vscode and replaced with a hyper personal dashboard that combines information from everywhere.

I have a news feed, work tab for managing issues/PRs, markdown editor with folders, calendar, AI powered buttons all over the place (I click a button, it does something interesting with Claude code I can't do programmatically).

Why don't I share it? Because it's highly personal, others would find it doesn't fit their own workflow.

camdenreslink 40 minutes ago [-]
Technical people (which is by far the minority of people out there) building personal apps to scratch an itch is one thing.

But based on the hype (100x productivity!), there should be a deluge of high-quality mobile apps, SaaS offerings, etc. There is a huge profit incentive to create quality software at a low price.

Yet, the majority of new apps and services that I see are all AI ecosystem stuff. Wrappers around LLMs, or tools to use LLMs to create software. But I’m not really seeing the output of this process (net new software).

physicsguy 35 minutes ago [-]
I worked in an industry for five years and I could feasibly build a competitor product that I think would solve a lot of the problems we had before, and which it would be difficult to pivot the existing ones into. But ultimately, I could have done that before, it just brings the time to build down, and it does nothing for the difficult part which is convincing customers to take a chance on you, sales and marketing, etc. - it takes a certain type of person to go and start a business.
amrocha 6 minutes ago [-]
Nobody’s talking about starting businesses. The article is specifically about pypi packages, which don’t require any sales and marketing. And there’s still no noticeable uptick in package creation or updates.
raw_anon_1111 26 minutes ago [-]
There is no money in mobile apps. It came out in the Epic trial that 90% of App Store revenue comes from in-app purchases in pay-to-win games. Most of the rest of the money companies make from mobile comes from apps that are front ends for services.

If someone did make a mobile app, how would it get uptake? Coding has never been the hard part of a successful software product.

thewebguyd 19 minutes ago [-]
> Wrappers around LLMs, or tools to use LLMs to create software. But I’m not really seeing the output of this process

Because it's better to sell shovels than to pan for gold.

In the current state of LLMs, the average no-experience, non-techy person was never going to make production software with it, let alone actually launch something profitable. Coding was never the hard part in the first place; sales, marketing & growth are.

LLMs are basically just another devtool at this point. In the 90s, IDEs/Rapid Application Development were a gold rush. LLMs are today's version of that. Both made developers' lives better, but neither resulted in a huge rush of new, cheap software from the masses.

morkalork 32 minutes ago [-]
Before LLMs, there were code sweatshops in India, Vietnam, Latin America, etc. and they've been pumping out apps and SaaS products for decades now.
the-smug-one 21 minutes ago [-]
And it was all crap software, no? EDIT: If it was crap, then that is still good for AI.
morkalork 3 minutes ago [-]
AI-powered devs are struggling to stand above it, so either it wasn't all crap, or AI-produced stuff is crap too
CodingJeebus 34 minutes ago [-]
I think this is the great conundrum with AI. I find it's most useful when I build my own tools from models. It's great for solving last-mile-problem types of situations around my workflow. But I'm not interested in trying to productize my custom workflow. And I've yet to encounter an AI feature on an existing app that felt right.

Problem is that all these companies trying to push AI experiences know that giving users unfettered access to their data to build further customization is corporate suicide.

oro44 38 minutes ago [-]
Well it’s mostly explained by the fact that most people lack imagination and can’t hold enough concepts about a particular experience to think about how to re-imagine it, to begin with.

Oh, and sadly, LLMs are useless for the imaginative part too. Shucks, eh.

peteforde 28 minutes ago [-]
I share this particular cynicism.

I have a list of ideas a mile long that gets longer every day, and LLMs help me burn through that list significantly faster.

However, the older I get, the more distraught I get that most people I meet "IRL" are simply not sitting on a list of problems they simply lack time to solve. I have... a lot of emotions around this, but it seems to be the norm.

If someone doesn't see or experience problems and intuitively start working out how they would fix them if they only had time, the notion that they could effectively pair-program with an LLM on ideas they didn't previously have is absurd.

sputknick 43 minutes ago [-]
This is probably my favorite gain from AI assisted coding: the bar for "who cares about this app" has dropped to a minimum of 1 to make sense. I recently built an app for grocery shopping that is specific to how and where I shop, would be useless to anyone other than my wife. Took me 20 minutes. This is the next frontier: I have a random manual process I do every week, I'll write an app that does it for me.
ElFitz 35 minutes ago [-]
More than that. Building a throwaway, transient, single-use web app for a single annoying task kind of makes sense now, sometimes.

I had to create a bunch of GitHub and Linear apps. Without me even asking, Codex whipped up a web page and a local server to set them up, collect the OAuth credentials, and forward them to the actual app.

Took two minutes, I used it to set up the apps in three clicks each, and then just deleted the thing.

Code as transient disposable artifacts.

BoneShard 28 minutes ago [-]
I posted it recently, but now this works differently: https://xkcd.com/1205/

You can get a throwaway app in 5 minutes; before, I wouldn't even have bothered.

MeetingsBrowser 36 minutes ago [-]
What exactly were you able to build in 20 minutes?
stavros 8 minutes ago [-]
I built a small app to emit a 15 kHz beep (that most adults can't hear) every ten minutes, so I can keep time when I'm getting a massage. It took ten minutes, really, but I guess it's in the spirit of the question.
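A tone like that needs nothing beyond the Python standard library. A minimal sketch that writes the beep to a WAV file (playback and the ten-minute timer are left out, and every parameter here is an assumption, not the commenter's actual code):

```python
import math
import struct
import wave

RATE = 44100      # CD-quality sample rate
FREQ = 15000      # Hz, near the upper limit of most adults' hearing
DURATION = 0.5    # half a second per beep

def write_beep(path: str) -> int:
    """Write a short 15 kHz sine beep to a mono 16-bit WAV; returns frames written."""
    n_frames = int(RATE * DURATION)
    samples = b"".join(
        struct.pack("<h", int(16000 * math.sin(2 * math.pi * FREQ * i / RATE)))
        for i in range(n_frames)
    )
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(RATE)
        wav.writeframes(samples)
    return n_frames
```

From there, any audio player and a sleep loop can handle the "every ten minutes" part.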

For 20 minutes of time, I had a simple TTS/STT app that allows me to have a voice conversation with my AI assistant.

TeMPOraL 21 minutes ago [-]
Me: a photo-editor tool to semi-automate the task of digitizing a few dozen badly scanned old physical photos for a family photo book. I needed something that could auto-straighten and auto-crop the photos, with the ability to quickly make manual adjustments. Gemini single-shotted a working app that, after a few minutes of back-and-forth as I used it and complained about the process, gained full four-point cropping (arbitrary lines) with snapping to lines detected in the image content for minute adjustments.

Before that, it single-shotted an app for me where I can copy-paste a table (or a subsection of it) from Excel and print it out perfectly aligned on label sticker paper. It does instantly what used to take me an hour each time, when I had to fight Microsoft Word (mail merge) and my Canon printer's settings to get the text properly aligned on labels, and not cut off because something along the way decided to scale content or add margins or such.

Neither of these tools is immediately usable by others. They're not meant to be, and that's fine.

shafyy 18 minutes ago [-]
That's fine and all, but how much are you ready to pay Anthropic or OpenAI to be able to do this? Like, is it worth 100 bucks a month to you to have your own shopping app?
joshmarinacci 7 minutes ago [-]
That is an excellent question. For me the answer is yes, but I'm unusual.
shafyy 2 minutes ago [-]
Haha, great. I guess my wider point is that most people won't be ready to pay for it, and in the end there will be only two ways for OpenAI et al to monetize: ads or B2B. And B2B will only work if they invest a lot in sales, or if business owners see real productivity gains once the hype has died down.
headcanon 31 minutes ago [-]
I've been getting close to that myself. I've been using VSCode + Claude Code as my "control plane" for a bunch of projects, but the current interface is getting unwieldy. I've tried superset + conductor, and those have some improvements, but they are opinionated towards a specific set of workflows.

I do think there would be value in sharing your setup at some point if you get around to it, I think a lot of builders are in the same boat and we're all trying to figure out what the right interface for this is (or at least right for us personally).

acessoproibido 44 minutes ago [-]
I would still be interested even if my personal workflow is different. These things can be very inspirational!
skydhash 42 minutes ago [-]
> I deleted vscode and replaced with a hyper personal dashboard that combines information from everywhere.

Emacs with Hyperbole[0]?

[0]: https://www.gnu.org/software/hyperbole/

Igrom 34 minutes ago [-]
You can't mention Hyperbole and not say how you use it. I did not get past the "include the occasional action button in org-mode" phase.
neonnoodle 18 minutes ago [-]
actually the rules say that no one can ever explain what Hyperbole is for
Chris2048 44 minutes ago [-]
Wdym by "it does something interesting with Claude code I can't do programmatically"?
graeber_28927 38 minutes ago [-]
I'm guessing it's not a hard-coded function the button invokes. Instead it spawns a Claude Code session with perhaps some predefined prompts, maybe attaches logs, and lets Claude Code "go wild". In that sense the button's effect wouldn't be programmatic; it would be nondeterministic.

Not OP, just guessing.

actionfromafar 31 minutes ago [-]
I have had the thought to write little "programs" in text or markdown for things which would just be a chore to maintain as a traditional program. (I guess we call them "skills" now?) Think scraping a page that might change its output a bit every so often. If the volume or cadence is low, it may not be worth it to create a real program to do it.
bena 38 minutes ago [-]
It means he has a girlfriend. And she goes to a different school. In Canada. You've never heard of it.
vladostman 12 minutes ago [-]
Perfect analogy actually
skyberrys 46 minutes ago [-]
This sounds chaotic and fun.
gear54rus 23 minutes ago [-]
Sounds more like satire.
kylecazar 43 minutes ago [-]
... how did that replace vscode?

Do you never open a code editor?

headcanon 28 minutes ago [-]
Kind of. I'm finding that my terminal window in VSCode went from being at the bottom 1/3rd of my screen to filling the whole screen a lot of the time, replacing the code editor window. If AI is writing all of your code for you based on your chat session, a lot of editing capabilities aren't needed as much. While I wouldn't want to get rid of it entirely, I'd say an AI-native IDE would deemphasize code editing in favor of higher-level controls.
EGreg 40 minutes ago [-]
Well, I’m sharing it. If someone wants an early preview or to work w me on this, the calendly link is on the site:

https://safebots.ai

But it requires A LOT of work to make sure it is actually safe for people and organizations. And no, an .md file saying “PLEASE DONT PWN ME, KTHX” isn’t it at all. “Alignment” is only part of the equation.

If you’re not afraid to dive into rabbitholes, here is how it works: http://community.safebots.ai/t/layer-4-browser-extensions-pe...

tbeseda 36 minutes ago [-]
Sorry, I'm not sure how this relates to the content of the article. Sounds like an interesting experience, but this is an analysis of the Python ecosystem pre+post ChatGPT.
fny 2 minutes ago [-]
Claude Code was released for general use in May 2025. It's only March.

Also using PyPI as a benchmark is incredibly myopic. Github's 2025 Octoverse[0] is more informative. In that report, you can see a clear inflection point in total users[1] and total open source contributions[2].

[0]: https://github.blog/news-insights/octoverse/octoverse-a-new-...

[1]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...

[2]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...

causal 40 minutes ago [-]
AI makes the first 90% of writing an app super easy and the last 10% way harder because you have all the subtle issues of a big codebase but none of the familiarity. Most people give up there.
skeeter2020 24 minutes ago [-]
I spent about a week doing an "experiment" greenfield app. I saw 4 types of issues:

0. It runs way too fast and far ahead. You need to slow it down, force planning only and explicitly present a multi-step (i.e. numbered plan) and say "we'll do #1 first, then do the rest in future steps".

take-away: This is likely solved with experience and changing how I work - or maybe caring less? The problem is the model can produce much faster than you can consume, but it runs down dead ends that destroy YOUR context. I think if you were running a bunch of autonomous agents this would be less noticeable, but impact 1-3 negatively and get very expensive.

1. lots of "just plain wrong" details. You catch this developing or testing because it doesn't work, or you know from experience it's wrong just by looking at it. Or you've already corrected it and need to point out the previous context.

take-away: If you were vibe coding you'd solve all these eventually. Addressing #0 with "MORE AI" would probably help (i.e. AI to play/validate, etc).

2. Serious runtime issues that are not necessarily bugs. Examples: it made a lot of client-side API endpoints public that didn't even need to exist, or at least needed to be scoped to the current auth. It missed basic filtering and SQL clauses that constrained data. It hardcoded important data (but not necessarily secrets) like ports, etc. It made assumptions that worked fine in development but could be big issues in public.

take-away: AI starts to build traps here. Vibe coders are in big trouble because everything works but that's not really the end goal. Problems could range from 3am downtime call-outs to getting your infrastructure owned or data breaches. More serious: experienced devs who go all-in on autonomous coding might be three months from their last manual code review and be in the same position as a vibe coder. You'd need a week or more to onboard and figure out what was going on, and fix it, which is probably too late.

3. It made (at least) one huge architectural mistake (this is a pretty simple project so I'm not sure there's space for more). I saw it coming but kept going in the spirit of my experiment.

take-away: TBD. I'm going to try and use AI to refactor this, but it is nontrivial. It could take as long as the initial app did to fix. If you followed the current pro-AI narrative you'd only notice it when your app started to intermittently fail - or you got your cloud provider's bill.

DougN7 31 minutes ago [-]
Well put. And that last 10% was always the hardest part, and now it’s almost impossible because emotionally you’re even less prepared for the slog ahead.
SAI_Peregrinus 24 minutes ago [-]
And as we all know, the first 90% of writing an app takes the first 90% of the time, and the last 10% takes the other 90% of the time.
esafak 6 minutes ago [-]
So the way is to read every line of code along the way.
ing33k 34 minutes ago [-]
Agree. I’ve also noticed that feature creep tends to increase when AI is writing most of the code.
peteforde 35 minutes ago [-]
Not sure that I'd look at python package stats to build this particular argument on.

First, I find that I'm using a lot fewer libraries in general, because I am less constrained by the mental models library authors impose on what I'm actually trying to do. Libraries are often heavy and by nature abstract away low-level API calls. These days, I'm far more likely to have 2-3 functions that make those low-level calls directly, without any conceptual baggage.

Second, I am generalizing but a reasonable assertion can be made that publishing a package is implicitly launching an open source project, however small in scope or audience. Running OSS projects is a) extremely demanding b) a lot of pain for questionable reward. When you put something into the universe you're taking a non-zero amount of responsibility for it, even just reputationally. Maintainers burn out all of the time, and not everyone is signed up for that. I don't think there's going to be anything remotely like a 1:1 Venn for LLM use and package publishing.

I would counter-argue that in most cases, there might already be too many libraries for everything under the sun. Consolidation around the libraries that are genuinely amazing is not a terrible thing.

Third, one of the most recurring sentiments in these sorts of threads is that people are finally able to work through the long lists of ideas they had but would have never otherwise gotten around to. Some of those ideas might have legs as a product or OSS project, but a lot of them are going to be thought experiments or solve problems for the person writing them, and IMO that's a W not an L.

Fourth, once most devs are past the "vibe" party-trick phase of LLM adoption, they are less likely to crank out entire projects and far, far more likely to return to doing all of the things that they were doing before; just doing them faster and with less typing up-front.

In other words, don't think project-level. Successful LLM use cases are commit-level.

noemit 7 minutes ago [-]
There are tons of AI apps. They're all general-use chatbots or coding agents: Manus, Cursor, ChatGPT. Almost every app that has a robust search uses a reranker LLM. AI is everywhere.

As far as totally new products - I built one (Habit.am - wordless journaling for mental health) - and new products require new habits, people trying new things; it's not that easy to change people's behavior. It would be much easier for me to sell my little app if it were a literal plain old journal.

sid_talks 23 minutes ago [-]
AI does make me more productive, at least up to getting my idea to the "working prototype" stage. But in my personal experience, no one has realistically been able to get to the 10x level that a lot of people claim to have achieved with LLMs.

Yes, you do produce more code. But LoC produced is never a healthy metric. Reviewing the LLM generated code, polishing the result and getting it to production-level quality still very much requires a human-in-the-loop with dedicated time and effort.

On the other hand, people who vibe code and claim to be 10x productive, who produce numerous PRs with large diffs, usually bog down the overall productivity of teams by requiring tedious code reviews.

Some of us are forced to fast-track this review process so as to not slow down these "star developers" which leads to the slow erosion in overall code quality which in my opinion would more than offset the productivity gains from using the AI tools in the first place.

tiborsaas 32 minutes ago [-]
I fail to see why the author thinks Python packages are a good proxy for AI driven/built code. I've built a number of projects with AI, but I haven't created any new packages.

It's like looking at tire sales to wonder where all the EVs are.

bodge5000 22 minutes ago [-]
This is addressed, though not quantified (I suppose because there's no central repository for that), in the introduction. To use your analogy: the author heard EV sales were through the roof, couldn't find any evidence that more EVs were actually on the road, so looked at tire sales to see if the answer was in there.
stronglikedan 28 minutes ago [-]
Nitpick, but tire sales are a good proxy for determining where EVs are, since the owners are usually suckered into buying special "EV" tires.
npilk 6 minutes ago [-]
I believe EVs also wear tires out faster (because they are heavier), so they need more frequent replacement.
happyPersonR 21 minutes ago [-]
This is going to cause people to react, but I think those of us who truly love open source don't push AI-generated code upstream because we know it's just not ready for use beyond agentic use. It's just not robust for a lot of common use cases, because the code it produces is hyper-hardcoded by default, and the bugs are so basic that I doubt any developer who actually cared would push something so shamefully sloppy upstream with their name on it.

The tools for generating AI code aren't yet capable of producing code that is decent enough for general-purpose use cases: robust, well-tested, clean, and of real quality.

ballenf 9 minutes ago [-]
The thesis has it backwards. We will see fewer published/downloaded apps/packages as people rely on others less. I'm not sure we're quite there yet, but I'm increasingly likely to spend a few minutes giving an LLM a chance to make a tool I need instead of sifting through sketchy and dodgy websites for some slightly obscure functionality. I use fewer ad-heavy sites for converting one text file format to another.

Personally, I see the paid or adware software market shrinking, not growing, as a testament to the success of LLMs in coding.

vjvjvjvjghv 40 minutes ago [-]
This reminds me so much of the dot-com bubble in 2000. A lot of clueless companies thought that they just needed to “do internet” without any further understanding or strategy. They burned a ton of money and got nothing out of it. Other companies understood that the internet is an enabling technology that can support a lot of business processes. So they quietly improved their business with the help of the internet.

I see the same with AI. Some companies will use AI quietly and productively without much fuss. Others just use it as a marketing tool, or as an ego trip by execs with no real understanding.

arjie 27 minutes ago [-]
I won't make any claims as to the Python ecosystem and why there is no effect seen here (and I suppose no effect seen of the Internet on productivity) but one thing that is entirely normal for me now is that I never see the need to open-source anything. I also don't use many new open-source projects. I can usually command Claude Code to build a highly idiosyncratic thing of greater utility. The README.md is a good source of feature inspiration but there are many packages I simply don't bother using any more.

Besides, it's working for me. If it isn't working for others I don't want to convince them of anything. I do want to hear from other people for whom it's working, though, so I'm happy to share when things work for me.

tantalor 24 minutes ago [-]
Where are they? Well, they aren't being uploaded to PyPI. 90% of the "AI apps" are one-off scripts that get used by exactly one person and thrown away. The rest are too proprietary, too personal, or too weird to share.
EastLondonCoder 15 minutes ago [-]
I’ve done an event-ticket system that’s in production: Stripe integration, Resend for mailing, and a scanner app to scan tickets. It’s for my own club, but it’s been working quite well. Took about 80 hours from inception to live, with a focus on testing.

I’ve done some experiments with reading GEDCOM files, and I think I’m quite close to a demoable version of a genealogy app.

Biggest thing is a tool for remotely working musicians. It’s about 10,000 lines of well-written Rust, it’s in a demoable state, and I wish I could work more on it, but I just started a new job.

But yeah, this wouldn’t have been possible if I hadn’t been a very experienced dev who knows how to get things live. Also, I’ve found a way to work with LLMs that works for me; I can quickly steer the process in the right way and I understand the code that’s written. Again, it’s possible that a lot of real experience is needed for this.

sesm 13 minutes ago [-]
Vibe coding is actually a brilliant MLM scheme: people buy tokens to generate apps that re-sell tokens (99% of those apps are AI-something).
jmarchello 19 minutes ago [-]
Looking at Python packages, or any developer-facing form of software, is not a good indicator of AI-based production. The key benefit of AI development is that our focus moves up a few layers of abstraction, allowing us to focus on real-world solutions. Instead of measuring GitHub, you need to measure feature releases, internal tools created, and single-user applications built for a niche use case.

Using Python packages to measure AI-based production is like using saw production to gauge the effectiveness of the steam engine. You need to look at the houses and communities being built, not the tools.

CrzyLngPwd 7 minutes ago [-]
The first 80% is the easy part, and good ol' Visual Basic was fabulous at it, but the last 80% is the time suck.

Same with vibe-coded stuff.

Sharlin 47 minutes ago [-]
The increased release cadence of apps about AI presumably reflects the simple facts that

a) there are likely many more active, eager contributors all of a sudden, and

b) there's suddenly a huge amount of new papers published every week about algorithms and techniques that said contributors then eagerly implement (usually of dubious benefit).

More cynically, one might also hypothesize that

c) code quality has dropped, so more frequent releases are required to fix broken programs.

patchorang 30 minutes ago [-]
I've been vibe-coding a Plex music player app for macOS and iOS (I don't like PlexAmp). I've got to the point where they are the apps I use for listening to music. But they are really just in an alpha/beta state, and I'm having a pretty hard time getting past that. The last few weeks have felt like I'm playing whack-a-mole with bugs and issues. It's definitely not at the point where others will be willing to use it as their daily app. I'm having to decide now whether I keep wanting to put time into it. Vibe-coding isn't as fun when you're just fixing bugs.
peteforde 24 minutes ago [-]
Genuinely curious: are you actually vibe coding (as in not writing or looking at the code) or are you pair programming with a current model (eg. Sonnet or Opus) using plan -> agent -> debug loops in something like Cursor?
justacatbot 23 minutes ago [-]
The bottleneck shifted but didn't disappear. Getting to a working prototype in a weekend is real, but error handling, edge cases, and ops work hasn't gotten much faster. Distribution is completely unchanged too. A lot of these 'where are the AI apps' questions are really asking why there aren't more successful AI businesses, which is a harder and very different problem.
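
The "bottleneck shifted" point is concrete: the prototype is the happy path, and the slow part is wrapping it in the error handling the prototype ignored. A minimal sketch of that kind of hardening (all names here are hypothetical, not from any specific app in the thread):

```python
import time

def fetch_with_retries(fetch, retries=3, backoff=0.0):
    """Call a flaky zero-argument fetch function, retrying with
    exponential backoff; re-raise the last error if all attempts fail."""
    last_err = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as e:  # real code would catch specific exceptions
            last_err = e
            time.sleep(backoff * (2 ** attempt))
    raise last_err

# The prototype just calls fetch(); production code has to survive the
# transient failures the prototype never hit.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(fetch_with_retries(flaky))  # "ok", after two transient failures
```

None of this is clever, which is the point: it's the unglamorous layer that still takes roughly as long as it always did.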
andrewflnr 25 minutes ago [-]
By "apps" this author apparently means "PyPi packages". This is a bafflingly myopic perspective in a world of myopic perspectives. Do we really expect people vibecoding "apps" to put anything on PyPi as a result? They're consumers of packagers, not creators.

I don't blame people for responding to the title instead of the article, because the article itself doesn't bother to answer its own question.

esafak 2 minutes ago [-]
A better data set would have been the BigQuery Github stats.
mehagar 16 minutes ago [-]
Did you read the article? The author means software in general, not just user-facing apps.
andrewflnr 13 minutes ago [-]
Yes, I did, just to make sure it was as silly as I thought at first glance.

You do realize that "The author means software in general" is already a concession that they don't actually address the question in the title, right?

Feuilles_Mortes 15 minutes ago [-]
I like using it to make personal apps that are specific to my use-case and solve problems I've had for ages, but I like my job (scientist), and I don't want to run an app company.
skyberrys 46 minutes ago [-]
Wouldn't the apps go into the Apple App Store and Google Play? I guess looking at Python packages is valid, but I don't think it's the first thing someone thinks to target with vibe coding. And many apps end up as websites, and a website never tells me much, as a user, about how it was made.
jayd16 33 minutes ago [-]
To be fair, those markets are dominated by entrenched market leaders.
zabil 26 minutes ago [-]
I am learning music. I used Codex to create a native metronome app, a circle-of-fifths app, and a practice journal app. I try to build native app alternatives.

I have no plans of publishing them or making them open source, so they will not be part of this metric. I believe others are doing this too.

EruditeCoder108 44 minutes ago [-]
Well, many apps I made are really good, but I would never bother to share them; it takes unnecessary effort, and I don't really know whether what works best for me will work like that for others.
saidnooneever 39 minutes ago [-]
Maybe some developers are more productive while the rest of them are laid off, keeping the same release cadence but with fewer devs?

I know this may not apply to your analysis since it's about open source stuff, but this is the sentiment I see with some companies: rather than having 10x output, which their clients don't need, they produce things cheaper and earn more money from what they produce. (And later lose that revenue to a breach :p)

CharlieDigital 22 minutes ago [-]

    > So, let’s ask again, why? Why is this jump concentrated in software about AI?...Money and hype
The AI field right now is drowning in hype and jumping from one fad to another.

Don't get me wrong: there are real productivity gains to be had, but the reality is that building small one-offs and personal tools is not the same thing as building, operationalizing, and maintaining a large system used by paying customers and performing critical business transactions.

A lot of devs are surrendering their critical thinking faculties to coding agents now. This is part of why the hype has to exist: to convince devs, teams, and leaders that they are "falling behind". Hand over more of your attention (and $$$) to the model providers, create the dependency, shut off your critical thinking, and the loop manifests itself.

The providers are no different from doctors pushing OxyContin in this sense: make teams dependent on the product. The more they use the product, the more they build a dependency. Junior and mid-career devs have their growth curves fully stunted and become entirely reliant on the LLM to perform even basic functions. Leaders believe the hype, lay off teams, and replace them with agents, mistaking speed for velocity. The more slop a team codes with AI, the more they become reliant on AI to maintain the codebase, because now no one understands it. What do you do now? Double down; more AI! Of course, the answer is an AI code reviewer! Nothing that more tokens can't solve.

I work with a team that is heavily, heavily using AI and I'm building much of the supporting infrastructure to make this work. But what's clear is that while there are productivity gains to be had, a lot of it is also just hype to keep the $$$ flowing.

nyc_pizzadev 11 minutes ago [-]
A friend of mine who is tech savvy and has, I would say, novice-level coding experience decided to build his dream app. It's really been a disaster. The app is completely broken in many different ways, has functionality gaps, no security, no thought-out infrastructure; it's pretty much a dumpster fire. The problem is that he doesn't know what he doesn't know, so it's impossible for him to actually fix it beyond instructing the AI over and over to simply "fix it". The more this is done, the worse the app becomes. He's tried all the major AI vendors, from scratch, same result: a complete mess of code. He's given up on it now and has moved on with his life.

I'm not saying that AI is bad; in fact, it's the opposite: it's one of the most important tools I have seen introduced in my lifetime. It's like a calculator. It's not going to turn everyone into a mathematician, but it will turn those who have an understanding of math into faster mathematicians.

chistev 55 minutes ago [-]
On Show HN.
dawnerd 45 minutes ago [-]
Or the selfhosted subreddit.
severak_cz 37 minutes ago [-]
My guess: these are not on PyPI because PyPI is for libraries. AI generation works well when you don't care how your app works, when implementation details don't matter.

When you are developing a library, it's the exact opposite: you really care about how it works and which interface it provides, so you end up writing it mostly by hand.

CalRobert 45 minutes ago [-]
So far, as sideloaded APKs on my tablet. Most recently one that makes it easier to learn Dutch and quiz myself based on captions from tv shows
j2kun 43 minutes ago [-]
Classic HN comment: ignore the article and respond directly to its title
somewhatjustin 22 minutes ago [-]
> So where are all the AI apps?

They're in the app stores. Apple's review times are skyrocketing at the moment due to the influx of new apps.

quikoa 52 minutes ago [-]
While a good post, the title is a bit ambiguous. The post is about applications created using AI, not applications with AI functionality embedded.
yoyohello13 6 minutes ago [-]
I’ll ask another question: why isn’t software getting better? It seems like software is buggier than ever. Can’t we just have an LLM running in a loop fixing bugs? Apparently not. Is this the future? Just getting drowned in garbage software faster and faster?
furyofantares 9 minutes ago [-]
I have a number of small apps and libraries I've prompted into existence and have never considered publishing. They work great for me, but are slop for someone else. All the cases I haven't used them for are likely incomplete, buggy, or weird; the code quality is poor; and the documentation is poor (worse than not existing, in many cases).

Plus you all have LLMs at home. I have my version that takes care of exactly my needs and you can have yours.

raw_anon_1111 18 minutes ago [-]
I absolutely hate web development with a passion and haven’t done a new from-the-ground-up web app in 25 years, and even then it was mostly a quick copy-and-paste to add a feature.

But since late last year, when leading app dev + cloud consulting projects, I’ll throw in a feature-complete internal web admin site to manage everything, even when it’s not part of the requirements, with a UI that looks like something I would have done 25 years ago but a decent UX.

They are completely vibe coded and authenticated with Amazon Cognito, and the only things I verify are that unauthenticated users can’t access endpoints, the permissions of the Lambda hosting environment (IAM role), and the permissions of the database user it’s using.
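
That unauthenticated-access check is easy to script. A hedged sketch of the idea (the endpoint paths are made up; in practice you'd feed in statuses from requests sent, without credentials, to the deployed API):

```python
def unprotected_endpoints(status_by_endpoint):
    """Given {endpoint: HTTP status} from requests sent WITHOUT
    credentials, return the endpoints that failed to reject the call."""
    rejected = {401, 403}  # the statuses an auth-gated endpoint should return
    return [ep for ep, code in sorted(status_by_endpoint.items())
            if code not in rejected]

# Example: two endpoints correctly reject anonymous callers, one leaks.
statuses = {"/admin/users": 401, "/admin/orders": 403, "/admin/report": 200}
print(unprotected_endpoints(statuses))  # ['/admin/report']
```

An empty result is the pass condition; anything returned is an endpoint the Cognito authorizer isn't actually covering.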

At most 5 people will ever use the website at a time, but yeah, I get scalability for free (not that it matters) because it’s hosted on Lambda (yes, with IaC).

The website would not exist at all if it weren’t for AI.

Now just to be clear, if a website is meant for real people and the customer’s customers. I’ll insist on a real web designer and a real web developer be assigned to the project with me.

jimbob21 33 minutes ago [-]
Why would packages be used as the standard? What person fully leveraging AI is going to put up packages for release? They (their AI model) write the code to leverage it themselves. There is no reason to take on the maintenance of a public package just because you have AI now. If anything, packages are a net drag on new AI productivity, because then you'd have to worry about breaking changes, etc. As for actual apps being built by AI: the same indie hackers who had garbage codebases that worked well enough to print money are just moving even faster. There are plenty of stories about that.
KingOfCoders 42 minutes ago [-]
alasano 32 minutes ago [-]
I wonder when we'll reach saturation of opinionated all-in-one frameworks like these.

superpowers/get-shit-done type bloated workflows that try to do everything.

This seems a bit different, but it's still in the same mental category for me.

Robdel12 46 minutes ago [-]
We’re in a personal software era. Or a disposable software era, however you want to look at it. I think most people are building for themselves and no longer needing to lean on community to get a lot of things done now.

Self plug, but basically that’s the TL;DR https://robertdelu.ca/2026/02/02/personal-software-era/

ed_elliott_asc 41 minutes ago [-]
I think this is right. I can now get something built for my own use that I’d have given up on before, but getting it to the point of being usable still doesn’t make it shareable.
dddgghhbbfblk 42 minutes ago [-]
Hmmm, my anecdotal experience doesn't match up with this article. Personally I am seeing an explosion of AI-created apps. A number of different subreddits I use for disparate interests have been inundated with them lately. Show HN has experienced the same thing, no?
Kye 14 minutes ago [-]
Cool data. What do I do with it? None of my use cases involve writing software, so I don't think this is _for_ me since my extensive AI use wouldn't show up in git commits, but I'm not sure who it's for. When I'm talking to artist friends, musician friends, academic friends, etc data is nice to have but I'm talking in stories: the real thing I did and how it made me better at the thing.
nemo44x 15 minutes ago [-]
AI is unbelievably useful and will continue to make an impact but a few things:

- The 80/20 rule still applies. We’ve optimized the 20%-of-the-time part (a lot!), but all the hype only counts the 80%-of-the-work part. It looks amazing, and is, but you can’t escape the reality that ~80% of the time is still needed on non-trivial projects.

- Breathless AI CEO hype, because they need money. This stuff costs a lot. It has passed on to run-of-the-mill CEOs who want to feel ahead of things and smart.

- You should be shipping faster in many cases. Lots of hype but there is real value especially in automating lots of communication and organization tasks.

codybontecou 44 minutes ago [-]
Stuck behind Apple's app review process.
brontosaurusrex 41 minutes ago [-]
Intel AI denoiser in Blender.
vjvjvjvjghv 36 minutes ago [-]
There is a ton of AI use in photography software. It has improved masking dramatically, denoise is much better, removing objects is easier. But these aren’t sold as “AI apps” but as photo editing tools that use AI as a tool.
chaostheory 35 minutes ago [-]
I feel that your assumption that everyone will want to share is a flawed one.
gos9 24 minutes ago [-]
I, for one, am not publishing my “apps” for others to use because my “apps” make me money
enraged_camel 38 minutes ago [-]
This article is very poorly researched and reasoned, but it's in the "AI hater" category so I guess it's no surprise it's on the front page.

Number of iOS apps has exploded since ChatGPT came out, according to Sensor Tower: https://i.imgur.com/TOlazzk.png

Furthermore, most productivity gains will be in private repos, either in a work setting or individuals' personal projects.

moralestapia 40 minutes ago [-]
I agree with the premise of the article, in the sense that there has not been, and I don't think there will be, a 100x increase in "productivity".

However, PyPI is not really the best way to measure this, as the number of people who take the time to wrap their code into a proper package, register on PyPI, push a package, etc. is quite low. Very narrow sampling window.

I do think AI will directly fuel the creation of a lot of personal apps that will not be published anywhere. AI lowers the barrier to entry, as we all know, so now regular folks with a bit of technical knowledge can just build the app they want, tailored to their needs. I think we'll see a lot of that.

52-6F-62 53 minutes ago [-]
superkuh 56 minutes ago [-]
On my local computer, used only by me, because now I don't need a corporation to make them for me. In past decades I'd make maybe one or two full-blown applications for myself per decade. In the past year "I" (read: a corporate AI and I) have made dozens, scratching many itches I've had for a very long time.

It's a great change for a human person. I'm not pretending I'm making something other people would buy nor do I want to. That's the point.

dominotw 46 minutes ago [-]
I am now scared to talk to anyone. Eventually the conversation turns to AI and they want to talk or show their vibecoded app.

I am just tired boss. I am not going to look at your app.

threethirtytwo 34 minutes ago [-]
This is so stupid. I don't know whether AI has improved things, but this is clearly cope; we're not even a year into the transition since agentic coding took over, so any data you gather now is not the full story.

But people are desperate for data, right? Desperate to prove that AI hasn't done shit.

Maybe. But this much is true: if AI keeps improving and the trendline keeps going, we're not going to need data to prove something equivalent to the ground existing.

imperio59 14 minutes ago [-]
This is such copium for AI haters. I stopped writing almost any line of code at the beginning of this year, and I've shipped 3 production projects, in a matter of days, that would have taken months or years to build by hand.

Except none of them are open source so they don't show up in this article's metrics.

But it's fine. Keep your head in the sand. It doesn't change the once in a lifetime shift we are currently experiencing.

alex1sa 1 hours ago [-]
[flagged]
darth_aardvark 53 minutes ago [-]
So many execs and marketing people seem to think customers explicitly "want AI".

Most people do not want AI! Only a tiny segment of Middle Managers Looking To Leverage New Technology are actually excited by AI branding.

But, lots of people want software that does magically useful things, and LLMs can do that! Just...don't brand it as AI.

It's like branding a new computer with more processing power as "Jam Packed with Silicon and Capacitors!" instead of, "It starts up really fast!". Nobody needs to know implementation details if the thing is actually useful.

shermantanktop 37 minutes ago [-]
I’ve pointed this out to my VPs. Consumer research shows strong negative sentiment about AI, especially in unexpected places. Why are we convinced they will like an AI-forward feature?

There was no real answer but I got definite you’re-being-the-turd-in-the-punchbowl vibes.

bluefirebrand 46 minutes ago [-]
> Most people do not want AI!

Personally, I explicitly want "not AI"

I'm going to be a curmudgeon that is going out of my way to avoid it as much as I possibly can

acessoproibido 42 minutes ago [-]
I think its a very specific tech/HN bias.

I observe the complete opposite with some of my non-tech friends.

While we are sharing anecdotes and personal opinion:

I think most people don't care too much whether it's "AI" or not; they just want their problems solved...

camdenreslink 38 minutes ago [-]
Most of my nontechnical friends are either AI neutral, or have a negative AI sentiment. I don’t actually know anybody nontechnical that is enthusiastic about AI.
mb7733 51 minutes ago [-]
The question is where are all the new apps or features that are _written_ using LLMs, since everyone is 100x more productive now.
pennomi 42 minutes ago [-]
I mean, look at the Hacker News feed and you’ll get a pretty good sample of new apps and features written by LLMs.

Are they good apps and features? Ehhhh. But let’s not pretend that they’re missing.

the-smug-one 50 minutes ago [-]
Why did you let an LLM write this comment?
astro-lizard 44 minutes ago [-]
It's aggravating, thanks for calling this out. It's also against HN guidelines to let an LLM edit or write your comments.

> Don't post generated comments or AI-edited comments. HN is for conversation between humans.

powvans 45 minutes ago [-]
Quietly adopting the em dash is the move that humans in the know make.
acessoproibido 41 minutes ago [-]
What makes you say that?
serialNumber 35 minutes ago [-]
The short repetitive sentences, and the “it’s not x, it’s y” tone
ElFitz 53 minutes ago [-]
This. So much. Nobody cares whether it’s AI or goblins under the hood, just like nobody cares how smartphones or the internet work. The only thing that matters to the majority of users is what it does for (or to) them.

Apple’s marketing was (is?) textbook this.

Also, I’d bet most people building with LLMs don’t care, or even know about, PyPI.

oro44 40 minutes ago [-]
People just don’t learn, do they?

It’s truly amazing. This is why I’m not surprised people are ‘blown away’ by LLMs. They were never truly intrinsically intelligent; they were expert regurgitators of knowledge on demand.

Steve already suffered immense scar tissue from starting with the technology. And yet this wisdom blows right over people’s heads. More fool them.

ElFitz 31 minutes ago [-]
> Steve already suffered from immense scar tissue of starting with the technology.

Funny. I just stumbled upon that specific OpenDoc video today.

https://youtube.com/watch?v=oeqPrUmVz-o

jayd16 37 minutes ago [-]
Can you name one? Why so coy?
ramesh31 53 minutes ago [-]
No one needs another SaaS. Games are the real killer app for AI. Hear me out.

I've wanted to make video games forever. It's fun, and scratches an itch that no other kind of programming does. But making a game is a mountain of work that is almost completely unassailable for an individual in their free time. The sheer volume of assets to be created stops anything from ever being more than a silly little demo. Now, with Gemini 3.1, I can build an asset pipeline that generates an entire game's worth of graphics in minutes, and actually be able to build a game. And the assets are good. With the right prompting and pipeline, Gemini can now easily generate extremely high quality 2d assets with consistent art direction and perfect prompt adherence. It's not about asking AI to make a game for you, it's about enabling an individual to finally be able to realize their vision without having to resort to generic premade asset libraries.

johndough 35 minutes ago [-]
I tried using Gemini for asset generation, but have not yet found a good way to animate them. It does not seem to understand sprite sheets or bone-based animation. Do you know a solution for that?
ramesh31 31 minutes ago [-]
>It does not seem to understand sprite sheets or bone-based animation. Do you know a solution for that?

This is precisely what I'm running into as well. There's a few SaaS solutions that are ok, but I gave up after an attempt at building a pipeline for it. Sticking with building 4X/strategy card games that don't need character animations for now until the models catch up.

dawnerd 43 minutes ago [-]
Except all of the AI-created games posted to the various subreddits are awful. No one likes them, no one plays them. The ones that make it to Steam end up getting abandoned when the devs hit a performance wall.

Game development just isn’t something AI can do well. Good games are not just recreations of existing titles.

ramesh31 38 minutes ago [-]
>Except all of the ai created games posted to the various subreddits are awful. No one likes them, no one plays them.

As with anything else, 95% of it will always be crap. Taste is now the great differentiator.

skydhash 36 minutes ago [-]
High quality assets is orthogonal to fun. If you can create a fun concept with generic assets, I believe you may find an artist willing to produce the assets for you.
notjes 48 minutes ago [-]
All the apps you are using are made with AI.
erythro 38 minutes ago [-]
Not all of us get addicted to the rat race and wake up at 3am to run more Ralph loops. Some are perfectly content getting the same amount of work done as before, just with less investment of time and effort.