Ask HN: Are AI dev tools lowering the barrier to entry for creating software?
sevensor 5 hours ago [-]
Every summer, my community pool has a cardboard regatta. Kids can use as much duct tape as they want to waterproof a cardboard box and paddle it 25 yards to the other side. Half of the vessels sink within a length or two and the kids have to swim to the edge of the pool. There’s no age limit, and last year a grown man entered a fully engineered catamaran design that beat all the others handily. The secret was using way more duct tape than anybody else.

AI dev tools are that catamaran. They’ll get you across the pool; you might even get half a mile from shore, but there you are, in the middle of the lake, sitting on cardboard and duct tape, wishing you knew how to swim.

drzzhan 4 hours ago [-]
The best analogy I've ever seen. Thank you for sharing!
joshstrange 4 hours ago [-]
> I am seeing more and more stories about people that don't know how to program are using AI to create software products.

They are, in every case I've seen, creating software _demos_. Those things will fall over under their own weight with 1-2 more iterations.

Someone with no coding experience can say "Make snake!", or try other contrived examples, and maybe even add a handful of features, but very quickly they will code themselves into a corner they can't get out of. Heck, I sometimes go 3-4 prompts deep on something with Aider, then git reset back once it turns out something isn't going to work out.

If someone has _fully launched_ a product using only AI to write _all_ the code (Press X to doubt) then it's either a product that will never grow past its initial feature set, or something trivially copied (with or without AI).

What AI tools may change is the ability for "ideas people" to create a basic MVP of the tool itself (I don't think you are going to get an LLM to churn out a whole SaaS codebase without a developer guiding it) and raise interest/funding/recruit others. That's not the "barrier to entry" lowering, that's just a "better slide deck".

n0rdy 42 minutes ago [-]
I'd say no, given today's state of AI tools, but it's difficult to predict the future. So far, I can see many excited non-tech people who can build simple things or demos. But the real complexity starts beyond that, once the solution needs to be deployed, maintained, extended with new features, debugged, etc. That's when it gets tricky.

I did a short experiment by trying to build an app with Cursor in a stack and domain I know nothing about. I got it to the first stage, and it was cool. But the app kept crashing once in a while with memory issues, and my AI friend kept coming up with solutions that didn't help, but made the code more and more overengineered and harder to navigate. I'd feel sorry for those who'd need to maintain tools like this at stages like the one I described. Maybe that's the state of future start-ups out there?

boshalfoshal 5 hours ago [-]
I think the super played out twitter adage has some merit to it: "it makes 10x devs 100x devs."

Those who already have a high level idea of what to do and roughly how to execute it benefit the most from LLMs at the moment. This is very good for purely "technical" devs in greenfield environments. Less useful for super large interconnected codebases, but tools are getting there.

It will not, however, magically make a bad dev a good one. A bad software product is not usually bottlenecked by the code it runs on; it's bottlenecked by user experience and product-market fit. That still requires some skilled human input, but that could also change soon. Some people have better product intuition than others but couldn't execute on complex code, so LLMs do help here to an extent.

As of 2025, I think you still need to be a pretty decent dev even with LLM assistance.

pockmarked19 3 hours ago [-]
Not just a high-level idea; you need to know exactly how it's done. If you don't at the start, then you will by the time you arrive at the working commit.

The exception, where you must know something up front, is design. If you ask AI to add something to your API, and do it repeatedly, you will end up with a very poorly designed API, with separate endpoints for updating separate fields in the same record, etc., which will happily keep working fine.
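The fragmented-API drift described above can be sketched in a few lines of Python (all names here are invented for illustration): prompting "add an endpoint" field by field accretes one narrow updater per column, where a deliberate design would use a single PATCH-style handler.

```python
# Hypothetical in-memory "record store" standing in for a real API backend.
records = {1: {"name": "Ada", "email": "ada@example.com"}}

# What accretes prompt by prompt: one endpoint/function per field.
def update_name(record_id, name):
    records[record_id]["name"] = name

def update_email(record_id, email):
    records[record_id]["email"] = email

# What an up-front design would favor: one PATCH-style updater that
# validates changes against a whitelist of mutable fields.
MUTABLE_FIELDS = {"name", "email"}

def patch_record(record_id, changes):
    unknown = set(changes) - MUTABLE_FIELDS
    if unknown:
        raise ValueError(f"unknown or immutable fields: {sorted(unknown)}")
    records[record_id].update(changes)
```

Both shapes "happily work fine" for any single request; the tech debt only shows up later, when every new field means another endpoint to write, document, and secure.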

Unless you knew what to do from the start, you’re going to make a lot of tech debt.

zlagen 4 hours ago [-]
After using AI chatbots for some time, I think they are not very useful for non-programmers beyond building small tools, which a non-programmer may find difficult to modify and polish. They still fail and produce subtle errors too often, so they are more useful for programmers, who already know what the AI is doing and can spot mistakes.
fxtentacle 2 hours ago [-]
I see no change. AI is the new "no code". Which means in both cases, projects outgrow their capabilities quite quickly and then they undergo a messy transition to traditional software development.
kanemcgrath 2 hours ago [-]
For people who want to learn programming, I think language models are a very powerful teaching tool. I have learned more programming than ever before because I have been able to ask questions and get answers directly relevant to what I'm working on. Maybe it depends on how you like to learn things, or maybe it works better because I already have a base amount of programming knowledge, but I tell people who want to learn programming to ask ChatGPT to teach them through a simple project.
Bjorkbat 3 hours ago [-]
Most of my observations have been that people are using it to make personal software. That is to say, software with an intended user base of just yourself and maybe friends and family.

For software meant to be consumed by the masses it's too unreliable for all the boring details, but if you want something that serves a specific purpose then sure, it seems to work really well.

Otherwise, though, I haven't really heard of any non-technical founders leveraging it to finally get their app off the ground.

ingigauti 2 hours ago [-]
I think it's going a lot lower. We are still at the horseless-carriage level with AI and coding, that is, using new tech in the old way.

I think we'll have a new programming language (natural language with rules). I'm biased though, as I've made that language :)

dvngnt_ 5 hours ago [-]
For prototyping and for helping people with some programming ability, yes. No-code tools have existed for years, so the barrier was already somewhat low for building basic applications.

I've had many friends with "app ideas", and using these tools can help them flesh out their value proposition.

bko 5 hours ago [-]
Do any of them actually flesh out "app ideas" with AI?

I have the same type of friends, and they would often ask me for help. But when you ask them for even a minimal amount of work, they often just give up.

For instance, have an idea for an app? Draw out the screens. You can use a pen and paper. Where does this button take you?

AI can help with that, sure, but it's still a lot of work to go over and iterate with AI.

ddgflorida 47 minutes ago [-]
Definitely has lowered the entry cost.
duxup 6 hours ago [-]
I feel like AI can get you started with less friction, but after that you need background knowledge to vet everything, knowledge that you need to learn ... by not using AI.

It's a dynamic I can't quite put a name on right now but I think that's a barrier.

simonhfrost 5 hours ago [-]
I'm sceptical that you can create entire apps. It might be good for getting an MVP off the ground, but once you need to modify code it gets exponentially more complicated because: 1) you're not familiar with the code the AI wrote, and 2) what it writes is generally more complicated.
dehrmann 5 hours ago [-]
The thing that's lowered it most in the last many years has been Roblox. What's raised it the most is the decline of desktop computing and the rise of app computing.
andrei_says_ 3 hours ago [-]
I’d say that my best use so far is as a more powerful autocomplete, proposing full lines of repetitive code and sometimes writing short methods / functions for very common tasks (“export to csv with these headers”).
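The "export to csv with these headers" task mentioned above is the kind of boilerplate meant here; a minimal Python sketch (function name and fields are illustrative, not from any particular codebase) might look like:

```python
import csv

def export_to_csv(path, rows, headers):
    """Write a list of dicts to a CSV file, keeping only the requested
    headers, in order; extra keys in the dicts are silently ignored."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=headers, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```

It is exactly this sort of well-trodden stdlib glue, written thousands of times in public code, where the autocomplete-style suggestions tend to be reliable.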

It saves time in searching documentation but sometimes hallucinates.

The key here is that I can tell the difference and I have spent time in many codebases and read up on code design theory.

So in my case it’s a multiplier of clear understanding and somewhat sufficient subject matter expertise.

For someone without expertise, the LLM quickly becomes a multiplier of the Dunning-Kruger Effect.

I know enough to not try and write an organic chemistry paper with an LLM. But Twitter tells everyone they can do a similar thing in the area of software engineering.

999900000999 5 hours ago [-]
Not really.

ChatGPT and friends are really good at grunt work, but as far as picking the tech stack and architecting an actual solution, they fall flat.

Plus, if you ever run into any real trouble, ChatGPT has a very nasty habit of just telling you to keep doing it the same way. I've had times where I'll post the same code in multiple LLMs and get multiple incorrect answers. All while I'm thinking: if I were an actual web developer, this would take 30 seconds...

apwell23 2 hours ago [-]
No
sergiotapia 3 hours ago [-]
Have you seen the quality of said products? The demand for engineers who actually know what they're doing will grow. If you actually know what you're doing and used to write code by hand (LOL!), AI just lets you fly.