NHacker Next
All my clients wanted a carousel, now it's an AI chatbot (adele.pages.casa)
operatingthetan 24 hours ago [-]
My partner works at a nonprofit and they paid some consultant for a chatbot. The next month they were surprised to get a $2000 bill for API use, and at first wondered if the bot was really popular. The analytics revealed that very few conversations were happening.

The consultants apparently had the bot load and immediately feed the model a prompt just to generate a greeting for the user. This was happening on every page load. Bad consultants, bad bot.

not_that_d 24 hours ago [-]
The number of consultants who are well known, have a large presence in developer communities, give a lot of talks, and yet have no idea how to approach real-world problems is impressive.
raverbashing 24 hours ago [-]
"Bad consultants" you mean, that's the average consultant
enos_feedler 24 hours ago [-]
“It's about visibility, the fear of looking behind”

This sums up everything driving the tech sector right now. From execs at big tech to nobodies on X.

EDIT: thinking about the nature of it: the visibility fight comes from attention shrinking while channels and noise multiply, so visibility tactics go to extremes. And the fear of looking behind comes from the previous tech cycles and the thought: what if you had missed those? Maybe those with the most fear are the ones that did.

bambax 23 hours ago [-]
> right now

It's always been like this. I used to build websites in the 90s and it was exactly like that. It was also horrible. People who had no tech background whatsoever making decisions on which tech to use (PHP vs ASP vs ColdFusion, remember those?); overpaying agencies to make HTML "templates" that had to have round corners everywhere. Etc.

Not everything's great today, but it's a little less bad I think.

enos_feedler 22 hours ago [-]
I don’t know. I think back to my first dialup connection and getting internet for the first time. In no way do I remember fear being a driver. I remember people being curious. Nobody ran around saying you need to get on the internet or you will be left in the dust. Would be curious if anyone had examples of this if I am wrong. Youtube links to old news broadcasts or magazine print ad archive or something.
grebc 24 hours ago [-]
It’s FOMO and it works every couple of years because the execs who buy in are different to the last lot of execs who got promoted/canned.
stuaxo 24 hours ago [-]
Well, the marketing from the AI companies is working.
enos_feedler 24 hours ago [-]
That's the clever nature of the companies. They are playing on people's fear to drive adoption. It's a bit sickening to me
Thanemate 24 hours ago [-]
"Adopt or be left behind" — and the quality of the thing you're adopting relies heavily on how much training it receives from the users who are scared of being left behind.
halflife 24 hours ago [-]
These chatbots and Google login popups are my most hated features of the current web.

Obviously it's just a script embedded in the page, so it has no actual place in the design. The effect, especially on mobile, is this dance of starting to read a page, having it obscured by annoying popups, and trying (and failing) to close the popup with the hidden 12x12-pixel X button.

Just like the entire ads market, it’s all forgery to drive up clicks so owners can say to the clients that there is interaction.

Don’t get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles, where closing it is buried behind a menu that requires you to be a brain surgeon to interact with, instead of just clicking the ad itself. I currently have 15 tabs in Safari from ads that I inadvertently clicked.

menno-sh 16 hours ago [-]
My favorite (read: what makes me want to send the website's owner a glitter bomb/GDPR data request, whichever's worse) is when they have a chat window that plays an audible notification sound every 5 minutes, but only after they apparently sense that the tab has since been buried in a stack of 30+ other tabs.

I eventually got bored telling those chatbots to "go fuck yourself" so I've now picked up the admittedly fun micro-hobby of jailbreaking customer support bots.

gherkinnn 9 hours ago [-]
The carousel also had a political component: you can have every exec's pet project above the fold on the start page. Users develop banner blindness and scroll past without noticing. In an odd way everybody wins.

> The real irony is that building something genuinely simple, something that loads instantly and says exactly what it needs to say and nothing more, is often harder than bolting on a chatbot. But that's invisible work. Nobody sees the restraint.

This applies to almost anything. Sadly.

smelendez 7 hours ago [-]
The best was when the carousel rotated on its own with no user controls, so you had to wait for it to get to the content you actually wanted.
h05sz487b 24 hours ago [-]
The obvious solution is to implement a mock chatbot that answers from a set of pregenerated wrong answers. No one will know the difference.
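Playing along with the joke, a sketch of such a mock bot (all names hypothetical): a fixed pool of canned answers, picked deterministically from the question text so the same question always gets the same answer, which makes the bot look eerily consistent and bills zero tokens.

```typescript
// Tongue-in-cheek mock chatbot: no model, no API bill, just a
// deterministic pick from pregenerated non-answers.

const CANNED_ANSWERS = [
  "Great question! Please check our FAQ page.",
  "I'm sorry, I didn't quite catch that. Could you rephrase?",
  "Our team is looking into this. Is there anything else I can help with?",
];

function mockAnswer(question: string): string {
  // Cheap string hash so identical questions map to identical answers,
  // giving the illusion of a stable "personality".
  let hash = 0;
  for (const ch of question) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return CANNED_ANSWERS[hash % CANNED_ANSWERS.length];
}
```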
grebc 24 hours ago [-]
Genius.
eterm 24 hours ago [-]
> No pop-ups. No blinking corners. Just content

Your clients seem to have got what they wanted, or at least someone who has learned to write like one.

efilife 24 hours ago [-]
Come on, this is clearly human-written. People have been writing like this for a very long time
eterm 24 hours ago [-]
It isn't "clearly human-written" at all, the entire blog looks like LLM output, right from the very first post.

I'm not witch-hunting, there are just a lot of witches.

anotherevan 9 hours ago [-]
“Ever since I learned about confirmation bias I’ve been seeing it everywhere!”

The raging, “I don’t like this, so it must have been written by an LLM!” comments on HN have gotten so tiresome that I find when I see them I just down-vote them and move to the next thread. (Most of the time. Your witches comment captured my attention and prompted a response. Well done — the comment must have been written by a human.)

— No tokens were harmed in the production of this comment. —

eterm 48 minutes ago [-]
There's no rage, and often I like or agree with the article, but it's still written by an LLM.

That doesn't fill me with rage, just a sadness that people aren't sharing their own work and aren't using their own voice.

efilife 22 hours ago [-]
I just went through some of the posts and you are right. It's very suspicious, but I would say it's right at the edge of being plausibly written by a human. If it's LLM, then it's the first one I'm aware of that got me this good. I am usually the first one to point out that something reeks of LLM writing here (which I'm kinda ashamed of, considering how much I've been doing this).

Tbh the whole smolweb concept by this person seemed kinda weird right when I discovered it was a thing. It seems to not really be a thing, but the person is really trying to convince you that it is.

surgical_fire 19 hours ago [-]
Honestly, despite the regular "not X, just Y" constructs, I actually think it was written by a human. Or at the very least mostly written by a human.

Something about how the argument and rationale builds up does not scream LLM to me.

eterm 17 hours ago [-]
You may be sleeping on just how good LLMs have got at writing blog-posts.

Go ahead and ask your favourite one this:

> Can you draft a blog post titled, "All my clients wanted a carousel, now it's an AI chatbot!"

> Don't search the web, just go with vibes.

I did, and this was the result: https://richardcocks.github.io/chum/blogexample.html

Okay, not quite there, very much more obviously LLM than the OP, but a bit of tweaking, some feedback to drop the headings and the table, and:

https://richardcocks.github.io/chum/blogexample2.html

And that's with zero blog-writing "skills", with no memories, a fresh incognito session and only the title to prompt.

Complete with call-out:

> The feature was never really about the users. It was about the client feeling like they were keeping up. The technology changes. The psychology doesn't.

Complete with the horse-shit, "Honest dispatches from a decade in the web trenches"

johannesberlin 51 minutes ago [-]
Both examples reek of LLM writing, though; neither of them is good
surgical_fire 16 hours ago [-]
You may have a point. The example you posted was a bit more obvious to be the work of a LLM, but not by far.

The interesting bit is that I don't really care about the subject matter. I was browsing the comments section and the discussion of whether the blog post was AI generated piqued my interest, so I tried my hand at reading it to see if I agreed or not.

I wonder what to make of this. Once the lines between LLM written and human written are blurred, what is the outcome?

In some scenarios I think it's alright; I honestly don't care if a tutorial on how to set up an application is AI generated, as long as it is correct. Hell, I routinely use LLMs as a glorified web search for that exact thing.

Sometimes however it becomes pointless. An opinion piece being AI generated is little more than noise. What is even being attempted there? Raking in some adsense from page views? As long as people willingly engage with it, why stop?

The web has been for a long time a low-trust environment, and this exacerbates that. Why even bother to share an opinion.

copypaper 8 hours ago [-]
Same reason WordPress is the de facto standard for businesses. You can create a "full" website with a few clicks and add some plugins to make it look complete, despite it running like shit. It's all perception.
ludicrousdispla 23 hours ago [-]
>> A way of saying: we're keeping up.

Back in the day, websites could just put up an animated "under construction" gif.

Ozzie-D 22 hours ago [-]
Same energy as the carousel era. The client doesn't actually want a chatbot, they want to not feel behind. The question nobody asks is 'what would this chatbot actually do that a good FAQ page can't?' and usually the honest answer is nothing, but it looks modern and that's enough to get through the meeting.
dbuxton 22 hours ago [-]
I had the same experience with chatbots, but we shipped a chatbot module a year ago that helps with complex config questions by reading and answering based on a Salesforce Experience site.

I was skeptical but it gets a 68 NPS from users, even if we do get the occasional "why are you investing in AI I hate it" coming through the feedback channel.

As ever, the issue is "what problem are you solving". If it's that you want more people to put their hand up and talk to you/order something, chatbots seem like a bad solution. If it's that you have a ton of complex docs that people have to read in order to implement and use your product, it's not the solution but it's probably part of a solution.

luke5441 22 hours ago [-]
If you have the docs public assuming a good search engine you don't need the chat bot since users can use e.g. Google AI.
wuhhh 24 hours ago [-]
I stress over this with my own website-for-work. If I make the developer’s version of my site, who am I talking to? Other devs. If I make the version that appeals to agencies and casual users, there’s a constant voice in my head trying to drag me back to something simpler, lighter, judging me for that threejs hero section. As with all things, I guess it’s a matter of finding the right balance. Web development sure is in a very strange place and transitioning hard right now - off topic but I’m seeing more and more people looking for work and fewer and fewer job postings, especially for freelancers like myself. But maybe I’m not advertising AI bot integrations hard enough.
drawfloat 24 hours ago [-]
Are casual users crying out for ai chat bots? From my experience the only stakeholder pushing for those is the business themselves.
wuhhh 24 hours ago [-]
By casual users, I mean non technical people who might reasonably be on my website because they’re looking to commission work
xigoi 17 hours ago [-]
Yes. Do those people want a chatbot?
gerdesj 9 hours ago [-]
Try a really daft innovation: a phone line staffed by real people.
armenarmen 6 hours ago [-]
This will be considered ultra premium in about 3 years
pocksuppet 22 hours ago [-]
Show your clients McMaster-Carr. It's not "simple". It is efficient.
djeastm 22 hours ago [-]
I love the site, but it's also worth noting that because it is not mobile-friendly it can afford to take full advantage of its efficient catalog nature and not feel the need to make compromises. Sometimes I wish we had said "browsers are for desktops, apps are for tablets/phones" and never tried to combine the two.
try-working 22 hours ago [-]
I've built chatbot demos for big corps like Walmart and other non-tech brands. What they want is "something that looks AI." The problem with chatbots is they don't work.
cjs_ac 1 day ago [-]
I think an important subtlety here is that clients/‘normies’ look at different websites than we do, so the taste in websites that they cultivate is different from ours.
daveguy 9 hours ago [-]
In a world of carousels, banners, and chatbots. Be a McMaster-Carr.

Wait. That requires having a desirable and non-manipulative product or service? Hmmm... Lots of businesses are in for a rough time.

foxglacier 9 hours ago [-]
Wait, websites were adding cookie banners when they didn't need them just to look modern?! Those bastards! Could there also be newsletter signup pop-ups on sites that won't even send you any spam but they do it to look cool?
K0balt 20 hours ago [-]
So the solution is: simple, beautiful, effective websites with no unnecessary complications. And certified hand calculated hashes, to show that you invested heavily in the presentation. Human POW. Brb off to launch my new blockchain.
Martin_Silenus 21 hours ago [-]
Girl, give them ELIZA, they won't even notice.
tosti 19 hours ago [-]
How does them ELIZA make you feel?
rienbdj 24 hours ago [-]
Bring back lightbox!
pdntspa 9 hours ago [-]
I am so sick of monkey-see monkey-do in business.
mananaysiempre 24 hours ago [-]
> Then the trend quietly died, as trends do. Not because anyone decided carousels were bad. Just because something newer came along to copy.

> [...]

> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.

> [...]

> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.

> [...]

> No pop-ups. No blinking corners. Just content, clear and immediate.

It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.

And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.

fallpeak 23 hours ago [-]
Have courage and trust your own instincts. Unless one is extremely disagreeable it's very tempting to hedge and avoid outright saying "this is AI" just in case you're wrong, but if you're literate and regularly exposed to AI outputs your instincts are likely quite accurate.

In this particular case the linked article is definitely AI generated.

K0balt 20 hours ago [-]
OTOH I’ve had blog posts I wrote two decades ago vehemently called out as AI generated. “AI-generated style” unfortunately means writing that tested well in human A/B testing, now overrepresented in a style used largely by AI.

So if you write in a way that engages the reader, you’re going to struggle not to use em dashes and the occasional a/b contrast, because those are challenging the reader to engage… but when overused, they not only don’t have the intended effect (to break the reader out of passivity), they also constitute a new kind of sin.

So no, don’t “trust your gut”. Trust the math. Is it too much? Or is it just trying to jar you out of not engaging with the prose?

But yeah, I’d say this article is likely written primarily with AI. Which doesn’t mean it’s not guided with intention and potentially important, it just means the article was probably commissioned and edited by a human, not written by one.

lelanthran 9 hours ago [-]
> OTOH I’ve had blog posts I wrote two decades ago vehemently called out as AI generated. AI generated style unfortunately means writing that tested positively in human A/B testing, now over represented in a style used largely by AI.

Every time I see this claim, I ask for links to those blog posts. I have yet to get any links to the so-called "human" pattern that AI uses.

K0balt 5 hours ago [-]
This blog of my idle musings, specifically, has been a source of call-outs. In articles from back in 2013 of all things. I also noticed that (ChatGPT?) seems to have replied to one of my latest (2023) posts, which I find odd and improbable.

https://bogon-flux.blogspot.com

I get what you mean, though. To me I don’t see the hallmarks of AI writing, but you will find the occasional em-dash and contrasted constructions. I think some people see an em-dash and decide then and there that it’s AI generated, probably because they are illiterate by any reasonable measure of the term.

bombcar 7 hours ago [-]
I used an em-dash once in 1998 so you can't call all my AI slop out as AI slop.

Checkmate.

KaiMagnus 18 hours ago [-]
It's kind of funny when I open some books nowadays and the writing style and formatting just immediately scream LLM sometimes. Not because the book was AI-generated, most are too old, but because LLMs were simply trained on these exact books and are now reproducing their style, which I guess was either popular or selected during training.

Anyways, really hard to push through and I need to remind myself to judge the text by its meaning. But if it's some random blog, my "tolerance" is lower and I don't want to spend my time reading nonsense, I just can't stand the writing style anymore either.

eterm 23 hours ago [-]
Indeed, consider these two posts linked below also from this blog. They look the same, they maintain the same impersonal writing style. There's no humanity to it at all.

They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, an LLM.

Humans deviate a lot more than this; they use run-on sentences or lose the thread in their writing.

This blog however reads like every-other post on LinkedIn. Semi-professional tone, with a strong "You, Me" hook to most posts.

I encourage everyone to make an LLM-generated blog, don't post the articles anywhere, but generate one, to get a feeling for how these things write.

Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.

Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.

https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...

https://adele.pages.casa/md/blog/finding_flow_in_code.md

joshka 22 hours ago [-]
Is this comment LLM generated?
fallpeak 22 hours ago [-]
What does that have to do with anything? These days any piece of text may or may not be AI generated (my money would be heavily on "no" for the post you asked about), but either way it isn't blatant slop so we can't tell.

It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible"

mananaysiempre 22 hours ago [-]
I started off hedging but by the end of the comment came to think that AI use—or lack thereof—was actually beside the point. I have feelings with regards to the situation where “the situation” includes some largely irrelevant-to-writing things like the mainframization and the “feelings” are not nearly coherent enough to graduate to thoughts. Thus (unlike some others) I don’t think that calling out writers or warning readers about AI is all that useful (or for that matter courageous). With respect to writers who use AI due to a lack of confidence, it’s probably even harmful. (Saying that as a person who manages to absolutely suck in embarrassing ways in multiple foreign languages. And also in English but less obviously. And likely in my native language too due to lack of use.) Meanwhile, TFA makes a decent point, and I am in no position to criticize people for being wordy.

The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.

fao_ 8 hours ago [-]
With the amount of AI-generated slop content on the front of HN these days, I'm honestly reconsidering visiting this site in the first place. What's the point? It seems better to curate RSS from existing known-good sources.

The art of essay-writing seems to not be something people here care about any more. If a human didn't bother to write it, why should I bother to read it?! Just post up the bullet points you would feed the LLM, and let the people who want to do so, post it into their own LLMs so they can make the Content and shovel it into their eyeballs by themselves, instead.

mmooss 10 hours ago [-]
> if you're literate and regularly exposed to AI outputs your instincts are likely quite accurate.

There's no basis for that. The reason experts - for example, scientists in their own field - rely on objective evidence is that reasoning like the parent's is highly unreliable. What the evidence shows is that people way overestimate their own intuition. It's not 'courage', it's foolishness.

franga2000 23 hours ago [-]
LLMs don't "own" this writing style. By definition they can't - they were trained on human writing after all! People wrote like this before and that's fine. You might not like the style, but saying it's because LLM writing has infested their brain is wrong, dismissive and dehumanising.
dxdm 23 hours ago [-]
Any style can cross the border into bad and get in the way of itself when it's turned up to 11, no matter who wrote it.

There've been stylistic fads before LLMs that were a thing, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.

Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.

franga2000 22 hours ago [-]
Yes, definitely, but the parent post was quite explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content.

Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.

Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too back when I was still in it.

dxdm 22 hours ago [-]
I think the original comment is much more open-minded towards the author of the TFA than you are to the commenter.

> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content

We might disagree here, but if we're strict they did not say "either/or", especially not explicitly. They raised two possibilities, but didn't exclude others.

> there's no reason to believe the style came from LLMs

They say "might" and "plausibly". I think there's no belief there until you assume it.

And even if: it's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.

I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally cause a pause and a think, before pressing submit.

mananaysiempre 21 hours ago [-]
Nah, the two possibilities were in fact exclusive in my mind (subject of course to the usual likelihood of any one thing I say being completely wrong, but that’s always in the background and not that useful to constantly point out). And it might be fair to say that it is unwise to attempt this kind of amateur psychoanalysis in public. It’s just that I don’t see being influenced by things you read as a big deal, let alone an accusation, let alone a dehumanizing one. See my neighbouring comment[1] for more on the last point.

[1] https://news.ycombinator.com/item?id=48073567

servo_sausage 23 hours ago [-]
Only to a limited extent; the fine-tuning of these models uses a much smaller, more curated set to generate tone and defaults.

The whole corpus is in there, but the standard style is tuned for.

piva00 23 hours ago [-]
I wonder how much marketing copy has poisoned the "default" writing style of LLMs, it surely has those undertones of pitching a sale in an uncanny valley way.
watwut 23 hours ago [-]
So I will say that things I read were not written in this style.

And the people I read had a better ability to not put in unnecessary, random, completely made-up facts or illogical implications.

mananaysiempre 22 hours ago [-]
LLMs don’t own these expressions in the same sense that McDonald’s doesn’t own salt: they are undoubtedly making use of a strong reaction that humans have had—have been having—long before; but they did develop a way to mash that button on an industrial scale like few before them. (With of course a great deal of help from humans, be it via customer surveys or RLHF; or you could call it help from Moloch[1] in that the humans unwittingly or negligently assembled themselves into a runaway optimizer.) So I think it’s fair to say that LLMs do own this style, as in the balance of ingredients, even if they do not own the ingredients themselves. And anyway nothing in the social perception of language cares about fairness: low-class English speakers did not invent negative agreement (“double negatives”), yet it will still sound low-class to you and even me (and my native language requires negative agreement).

As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.

For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)

As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.

Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.

(Not that I claim to be a particularly good writer.)

[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

K0balt 20 hours ago [-]
[dead]
fallpeak 23 hours ago [-]
[dead]
xtiansimon 21 hours ago [-]
> “…there’s this record-scratch feeling…”

The OP is a blog post. You’re talking about blog post writing. Maybe you just don’t like their style?

It’s also true llm second drafts are a thing.

And it’s true both can ‘record scratch’ you right out of attention.

As well as the now-present trend of readers being impatient and quickly bored.

And this criticism of writing style (for my take this article is perfectly readable)—what is the aim? Call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining you don’t like the soup.

sebzim4500 23 hours ago [-]
None of that feels like AI smell to me despite the "it's not X it's Y" framing. I can't really explain why though.
Legend2440 7 hours ago [-]
The hilarious irony of AI-generating an article about overusing AI.
xigoi 17 hours ago [-]
I hate AI slop, but I also hate the HN trend of “this article uses a rhetorical device, therefore it’s AI-generated.”
delusional 23 hours ago [-]
None of those 4 look like AI slop to me. They lack the strange non-sequitur nature these contrasting statements generally have when made by AI. The version of the third example I would expect from a clanker would be more like

> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine

Which of course doesn't connect to the rest of the article contents, because the AI doesn't have any intention in its writing.

oezi 22 hours ago [-]
I mostly agree but some recent experiences with voice chat bots give me pause:

FedEx now has a voice bot when you call, and it is kind of good and fast. I mean faster than navigating their website. It picks up directly after some boilerplate. It can understand me.

With website chatbots we could have similar leaps if they are done well and have access to CRM/ERP etc. to actually help you.
