It's kinda funny to me that many of the "pros" of this approach are the exact reasons so many abandoned MPAs in the first place.
For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.
We moved away from MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
PaulHoule 723 days ago [-]
I remember that all the web shops in my town that built Ruby on Rails sites efficiently felt they had to switch to Angular around the same time. They never regained their footing in the Angular age, although it seems they can finally get things sorta kinda done with React.
Client-side validation is used as an excuse for React, but we were doing client-side validation in 1999 with plain ordinary Javascript. If the real problem was “don't write the validation code twice”, surely the answer would have been some kind of DSL that code-generated or interpreted the validation rules for both the back end and the front end, not the fantastically complex Rube Goldberg machine of modern Javascript: wait wait wait wait and wait some more for the build machine, then users wait wait wait wait wait for React and 60,000 files worth of library code to load, then wait wait wait wait even more for completely inscrutable reasons later on (e.g. it's amazing how long you have to wait for Windows to delete the files in your node_modules directory).
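Roughly what I'm getting at, as a made-up sketch (not any particular library): define the rules once, interpret them on the back end, and code-generate plain 1999-style HTML attributes for the front end.

  // Hypothetical shared rule definitions -- the "DSL".
  type Rule = { required: boolean; maxLength: number; pattern?: RegExp };

  const signupRules: Record<string, Rule> = {
    email: { required: true, maxLength: 254, pattern: /.+@.+/ },
    name:  { required: true, maxLength: 80 },
  };

  // Back end: interpret the rules before accepting a POST.
  function validate(fields: Record<string, string>): Record<string, string> {
    const errors: Record<string, string> = {};
    for (const [key, rule] of Object.entries(signupRules)) {
      const value = fields[key] ?? "";
      if (rule.required && value.trim() === "") errors[key] = "required";
      else if (value.length > rule.maxLength) errors[key] = "too long";
      else if (rule.pattern && !rule.pattern.test(value)) errors[key] = "bad format";
    }
    return errors;
  }

  // Front end: code-generate plain HTML attributes from the same rules.
  function toAttributes(rule: Rule): string {
    return `${rule.required ? "required " : ""}maxlength="${rule.maxLength}"`;
  }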
berkes 723 days ago [-]
Even worse: Client-side validation and server-side validation (and database integrity validation) are all their own domains! I call all of these "domain logic" or domain validation just to be sure.
Yes, they overlap. Sure, you'll need some repetition and maybe, indeed, some DSL or tooling to share some of the overlapping ones across the boundaries.
But no! They are not the same. A "this email is already in use" is server-side (it depends on the case). A "this doesn't look like an email-address, did you mean gmail.com instead of gamil.com" is client-side, and a "unique-key-constraint: contactemail already used" is even further down.
My point is that the more you sit down (with customers! domain experts!) and talk or think all this through, the less it's a technical problem that has to be solved with DSLs, SPAs, MPAs or "same language for backend and UI". And the more you (I) realize it really often hardly matters.
You quite probably don't even need that email-uniqueness validation at all. In any layer. If you just care to speak to the business.
philderbeast 723 days ago [-]
"A "this doesn't look like an email-address"
Unfortunately this also needs to be done server side, unless you're trusting the client to send you information that is what you're expecting?
client side validation makes for a good user experience, but it does not replace the requirement to validate things server side, and many times you will end up doing the same validations for different reasons.
berkes 722 days ago [-]
"It depends".
If it's merely a hint for the user (did you make a typo?) there's no need to ensure "this is a valid email address". In fact, foo@gamil.com is a perfectly valid email address, but quite likely (though not certain!) not what the user meant.
I've seen hundreds of email-address-format validations in my career, server-side. The most horrible regexps, the most naïve assumptions[1]. But to what end?
What - and this is a question that a domain expert or the business should answer - does it matter if an email is valid? trump@whitehouse.gov is probably valid. As is i@i.pm[2]. What your business expert quite likely will answer is something along the lines of "we need to be sure we can send stuff so that the recipient can read it", which is a good business constraint, but one that cannot be solved by validating the format of an email address. One possible correct "validation" is to send some token to the email address; when that token is then entered, you - the business - can be sure that, at least at this point in time, the user can read mail at that address.
[1] A recent gig was a SaaS where a naïve implementor, years ago, decided that email addresses always had a TLD of two or three letters: .com or .us and such. Many of their customers now have .shop or somesuch.
[2] Many devs don't realize that `foo` is a valid email address. That's foo without any @ or whatever. It's a local one, so rather niche and hardly used in practice; but if you decide "I'll just validate using the RFC", you'll be letting through such addresses too! Another reason not to validate the format of email: it's arbitrary and you'll end up with lots of emails that are formatted correctly, but cannot be used anyway.
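To make that token flow concrete, a rough sketch (in-memory store and a hypothetical sendMail; a real one would persist and expire tokens):

  import { randomBytes } from "crypto";

  const pending = new Map<string, string>(); // token -> email; use a real store with expiry

  async function startVerification(email: string, sendMail: (to: string, body: string) => Promise<void>) {
    const token = randomBytes(16).toString("hex");
    pending.set(token, email);
    await sendMail(email, `Your confirmation code: ${token}`);
  }

  function confirm(token: string): string | undefined {
    const email = pending.get(token);
    pending.delete(token);
    return email; // only now do you know the address could receive mail, at this point in time
  }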
bigtunacan 721 days ago [-]
Just because some places implemented the validation wrong does not mean the validation should not occur.
The validation is there to catch user mistakes before sending a validation email and ending up with an account that can't be used.
berkes 721 days ago [-]
You are missing the point. Sorry for that.
It doesn't matter if an email has a valid format: that says almost nothing about its validity. The only way you can be sure an address can receive mail (today) is by sending mail to it. All the rest is theatre.
And all this only matters if the business cares about deliverability in the first place.
bigtunacan 715 days ago [-]
No, I understood your point and I agree sending the email and getting some type of read receipt is necessary.
You seem to think that because of this, client validation should be skipped. On that point I disagree. If you can tell that it's not a valid email address (bigtunacan@goog is obviously invalid since it's missing a TLD) then no email should be sent. Good UX is to let the customer/user know there is a mistake in the email address.
dsego 723 days ago [-]
I think the main concern about frontend validation dates from before HTML5 came along with validation attributes. You can easily produce HTML input validation attributes from a Yup schema, for example by using its serialization feature (https://github.com/jquense/yup#schemadescribeoptions-resolve...).
Here is an example from some silly code I wrote a while back testing htmx with deno
https://github.com/dsego/ssr-playground/
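The gist of it, as an untested sketch (the exact shape of schema.describe()'s output varies a bit between Yup versions, so treat it as an illustration):

  import * as yup from "yup";

  const emailField = yup.string().required().email().max(254);

  // Map Yup's schema description onto plain HTML validation attributes.
  function htmlAttrs(schema: yup.AnySchema): Record<string, string | boolean> {
    const desc = schema.describe() as Record<string, any>;
    const tests: Array<{ name?: string; params?: any }> = desc.tests ?? [];
    const attrs: Record<string, string | boolean> = {};
    if (desc.optional === false || tests.some((t) => t.name === "required")) attrs.required = true;
    for (const t of tests) {
      if (t.name === "max" && t.params?.max != null) attrs.maxlength = String(t.params.max);
      if (t.name === "email") attrs.type = "email";
    }
    return attrs;
  }

  // htmlAttrs(emailField) -> roughly { required: true, maxlength: "254", type: "email" }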
oweiler 723 days ago [-]
I always saw client side validation for improving UX and server side validation for improving security.
carlmr 723 days ago [-]
I once needed to order something in the company's ordering system, but due to some glitch my manager wasn't set as an approver (it had worked a few weeks before). And if you wanted to change approvers, you needed the current approver to approve the change. But that wasn't set. A classic chicken-and-egg situation.
The button for changing approvers was greyed out, so out of boredom I changed it to active in the client-side code. Lo and behold, after clicking the now-active button I got a box for selecting the approver.
I could select any user in the company. Even the CEO or myself.
I did the right thing and mentioned this to our IT Security department. Since obviously this could be used to order really expensive stuff in the name of the CEO or whoever.
They came back and told me that the vendor (I'm not sure I want to mention them here because they're happy to sue) has known about this for 3 years and won't fix it.
Takennickname 723 days ago [-]
Oracle. Must be.
jhartwig 722 days ago [-]
ServiceNow
sublinear 722 days ago [-]
Indeed servicenow is the clunkiest and saddest software in use today. Unbelievably terrible. You have to see it to believe it.
siefca 715 days ago [-]
Even worse^2, client-side validation may differ from server-side validation and from database-side validation. I cannot imagine client-side checking of the validity of a phone number using a freshly downloaded database of current carriers and assignment rules in different countries; I prefer to maintain it server-side, even though it would be possible (thanks to the guys from Google and their Libphonenumber). But again, I don't trust the client, so it needs to be re-validated later. Then it will be converted to some native data structure in order to make things faster and unified, and later it will go to a database with its own coercion and validation routines just before the application does a query. This validation trusts the format, so it will just make sure the result of conversion is correct. But then the query itself carries a validation aspect: when the number must be unique in some context, it will return an error, which will bubble up to the user.
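E.g. something like this on the server (a sketch assuming the libphonenumber-js port; the client would at best give a cheap format hint):

  import { parsePhoneNumberFromString } from "libphonenumber-js";

  // Server-side check plus coercion to one canonical format before it goes anywhere near the DB.
  function normalizePhone(raw: string, defaultCountry: "US" | "PL" | "DE" = "US"): string | null {
    const parsed = parsePhoneNumberFromString(raw, defaultCountry);
    if (!parsed || !parsed.isValid()) return null; // reject, or ask the user again
    return parsed.number; // E.164, e.g. "+14155552671" -- the DB layer then applies its own constraints
  }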
narag 723 days ago [-]
A "this doesn't look like an email-address, did you mean...
Stop right there.
I'm tired of receiving mail from people that gave my email address as if it was their own.
Never ever accept an email address unless you can instantly confirm it's valid by sending an email and waiting for an answer. If the user can't access their email on the spot, just leave it blank and use other data as the key.
I wish they included that in GDPR or something.
cromulent 723 days ago [-]
I think this is the point of the client side check though - if the user makes a typo (e.g. gamil.com) then the client side validation can prompt them to check, before the server sends the validation email and annoys the owner of the typoed address.
narag 722 days ago [-]
My point is that it doesn't matter if some arbitrary string looks like an email address, you need to check.
If it isn't valid the server won't annoy anyone. The problem is that the address is valid. And not theirs, it's mine.
The moment the users need to be careful, they will. Make the problem theirs, not mine.
"Sorry sir, the address you provided returns error" or "haven't you received the confirmation email YET? really? there are other customers in the line" and see how soon they remember the right address, perfectly spelled.
Even big-ass companies like Paypal, which have no problem freezing your monies, allow their customers to provide unchecked email addresses and send income reports there (here).
RussianCow 720 days ago [-]
You can (and should) definitely do both. But needing to validate that a user has access to the entered email address doesn't mean you should do away with client-side validation entirely.
berkes 722 days ago [-]
You missed my point, I'm afraid.
I meant that it very much depends on the business-case (and hence laws and regulations) what exactly you'll have to verify, and therefore where you verify and validate it.
Do you need an address to contact people on? Then you must make sure that the user can read the emails you send to it. Do you merely use it as a login-handle? Then it probably only has to be guaranteed unique. Do you need to just store it in some address-book? Then just checking roughly the format is probably enough. "It depends".
CRConrad 719 days ago [-]
> Do you need an address to contact people on? You'll must make sure that the user can read the emails sent to that by you. Do you merely use it as a login-handle?
Pretty humongous dick move to use someone else's email address as one's own login for some website, wouldn't you agree? What if it's a popular website, and the owner of the address would like to use it for their id; why should anyone else be able to deprive them of that?
And thus it's also a dick move from the site operator to allow those dicks to do that. So no, it doesn't depend: Just don't accept untested email addresses for anything.
berkes 719 days ago [-]
Again: this depends on the business case.
Not all web-applications with a login are open for registration. Not all are public. Not all are "landgrab". Not all have thousands of users or hundreds of registrations a week. Not all are web applications and not all require email validation.
Some do. But, like your niche example proves: the business-case and constraints matter. There's no one size fits all.
CRConrad 719 days ago [-]
> I'm tired of receiving mail from people that gave my email address as if it was their own.
Did you mean “receiving mail intended for people that gave my email address”? Because that's how I usually notice that they did.
v0idzer0 723 days ago [-]
It really wasn't about client side validation or UX at all. You can have great UX with an MPA or SPA. Although I do think it's slightly easier in an SPA if you have a complex client like a customizable dashboard.
Ultimately it's about splitting your app into a server and client with a clear API boundary. Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities. This may be worse for small teams but is significantly better for large teams (like Facebook and Google, who started these trends).
One example is your iOS app can hit the same API as your web app, since your server is no longer tightly coupled to html views. You can version your backend and upgrade your clients on their own timelines.
PaulHoule 723 days ago [-]
Ouch!
I’ve worked in two kinds of organizations. In one of them when there is a ‘small’ ticket from the viewpoint of management, one programmer is responsible for implementation but might get some help from a specialist (DBA, CSS god, …)
In the other a small ticket gets partitioned to two, three or more sub teams and productivity is usually reduced by a factor more than the concurrency you might get because of overhead with meetings, documentation, tickets that take 3 sprints to finish because subtasks that were one day late caused the team to lose a whole sprint, etc.
People will defend #2 by saying that's how Google does it or that's how Facebook does it, but those monopolists get monopoly rents that subsidize wasteful practices, and if Wall Street ever asks for "M0R M0NEY!" they can just jack up the ad load. People think they want to work there but you'll just get a masterclass in "How to kill your startup."
irrational 723 days ago [-]
I’ve worked at the same company for a long time. For about 15 years, my team was embedded in a business team and we managed things however we wanted. We could move very quickly. Then, about 5 years ago, we were moved into the tech organization. We were forced to adopt agile, sprints, scrum masters, jira, stand ups, etc. It probably takes 10 times longer to get the same amount of work done, with no improvement in quality. The amount of meetings is truly astonishing. I’m convinced the tech org mainly exists to hold meetings and any development work that occurs is purely accidental.
chiefalchemist 723 days ago [-]
But is your loss from adopting those tech standards, or from being un-embedded in the business team?
Tech orgs and those standards exist because:
- tech generally doesn't understand business
- the business struggles to express its needs to tech
Embedding worked for you, but how big was your team? Could that scale?
I'm not questioning your success or your frustrations, but how unique was the situation for your success?
0xblinq 722 days ago [-]
Same experience as me. Scrum is a disease in this industry.
tomnipotent 723 days ago [-]
What you may not see is quality-of-life improvements for executive management, planning, and scheduling. Communication and alignment can be both more important and more profitable than just velocity alone.
charrondev 723 days ago [-]
I work at a company that makes a very clear distinction between API and View layer. Our API spans 200+ endpoints. We have 6 backend and 6 frontend developers.
As far as iterations go it's very rapid. Our work teams are split into 1 backend and 1 frontend developer. They agree on an API spec for the project. This is the contract between them, and the frontend starts working immediately against a mock or very minimal version of the API. Iterate from there.
respondo2134 722 days ago [-]
This is a pretty popular approach, and I use it sometimes, but "agree on an API spec for the project" does gloss over how challenging and time consuming this can be. How many people here have ever gotten their API wrong? (raises hand). There's still a lot of ongoing coordination and communication.
charrondev 722 days ago [-]
Oh certainly. It’s pretty rare to get things exactly right on the first try. For this reason we hide new endpoints from our public open api spec and documentation until we are satisfied that some dust has settled on them a little bit.
Still, you only have to get it mostly right. Enough to get started. This only starts to become a huge problem when the endpoint is a dependency of another team. When you're in constant communication between the developer building the API and the developer building the client, it's easy to adjust as you go.
I find a key part of a workflow like this, though, especially if you have multiple teams, is to have a lead/architect/staff developer (or whatever you may call it) be the product owner of the API.
You need someone to ensure consistency and norms, and when you have an API covering functionality as broad and deep as the one I work on, it's important to keep in mind each user story of the API:
- The in-house client using the API. This generally means some mechanism to join or expand related records efficiently and easily, and APIs providing a clear abstraction over multiple different database tables when necessary.
- The external client, used by a third party or the customer directly for automation or custom workflows. The biggest thing I've found helps these use cases is to be able to query records by a related field. For example, if you have some endpoint that allows querying by a userID, being able to also query by a name or foreignID passed over SSO can help immensely (rough example below).
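Roughly, with made-up endpoints just to illustrate the two audiences:

  // In-house client: expand related records in one round trip instead of N follow-up calls.
  await fetch("/api/articles?expand=author,lastComment");

  // External integration: query by a related field instead of an internal ID.
  await fetch("/api/users?name=jdoe");
  await fetch("/api/users?foreignID=sso-4821");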
treeman79 723 days ago [-]
Yep. I was in a type 1 startup. Stuff got done fast.
Company forced us to type 2 using Angular. Projects that used to take a couple of days for one person became multi-month efforts for a dozen developers across three teams.
ResearchCode 723 days ago [-]
Sounds like the problem is having "sprints". As far as I know, most teams at Google and Meta don't.
respondo2134 722 days ago [-]
They need scaled agile, where every 5 or 6 sprints you group them into a program increment, with extra overhead and even more ridiculous symbolic rituals. Your team is held to an arbitrary commitment months out, then executives shift the ground under your feet and make everything irrelevant. Dev teams love it!
</s>
ResearchCode 722 days ago [-]
It's remarkable that non-tech enterprises need all this agile for poor internal CRUD apps, but FAANG-scale product development somehow does not.
mdmglr 723 days ago [-]
What do teams at google and meta practice?
dial9-1 723 days ago [-]
it's called resume-driven development
wahnfrieden 723 days ago [-]
Speaks some truth to Graeber's Bullshit Jobs thesis
tored 723 days ago [-]
Generally you don't want to reuse the same API for different types of clients; you want backends for frontends (BFF) that are specialized for each use and can be moved forward at their own pace. The needs and the requirements differ a lot between a browser, an app and a server-to-server call.
And just because you serve HTML doesn't necessarily mean that your backend code is tightly coupled with the view code; HTML is just one adapter of many.
A boundary doesn't get better just because you slip an HTTP barrier in between; this is the same type of misconception that has driven the microservice hysteria.
gofreddygo 723 days ago [-]
> you want backends for frontends (BFF) that are specialized for each use
third time I've heard this thing and the reasoning still escapes me.
First there's ownership. Backend team owns API. Frontend teams own clients (web/android/ios/cli) etc. Do you now have a BFF for each client type? Who owns it then ? Don't you now need more fullstacks ?
there's confusion.
Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
And there's performance.
Adding more hops isn't making it faster, simpler or cheaper.
Isn't it better to have the API team own the single source of ownership?
andrekandre 723 days ago [-]
(not the op so this is jme...)
> Do you now have a BFF for each client type? Who owns it then ? Don't you now need more fullstacks ?
everyone has an opinion, but ime ideally you'd have 1 bff for all clients from the start
> there's confusion. Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
yep, i have literally experienced the chaos this can cause, including the endless busywork to unify them later (usually it's unify behind the web/html bff, which breaks all kinds of frontend assumptions)
> Isn't is better to have the API team own the single source of ownership ?
it depends on what 'api team' means... but ideally the bff has its ownership separate from 'backend'; whether that is in the 'api team' or outside is less important i think, ime
but... ideally this separation of ownership (backend backend, front end for backend) allows each to focus on the domain better without mixing up, say, localization in the lower-level api's etc
iow having a bff is sort of like having the view model as a server... that way multiple clients can be dead simple and just map the bff response to a ui and be done with it
(thats the ideal as i understand it anyways)
respondo2134 722 days ago [-]
Companies do this, but it is really hard to support. I prefer teams that own an entire vertical slice. Then they know their API and, more importantly, the WHY: why their API does what it does and how it does it. A BE team can never know the entire context without exposure to the end use IME, and there is far less ownership. YMMV and it will ultimately come down to how your company is organized.
tored 723 days ago [-]
> Don't you now need more fullstacks ?
Yes. I'm generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android are usually complex as it is, so they are typically specialized, but I would still keep them in the team.
Specialized teams not only create synchronization issues between teams but also create different team cultures.
What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
Solutions also have a tendency to become suboptimal because no technician has a general overview of the problem from start to finish. And it is also quite common that the same problem is solved multiple times, once for each team.
By making BFFs specialized, instead of the teams, you don't need to spend time creating and designing a generalized API. How many hours haven't been wasted on API design? It adds nothing to customer satisfaction.
This also means that you separate public and private APIs. External consumers should not use the API as your own web client.
Specialized BFFs are not only about having a good fit for the client consuming them but also about giving different views of the same underlying data.
E.g. assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API, but for the web client that serves the final version of the article not at all; it shouldn't even be aware that the concept of revisions exists.
Creating a new BFF is as easy as copy&pasting an existing one. Then you add and remove what you need.
The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
What, then, is different views of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
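A rough sketch of what I mean (made-up schema; db.query stands in for something like node-postgres): the BFF ships exactly the view the page needs, with the join done in SQL rather than by the client over HTTP.

  // BFF handler for the public article page: one query shaped for that page, not a 1:1 table dump.
  type Db = { query: (sql: string, params: unknown[]) => Promise<{ rows: any[] }> };

  async function getArticlePage(db: Db, slug: string) {
    const { rows } = await db.query(
      `SELECT a.title,
              a.published_at,
              u.display_name AS author,
              count(c.id)    AS comment_count
         FROM articles a
         JOIN users u         ON u.id = a.author_id
         LEFT JOIN comments c ON c.article_id = a.id
        WHERE a.slug = $1
          AND a.status = 'published'  -- the "final version" view; revisions stay in the admin BFF
        GROUP BY a.id, u.display_name`,
      [slug],
    );
    return rows[0] ?? null;
  }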
By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture; it is usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relational model, you can't realistically express that with 1:1 only.
By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
Ownership of a BFF should ideally be by the ones consuming it.
iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
BFF is nothing more than an adapter in hexagonal architecture.
btreecat 723 days ago [-]
> Yes. I'm generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android are usually complex as it is, so they are typically specialized, but I would still keep them in the team.
Right, why have someone _good_ at a particular domain who can lead design on a team when you can have a bunch of folks who are just OK at it, and then lack leadership?
> Specialized teams not only create synchronization issues between teams but also create different team cultures.
Difference in culture can be cultivated as a benefit. It can allow folks to move between teams in an org and feel different, and it can allow for different experimentation to find success.
> What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
I've seen this be true when I was by myself doing everything from project management, development, testing, and deployment. Orgs can have multiple stakeholders who might throw a flag at any moment or force inefficient processes.
> Solutions also have a tendency to become suboptimal because no technician has a general overview of the problem from start to finish. And it is also quite common that the same problem is solved multiple times, once for each team.
Generalists can also produce suboptimal solutions because they lack deeper knowledge and XP in a particular domain, like DBs, so they tend to reach for an ORM because that's a tool for generalists.
> By making BFFs specialized, instead of the teams, you don't need to spend time creating and designing a generalized API. How many hours haven't been wasted on API design? It adds nothing to customer satisfaction.
Idk what you're trying to claim, but API design should reflect a customer's workflow. If it doesn't, you are doing it wrong. This requires both gathering of info and design planning.
> This also means that you separate public and private APIs. External consumers should not use the API as your own web client.
Internal and external APIs are OK, this is just a feature of _composability_ in your API stack.
> Specialized BFFs are not only about having a good fit for the client consuming them but also about giving different views of the same underlying data.
If the workflow is the same, you're basically duplicating more effort than if you just had a thin client for each platform.
> E.g. assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API, but for the web client that serves the final version of the article not at all; it shouldn't even be aware that the concept of revisions exists.
Based on what? Many comment systems or articles use an edit notification or similar for correcting info. This is a case by case basis on the product.
> Creating a new BFF is as easy as copy&pasting an existing one. Then you add and remove what you need.
That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client.
> The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
> What, then, is different views of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
Yes, those APIs are not being designed correctly, but I think you said folks are wasting too much time on design, so I'm not sure what you're arguing for here other than to not force your clients to do excessive business logic.
> By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture; it is usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relational model, you can't realistically express that with 1:1 only.
Yet ORMs are tools of generalists. I agree they are generally something that can get in the way of a complex data model, but they are fine for like a user management system, or anything else that is easily normalized.
> By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
That depends a lot on how the orm is being used.
> Ownership of a BFF should ideally be by the ones consuming it.
Why? We literally write clients for APIs we don't own all the time, whenever we call out to an external/3p service. Treat your client teams like a client! Make API contracts, version things correctly, communicate.
> iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
The workflows should be the same. The main difference between any clients is the inputs available to the user to interact with.
> BFF is nothing more than an adapter in hexagonal architecture.
That's what a client is...
tored 722 days ago [-]
You are comparing apples with oranges. I'm talking about organization, you about individual developers.
I can have a fullstack dev who is better than a specialist. Specialist only means that they have specialized in one part of the architecture; that doesn't necessarily mean that they solve problems particularly well, that depends on the skill of the developer.
And the point is that even if they do have more skill within that domain, the total overall domain can still suffer. Many SPAs suffer from this: each part can be well engineered but the user experience is still crap.
If your developers is lacking in skill, then you should definitely not split them up into multiple teams. But again I'm talking about organization in general, that splitting teams has a devastating effect on organization output. Difference in culture will make it harder to move between teams, thus the organization will have much more difficult time planning resources effectively.
BFF is all about reflecting the need of the client, but the argument was that a generalized API is better because of re-usability. The reason why you split into multiple BFFs is that the workflow isn't the same; it differs a lot between a web client and a typical app. If the workflow is the same you don't split, that is why I wrote BFF per client type, a type that has a specific workflow (need & requirement).
> This is a case by case basis on the product.
Of course, it was an example.
> That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client
I'm talking about the server here, not the client.
> That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
But the authentication and redirects will probably be different, so you can reuse a service (class) for updating the model, but have different controllers (endpoints).
> Yes, those API are not being designed correctly
Every generalized API will have that problem to various degrees, thus BFF.
> Yet ORMs are tools of generalists.
Oh, you think a fullstack is a generalist and thus doesn't know SQL. Why do you believe that?
> That depends a lot on how the orm is being used.
Most ORMs, especially if they are of type active record, just misses that mark entirely when it comes to relationship based data. Just the idea that one class maps to a table is wrong on so many levels (data mappers are better at this).
ORM entities will eventually infect every part of your system; thus there will be view code that has entities with a save method on them, and the model will be changed from almost everywhere, impossible to track and refactor.
Performance is generally bad, thus most ORMs has an opaque caching layer that will come back and bite you.
And typically you need to adapt your database schema to what the ORM manages to handle.
> We literally write clients for APIs we don't own all the time,
The topic here is APIs you control yourself within the team/organization. External APIs, either ones that you consume or ones you need to expose, are a different topic; they need to be designed (more). The point is that internal APIs can be treated differently than external ones; no need to follow the holy grail of REST for your internal APIs. Waste of time.
But even external APIs that you need to expose can be subdivided into different BFFs; no need to squeeze them into one. This has the benefit that you can spend less time on the overall design of the API, because the API is smaller (fewer endpoints).
> That's what a client is...
I'm specifically talking about server architecture here; the client uses the adapter.
CRConrad 719 days ago [-]
> If your developers is lacking in skill
Are. One developer is, several developers are.
> Most ORMs, especially if they are of type active record, just misses that mark entirely
Miss. One ORM misses, several ORMs miss. (You did fix "are", after all!)
> Performance is generally bad, thus most ORMs has
Have. One ORM has, several ORMs have.
Come on, it's not that damn hard.
siefca 715 days ago [-]
Agreed! There are many things in the IT industry that are prone to this kind of almost magical thinking, and "boundaries" / "tight coupling" is one of them. I realized that when I tried to actually compare some of the stuff I had been doing at work over the years, being fascinated with uncoupling things. Well, if you start measuring it, even at the top level (time, people, money spent), then it is so clear that there are obvious tight couplings at the architecture level (like data on the wire containing some structure, or transferring the state of an application), and it is very tempting to remove them. But then we may actually find ourselves having a subtle tight coupling, totally not obvious, but resulting in a need for two teams or even two tech stacks and a budget more than twice the size because of communication / coordination costs.
towawy 723 days ago [-]
This development style might be a better DX for the teams. But Facebook on the web is an absolute dumpster fire if you use it in a professional capacity.
You can't trust it to actually save changes you've made; it might just fail without an error message, or sometimes it soft-locks until you reload the page. Even on a reliable connection. Error handling in SPAs is just lacking in general, and a big part of that is that they can't automatically fall back to simple browser error pages.
Google seems to be one of the few that do pretty well on that front, but they also seem to be more deliberate about which products they build SPAs for.
myth2018 723 days ago [-]
> Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities
How desirable this is depends on the UI complexity.
Complex UIs as the ones built by google and facebook will most likely benefit from that.
Small shops building CRUD applications probably won't. On the contrary: the user requirements often cross-cut client and server-side code, and separating these into two teams adds communication overhead, in the best case.
Moreover, experience shows that such separation/specialization leads to bloated UIs in what would otherwise be simple applications -- too many solutions looking for problems in the client-side space.
goodSteveramos 723 days ago [-]
There is no reason other than poorly thought out convenience to make the webbrowser/webserver interface the location of the frontend/backend interface. You can have a backend service that the web server and mobile apps all get their data from.
scns 723 days ago [-]
When a company gets to the stage where they actually need a mobile app, it is pretty easy to add API endpoints in many/most/all? major web frameworks. Starting out with the FE/BE split slows you down immensely.
siefca 715 days ago [-]
IMHO it is completely doable to do a state transfer with HTML to a mobile device instead of writing a separate application using a separate technology. Then we can deal with coupling server-side, e.g. the "view's team" can use some templating system and the "core team" can play with logic using JSP-Model2 architecture or something similar.
bobthepanda 723 days ago [-]
There is a third option, which is that FE-facing teams maintain a small server side application that talks to other services. That way the API boundary is clearly defined by one team.
It sounds a lot more annoying to have to manage one client and many servers instead.
ratorx 723 days ago [-]
Or even skip the DSL and use JS for both client and server, just independently. Validation functions can/should be simple, pure JS that can be imported from both.
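A trivial sketch:

  // validators.ts -- pure, dependency-free; imported by both the form code and the route handler.
  export function usernameError(value: string): string | null {
    if (value.trim().length < 3) return "Must be at least 3 characters";
    if (!/^[a-z0-9_]+$/i.test(value)) return "Letters, digits and _ only";
    return null;
  }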
_heimdall 723 days ago [-]
Validation logic is surprisingly simple but almost always lives in different domains. Unique columns are a great example: the validation has to happen at the database layer itself, and whatever language is used to call it will just be surfacing the error.
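For instance (a sketch assuming Postgres behind a node-postgres-style client; 23505 is Postgres's unique_violation code):

  type Db = { query: (sql: string, params: unknown[]) => Promise<unknown> };

  async function insertContact(db: Db, email: string) {
    try {
      await db.query("INSERT INTO contacts (email) VALUES ($1)", [email]);
      return null;
    } catch (err: any) {
      // The database enforced uniqueness; this layer only translates the error for the UI.
      if (err?.code === "23505") return { field: "email", message: "already in use" };
      throw err;
    }
  }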
Language and runtime decisions really need more context to be useful. JS everywhere can work well early on, when making a small number of devs as productive as possible is the goal. When a project scales, different priorities usually take over in parts of the stack, making JS a worse fit.
hdjjhhvvhga 723 days ago [-]
> felt they had to switch to Angular about the same time and they never regained their footing in the Angular age
And in this case what actually happened is exactly what we had expected would happen: tons of badly-written Angular apps that need to be maintained for the foreseeable future, because at this point nobody wants to rewrite them, so they become Frankensteins nobody wants to deal with.
CRConrad 719 days ago [-]
And people want to complain about COBOL.
moritzwarhier 722 days ago [-]
> then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. amazing how long you have to wait for Windows to delete the files in your node_modules directory)
As far as I know, Windows Explorer has been extremely slow for this kind of operation for ages.
It's not even explainable by requiring a file list before starting the operation; I have no idea what it is about Windows Explorer, it's just broken for such use cases.
Just recently, I had to look up how to write a robocopy script because simply copying a 60GB folder with many files from a local network drive was unbelievably slow (not to mention resuming failed operations). The purpose was exactly what I wrote: copy a folder in Windows explorer.
What does this have to do with React or JavaScript?
nsonha 723 days ago [-]
Can you argue a bit more genuinely and not pick on such a minor point as validation? I think the parent mentioned other points? How about the logical shift to let the client do client things, and the server do server things? The server concatting HTML strings for billions of users over and over again seems pretty stupid.
bigtunacan 723 days ago [-]
No more stupid than concatting json for those same users
nsonha 721 days ago [-]
Why not stress the argument further and say the server "concats" HTTP strings or SQL strings? It's because of the nonsense from the web platform that inefficient text-based transports such as JSON became prevalent in the back end, btw.
beezlewax 722 days ago [-]
But you can download another node package from npm to delete those other npm packages: npkill. For whatever reason this is, as they say in the JavaScript world, "blazingly fast".
cutler 723 days ago [-]
Wasn't Ember the idiomatic choice before React? I don't remember Angular being that popular with Rails devs generally.
Glyptodon 723 days ago [-]
I definitely associate it (Angular) with Python BE devs for some reason.
benbristow 723 days ago [-]
It was/is quite popular with .NET developers due to TypeScript being very similar to C# and implementing similar patterns like dependency injection (I know dependency injection/IOC isn't .NET specific).
_heimdall 723 days ago [-]
This was my experience as well. Angular became really popular in enterprise teams that were already full of devs with a lot of C# and MVC experience.
RHSeeger 723 days ago [-]
> We moved away from MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
Plus we now get the benefit of people trying to "replace" built-in browser functionality with custom code, either
The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.
or
They're changing things because they're already so far from default browser behavior, why not? ... Scrolling broken or janky because the developer decided it would be cool to replace it? Check.
There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.
yellowapple 723 days ago [-]
> There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.
Yep. It's bonkers to me that a page consisting mostly of text (say, a Twitter feed or a news article) takes even so much as a second (let alone multiple!) to load on any PC/tablet/smartphone manufactured within the last decade. That latency is squarely the fault of heavyweight SPA-enabling frameworks and their encouragement of replacing the browser's features with custom JS-driven versions.
On the other hand, having to navigate a needlessly-elongated history due to every little action producing a page load (and a new entry in my browser's history, meaning one more thing to click "Back" to skip over) is no less frustrating. Neither is wanting to reload a page only for the browser to throw up scary warnings about resending information simply because that page happened to result from some POST'd form submission.
Everything I've seen of HTMX makes it seem to be a nice middle-ground between full-MPA v. full-SPA: each "screen" is its own page (like an MPA), but said page is rich enough to avoid full-blown reloads (with all the history-mangling that entails) for every little action within that page (like an SPA). That it's able to gracefully downgrade back to an ordinary MPA should the backend support it and the client require it is icing on the cake.
I'm pretty averse to frontend development, especially when it involves anything beyond HTML and CSS, but HTMX makes it very tempting to shift that stance from absolute to conditional.
PaulHoule 723 days ago [-]
I remember writing high-complexity rich internet applications (knowledge graph editors, tools to align sales territories for companies with 1000+ salespeople, etc.) circa 2005. It was challenging to do because I had to figure out how to update the whole UI when data came in from asynchronous requests; I had to write frameworks a bit like MobX or Redux to handle the situation.
Even before that I was making Java applets to do things you couldn't do with HTML, like draw a finite element model and send it to a FORTRAN back end to see what it does under stress, or replace Apple's Quicktime VR plugin, or simulate the Ising model with Monte Carlo methods.
What happened around 2015 is that people gave up writing HTML forms and felt that they had to use React to make very simple things like newsletter signups so now you see many SPAs that don't need to be SPAs.
Today we have things like Figma that approach the complex UI you'd expect from a high-end desktop app, but in many ways our horizons have shrunk thanks to "phoneishness" and the idea that everything should be done with a very "simple" (in terms of what the user sees) mobile app that is actually very hard to develop -- who cares about how fast your build cycle is if the app store can hang up your releases as long as they like?
com2kid 723 days ago [-]
> The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.
MPAs break back buttons all the damn time, I'd say more often than SPAs do.
Remember the bad old days when websites would have giant text "DO NOT USE YOUR BROWSER BACK BUTTON"? That is because the server had lots of session state on it, and hitting the browser back button would make the browser and server be out of sync.
Or the old online purchase flows where going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
Let's think about it a different way.
If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.
wwweston 723 days ago [-]
> "DO NOT USE YOUR BROWSER BACK BUTTON"?
Yeah, state mutation triggered by GET requests is going to make for a bad time, SPA or MPA. Fortunately enough of the web application world picked up enough of the concepts behind REST (which is at the heart of all web interaction, not just APIs) by the mid/late 00s that this already-rare problem became vanishingly rare well before SPAs became cancerous.
> going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
The problem is entirely orthogonal to SPA vs MPA.
> If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app?
It's not only EVER done, it's regularly done. Perhaps you should interrogate some of the reasons why.
But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds: a giant payload of application shell and THEN also screen-specific application/UI/data payload, all for reasons like developers' unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
com2kid 723 days ago [-]
> It's not only EVER done, it's regularly done. Perhaps you should interrogate some of the reasons why.
Content in the app is reloaded, sure, but the actual layout and business logic? Code that generally changes almost never, regenerated on every page load?
I know of technologies that are basically web wrappers that allow for doing that to bypass app store review processes, but I'd be pissed if an alarm clock app decided to reload its layout from a server every time I loaded it up!
The SPA model of "here is an HTML skeleton, fill in the content spaces with stuff fetched from an API" makes a ton more sense.
The application model, that has been in use for even longer, of "here is an application, fetch whatever data you need from whatever sources you need" is, well, a fair bit simpler.
Everyone is stuck with this web mindset for dealing with applications, and I get the feeling that a lot of developers nowadays have never written an actual phone or desktop application.
> But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds, a giant payload of application shell and THEN also screen-specific info, all for reasons like developer's unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
I've seen plenty of MPAs that consist of multiple large giant mini-apps duct taped together.
Shitty code is shitty code!
heartbreak 723 days ago [-]
> here is an application (your browser), fetch whatever data you need (html) from whatever sources you need (my web server)
Parentheticals added.
ebiester 723 days ago [-]
It wasn't GET mutation... it was POSTs with multi-page forms that were the problem. It was such a pain to subdivide a form and create server and session state and intuit the return state. And what happens if you needed a modal with dynamic data? Did you pop open a new window and create a javascript call for the result? There was no great progressive answer to them.
Oh, and then request scope wasn't good enough because you needed to do a post-redirect-get? I will say that I do not think MPAs for web applications were the good old days.
KyeRussell 723 days ago [-]
Yeah. As someone that’s quite bearish on JS altogether, and as someone that’s worked on a few old-school multi-step forms recently, we can’t pretend that this was and still is anything other than a code and UX disaster. And…I’m not an idiot, I understand different HTTP request types and how browsers handle going back through history. I know that there’s not something obvious I’m missing. I’ve put the work in. The reality is that non-JS web technologies aren’t very good at some things that are quite common and that many people expect in anything more than a brochure site.
I’m just so miffed that it can end up necessitating roping in so much BS. Mind you, not necessarily in this example. Things like HTMX excite me. And, on the other side, things like Next.js and Remix that IMO are a breath of fresh air, even if they might not ultimately be heading in the right direction (I genuinely have no idea).
PaulHoule 723 days ago [-]
It is totally possible to make MPAs where reloads are never a problem.
As for phone apps these are undeniably a step backwards from desktop apps, web apps and every other kind of app. On the web you can deploy 100 times a day, waiting for the app store to approve changes to your app is like building a nuclear reactor in comparison.
All the time you get harassed in a physical store or a web site to "download our mobile app" and you know there ought to be a steaming pile of poop emoji because the mobile app almost always sucks.
One of the great answers to the app store problem is to move functionality away from the front end into the back end. For instance, people were looking to do this with chatbots and super apps back in 2017, and now that chatbots are the rage again, people will see them as a way to get mobile apps back on internet time.
ebiester 723 days ago [-]
Sure. You maintain the entire application state in session scope or some sort of internal state. It was possible, but it was hell.
PaulHoule 722 days ago [-]
Or the other way around, you keep as much as you can on the client but use nonces in critical requests to prevent (accidental) replays.
seabass-labrax 723 days ago [-]
> Remember the bad old days when websites would have giant text "DO NOT USE YOUR BROWSER BACK BUTTON"?
It was even worse when the page didn't warn you, but would lose state all the same!
pier25 723 days ago [-]
> If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.
Good luck forcing users to download 50MB before they can use your web app.
The web and mobile/desktop apps are two totally different paradigms with different constraints.
robertoandred 723 days ago [-]
Neither broken back buttons nor messy scrolling are unique to SPAs. You're just talking about bad websites.
codeflo 723 days ago [-]
I mean, there's nothing about an SPA that forces you to break the back button, to the contrary, it's possible to have a very good navigation experience and working bookmarks. But it takes some thinking to get it right.
afavour 723 days ago [-]
I don’t think “forces” is the right way to think about it. By default a SPA breaks navigation history etc (it’s right in the name). It’s not onerous to reimplement it correctly but reimplement you must.
RHSeeger 723 days ago [-]
And it's very common for it to be re-implemented incorrectly.
robertoandred 723 days ago [-]
No it doesn't and no you don't. Every modern SPA framework has solved that problem long ago.
afavour 723 days ago [-]
Right, so you agree: you have to reimplement it. You can just use a framework to do so.
It might be news to folks to learn that every single SPA framework has solved the problem entirely because it's really not an uncommon experience to have your browser history broken by a SPA. I believe that most frameworks implement the API correctly. I also believe a good number of developers use the framework incorrectly.
Joker_vD 723 days ago [-]
Or simply unaware about the whole "back button" debacle. Which is yet another stone to throw at the SPA camp: if using technology A requires a programmer to learn about more stuff (and do more work) than technology B for achieving pretty much the same end results, then technology A is inferior to B (for achieving those particular end results, of course).
robertoandred 723 days ago [-]
That's like saying you have to reimplement assembly arithmetic, you can just use the Calculator app to do so.
Bad websites are the results of bad developers, not the tool. You can have your history messed up by any kind of website.
afavour 723 days ago [-]
No, it’s like saying you’ve been provided with a calculator but may, if you wish, create your own calculator with some parts provided. No guarantee it adds numbers together correctly.
robertoandred 723 days ago [-]
Why would anyone choose to create and use their own unreliable calculator instead of what came installed?
afavour 723 days ago [-]
…that’s the point being made
robertoandred 723 days ago [-]
The point that bad websites are the result of bad developers, not the tools?
afavour 723 days ago [-]
I don't understand why this point is so complicated. Yes, bad SPA developers mess it up all the time. Bad MPA developers do not mess it up because it doesn't require reimplementation by said bad developer, it works out of the box.
robertoandred 723 days ago [-]
It works out of the box on SPAs too. It doesn't require reimplementation by said bad developer.
afavour 723 days ago [-]
We’re way too far into a thread for me to have to restate the original point I made in the first post. If what you’re saying is true we’d never see bad implementations of history in SPAs yet we do all the time.
But look, whatever. It’s Friday afternoon, I’m out of here. Have a good weekend.
robertoandred 723 days ago [-]
We see bad implementations of everything all the time. You think no bad MPAs exist?
jfvinueza 723 days ago [-]
Mail is not a good example. Why would you like to read a collection of documents through A Single Page interface? Gmail was a fantastic improvement over Hotmail and Yahoo, and it provided UX innovations we still haven't caught up with, yes, but MPAs are naturally more suited for reading and composing them. Overriding perfectly clear HTML structure with javascript should be reserved for web experiences that are not documents: that is, videogames, editors, etc (Google *Maps* is a good example). The quality of the product usually depends more on how it was implemented than on the underlying technology, but as I see it: if it's a Document, if the solution has a clear paper-like analogue, HTML is usually the best way to code it, structure it, express it. Let a web page be a web page and let the user browse through it with a web browser. If it's not, well, alright, let's import OpenGL.
kbenson 723 days ago [-]
Mail is good for a SPA because the main central view, which shows the different items (emails) to view or act on, is based on a resource-intensive back-end request, so keeping that state present and not having to refresh it on many of the different navigation actions yields a tremendous benefit.
You could do some client side caching with local page data, but just keeping it present and requesting only updates to it is vastly superior.
That's honestly one place SPAs shine: where there's a relatively expensive request that provides data and then a lot of actions that operate on some or all of that data transiently.
taeric 723 days ago [-]
I'm willing to wager that I get far more data loading Gmail than just an email or list of titles/senders.
That is, it may seem a fine optimization, but has led to a fairly bloated experience.
kbenson 723 days ago [-]
You're thinking just of the amount of data sent, not the amount of work that's done on the back end. Just because it's only showing you the most recent 40 messages or something doesn't mean it isn't doing a significant amount of work on the back end to determine what those messages are. Not having to scan through all your email and sort by date nearly as often is a significant win.
taeric 723 days ago [-]
That is an at rest choice that should be identical in both.
I presume you are thinking of rendering? But, again, that is largely done client side in both cases.
kbenson 723 days ago [-]
No, I'm talking about back end processing cost. If the main page of the app has a significant server cost in determining what data is being sent, being able to just redisplay the data you have when you browse back to the main page instead of requesting it again, which could incur that large processing cost, is a large gain.
As a simplistic example, imagine an app which on login has to do an expensive query which takes five seconds to return because of how intensive it is on the back end. If you can just redisplay the data that's already in memory on the client, optionally updating it with a much less expensive query for what's changed recently, then you're saving about five seconds of processing time (and client wait time) by doing so.
You could use localStorage to do something similar without it being a SPA, but that's essentially opting into a feature that serves a similar need.
Client side caching is a strong point of SPAs, so it makes sense that a use case that can leverage that heavily will have benefits.
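A minimal sketch of that pattern (the endpoint and the applyChanges/render helpers are made up for illustration): pay for the expensive inbox query once, keep the result in memory, and only ask the server for a cheap delta afterwards.

    let inbox = null;      // result of the expensive query, kept across navigations
    let lastSync = 0;

    async function showInbox() {
      if (inbox === null) {
        // first visit: the expensive query runs once
        inbox = await (await fetch('/api/inbox')).json();
      } else {
        // back to the main view: only fetch what changed since last time
        const delta = await (await fetch('/api/inbox/changes?since=' + lastSync)).json();
        inbox = applyChanges(inbox, delta);
      }
      lastSync = Date.now();
      render(inbox);       // redisplay from memory, no full re-query
    }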
taeric 723 days ago [-]
I find it hard to really agree that the backend of Gmail would be more involved with a thinner frontend. The "low bandwidth html" version sorta gives the lie, there...
kbenson 721 days ago [-]
I'm not sure what you're getting at. I'm not talking about bandwidth usage at all. I'm talking about CPU, memory and IO (as in disk, not client server transfer) usage.
taeric 720 days ago [-]
I'd wager all of those things are still lower on their "low bandwidth" option.
Now, I will grant that it does less. Probably lacking a lot of the "presence detection" that is done in the thick client. Certainly lacking a lot of the newer ad stuff they are pushing at.
But the rest could be offset by a very basic N-tier application where the "envelope" of the client HTML is rather cheaply added to any outgoing message. And the vast majority that goes into "determining what data is being sent, being able to just redisplay the data you have when you browse back to the main page instead of request it again, [etc.]" will probably be more identical than not between the options.
Now, I grant that some of the newer history API makes some tricks a bit easier for back button changes to work. Ironically, to the point, Gmail itself is broken for back-button usage. So... whoops.
kbenson 719 days ago [-]
> I'd wager all of those things are still lower on their "low bandwidth" option.
I would argue that Google has thrown a bunch of engineering talent at it to optimize the problem as much as it can be for a web interface, and that Gmail is a bad example of a SPA mail client, as it's more a combined mail client and IMAP server (really a custom-designed mail store) all rolled into one. Whether Gmail itself really uses more or not is somewhat irrelevant to whether a mail client in general leans into the benefits a SPA provides. This is what I was talking about here.[1]
That said, whether it uses less resources is a tricky question. Sometimes there's algorithmic wins that overall reduce the total work done, and I don't doubt Gmail leverages some of those, but it's also just a huge amount of caching, whether in the browser or in a layer underneath. The benefit of a SPA is that you can customize the caching to a degree for the application in the client without having to have an entire architectural level underneath designed to support the application. For anything at scale, having that layer underneath is obviously better (it's custom fit for the needs of the application and isn't susceptible to client limitations), but it's also very engineering intensive.
My guess is that Gmail puts a very large amount of cache behind most requests, and is just very, very good about cache invalidation. Or they've got the data split across many locations so they can mapreduce it quickly and efficiently (but tracking where those places are will necessitate some additional resource usage).
In the end, you need caching somewhere. You can do it on the server side so that you have full control over it but you have to pay for the resources, or you can do it on the client side with some limits on control and availability, but you don't use your own resources. SPAs make client side caching more reliable and easier to deal with in some cases, because the working state of the client isn't reset (or mostly reset) on every request.
What exactly is the resource-intensive request here? Loading an E-mail, or list of E-mails? I don't see why that should be any more resource-intensive than any other CRUD app.
kbenson 723 days ago [-]
A list of emails. That's essentially a database query that is taking X items and sorting by the date field, most commonly, except that the average user can have thousands, or even tens or hundreds of thousands of items that are unique to them in that dataset that need to be sorted and returned.
Sure, Gmail optimizes for this heavily so it's fast, but it's still one of the most intensive things you can do for an app like that, so reducing the number of times you need to do it is a huge win for any webmail. If you've ever used a webmail client that's essentially just an IMAP client for a regular IMAP account, you'll notice that opening a large inbox or folder is WAY slower than viewing an individual message, most times, for what are obvious reasons if you just think of a mailbox as a database of email and the operations that need to happen on that database (which it is).
If clicking on an individual message is a new page, that's fine, but if going back to the main view is another full IMAP inbox query, that's incredibly resource intensive compared to having it cached in the client already (even if the second request is cached in the server, it's still far more wasteful than not having to request it again).
robertoandred 723 days ago [-]
JavaScript doesn't override perfectly clear HTML structure, it generates it.
RHSeeger 722 days ago [-]
There's been a fair amount of discussion on this thread, which left me wanting to clarify my comments...
It is entirely possible to have a MPA application that makes calls to the back end to retrieve more data. Especially for things like a static page (cached) with some dynamic content on it. My problem is when people convert an entire site to a Single Page (SPA). When I click to go from the "home page" to a "subsection page", it makes sense to load the entire page. When I click to "see more results" for the list of items on a page, it seems reasonable to load them onto the page.
Side note: If I scroll down the page a few times and suddenly there's 8 items in the back queue, you're doing it wrong. That drives me bonkers.
detaro 723 days ago [-]
my favorite example is dev.to. A (web-)developer-centric site, open source nowadays. In a similar discussion years ago it was praised as a well-done SPA. Every time the topic comes back up I spend 5 minutes clicking around, and every time I find some breakage: a page critically broken during a transition, not being the page the URL bar says it is, ... because having a blogging site just be pages navigated by the browser was too easy.
fridgemaster 723 days ago [-]
I fail to see how HTMX could be the "future". It could have been something useful in the 2000s, back when browsers had trouble processing the many MBs of JS of a SPA. Nowadays SPA's run just fine, the average network bandwidth of a user is full-HD video tier, and even mobile microprocessors can crunch JS decently fast. There is no use case for HTMX. Fragmented state floating around in requests is also a big big problem.
The return of the "backend frontender" is also a non happening. The bar is now much higher in terms of UX and design, and for that you really need frontend specialists. Gone are the days when the backend guys could craft a few html templates and call it a day, knowing the design won't change much, and so they would be able to go back to DB work.
MrVandemar 723 days ago [-]
> Nowadays SPA's run just fine, the average network bandwidth of a user is full-HD video tier, and even mobile microprocessors can crunch JS decently fast.
ie. "I don't live in a rural area, but that's fine, nobody who matters lives there."
KyeRussell 723 days ago [-]
Really sounds to me like you're speaking from your own professional context and are failing to consider the huge spectrum of circumstances in which web code is written.
duxup 723 days ago [-]
I often work on an old ColdFusion application.
It's amusing that for a long time the response was "oh man that sounds terrible".
Now it is "oh hey that's server side rendered ... is it a new framework?".
The cycle continues. I end up writing all sorts of things and there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man this should be Z". There are days where I just opt for using old ColdFusion... it is faster for some things.
Really though there's so many advantages to different approaches, the important thing is to do the thing thoughtfully.
giraffe_lady 723 days ago [-]
I also switch back and forth between two large projects written in different decades and it definitely gives an interesting perspective on this. Basically every time I'm in php I go "oh yeah I see why we do react now" and every time I'm in react I go "oh right I see why php still exists."
gofreddygo 720 days ago [-]
> ... I see why we do react now
I switch between a fair variety frontend, backend myself and have never had that reaction.
It's always, I could do exactly this in 2005 using jquery + JSP, it would not need any of these 1500 dependencies and the user would see absolutely no difference (except downloading 10 times more js today at 5G speeds)
The scalability issues that non-Facebook-scale webapps are trying to solve do not exist. These apps will be dead before they reach 10% of that scale and yet the project folks just don't get it.
Anecdotally, GitHub projects I bookmarked 3-4 years ago won't even compile today, while a large chunk of projects from 2010 still work, including one I wrote a decade ago as a newbie JS junkie.
Why? Dependencies.
tempest_ 723 days ago [-]
To be fair to PHP there have been quite a few improvements to the language in recent years.
I even hear Laravel is pretty nice to use.
I'll never know that stuff though because the PHP I generally encounter is 15-year-old spaghetti.
giraffe_lady 723 days ago [-]
Yeah I also hear that and also have no idea because I'm permanently in that 5.6 shit.
cutler 723 days ago [-]
PHP lost its identity with the release of PHP 5; since then it's been nothing more than an interpreted version of Java.
pengaru 723 days ago [-]
> there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man this should be Z".
How much of that is just a garden variety "grass is always greener on the other side" effect?
> the important thing is to do the thing thoughtfully.
And finish! Total losses are still total losses no matter how thoughtfully done.
duxup 723 days ago [-]
>How much of that is just a garden variety "grass is always greener on the other side" effect?
In my example not so much. I'm working in a number of frameworks, use them regularly, sometimes ColdFusion is just faster / better suited, sometimes some other system.
ksec 722 days ago [-]
I often wonder if someday we could see WebObjects or ColdFusion being open sourced.
no_wizard 723 days ago [-]
ActionScript is basically ES6 too isn't it?
phpnode 723 days ago [-]
No, it’s more like ES4 which was eventually abandoned and never became part of ecmascript
courgette 723 days ago [-]
Older than that; Flash had already received the Apple coup de grâce by the time ES6 was released.
mock-possum 723 days ago [-]
AS3 was typed though, so maybe closer to TypeScript.
zelphirkalt 723 days ago [-]
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
Node does not absolve from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted to not be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client-side is just writing insecure websites/apps.
> We moved away for MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
I would claim they became even more so than the thing they replaced. Basically most of any progress in bandwidth or resources is eaten by more bloat.
obpe 723 days ago [-]
>Node does not absolve from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted to not be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client-side is just writing insecure websites/apps.
Yeah, that was my point. With Node you can write JS to validate on both the client and server. In the article, they suggest you can just do a server request whenever you need to validate user input.
>Basically most of any progress in bandwidth or resources is eaten by more bloat.
In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.
zelphirkalt 723 days ago [-]
> In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.
Similar to my experience. So glad I can uBlock Origin away a lot of unnecessary traffic. At some point it's no longer good taste, when the 5th CDN is requested and the 10th tracker script from random untrusted 3rd parties is loaded... all while neglecting good design of the website, making it implode when you block the unwanted stuff. It's not rare to save more than half the traffic by blocking that stuff.
aidenn0 723 days ago [-]
The first SPA I wrote, I wrote in React for my use and for the use of friends. I spent about 3 days getting it working and then 3 months getting it to usable performance on my phone. There were no analytics, no binary data (100% text), just a bunch of editable values and such. I ended up having to split it up into a bunch of tabs just to reduce the size of the vdom.
robertoandred 723 days ago [-]
"The size of the vdom" would not destroy performance like you're saying.
aidenn0 723 days ago [-]
I've been told that before. Maybe it was something else; I don't know.
All I know is that I was unable to figure out what it was, and I bounced it off a few people online, and the performance scaled inversely with the number of DOM nodes.
austinthetaco 723 days ago [-]
I would think you have something else wrong with the design. I've worked on some pretty complex and large react apps that worked flawlessly on some low-end mobile browsers. Maybe you're accidentally duplicating a LOT of dom nodes?
tored 723 days ago [-]
SPAs transfer large amounts of data to the frontend to be able to do an HTTP JOIN and then toss most of it away.
ladberg 723 days ago [-]
I feel like you misunderstood the OP, they are claiming that Node allows you to reuse the same code to do validation on both the client and the server. By definition that means they are also doing server-side validation, and they are not relying on it being checked on the frontend.
PaulHoule 723 days ago [-]
As I see it though, node.js on the backend is not mainstream, most sites are still using JVM or other back ends. Using the same code for the front end and the back end is a dream that has been pursued in various forms but it isn’t mainstream.
emodendroket 723 days ago [-]
Man... if you don't think node.js on the backend is mainstream at this point I don't know what to tell you. It's not even the hyped-up new thing anymore.
sidlls 723 days ago [-]
Being not hyped up doesn’t mean it’s mainstream. Most backends are in Java, Go, or PHP. Python and Ruby take up most of those that aren’t. It’s rare to find node on the backend in comparison.
emodendroket 723 days ago [-]
I don’t agree that Go is more common than node (or other options you did not mention like .NET).
iainmerrick 723 days ago [-]
I wonder if part of the confusion here is that “backend” is pretty overloaded. There are backends like API servers, and web server backends (which at Google they call “frontends”!)
I’d guess that Go is relatively more popular than Node for API servers, and Node is more popular for web servers.
And as you note, both are probably less popular than languages like Java and PHP.
is node included when choosing front ends like react or vue?
austinthetaco 722 days ago [-]
no. NodeJS is different than NPM (which is often used to install react/vue). NodeJS is the backend technology
pphysch 723 days ago [-]
Client side validation is for UX.
Server side validation is for security, correctness, etc.
They are different features that require different code. Blending the two is asking for bugs and vulnerabilities and unnecessary toil.
The real reason that SPAs arose is user analytics.
emodendroket 723 days ago [-]
I don't understand why that should be the case. There are a lot of checks that end up needing to be repeated twice with no change in logic (e.g., username length needs to be validated on both ends).
kokanee 723 days ago [-]
There are two things that engineers tend to neglect about validation experiences:
1) When you run the validation has a huge impact on UX. A field should not be marked as invalid until a blur event, and after that it should be revalidated on every keystroke. It drives people crazy when we show them a red input with an error message simply because they haven't finished typing their email address yet, or when we continue to show the error after the problem has been fixed because they haven't resubmitted the form yet. (There's a rough sketch of this timing after point 2.)
2) Client side validation rules do occasionally diverge from server side validation rules. Requiring a phone number can be A/B tested, for example.
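For point 1, the timing can be as simple as a couple of listeners per field (isValidEmail here is a stand-in for whatever check you actually run): keep quiet until the first blur, then re-check on every keystroke so errors also clear as soon as they're fixed.

    const field = document.querySelector('#email');
    let touched = false;

    function validate() {
      // toggle the error state; this also clears it once the value becomes valid
      field.classList.toggle('invalid', !isValidEmail(field.value));
    }

    // don't nag while the user is still typing the value for the first time
    field.addEventListener('blur', () => { touched = true; validate(); });
    // after that, revalidate (and un-flag) on every keystroke
    field.addEventListener('input', () => { if (touched) validate(); });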
emodendroket 723 days ago [-]
Even if you’re not A/B testing you’re going to have some validations that only happen server-side because they require access to resources the client doesn’t have, but I don’t see either of these points as arguments against sharing the validators that can be.
kokanee 723 days ago [-]
I agree. These points are arguments against the philosophy of HTMX which asserts that you can get everything you need without client-side logic.
To be fair, I'm also not a fan of bloated libraries like React and Angular. I think we had it right 15-20 years ago: use the server for everything you can, and use the smallest amount of JS necessary to service your client-side needs.
pphysch 723 days ago [-]
> HTMX which asserts that you can get everything you need without client-side logic.
That's not true at all. HTMX extends the logical capabilities of HTML, and _hyperscript goes even further.
emodendroket 723 days ago [-]
It’s been a while since I did much frontend work but I actually found Angular revelatory. It makes it so easy and it’s really batteries-included.
pphysch 723 days ago [-]
Username length does not "need" to be validated on the client. However, it is nice for UX to enforce it there.
emodendroket 723 days ago [-]
I think a charitable reader could infer that this is often made a requirement out of UX concerns and therefore it “needs” to be done. Do you have a substantive objection to what I said?
philderbeast 723 days ago [-]
a requirement that is solved by setting the length on your input field?
or have we forgotten that plain old HTML can validate much of this for us with no JS of any type required?
emodendroket 723 days ago [-]
There are limitations to that, as you well know, since you hedged with “much of.” And this is, again, a nitpick around the edges and not really a comment that addresses my main point.
8organicbits 723 days ago [-]
Input validation checks are such a small part of the codebase; it feels weird that it would dictate the choice of a server-side programming language. Server-side python is very capable of checking the length of a string, for example.
One challenge is that you've got to keep the server-side and client-side validations in sync, so if you'd like to increase the max length of an input, all the checks need to be updated. Ideally, you'd have a single source of truth that both front-end and back-ends are built from. That's easier if they use the same language, but it's not a requirement. You'll also probably want to deploy new back-end code and front-end code at the same time, so just using JS for both sides doesn't magically fix the synchronization concerns.
One idea is to write a spec for your input, then all your input validation can compare the actual input against the spec. Stuff like JSON schema can help here if you want to write your own. Or even older: XML schemas. Both front-end and back-end would use the same spec, so the languages you pick would no longer matter. The typical things you'd want to check (length, allowed characters, regex, options, etc.) should work well as a spec.
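As a sketch of that single-source-of-truth idea (assuming a shared module and the Ajv JSON Schema validator; any validator that runs in both places would do), the browser bundle and the server import the same spec and get the same errors:

    // signup-schema.js — the one place the rules live
    export const signupSchema = {
      type: 'object',
      required: ['username', 'email'],
      properties: {
        username: { type: 'string', minLength: 3, maxLength: 32 },
        email:    { type: 'string', format: 'email' },
      },
    };

    // used identically in client and server code
    import Ajv from 'ajv';
    import addFormats from 'ajv-formats';
    import { signupSchema } from './signup-schema.js';

    const ajv = new Ajv();
    addFormats(ajv);                          // enables "format: 'email'"
    const validateSignup = ajv.compile(signupSchema);

    export function checkSignup(data) {
      const ok = validateSignup(data);
      return { ok, errors: validateSignup.errors || [] };
    }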
It's also not the only place this type of duplication is seen: you'll often have input validation checks run both in the server-side code and as database constraint checks. Django solves that issue with models, for example. This can be quite efficient: if I have a <select> in my HTML and I want to add an option, I can add the option to my Django model and the server-side rendered HTML will now have the new option (via Django's select widget). No model synchronization needed.
As others mention, you may want to write additional validations for the client-side or for the server-side, as the sorts of things you should validate at either end can be different. Those can be written in whichever language you've chosen as you're only going to write those in one place.
emodendroket 722 days ago [-]
I don’t disagree that if this is your sole reason for picking a language it is not a great one. But it is a benefit nevertheless. And obviously we can express more complex rules in a full-on programming language.
La_Beffa 720 days ago [-]
A possible solution could be what ASP.NET does, where you can just set the validation rules in the backend and you get the client-side ones too; the magic is done by jQuery unobtrusive validation. Of course something a bit more up to date than jQuery would be ideal, but you get the gist.
You shouldn't have to wait until you submit something to get feedback on it. It's poor UX.
Frontend and backend validations are also different. Frontend is more about shape and type. Backend is content and constraints.
emodendroket 723 days ago [-]
Right, you shouldn’t, but that means writing them twice. One of the selling points of backend JavaScript is the same validation code can run on both ends (obviously any validator that needs to check, e.g., uniqueness in a database won’t work).
ZephyrBlu 723 days ago [-]
Frontend and backend validation are usually not the same though. You won't be writing the same thing twice, you'll be writing different validations for each.
emodendroket 723 days ago [-]
I think the frontend validations will, most of the time, be a subset of the backend ones, with many shared validation points.
iainmerrick 723 days ago [-]
Yes, exactly!
I’ve several times been in the position of writing a new UI for an existing API. You find yourself wanting to validate stuff before the user hits “submit”, because hiding errors until after submitting is terrible UX; and to do that, you find yourself digging into the server code to figure out the validation logic, and duplicating it.
And then years or months later the code gets out of sync, and the client is enforcing all sorts of constraints that aren’t needed on the server any more! Not good.
petepete 723 days ago [-]
> It's poor UX.
It's not as easy as that. Showing validation while people are editing can be even worse, especially for less-technically able users or people using assistive technology.
Having an announcement tell you your password isn't sufficiently complex when you're typing in the second letter might not be bad for us, but how does that work for a screen reader?
emodendroket 723 days ago [-]
That seems like it’s resolved by waiting for a focus change event.
petepete 723 days ago [-]
Not really. GOV.UK Design System team have done lots of research into this and their guidance says:
> Generally speaking, avoid validating the information in a field before the user has finished entering it. This sort of validation can cause problems - especially for users who type more slowly
Not seeing how that’s inconsistent with evaluating when the user goes to the next field.
blowski 723 days ago [-]
> The real reason that SPAs arose is user analytics.
Can you go into that a bit? I don't really understand what you mean.
pphysch 723 days ago [-]
HTML gives very limited tools for tracking what a (potentially JS-less) user is doing. There are various tricks, like "link shorteners" and "magic pixels" that allow some tracking.
But if you want advanced tracking, like tracking what a user is focusing on at a particular instant, you need to wrap the whole document in a lot of JS.
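For illustration (the /track endpoint is invented), this is the kind of instant, per-element tracking that needs script in the page rather than anything plain HTML gives you:

    // report whatever the user focuses, the moment they focus it
    document.addEventListener('focusin', (e) => {
      navigator.sendBeacon('/track', JSON.stringify({
        element: e.target.id || e.target.tagName,
        at: Date.now(),
      }));
    });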
SPA frameworks came out of AdTech companies like Meta, and I assure you it wasn't because they had limited engineering resources.
blowski 723 days ago [-]
I can imagine that Facebook and Google liked the way Angular and React allowed for more advanced tracking. But it seems like you're giving too much weight to that as a primary cause.
From my memory of working through this time, it was driven more by UX designers wanting to have ever more "AJAXy" interfaces. I did a lot of freelancing for design agencies 2006 - 2016, and they all wanted these "reactive" interfaces, but building these with jQuery or vanilla JS was a nightmare. So frameworks like JavaScript MVC, Backbone.js, SproutCore, Ember.js were all popping up offering better ways of achieving it. React, Vue and Angular all evolved out of that ecosystem.
iainmerrick 723 days ago [-]
That’s a good and logical story, but it doesn’t match the reality in my experience.
Companies use SPA frameworks for the same reason they use native apps, to make a “richer”, more responsive, more full-featured UI.
Analytics is typically done in a separate layer by a separate team, usually via Google Tag Manager. There might be a GA plugin for your UI framework, but it can work equally well with plain HTML. GA does use a bunch of client-side JS, yes, but it’s not really a framework you use on the client side, it’s just a switch you flip to turn on the data hose.
In my experience, trying to add analytics cleanly to clientside UI code is a complete pain. Trying to keep the analytics schema in sync as the UI evolves is really hard, and UI developers generally find analytics work tedious and/or objectionable and hate doing it.
Google Tag Manager is the big story in adtech, and I think it comes from and inhabits a completely different world from Angular, React etc.
adrr 722 days ago [-]
You can do all that with vanilla HTML: cursor tracking, scroll tracking. With htmx it's trivial.
React isn't a SPA framework; it's a component framework. It has no router or even state management. ExtJS is an MVC framework in JavaScript and can be used to create a full SPA without additional libraries. It came out in 2007. There is also Ember, which also predates React and is another MVC framework by the people who did Rails.
dtagames 723 days ago [-]
This is not correct. SPAs and web components were pioneered by Google with the introduction of Angular. Later, Vue was invented by a previous Google employee who had worked on Angular. Finally, Facebook came up with React (it's a "reaction" to Angular) because they could not be seen using a Google product.
If anything, SPAs make metrics harder because they hide the real behavior of the page in local JS/TS code and don't surface as much or any information in the URL. Also, fewer server interactions means fewer opportunities to capture user behavior or affect it at the server level.
pphysch 723 days ago [-]
A lot of misconceptions here.
Google is an AdTech company par excellence.
You don't need to do hacky URL tracking with SPAs. That's the point.
>Also, fewer server interactions means fewer opportunities to capture user behavior or affect it at the server level.
SPAs certainly do not have "fewer server interactions". What do you think an API call is?
"React" comes from "reactive web app", not "reaction to a competitor's product".
dtagames 723 days ago [-]
I work with SPAs with API calls every day. It definitely reduces the server interactions over computing everything on that side, and it gives fewer points of contact with the server about the user's behavior. For example, many clicks and other actions will not result in any server contact at all.
I'm aware that they call it "reactive" but I'll stick with my rationale. There is no way they would use a Google product like that.
traverseda 723 days ago [-]
I... don't believe you? Like looking at the network request of any SPA I've ever seen there's just tons of requests for even simple page loads. One for main content, one for profiles, one for comments, etc.
In theory stuff like GraphQL helps, but in the reality I'm living in, SPAs hit multiple endpoints to render even simple pages.
dtagames 723 days ago [-]
Definitely true, and mine do also. It's a side effect of the migration to microservices and away from monolithic endpoints.
pphysch 723 days ago [-]
An enterprise React app I am currently working with takes about 50 requests to fully render the app post-login. Switching to another view (no reload) takes another few dozen. That's a lot of "server interactions", pretty standard for SPAs, but YMMV.
replygirl 723 days ago [-]
your timeline is a bit off. facebook had react in production (mid-late 11) less than a year after angularjs went public, open-sourced it 18-24 months later (early 13), then evan started working on vue a few months after that (mid 13) and released early the following year
dtagames 723 days ago [-]
Thank you. I stand corrected.
jannes 723 days ago [-]
React was significantly better than Angular (version 1).
Please don't pretend it was merely NIH syndrome that led to its creation.
withinboredom 723 days ago [-]
I mean, they also came out with 'flow' after MS came out with TypeScript... I def don't want to think it was NIH syndrome, but it smells fishy.
dtagames 723 days ago [-]
But that would suffice! Facebook does not use any platforms from people who might compete with them. Why would they?
rk06 722 days ago [-]
Fact check: Evan didn't work on angular.js, as in he was a user, not a contributor.
Source: Vue.js documentary
tabtab 723 days ago [-]
But if you don't blend the two, then you have a DRY violation. Someone should only have to say a field (column) is required in one and only one place, for example. The framework should take care of the details of making sure both the client and the server check.
I myself would like to see a data-dictionary-driven app framework. Code annotations on "class models" are hard to read and too static.
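A sketch of what that could look like (field names and helpers invented for illustration): each field is declared once, and both the rendered HTML attributes and the server-side check are derived from the same entry.

    const fields = {
      email:    { required: true,  maxLength: 254, type: 'email' },
      nickname: { required: false, maxLength: 32,  type: 'text'  },
    };

    // client/template side: attributes come straight from the dictionary
    function inputAttrs(name) {
      const f = fields[name];
      return `type="${f.type}" maxlength="${f.maxLength}"${f.required ? ' required' : ''}`;
    }

    // server side: the same dictionary checks the submitted values
    function serverCheck(body) {
      return Object.entries(fields).every(([name, f]) => {
        const value = body[name] ?? '';
        return (!f.required || value.length > 0) && value.length <= f.maxLength;
      });
    }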
pphysch 723 days ago [-]
DRY has its limitations. Client-server boundary is a good candidate for such a limit.
zamnos 723 days ago [-]
That seems like an easy way for validation logic between the two to fall out of sync. Limits want to be enforced on the back end, definitely, but if the frontend also does the same validation the user experience is better, so you want to do some there as well (e.g. a blank username doesn't need the slow round trip to the server). Through the magic of using JavaScript on both ends, the exact same bit of code can, with a bit of work, be used on both the front end and the back end, so you get the best of both worlds.
gpapilion 723 days ago [-]
I actually tend to think of it as adding feature degradation and handling microservice issues. It always seemed better, and more graceful, to have the client manage that.
withinboredom 723 days ago [-]
The number of SPAs that implement their own timeouts when I'm stuck on 2G networks is non-zero and incredibly annoying. The network socket has a timeout function, just because you 'time out' doesn't mean the network timed out, that data is still being transferred and retrying just makes it worse.
adrr 723 days ago [-]
I don't understand how a SPA is different from a vanilla web app in terms of user analytics. A beacon is a beacon, whether it's an img tag with a 1x1 transparent gif or an ajax call.
Also, validation is usually built on both client and server for the same things. Like if you have password complexity validation: it's both in the UX and on the server, otherwise it's a terrible user experience.
ragall 723 days ago [-]
A SPA can track the mouse cursor, as well as stealing form content that was left unsubmitted.
detaro 723 days ago [-]
an MPA can use javascript that does exactly the same.
adrr 722 days ago [-]
You don't even need JavaScript. CSS has the :hover pseudo-class and you can use it to fire beacons.
adrr 722 days ago [-]
htmx has a type-ahead trigger attribute that you can add to form elements to send the contents to an endpoint without the user explicitly submitting the form.
obpe 723 days ago [-]
I have never heard this before. Can you elaborate on the differences? What do you validate on the client side that you don't on the server and vice versa?
pphysch 723 days ago [-]
Some validations require capabilities that you don't want/need the client to have.
There are also validations that can improve UX but aren't meaningful on the server. Like a "password strength meter", or "caps lock is on".
Religiously deploying the same validations to client and server can be done, but it misses the point that the former is untrusted and just for UX. And will involve a lot of extra engineering and unnecessary coupling.
zamadatix 723 days ago [-]
I'm not sure adding a meter value output to the server side check to use it in both places is really more engineering work. Writing separate checks on the client and server side seems much more likely to create headache and extra work.
That said, I could definitely see additional checks being done server side. One example would be actually checking the address database to see if service is available in the entered address. On the other hand, there really isn't any waste here either. I.e. just because you write the validation in server side JS doesn't mean you MUST therefore deploy and use it in the client side JS as well, it just means you never need to worry about writing the same check twice.
pphysch 723 days ago [-]
You misunderstand: the server only cares if your password is valid (boolean), not if it is "almost valid (0.7 strength)".
zamadatix 723 days ago [-]
I understand the argument I just disagree that having a separate "bool isPasswordValid()" and "float isPasswordValid()" (really probably something that returns what's not valid with the score) function is in any way simpler than a single function used on both sides. Sure, the server may not care about the strength level but if you need to write that calculation for the client side anyways then how are you saving any engineering work by writing a 2nd validation function on the server side instead of just ignoring the extra info in the one that already exists?
Dylan16807 723 days ago [-]
In this situation code for a good strength meter is going to be an order of magnitude or two more complicated than the boolean validity check. Porting 50x as much code to the server is significantly worse than having two versions or having one shared function and one non-shared function.
zamadatix 723 days ago [-]
You shouldn't have to port anything. If you mean in the opposite case of two separate languages between client and server side then yeah, of course - by definition you're rewriting everything and there is no way to reuse code. I'm not clear how you're reaching anywhere near 50x complexity though. You're writing something like this on the client side (please excuse the lazy checks):
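(The snippet itself didn't survive the formatting here; a rough guess at its shape, going by the six checks described below, would be something like:)

    function checkPassword(pw) {
      const notBlank   = pw.length > 0;
      const underMax   = pw.length <= 64;
      const validChars = /^[\x20-\x7e]*$/.test(pw);   // printable ASCII only
      const hasLower   = /[a-z]/.test(pw);
      const hasUpper   = /[A-Z]/.test(pw);
      const hasDigit   = /[0-9]/.test(pw);
      const results = { notBlank, underMax, validChars, hasLower, hasUpper, hasDigit };
      const passed = Object.values(results).filter(Boolean).length;
      // server only cares about "valid"; the client can also use "strength" and "results"
      return { valid: passed === 6, strength: passed / 6, results };
    }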
Then instead of writing another one on the server that only checks that the password isn't blank, is under the maximum length, and has valid characters, you're just reusing the full six-check code. That's only twice as many checks, not even twice as many lines, and it's already written. You really should check all 6 again on the server anyways, but that's beside the point. Better still, if you do the reuse as a build step via a shared function library file or similar, you don't need to copy/paste and it stays in sync automatically.
Of particular note there is no UI code here because the meter's UI code is not related to the check function beyond it reads the return value.
Dylan16807 723 days ago [-]
If that's all you want then sure, but that's not what I would call a good password quality meter. It makes no attempt to look for patterns or words or super-common passwords.
zamadatix 723 days ago [-]
As noted excuse the basic check functions and use whatever you actually want for check criteria and the amount of work on the server side is still a factor <1 compared to writing that and then a different check on the server. If your password check logic is 50x the size of that though you might be overdoing it, but that's just an opinion. Again I'd argue you should really be validating server side as well anyways, fewer chances to mess something up and accept a weak password.
Dylan16807 723 days ago [-]
Your check functions are fine, for both client and server.
I'm not saying not to reuse things, because I specifically think it should be two separate functions on the client, one of which is copied to the server. But if you insist on having only one client function, I think the server function should be cut down.
And the premise is doing client-only advice on strength so I'm not going to challenge that premise.
As far as 50x, your code doesn't need those consts saying the exact same thing as the results object, so that simplifies to 8 lines, and I think 400 lines for a good password estimator isn't unreasonable. zxcvbn's scoring function is around that size.
obpe 723 days ago [-]
I see; I have never implemented those types of validations. We do religiously deploy the same validation on client and server precisely to avoid a client/server validation mismatch. Having the client submit "valid" input only to have the server reject it is something we have run into. Having only client-side validation is something I have never run into.
Also, in my opinion, you shouldn't do things like what you suggest. A password strength metre is only going to give attackers hints at the passwords you have in your system. And I have not seen a caps-lock warning in forever. The only password validation we do is the length, which is pretty easy to validate on client and server.
yawaramin 723 days ago [-]
> A password strength metre is only going to give attackers hints at the passwords you have in your system
No, it's not. A password strength meter just shows you the randomness of an input password, it doesn't have anything to do with passwords already in the system.
zamadatix 723 days ago [-]
I'd agree with both takes on that it depends on the meter. Ones which truly approximate password entropy work like you say, however, for some reason, the most common use of such meters is to show how many dartboard requirements you've met while ignoring the actual complexity. When this common approach is used you combine "password must be 8 characters or more" with things like "password must have a number, symbol of ${group}, and capital letter" and the average password complexity is actually made worse for a given length due to pigeonholing.
In the full picture though, in terms of UI/UX, the meter seems like only a downside. In the dartboard use case it's great because it displays what's still needed in the terms users work and think in, signalling e.g. "you still need a number, otherwise you're all set". People don't really think in bits of entropy though, so all that's really being signaled by either a meter or a normal failed-validation hint is "more complexity and/or length needed".
There may be good cases for using a meter while simultaneously implementing good password requirement policy I'm not thinking of though.
This works like I described; it doesn't show 'dartboard requirements', only entropy. I think you've misunderstood what a password strength checker is. It's definitionally not a checklist like 'You need an uppercase letter, a lowercase letter, a number, a special character'. It's a tool which measures the strength, i.e. the randomness or entropy, of the password.
nagyf 723 days ago [-]
Everything has to be validated on the server side simply for security reasons.
Even if you do all validation on the client side, which prevents the users submitting a form with invalid data, an attacker can work around that. e.g. submitting the form with valid data, but intercepting the request and modifying the values there. Or simply just using curl with malicious/invalid data.
You still need the client side validation for UX. Regular users need to know if they messed up something in the form. Also it's a much better UX if it's done on the client side, without the need to send an async request for validations.
emodendroket 723 days ago [-]
Yeah but that doesn't answer why you can't share validators between the backend and the frontend if both are written in the same language.
yawaramin 723 days ago [-]
Because HTML form validation is a built-in native feature of HTML, and it's integrated in the browser:
<input type="email" required placeholder="Please enter your email address">
Constantly reinventing the wheel in every app is silly.
emodendroket 723 days ago [-]
These validators are rather limited and you’ll end up needing JavaScript for any Web app with anything beyond the simplest requirements.
yawaramin 723 days ago [-]
They're limited in some ways but they're just about powerful enough to do almost everything you'd need or want to do client-side without making a network request. In my opinion it doesn't make sense to try to fit in tons of complex validation logic in the frontend.
emodendroket 723 days ago [-]
Why make a round trip if you don’t have to?
PaulHoule 723 days ago [-]
Some kinds of validation really do need the round trip. If somebody is choosing a user name on a sign-up form you do need to do a database lookup.
If your back end is fast and your HTML is lean, backend requests to validate can complete in less time than any of the 300 javascript, CSS, tracker, font, and other requests that a fashionable modern webapp does for no good reason...
It's true though that many back ends are run on the cheap with slow programming languages and single-thread runtimes like node.js that compete with slow javascript build systems to make people think slow is the new normal.
emodendroket 723 days ago [-]
Yeah, obviously if it requires I/O you can’t write it client side, but the argument here seems to be in favor of doing validation only server-side even when it could also be done client-side.
yawaramin 723 days ago [-]
Why be on the internet at all? Why not distribute a desktop app that doesn't need any connectivity at all?
emodendroket 723 days ago [-]
Presumably there are some features of the app that won’t work that way. Surely you’re not saying you prefer a Web app just for the sake of making calls.
yawaramin 723 days ago [-]
No, I'm just taking your argument to its logical conclusion. If you manufacture enough criteria, you can steer any discussion so that your choice is the only possible choice left. That's not how things work in reality, there are many competing factors that go into technological choices.
emodendroket 723 days ago [-]
Minimizing round trips is an optimization with improved UX and operational cost and no real downside except marginal implementation difficulty (which itself seems like an argument for being able to share validation between the backend and frontend).
austinthetaco 723 days ago [-]
because many people wouldn't use the product or you would have to maintain multiple codebases for the various operating systems and devices (including mobile).
politician 723 days ago [-]
Client side validations predominately drive user experiences.
Server side validations predominately enforce business constraints.
If you mix the concerns, either your UX suffers (latency) or the data suffers (consistency, correctness).
pier25 723 days ago [-]
I agree but one important point to consider is the dev effort of making a proper SPA which is not a very common occurrence.
"The best SPA is better than the best MPA. The average SPA is worse than the average MPA."
Can we even weight that statement? The average SPA is significantly worse than the average MPA. There is so much browser functionality that needs to be replicated in a SPA that few teams have the resources or talent to do a decent job.
pier25 723 days ago [-]
Yeah it's so easy to fuck it all up with an SPA.
Recently I was using Circle (like a paid social media platform for communities) and pressing back not only loses the scroll position, it loses everything. It basically reloads the whole home page.
seti0Cha 723 days ago [-]
The nice thing about htmx is it gives a middle ground between the two. Build with the simplicity of an MPA while getting a lot of the nice user experience of an SPA. Sure, you don't get all the power of having a full data model on the client side, but you really don't need that for most use cases.
pier25 723 days ago [-]
OTOH if you need to go back to the server after every interaction the UX can get pretty bad for distant users.
lenkite 723 days ago [-]
Extend that statement with "Only True Web Gods can create the Best SPA".
pier25 723 days ago [-]
You joke but even Google with all its resources struggles to create proper SPAs.
I dread using the Google Cloud console for example.
simplotek 723 days ago [-]
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
What? No.
The whole point of Node was a) being able to leverage javascript's concurrency model to write async code in a trivial way, and b) the promise that developers would not be forced to onboard to entirely different tech stacks on frontend, backend, and even tooling.
There was no promise to write code once, anywhere. The promise was to write JavaScript anywhere.
callahad 723 days ago [-]
That's the reasoned take, and yet I have strong and distinct memories of Node being sold on the basis of shared code as early as 2011. Much of the interest (and investment) in Meteor was fueled by its promise of "isomorphic JavaScript."
imbnwa 723 days ago [-]
I mean...look at these timestamps from 7-9 years ago[0]
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once
I mean, I'm using Laravel Livewire quite heavily for forms, modals and search. So effectively I've eliminated the need for writing much front-end code. Everything that matters is handled on the server. This means the little Javascript I'm writing is relegated to frilly carousels and other trivial guff.
lucasyvas 723 days ago [-]
You're on the money with this assessment. It's all bandwagon hopping without any consideration for reality.
Also, all these things the author complains about are realities of native apps, which still exist in massive numbers especially on mobile! I appreciate that some folks only need to care about the web, but declaring an architectural pattern as superior - in what appears to be a total vacuum - is how we all collectively arrive at shitty architecture choices time and time again.
Unfortunately, you have to understand all the patterns and choose when each one is optimal. It's all trade-offs - HTMX is compelling, but basing your entire architectural mindset around a library/pattern tailored to one very specific type of client is frankly stupid.
> to one very specific type of client is frankly stupid
However, I see this specific type of client, the one that just needs basic web functionality, e.g. CRUD operations and building something simple, as far more prevalent than the kind that needs instant in-app reactivity, animations and so on (React and the SPA ecosystem).
Nowadays the assumption is exactly the opposite: every web developer treats a SPA as the default option, even for these simple CRUD cases.
marcosdumay 723 days ago [-]
> But that isn't because of the technology
Technically, the technology supports doing either of them right. In practice, doing good MPAs requires offloading as much as you can onto the mature and well-developed platforms that handle them, while doing good SPAs requires overriding the behavior of your immature and not thoroughly designed platforms on nearly every point and handling it right.
Technically, it's just a difference in platform maturity. Technically those things tend to correct themselves given some time.
In practice, almost no SPA has worked even minimally well in more than a decade.
jonahx 723 days ago [-]
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
While I am a fan of MPAs and htmx, and personally find the dev experience simpler, I cannot argue with this.
The high-order bit is always the dev's skill at managing complexity. We want so badly for this to be a technology problem, but it's fundamentally not. Which isn't to say that specific tech can't matter at all -- only that its effect is secondary to the human using the tech.
wrenky 723 days ago [-]
> , it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again
It brings a tear of joy to my eye honestly. The circle of life continues, and people always forget people are bad at programming (myself included).
danielvaughn 723 days ago [-]
100%. Saying that [technology x] will remove complexity is like saying that you've designed a house that can't get messy. All houses can be messy, all houses can be clean. It depends on the inhabitants.
elliottinvent 723 days ago [-]
True, but well-designed houses that have natural places for the things a well-functioning home needs are far easier to keep clean and tidy.
withinboredom 723 days ago [-]
natural to whom?
randomNumber7 723 days ago [-]
Yes, but some technologies make it easier (or harder) to keep everything clean.
Like in my opinion you can write clean code in C, but since you don't even have a string type it shepherds you into doing nasty stuff with char*... etc.
chasd00 723 days ago [-]
I remember the hype about JavaScript on the server (Node) being that front-end devs didn't have to know/learn a different language to write backend code. Not so much writing code once, but not having to write JavaScript for the client side and then switch to something else for the server side.
erikerikson 723 days ago [-]
I remember it being both and then some...
[edit: both comprising shared code between client and server, as well as, reduced barrier to server-side contribution, and then some including but not limited to the value of the concurrency model, expansive (albeit noisy) library availability, ...]
com2kid 723 days ago [-]
> Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.
People forget how bad MPAs were, and how expensive/complicated they were to run.
Front end frameworks like svelte let you write nearly pure HTML and JS, and then the backend just supplies data.
Having the backend write HTML seems bonkers to me: instead of writing HTML on the client and debugging it, you get to write code that writes code that you then get to debug. Lovely!
Even with more complex frameworks, like React, you have tools like JSX that map pretty directly to HTML, and in my experience a lot of the hard-to-debug problems come up when the framework tries to get smart and doesn't just stupidly pop out HTML.
roguas 720 days ago [-]
We decided for fun to do a small project in htmx (we had to pick something, and one person opted strongly). Yeah, I was cringing and still am. I fully support the frontend/backend split status quo.
For stuff that is uncomplicated I much prefer Svelte, as it still keeps the wall between frontend/backend but lets you do a lot of "yolo frontend" that is short-lived and gets fixed. I run a small startup on the side: Svelte FE + Clojure BE. It works great as I have a different acceptance for crap in the frontend (if I can fix something with style="", I do and I don't care). I often hotfix a lot of stuff in the front where I can and just deploy, to return later and find a better solution that involves some changes in the backend.
I can't imagine that for moving a button I would have to do a deployment dance for the whole app, which in my case has 3 components (one of which is distributed and requires strict backwards compat).
chubot 723 days ago [-]
Well at least the shitty MPAs will run on other people's servers, rather than shitty SPAs running on my phone and iPad
FWIW I turned off JavaScript on my iPad a couple years ago ... what a relief!
I have nothing against JS, but the sites just became unusably slow
foul 723 days ago [-]
That demonstration as per OP is dumb or targeted to React-ists. You can, with HTMX, do the classic AJAX submit with offline validation.
In the last years, for every layer of web development, what I saw was that a big smelly pile of problems with bad websites and webapps, be it MPA or SPA, was not a matter of bad developers on the product, but more a problem of bad, sometimes plain evil, developers on systems sold to developers to build their product upon. Boilerplate for apps, themes, ready-made app templates are largely garbage, bloat, and prone to supply chain attacks of any sort.
hombre_fatal 721 days ago [-]
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
(I'm not actually arguing with you, just thinking out loud)
This is often repeated but I don't think it even close to a primary reason.
The primary reason you build JS web clients is for the same reason you build any client: the client owns the whole client app state and experience.
It's only a fluke of the web that "MPA" even means anything. While it obviously has its benefits, we take for granted how weird it is for a server to send UI over the wire. I don't see why it would be the default to build things that way except for habit. It makes more sense to look at MPA as a certain flavor of optimization and trade-offs imo which is why defaulting to MPA vs SPA never made sense now that SPA client tooling has come such a long way.
For example, SPA gives you the ability to write your JS web client the same way you build any other client instead of this weird thing where a server sends an initial UI state over the wire and then you add JS to "hydrate" it, and then ensuring the server and client UIs are synchronized.
Htmx has similar downsides of MPAs since you need to be sure that every server endpoint sends an html fragment that syncs up to the rest of the client UI assumptions. Something as simple as changing a div's class name might incur html changes across many html-sending api endpoints.
Anyways, client development is hard. Turns out nothing was a panacea and it's all just trade-offs.
MetaWhirledPeas 723 days ago [-]
> all the devs writing shitty MPAs are now writing shitty SPAs
This pretty much sums it up. There is no right technology for the wrong developer.
It's not about what can get the job done, it's about the ergonomics. Which approach encourages good habits? Which approach causes the least amount of pain? Which approach makes sense for your application? It requires a brain, and all the stuff that makes up a good developer. You'll never get good output from a brainless developer.
croes 723 days ago [-]
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
You did write it once before too.
With NodeJS you have JavaScript on both sides; that's the selling point. You still have server and client code, and you can write an MPA with NodeJS.
Capricorn2481 721 days ago [-]
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
These are two different things and I don't see how they're related. You don't need code sharing to do client side navigation. And you should always be validating on the backend anyway. Nothing is stopping an MPA from validating on the client, whether you can do code sharing or not.
Spivak 723 days ago [-]
> prevent your servers from ruining the experience for everyone.
This never panned out because people are too afraid to store meaningful state on the client. And you really can't, because of (reasonable) user expectations. Unlike a Word document, people expect to be able to open word.com and have all their stuff, and have n simultaneous clients open that don't step on one another.
So to actually do anything you need a network request but now it's disposable-stateful where the client kinda holds state but you can't really trust it and have to constantly refresh.
guggle 723 days ago [-]
> a major selling point of Node was running JS on both the client and server so you can write the code once
Yes... but some people like me just don't like JS, so for us that was actually a rebuttal.
kitsunesoba 723 days ago [-]
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again.
I think the root cause of this is lack of will/desire to spend time on the finer details, either on the part of management who wants it out the door the second it's technically functional or on the part of devs who completely lose interest the second that there's no "fun" work left.
Aeolun 723 days ago [-]
> SPAs have definitely become what they sought to replace.
Not sure about that. SPAs load 4MB of code once, then only data.
Now look at a major news front page, which loads 10MB for every article.
bcrosby95 723 days ago [-]
A pro can be a con, and vice versa. The reason why you move to a SPA might be the reason why you move away from it. The reason why you use sqlite early on might be the reason you move away from it later.
A black & white view of development and technology is easy but not quite correct. Technology decisions aren't "one size fits all".
onion2k 723 days ago [-]
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
This is only sort of true. The problem can be mitigated to a large extent by frameworks; as the framework introduces more and more 'magic' the work that the developer has to do decreases, which in turn reduces the surface area of things that they can get wrong. A perfect framework would give the developer all the resources they need to build an app but wouldn't expose anything that they can screw up. I don't think that can exist, but it is definitely possible to reduce places where devs can go astray to a minimum.
And, obviously, that can be done on both the server and the client.
I strongly suspect that as serverside frameworks (including things that sit in the middle like Next) improve we will see people return to focusing on the wire transfer time as an area to optimize for, which will lead apps back to being more frontend than backend again. Web dev will probably oscillate back and forth forever. It's quite interesting how things change like that.
tomca32 723 days ago [-]
Unfortunately, developers often write code in a framework they don't know well so they end up fighting the framework instead of using the niceties it provides. The end result being that the surface area of things that can go wrong actually increases.
runlaszlorun 723 days ago [-]
True. But I also find that a lot of frameworks are narrowly optimized for solving specific problems, at the expense of generality, and those problems often aren’t the ones I have.
Supposedly declarative approaches especially are my pet peeve. “Tell it what you want done, not how you want it done” is nice sounding but generally disappointing when I soon need it to do something not envisioned by its creator yet solved in a line or two of general purpose/imperative code.
CSSer 723 days ago [-]
Most companies unfortunately don't let developers adequately explore solutions or problem spaces before committing to them either. The ones that dominate do, but that's also because they often have the resources to build it from the ground up anyway.
The average mid-sized business seems to have internalized that code is always a liability, but they respond by cutting short discovery and get their just deserts.
CSSer 723 days ago [-]
That oscillation probably wouldn't happen if it were possible to be more humble about the scope of the solution and connection to commercial incentives. It's gotten to the point where a rite of passage for becoming a senior developer is waking up to the commercialization and misdirection.
You can see the cracks in Next.js. Vercel, Netlify, et al. are interested in capitalizing on the murkiness (the middle, as you put it) in this space. They promise static performance but then push you into server(less) compute so they can bill for it. This has a real toll on the average developer. In order for a feature to be a progressive enhancement, it must be optional. This is orthogonal to what is required for a PaaS to build a moat.
All many people need is a pure, incrementally deployed SSG with a robust CMS. That could exist as a separate commodity, and at some points in the history of this JAMStack/Headless/Decoupled saga it has come close (excluding very expensive solutions). It's most likely that we need web standards for this, even if it means ultimately being driven by commercial interests.
halfcat 723 days ago [-]
> a major selling point of Node was running JS on both the client and server so you can write the code once
But we don’t have JS devs.
We have a team of Python/PHP/Elixir/Ruby/whatever devs and are incredibly productive with our productivity stacks of Django/Laravel/Phoenix/Rails/whatever.
mixmastamyk 723 days ago [-]
> have to do a network request for each and every validation of user input.
HTML5 solved that to a first approximation client-side. Often later you'll need to reconcile with the database and security, so that will necessarily happen there. I don't see that being a big trade-off today.
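A minimal sketch of that first approximation (Flask is assumed only to serve the page; the form fields, routes, and markup are hypothetical): HTML5 attributes handle the client-side check with no JS, and the server still re-checks on submit.
    # Sketch: HTML5 validation attributes as client-side first approximation,
    # with the real check repeated server-side (Flask assumed; markup invented).
    from flask import Flask, request

    app = Flask(__name__)

    @app.get("/signup")
    def signup_form():
        return (
            "<form method='post' action='/signup'>"
            "  <input type='email' name='email' required>"
            "  <input type='text' name='zip' required pattern='[0-9]{5}' "
            "         title='5-digit ZIP code'>"
            "  <button>Sign up</button>"
            "</form>"
        )

    @app.post("/signup")
    def signup():
        # The same rules still get enforced here; the HTML5 attributes only
        # stop obviously bad input before it ever leaves the browser.
        email = request.form.get("email", "")
        return "ok" if "@" in email else ("invalid email", 400)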
SoftTalker 723 days ago [-]
Well by definition the "average" team is not capable of writing a "great" app. So it doesn't matter so much what the technology stack is -- most of what is produced is pretty shitty regardless.
mattgreenrocks 723 days ago [-]
This is the real problem, and why I'd argue we've made little real progress in tooling despite huge investment in it.
The web still requires too much code and concepts to be an enjoyable dev experience, much less one that you can hold in your head. Web frameworks don't really fix this, they just pile leaky abstractions on that require users to know the abstractions as well as the things they're supposed to abstract.
It seems like it is difficult to truly move webdev forward because you have to sell to people who have already bought into the inessential complexity of the web fully. The second you try to take part of that away from them, they get incensed and it triggers loss aversion.
sublinear 722 days ago [-]
> all the devs writing shitty MPAs are now writing shitty SPAs
drain the swamp man
foobarbecue 723 days ago [-]
welcome to City Web Design, can a take a order
brushfoot 723 days ago [-]
I use tech like HTMX because, as a team of one, I have no other choice.
I tried using Angular in 2019, and it nearly sank me. The dependency graph was so convoluted that updates were basically impossible. Having a separate API meant that I had to write everything twice. My productivity plummeted.
After that experience, I realized that what works for a front-end team may not work for me, and I went back to MPAs with JavaScript sprinkled in.
This year, I've looked at Node again now that frameworks like Next offer a middle ground with server-side rendering, but I'm still put off by the dependency graphs and tooling, which seems to be in a constant state of flux. It seems to offer great benefits for front-end teams that have the time to deal with it, but that's not me.
All this to say pick the right tool for the job. For me, and for teams going fuller stack as shops tighten their belts, that's tech like HTMX, sprinkled JavaScript, and sometimes lightweight frameworks like Alpine.
scoofy 723 days ago [-]
I use htmx on my current project, and it's like a dream. I'm happy to sacrifice a bit of bandwidth to be able to do all the heavy lifting in Python. On top of that, it makes testing much, much easier since it turns everything into GET and POST requests.
I'd add a couple features if I were working there (making css changes and multiple requests to multiple targets standard), but as it stands, it's a pleasure to work in.
I have no love for unnecessarily bloated dependency graphs, but we can't have our cake and eat it too.
Next.js, for example, comes packed with anything and everything one might need to build an app, sitting on the promise of hyperproductivity with "simplicity". Plus, it's made of single-responsibility-principle modules, kind of necessary to build a solve-all-needs framework.
And it does that.
A bit like Angular, set to solve everything front-side. With modules not entirely tightly coupled but sort of to get the full solution.
And it did that.
Then we have outliers like React, which stayed away from trying to solve too many things.
But the developers have spoken, and soon enough it became packed in with other frameworks. Gatsby etc. And community "plug-ins" to do that thing that dev think should be part of the framework.
And they did that, solved most problems from authentication to animation, free and open source sir, so that developers can write 12 lines of code and ship 3 features per day in some non innovative way, but it works, deployed in the next 36 seconds, making the manager happy as he was wondering how to justify over 100k in compensation going to a young adult who dressed cool and seemed to type fast.
Oh no! Dependency hell. I have to keep things maintained, I have to actually upgrade now, LTS expired, security audits are on my back, I even have to change my code that worked perfectly well and deal with "errors", I can't ship 3 features by the end of today.
We need a new framework!
BiteCode_dev 723 days ago [-]
Django comes with a lot: auth, caching, csrf protection, an orm, the admin, form workflow, templating, migrations, i18n, and yet doesn't come with thousands of deps.
courgette 723 days ago [-]
I've maintained a side project in Django for 5+ years now.
The scope has been reduced to almost nothing. I spent maybe 20h on it in 2022. But it's still being used.
Django helps by how boring and solid it feels.
A similar project in node would probably not build anymore
BiteCode_dev 723 days ago [-]
I know of a 15 years django project that runs on 2.7 that is still making money. It got reinstalled this month on brand new Ubuntu servers out of a rubbish requirements.txt, and it worked.
So much for saying python packaging sucks.
Maxion 720 days ago [-]
Yep, same.
Django can also serve a boatload of concurrent users, way more than one would think. It is a boring, old-fashioned, but stable and very functional framework.
hirako2000 722 days ago [-]
Maybe Django is less of a dependency bloat than most other frameworks. I was just calling out the common consequence of getting more: you get more.
Npm projects are likely the most bloated by far, but so are Java-based projects; just look at Spring.
Calling out a Python framework or some lean Go solution as the exception to the rule is fair enough; my point is that developers expect everything and want little to do but rapid, painless development.
I would love to hear from those asked to migrate their Python 2.7 Django ecommerce app to Python 3, since v2 is totally dead and unmaintained, posing serious security risks. But sure, if we forgo these things and don't ever need to touch the code again, some frameworks have no downside. Makes a certain kind of developer finally be right.
minusf 723 days ago [-]
it comes with exactly 2 on the latest Python versions
marcosdumay 723 days ago [-]
Back in the late 90's and early 00's, armed with the experience of C, C++, Bash, and Perl, everybody knew very clearly that "batteries included" is the correct way to create development tools.
I don't know where the current fashion of minimalism comes from. It doesn't bring simplicity.
hdjjhhvvhga 723 days ago [-]
> C, C++, Bash, and Perl
While I agree with your comment, lumping these together somehow doesn't seem fair. As for C and C++, it took decades to develop package managers, and we still can't say we have standard ones (but I feel Conan is a de facto standard PM for C++).
Bash, on the other hand, should never have 'batteries included' because in this case the batteries are the rest of the system - coreutils and the rest. And Perl had CPAN quite early on, in the early nineties iirc.
marcosdumay 723 days ago [-]
C is completely without batteries; the stdlib for C++ was a great change and a big force towards people adopting it. The same happened with Bash and Perl (and people did migrate a lot of things into the latter).
itronitron 723 days ago [-]
If someone releases a module that depends on nine other modules then their module will likely be promoted by the authors of the other modules, rinse and repeat.
justeleblanc 723 days ago [-]
So Next.js did everything right, but is built upon React that does too much. Okay?
ademup 723 days ago [-]
Your story sounds similar to mine, and your choice to use HTMX has me motivated to check it out. The sum total of my software supports 5 families' lifestyles entirely on LAMP MPAs with no frameworks at all. Thanks for posting.
jasfi 723 days ago [-]
I'm using React, and I feel like I can manage as a team of one. But React has a huge community, which means lots of libraries for just about anything you need.
I previously used HTMX for another project of mine, and it worked fine too. I did, however, feel limited compared to React because of what's available.
toyg 722 days ago [-]
Until those libraries rot, or some dependency breaks the mountain of hacks...
willio58 723 days ago [-]
Angular is falling off hard in the frontend frameworks race. And I totally agree about how the boilerplate and other things about Angular feels bad to work with. Other frameworks are far easier to build with, to the point where a 1-person team can easily handle them. React is being challenged but still has the biggest community, it's a much better place to start than Angular when evaluating frameworks like this.
All that being said, I'm glad HTMX worked out for you!
fridgemaster 723 days ago [-]
Angular 2 works fine out of the box, and already provides a good architecture that noobs struggle to come up with in "freestyle" solutions like React. Angular's bi-directional binding is way superior to and simpler to use than React's mono-directional binding: you can just use bound variables, with no need for complicated setState or abominations like Redux. Vue also has bi-directional binding. Essentially there are many alternatives that are superior to React, which holds the position it does mostly because of fame and popularity.
agrippanux 723 days ago [-]
You may not have used React in the past few years, but setState fell out of favor a while ago with the release of the useState hook, and Redux (which I agree is an abomination) isn't necessary for 95% (imo) of cases, again thanks to hooks.
For bound variables you can use MobX or signals in Preact.
ukuina 723 days ago [-]
How does something "fall out of favor" in React?
Is it deprecated?
willio58 720 days ago [-]
The recommendation for React projects for the past few years has been to write everything using functional components vs the old class-based components. In function-based components you can only use hooks, so useState is what the parent comment is referring to.
halfcat 723 days ago [-]
> React is being challenged but still has the biggest community
jQuery and PHP have entered the chat
ChikkaChiChi 723 days ago [-]
I've felt the same way and it's good to hear I'm not alone. I feel like log4j should have been enough of a jolt to push back on dependency hell enough that devs would start writing directly against codebases they can trace and understand. Maybe this is just a byproduct of larger teams not having to do their own DevOps.
MisterSandman 723 days ago [-]
Angular is notoriously bad for single developers. React is much better, and things like Remix and Gatsby are even better.
_heimdall 723 days ago [-]
I really can't recommend Gatsby to anyone at this point. The sale to Netlify was the final nail in the coffin for Gatsby; the entire business was sold off only for the perceived value of Valhalla.
nickisnoble 723 days ago [-]
Svelte is even betterer
Bellend 723 days ago [-]
I'm a single developer and its fine. (5 years in).
stanmancan 723 days ago [-]
Have you taken a look at Elixir/Phoenix? I've recently made the switch and I find it incredibly productive as a solo developer.
fridgemaster 723 days ago [-]
Just pick a lightweight web framework, and freeze the dependencies. I don't see the problem.
recursivedoubts 723 days ago [-]
i am the creator of htmx, this is a great article that touches on a lot of the advantages of the hypermedia approach (two big ones: simplicity & it eliminates the two-codebase problem, which puts pressure on teams to adopt js on the backend even if it isn't the best server side option)
hypermedia isn't ideal for everything[1], but it is an interesting & useful technology and libraries like htmx make it much more relevant for modern development
we have a free book on practical hypermedia (a review of concepts, old web 1.0 style apps, modernized htmx-based apps, and mobile hypermedia based on hyperview[2]) available here:
I didn’t know what HTMX was and couldn’t figure it out from the comments here, so I went to htmx.org. This is what I saw at the top of the landing page:
> introduction
> htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
> htmx is small (~14k min.gz’d), dependency-free, extendable, IE11 compatible & has reduced code base sizes by 67% when compared with react
This tells me what htmx does and what some of its properties are, but it doesn’t tell me what htmx is! You might want to borrow some text from your Documentation page and put something like the following at the top of your homepage:
“htmx is a dependency-free, browser-oriented javascript library that allows you to access modern browser features directly from HTML.”
fridgemaster 723 days ago [-]
>simplicity
Can be achieved in MPAs and SPAs alike. I'd also argue that having state floating around in HTTP requests is harder to reason about than having it contained in a single piece in the browser or in a server session. Granted this is not a problem of HTMX, but of hypermedia. There is a reason why HATEOAS is almost never observed in REST setups.
> two-codebase problem
This is a non-problem. In every part of a system, you want to use the right tool for the job. Web technologies are better for building UIs, if only by the sheer amount of libraries and templates that already exist. The same splitting happens on the server side: you would have a DB server, a web service, maybe a load balancer. You naturally have many parts in a system, each one specialized in one thing, and you pick the technologies that make the most sense for each of them. I'd also argue that backend developers would have a hard time dealing with the never-ending CSS re-styling and constant UI change requests of today. This is not 2004, where the backend guys could craft a quick HTML template in a few hours and go back to working on the DB unmolested. The design and UX bar is way higher now, and specialists are naturally required.
_heimdall 723 days ago [-]
> There is a reason why HATEOAS is almost never observed in REST setups.
I saw the HTMX creator floating around the thread so hopefully he can confirm, but my understanding is HATEOAS is a specific implementation of a REpresentational State Transfer API. JSON is often used for the API; HTMX uses HTML instead, but it is indeed still a REST API transferring state across the wire.
My shift key really doesn't appreciate all these abbreviations
Just started using HTMX on a new project and have been a big fan. I’d go so far as to say that it’s the best practical case for the theory of hypermedia in general. Like others have mentioned, this is the sort of thing that prob _should_ be in the HTML spec but, given what I’ve personally seen about the standards process, I have little expectation of seeing that. Thx again!
rmbyrro 723 days ago [-]
How would this be in HTML standard if it requires JS to work?
recursivedoubts 723 days ago [-]
it's implemented in js because that's what's available, but there's no reason this functionality couldn't be folded into HTML itself and then implemented in browsers without requiring js
cogman10 723 days ago [-]
It's not clear to me, but how and where is state managed?
In the OPs article, it looks like the only thing going over the line is UUIDs. How does the server know "this uuid refers to this element"? Does this require a sticky session between the browser and the backend? Are you pushing the state into a database or something? What does the multi-server backend end up looking like?
Doesn't this make serverless read-only apps (that only require a fileserver) effectively impossible?
In a serverless read-only app, all business logic and state is maintained on the browser.
_heimdall 723 days ago [-]
Serverless and static are different. Static sites will likely be limited to maintaining state in the browser. Serverless is terribly named, but it still has a short-lived server with access to request query parameters, cookies, headers, and a database, depending on your setup.
A possible extension to HTMX would be to allow this kind of offloading to pure JS functions instead of requiring hacky intercepts.
You would still have a clear separation of responsibilities between frontend rendering (by the browser only) and application logic (which only generates HTML as output).
nymanjon 714 days ago [-]
I've done the same thing with vanilla JS in a service worker. I did it with my HTMF library -- similar to HTMX -- but not production ready, and based on forms rather than being able to put your attributes everywhere. I'll have to create a simple to-do app with HTMX that shows this same functionality. The Rust one is cool, as you can do it with a 150kB library. But I can do the same thing with 10kB of just simple JS.
lakomen 723 days ago [-]
I hate HATEOAS with a passion.
Yet another useless Java gimmick with no support other than 1 single framework, Spring Boot.
If htmx has anything to do with HATEOAS it's going to be ignored out of principle.
recursivedoubts 721 days ago [-]
That's because you are using it in a JSON API context, where it doesn't make any sense.
HATEOAS (or, as Fielding prefers to call it, the hypermedia constraint) is a necessary component of a truly REST-ful networking system, but unfortunately the language around REST is all jumbled up.
Mind sharing why you have such strong opinions on HATEOAS?
Is your problem that there aren't many implementations of it today, or concerns with the architecture itself?
pdonis 723 days ago [-]
The article under discussion here appears to be saying that HTMX can work without Javascript enabled. But HTMX itself is a Javascript library, correct? So how can it work without Javascript enabled?
recursivedoubts 723 days ago [-]
the article says it is possible to build web applications that use htmx for a smoother experience if javascript is enabled, but that properly falls back to vanilla HTML if js is not enabled
this is called progressive enhancement[1], and yes, htmx can be used in this manner although it requires some effort by the developer
unpoly, another hypermedia-oriented front end library, is more seamless in this regard and worth looking at
Adding the form element allows it to post to the server without javascript, just like olden times. Since the htmx header is not included, the backend was instructed to return a full page instead of a fragment.
Every company I've been a part of has redesigned their front end at least once.
These redesigns would be a lot more difficult if we had to edit HTML on the client and the HTML that a server returns.
Also, HTMX is best styled with semantic classes, which is a problem for companies using Tailwind and utility classes in their HTML. With class-heavy HTML it's nearly impossible to redesign in two different places, and performance suffers from returning larger chunks of HTML.
Despite all that, I want HTMX to be the standard way companies develop for the web. But these 2 problems need to be addressed first, I feel, before companies (like mine) take the leap.
booleandilemma 723 days ago [-]
I use htmx on my personal site and I love it so much. Thank you!
account-5 723 days ago [-]
Complete novice here; what are the advantages of hyperview over something like flutter?
I looked at a bunch of frameworks before settling on Dart/Flutter for my own cross-platform projects. I did look at htmx, but since I didn't really want to create a web app I moved on. But I like the idea of a true REST style of app.
recursivedoubts 723 days ago [-]
hyperview uses the hypermedia approach, which means the client and server are decoupled via the uniform interface
so you can, for example, deploy a new version of your mobile app without updating the client, a big advantage over needing users to update their mobile apps
benatkin 723 days ago [-]
It doesn't feel like hypermedia to me. It just feels like a vue-like language that is an internal DSL for HTML instead of an external DSL for HTML like svelte and handlebars.
we generalize HTML's hypermedia controls in the following way:
- any HTML element can become a hypermedia control
- any event can drive a hypermedia interaction
- any element can be the target of a hypermedia interaction (transclusion, a concept in hypermedia not implemented by HTML)
all server interactions are done in terms of hypermedia, just like w/links and forms
it also makes PUT, PATCH and DELETE available, which allows HTML to take advantage of the full range of HTTP actions (see the sketch below)
htmx is a completion of HTML as a hypermedia, this is its design goal
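As a concrete illustration of that generalization, here is a minimal sketch, assuming a Flask backend with htmx loaded on the page; the /todos/<id> route and markup are hypothetical. A plain span issues a DELETE on click and swaps out its closest list item:
    # Sketch: any element as a hypermedia control, any event as a trigger,
    # any element as the target (Flask 2+ assumed; routes/markup invented).
    from flask import Flask

    app = Flask(__name__)
    TODOS = {1: "buy milk", 2: "write docs"}

    @app.get("/")
    def index():
        items = "".join(
            f"<li>{text} "
            f"<span hx-delete='/todos/{tid}' hx-trigger='click' "
            f"hx-target='closest li' hx-swap='outerHTML'>[x]</span></li>"
            for tid, text in TODOS.items()
        )
        return f"<script src='https://unpkg.com/htmx.org'></script><ul>{items}</ul>"

    @app.delete("/todos/<int:tid>")
    def delete_todo(tid):
        TODOS.pop(tid, None)
        # An empty 200 response plus the outerHTML swap above removes the <li>.
        return ""
The empty-response-plus-outerHTML-swap trick is, as far as I can tell, the same basic pattern htmx's delete-row example documents.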
tomhallett 723 days ago [-]
have you seen any interest by the browsers to build htmx features as experimental browser features, with the goal of htmx features becoming browser standards?
When looking at the various options, I always enjoyed your architectural choice of htmx being an extension of html, for that very reason. Similar to "phonegap" hoping that the phonegap code base would get smaller and smaller as mobile browsers built more of those features natively. :)
recursivedoubts 723 days ago [-]
i haven't heard of any browsers implementing these features, but that would be the right thing: they would be far more effectively implemented by the hypermedia client, and it wouldn't be too much work technically
my sense is that HTML is constrained by social/organizational issues rather than technical ones at this point
hopefully someone on the chrome team notices htmx at some point and HTML starts making progress again
tomhallett 723 days ago [-]
question - which parts of htmx would be better from an end-user-perspective if they were built into the browser? i assume "all features" might be a bit faster, but is there anything which would be night and day better if it wasn't a js library?
the browser vendors have been more than happy to use experimental features to chart their own course, which I think can be a good thing to spawn innovation and healthy competition. (given the standards bodies will be slower and more prudent - similar to how python doesn't want "pydantic" to be part of python core, because that would hurt pydantic's innovation, not improve it)
Maybe the way someone from the chrome team could tap into the "business value" of "let's build these htmx features in chrome" would be that it allows developers to write internal/developer/CRUD apps where "only supported in Chrome" is acceptable...
_heimdall 723 days ago [-]
HTMX makes heavy use of replacing branches of the DOM with HTML partials fetched with GET/POST requests.
There are a ton of additional features built into HTMX, but I'd love to see just this basic primitive built into browsers. It's related to the element transitions API that has been working its way into browsers, but approaches it from the angle of HTML partials instead of diffing two full pages during SPA navigation.
recursivedoubts 723 days ago [-]
maybe I'm too close to it, but htmx feels like a hack to address things that really should be part of the HTML spec
Links and forms are the bread and butter of many frameworks.
Like with HTMX, SvelteKit and Remix forms won't function properly without the framework.
recursivedoubts 723 days ago [-]
if you read the article, you will see that you can use htmx as progressive enhancement quite easily since it is consonant with the vanilla HTML approach.
what makes htmx a hypermedia framework is the exchange of hypermedia with the server, this satisfies the hypermedia constraint (HATEOAS) of REST. there are other libraries that are also hypermedia oriented, such as unpoly.
it is a different approach to building web applications than the JSON/RPC style that is popular today
You’ve mentioned unpoly in a couple of your comments. I’m a beginner dev and have used HTMX successfully and quite easily, so thank you for making it. What does it offer that Unpoly doesn’t and vice-versa? Or do they basically do the same things?
dlisboa 723 days ago [-]
I don't really see how progressive enhancement works if every element is a hypermedia control. Without JS/HTMX you just have a page that does nothing:
<div hx-get="/example">Get Some HTML</div>
This will never do anything without HTMX because the semantic of that markup is wrong. You'd really have to write everything in a vanilla HTML approach to begin with, and never make use of the idea of adding hypermedia to other elements.
recursivedoubts 723 days ago [-]
you do have to structure things properly to make progressive enhancement work with htmx
for your example, you wouldn't have a div, you'd use an anchor:
<a hx-get="/example" href="/example">Get Some HTML</a>
or, more likely, just boost it:
<a hx-boost="true" href="/example">Get Some HTML</a>
and then on the server side you'd need to check the `HX-Request` header to determine if you were going to render an entire page or just some partial bit of HTML
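To make that concrete, here's a minimal sketch of such a check, assuming a Flask backend; the /example route and markup are hypothetical:
    # Sketch: one endpoint serving both a full page (plain navigation, or JS
    # disabled) and a partial (htmx request), keyed off the HX-Request header.
    from flask import Flask, request

    app = Flask(__name__)
    FRAGMENT = "<div id='content'>Some HTML</div>"

    @app.get("/example")
    def example():
        if request.headers.get("HX-Request"):
            # htmx made the request: return just the bit to swap in
            return FRAGMENT
        # plain navigation or no JS: return the whole page around the fragment
        return (
            "<html><body>"
            "<script src='https://unpkg.com/htmx.org'></script>"
            "<a hx-boost='true' href='/example'>Get Some HTML</a>"
            + FRAGMENT +
            "</body></html>"
        )
The same branch works in any server stack; the only contract is the HX-Request header htmx adds to the requests it makes.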
if you go down the progressive enhancement route you need to think carefully about each feature you implement and how to make it compatible w/ no-js. Some patterns (e.g. active search) work well. Others (drag and drop) don't and you'll have to forgo them in the name of supporting noJS
nb: unpoly is a more seamless progressive enhancement experience due to its design goals
renerick 723 days ago [-]
HTML forms absolutely do function without htmx since they're part of the browser standard. By default, htmx sends forms using the same content type as the browser does (application/x-www-form-urlencoded). The server will receive the same requests as if the browser sent them, and can differentiate by the presence of the HX-Request HTTP header.
benatkin 723 days ago [-]
That's the "properly" part. If you make a form and the output is supposed to go in a particular place and it doesn't (hx-swap), then it isn't functioning properly. The degree to which it is functioning improperly depends on the UI and the user. In many cases it's improper enough that it may as well not work at all.
renerick 723 days ago [-]
As mentioned, htmx will attach a `HX-Request: true` header to the request[1]. The server can check this header and either return a partial for swapping, or a full page/redirect like in good old days. Same with any request. This is one way of how htmx provides "progressive enhancement". Sure, this may not be as transparent, as other JS-first frameworks implement it, but it's not complex at all.
The only thing that might cause trouble is non-standard (as in HTML standard) HTTP methods, which basically means any method other than GET and POST, I admit that. However, the fact that these methods are not supported even in HTML5 is a huge miss.
SvelteKit forms work fine with Javascript disabled -- Rich Harris is a big proponent of progressive enhancement.
benatkin 721 days ago [-]
I know they work at some level. That's why I'm comparing them to HTMX.
They don't work fine if the users/stakeholders don't find it acceptable to render the result to the full window instead of the hx-swap style area, or to spend extra time on the backend making it render the whole thing.
Actually, this is one area where SvelteKit has it beat, because the backend is done by SvelteKit, and you don't have to manually deal with hx-swap not taking effect.
jonahx 723 days ago [-]
Out of curiosity, have you used hyperview? Do you consider it production ready?
redonkulus 723 days ago [-]
We've been using similar architecture at Yahoo for many years now. We tried to go all in on a React framework that worked on the server and client, but the client was extremely slow to bootstrap due to downloading/parsing lots of React components, then React needing to rehydrate all the data and re-render the client. Not to mention rendering an entire React app on the server is a huge bottleneck for performance (can't wait for Server Components / Suspense which are supposed to make this better ... aside: we had to make this architecture ourselves to split up one giant React render tree into multiple separate ones that we can then rehydrate and attach to on the client)
We've moved back to an MPA structure with decorated markup to add interactivity like scroll views, fetching data, tabs and other common UX use cases. If you view the source on yahoo.com and look for "wafer," you can see some examples of how this works. It helps to avoid bundle size bloat from having to download and compile tons of JS for functionality to work.
For a more complex, data-driven site, I still think the SPA architecture or "islands" approach is ideal instead of MPA. For our largely static site, going full MPA with a simple client-side library based on HTML decorations has worked really well for us.
vosper 723 days ago [-]
> We've been using similar architecture at Yahoo for many years now.
At all of Yahoo? I imagined such a big company would have a variety of front-end frameworks and patterns.
redonkulus 723 days ago [-]
Nope, not all. Yahoo homepage, News, Entertainment, and Weather all use this architecture. Yahoo Mail uses a React/Redux architecture on the client. Other Yahoo properties with more complex client-side UX requirements are using things like Svelte or React. It's not a one-size-fits-all architecture at Yahoo; we let teams determine the right tools for the job.
thomond 711 days ago [-]
I had no idea Yahoo
pier25 723 days ago [-]
> simple client-side library based on HTML decorations has worked really well for us
What library are you using?
redonkulus 723 days ago [-]
We developed an internal library, but there are similar libraries in open source (although I can't remember their names).
aidenn0 723 days ago [-]
> Managing state on both the client and server
This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).
[edit]
I also just noticed:
> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
The part about "slow and unreliable internet connections" is not specific to SPAs. If anything, a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
[edit2]
> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.
This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
--
I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?
lolinder 723 days ago [-]
> This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
I agree with everything else you said, but having followed the development of Kotlin/JS and WASM closely I have to disagree with this statement.
JavaScript is a very bad compilation target for any language that wasn't designed with JavaScript's semantics in mind. It can be made to work, but the result is enormous bundle sizes (even by JS standards), difficult sourcemaps, and terrible performance.
WASM has the potential to be great, but to get useful results it's not just a matter of changing the compilation target, there's a lot of work that has to be done to make the experience worthwhile. Rust's wasm_bindgen is a good example: a ton of work has gone into smooth JS interop and DOM manipulation, and all of that has to be done for each language you want to port.
Also, GC'd languages still have a pretty hard time with WASM.
8organicbits 722 days ago [-]
> a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
The word "slow" here is unclear. Thick clients work poorly on low bandwidth connections, as the first load takes too long to download the JS bundle. JS bundles can be crazy big and may get updated regularly. A user may give up waiting. Thin clients may load faster on low bandwidth connections as they can use less javascript (including zero javascript for sites that support progressive enhancement, my favorite as a NoScript user). Both thin and thick clients can use fairly minimal data transfer for follow-up actions. An HTMX patch can be pretty small, although I agree the equivalent JSON would be smaller.
If "slow" means high latency, then you're right, a thick client can let the user interact with local state and the latency is only a concern when state is being synchronized (possibly with a spinner, or in the background while the user does other things).
Unreliable internet is unclear to me. If the download of the JS bundle fails, then the thick client never loads. A long download time may increase the likelihood of that happening. Once both are loaded, the thick client wins as the user can work with local state. Both need to sync state sometimes. The thin client probably needs the user to initiate retry (a poor experience) and the thick client could support retry in the background (although many don't support this).
ivan_gammel 723 days ago [-]
Fully agree with this comment. Also, client and server state are different: on the client you need only session state relevant to user journey, on server you keep only persistent state and use REST level 3 for the rest.
michaelchisari 723 days ago [-]
Everybody's arguing about whether Htmx can do this or that, or how it handles complex use case x, but Htmx can do 90% of what people need in an extremely simple and straight-forward way. That means it (or at least its approach) won't disappear.
A highly complex stock-trading application should absolutely not be using Htmx.
But a configuration page? A blog? Any basic app that doesn't require real-time updates? Htmx makes much more sense for those than React. And those simple needs are a much bigger part of the internet than the Hacker News crowd realizes or wants to admit.
If I could make one argument against SPA's it's not that they don't have their use, they obviously do, it's that we're using them for too much and too often. At some point we decided everything had to be an SPA and it was only a matter of time before people sobered up and realized things went too far.
ktosobcy 723 days ago [-]
This!
It's like with static websites - we went from static to blogs rendered in php and then back to jekyll...
silver-arrow 722 days ago [-]
Exactly! Well said
mtlynch 723 days ago [-]
I really want to switch over to htmx, as I've moved away from SPA frameworks and I've been much happier. SPAs have so much abstraction, and modern, vanilla JavaScript is pretty decent to work with.
The thing that keeps holding me back from htmx is that it breaks Content Security Policy (CSP), which means you lose an effective protection against XSS.[0] When I last asked the maintainer about this, the response was that this was unlikely to ever change.[1]
Alpine.js, a similar project to htmx, claims to have a CSP-compatible version,[2] but it's not actually available in any official builds.
I keep seeing people talk about this, can someone create a minimum example of what this exploit would look like?
robertoandred 723 days ago [-]
If you don't like abstraction, why would you use something as abstracted and non-standard as htmx?
mtlynch 723 days ago [-]
It's a tradeoff, and either extreme has problems.
Too much abstraction (especially leaky abstraction the way web frameworks are) makes it difficult to reason about your application.
But if you optimize for absolute minimal abstraction, then you can get stuck with code that's very repetitive where it's hard to pick apart the business logic from all the boilerplate.
recursivedoubts 723 days ago [-]
htmx can work w/ a CSP, sans a few features (hx-on, event filters)
mtlynch 723 days ago [-]
My understanding based on the docs[0] is that htmx works with CSP, but it also drastically weakens its protection, as attackers who successfully inject JS into htmx attributes gain code execution that CSP would have normally prevented.
Am I misunderstanding? If I can use htmx without sacrificing the benefits of CSP, I'd really love to use htmx.
I don't understand the concern. If your backend is sufficiently compromised to inject arbitrary JS into server responses, then you've already lost, and I don't see how that's worse than serving a compromised App.js from the same server.
mtlynch 723 days ago [-]
The attacker doesn't have to compromise the backend to achieve XSS.
Suppose your website displays user-generated content (like HN posts). If the attacker finds a way to bypass encoding and instead injects JS, then without CSP, the attacker gets XSS at that point. With CSP, even if the attacker can get user-generated content to render as JS, the browser will refuse to execute it.
My understanding of htmx is that the browser would still refuse to execute standard JS, but the attacker can achieve XSS by injecting htmx attributes that are effectively arbitrary JS.
infogulch 722 days ago [-]
If you exclude the features mentioned above by the creator, most htmx attributes seem pretty harmless from a CSP pov. Really, this is an argument for why htmx features should be built into the browser -- presumably details like this would be rooted out and resolved by browser developers before they would permit it to be included in their browser.
jeremyjh 723 days ago [-]
Alpine is a lightweight client side framework, not really at all equivalent to htmx.
mtlynch 723 days ago [-]
I'm not sure what you mean. htmx and alpine.js are both client-side frameworks. To me, they seem to have similar goals and similar functionality.
What do you see as the difference?
summarity 723 days ago [-]
They're neither different nor similar. In fact they work together, with Alpine managing client side reactive state (NOT app state, just interaction) and htmx managing the actual request model. That's why the htmx docs often refer to Alpine. They should be used in combination, not to displace each other.
werdnapk 723 days ago [-]
This is also comparable to how Stimulus and Turbo work together, although I've come to like using Alpine in place of Stimulus with Turbo, and that combination works just fine as well.
mtlynch 723 days ago [-]
Ah, gotcha. Thanks!
jeremyjh 723 days ago [-]
The purpose of htmx is to enable the server to send fragments of markup to update the DOM in response to UI events. Alpine.js is purely client side; you may not even have an application server.
mtlynch 723 days ago [-]
Oh, I didn't realize that. Thanks!
dfabulich 723 days ago [-]
People were making this prediction ten years ago. It was wrong then, and it's wrong now.
This article makes its case about Htmx, but points out that its argument applies equally to Hotwired (formerly Turbolinks). Both Htmx and Hotwired/Turbolinks use custom HTML attributes with just a little bit of client-side JS to allow client-side requests to replace fragments of a page with HTML generated on the server side.
But Turbolinks is more than ten years old. React was born and rose to popularity during the age of Turbolinks. Turbolinks has already lost the war against React.
The biggest problem with Turbolinks/Htmx is that there's no good story for what happens when one component in a tree needs to update another component in the tree. (Especially if it's a "second cousin" component, where your parent component's parent component has subcomponents you want to update.)
EDIT: I know about multi-swap. https://htmx.org/extensions/multi-swap/ It's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client. If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
Furthermore, in Turbolinks/Htmx, it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background. This means that the user always has to wait for a server round trip to create a TODO item, hurting the user experience. It's unacceptable on mobile web in particular.
When predicting the future, I always look to the State of JS survey https://2022.stateofjs.com/en-US/libraries/front-end-framewo... which asks participants which frameworks they've heard of, which ones they want to learn, which ones they're using, and, of the framework(s) they're using, whether they would use it again. This breaks down into Awareness, Usage, Interest, and Retention.
React is looking great on Usage, and still pretty good on Retention. Solid and Svelte are the upstarts, with low usage but very high interest and retention. Htmx doesn't even hit the charts.
The near future is React. The further future might be Svelte or Solid. The future is not Htmx.
jgoodhcg 723 days ago [-]
I've spent almost my entire career working on react based SPAs and react native mobile apps. I've just started playing around with HTMX.
> no good story for what happens when one component in a tree needs to update another component in the tree
HTMX has a decent answer to this. Any component can target replacement for any other component. So if the state of everything on the page changes then re-render the whole page, even if what the user clicked on is a button heavily nested.
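A minimal sketch of what that can look like, assuming a Flask backend; the #page wrapper, the /clicked route, and the markup are hypothetical:
    # Sketch: a deeply nested button re-rendering the whole page region on click.
    from flask import Flask

    app = Flask(__name__)
    COUNT = {"clicks": 0}

    def page():
        # The whole visible UI as one fragment; the nested button targets #page.
        return (
            "<div id='page'>"
            f"<h1>Clicked {COUNT['clicks']} times</h1>"
            "<section><div><div>"
            "<button hx-post='/clicked' hx-target='#page' hx-swap='outerHTML'>"
            "Click me (heavily nested)</button>"
            "</div></div></section></div>"
        )

    @app.get("/")
    def index():
        return "<script src='https://unpkg.com/htmx.org'></script>" + page()

    @app.post("/clicked")
    def clicked():
        COUNT["clicks"] += 1
        return page()  # the server re-renders everything, htmx swaps it in place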
> it's impossible to implement "optimistic UI," ... hurting the user experience
Do we actually need optimistic UI? Some apps need to work in offline mode sure, like offline maps or audiobooks or something. The HTMX author agrees, this is not the solution for that. Most of the stuff I have worked on though ... is useless without an internet connection.
In the case of "useless without internet connection" do we really need optimistic UI. The actual experience of htmx is incredibly fast. There is no overhead of all the SPA stuff. No virtual dom, hardly any js. It's basically the speed of the network. In my limited practice I've actually felt the need to add delays because the update happens _too fast_.
I'm still evaluating htmx but not for any of the reasons you've stated. My biggest concern is ... do I want my api to talk in html?
dfabulich 723 days ago [-]
> Do we actually need optimistic UI? Some apps need to work in offline mode sure, like offline maps or audiobooks or something. The HTMX author agrees, this is not the solution for that. Most of the stuff I have worked on though ... is useless without an internet connection.
> It's basically the speed of the network.
Does your stuff work on mobile web? Mobile web requests can easily take seconds, and on a dodgy connection, a single small request can often take 10+ seconds.
The difference between optimistic UI and non-optimistic UI on mobile web is the difference between an app that takes seconds to respond, on every click, and one that responds instantly to user gestures.
BeefySwain 723 days ago [-]
I want my stuff to work on mobile web. What I don't want is for stuff to look like it worked on mobile web, because the front end is "optimistic", but it actually didn't work. I find this to be a much worse user experience than having to wait for a round trip most of the time.
listenallyall 723 days ago [-]
In the case of bad internet connection, "optimistic UI" is the worst solution. User thinks the data he entered has persisted, but in fact it has not. Big surprise months later when he realizes his boss's birthday reminder never got saved to his calendar.
dfabulich 723 days ago [-]
The fix is to save data to client-side storage (IndexedDB) before attempting a network connection, and retries when connectivity is restored.
Optimistic UI probably isn't necessary for a web site, but you'll certainly want it for a web app (which is what Htmx claims to be good for).
In the real world on the mobile web we actually have, TODO apps (which is what TFA is about), calendars, notes apps, etc. all work better with client-side state synchronized to the server in the background.
React has a bunch of good libraries for this, especially TanStack React Query.
Htmx doesn't.
listenallyall 723 days ago [-]
So actually React does not address your precious optimistic UI; you as a developer rely on a separate 3rd-party library to handle it. We all know that React, a decade old by now, has a wider ecosystem. If that's your argument, then I guess every developer using emerging or niche languages, libraries and frameworks is wrong in your eyes. However, you're a guy who relies on "State of JS" to decide what to use, as opposed to thinking for yourself.
dbmikus 723 days ago [-]
No need to make such personal attacks in a comment here.
mixmastamyk 723 days ago [-]
> a single small request can often take 10+ seconds.
Hah! Good luck getting React even to start up in that environment. Meanwhile oldschool HN will still be snappy. Speaking from experience.
_heimdall 722 days ago [-]
What happens in your optimistically updated UI when a request eventually fails 10+ seconds after the user thought it succeeded and moved on?
In my experience optimistic UI updates don't actually make much sense if you expect users to regularly see large delays. Optimistic updates are great, though, for avoiding the jank of a loading state that pops in and out of view for a fraction of a second.
One great thing about your docs is the anticipation of concerns.
CRConrad 718 days ago [-]
“So if they try this, then what can go wrong? And then if they do that, what next....?”
Must be why they're called “Recursive Doubts”.
koromak 723 days ago [-]
My app would crash and burn without optimistic UI. For simple CRUD applications sure, but most products these days aren't CRUD apps anymore.
jonomacd 723 days ago [-]
You can make use of htmx-indicator to show that the request is ongoing. From my perspective you have to be careful here. If you are _too_ optimistic and imply a request has been successfully sent to the DB when it really hasn't then users are not going to like that as their requests disappear.
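A minimal sketch of that, assuming a Flask backend; the routes, markup, and artificial delay are hypothetical. The spinner element stays hidden until a request issued by the button is in flight:
    # Sketch: htmx-indicator showing an in-flight request (Flask assumed).
    import time
    from flask import Flask

    app = Flask(__name__)

    @app.get("/")
    def index():
        return (
            "<script src='https://unpkg.com/htmx.org'></script>"
            "<button hx-post='/save' hx-indicator='#spinner'>Save</button>"
            "<span id='spinner' class='htmx-indicator'>Saving...</span>"
        )

    @app.post("/save")
    def save():
        time.sleep(2)  # simulate a slow request so the indicator is visible
        return "Saved"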
OliverM 723 days ago [-]
I've not used Htmx, but a cursory browse of their docs gives https://htmx.org/extensions/multi-swap/ which seems to solve exactly this problem. And thinking about it, what makes it as difficult as you say? If you've a js-library on the client you control you can definitely send payloads that library could interpret to replace multiple locations as needed. And if the client doesn't have js turned on the fallback to full-page responses solves the problem by default.
Of course, I've not used Turbolinks, so I don't know what issues applied there.
Edit: I'm not saying htmx is the future either. I'd love to see how they handle offline-first (if at all) or intermittent network connectivity. Currently most SPAs are bad at that too...
dfabulich 723 days ago [-]
I've edited my post to clarify.
Multi-swap is possible, but it's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client.
If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
nymanjon 714 days ago [-]
You should be able to do this with events in HTMX also. So, you could do an update or create something and then in HTMX capture the event and reload the component when that happens.
> there's no good story for what happens when one component in a tree needs to update another component in the tree.
Huh, no one told me this before, so I've been very easily doing it with htmx's 'out of band swap' feature. If only I'd known before that it was impossible! ;-)
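For anyone who hasn't run into it, a minimal sketch of an out-of-band swap, assuming a Flask backend; the ids, routes, and markup are hypothetical. The response carries the content for the hx-target plus an extra element marked hx-swap-oob, and htmx updates both places from the one response:
    # Sketch: one response updating the targeted list *and* a counter elsewhere.
    from flask import Flask

    app = Flask(__name__)
    ITEMS = ["first item"]

    def lis():
        return "".join(f"<li>{item}</li>" for item in ITEMS)

    @app.get("/")
    def index():
        return (
            "<script src='https://unpkg.com/htmx.org'></script>"
            f"<p>Total: <span id='count'>{len(ITEMS)}</span></p>"
            f"<ul id='list'>{lis()}</ul>"
            "<button hx-post='/items' hx-target='#list'>Add item</button>"
        )

    @app.post("/items")
    def add_item():
        ITEMS.append(f"item {len(ITEMS) + 1}")
        # The <li>s land in the hx-target (#list, default innerHTML swap); the
        # span is marked hx-swap-oob, so htmx also swaps it into the existing #count.
        return lis() + f"<span id='count' hx-swap-oob='true'>{len(ITEMS)}</span>"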
deltarholamda 723 days ago [-]
I guess it depends on what your definition of "the future" is.
If it's teams of 10X devs working around the world to make the next great Google-scale app, then yeah, maybe React or something like it is the future.
If it's a bunch of individual devs making small things that can be tied together over the old-school Internet, then something like HTMX moves that vision forward, out of a 90-00s page-link, page-link, form-submit flow.
Of course, the future will be a bit of both. For many of my various project ideas, something like React is serious overkill. Not even taking into account the steep learning curve and seemingly never-ending treadmill of keeping current.
geenat 723 days ago [-]
> it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background.
Pretty common patterns for this - just use a sprinkle of client-side JS (one of: hx-on, Alpine, jQuery, hyperscript, vanilla JS, etc.), then trigger an event for htmx to do its thing after a while, or use the debounce feature if it's only a few seconds. Lots of options, actually.
React would have to eventually contact the server as well if we're talking about an equivalent app.
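One way to sketch the "sprinkle of client-side JS" idea described above is plain JS plus htmx's lifecycle events (htmx:beforeRequest / htmx:afterRequest); the endpoint, ids, and CSS class here are invented:

    <form id="todo-form" hx-post="/todos" hx-target="#todo-list" hx-swap="beforeend">
      <input name="title" id="todo-title">
      <button>Add</button>
    </form>
    <ul id="todo-list"></ul>

    <script>
      // Optimistically show a greyed-out placeholder as soon as the request starts;
      // when htmx swaps in the server-rendered <li>, drop the placeholder again.
      const form = document.getElementById('todo-form');
      form.addEventListener('htmx:beforeRequest', () => {
        const li = document.createElement('li');
        li.className = 'pending';
        li.textContent = document.getElementById('todo-title').value;
        document.getElementById('todo-list').appendChild(li);
      });
      form.addEventListener('htmx:afterRequest', () => {
        document.querySelectorAll('#todo-list li.pending').forEach(li => li.remove());
      });
    </script>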
jbergens 723 days ago [-]
Of course there are some challenges and some use cases where Htmx is not the best solution but I think it can scale pretty far.
You can split a large app into pages, and then each page only has to care about its own parts (sub-components). If you want some component to be used on multiple pages, you just create it with the server technology you use and include it. The other components on the page can easily target it. You may have problems if you change a shared component in such a way that targeting stops working. You may be able to share the targeting code to make this easier.
wibblewobble124 723 days ago [-]
we’re using htmx at work, migrating away from react. the technique we’re using is just rendering the whole page, e.g. we have a page where one side of the screen is a big form and the other side is a view on the same data but with a different UI, updating one updates the other. we’re using the morphdom swapping mode so only the things that changed are updated in-place. as a colleague commented after implementing this page, it was pretty much like react as far as “pure function of state.”
listenallyall 723 days ago [-]
Intentionally or not, this doesn't read like a cogent argument against the merits of HTMX (and isn't, since it is factually incorrect) but just as a person who is trying to convince him/herself that his/her professional skill set isn't starting to lose relevance.
From the February 31, 1998 Hacker News archives: "According to state of the web survey, Yahoo and Altavista are looking great on usage, Hotbot and AskJeeves are the upstarts. Google doesn't even hit the charts."
antoniuschan99 723 days ago [-]
But isn't React going this route as well? There was some talk a week or so back with the React team talking about this direction.
Also, it seems so cyclic, isn't HTMX/Hotwire similar to Java JSP's which was how things were before SPA's got popular?
qgin 723 days ago [-]
It's interesting that this paradigm is especially popular on Hacker News. I see it pop up here pretty regularly and not many other places.
antoniuschan99 723 days ago [-]
lol HN itself seems like it should be the poster child of htmx :P. It's literally just text.
BeefySwain 723 days ago [-]
The people using HTMX have never heard of stateofjs.com (though they are painfully aware of the state of js!)
vp8989 723 days ago [-]
1) "Web application development" doesn't happen in a vacuum. Often it happens in contexts where the "backend" is also consumed by various non-web applications. In those contexts, collapsing the frontend and backend back into 1 component is less of the slam dunk than it's made out to be in this post.
2) The missing piece is how you can achieve this "collapsing" back of functionality into single SSR deployable(s) while still preserving the ability to scale out a large web application across many teams. Microfrontends + microservices could be collapsed into SSR "microapplications" that are embedded into their hosting app using iframes?
rektide 723 days ago [-]
Personally I believe strongly in thick clients but this is a pretty neat demo anyways.
I see a lot of resemblance to http://catalyst.rocks with WebComponents that target other components. I think there's something unspoken here that's really powerful & interesting, which is the declarativization of the UI. We have stuff on the page, but making the actions & linkages of what does what to what has so far been trapped in code-land, away from the DOM. The exciting possibility is that we can nicely encode more of the behavior into the DOM, which creates a consistent learnable/visible/malleable pattern for wiring (and rewiring) stuff up. It pushes what hypermedia can capture into a much deeper zone of behaviors than just anchor-tag links (and listeners, which are jump points away from the medium into codespace).
wwweston 723 days ago [-]
> the declarativization of the UI
Yes! There's always going to be some range of client behavior that's difficult to reduce to declarations, but so much of what we do is common that if it isn't declarative we're repeating a lot of effort.
And in general I think you're describing a big part of what made the web successful in the first place; the UI-as-document paradigm was declarative, accessible, readable, repeatable.
rektide 722 days ago [-]
Thank you! Yes and!:
> but so much of what we do is common that if it isn't declarative we're repeating a lot of effort.
We're not only repeating effort, we're also using artisanal approaches to wiring things up. Hand-writing handlers is repeated work with low repeatability; there'll be a variety of forms & ways & places people end up writing similar-ish handlers, & creating the data model to pass all the references/targets around.
Being more declarative is not only less work, it also ought to be much higher quality, more predictable, with far fewer vagaries of implementation. It'll make it much easier to comprehend & maintain. Less work, for more repeatable/consistent outcomes.
CRConrad 718 days ago [-]
> ...declarativization of the UI. We have stuff on the page, but making the actions & linkages of what does what to what has so far been trapped in code-land, away from the DOM. The exciting possibility is that we can nicely encode more of the behavior into the DOM, which creates a consistent learnable/visible/malleable pattern for wiring (and rewiring) stuff up.
Web “app” development finally catching up to where Visual Basic and Delphi were ~30 years ago, hurrah!
haolez 723 days ago [-]
Catalyst looks nice! What's the downside? Is it dead?
rektide 723 days ago [-]
Catalyst is great. Definitely active, actively used, last updates in main were late February.
They have a big 2.0 branch that has a ton of internal changes. TypeScript 5.0 finally updated to support modern @decorator syntax & they're rewriting a bunch to support that, which is excellent.
Alifatisk 723 days ago [-]
That looks a lot like Stimulus!
rektide 723 days ago [-]
Yeah it is! I meant to cite that too. It's mentioned by Catalyst as inspiration, somewhere.
diegof79 723 days ago [-]
While HTMX makes some interactions easier for developers without JS experience, the primary issue in web development is that the browser was not designed for apps. It evolved unevenly from a document navigation platform, and many things we do in web development today are hacks due to the lack of a better solution.
In my opinion, the future of the web as a platform is about viewing the web browser as an operating system with basic composable primitives.
HTMX adds attributes to HTML using JS, and the argument about "no JavaScript" is misleading: with HTMX you can write interactions without writing JS, but HTMX itself uses JS. But, as it forces you to use HTML constructs that will work without scripts (such as forms), the page will fall back. It doesn't mean that the fallback is usable.
The custom HTMX attributes work because the browser supports extensions of its behavior using JS. If we add those attributes to the standard HTML, the result is more fragmentation and an endless race. The best standard is one that eliminates the need for creating more high-level standards. In my view, a possible evolution of WASM could achieve that goal. It means going in the opposite direction of the article, as clients will do more computing work. In a future like that, you can use HTMX, SwiftUI, Flutter, or React to develop web apps. The biggest challenge is to balance a powerful OS-like browser like that with attributes like searchability, accessibility, and learnability (the devtools inspector and console are the closest thing to Smalltalk we have today)... even desktop OSs struggle today to provide that.
majormajor 723 days ago [-]
> HTMX allows you to design pages that fetch fragments of HTML from your server to update the user's page as needed without the annoying full-page load refresh.
I've been on the sidelines for the better part of a decade for frontend stuff, but I was full-stack at a tiny startup in 2012ish that used Rails with partial fragment templates for this. It needed a bit more custom JS than just a "replacement target" annotation everywhere, but it was pretty straightforward, and provided shared rendering for the initial page load and these updates.
So, question to those who have been active in the frontend world since then: that obviously failed to win the market compared to JS-first/client-first approaches (Backbone was the alternative we were playing with back then). Has something shifted now that this is a significantly more appealing mode?
IIRC, one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
efields 723 days ago [-]
FE dev/manager here. I'll tackle this one out of order.
> one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
Yup. Still, if you're at the scale where you need to support multiple clients, things should be going well enough where you can afford the extra work.
As soon as multiple clients are involved, you're writing SOMETHING to support specifically that client. 10+ years ago, you'd be writing those extra conditionals to return JSON/XML _and_ someone is building out this non-browser client (mobile app, third party API, whatever). But you're not rearchitecting your browser experience so that's the tradeoff.
> Has something shifted now that this is a significantly more appealing mode?
React especially led from one promise to another about _how much less code_ you'd have to write to support a wide range of clients, when in reality there was always another configuration, another _something_ to maintain when new clients were introduced. On top of that, the mobile device libraries (React Native, etc), were always steps behind what a true native app UX felt like.
I think a lot of us seasoned developers just feel burned by the SPA era. Because of how fast it is to iterate in JS, places like npm would seemingly have just the right component needed to avoid having to build custom in-house, and it's simply an `npm add` and an import away. Meanwhile, as the author states, React and company changed a lot under the hood rapidly, so dependencies would quickly become out of date, and now you're trying to maintain a project full of decaying 3rd-party libs that becomes its own tech debt nightmare. Just for, say, popper.js or something like that.
I'm just glad the community seems to actively be reconsidering "the old ways" as something valuable worth revisiting after learning what we learned in the last decade.
dpistole 723 days ago [-]
From a front end perspective I think the selling points I see pitched for these new server side frameworks are "SEO" and "speed".
SEO I personally think is a questionable motivation except in very specific use cases.
Speed is almost compelling, but all the considerations around how a page is structured (which components are server, which are client, etc.) do not seem worth the complexity cost IMO. Just pop a loading animation up in most cases.
I think I'm stuck somewhere in the middle between old-hacker-news-person yelling "lol were just back at index.html" and freshly-minted-youtube-devs going "this is definitely the new standard".
0xbadcafebee 723 days ago [-]
I just want Visual Basic for the web man. Screw writing lines of code. I want to point and click, drop complex automated objects onto a design, put in the inputs and outputs, and publish it. I don't care how you do it, I don't want to know any of the details. I just want to be able to make things quickly and easily. I don't care about programming, I just want to get work done and move on with my life.
At this rate, when I'm 80 years old we will still be fucking around with these stupid lines of code, hunched over, ruining our eyesight, becoming ever more atrophied, all to make a fucking text box in a monitor pop some text into a screen on another monitor somewhere else in the world. It's absolutely absurd that we spend this much of our lives to do such a dumb thing, and we've been iterating on it for five decades, and it's still just popping some text in a screen, but we applaud ourselves that we're so advanced now because something you can't even see is doing something different in the background.
otreblatercero 723 days ago [-]
I feel you. I'd love to have a tree that gives money, and I tried to, but somehow I had to implement many things, like invent a seed that can actually produce golden coins; I had to read about alchemy, seed hybridization... I just wanted to get money from a tree.
But do not despair, while documenting my process, I found a revolutionary tool called Dreamweaver, I think it's the future, I think it would be terrific for your needs.
CRConrad 718 days ago [-]
> I just want Visual Basic for the web man. Screw writing lines of code. I want to point and click, drop complex automated objects onto a design, put in the inputs and outputs, and publish it. I don't care how you do it, I don't want to know any of the details.
Do an Internet search for “Quartex Pascal”, and/or its creator, Jon Aasenden. He has a blog on WordPress, and a Facebook group. His crazy Quartex project is apparently nearing completion. It's an Object Pascal compiler and IDE — kind of a Delphi / Lazarus clone, if you will — that compiles to JavaScript, for the end product to be run in the browser. I think that's as close to “Visual Basic for the web” as one can get.
That’s basically what I have with Vue + Vuetify + PUG templates.
It’s a pleasure to work with so little boilerplate.
pkelly 723 days ago [-]
Thank you for writing this article! I've had similar thoughts for the past 5 years or so.
A lot of the comments here seem to have the approach that there is a single best stack for building web applications. I believe this comes from the fact that as web engineers we have to choose which tech to invest our careers in which is inherently risky. Spend a couples years on something that becomes defunct and it feels like a waste. Also, startup recruiters are always looking for the tech experience that matches the choice of their companies. VCs want to strike while the iron is hot.
Something that doesn't get talked about enough (which the author does mention near the end of article) is that different web apps have different needs. There is 100% a need for SPAs for certain use cases. Messaging, video players, etc. But there are many cases where it is overkill, like the many many CRUD resource apps I've built over the years. Say you have a couple hundred users that need to manage the state of a dozen interconnected resources. The benefits of an MPA are great here. Routing is free, no duplication of FE / BE code. Small teams of devs can ship code and fix bugs very fast which keeps the user feedback loop tight.
quii 723 days ago [-]
Thanks for taking the time to read the article :) A lot of the comments here seem to imply that I claim "htmx is the one hammer to solve all website needs", even though I explicitly say SPAs have their place in the article.
A hypermedia approach is the nice happy medium between a very static website and an SPA, not sure why so many people are close-minded about this possibility.
poidos 723 days ago [-]
I’ve been using HTMX (from Clojure) for projects recently and I have to say I like it a lot. Full-stack web stuff is a hobby for me and I always had trouble really grokking all the parts of SPAs. HTMX fits neatly into my brain’s model of how websites should work.
MattyRad 723 days ago [-]
> ... requires a full page refresh to use ... isn't good enough for many types of web-app we need to make.
> without the annoying full-page load refresh.
This fixation on the page refresh needs to stop. Nearly every single website which has purportedly "saved" page refreshes has brutalized every other aspect of the UX.
This is a good article, and I agree that Htmx brings sanity back to the frontend, but somewhere along the line frontend folks got it in their head that page refreshes were bad, which is incorrect for essentially all CRUD / REST APIs. Unless you're specifically making a complex application that happens to be served through the web, like Kibana or Metabase, then stop harping on page refreshes.
Even this article calls it the annoying refresh. Not the impediment refresh, or the derisive refresh, or the begrudged refresh. Moreover, what exactly is annoying about page refreshes? That there's a brief flash? That it takes ~0.3 seconds to completely resolve?
Users don't care about page refreshes, and in fact they are an indication of normalcy. Upending the entire stack and simultaneously breaking expected functionality to prevent them is madness.
The killer feature of Htmx is that it doesn't upend the entire stack, and you can optimize page refreshes relatively easily. That's great! But even then I'm still not convinced the tradeoff is worth it.
jmull 723 days ago [-]
> HTMX is the Future
I'm not seeing it. SPAs can be overly complex and have other issues, but I'm not seeing HTMX as a particular improvement.
Also, a bunch of this article doesn't make sense to me.
E.g, one of the listed costs of SPAs is managing state on the client and server... but (1) you don't have to -- isn't it rather common to keep your app server stateless? -- and (2) HTMX certainly allows for client-side and server-side state, so I'm not sure how it's improving things. That is, if you want to carefully manage app state, you're going to need a mechanism to do that, and HTMX isn't going to help you.
It also doesn't somehow prevent a rat's nest of tooling or dependencies. It isn't an application framework, so this all depends on how you solve that.
SPA's also aren't inherently "very easy to make [...] incorrectly".
Also, the suggested HTMX approach to no browser-side javascript is very crappy. Your app would have to be very specifically designed to not be utterly horrible w/o JS with such an approach and instead be just pretty horrible. There are just so much more straightforward ways to make apps that work well without JS. Also, this isn't exactly a mainstream requirement in my experience.
I could go on and on. "Caching" - htmx doesn't address the hard part of caching. "SEO-friendliness" - like all the benefits here attributed to htmx, htmx doesn't particularly help with this, and there are many other available ways to achieve it.
IDK. These kinds of over-promising, hyped-up articles give me the feeling the thing being hyped probably doesn't have a lot of real merit to be explored, or else they'd talk about that instead. It also feels dishonest to me, or at least incompetent, to make all of these claims and assertions that aren't really true or aren't really especially a benefit of htmx vs. numerous other options.
CRConrad 718 days ago [-]
> SPA's also aren't inherently "very easy to make [...] incorrectly".
Judging from the ones one encounters in the wild, yes they are.
Veuxdo 723 days ago [-]
I like how the cons of SPA are "you have to manage state" and "clients have to execute code".
I mean, aren't these baseline "get computers to do stuff" things?
chasd00 723 days ago [-]
back in the olden days a web browser was largely considered just a program to read documents stored on other systems that can be linked to each other, sent over a simple stateless protocol. Then we started to be able to collect user input, then a hack was invented to maintain state between request/response pairs (cookies), then a scripting language, etc.
There are many use cases out there where not treating a browser as a container to run an actual application is the right way to go. On the other hand, there's many use cases where you want the browser to be, basically, a desktop app container.
The big bold letters at the top of the article declaring htmx is the future is a bit much. It has its place and maybe people are re-discovering it, but it's certainly not the future of web development IMO. The article gives me a kind of web dev career whiplash.
0x445442 723 days ago [-]
When do you want the browser to be anything more than a hypertext document viewer and why?
dalmo3 723 days ago [-]
Roughly every time I can use something by just typing some words instead of downloading a .exe.
mdaniel 723 days ago [-]
Or its close friend: when I want to use the data or customize the UI via `document.querySelectorAll`
zerkten 723 days ago [-]
Why do things in two places when you can do it all in one place? This isn't limited to computers, but unless you are getting specific benefits, it isn't wise to continue with a SPA approach.
We had the same and worse problems with "thick clients" that came before the web grew. With the right requirements, team, tools etc., you could sometimes build great apps. This was incredibly difficult and the number of great apps was relatively small. Building with earlier server-side web tech, like PHP, isolated everything on the server and it was easier to iterate well than with the "thick clients" model.
SPA reinvents "thick clients" to some degree and brings back many of the complications. No one should claim you can't build a great SPA, or that they have few advantages, but the probability of achieving success is frequently lower. Frameworks try to mitigate these concerns, but you are still only moving a closing some of the gaps and the probability of failure remains higher. Depending on the app you can move the success metrics, but we often end up fudging on items like performance.
We get to a point where there is current model is fraying and energy builds to replace it with something else. We end up going back to old techniques, but occasionally we learn from what was done before.
I find that it's surprisingly rare for people with 1-2 years of experience to be able to give an accurate overview of the last 10 years of web development. A better understanding of this history can help with avoiding (or targeting) problems old timers have encountered and complain about in comments.
optymizer 723 days ago [-]
I remember fetching HTML from the server with AJAX and updating innerHTML before it was called AJAX. Is HTMX repackaging that or am I missing some exciting breakthrough here?
sourcecodeplz 723 days ago [-]
It is like that, yes, but more abstract, because it uses special HTML attributes to make the JS calls for you.
There is a big downside though: weak error handling. It just assumes that your call will get a response.
bccdee 720 days ago [-]
No, there're error events[1] which can be handled with a little bit of client-side scripting.
It is doing the same thing. It just makes it cleaner and easier.
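To make the comparison concrete, here is a minimal sketch of the hand-rolled "AJAX + innerHTML" style next to the htmx equivalent; the /fragment URL and ids are invented, and the two snippets are alternatives rather than meant to coexist on one page:

    <!-- Hand-rolled version, roughly what people wired up with XMLHttpRequest or jQuery: -->
    <button onclick="load()">Load</button>
    <div id="panel-a"></div>
    <script>
      function load() {
        fetch('/fragment')
          .then(r => r.text())
          .then(html => { document.getElementById('panel-a').innerHTML = html; });
      }
    </script>

    <!-- The htmx version of the same thing, declared as attributes
         (the default swap replaces the target's inner HTML): -->
    <button hx-get="/fragment" hx-target="#panel-b">Load</button>
    <div id="panel-b"></div>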
fogzen 723 days ago [-]
Server-side apps cannot provide optimistic UI. No matter how you feel about it, they are limited in this capability compared to client-side apps. The user doesn’t care about the technology. For example, imagine a todo app that shows a new todo immediately. Or form validations that happen as soon as data is entered. That’s a superior experience to waiting on the server to continue interaction. Whether that’s harder to engineer is irrelevant to the user. We should be striving for the best possible user experience, not what we as engineers personally find easy or comfortable.
HTMX is cool. HTMX may fit your needs. But it’s not enough for providing the best possible user experience.
adamckay 723 days ago [-]
You don't have to be restricted to just using htmx, you can use it with client side Javascript to give you that interactivity you need in the places you need it.
Indeed, the creator of htmx has created another library called hyperscript which he's described as a companion to htmx.
Awful. The last thing we need is another layer further away from plain javascript cluttering up web pages.
chrsjxn 723 days ago [-]
I love articles like these, because the narrative of "JS framework peddlers have hoodwinked you!" is fun, in an old-timey snake oil salesman kind of way.
But I'll be honest. I'll believe it when I see it. It's not that htmx is bad, but given the complexity of client-side interactions on the modern web, I can't see it ever becoming really popular.
Some of the specifics in the comparisons are always weird, too.
> Instead of one universal client, scores of developers create bespoke clients, which have to understand the raw data they fetch from web servers and then render controls according to the data.
This is about client side apps fetching arbitrary JSON payloads, but your htmx backend needs to do the same work, right? You have to work with the raw data you get from your DB (or another service) and then render based on that data.
You're still coupled to the data, and your htmx endpoint is just as "bespoke" as the client code which uses it. It's not wrong to prefer that work be done on the server instead of the client, or vice versa, but we're really just shuffling complexity around.
jonahx 723 days ago [-]
> This is about client side apps fetching arbitrary JSON payloads, but your htmx backend needs to do the same work, right? You have to work with the raw data you get from your DB (or another service) and then render based on that data.
In your analogy, the client JS code is like the serverside code, fetching over the network instead of directly from the DB, and then doing essentially the same work from there... materializing html and a set of controls for the user to interact with.
In a sense, I see your point.
But there's a difference: When you materialize the html on the server and send that over the wire, the browser does all the work for you. When you take the SPA approach, you must re-implement much of what the browser does in JS, and hence the well-known trouble with routing, history, and so on. You can argue that React/Angular/whatever takes care of this for you at this point, and to some extent it's true, but you're still cutting against the grain. And even as mature as the frameworks are, you will hit weird edge cases sometimes that you'd never have to worry about with the browser itself.
chrsjxn 723 days ago [-]
There's some difference, but I'm still not convinced. An htmx app is still using JS to manipulate the state of the DOM based on the results of API calls. You could build an app with web-components that do the same local modifications using JSON payloads.
There are definitely advantages to using a multi-page application architecture, which htmx is going to get by default.
But I really don't see a big difference between using JS to replace DOM fragments with server generated HTML, compared to using JS to replace DOM fragments with client generated HTML.
fogzen 723 days ago [-]
You touch on something that bugs me about these discussions: Lack of proof. Show me the web app with killer UX developed with htmx. Show me the product of the tools and processes being advocated.
renerick 723 days ago [-]
I'm not a developer of either of these, but here are two examples:
Thanks! I feel like these discussions would be so much more fruitful if they were centered around dissecting real products.
iamsaitam 723 days ago [-]
"If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation." -- this is the crux of the article.
These kinds of takes fall in the bullseye of "I don't want to program with JavaScript". The subtext is all about this.
Perhaps.. maybe.. Htmx won't be the future because there are a lot of people that like programming in Javascript?
Pet_Ant 723 days ago [-]
The problem is that these kinds of approaches require more upfront thought, which produces less now and pays off later... and only if maintained by people in tune with the original design.
I've seen these architectures quickly ruined by 'can-do' people who butcher everything to get a feature done _and_ get a bonus from management for quick delivery.
727564797069706 723 days ago [-]
In my experience, these 'can-do' people can (and usually will) butcher anything, be it MPA, SPA or TUI.
This seems like the real problem we need to solve, but not sure how?
aidenn0 723 days ago [-]
I suppose we could stop rewarding people for delivering features that customers want...
Pet_Ant 723 days ago [-]
Or make them accountable for long-term consequences. It's no different than CEOs cutting R&D and calling it a profit. Deferring bonuses until a 3-year post-mortem would help.
tacone 723 days ago [-]
I used to have my own hand-written mini version of htmx ten years ago. It took a few lines of jQuery to have small parts of the UX update without a refresh.
I don't see the point, by the way. I think htmx is here to stay and is a good choice for many, but it's clearly not a silver bullet. You make decently fast UIs, not blazing fast ones; there are no (proper) offline-first apps with htmx; caching is likely more difficult or sometimes impossible; and the load on your server is inevitably greater (of course it could be more than acceptable in some cases, so why not?), which also means more bandwidth paid to your cloud provider as opposed to your CDN. You will still have to write JavaScript sooner or later.
It depends on what you're doing. Nothing is a priori "the future"; the future is "the future", and it has yet to come.
If anyone is looking to discuss making Hypermedia Driven Applications with HTMX in Python, head over to the discussions there!
nologic01 723 days ago [-]
Good initiative. HTMX + python hits a sweet spot for various interesting things.
tgbugs 723 days ago [-]
I first encountered the principles behind htmx in its precursor intercooler.js. Those principles really resonated with my distaste for complexity. Amusingly I found out about htmx itself when rereading https://grugbrain.dev and it all clicked! htmx is crystal that trap internet complexity demon!
fredrikholm 723 days ago [-]
Irony that they're all made by the same person!
xutopia 723 days ago [-]
Whether it is Htmx or Phoenix/LiveView or Hotwire/Stimulus, we're seeing a shift in the industry towards augmented HTML rather than throwing away all RESTful routes. REST is elegant, and this approach is very powerful as well.
s1k3s 723 days ago [-]
Love articles like this because I know there's some manager somewhere who will read this and force it upon their team without having any idea if it's good or not. And in 3 years we'll have people from those teams complaining about HTMX because it's not suited for their projects.
The future is whatever works best for your use-case.
lvh 723 days ago [-]
I don't disagree with the article, but I feel like the author almost landed on an interesting counterpoint. Author points out they didn't do this in ClojureScript, but writing apps in Reagent (the leading ClojureScript React wrapper) has looked almost identical across many years and many versions of React. Many of the state management epochs have also been avoided, because "manage state" is a core idea in Clojure, and so the stuff we had almost a decade ago is still perfectly fine today.
So, I posit that the churn, while definitely real, is not actually intrinsic.
Right now, at Latacora, we're writing a bunch of Clojure. That includes Clerk notebooks, some of which incorporate React components. That's an advantage I think we shouldn't ignore: not needing to write my own, say, Gantt chart component, is a blessing. So, specifically: not only do I think the churn is incidental to the problem, I don't even believe you need to give up compatibility to get it.
Fun fact: despite all of this, a lot of what we're writing is in Clerk, and while that's still fundamentally an SPA-style combination of frontend and backend if you were to look at the implementation, it absolutely _feels_ like an htmx app does, in that it's a visualization of your backend first and foremost (React components notwithstanding).
jdthedisciple 723 days ago [-]
This puts all the computational load on the server.
Imagine tens of thousands of clients requesting millions of HTML fragments be put together by a single server maintaining all the state, while all the powerful high-end computing hardware at the end user's fingertips goes completely to waste.
Not convinced.
quacker 723 days ago [-]
How is it fundamentally any different than 10s of thousands of clients requesting JSON or whatever other serialized data format?
jdthedisciple 723 days ago [-]
Insofar as only retrieving data and returning it as json is way less work for the server than retrieving data plus rendering it.
quacker 722 days ago [-]
For the same data, why should serializing it to JSON be particularly faster than rendering it as HTML? The server is converting the data directly to a byte string in either case, which is about the same amount of work.
HTML is more verbose, so I would guess that JSON serialization is slightly faster, but I doubt there's an order of magnitude difference. (I could be proven wrong though)
I agree that taking HTMX to the extreme where _all_ interactions require a request to the server is too much overhead for interactive web apps. But there's likely a good middle ground where I can have mainly server side rendering of HTML fragments with a small amount of client side state/code that doesn't incur particularly more or less server load.
quii 721 days ago [-]
Converting data to a HTML string is not a performance bottleneck you'll be worrying about. I wasn't worrying about it much in 2000, and you really shouldn't need to in 2023.
jonahx 723 days ago [-]
> by a single server maintaining all the states
HTTP is stateless. This is the whole point of the hypermedia paradigm.
If you have a page with many partial UI page changes over htmx, then yes, this paradigm puts increased load on the server, but your DB will almost certainly be your bottleneck before this will be, just as in the SPA case.
jdthedisciple 723 days ago [-]
I'm not talking about network state, but app state.
Yes, in HTMX the server is handling client app state, even things as little as whether a todo is in read or edit state.
That just seems absurd to me, let the client take care of that.
bccdee 720 days ago [-]
Not really? The server in your example serves the read & the edit components statelessly; the component which the user is viewing exists only on the client.
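For context, the stateless read/edit split being described here looks roughly like htmx's well-known click-to-edit pattern; this sketch uses invented URLs and ids:

    <!-- Read view: clicking Edit fetches the edit form for this row only. -->
    <div id="todo-1">
      <span>Buy milk</span>
      <button hx-get="/todos/1/edit" hx-target="#todo-1" hx-swap="outerHTML">Edit</button>
    </div>

    <!-- What GET /todos/1/edit returns; saving swaps the read view back in.
         The server never tracks which rows are "in edit mode". -->
    <form id="todo-1" hx-put="/todos/1" hx-target="#todo-1" hx-swap="outerHTML">
      <input name="title" value="Buy milk">
      <button>Save</button>
    </form>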
mixmastamyk 723 days ago [-]
This avoids unnecessary computation at the client, it does not substantially add to the burden of the server. Which would need to be reconciled regardless of the markup format used over the pipe. Alpine is available for local flair.
jdthedisciple 723 days ago [-]
I don't know but adapting UI to reflect the edit state of a todo seems like a classic client responsibility imho, not unnecessary.
What's unnecessary to me however is sending bytes thousands of miles across the wire to some server to do the same.
mixmastamyk 723 days ago [-]
Batching in a SPA could alleviate some work but could be done with alpine instead, as needed. With a significant cut in overhead, download size, developer ramp up, etc. Depends, but think the reduction in complexity is significant.
IshKebab 723 days ago [-]
Most users these days are probably using phones, not high end computers.
jdthedisciple 723 days ago [-]
Most phones have more computing power than all of NASA did in the 80s.
I was specifically thinking of modern smartphones in fact, which are pretty damn fast at executing a little bit of JS.
(Though I agree that some of the bloated bundles resulting from modern frameworks, or their poor usage, definitely go too far)
rwalle 723 days ago [-]
The processor on my phone is better than the one on a 2015 Macbook Pro 13" (i5)
IshKebab 723 days ago [-]
The processor on an average phone does not.
dhosek 723 days ago [-]
Most phones have an awful lot of computational power.
tabtab 723 days ago [-]
I keep saying this and will say it again: what's really needed is a stateful GUI markup language. HTML+DOM+JS+CSS is the wrong tool for the CRUD/GUI job, and force-fitting it has inflamed the area so badly many don't want to even try to scratch it.
Bloated JS frameworks like Angular, React, Vue, and Electron have big learning curves and a jillion gotchas because they have to reinvent long-known and loved GUI idioms from scratch, but the DOM is inherently defective for that need, being meant for static documents. There are just too many GUI needs that HTML/DOM lacks or can't do right:
https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...
Let's byte the bullet and create a GUI markup standard. Perhaps base it off Tk or Qt kits to avoid starting from scratch.
CRConrad 718 days ago [-]
> Let's byte the bullet and create a GUI markup standard.
Heh, fun pun.
> Perhaps base it off Tk or Qt kits to avoid starting from scratch.
VCL / LCL.
peter_retief 723 days ago [-]
I like htmx but have decided to use unpoly, for my current project, which is similar.
The concept is great but why has it taken so long?
HTMX brings new life to tech like django which can catapult MVPs into production asap.
Backend engineers are now able to write management tools and experimental products faster, and then pass the winning products off to a Flutter team to code for all environments. The backend could be converted into a Django REST API if the code is properly refactored.
incrudible 723 days ago [-]
I take the opposite side of that bet. Always bet on (more) Javascript. Dealing with HTML sucks, and we want as little of it as possible, otherwise we would not have invented generations of frameworks to make it manageable. The lasting success of React shows that we have converged on how to do that. Moving back to MPAs is always something that bored engineers want to do. Users generally do not care.
Moreover, REST APIs - and I mean the simple ones people actually want to use, none of that HATEOAS BS - are ubiquitous for all sorts of interactions between web and nonweb clients. Are you going to ship an MPA as your mobile apps, or are you going to just use REST plus whatever clients make sense?
It also makes a lot of sense in terms of organization. Your backend developers probably suck at design, your frontend developers suck at databases.
janosd 723 days ago [-]
I can still remember the horrors of page state. The server would keep track of what the client has and only send HTML fragments to the client. Early-days ASP, Prado and the likes did this and it was a terrible idea. HTMX sounds very much like that, but the packaging is nicer. Ultimately, the problem is that sometimes you need to update more than just the tiny, well-defined part that is the todo list and several parts of the UI need to change when a request is made. By which I mean this happens all the time. The road to hell is paved with todo list implementations as a proof that a system works and is good. Please show me a moderately complex login system with interlinking interfaces implemented in this.
infamia 716 days ago [-]
Unpoly and Hotwire are somewhat similar to HTMX, but they update entire sections of a page (i.e., "page fragments") as you describe. They perform a DOM diff and update everything wrapped inside an HTML tag (e.g., a <div>) with a special attribute. Unpoly does this within the app's normal request/response cycle, while Hotwire uses websockets to stream updates.
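As a concrete illustration of the fragment idea, here's roughly what the Hotwire (Turbo Frames) flavour of it looks like; the id and URL are invented:

    <!-- Navigation inside a <turbo-frame> only replaces that frame: Turbo fetches
         /todos/1/edit and swaps in the matching <turbo-frame id="todo_1"> from the
         response, leaving the rest of the page alone. -->
    <turbo-frame id="todo_1">
      <span>Buy milk</span>
      <a href="/todos/1/edit">Edit</a>
    </turbo-frame>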
Can we count in Hotwire, Inertia.js, Alpine.js, and Unpoly as well?
kurtextrem 723 days ago [-]
Back in the days, when JSON became popular as response type for rendering in the client, I saw arguments such as "the JSON payload is smaller than sending full HTML, so you pay the download only once instead of N times".
Only once, because what has to be done with the JSON has been downloaded in the JS bundle. With full HTML, full HTML comes back in every response.
However, I'm not sure if this is actually a problem or rather depends on how much interaction the user does (so where is the "turning point" of the overhead of having all in the bundle vs full HTML responses). What does everyone think?
aigoochamna 723 days ago [-]
I somewhat get where htmx is coming from. It's not bad per-say.. I actually like the general idea behind it (it's sorta like Turbolinks, but a bit more optimal using fragments instead of the entire page, though Turbolinks requires zero additional work on the markup side and works with JavaScript disabled out of the box).
With that being said, I imagine it would become unmaintainable very quickly. The problems htmx is solving are better solved with other solutions in my opinion, but I do think there's something that can be learned or leveraged with the way htmx goes about the solution.
CRConrad 718 days ago [-]
> It's not bad per-say..
Per se. It's Latin for “in itself”; has nothing to do with saying anything. Think about it: What would “per-say” even mean?
werdnapk 723 days ago [-]
Turbo (the updated Turbolinks) uses fragments quite heavily... Turbo calls them frames. Turbolinks was more of a full page only approach though.
anyonecancode 723 days ago [-]
The tricky part of an SPA is that as a developer, you're taking on a lot of the burden of managing location state that in an MPA is handled by the browser. And location state often is a significant component of application state.
Certainly it's possible to take on that burden and execute it well, but I think a lot of teams and businesses don't fully account for the fact that they are doing so and properly deciding if that extra burden is really necessary. The baseline for nailing performance and correctness is higher with an SPA.
yellowapple 723 days ago [-]
Using an HTTP header to decide between "just return a snippet for this specific list element" v. "return the whole page with the updated content for this list element" is an interesting choice that I hadn't really considered before; normally I would've opted for two entirely separate routes (one for the full page, one for the specific hypermedia snippet), which HTMX also seems to support. I guess it ain't fundamentally different from using e.g. Accept-* headers for content negotiation.
quii 723 days ago [-]
I think both are valid; as I mentioned in the article, for this particular case the pseudo content-negotiation felt right.
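For readers who haven't seen the mechanism: htmx adds an HX-Request header to the requests it makes, so the server can branch on it. Here is a minimal sketch of that negotiation using Express (which comes up elsewhere in the thread); the route, markup, and data are invented, and this is not the article's actual code:

    // npm install express
    const express = require('express');
    const app = express();

    app.get('/todos/:id', (req, res) => {
      const todo = { id: req.params.id, title: 'example' }; // stand-in for a DB lookup

      if (req.get('HX-Request')) {
        // Request came from htmx: return just the fragment to be swapped in.
        res.send(`<li id="todo-${todo.id}">${todo.title}</li>`);
      } else {
        // Direct browser navigation (or JS disabled): return the full page.
        res.send(`<html><body><ul><li id="todo-${todo.id}">${todo.title}</li></ul></body></html>`);
      }
    });

    app.listen(3000);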
hu3 723 days ago [-]
meta: I love when htmx is highlighted in HN because the discussions branch into alternatives and different ways of doing web dev. It's very enriching to think outside the box!
mikeg8 723 days ago [-]
Agree. I always find some interesting and new FE approaches/methodologies in these random HTMX threads and it’s awesome.
AtlasBarfed 723 days ago [-]
ok look people.
I think what is needed is to recognize that the SPA architecture isn't actually just a view processor. IMO it is a very shittily designed:
View renderer <--> client process <--> server process
So it seems that SPA apps load an absolute mountain of javascript into the view (the tab/page) and then that starts (crudely IMO) running as a client-side daemon tracking messy state and interfacing with local storage, with javascript (opinion: yuck) ferreted away in a half dozen divs.
IMO, what has been needed, since you have local storage and local session state and all that, is... a client daemon that the web page talks to for data services, and then that client daemon, if it needs server data, calls out to the internet.
That way local state tracking, transformation, and maintenance can be isolated away from the code of the view. Large amounts of javascript can be dropped (or maybe all of it, with CSS wizardry). The "client daemon" can be coded in WebAssembly, so you aren't stuck with javascript (opinion: yuck).
You can even have more efficient many views/tabs interfacing with the single client daemon, and the client daemon can track and sync data between different tabs/views/windows.
Now, of course that is fucking ripe as hell for abuse, tracking. Not sure how to solve it.
But "separation of concerns" in current web frameworks is a pipe dream.
divan 723 days ago [-]
Why are people concentrating on creating more hacks on top of a fundamentally ill-suited stack for app development, rather than rethinking the whole stack from first principles?
Web development without HTML/CSS/JS is the future.
mal-2 723 days ago [-]
I like some concepts from HTMX but I don't understand how it tracks the relationship between these addresses and the identifiers in the markup. It seems to be just that the identifier strings match - the markup identifies the targets/swaps and it just refers to itself.
When I compare this to Phoenix LiveView I much prefer LiveView, because it both provides the markup templating engine and tracks the meaning of the relationship, with server-side tokens and methods.
nine_k 723 days ago [-]
I think it's the wrong article (pun semi-intended), HTMX is a future. React is a future. Svelte is a future. Even Angular is a future. They all have their specific strengths which define where they are more applicable.
There's no "the future" in this area, because demands are very different; a heavily interactive SPA like GMail or Jira has requirements unlike an info page that needs a few bits of interactivity, etc.
branko_d 723 days ago [-]
I’m afraid htmx might be repeating the same mistake that CORBA and DCOM made decades ago: pretending that latency doesn’t matter.
Yes, you could make a CORBA or DCOM object almost indistinguishable from a local object, except for the latency when it was actually remote. And since it looked like a normal object, it encouraged “chatty” interfaces, which exacerbated the latency cost.
Htmx seems pretty chatty to me, which I’m sure works OK over the LAN, but what about the “real” internet?
thomasreggi 723 days ago [-]
I agree with this article; however, I think that HTMX needs a strong server framework to support it. I've thought about this a lot, and a couple of months back I created this Deno / TypeScript framework: https://github.com/reggi/htmx-components. I would love for people to take a look at it and provide guidance and direction for a releasable version.
triyambakam 723 days ago [-]
That's really nice!
auct 723 days ago [-]
I tried htmx, but the syntax is horrible, so 3 years ago I created uajax (universal Ajax forms) and js-ajax-button.
Add class to any form and it is ajaxed.
I even released it on github
The js-ajax-button takes a similar approach. Add the class to a button that has a data-url and it will make a request to it.
It's a small function I use, but with uajax it is so powerful that I don't need React or htmx.
But it is hard to sell something that eliminates using javascript.
honkycat 723 days ago [-]
Lol, we're still having the SPA discussion 7 years later in the year of our lord 2023?
Talk about the positives of YOUR approach, don't tear down a different approach that half the industry is using. You're not going to say anything new or interesting to the person you are trying to convince this way. Experienced engineers already know the trade-offs between an SPA and a server rendered experience.
CRConrad 718 days ago [-]
> Talk about the positives of YOUR approach, don't tear down a different approach that half the industry is using.
AIUI TFA wasn't by the creators of HTMX, so it isn't the author's approach.
w10-1 723 days ago [-]
What's the business case, for them or for developers?
Ideas aside, the web app future belongs to those with the resources to sustain a response, or those who can restore the ability to capture/monetize developers and users in a closed system.
The scope of web apps is broad enough that many technologies arguably have their place. The open javascript ecosystem reduced the cost of creating candidates, but has no real mechanism to declare winners for the purpose of consolidating users, i.e., access to resources.
Careers and companies are built on navigating this complexity, but no one really has the incentive to reduce it unless they can capture that value.
I really appreciate Cloudflare because they are open about both their technology and their business model. They thus offer a reasonable guarantee that they can sustain their service and their technology, without basing that guarantee on the fact that they are a biggie like AWS, Microsoft, or Google (i.e., eating their own dog food, so we can join them at the trough).
The biggest cost in IT is not development or operating fees but reliance and opportunity.
jimmaswell 723 days ago [-]
I've made my personal website something of a hybrid SPA. With JS enabled it only loads and replaces the relevant portions of the page, but a page renders fully from PHP when going to it directly.
The JS would be a bit more elegant if script tags didn't need special handling to execute on insertion.
The experience is very seamless this way - I'm very pleased with it. It's live at https://jimm.horse - the dynamic behavior can be found clicking on the cooking icon or N64 logo.
On reading the article, I'll definitely make use of this if it becomes well-supported. It does exactly what I wanted here.
themaximalist 723 days ago [-]
I just started using HTMX in new projects and really like it.
The LiveView/Hotwire/Livewire way of building applications makes a really great tradeoff: the ease of building websites with the speed and power of web-app UX.
I wanted something simple to use with Express and it's been very productive.
There's a few things to get used to, but overall like it and plan to keep using it in my projects.
PaulHoule 723 days ago [-]
I worked for a startup that built a React + Scala system for building training sets for machine learning models. At the time I was involved this had a strong research component, particularly we frequently had to roll out new tasks, and in fact we were working actively with new customers to adapt to their needs all the time.
The build for the system took about 20 minutes, and part of the complexity was that every new task (form where somebody had to make a judgement) had to be built twice since both a front end and back end component had to be built so React was part of the problem and not part of the solution. Even in a production environment this split would have been a problem because a busy system with many users might still need a new task added from time to time (think AMZN's MTurk) and forcing people to reload the front end to work on a new task defies the whole reason for using React.
It all was a formula for getting a 20 person team to be spinning its wheels, struggling to meet customer requirements and keeping our recruiters busy replacing developers that were getting burnt out.
I've built several generations of my own train-and-filter system since then, and the latest one is HTMX-powered. Each task is written once, on the back end. My "build" process is clicking the green button in the IDE, and the server boots in a second or two. I can add a new task and be collecting data in 5-10 minutes in some cases, contrasted with the "several people struggling for 5 days" that was common with the old system. There certainly are UIs that would be hard to implement with HTMX, but for me HTMX makes it possible to replace the buttons a user can choose from when they click a button (implementing decision trees), make a button get "clicked" when a user presses a keyboard key, and many other UI refinements (see the sketch after this comment).
I can take advantage of all the widgets available in HTML 5 and also add data visualizations based on d3.js. As for speed, I'd say contemporary web frameworks are very much "blub"
On my tablet via tailscale with my server on the wrong end of an ADSL connection I just made a judgement and timed the page reload in less than a second with my stopwatch. On the LAN the responsiveness is basically immediate, like using a desktop application (if the desktop application wasn't always going out to lunch and showing a spinner all the time.)
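A rough sketch of the keyboard-triggered, self-replacing buttons described in this comment (the sketch is mine, not the commenter's code; the /judgement endpoint, ids, and key bindings are invented):

    <div id="choices">
      <!-- Clicking, or pressing "1" anywhere on the page, posts this judgement.
           The server responds with the *next* set of buttons, which replaces the
           whole #choices group in place (one step of a decision tree). -->
      <button hx-post="/judgement?choice=relevant"
              hx-target="#choices" hx-swap="outerHTML"
              hx-trigger="click, keyup[key=='1'] from:body">
        1: Relevant
      </button>
      <button hx-post="/judgement?choice=spam"
              hx-target="#choices" hx-swap="outerHTML"
              hx-trigger="click, keyup[key=='2'] from:body">
        2: Spam
      </button>
    </div>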
themodelplumber 723 days ago [-]
Thanks for the reminder, I've been meaning to try it out. Just to get started, I asked ChatGPT to write an htmx app to show a 10-day weather forecast.
It described the general steps and seemed to be able to describe how htmx works pretty well, including hx-get and hx-target, etc., but then said "As an AI language model, I am not able to write full applications with code".
I replied "do the same thing in bash" (which I knew would be different in significant ways, but just to check) and it provided the code.
I wonder, is this a function of recency of htmx or something else? Do other htmx developers encounter this? I imagine it's at least a little bit of a pain for these boilerplate cases, if it's consistent vs. access to the same GPT tooling for other languages.
listenallyall 723 days ago [-]
HTMX is quite easy to code. Your prompt sounds rather generic, I mean, you can just serve 10 days of weather forecasts without any interaction whatsoever.
It isn't clear what you were asking ChatGPT to provide, therefore not surprised it didn't come up with the exact answer you expected. I'd suggest learning HTMX by reading the docs, the majority is just a single page.
themodelplumber 723 days ago [-]
ChatGPT made it very clear it understood exactly what I wanted, short of writing the code. The steps it offered were long, and far from generic. So. Why no code? It offers code for other langs all the time, even unprompted.
It was like...if you've ever been mansplained before, when you know how to code something, but are asking a specific question that interests you, related to the process of having a service do that part for you--and someone comes along like, "well, it's easy to code that yourself!"
Not sure if you've experienced that before but it's very similar.
listenallyall 723 days ago [-]
I don't really use ChatGPT but regardless, it seems strange to evaluate the merits of a JS library based upon ChatGPT's prowess with it. Your time would be far better spent, I think, by reviewing the docs yourself and building a basic site.
So are you saying that ChatGPT will probably offer to write an htmx app in bash?
> I'm sorry, but it's not possible to write a weather forecast app using bash and htmx as htmx is a client-side technology and bash is a command-line shell. htmx is typically used in conjunction with HTML and JavaScript to create dynamic web applications.
Cuz that didn't work. It straight up forged ahead and wrote a bash script to show the weather though.
I really wonder today if ChatGPT is going to cause a bash renaissance
(If you are telling me I can do this myself plz reread posts thx)
nirav72 723 days ago [-]
Was it with gpt 3.5 or 4?
themodelplumber 723 days ago [-]
3.5. I actually just tried the same prompt in playground > text-davinci-003 and it wrote the front end in htmx without any endpoint code, then when prompted it provided endpoint code.
Still looks to be missing something aside from just the api key...
...maybe 4 is way better. I thought I had a Plus account but it looks like API only.
Edit: Just tried 4 and it is better in the sense that it writes a more complete app for you, but it strangely separates the sections of the front end into discrete textareas w/ code, so if you are new to front end it'll be confusing for sure. "Where do I put all this" "oh actually it all gets run together; specifically these two sections of code are concatenated and placed in the body area of the first code section."
It also writes the endpoint code in Python and Flask by default, but with one more prompt it seems to have fixed all that.
I really wonder why 3.5, the common public interface, did what it did, when "actually writes code for more languages by default" isn't exactly on the Plus features list.
tkiolp4 723 days ago [-]
Frontend developers don’t want to write HTML nor augmented HTML. They want to write code, and these days that means JS. Frontend developers want to make good money (like those backend developers or even infrastructure developers who are working with data, servers, and cool programming languages), hence they need to work with complex libraries/frameworks (if you just write HTM?, you don’t get to earn much money because anyone can write HTM?).
Hell, the term “frontend developer” exists only because they are writing JS! Tell them it’s better to write HTM?, and you are removing the “developer” from their titles!
Same reason why backend developers use K8s. There’s little money on wiring together bash scripts.
Now, if you’re working on your side project alone, then sure HTMX is nice.
robertoandred 723 days ago [-]
Incorrect. You can't create a website without HTML. The term "frontend developer" exists because frontend is a complex mix of HTML, CSS, JS, browser functionality, screen sizes, accessibility requirements, privacy requirements, server interactions.
Backend devs just lampoon it because they assume it must be simple.
silver-arrow 723 days ago [-]
htmx has provided the greatest satisfaction and productivity of my 30-year programming career. After a year of constant development experience with it, I am confident that this is the proper method of building web applications. It truly is how HTML should have evolved.
denton-scratch 723 days ago [-]
How's it not a SPA, if you're updating the DOM in JS without a full page reload?
Sorry, I read a load of stuff about React, before I came to any explanation of HTMX. Turns out, it's loading fragments of HTML into the DOM (without reload), instead of loading fragments of JSON, converting them to HTML fragments client-side, and injecting the resulting HTML into the DOM (without reload).
So I stopped reading there; perhaps the author explained why HTMX solves this at the end (consistent with the general upside-down-ness), but the "is the future" title was also offputting, so excuse me if I should have read the whole article before commenting.
I never bought into the SPA thing. SPAs destroy the relationship between URLs and the World Wide Web.
robertoandred 723 days ago [-]
SPAs work with complex, relevant, and unique URLs perfectly fine.
kubota 723 days ago [-]
I don't know. The tabs example on the htmx page is perceptibly slow to me. Making a rest call every time I switch a tab, each time sending 90% of the same html skeleton data over the wire feels like a sin to me. Returning html from my api also feels like a sin.
CRConrad 718 days ago [-]
> Returning html from my api also feels like a sin.
Sorry, but that's just... Silly. HTML is what the Web is all about.
kubota 716 days ago [-]
Most of my APIs are consumed by multiple clients, many without user interfaces (other backend systems), some with mobile user interfaces, some with web interfaces. I should make my backend clients parse HTML? Or my mobile clients parse HTML? I don't see any benefit to coupling data to a visual markup language, unless you are only serving web UI clients and have no plans for that to ever change. Or should I stand up additional services (and pay for their operation and maintenance) just to server-side render my data wrapped in HTML? That seems sillier to me, but to each their own.
mikece 723 days ago [-]
"You can use whatever programming language you like to deliver HTML, just like we used to."
Is this suggesting writing any language we want in the browser? I have wondered for a couple of decades why Python or some other open source scripting language wasn't added to browsers. I know Microsoft supported VBScript as an alternative to JavaScript in Internet Explorer, and had it not been a security nightmare (remember the web page that would format your hard drive, anyone?) and a proprietary language, it might have been a rival to JavaScript in the browser. In those days it wouldn't have taken much to relegate JavaScript to non-use. Today we just get around it by compiling to WASM.
loloquwowndueo 723 days ago [-]
It is not suggesting that. On the server, you can use your language of choice to generate complete or partial HTML responses to be sent and then put in the right places on the page by JavaScript (htmx) running on the browser.
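(For illustration, a minimal sketch of that flow; the endpoint, IDs, and markup here are made up:)

  <!-- Client: clicking the button issues a GET; htmx swaps the response into #contacts -->
  <button hx-get="/contacts/page/2" hx-target="#contacts" hx-swap="innerHTML">
    Load more
  </button>
  <div id="contacts"><!-- existing rows --></div>

  <!-- Server: responds with a plain HTML fragment, rendered in whatever language you like -->
  <ul>
    <li>Alice</li>
    <li>Bob</li>
  </ul>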
traverseda 723 days ago [-]
It is not suggesting running arbitrary languages in the browser. It's basically Ajax.
avisser 723 days ago [-]
Commenting on "It's basically Ajax": Yes, and it's also a return to the basics of a browser. Making HTTP calls (Hypertext Transfer Protocol) to get HTML (the aforementioned Hypertext) and rendering it is the core thing that browsers do. It's both necessary and sufficient to being a browser.
mikece 723 days ago [-]
I hope the execution is better than Ajax! The way that was implemented in WebForms was horrible and a complete PITA to debug.
biorach 723 days ago [-]
> Is this suggesting writing any language we want in the browser?
Nope, server
nologic01 723 days ago [-]
htmx got a lot of good press (deservedly) but I think somehow it needs to get to the next step beyond the basic hypermedia evangelism. I don't know exactly what that step needs to be, because I don't know what a fully "htmx-ed" web would look like. It is promising, but that promise must be made more concrete.
A conceptual roadmap of where this journey could take us and, ideally, some production-quality examples of solving important problems in a productive and fun way would increase the fan base and mindshare. Even better if it shows how to solve problems we didn't know we had :-). I mean, the last decade has been pretty boring in terms of opening new dimensions.
Just my two cents.
minusf 723 days ago [-]
htmx never claimed it's the solution for everything. there is no next conceptual step. it's sending snippets of html to the client.
i read most htmx threads on hn and it's clear that people are looking for alternatives from react et al. they have a quick look, maybe implement an example and they are angry that it can't do everything they want cause the js ecosystem fatigue is real.
the centerpiece of the htmx site is an actual in-production app that was converted from react and it's better because of that. again, it will not be everybody's case.
htmx will let a lot of developers go all the way without bringing node into their ruby/python/php world for certain workloads. for them it is the future. the rest should stop reading.
lakomen 722 days ago [-]
the title is literally "htmx is the future"...
If htmx wants to be the future it needs to be wrapped in an SSR framework. One that performs well.
jksmith 723 days ago [-]
You know, nobody likes this argument, but desktop is still just better. Yeah, yeah, the updates, security issues, I get it, but the tools are simply better: they render faster, have a better functionality/complexity ratio, less gnashing of teeth.
karaterobot 723 days ago [-]
There are lots of great desktop apps, sure. And, for a specific task (like text editing, or watching videos, or playing music) a desktop app is usually better than a web app. However, I expect the ecosystem of desktop apps required to replace every website I use would be worse than the web.
What I mean is, different websites work differently, and do different things. For example, you might imagine a single desktop app that replaces multiple news aggregators, like Reddit or HN. It would ignore the style of both sites, and replace it with a single, uniform way of displaying posts and threads. But, what features from each does it implement? Does it have both upvoting and downvoting, like Reddit, or just upvoting, like HN? Does it support deeply nested threads, like Reddit, or only a couple levels, like HN? You'd run into limitations like this when trying to have a single app do everything, so you'd end up having to have n applications, one for each website you were replacing...
I'm also not with you on the "gnashing of teeth" point. I've never struggled to install, uninstall, or upgrade a website I was browsing.
lolinder 723 days ago [-]
Desktop on which OS? Using what GUI framework? The web is a single platform, but the desktop developer experience seems to vary wildly depending on the OS.
I'm genuinely curious what OS and tooling you use that you find so much better, because every time I've tried desktop development I eventually give up and go back to the web. It might be because Linux support is always a requirement for me.
criddell 723 days ago [-]
> which OS?
The one you want to run your software on.
> what GUI framework?
Assuming you aren't making a game, each OS has a different answer. Making an iPadOS or macOS app? I'd probably go with SwiftUI. Linux? I'm partial to GTK. Windows? Probably WinUI (although I also still like MFC extended with raw Win32 API calls).
There are cross platform tools, but none of them are very good. If you want to make something really great, target the most important OS and make the absolute best thing you can with the native features of that OS. Follow the conventions of the platform and you get a lot of stuff (like accessibility features) for little effort.
> The web is a single platform
I agree. The web as presented by the browser is its own distinct platform. A well written web app will almost always use more battery, memory, CPU, and network bandwidth than a similar well written native app. Sometimes, despite all the problems with web technologies, the web is where something belongs.
I'm still a big believer in the personal computer. The web takes power from individuals and is a step back to the days of dumb terminals and centralized computing.
jksmith 723 days ago [-]
Both Delphi and FreePascal (less so) support Windows, Linux, Mac, and Android with the same codebase (or calls, rather). The Delphi model really set the standard for a robust desktop framework, since before most people started writing code.
bottlepalm 723 days ago [-]
I tried HTMX. The static typing just isn't there between the components you send down and the rest of the page. This makes maintenance of a large HTMX app pretty costly. For simple stuff it's probably fine, but for large, complicated web apps, I'm not seeing it. Mixing HTML fragments from the server into the client is pretty messy. Keeping all the rendering in one place à la React seems much simpler to maintain. With Next, for example, you can pre-render on the server and re-render on the client; the same code is doing the rendering in both places, so it's a bit easier to understand.
723 days ago [-]
eimrine 723 days ago [-]
> Some SPA implementations of SPA throw away progressive enhancement (a notable and noble exception is Remix). Therefore, you must have JavaScript turned on for most SPAs.
Is this really the future?
wibblewobble124 723 days ago [-]
we’re using htmx at work, migrating away from react. the technique we’re using is just rendering the whole page, e.g. we have a page where one side of the screen is a big form and the other side is a view on the same data but with a different UI, updating one updates the other. we’re using the morphdom swapping mode so only the things that changed are updated in-place. as a colleague commented after implementing this page, it was pretty much like react as far as “pure function of state.”
our policy is that for widgets that are like browser components, e.g. search-as-you-type with keyboard shortcuts, we just use the off-the-shelf react component for that purpose and use it from htmx like it's a browser input element. for all other business logic (almost all of which has no low-latency requirements and almost always involves a server request), we use htmx in our server-side language of choice.
our designer who knows a bit of react is not happy, but the 12 engineers on our team who are experts in $backend_lang and who are tired of debugging react race conditions, cache errors, TypeScript front end exceptions, js library churn, serialisation bugs, etc. are very happy indeed.
it doesn’t fit every app, but it fits our app like a glove and many others that I’ve considered writing that I didn’t feel like bothering to do so before discovering htmx.
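(A minimal sketch of the whole-page render with morphdom swapping described above, assuming htmx, morphdom, and htmx's morphdom-swap extension are all loaded; the endpoint and IDs are illustrative:)

  <body hx-ext="morphdom-swap">
    <!-- the form posts, the server re-renders the whole page, and morphdom
         patches only the DOM nodes that actually changed -->
    <form hx-post="/invoice/42" hx-target="body" hx-swap="morphdom">
      ...
    </form>
    <!-- the read-only view of the same data lives elsewhere on the page and
         is updated in place by the same swap -->
    <div id="preview">...</div>
  </body>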
robertoandred 723 days ago [-]
Sounds like your backend devs are just bad at frontend.
wibblewobble124 722 days ago [-]
this is a shallow and dismissive comment lacking any basic charity or nuance: dijkstra was just bad at “goto.”
this doesn’t worry me, though. those in the react crowd that insist on this arbitrary and newfangled “frontend/backend” stratification and are dogmatic about it are by definition going to stick with what they know and won’t come and bother us who choose tools based on real experience and their practical merits. better off they make themselves easy to spot from a distance.
robertoandred 722 days ago [-]
"Frontend/backend" is neither new nor pointless. The people who pretend they are are usually backenders who are under the false impression that frontend is simple.
wibblewobble124 718 days ago [-]
if you’ve been writing HTML since 1999 like me then it’s newfangled. in 2008 when i got my first paid job writing ASP and PHP at a web dev shop, nobody was “frontend”. there was the content person, the designer (photoshop wiz), the builder (convert design into HTML) and the dev. we still had to support IE6 or our manager would yell at us. i wrote my first SPA at work in 2011, and that was radical—everyone thought it was a bad idea. some people thought backbone.js might be a cool approach. took ages for react to appear.
took until maybe 2015 before i remember seeing any job ads of “frontend” positions, it takes a while before a job market develops around a technology.
i didn’t say it was pointless. i said the stratification of “dev” into “frontend dev” and “backend dev” is newfangled and arbitrary. you could also split devs into other classes (DB only, CSS only, etc.).
it is funny when people who don’t know you accuse you of incompetence because you don’t like their tools or methods. dogmatic. i prefer some other tools and methods and i am delivering value to customers. pragmatic.
ihateolives 723 days ago [-]
And? Your average React dev is bad at proper backend too.
robertoandred 722 days ago [-]
So then doesn't it make sense that frontend devs focus on frontend and backend devs focus on backend?
CRConrad 718 days ago [-]
It makes sense that application developers focus on getting an application to work. In the case of these newfangled “Web applications”, that would be focussing on getting HTML to render in the user's browser. That focus doesn't have to have anything to do with an arbitrary split into “frontend and backend”.
aabbcc1241 721 days ago [-]
If you like the concept of putting some work back on the server instead of throwing it to the client, you may also check out the liveview approach. It drives the entire website from the server with realtime updates (similar interactivity to an SPA).
Am I the only one who actually thinks that a SPA is simpler than server rendered UI? I always go for a SPA when I can sacrifice indexability and never seem to regret it.
klysm 723 days ago [-]
I'll stick to making SPAs for the apps I work on. This is just the pendulum of tradeoffs swinging back till people experience the pains of MPAs again
munbun 723 days ago [-]
The HATEOAS approach is viable for some applications but I wouldn't necessarily call it the future.
Using custom html attributes as the base for complex client-side interactions is arguably a step backwards when considering the story around maintenance.
Right now, if you are building a robust component library, it's much easier to maintain using a template language with strong TypeScript/IDE support, like JSX or similar.
nawgz 723 days ago [-]
I'm sorry, but these arguments are so tired.
> SPAs have allowed engineers to create some great web applications, but they come with a cost:
> Hugely increased complexity both in terms of architecture and developer experience. You have to spend considerable time learning about frameworks.
Yes, better quality software usually packages a bit more complexity.
SPAs are popular, just like native apps, because people don't like jarring reloads. Webviews in native apps are panned for a reason; turning your whole app into a series of webviews would be stupid, right?
> Tooling is an ever-shifting landscape in terms of building and packaging code.
I've used these 4 libraries to build apps since 2015:
* React
* MobX
* D3
* Webpack
The only one I have had pain with is react-router-dom, which has had 2 or 3 "fuck our last approach" refactors in this time. And I added TypeScript in 2018.
PEBCAK
> Managing state on both the client and server
It's a lie that a thin client isn't managing state; it's just doing a static, dumb job of it.
Imagine some cool feature like... collaborative editing.
How would you pull that off in HTMX?
> Frameworks, on top of libraries, on top of other libraries, on top of polyfills. React even recommend using a framework on top of their tech:
Yes, React is famously not a batteries-included library, while Angular is. But, as addressed, you need about 3 other libraries.
Besides, did you know: HTMX is also a framework. Did you know: HTMX also has a learning curve. Did you know: HTMX forces you to be able to manipulate and assemble HTML strings in a language that might not have any typing or tooling for that?
Anyways, I've said enough. I should've just said what I really think: someone who can't even get their nested HTML lists to actually indent the nesting shouldn't give advice on building UIs.
rmbyrro 723 days ago [-]
> _Yes, better quality software usually packages a bit more complexity._
That view (+quality = +complexity) is actually flipped, isn't it? [1]
Sure, with respect to a fixed problem I agree. With respect to static rendering vs building an application state machine, though, the problem isn’t fixed, and inherently has more complexity
traverseda 723 days ago [-]
>Imagine some cool feature like... collaborative editing.
What are you gaining by writing something like that in JavaScript/TypeScript rather than in something like Rust and WebAssembly?
To me javascript is in a sort of uncanny valley where you probably want to be either making a real app and compiling it to wasm or using something like htmx.
nawgz 723 days ago [-]
> rather than like rust and webassembly
WASM can't even interact with the DOM, how exactly are these languages positioned better to give me access to a ton of UI primitives?
The backend can be a fast language, certainly, but the browser is a premier UI platform... and it's powered by JS...
To me, Rust developers thinking they know how to build UIs is the real uncanny valley. What they produce looks like it should work, but the more you look at it, the more you realize they don't know what UX stands for
vb-8448 723 days ago [-]
> Imagine some cool feature like... collaborative editing.
In my opinion, the whole point of the article and for everyone who is backing htmx is that SPA frameworks are too complex (and a liability) for solo/small teams or projects that don't need `collaborative editing`(or other advanced stuff).
nawgz 723 days ago [-]
> In my opinion, the whole point of the article
Well, in my opinion, the article claims "HTMX is the future" as its title, and so it's impossible to interpret the arguments in any other way than "the user experience benefits of the SPA might matter to the users, but I think it's stupid because I dislike JS"
CRConrad 718 days ago [-]
> Well, in my opinion, the article claims "HTMX is the future" as its title,
Titles are by necessity summary in nature. If you want the whole complexity and nuance of the article in the title, the whole article would have to be the title. It's a bit too long for that. (And then everyone would complain that there was no additional meat to the body text.)
> and so it's impossible to interpret the arguments in any other way than "the user experience benefits of the SPA might matter to the users, but I think it's stupid because I dislike JS"
No, sorry, that's BS: It's eminently possible to interpret the arguments differently. All you have to take into account is that a less abbreviated (and frankly still a bit too long) version of the title could have been "HTMX is the future for solo/small teams or projects that don't need `collaborative editing`(or other advanced stuff)".
HTH!
nawgz 717 days ago [-]
> Titles are by necessity summary in nature
Ok, that's why I made arguments against the body of the text. Did you read my initial post? I'm sure you must've, which makes it so strange you'd come attack this argument while pretending that the other ones don't exist
> It's eminently possible to interpret the arguments differently.
It's really not. The whole thing is a series of strawmen aiming to justify degrading user experience to avoid developer "complexity", but then that developer complexity is grossly overstated, as if someone who has never dipped their toe in it decided that each complaint they had heard about it was spoken by God himself.
Indeed, my whole first comment thoroughly discards each of these strawmen, before I grew frustrated by the fact that the author and the thread were both reciting this 2015-era anti-JS dogma.
I suggest you go try to make counterarguments to those points instead of my exasperated reply to someone adding nothing to the convo.
HTH!
bfung 723 days ago [-]
The article reminds me of the early 2000s when we were all rolling our own XMLHttpRequest and replacing dom elements with html rendered from a java servlet.
No. A hodgepodge of everyone's favourite way is the future. Always has been!
That said, for my next hobby project I will probably go even simpler than HTMX and use classic server-side rendering, then add some vanilla JS where needed.
I keep going around in circles, but I have tried NextJS, and while it's pretty cool, there is a class of problems you need to deal with that simply doesn't exist in simpler apps.
pwpw 723 days ago [-]
What is the simplest way to host a website closer to barebones HTML, CSS, and a bit of JS with reusable components like nav bars? My experiences handling those manually leads to too much overhead as I add more pages. SvelteKit makes things fairly easy to organize, but I dislike how the user isn’t served simple HTML, CSS, and JS files. Ideally, I don’t want to use any framework.
optymizer 723 days ago [-]
It's called PHP and you can host it anywhere, or if there's nothing dynamic going on, run it on the files on your computer and upload the generated HTML/CSS/JS files to an S3 bucket.
src/index.php
---------
<!DOCTYPE html>
<html>
<head><title>Hey look ma, we're back to PHP</title></head>
<body>
  <?php include "navbar.php" ?>
  <p>Don't forget about PHP - a hypertext preprocessor!</p>
  <?php include "footer.php" ?>
</body>
</html>

src/navbar.php
----------
<div>
  <ul>
    <li>Home</li>
    <li>About</li>
  </ul>
</div>

Makefile to generate a static site:
-----------------------------------
dist/index.html: src/index.php
dist/about.html: src/about.php

# (recipe lines below must be indented with a tab)
dist/%.html: src/%.php
	@mkdir -p ${dir $@}
	php $< > $@
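Assuming php is on your PATH, running "make dist/index.html dist/about.html" then spits out plain HTML files in dist/ that you can upload anywhere.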
pier25 723 days ago [-]
Astro
quest88 723 days ago [-]
This is a weak argument. The article is demoing a TODO app talking to localhost. Almost any library, framework, or language is the future if this is how we're judging the future.
> Working with HTMX has allowed me to leverage things I learned 15-20 years ago that still work, like my website.
Yes, a website is different than a webapp and has different requirements.
BeefySwain 723 days ago [-]
> Yes, a website is different than a webapp and has different requirements.
The piece missing here is that most people do not stop to think which they are building before they reach for a JS-heavy SPA framework and start spinning up microservices in whatever AWS calls their Kube implementation.
manx 723 days ago [-]
It seems that many people are wondering why UI interactions, which don't need new data, should take a network roundtrip. You can avoid those round-trips by using a library like https://alpinejs.dev . It pairs well with htmx.
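(A minimal sketch of that kind of purely client-side interaction, assuming Alpine is loaded; no server round-trip is involved:)

  <!-- a menu that toggles entirely in the browser; htmx only enters the
       picture once you actually need to fetch or persist something -->
  <div x-data="{ open: false }">
    <button @click="open = !open">Menu</button>
    <ul x-show="open">
      <li>Profile</li>
      <li>Settings</li>
    </ul>
  </div>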
rglover 723 days ago [-]
Or: just use plain HTML [1] and keep separation of concerns intact.
In a quote:
> "So much complexity in software comes from trying to make one thing do two things." - Ryan Singer (Basecamp/37Signals)
HTMX's solution to keeping client and server in sync: remove the client.
Okay, now you have half the code base, but need a round trip to the server for every interaction.
You could also remove the server and let people download your blog, where they can only post locally. No server-side input validation needed!
manx 723 days ago [-]
You can implement interactivity which doesn't need data from the server entirely client side. Libraries like https://alpinejs.dev help here and pair well with htmx.
exabrial 723 days ago [-]
Schemaless markup has led us down a path where something is broken nearly all the time.
I'd really rather see strongly typed markup that can easily be checked for correctness and whose behavior is well defined. Something modular, too, with profiles, and designed for extensibility.
At one point we experimented with the server returning the stuff to replace the HTML with. We support that in our framework natively (through a mechanism called “slots”).
That said, I have come to believe that people this decade will (and should) invert the idea of progressive enhancement to be client-first.
Imagine your site being composed of static files (eg served on Hypercore or IPFS via beaker browser). As more people use it, the swarm grows. No worries about being DDOSed. No having to trust a server to not send you the wrong interface one day. The software is yours. It just got delivered via a hypercore swarm or whatever, and your client checked the Merkle trees to prove it hasn’t been tampered with.
Then you just interact with a lot of headless web services. Rather than bearer tokens and cookies, you can use WebAuthn and Web Crypto to sign requests with private keys. You can do a lot more. And you store your data WHERE YOU WANT.
Sure, htmx can be used there. But there, too, it’s better to fetch JSON and interpret it on the client.
Returning a markup language like HTML actually mixes data with presentation. Consider what happens when you want to return a lot of rows. With JSON, you return just the structured content. With HTML, there is a lot of boilerplate <li class="foo" onclick="..."> mixed in there, which is a lot of extra weight on the wire.
If you are interested in learning more, I gave a whole talk about it:
I think "the future" definitely isn't some non-standard custom attributes.
723 days ago [-]
hashworks 723 days ago [-]
> It is very easy to support users who do not wish to, or cannot use JavaScript
I don't get this. To use htmx one has to load 14 KB of gzipped JS. How does this make it easy to support clients that don't support JS?
akpa1 723 days ago [-]
Because HTMX is built around graceful fallbacks to standard features.
For example, you can apply HTMX to a standard anchor tag and be able to tell if a request has come from HTMX on the server to tailor the response. Then, if the client supports HTMX, it'll prevent the default action and swap the content out, otherwise it'd do exactly what an anchor normally does.
The same goes for form elements.
If you're just a little bit careful about how you use HTMX, it gracefully falls back to standard behaviour very easily.
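(A minimal sketch of that pattern; the URL and target are made up, and the server-side branch is described in a comment since htmx marks its requests with an HX-Request header:)

  <!-- With htmx active, clicking issues an AJAX GET and swaps the fragment into
       #results; without JavaScript it is an ordinary link and does a full page load. -->
  <a href="/search?page=2"
     hx-get="/search?page=2"
     hx-target="#results"
     hx-push-url="true">Next page</a>

  <!-- On the server: if the HX-Request header is present, render only the
       results fragment; otherwise render the full page around it. -->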
lofaszvanitt 723 days ago [-]
.. in an alternate universe.
doodlesdev 723 days ago [-]
No it's not. Honestly, the fact that this website displays like shit without JavaScript enabled is ironic considering it uses HTMX.
Please just use the damn full-stack JS frameworks; they make life simpler. Just wait for WebAssembly to allow us to have full-stack Rust/Go/whatever frameworks, and then you can abandon JavaScript. Otherwise you get the mess of websites like this one, where the developer has not written JavaScript, but the website still needs it for me to be able to read a _damn blog post_.
Otherwise, stick with RoR, Django, Laravel, or whatever ticks your fancies, but HTMX just ain't for everyone and everything, it's supposed to be used for hypermedia, not for web apps or anything else, just that: hypermedia.
And no, JavaScript libraries aren't all "complicated" and "full of churn", React is. Stop using React, or otherwise accept the nature of its development, and stop complaining. There are hundreds of different JavaScript libraries and yet every single time I see people bashing on full stack JavaScript they just keep repeating "React" like it's the only library in the world and the only way developers have written code for the last decade.
Also, tangentially related: can we as an industry stop acting like kids and stop following these "trends"? The author talks about an "SPA-craze", but what I've been seeing more and more now is the contrary movement. It's based on the same idea of a hype cycle, with developers adopting technology because it's cool or whatever and not really considering what their actual needs are and which tools will provide them with that.
Rant over.
tsuujin 723 days ago [-]
> Please just use the damn fullstack JS frameworks, they make life simpler
Strongest possible disagree. I’ve been doing web dev for a long time, and the last 10 years has seen a massive, ridiculous increase in complexity across the board.
I personally took my company back to good old server-rendered apps with a turbolinks overlay because I was sick of dealing with the full-stack frameworks, and we saw a huge increase in productivity and developer happiness.
doodlesdev 723 days ago [-]
Hotwire (I imagine you're not using actual Turbolinks, which has been deprecated) is appropriate with RoR or whatever stack you want. I agree it's a great developer experience (despite Ruby and Rails being painfully slow, haha).
I wonder which full-stack JS framework you used that you thought made life harder? One of the things that gets me mad is the idea of putting it all in one single box, as React is indeed very (needlessly) complex and so can be other libraries, but that doesn't mean the paradigm of JavaScript front-to-back is fundamentally flawed.
edit: Something else I should've added to my comment is that the HTMX approach is terrible if you ever need more than just the web-client (i.e. a mobile app, native or otherwise) since you will now have to implement an API anyway, which you could've done in the first place by taking the usual approach to development.
renerick 723 days ago [-]
> if you ever need more than just the web-client (i.e. a mobile app, native or otherwise) since you will now have to implement an API anyway, which you could've done in the first place
Unless your new client is a hybrid/webview/Electron application, it's a trap. APIs for the web, for native clients, and public APIs have different sets of constraints in terms of authentication and versioning; even the features are probably going to be different.
And it's not like building an API after you've built the web app would be complicated. Unless you put your business logic right into controllers/handlers, making an API is just making a new endpoint that calls into existing application services. It's not free, and it may not be trivial, but like the other comment said, if you can afford a new client, you can probably afford an API for it.
tsuujin 723 days ago [-]
> I wonder which full-stack JS framework you used that you thought made life harder?
I’ve used Angular, Dart, Backbone, Ember, Elm, React, Vue, Svelte, and a few others I can’t remember anymore. All in production systems, not demo projects. Also some of the “build it once” platforms like Meteor.
They're all cool until you have to actually maintain them. My favorite part is having to build my data models twice: once for the producer and once for the consumer. That's totally never caused any headaches or slowed anyone down at all.
doodlesdev 723 days ago [-]
> I’ve used Angular, Dart, Backbone, Ember, Elm, React, Vue, Svelte, and a few others I can’t remember anymore. All in production systems, not demo projects. Also some of the “build it once” platforms like Meteor.
That's quite a lot of libraries lol, I guess no one can say you just haven't found the right library
I must admit I've used Hotwire much less than I should've, but I still feel comfortable with full-stack JavaScript (or rather, TypeScript).
You said you migrated back to Turbo; what backend framework do you use? RoR with Hotwire is so nice, but I avoid it because of Ruby (not personally a fan). To be fair, the same is true for most of the full-stack frameworks I avoid (such as Django/Python).
tsuujin 723 days ago [-]
I've used turbolinks on a few backend-driven projects, but mostly with rails, and hotwire replaced that nicely enough. I really only used it because, from the user's perspective, it's hard to tell that you're not using a SPA.
I really like the idea of systems like Phoenix’s LiveView as well, but I ask a lot of questions before I implement something like that. The majority of projects I’ve ever worked on didn’t really need that much interactivity and I strongly prefer the “sprinkle on JS when you actually need it” approach to web apps.
jdthedisciple 723 days ago [-]
Well which JS framework(s)/librar[y|ies] are you then using/advocating for, if you agree that React is needlessly complex?
doodlesdev 723 days ago [-]
I personally enjoy Svelte.
lakomen 723 days ago [-]
[flagged]
dang 721 days ago [-]
Please don't break the site guidelines like this! We have to ban accounts that do that.
I'm sorry for saying something you disagree with. Next time I'll ask for permission to comment my opinion. Really, what's the point of your comment? At least state why you think I have no idea what I'm talking about, the way you wrote it adds literally nothing to the conversation.
718 days ago [-]
dang 721 days ago [-]
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
As long as native apps live alongside the web, the architecture of the web is going to trend towards that of native apps: managing display state on the client while managing business state on the server.
jcmontx 723 days ago [-]
I am very fond of MPAs. Glad to see them make a comeback.
The one single strong point of the front/back split is the famous Strangler Fig Pattern which takes away a lot of stress when making decisions.
tommica 723 days ago [-]
Htmx seems really nice, but as far as I can see, it is messy to implement some kinds of state, e.g. a button that can only be pressed if the other forms on the page have been saved.
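(One way to express that kind of cross-element state in htmx is an out-of-band swap: the response to saving a form also carries a replacement for the button elsewhere on the page. A minimal sketch, with made-up IDs and URLs:)

  <!-- initial page: the form, and a publish button rendered disabled -->
  <form hx-post="/settings" hx-target="#save-status">...</form>
  <button id="publish" disabled>Publish</button>

  <!-- fragment the server can append to its response once everything is saved;
       hx-swap-oob tells htmx to replace the existing #publish element in place -->
  <button id="publish" hx-swap-oob="true">Publish</button>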
tedunangst 723 days ago [-]
How does this differ from what we called rehydration a decade ago?
CRConrad 718 days ago [-]
If nothing else, it's less of a silly moniker: Why would I want to get my Web page wet in the first place?!? Remember, DRY!
chenster 723 days ago [-]
Please, let's make up our minds. Why can't we just make a decision and stick to it? It's like something new comes along every 12 months and claims it is 'better'.
tommica 723 days ago [-]
Well, there are as many opinions as there are people, so consensus is a bit hard to reach...
teaearlgraycold 723 days ago [-]
I'd rather see either:
* NextJS provide a holistic solution for backend code. Right now it's missing an ORM that works with serverless Postgres. Given the recent addition of serverless Postgres to Vercel, I expect this will happen in 6-12 months.
* RedwoodJS become more mature.
The issues with SPAs IMO come from having to cobble together your full stack app, which requires making a ton of hard decisions in predicting the future of each major library you use. And then there's limited coherence between your ORM, API, and client without extra work. A mature, well designed, and financially supported full stack JS solution that truly rivals Rails would be perfect.
keb_ 723 days ago [-]
Does anyone have an example Hacker News client written in HTMX?
sublinear 723 days ago [-]
> SPAs have allowed engineers to create some great web applications, but they come with a cost: ... Managing state on both the client and server
Having a separation of concerns between server and client is the whole point, and replacing JSON APIs with data trapped in HTML fragments is a massive step backwards.
> the new API is simply reflected in the new HTML returned by the server
Whereas with SPAs (my words)
> the new API is simply reflected in the new json returned by the server
I think I prefer the mental model of the API serving data and the client rendering it.
CRConrad 718 days ago [-]
HTML is the data format of Web pages. That's what the client, the browser, is built to render.
sublinear 723 days ago [-]
What if we need the same backend to be usable by more than one client? What if those clients aren't even a SPA, but a wrapper library, or a native app? What if we need internal scripts that manage the content in the backend using the same API? What if we need to redesign the client without touching the backend?
HTMX doesn't address any of that. YAGNI is often only true for the original developers, not everyone else who has to maintain it long term.
recursivedoubts 722 days ago [-]
In the article I recommend splitting your hypermedia and data APIs up into two separate concerns.
The data API can address all of the concerns you have.
The hypermedia API, being consumed by an htmx front end, can then take advantage of the strengths of hypermedia (e.g. the uniform interface giving you a lot of flexibility in aggressively refactoring your API.)
Please, read the article.
sublinear 722 days ago [-]
There is no data that should be encapsulated in hypermedia. YAGNI in this sense is the opposite - You Are Gonna Need It
recursivedoubts 721 days ago [-]
That's fine, you can build a reasonable general purpose JSON API very quickly, particularly if you aren't having it dragged around by your web applications nitty-gritty needs.
In the linked essay I am riffing on someone already recommending splitting your JSON APIs into a general-purpose API for clients and a specialized one for your web application, in order to separate these two concerns and remove the pressure that the latter puts on the former.
I recommend going further with that and adopting hypermedia for your application API, since no one else should be depending on it. I recommend this because I like the hypermedia paradigm, but it only makes sense as part of a complete hypermedia system. Trying to reuse a hypermedia API for other clients isn't a good idea and that's not what I'm recommending.
Does that make sense?
robertoandred 723 days ago [-]
This really comes down to backend devs thinking frontend must be simple, and when they realize it's not they blame the tools. So they come up with new tools and pretend they're better because they cater to backend devs and not those silly frontenders who just don't know anything.
hankchinaski 723 days ago [-]
Every time I see a custom new DSL I just bin the thing altogether. I don't want to learn a new DSL that will disappear in a year and will be impossible to find documentation and community support for. Just a big no for me. Apart from that, it seems like a good idea.
>Here we are getting a bit fancy and only allowing one row at a time to be edited, using hyperscript. https://hyperscript.org
CodeCompost 723 days ago [-]
I'm beginning to realise that AI assistance is now resulting in long and verbose articles like this one. The bullet points are especially off-putting.
rmorey 723 days ago [-]
then the problem begets its own solution - just use an assistant to summarize for you! \s
(i don't actually think this article is largely AI-generated)
nologic01 723 days ago [-]
Could you use htmx (or an extension?) to dynamically update SVG fragments? I.e., replicate the functionality of d3.js on the server side?
xkcd1963 723 days ago [-]
"The hypermedia approach of building websites led to the world-wide-web being an incredible success."
This is wrong
tbjgolden 722 days ago [-]
Worth pointing out that Astro is already a good solution for this problem; you can use it without touching client-side libs.
unixhero 723 days ago [-]
As long as I don't have to learn it.
WesSouza 722 days ago [-]
I'm currently writing "HTMX is not the future" and expect it to be on the front page as well. Thank you.
shams93 723 days ago [-]
What isn't mentioned is that there are intrinsic security issues with server-side templating, including htmx. Part of the reason React won is that it has the best track record for client security with complex client apps when it's used, configured, and deployed correctly. With server-side templating it's easy to fall victim to injection attacks.
synergy20 723 days ago [-]
htmx is ajax wrapped in html to me.
it does not work for resource-restricted (i.e. embedded) devices where you just can't do server-side rendering. CSR SPA is the future there, as the device side just needs to return some JSON data for the browser to render.
renerick 723 days ago [-]
Htmx actually can be used with these restrictions, there is an extension to do client side rendering from a JSON response [1]. And you can make htmx send JSON requests instead of form data [2].
The idea is easily extendable to any template engine, so you can keep your device's responses minimal while enjoying the simplicity of htmx. I will admit, though, that this approach gets funky much faster than returning HTML fragments, so you probably shouldn't build your app exclusively with client-side templates.
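(A rough sketch of what that might look like with the client-side-templates extension mentioned above; the attribute and template wiring follows my reading of that extension and assumes mustache.js is also loaded, so check it against the extension's docs:)

  <div hx-ext="client-side-templates">
    <!-- the device returns a small JSON payload; the referenced mustache
         template renders it to HTML in the browser before the swap -->
    <button hx-get="/sensors/latest"
            hx-target="#readout"
            mustache-template="sensor-template">Refresh</button>
    <div id="readout"></div>
  </div>

  <template id="sensor-template">
    <p>Temperature: {{temp}} °C</p>
  </template>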
Basecamp (or Hey, or the guys behind RoR), the original SaaS guys, really do understand software development from a practical point of view. They did something similar with Rails as well as native clients, and nowadays they are trying to get people to move away from the cloud. Bare metal is the future.
Animats 723 days ago [-]
It's sort of like client side PHP. Is that a good thing or a bad thing?
723 days ago [-]
pier25 723 days ago [-]
Good luck with latency though for anything slightly interactive.
kokizzu5 723 days ago [-]
meh, Svelte is the future, just plain Svelte, not SvelteKit
mike503 723 days ago [-]
Reminds me of Drupal's AHAH implementation.
st3fan 723 days ago [-]
Phoenix Liveview is the future :-)
intellix 722 days ago [-]
Check out Qwik from builder.io
elwell 723 days ago [-]
I don't like XML syntax.
CRConrad 718 days ago [-]
Yeah, frightfully verbose. (I suppose that's why JSON seems so much more popular nowadays; at the very least a large contributing cause.) But WTF does that have to do with anything here, which wasn't about XML at all?
elwell 707 days ago [-]
HTMX is HTML is XML
klabb3 723 days ago [-]
> Hugely increased complexity both in terms of architecture and developer experience. You have to spend considerable time learning about frameworks.
You have to learn something. You can claim bloat in JS frameworks, but that isn’t solved by simply moving it to the server.
Is htmx lean and nice today? Probably! But does it handle the same use cases that the React users have? What happens to it under the pressure of feature bloat? Small-core frameworks like Elm that resisted this pressure were abandoned by big shops. You can't just take something immature (however good) and simply extrapolate a happy future.
> Tooling is an ever-shifting landscape in terms of building and packaging code.
Yes. JS is not the only language with churn issues and dependency hell.
> Managing state on both the client and server
Correct me if I'm wrong, but state can change for something outside of an htmx request, meaning you can end up with stale state in element Y in the client after refreshing element X. The difference is that your local cache is in the DOM tree instead of a JS object.
> By their nature, a fat client requires the client to execute a lot of JavaScript. If you have modern hardware, this is fine, but these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
On unreliable connections you want as thick of a client as possible. If you have server-in-the-loop for UI updates, you quite obviously have latency/retry issues. It’s much preferable to show stale state immediately and update in the background.
> It is very easy to make an SPA incorrectly, where you need to use the right approach with hooks to avoid ending up with abysmal client-side performance.
Bloat comes from reckless software development practices, and are possible in any technology. Angular and React have a shitton of features and ecosystem around it, whereas say Svelte is more lean. Enterprisey shops tend to prioritize features and not give a flying fuck about performance. This is a business choice, not a statement about technology.
> Some SPA implementations of SPA throw away progressive enhancement (a notable and noble exception is Remix). Therefore, you must have JavaScript turned on for most SPAs.
Finally, we cut to the chase. This is 100% true, and we should be talking about this, because it’s still not settled: do we want web pages or web apps? If both, where is the line? Can you expect something like Slack to work without JavaScript? What about a blog with interactive graphs? Should everything degrade or should some things require JS/WASM?
I love that htmx exists. I have absolutely nothing against it. It honors some of the early web philosophy in an elegant and simple manner. It may be a better model for server-centric apps and pages, which don’t need offline or snappy UIs. But it cannot magically solve the inherent complexities of many modern web apps.
doodlesdev 723 days ago [-]
> Finally, we cut to the chase. This is 100% true, and we should be talking about this, because it’s still not settled: do we want web pages or web apps? If both, where is the line? Can you expect something like Slack to work without JavaScript? What about a blog with interactive graphs? Should everything degrade or should some things require JS/WASM?
BINGO. At times, it seems everyone is talking past each other because we are thinking of different things: static pages, dynamic websites, web apps, etc. all require different approaches to development. Honestly, what gets me really mad is when you need to run JavaScript to see a static page; I just can't stand it. One lovely example is the blog post we are talking about, which displays awfully without running JavaScript. This proves that indeed HTMX is not a panacea, and you can also "hold it wrong" (the blog post in question uses HTMX in the backend).
Overall, I believe most applications do well with a graceful-degradation approach similar to what Remix offers and everyone then copied (the idea of using form actions, webforms-style, for every bit of interactivity, so it works with and without JavaScript). I do agree that things like Slack, Discord, Element, or other things we would call web apps are acceptable as pure SPAs that don't gracefully degrade; the biggest problem I have with these is that they exist as web clients in the first place. The world would be a different place if approaches such as wxWidgets had paid off and been adopted; imagine how many slow and bloated web apps could've been beautiful and fast native applications. One can dream. I'm not that pessimistic, not yet.
klabb3 723 days ago [-]
> BINGO. At times, it seems everyone is talking through each other because we are thinking of different things
Glad to hear. Yes, it seems like the post and the comments are largely missing the functional issue at play.
> the blog post we are talking about which displays awful without running JavaScript
Yeah, case in point, perhaps. I mean, if you have two paths (incremental and full) to reach the same state, you'd better be careful to ensure those are functionally equivalent. This is surface area for bugs, so the very least you need to do is turn off JS and test all flows. To me, the value add of SPAs is the snappy UI and offline capabilities, so if you're going to round-trip to the server anyway, you may as well just re-render the entire page old-school and greatly reduce complexity.
> the biggest problem I have with these is that they exist as web clients in the first place: the world would be a different place if approaches such as wxWidgets has paid off and gotten adopted, imagine how many slow and bloated web apps could've been beautiful and fast native applications.
I actually disagree with this (the opinion, not the problem statement). The main players, Apple, Microsoft, and Google, have known about the cross-platform issues and haven't done jack in decades (perhaps Flutter deserves an honorary mention, though). Meanwhile the web, with all its problems, has gotten so much better. Getting the web to the point of native standards seems much more feasible than establishing new open standards for app development. The bloat issue is largely a red herring imo. A well-made web app is snappy and, importantly, can be sandboxed. The issue is that people don't care. You can find equally shitty and bloated apps in the sea of crap on the app stores. With webview support in the OS, bundles can be small. I have an app based on Tauri which is web-based, and the msi is 10 MB. It's never had any perf issues.
doodlesdev 723 days ago [-]
> I have an app based on Tauri which is web based, and the msi is 10Mb. It’s never had any perf issues.
And that's on the heavy side! Tauri is awesome, it's unfortunate most developers opt for Electron to reduce the amount of testing needed between browser engines.
I agree with you that a well-made web app is snappy and works well. I don't think we'll go back to native apps (for now), what I hope for is simply that it could become an _option_ as right now it's just basically trying to swim upstream.
Web apps do a lot of things really well that native applications always had difficulty with, such as accessibility and distribution. Still, I feel that even when doing web apps that are snappy they miss out on some cool features of native applications such as consistent theming, reduced memory usage, reduced CPU usage (regardless of how well the web app is written), and just the simplicity of it.
Perhaps the ship has already sailed, and native applications will never make a comeback, in which case I hope lighter weight engines purpose-made for web applications get adopted as an alternative to shipping the entirety of Chromium with each program I download. Projects like Servo [0] show me that's possible, it's just that there is currently no interest from the big players to keep funding these developments and provide them as an alternative to Chromium.
> And that's on the heavy side! Tauri is awesome, it's unfortunate most developers opt for Electron to reduce the amount of testing needed between browser engines.
The browser engine differences aren't that big of a deal, because you already have standard ways of dealing with them on the web. I think the main reasons people pick Electron today are tooling, desktop integration features, and Node. Tauri is catching up insanely fast, though. I am very happy that I jumped on the bandwagon fairly early.
> in which case I hope lighter weight engines purpose-made for web applications get adopted as an alternative to shipping the entirety of Chromium with each program I download
Absolutely, shipping a browser engine is insanity and should always have been a stopgap at best imo. Even Servo is way too heavyweight to ship with each app. The good news is that with OSes having native webviews we're 80-95% there already, so we don't even have to deal with that tradeoff; this lets an app have a perf overhead of roughly one typical browser tab. It'll take a moment to iron out all the kinks, but it's already perfectly acceptable on most deployment targets. Honestly, Linux is just as bad when it comes to distribution. Distros have been in some perpetual siloing mindset and have not been able to get behind a decent and unified distribution story. It has nothing to do with the web though.
> it's just that there is currently no interest from the big players to keep funding these developments
The big players have been so awful that even if you disregard the abysmal interop story, distributing native apps is still orders of magnitude more hassle than the web ecosystem. Imo Electron was born as an escape hatch from that disaster, not because frontend developers conspired to eat all the worlds RAM for fun. I absolutely agree that the big players should passively fund projects like servo and tauri. I don’t want them anywhere near strategic decision making though.
tudorw 723 days ago [-]
Xanadu.
0xblinq 722 days ago [-]
No, it’s not.
bioinformatics 723 days ago [-]
No, it’s not
phpnode 723 days ago [-]
Title needs (2011). Not because that's when it was written but because that's when this technique was the future.
recursivedoubts 723 days ago [-]
we are going to go back
back to the future
hamilyon2 723 days ago [-]
I am not going to be serious here, because, honestly, this is kind of ridiculous.
Two years ago I distinctly remember server-side rendering piped through a websocket was the future.
I also like how the demo is visibly ugly, jQuery-style ugly. It's nostalgic in a way. And I swear to the gods this approach will break the back button. And resending the form after a network error. And an expired cookie will of course result in your input being lost.
pictur 723 days ago [-]
If your project consists of a todo list, these tools will do the trick, but they are useless for projects with larger and more complex needs. And yes, there may be cases where it doesn't work in frameworks like Next.js and you need to apply hacky solutions, but I don't see even libraries like Next.js being so self-praising. Come on, folks, there's no point in praising a small package that can do some operations through attributes. It is inevitable that projects developed with this package will become garbage when the codebase grows, because scaling isn't part of its development logic. It's nothing more than a smug one-man entertainment project. Sorry, but this is the truth.
infamia 723 days ago [-]
You can always incrementally add dynamic features using web components when HTMX and similar things aren't a good fit. It doesn't have to be either HTMX or JS-first frameworks. Our industry's fixed mindset of JS/React vs. Hypermedia (e.g., HTMX/Hotwire/Unpoly) needs to change.
pictur 723 days ago [-]
I am not for or against anything. I don't live in a fantasy world like in the article. I want to talk about real world problems, not dreams.
I don’t understand how it’s inevitable that projects using this package will become garbage when the codebase grows. It looks like reasonable patterns could be built around it. Am I missing something?
I've seen hundreds of email-address-format validations in my career, server-side. The most horrible regexps, the most naïve assumptions[1]. But to what end?
What does it matter if an email is valid? (This is a question that a domain expert or the business should answer.) trump@whitehouse.gov is probably valid. As is i@i.pm[2]. What your business expert will quite likely answer is something along the lines of "we need to be sure we can send stuff so that the recipient can read it", which is a good business constraint, but one that cannot be solved by validating the format of an email. One possible correct "validation" is to send some token to the email address, and when that token is then entered, you (the business) can be sure that, at least at this point in time, the user can read mail at that address.
[1] A recent gig was a SaaS where a naïve implementor, years ago, decided that email addresses always had a TLD of two or three letters: .com or .us and such. Many of their customers now have .shop or somesuch.
[2] Many devs don't realize that `foo` is a valid email address. That's foo without any @ or whatever. It's a local one, so rather niche and hardly used in practice; but if you decide "I'll just validate using the RFC", you'll be letting through such addresses too! Another reason not to validate the format of email: it's arbitrary and you'll end up with lots of emails that are formatted correctly but cannot be used anyway.
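A minimal sketch of that token approach, assuming Node with Express and nodemailer; the /signup and /confirm endpoints, the SMTP host, and the in-memory store are made up for illustration, not anyone's actual implementation:

  const express = require('express');
  const crypto = require('crypto');
  const nodemailer = require('nodemailer');

  const app = express();
  app.use(express.json());

  const pending = new Map(); // token -> email; purely illustrative storage
  const mailer = nodemailer.createTransport({ host: 'smtp.example.com', port: 587 });

  app.post('/signup', async (req, res) => {
    const email = String(req.body.email || '').trim();
    // No format regex: deliverability is proven by the round trip below.
    const token = crypto.randomBytes(16).toString('hex');
    pending.set(token, email);
    try {
      await mailer.sendMail({
        from: 'noreply@example.com',
        to: email,
        subject: 'Confirm your address',
        text: `Enter this code to confirm: ${token}`,
      });
      res.status(202).json({ status: 'confirmation sent' });
    } catch (err) {
      // The send failing is the real "validation error".
      res.status(400).json({ error: 'could not deliver to that address' });
    }
  });

  app.post('/confirm', (req, res) => {
    const email = pending.get(req.body.token);
    if (!email) return res.status(400).json({ error: 'unknown or expired token' });
    pending.delete(req.body.token);
    res.json({ status: 'address confirmed', email });
  });

  app.listen(3000);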
The validation is there to catch user mistakes before sending a validation email and ending up with an unusable account.
It doesn't matter if an email has a valid format: that says almost nothing about its validity. The only way you can be sure an address can receive mail (today) is by sending mail to it. All the rest is theatre.
And all this only matters if the business cares about deliverability in the first place.
You seem to think that because of this, client-side validation should be skipped. On that point I disagree. If you can tell that it's not a valid email address (bigtunacan@goog is obviously invalid since it's missing a TLD) then no email should be sent. Good UX is to let the customer/user know there is a mistake in the email address.
The button for changing approvers was greyed out, so out of boredom I changed it to active in the client-side code. Lo and behold after clicking the "active" button I got a box for selecting the approver.
I could select any user in the company. Even the CEO or myself.
I did the right thing and mentioned this to our IT Security department. Since obviously this could be used to order really expensive stuff in the name of the CEO or whoever.
They came back and told me that the vendor (I'm not sure I want to mention them here because they're happy to sue) has known about this for 3 years and won't fix it.
Stop right there.
I'm tired of receiving mail from people that gave my email address as if it was their own.
Never ever accept an email address unless you can instantly confirm it's valid by sending an email and waiting for an answer. If the user can't access their email on the spot, just leave it blank and use some other data as the key.
I wish they included that in GDPR or something.
If it isn't valid the server won't annoy anyone. The problem is that the address is valid. And not theirs, it's mine.
The moment the users need to be careful, they will. Make the problem theirs, not mine.
"Sorry sir, the address you provided returns error" or "haven't you received the confirmation email YET? really? there are other customers in the line" and see how soon they remember the right address, perfectly spelled.
Even big-ass companies like PayPal, which have no problem freezing your money, allow their customers to provide unchecked email addresses and send income reports there (here).
I meant that it very much depends on the business-case (and hence laws and regulations) what exactly you'll have to verify, and therefore where you verify and validate it.
Do you need an address to contact people on? You must make sure that the user can read the emails you send to it. Do you merely use it as a login handle? Then it probably only has to be guaranteed unique. Do you need to just store it in some address book? Then roughly checking the format is probably enough. "It depends".
Pretty humongous dick move to use someone else's email address as one's own login for some website, wouldn't you agree? What if it's a popular website, and the owner of the address would like to use it for their id; why should anyone else be able to deprive them of that?
And thus it's also a dick move from the site operator to allow those dicks to do that. So no, it doesn't depend: Just don't accept untested email addresses for anything.
Not all web-applications with a login are open for registration. Not all are public. Not all are "landgrab". Not all have thousands of users or hundreds of registrations a week. Not all are web applications and not all require email validation.
Some do. But, like your niche example proves: the business-case and constraints matter. There's no one size fits all.
Did you mean “receiving mail intended for people that gave my email address”? Because that's how I usually notice that they did.
Ultimately it's about splitting your app into a server and client with a clear API boundary. Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities. This may be worse for small teams but is significantly better for large teams (like Facebook and Google, who started these trends).
One example is your iOS app can hit the same API as your web app, since your server is no longer tightly coupled to html views. You can version your backend and upgrade your clients on their own timelines.
I’ve worked in two kinds of organizations. In one of them when there is a ‘small’ ticket from the viewpoint of management, one programmer is responsible for implementation but might get some help from a specialist (DBA, CSS god, …)
In the other a small ticket gets partitioned to two, three or more sub teams and productivity is usually reduced by a factor more than the concurrency you might get because of overhead with meetings, documentation, tickets that take 3 sprints to finish because subtasks that were one day late caused the team to lose a whole sprint, etc.
People will defend #2 by saying that's how Google does it or that's how Facebook does it, but those monopolists get monopoly rents that subsidize wasteful practices, and if Wall Street ever asks for "M0R M0NEY!" they can just jack up the ad load. People think they want to work there but you'll just get a masterclass in "How to kill your startup."
Tech orgs and those standards exist because:
- tech generally doesn't understand business
- the business struggles to express its needs to tech
Embedding worked for you, but how big was your team? Could that scale?
I'm not questioning your success or your frustrations, but how unique was the situation for your success?
As far as iterations go it's very rapid. Our work teams are split into 1 backend and 1 frontend developer. They agree on an API spec for the project. This is the contract between them, and the frontend starts working immediately against a mock or very minimal version of the API. Iterate from there.
Still, you only have to get it mostly right. Enough to get started. This only starts to become a huge problem when the endpoint is a dependency of another team. When you're in constant communication between the developer building the API and the developer building the client, it's easy to adjust as you go.
I find a key part of a workflow like this though especially if you have multiple teams is to have a lead/architect/staff developer or whatever you may call it be the product owner of the API.
You need someone to ensure consistency and norms, and when you have an API covering functionality as broad and deep as the one I work on, it's important to keep in mind each user story of the API:
- The in-house client using the API. This generally means some mechanism to join or expand related records efficiently and easily, and APIs providing a clear abstraction over multiple different database tables when necessary.
- The external client, used by a third party or the customer directly for automation or custom workflows. The biggest thing I've found helps these use cases is to be able to query records by a related field. For example, if you have some endpoint that allows querying by a userID, being able to also query by a name or foreignID passed over SSO can help immensely.
Company forced us into type 2 using Angular. Projects that used to take a couple of days for one person became multi-month efforts for a dozen developers across three teams.
And just because you serve HTML doesn't necessarily mean that your backend code is tightly coupled with the view code; HTML is just one adapter of many.
A boundary doesn't get better just because you slip an HTTP barrier in between; this is the same type of misconception that has driven the microservice hysteria.
Third time I've heard this thing and the reasoning still escapes me.
First there's ownership. The backend team owns the API. Frontend teams own the clients (web/android/ios/cli) etc. Do you now have a BFF for each client type? Who owns it then? Don't you now need more fullstacks?
Then there's confusion. Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
And there's performance. Adding more hops isn't making it faster, simpler or cheaper.
Isn't it better to have the API team own the single source of ownership?
but... ideally this separation of ownership (backend team owning the backend, frontend team owning the backend-for-frontend) allows each to focus on its domain better without mixing up, say, localization in the lower-level APIs etc.
IOW, having a BFF is sort of like having the view model as a server... that way multiple clients can be dead simple and just map the BFF response to a UI and be done with it
(that's the ideal as I understand it, anyway)
Yes. I’m generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android is usually complex as it is so they are typically specialized but I would still keep them in the team.
Specialized teams not only create synchronization issues between teams but also create different team cultures.
What this does is induce a constant time delay for everything the organization does. Because teams can no longer solve an entire feature, the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
Solutions also have a tendency to become suboptimal because no technician has a general overview of the problem from start to finish. And it's also quite common that the same problem is solved multiple times, once for each team.
By making BFFs specialized, instead of the teams, you don't need to spend time creating and designing a generalized API. How many hours haven't been wasted on API design? It adds nothing to customer satisfaction.
This also means that you separate public and private APIs. External consumers should not use the same API as your own web client.
Specialized BFFs are not only about having a good fit for the client consuming them but also about giving different views of the same underlying data.
E.g. assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API, but not at all for the web client that serves the final version of the article; it shouldn't even be aware that the concept of revisions exists.
Creating a new BFF is as easy as copy-pasting an existing one. Then you add and remove what you need.
The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
What is then different views of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture; it's usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relational model; you can't realistically express that with 1:1 only.
By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
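To sketch what that looks like in practice (assuming Express and node-postgres; the table and column names are invented), a specialized BFF endpoint can do the JOIN itself and return exactly the shape one screen needs, rather than exposing articles and authors as separate 1:1 endpoints the client has to stitch together over HTTP:

  const express = require('express');
  const { Pool } = require('pg');

  const app = express();
  const pool = new Pool(); // connection settings come from the environment

  // Web-client BFF: final article text plus author name, no revisions exposed.
  app.get('/bff/web/articles/:id', async (req, res) => {
    const { rows } = await pool.query(
      `SELECT a.id, a.title, a.published_body AS body, u.display_name AS author
         FROM articles a
         JOIN users u ON u.id = a.author_id
        WHERE a.id = $1`,
      [req.params.id]
    );
    if (rows.length === 0) return res.sendStatus(404);
    res.json(rows[0]);
  });

  app.listen(3000);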
Ownership of a BFF should ideally be by the ones consuming it.
iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
BFF is nothing more than an adapter in hexagonal architecture.
Right, why have someone _good_ at a particular domain who can lead design on a team when you can have a bunch of folks who are just OK at it, and then lack leadership?
> Specialized teams not only creates synchronization issues between teams but also creates different team cultures.
Difference in culture can be cultivated as a benefit. It can allow folks to move between teams in an org and feel different, and it can allow for different experimentation to find success.
> What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
I've seen this hold true when I was by myself doing everything from project management, development, testing, and deployment. Orgs can have multiple stakeholders who might throw a flag at any moment or force inefficient processes.
> Solutions also has a tendency to become suboptimal because no technician has an general overview of the problem from start to finish. And it also quite common that the same problem is solved multiple times, for each team.
Generalists can also produce suboptimal solutions because they lack deeper knowledge and experience in a particular domain, like databases, so they tend to reach for an ORM because that's a tool for generalists.
> By making BFFs specialized, instead of the teams, you don’t need to spend time to create and design a generalized API. How many hours hasn’t been wasted on API design? It adds nothing to customer satisfaction.
Idk what you're trying to claim, but API design should reflect a customer's workflow. If it doesn't, you are doing it wrong. This requires both gathering of info and design planning.
> This also means that you separate public and private APIs. External consumers should not use the API as your own web client.
Internal and external APIs are OK, this is just a feature of _composability_ in your API stack.
> Specialized BFFs is not only to have a good fit for the client consuming it but it also about giving different views of the same underlying data.
If the workflow is the same, you're basically duplicating more effort than if you just had a thin client for each platform.
> E.g assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API but for the web client that serves the final version of the article not at all, it shouldn’t even be aware of that the concepts of revisions exists.
Based on what? Many comment systems or articles use an edit notification or similar for correcting info. This is a case by case basis on the product.
> Creating a new a BFF is as easy as copy&paste an existing one. Then you add and remove what you need.
That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client.
> The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
> What is then different views of the same data? An SQL query (or VIEW). Too many APIs just maps a database table to an endpoint 1:1, those APIs are badly designed because the consequence of that is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, very inefficient.
Yes, those APIs are not being designed correctly, but I think you said folks are wasting too much time on design, so I'm not sure what you're arguing for here other than to not just try and force your clients to do excessive business logic.
> By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture, it usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relationship model, you can’t realistically express that with 1:1 only.
Yet ORMs are tools of generalists. I agree they are generally something that can get in the way of a complex data model, but they are fine for like a user management system, or anything else that is easily normalized.
> By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
That depends a lot on how the ORM is being used.
> Ownership of a BFF should ideally be by the ones consuming it.
Why? We literally write clients for APIs we don't own all the time, whenever we call out to an external/3p service. Treat your client teams like a client! Make API contracts, version things correctly, communicate.
> iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
The workflows should be the same. The main difference between any clients is the inputs available to the user to interact with.
> BFF is nothing more than an adapter in hexagonal architecture.
That's what a client is...
I can have a fullstack developer that is better than a specialist. Specialist only means that they have specialized in one part of the architecture; that doesn't necessarily mean that they solve problems particularly well, that depends on the skill of the developer.
And the point is that even if they do have more skill within that domain, the overall solution can still suffer. Many SPAs suffer from this: each part can be well engineered but the user experience is still crap.
If your developers is lacking in skill, then you should definitely not split them up into multiple teams. But again I'm talking about organization in general, that splitting teams has a devastating effect on organization output. Difference in culture will make it harder to move between teams, thus the organization will have much more difficult time planning resources effectively.
BFF is all about reflecting the need of the client, but the argument was that a generalized API is better because of re-usability. The reason why you split into multiple BFFs is because the workflow isn't the same; it differs a lot between a web client and a typical app. If the workflow is the same you don't split; that is why I wrote BFF per client type, a type that has a specific workflow (need & requirement).
> This is a case by case basis on the product.
Of course, it was an example.
> That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client
I'm talking about the server here, not the client.
> That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
But the authentication and redirects will probably be different, so you can reuse a service (class) for updating the model, but have different controllers (endpoints).
> Yes, those API are not being designed correctly
Every generalized API will have that problem in various degrees, thus BFF.
> Yet ORMs are tools of generalists.
Oh, you think a fullstack is a generalist and thus doesn't know SQL. Why do you believe that?
> That depends a lot on how the orm is being used.
Most ORMs, especially if they are of type active record, just misses that mark entirely when it comes to relationship based data. Just the idea that one class maps to a table is wrong on so many levels (data mappers are better at this).
ORM entities will eventually infect every part of your system: there will be view code holding entities with a save method on them, so the model gets changed from almost everywhere, impossible to track and refactor.
Performance is generally bad, thus most ORMs has an opaque caching layer that will come back and bite you.
And what's typical is that you need to adapt your database schema to what the ORM manages to handle.
> We literally write clients for APIs we don't own all the time,
The topic here is APIs you control yourself within the team/organization. External APIs, either ones that you consume or ones you need to expose, are a different topic; they need to be designed (more). The point is internal APIs can be treated differently than external ones; no need to follow the holy grail of REST for your internal APIs. Waste of time.
But even the external APIs that you need to expose can be subdivided into different BFFs; no need to squeeze them into one. This has the benefit that you can spend less time on the overall design of the API, because the API is smaller (fewer endpoints).
> That's what a client is...
I'm specifically talking about server architecture here; the client uses the adapter.
Are. One developer is, several developers are.
> Most ORMs, especially if they are of type active record, just misses that mark entirely
Miss. One ORM misses, several ORMs miss. (You did get "are" right!)
> Performance is generally bad, thus most ORMs has
Have. One ORM has, several ORMs have.
Come on, it's not that damn hard.
You can't trust it to actually save changes you've made, it might just fail without an error message or sometimes it soft-locks until you reload the page. Even on a reliable connection. Error handling in SPAs is just lacking in general, and a big part of that is that they can't automatically fall back to simple browser error pages.
Google seems to be one of the few that do pretty good on that front, but they also seem to be more deliberate for which products they build SPAs.
How desirable this is depends on the UI complexity.
Complex UIs as the ones built by google and facebook will most likely benefit from that.
Small shops building CRUD applications probably won't. On the contrary: the user requirements often cross-cut client and server-side code, and separating these in two teams adds communication overhead, at the best of the hypotheses.
Moreover, experience shows that such separation/specialization leads to bloated UIs in what would otherwise be simple applications -- too many solutions looking for problems in the client-side space.
It sounds a lot more annoying to have to manage one client and many servers instead.
Language and runtime decisions really need more context to be useful. JS everywhere can work well early on when making a small number of devs as productive as possible is a goal. When a project scales parts of the stack usually have different priorities take over that make JS a worse fit.
And in this case what actually happened is exactly what we had expected would happen: tons of badly-written Angular apps that need to be maintained for the foreseeable future, because at this point nobody wants to rewrite them, so they become Frankensteins nobody wants to deal with.
As far as I know, windows explorer has been extremely slow for this kind of operation for ages. It's not even explainable by requiring a file list before starting the operation, I have no idea what it is about Windows explorer, it's just broken for such use cases.
Just recently, I had to look up how to write a robocopy script because simply copying a 60GB folder with many files from a local network drive was unbelievably slow (not to mention resuming failed operations). The purpose was exactly what I wrote: copy a folder in Windows explorer.
What does this have to do with React or JavaScript?
Plus we now get the benefit of people trying to "replace" built in browser functionality with custom code, either
The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.
or
They're changing things because they're already so far from default browser behavior, why not? ... Scrolling broken or janky because the developer decided it would be cool to replace it? Check.
There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.
Yep. It's bonkers to me that a page consisting mostly of text (say, a Twitter feed or a news article) takes even so much as a second (let alone multiple!) to load on any PC/tablet/smartphone manufactured within the last decade. That latency is squarely the fault of heavyweight SPA-enabling frameworks and their encouragement of replacing the browser's features with custom JS-driven versions.
On the other hand, having to navigate a needlessly-elongated history due to every little action producing a page load (and a new entry in my browser's history, meaning one more thing to click "Back" to skip over) is no less frustrating. Neither is wanting to reload a page only for the browser to throw up scary warnings about resending information simply because that page happened to result from some POST'd form submission.
Everything I've seen of HTMX makes it seem to be a nice middle-ground between full-MPA v. full-SPA: each "screen" is its own page (like an MPA), but said page is rich enough to avoid full-blown reloads (with all the history-mangling that entails) for every little action within that page (like an SPA). That it's able to gracefully downgrade back to an ordinary MPA should the backend support it and the client require it is icing on the cake.
I'm pretty averse to frontend development, especially when it involves anything beyond HTML and CSS, but HTMX makes it very tempting to shift that stance from absolute to conditional.
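For anyone who hasn't seen the mechanism, a rough plain-JS sketch of that partial-swap idea (this is not HTMX's actual API; the data-partial attribute, header name, and fragment URLs are invented): links fetch an HTML fragment and swap it into a target element, leaving one history entry per screen rather than per click:

  document.addEventListener('click', async (event) => {
    const link = event.target.closest('a[data-partial]');
    if (!link) return;
    event.preventDefault();

    // Ask the server for just the fragment for this screen.
    const response = await fetch(link.href, { headers: { 'X-Partial': 'true' } });
    const html = await response.text();
    document.querySelector(link.dataset.partial).innerHTML = html;
    history.pushState({}, '', link.href); // Back still works, one entry per screen
  });

  // Crude graceful degradation: Back/Forward falls back to a full page load.
  window.addEventListener('popstate', () => location.reload());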
Even before that I was making Java applets to do things you couldn't do with HTML, like draw a finite element model and send it to a FORTRAN back end to see what it does under stress, or replace Apple's QuickTime VR plugin, or simulate the Ising model with Monte Carlo methods.)
What happened around 2015 is that people gave up writing HTML forms and felt that they had to use React to make very simple things like newsletter signups so now you see many SPAs that don't need to be SPAs.
Today we have things like Figma that approach the complex UI you'd expect from a high-end desktop app, but in many ways our horizons have shrunk thanks to "phoneishness" and the idea that everything should be done with a very "simple" (in terms of what the user sees) mobile app that is actually very hard to develop -- who cares about how fast your build cycle is if the app store can hang up your releases as long as they like?
MPAs break back buttons all the damn time, I'd say more often than SPAs do.
Remember the bad old days when websites would have giant text "DO NOT USE YOUR BROWSER BACK BUTTON"? That is because the server had lots of session state on it, and hitting the browser back button would make the browser and server be out of sync.
Or the old online purchase flows where going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
Let's think about it a different way.
If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.
Yeah, state mutation triggered by GET requests is going to make for a bad time, SPA or MPA. Fortunately enough of the web application world picked up enough of the concepts behind REST (which is at the heart of all web interaction, not just APIs) by the mid/late 00s that this already-rare problem became vanishingly rare well before SPAs became cancerous.
> going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
The problem is entirely orthogonal to SPA vs MPA.
> If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app?
It's not only EVER done, it's regularly done. Perhaps you should interrogate some of the reasons why.
But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds, a giant payload of application shell and THEN also screen-specific application/UI/data payload, all for reasons like developer's unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
Content in the app is reloaded, sure, but the actual layout and business logic? Code that generally changes almost never, regenerated on every page load?
I know of technologies that are basically web wrappers that allow for doing that to bypass app store review processes, but I'd be pissed if an alarm clock app decided to reload its layout from a server every time I loaded it up!
The SPA model of "here is an HTML skeleton, fill in the content spaces with stuff fetched from an API" makes a ton more sense.
The application model, that has been in use for even longer, of "here is an application, fetch whatever data you need from whatever sources you need" is, well, a fair bit simpler.
Everyone is stuck with this web mindset for dealing with applications and I get the feeling that a lot of developers nowadays have never written an actual phone or desktop application.
> But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds, a giant payload of application shell and THEN also screen-specific info, all for reasons like developer's unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
I've seen plenty of MPAs that consist of multiple large giant mini-apps duct taped together.
Shitty code is shitty code!
Parentheticals added.
Oh, and then request scope wasn't good enough because you needed to do a post-redirect-get? I will say that I do not think MPAs for web applications were the good old days.
I’m just so miffed that it can end up necessitating roping in so much BS. Mind you, not necessarily in this example. Things like HTMX excite me. And, on the other side, things like Next.js and Remix that IMO are a breath of fresh air, even if they might not ultimately be heading in the right direction (I genuinely have no idea).
As for phone apps these are undeniably a step backwards from desktop apps, web apps and every other kind of app. On the web you can deploy 100 times a day, waiting for the app store to approve changes to your app is like building a nuclear reactor in comparison.
All the time you get harassed in a physical store or a web site to "download our mobile app" and you know there ought to be a steaming pile of poop emoji because the mobile app almost always sucks.
One of the great answers to the app store problem is to move functionality away from the front end into the back, for instance people were looking to do this with chat bots and super apps back in 2017 and now that chatbots are the rage again people will see them as a way to get mobile apps back on internet time.
It was even worse when the page didn't warn you, but would lose state all the same!
Good luck forcing users to download 50MB before they can use your web app.
The web and mobile/desktop apps are two totally different paradigms with different constraints.
It might be news to folks to learn that every single SPA framework has solved the problem entirely because it's really not an uncommon experience to have your browser history broken by a SPA. I believe that most frameworks implement the API correctly. I also believe a good number of developers use the framework incorrectly.
Bad websites are the results of bad developers, not the tool. You can have your history messed up by any kind of website.
But look, whatever. It’s Friday afternoon, I’m out of here. Have a good weekend.
You could do some client side caching with local page data, but just keeping it present and requesting updates to it only is vastly superior.
That's honestly one place SPAs shine, where there's a relatively expensive request that provides data and then a lot of actions that function on some or all of that data transiently.
That is, it may seem a fine optimization, but has led to a fairly bloated experience.
I presume you are thinking of rendering? But, again, that is largely done client side in both cases.
As a simplistic example, imagine an app which on login has to do an expensive query which takes five seconds to return because of how intensive it is on the back end. If you can just redisplay the data that's already in memory on the client, optionally updating it with a much less expensive query for what's changed recently, then you're saving about five seconds of processing time (and client wait time) by doing so.
You could use localStorage to do something similar without it being a SPA, but that's essentially opting into a feature that serves a similar need.
Client side caching is a strong point of SPAs, so it makes sense that a use case that can leverage that heavily will have benefits.
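A minimal sketch of that pattern (the /api/report endpoint and its updatedSince parameter are hypothetical): keep the expensive result in memory and localStorage, redisplay it instantly on the next visit, and only fetch what changed:

  let cache = JSON.parse(localStorage.getItem('reportCache') || 'null');

  async function loadReport() {
    if (!cache) {
      // First visit: pay for the expensive query once.
      const rows = await (await fetch('/api/report')).json();
      cache = { fetchedAt: Date.now(), rows };
    } else {
      // Later visits: cheap delta query instead of the five-second full one.
      const since = new Date(cache.fetchedAt).toISOString();
      const changes = await (await fetch(`/api/report?updatedSince=${since}`)).json();
      cache = { fetchedAt: Date.now(), rows: mergeRows(cache.rows, changes) };
    }
    localStorage.setItem('reportCache', JSON.stringify(cache));
    render(cache.rows);
  }

  function mergeRows(existing, changes) {
    const byId = new Map(existing.map((row) => [row.id, row]));
    for (const row of changes) byId.set(row.id, row);
    return [...byId.values()];
  }

  function render(rows) {
    document.querySelector('#report').textContent = `${rows.length} rows loaded`;
  }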
Now, I will grant that it does less. Probably lacking a lot of the "presence detection" that is done in the thick client. Certainly lacking a lot of the newer ad stuff they are pushing at.
But the rest could be offset by a very basic N-tier application where the "envelope" of the client HTML is rather cheaply added to any outgoing message. And the vast majority that goes into "determining what data is being sent, being able to just redisplay the data you have when you browse back to the main page instead of request it again, [etc.]" will probably be more identical than not between the options.
Now, I grant that some of the newer history API makes some tricks a bit easier for back button changes to work. Ironically to the point, is that gmail is broken for back button usage. So... whoops.
I would argue that Google has thrown a bunch of engineering talent at it to optimize the problem as much as it can be for a web interface, and that Gmail is a bad example of a SPA mail client, as it's more a combined mail client and IMAP server (really a custom-designed mail store) all rolled into one. Whether Gmail itself really uses more or not is somewhat irrelevant to whether a mail client in general leans into the benefits a SPA provides. This is what I was talking about here.[1]
That said, whether it uses less resources is a tricky question. Sometimes there's algorithmic wins that overall reduce the total work done, and I don't doubt Gmail leverages some of those, but it's also just a huge amount of caching, whether in the browser or in a layer underneath. The benefit of a SPA is that you can customize the caching to a degree for the application in the client without having to have an entire architectural level underneath designed to support the application. For anything at scale, having that layer underneath is obviously better (it's custom fit for the needs of the application and isn't susceptible to client limitations), but it's also very engineering intensive.
My guess is that Gmail puts a very large amount of cache behind most requests, and is just very, very good about cache invalidation. Or they've got the data split across many locations so they can mapreduce it quickly and efficiently (but tracking where those places are will necessitate some additional resource usage).
In the end, you need caching somewhere. You can do it on the server side so that you have full control over it but you have to pay for the resources, or you can do it on the client side with some limits on control and availability, but you don't use your own resources. SPAs make client-side caching more reliable and easier to deal with in some cases, because the working state of the client isn't reset (or mostly reset) on every request.
1: https://news.ycombinator.com/item?id=35835988
Sure, gmail optimizes for this heavily so it's fast, but it's still one of the most intensive things you can do for an app like that, so reducing the number of times you need to do it is a huge win for any webmail. If you've ever used a webmail client that's essentially just an IMAP client for a regular IMAP account, you'll note that if you open a large inbox or folder it's WAY slower than trying to view an individual message most of the time, for what are obvious reasons if you just think of a mailbox as a database of email and the operations that need to happen on that database (which it is).
If clicking on an individual message is a new page, that's fine, but if going back to the main view is another full IMAP inbox query, that's incredibly resource intensive compared to having it cached in the client already (even if the second request is cached in the server, it's still far more wasteful than not having to request it again).
It is entirely possible to have a MPA application that makes calls to the back end to retrieve more data. Especially for things like a static page (cached) with some dynamic content on it. My problem is when people convert an entire site to a Single Page (SPA). When I click to go from the "home page" to a "subsection page", it makes sense to load the entire page. When I click to "see more results" for the list of items on a page, it seems reasonable to load them onto the page.
Side note: If I scroll down the page a few times and suddenly there's 8 items in the back queue, you're doing it wrong. That drives me bonkers.
The return of the "backend frontender" is also not happening. The bar is now much higher in terms of UX and design, and for that you really need frontend specialists. Gone are the days when the backend guys could craft a few HTML templates and call it a day, knowing the design won't change much, and so they would be able to go back to DB work.
ie. "I don't live in a rural area, but that's fine, nobody who matters lives there."
It's amusing that for a long time the response was "oh man that sounds terrible".
Now it is "oh hey that's server side rendered ... is it a new framework?".
The cycle continues. I end up writing all sorts of things and there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man this should be Z". There are days where I just opt for using old ColdFusion... it is faster for some things.
Really though there's so many advantages to different approaches, the important thing is to do the thing thoughtfully.
I switch between a fair variety frontend, backend myself and have never had that reaction.
It's always, I could do exactly this in 2005 using jquery + JSP, it would not need any of these 1500 dependencies and the user would see absolutely no difference (except downloading 10 times more js today at 5G speeds)
The scalability issues non-Facebook-scale webapps are trying to solve for do not exist. These apps will be dead before they reach 10% of that scale, and yet the project folks just don't get it.
Anecdotally, GitHub projects I bookmarked 3-4 years ago won't even compile today. A large chunk of projects from 2010 still work, including mine that I wrote a decade ago as a newbie JS junkie.
Why ? Dependencies.
I even hear Laravel is pretty nice to use.
I'll never know that stuff though because the PHP I generally encounter is 15 years old spaghetti.
How much of that is just a garden variety "grass is always greener on the other side" effect?
> the important thing is to do the thing thoughtfully.
And finish! Total losses are still total losses no matter how thoughtfully done.
In my example not so much. I'm working in a number of frameworks, use them regularly, sometimes ColdFusion is just faster / better suited, sometimes some other system.
Node does not absolve you from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted not to be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client side is just writing insecure websites/apps.
> We moved away for MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
I would claim they became even more so than the thing they replaced. Basically most of any progress in bandwidth or resources is eaten by more bloat.
Yeah, that was my point. With Node you can write JS to validate on both the client and server. In the article, they suggest you can just do a server request whenever you need to validate user input.
>Basically most of any progress in bandwidth or resources is eaten by more bloat.
In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.
Similar to my experience. So glad I uBlock Origin a lot of unnecessary traffic. At some point it is no longer good taste, when the 5th CDN is requested, the 10th tracker script from random untrusted 3rd parties loaded... All while neglecting good design of the website, making it implode when you block their unwanted stuff. It's not rare to save more than half the traffic when blocking stuff.
All I know is that I was unable to figure out what it was, and I bounced it off a few people online, and the performance scaled inversely with the number of DOM nodes.
I’d guess that Go is relatively more popular than Node for API servers, and Node is more popular for web servers.
And as you note, both are probably less popular than languages like Java and PHP.
Server side validation is for security, correctness, etc.
They are different features that require different code. Blending the two is asking for bugs and vulnerabilities and unnecessary toil.
The real reason that SPAs arose is user analytics.
1) When you run the validation has a huge impact on UX. A field should not be marked as invalid until a blur event, and after that it should be revalidated on every keystroke. It drives people crazy when we show them a red input with an error message simply because they haven't finished typing their email address yet, or when we continue to show the error after the problem has been fixed because they haven't resubmitted the form yet.
2) Client side validation rules do occasionally diverge from server side validation rules. Requiring a phone number can be A/B tested, for example.
To be fair, I'm also not a fan of bloated libraries like React and Angular. I think we had it right 15-20 years ago: use the server for everything you can, and use the smallest amount of JS necessary to service your client-side needs.
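A small sketch of the timing described in point 1 above (the element id and the toy email check are illustrative only): don't flag the field until it first loses focus, then re-validate on every keystroke so the error clears as soon as it's fixed:

  function attachValidation(input, validate) {
    let touched = false;

    const run = () => {
      const message = validate(input.value);
      input.classList.toggle('invalid', Boolean(message));
      input.setCustomValidity(message || '');
    };

    // Only start complaining once the user has left the field...
    input.addEventListener('blur', () => {
      touched = true;
      run();
    });

    // ...then keep the feedback current on every keystroke.
    input.addEventListener('input', () => {
      if (touched) run();
    });
  }

  attachValidation(document.querySelector('#email'), (value) =>
    value.includes('@') ? '' : 'That does not look like an email address'
  );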
That's not true at all. HTMX extends the logical capabilities of HTML, and _hyperscript goes even further.
Or have we forgotten that plain old HTML can validate much of this for us with no JS of any type required?
One challenge is that you've got to keep the server-side and client-side validations in sync, so if you'd like to increase the max length of an input, all the checks need to be updated. Ideally, you'd have a single source of truth that both front-end and back-ends are built from. That's easier if they use the same language, but it's not a requirement. You'll also probably want to deploy new back-end code and front-end code at the same time, so just using JS for both sides doesn't magically fix the synchronization concerns.
One idea is to write a spec for your input, then all your input validation can compare the actual input against the spec. Stuff like JSON schema can help here if you want to write your own. Or even older: XML schemas. Both front-end and back-end would use the same spec, so the languages you pick would no longer matter. The typical things you'd want to check (length, allowed characters, regex, options, etc.) should work well as a spec.
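A sketch of that idea using JSON Schema with the Ajv library (the schema itself is invented): the same schema, or even the same compiled validator when both ends are JS, runs in the browser before submit and on the server on receipt, so a change like raising maxLength happens in exactly one place:

  const Ajv = require('ajv');

  // Single source of truth; could live in its own file shared by both builds.
  const signupSchema = {
    type: 'object',
    required: ['username', 'email'],
    properties: {
      username: { type: 'string', minLength: 3, maxLength: 32 },
      email: { type: 'string', maxLength: 254 },
    },
    additionalProperties: false,
  };

  const validateSignup = new Ajv().compile(signupSchema);

  // Called on the client before submit and on the server when the request arrives.
  function checkSignup(payload) {
    const ok = validateSignup(payload);
    return ok ? [] : validateSignup.errors.map((e) => e.message);
  }

  console.log(checkSignup({ username: 'ab', email: 'foo@example.com' }));
  // -> one error complaining that the username is shorter than minLength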
It's also not the only place this type of duplication is seen: you'll often have input validation checks run both in the server-side code and as database constraint checks. Django solves that issue with models, for example. This can be quite efficient: if I have a <select> in my HTML and I want to add an option, I can add the option to my Django model and the server-side rendered HTML will now have the new option (via Django's select widget). No model synchronization needed.
As others mention, you may want to write additional validations for the client-side or for the server-side, as the sorts of things you should validate at either end can be different. Those can be written in whichever language you've chosen as you're only going to write those in one place.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/val...
Frontend and backend validations are also different. Frontend is more about shape and type. Backend is content and constraints.
I’ve several times been in the position of writing a new UI for an existing API. You find yourself wanting to validate stuff before the user hits “submit”, because hiding errors until after submitting is terrible UX; and to do that, you find yourself digging into the server code to figure out the validation logic, and duplicating it.
And then years or months later the code gets out of sync, and the client is enforcing all sorts of constraints that aren’t needed on the server any more! Not good.
It's not as easy as that. Showing validation while people are editing can be even worse, especially for less-technically able users or people using assistive technology.
Having an announcement tell you your password isn't sufficiently complex when you're typing in the second letter might not be bad for us, but how does that work for a screen reader?
> Generally speaking, avoid validating the information in a field before the user has finished entering it. This sort of validation can cause problems - especially for users who type more slowly
https://design-system.service.gov.uk/patterns/validation/
Can you go into that a bit? I don't really understand what you mean.
But if you want advanced tracking, like tracking what a user is focusing on at a particular instant, you need to wrap the whole document in a lot of JS.
SPA frameworks came out of AdTech companies like Meta, and I assure you it wasn't because they had limited engineering resources.
From my memory of working through this time, it was driven more by UX designers wanting to have ever more "AJAXy" interfaces. I did a lot of freelancing for design agencies 2006 - 2016, and they all wanted these "reactive" interfaces, but building these with jQuery or vanilla JS was a nightmare. So frameworks like JavaScript MVC, Backbone.js, SproutCore, Ember.js were all popping up offering better ways of achieving it. React, Vue and Angular all evolved out of that ecosystem.
Companies use SPA frameworks for the same reason they use native apps, to make a “richer”, more responsive, more full-featured UI.
Analytics is typically done in a separate layer by a separate team, usually via Google Tag Manager. There might be a GA plugin for your UI framework, but it can work equally well with plain HTML. GA does use a bunch of client-side JS, yes, but it’s not really a framework you use on the client side, it’s just a switch you flip to turn on the data hose.
In my experience, trying to add analytics cleanly to clientside UI code is a complete pain. Trying to keep the analytics schema in sync as the UI evolves is really hard, and UI developers generally find analytics work tedious and/or objectionable and hate doing it.
Google Tag Manager is the big story in adtech, and I think it comes from and inhabits a completely different world from Angular, React etc.
React isn't a SPA framework. It's a component framework. It has no router or even state management. ExtJS is an MVC framework in JavaScript and can be used to create a full SPA app without additional libraries. It also came out in 2007. There is also Ember, which also predates React and is another MVC framework by the people who did Rails.
If anything, SPAs make metrics harder because they hide the real behavior of the page in local JS/TS code and don't surface as much or any information in the URL. Also, fewer server interactions means fewer opportunities to capture user behavior or affect it at the server level.
Google is an AdTech company par excellence.
You don't need to do hacky URL tracking with SPAs. That's the point.
>Also, fewer server interactions means fewer opportunities to capture user behavior or affect it at the server level.
SPAs certainly do not have "fewer server interactions". What do you think an API call is?
"React" comes from "reactive web app", not "reaction to a competitor's product".
I'm aware that they call it "reactive" but I'll stick with my rationale. There is no way they would use a Google product like that.
In theory stuff like GraphQL helps, but in the reality I'm living in, SPAs hit multiple endpoints to render even simple pages.
Please don't pretend it was merely NIH syndrome that led to its creation.
Source: Vue.js documentary
I myself would like to see a data-dictionary-driven app framework. Code annotations on "class models" are hard to read and too static.
Also, validation is usually built on both client and server for the same things. Like if you have password complexity validation: it's both in the UX and on the server, otherwise the UX will be terrible.
There are also validations that can improve UX but aren't meaningful on the server. Like a "password strength meter", or "caps lock is on".
Religiously deploying the same validations to client and server can be done, but it misses the point that the former is untrusted and just for UX. And will involve a lot of extra engineering and unnecessary coupling.
That said, I could definitely see additional checks being done server side. One example would be actually checking the address database to see if service is available in the entered address. On the other hand, there really isn't any waste here either. I.e. just because you write the validation in server side JS doesn't mean you MUST therefore deploy and use it in the client side JS as well, it just means you never need to worry about writing the same check twice.
Of particular note, there is no UI code here, because the meter's UI code is not related to the check function beyond reading its return value.
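Something along these lines sketches the shape being described (the names and the entropy math are made up for illustration; a real estimator like zxcvbn would do the actual scoring):

    <!-- minimal sketch: a pure check function, plus meter UI that only reads its return value -->
    <input id="pw" type="password">
    <meter id="pw-meter" min="0" max="4" value="0"></meter>
    <script>
      // Check function: no UI concerns, so it could be copied to the server untouched.
      function checkStrength(password) {
        let charset = 0;                               // crude charset-size estimate
        if (/[a-z]/.test(password)) charset += 26;
        if (/[A-Z]/.test(password)) charset += 26;
        if (/[0-9]/.test(password)) charset += 10;
        if (/[^A-Za-z0-9]/.test(password)) charset += 32;
        const bits = password.length * Math.log2(charset || 1);
        return { bits: bits, score: Math.min(4, Math.floor(bits / 25)) };
      }

      // Meter UI: reads the return value and nothing else.
      document.getElementById('pw').addEventListener('input', function (e) {
        document.getElementById('pw-meter').value = checkStrength(e.target.value).score;
      });
    </script>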
I'm not saying not to reuse things, because I specifically think it should be two separate functions on the client, one of which is copied to the server. But if you insist on having only one client function, I think the server function should be cut down.
And the premise is doing client-only advice on strength so I'm not going to challenge that premise.
As far as 50x, your code doesn't need those consts saying the exact same thing as the results object, so that simplifies to 8 lines, and I think 400 lines for a good password estimator isn't unreasonable. zxcvbn's scoring function is around that size.
Also, in my opinion, things like you suggest shouldn't be done. A password strength meter is only going to give attackers hints at the passwords you have in your system. And I have not seen a caps-lock-on warning in forever. The only password validation we do is the length, which is pretty easy to validate on client and server.
No, it's not. A password strength meter just shows you the randomness of an input password, it doesn't have anything to do with passwords already in the system.
In the full picture though, in terms of UI/UX, the meter seems like only a downside. In the dartboard use case it's great, because it displays what's still needed in terms users actually work and think in, signalling e.g. "you still need a number, otherwise you're all set". People don't really think in bits of entropy though, so all that's really being signaled by either a meter or a normal failed-validation hint is "more complexity and/or length needed".
There may be good cases for using a meter while simultaneously implementing good password requirement policy I'm not thinking of though.
This works like I described: it doesn't show 'dartboard requirements', only entropy. I think you've misunderstood what a password strength checker is. It's definitionally not a checklist like 'you need an uppercase letter, a lowercase letter, a number, a special character'. It's a tool which measures the strength, i.e. the randomness or entropy, of the password.
You still need the client-side validation for UX. The regular user needs to know if they messed up something in the form. Also it's a much better UX if it's done on the client side, without the need to send an async request for validations.
If your back end is fast and your HTML is lean, backend requests to validate can complete in less time than any of the 300 javascript, CSS, tracker, font, and other requests that a fashionable modern webapp does for no good reason...
It's true though that many back ends are run on the cheap with slow programming languages and single-thread runtimes like node.js that compete with slow javascript build systems to make people think slow is the new normal.
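To make that concrete: with something like htmx, a backend-validated field is just an attribute or two, and the round trip is the only cost (endpoint and ids are hypothetical; htmx is assumed to be loaded):

    <!-- re-validate the email field against the server whenever it changes -->
    <input type="email" name="email"
           hx-post="/contacts/email/validate"
           hx-trigger="change"
           hx-target="#email-error">
    <span id="email-error"></span>

The server just answers with a small HTML fragment (an error message, or nothing) that gets swapped into the span.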
Server side validations predominately enforce business constraints.
If you mix the concerns, either your UX suffers (latency) or the data suffers (consistency, correctness).
"The best SPA is better than the best MPA. The average SPA is worse than the average MPA."
https://nolanlawson.com/2022/06/27/spas-theory-versus-practi...
Recently I was using Circle (like a paid social media platform for communities) and pressing back not only loses the scroll position, it loses everything. It basically reloads the whole home page.
I dread using the Google Cloud console for example.
What? No.
The whole point of Node was a) being able to leverage javascript's concurrency model to write async code in a trivial way, and b) the promise that developers would not be forced to onboard to entirely different tech stacks on frontend, backend, and even tooling.
There was no promise to write code once, anywhere. The promise was to write JavaScript anywhere.
[0]https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://en.wikipedia.org/wiki/Isomorphic_JavaScript
I mean, I'm using Laravel Livewire quite heavily for forms, modals and search. So effectively I've eliminated the need for writing much front-end code. Everything that matters is handled on the server. This means the little Javascript I'm writing is relegated to frilly carousels and other trivial guff.
Also, all these things the author complains about are realities of native apps, which still exist in massive numbers especially on mobile! I appreciate that some folks only need to care about the web, but declaring an architectural pattern as superior - in what appears to be a total vacuum - is how we all collectively arrive at shitty architecture choices time and time again.
Unfortunately, you have to understand all the patterns and choose when each one is optimal. It's all trade-offs - HTMX is compelling, but basing your entire architectural mindset around a library/pattern tailored to one very specific type of client is frankly stupid.
https://htmx.org/essays/when-to-use-hypermedia/
However, I see the type of client that needs just basic web functionality, e.g. CRUD operations and building something basic, as far more prevalent than those that need instant in-app reactivity, animations and so on (React and the SPA ecosystem).
Nowadays it's exactly the opposite: every web developer assumes an SPA as the default option, even for these simple CRUD cases.
Technically, the technology supports doing any of them right. In practice, doing good MPAs requires offloading as much as you can onto the mature and well-developed platforms that handle them, while doing good SPAs requires overriding the behavior of your immature and not thoroughly designed platforms at nearly every point and handling it right.
Technically, it's just a difference in platform maturity, and those things tend to correct themselves given some time.
In practice, almost no SPA has worked minimally well in more than a decade.
While I am a fan of MPAs and htmx, and personally find the dev experience simpler, I cannot argue with this.
The high-order bit is always the dev's skill at managing complexity. We want so badly for this to be a technology problem, but it's fundamentally not. Which isn't to say that specific tech can't matter at all -- only that its effect is secondary to the human using the tech.
It brings a tear of joy to my eye honestly. The circle of life continues, and people always forget people are bad at programming (myself included).
Like, in my opinion you can write clean code in C, but since you don't even have a string type it shepherds you into doing nasty stuff with char*... etc.
[edit: both comprising shared code between client and server, as well as, reduced barrier to server-side contribution, and then some including but not limited to the value of the concurrency model, expansive (albeit noisy) library availability, ...]
People forget how bad MPAs were, and how expensive/complicated they were to run.
Front end frameworks like svelte let you write nearly pure HTML and JS, and then the backend just supplies data.
Having the backend write HTML seems bonkers to me, instead of writing HTML on the client and debugging it, you get to write code that writes code that you then get to debug. Lovely!
Even in more complex frameworks, like React, you have tools like JSX that map pretty directly to HTML, and in my experience a lot of the hard-to-debug problems come up when the framework tries to get smart and doesn't just stupidly pop out HTML.
For stuff that is uncomplicated I much prefer Svelte, as it still keeps the wall between frontend/backend but lets you do a lot of "yolo frontend" that is short-lived and gets fixed. I run a small startup on the side: Svelte FE + Clojure BE. It works great, as I have a different tolerance for crap in the frontend (if I can fix something with style="", I do, and I don't care). I often hotfix a lot of stuff in the front where I can and just deploy, then return later and find a better solution that involves some changes in the backend.
I can't imagine that for moving a button I would have to do deployment dance for whole app that in my case has 3 components(where one is distributed and requires strict backwards compat).
FWIW I turned off JavaScript on my iPad a couple years ago ... what a relief!
I have nothing against JS, but the sites just became unusably slow
In the last years, for every layer of web development, what I saw was that a big smelly pile of problems with bad websites and webapps, be it MPA or SPA, was not a matter of bad developers on the product, but more a problem of bad, sometimes plain evil, developers on systems sold to developers to build their product upon. Boilerplate for apps, themes, ready-made app templates are largely garbage, bloat, and prone to supply chain attacks of any sort.
(I'm not actually arguing with you, just thinking out loud)
This is often repeated but I don't think it even close to a primary reason.
The primary reason you build JS web clients is for the same reason you build any client: the client owns the whole client app state and experience.
It's only a fluke of the web that "MPA" even means anything. While it obviously has its benefits, we take for granted how weird it is for a server to send UI over the wire. I don't see why it would be the default to build things that way except for habit. It makes more sense to look at MPA as a certain flavor of optimization and trade-offs imo which is why defaulting to MPA vs SPA never made sense now that SPA client tooling has come such a long way.
For example, SPA gives you the ability to write your JS web client the same way you build any other client instead of this weird thing where a server sends an initial UI state over the wire and then you add JS to "hydrate" it, and then ensuring the server and client UIs are synchronized.
Htmx has similar downsides of MPAs since you need to be sure that every server endpoint sends an html fragment that syncs up to the rest of the client UI assumptions. Something as simple as changing a div's class name might incur html changes across many html-sending api endpoints.
Anyways, client development is hard. Turns out nothing was a panacea and it's all just trade-offs.
This pretty much sums it up. There is no right technology for the wrong developer.
It's not about what can get the job done, it's about the ergonomics. Which approach encourages good habits? Which approach causes the least amount of pain? Which approach makes sense for your application? It requires a brain, and all the stuff that makes up a good developer. You'll never get good output from a brainless developer.
You did write it once before too. With NodeJS you have JavaScript on both sides; that's the selling point. You still have server and client code, and you can write an MPA with NodeJS.
These are two different things and I don't see how they're related. You don't need code sharing to do client side navigation. And you should always be validating on the backend anyway. Nothing is stopping an MPA from validating on the client, whether you can do code sharing or not.
This never panned out because people are too afraid to store meaningful state on the client. And you really can't, because of (reasonable) user expectations. Unlike a Word document, people expect to be able to open word.com and have all their stuff, and have n simultaneous clients open that don't step on one another.
So to actually do anything you need a network request but now it's disposable-stateful where the client kinda holds state but you can't really trust it and have to constantly refresh.
Yes... but some people like me just don't like JS, so for us that was actually a rebuttal.
I think the root cause of this is lack of will/desire to spend time on the finer details, either on the part of management who wants it out the door the second it's technically functional or on the part of devs who completely lose interest the second that there's no "fun" work left.
Not sure about that. SPA’s load 4MB of code once, then only data.
Now look at a major news front page, which loads 10MB for every article.
A black & white view of development and technology is easy but not quite correct. Technology decisions aren't "one size fits all".
This is only sort of true. The problem can be mitigated to a large extent by frameworks; as the framework introduces more and more 'magic' the work that the developer has to do decreases, which in turn reduces the surface area of things that they can get wrong. A perfect framework would give the developer all the resources they need to build an app but wouldn't expose anything that they can screw up. I don't think that can exist, but it is definitely possible to reduce places where devs can go astray to a minimum.
And, obviously, that can be done on both the server and the client.
I strongly suspect that as serverside frameworks (including things that sit in the middle like Next) improve we will see people return to focusing on the wire transfer time as an area to optimize for, which will lead apps back to being more frontend than backend again. Web dev will probably oscillate back and forth forever. It's quite interesting how things change like that.
Supposedly declarative approaches especially are my pet peeve. “Tell it what you want done, not how you want it done” is nice sounding but generally disappointing when I soon need it to do something not envisioned by its creator yet solved in a line or two of general purpose/imperative code.
The average mid-sized business seems to have internalized that code is always a liability, but they respond by cutting short discovery and get their just deserts.
You can see the cracks in Next.js. Vercel, Netlify et al. are interested in capitalizing on the murkiness (the middle, as you put it) in this space. They promise static performance but then push you into server(less) compute so they can bill for it. This has a real toll on the average developer. In order for a feature to be a progressive enhancement, it must be optional. This is orthogonal to what is required for a PaaS to build a moat.
All many people need is a pure, incrementally deployed SSG with a robust CMS. That could exist as a separate commodity, and at some points in the history of this JAMStack/Headless/Decoupled saga it has come close (excluding very expensive solutions). It's most likely that we need web standards for this, even if it means ultimately being driven by commercial interests.
But we don’t have JS devs.
We have a team of Python/PHP/Elixir/Ruby/whatever devs and are incredibly productive with our productivity stacks of Django/Laravel/Phoenix/Rails/whatever.
HTML5 solved that to a first approximation client-side. Often later you'll need to reconcile with the database and security, so that will necessarily happen there. I don't see that being a big trade-off today.
The web still requires too much code and concepts to be an enjoyable dev experience, much less one that you can hold in your head. Web frameworks don't really fix this, they just pile leaky abstractions on that require users to know the abstractions as well as the things they're supposed to abstract.
It seems like it is difficult to truly move webdev forward because you have to sell to people who have already bought into the inessential complexity of the web fully. The second you try to take part of that away from them, they get incensed and it triggers loss aversion.
drain the swamp man
I tried using Angular in 2019, and it nearly sank me. The dependency graph was so convoluted that updates were basically impossible. Having a separate API meant that I had to write everything twice. My productivity plummeted.
After that experience, I realized that what works for a front-end team may not work for me, and I went back to MPAs with JavaScript sprinkled in.
This year, I've looked at Node again now that frameworks like Next offer a middle ground with server-side rendering, but I'm still put off by the dependency graphs and tooling, which seems to be in a constant state of flux. It seems to offer great benefits for front-end teams that have the time to deal with it, but that's not me.
All this to say pick the right tool for the job. For me, and for teams going fuller stack as shops tighten their belts, that's tech like HTMX, sprinkled JavaScript, and sometimes lightweight frameworks like Alpine.
I'd add a couple features if I were working there (making css changes and multiple requests to multiple targets standard), but as it stands, it's a pleasure to work in.
Next.js, for example, comes packed with anything and everything one might need to build an app, sitting on the promise of hyperproductivity with "simplicity". Plus, it's made of a set of single-responsibility modules, kind of necessary to build a solve-all-needs framework.
And it does that.
A bit like Angular, set to solve everything front-side. With modules not entirely tightly coupled but sort of to get the full solution.
And it did that.
Then we have outliers like React, which stayed away from trying to solve too many things. But the developers have spoken, and soon enough it became packed in with other frameworks, Gatsby etc., and community "plug-ins" to do that thing that devs think should be part of the framework.
And they did that, solved most problems from authentication to animation, free and open source sir, so that developers can write 12 lines of code and ship 3 features per day in some non innovative way, but it works, deployed in the next 36 seconds, making the manager happy as he was wondering how to justify over 100k in compensation going to a young adult who dressed cool and seemed to type fast.
Oh no! dependency hell. I have to keep things maintained, I have to actually upgrade now, LTS expired, security audits on my back, got to even change my code that worked perfectly well and deal with "errors", I can't ship 3 features by the end of today.
We need a new framework!
The scope has been reduced to almost nothing. I spent like 20h on it in 2022. But it's still being used.
Django helps by how boring and solid it feels.
A similar project in node would probably not build anymore
So much for saying python packaging sucks.
Django can also serve a boatload of concurrent users, way more than one would think. It is a boring, old-fashioned, but stable and very functional framework.
Npm projects are likely the most bloated by far, but so are Java-based projects; just look at Spring.
Calling out a Python framework or some lean Go solution as the exception to the rule is fair enough; my point is developers expect everything and to have little to do beyond rapid, painless development.
I would love to hear from those asked to migrate their Python 2.7 Django ecommerce app to Python 3, since v2 is totally dead and unmaintained, posing serious security risks. But sure, if we forgo these things, and don't ever need to touch the code again, some frameworks have no downside. It makes a certain kind of developer finally right.
I don't know where the current fashion of minimalism comes from. It doesn't bring simplicity.
While I agree with your comment, lumping these together somehow doesn't seem fair. As for C and C++, it took decades to develop package managers, and we still can't say we have standard ones (but I feel Conan is a de facto standard PM for C++).
Bash, on the other hand, should never have 'batteries included', because in this case the batteries are the rest of the system: coreutils and the rest. And Perl had CPAN quite early on, in the early nineties IIRC.
I previously used HTMX for another project of mine, and it worked fine too. I did, however, feel limited compared to React because of what's available.
All that being said, I'm glad HTMX worked out for you!
For bound variables you can use MobX or signals in Preact.
Is it deprecated?
jQuery and PHP have entered the chat
hypermedia isn't ideal for everything[1], but it is an interesting & useful technology and libraries like htmx make it much more relevant for modern development
we have a free book on practical hypermedia (a review of concepts, old web 1.0 style apps, modernized htmx-based apps, and mobile hypermedia based on hyperview[2]) available here:
https://hypermedia.systems
[1] - https://htmx.org/essays/when-to-use-hypermedia/
[2] - https://hyperview.org/
> introduction
> htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
> htmx is small (~14k min.gz’d), dependency-free, extendable, IE11 compatible & has reduced code base sizes by 67% when compared with react
This tells me what htmx does and what some of its properties are, but it doesn’t tell me what htmx is! You might want to borrow some text from your Documentation page and put something like the following at the top of your homepage:
“htmx is a dependency-free, browser-oriented javascript library that allows you to access modern browser features directly from HTML.”
Can be achieved in MPAs and SPAs alike. I'd also argue that having state floating around in HTTP requests is harder to reason about than having it contained in a single piece in the browser or in a server session. Granted this is not a problem of HTMX, but of hypermedia. There is a reason why HATEOAS is almost never observed in REST setups.
> two-codebase problem
This is a non-problem. In every part of a system, you want to use the right tool for the job. Web technologies are better for building UIs, if only by the sheer amount of libraries and templates that already exist. The same splitting happens on the server side: you would have a DB server, and a web service, maybe a load balancer. You naturally have many parts in a system, each one specialized in one thing, and you pick the technologies that make the most sense for each of them. I'd also argue that backend developers would have a hard time dealing with the never-ending CSS re-styling and constant UI change requests of today. This is not 2004, where the backend guys could craft a quick HTML template in a few hours and go back to working in the DB unmolested. The design and UX bar is way higher now, and specialists are naturally required.
I saw the HTMX creator floating around the thread, so hopefully he can confirm, but my understanding is HATEOAS is a specific implementation of a REpresentational State Transfer API. JSON is often used for the API; HTMX uses HTML instead, but it is indeed still a REST API transferring state across the wire.
My shift key really doesn't appreciate all these abbreviations
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
HATEOAS (or the hypermedia constraint) is a requirement for a REST-ful system, as the creator of the term (Roy Fielding) says here:
https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
I have a rewritten wikipedia article on HATEOAS here:
https://htmx.org/essays/hateoas/
And I agree it doesn't make much sense outside of a hypermedia system (like the web), being consumed by humans:
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
In the OPs article, it looks like the only thing going over the line is UUIDs. How does the server know "this uuid refers to this element"? Does this require a sticky session between the browser and the backend? Are you pushing the state into a database or something? What does the multi-server backend end up looking like?
In a serverless read-only app, all business logic and state is maintained on the browser.
A possible extension to HTMX would be to allow this kind of offloading to pure JS functions instead of requiring hacky intercepts.
You would still have a clear separation of responsibilities between frontend rendering (by the browser only) and application logic (which only generates HTML as output).
Yet another useless Java gimmick with no support other than 1 single framework, Spring Boot.
If htmx has anything to do with HATEOAS it's going to be ignored out of principle.
HATEOAS (or, as Fielding prefers to call it, the hypermedia constraint) is a necessary component of a truly REST-ful networking system, but unfortunately the language around REST is all jumbled up.
I try to explain why this is here:
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
and I try to explain what HATEOAS really is here:
https://htmx.org/essays/hateoas/
and I try to explain why HATEOAS has been, by and large, a failure outside of true hypermedia clients presenting directly to humans here:
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
Is your problem that there aren't many implementations of it today, or concerns with the architecture itself?
this is called progressive enhancement[1], and yes, htmx can be used in this manner although it requires some effort by the developer
unpoly, another hypermedia-oriented front end library, is more seamless in this regard and worth looking at
[1] - https://developer.mozilla.org/en-US/docs/Glossary/Progressiv...
A site for a project of mine [1] is built with HTMX and operates more or less the same for JS and no-JS users.
I’m aiming to add some bells and whistles for JS users but the version you see there is more or less the experience non-JS users gets too:
1. https://www.compactdata.org
[1] https://developer.mozilla.org/en-US/docs/Glossary/Progressiv...
Whoa... I was very slow apparently
These redesigns would be a lot more difficult if we had to edit HTML on the client and the HTML that a server returns.
Also, HTMX is best styled with semantic classes, which is a problem for companies using Tailwind and utility classes in their HTML. With class-heavy HTML it's nearly impossible to redesign in two different places. And performance suffers from returning larger chunks of HTML.
Despite all that, I want HTMX to be the standard way companies develop for the web. But these 2 problems need to be addressed first, I feel, before companies (like mine) take the leap.
I looked at a bunch of frameworks before settling on Dart/Flutter for my own cross-platform projects. I did look at htmx, but since I wasn't really looking to create a web app I moved on. But I like the idea of a true REST style of app.
so you can, for example, deploy a new version of your mobile app without updating the client, a big advantage over needing users to update their mobile apps
Hypermedia advances would be microformats and RDF and the like. http://microformats.org/wiki/faqs-for-rdf
we generalize HTML's hypermedia controls in the following way:
- any HTML element can become a hypermedia control
- any event can drive a hypermedia interaction
- any element can be the target of a hypermedia interaction (transclusion, a concept in hypermedia not implemented by HTML)
all server interactions are done in terms of hypermedia, just like w/links and forms
it also makes PUT, PATCH and DELETE available, which allows HTML to take advantage of the full range of HTTP actions
htmx is a completion of HTML as a hypermedia, this is its design goal
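Concretely, and assuming htmx is loaded and the endpoint exists, those generalizations plus the extra verbs look something like this:

    <!-- a button (not a link or form) issues a DELETE on click and swaps
         a different element entirely with the HTML the server returns -->
    <button hx-delete="/contacts/42"
            hx-trigger="click"
            hx-target="#contact-list"
            hx-swap="outerHTML">
      Delete contact
    </button>

    <div id="contact-list">...list rendered by the server...</div>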
When looking at the various options, I always enjoyed your architectural choice of htmx being an extension of html, for that very reason. Similar to "phonegap" hoping that the phonegap code base would get smaller and smaller as mobile browsers built more of those features natively. :)
my sense is that HTML is constrained by social/organizational issues rather than technical ones at this point
hopefully someone on the chrome team notices htmx at some point and HTML starts making progress again
the browser vendors have been more than happy to use experimental features to chart their own course, which I think can be a good thing to spawn innovation and healthy competition. (Given the standards bodies will be slower and more prudent - similar to how Python doesn't want pydantic to be part of Python core, because that would hurt pydantic's innovation, not improve it.)
Maybe the way someone from the chrome team could tap into "business value" of "let's build these htmx features in chrome" would be that it allows developers to write "internal/developer/crud apps" where a "only supported in chrome" is acceptable.....
There are a ton of additional features built into HTMX, but I'd love to see just this basic primitive built into browsers. It's related to the element transitions API that has been working its way into browsers, but approaches it from the angle of HTML partials instead of diffing two full pages during SPA navigation.
if browsers got into the game I would assume they could do things much faster and integrate things like preload (https://htmx.org/extensions/preload/) and idiomorph (https://github.com/bigskysoftware/idiomorph/) much more cleanly w/ the rest of the browser infrastructure
Like with HTMX, SvelteKit and Remix forms won't function properly without the framework.
what makes htmx a hypermedia framework is the exchange of hypermedia with the server, this satisfies the hypermedia constraint (HATEOAS) of REST. there are other libraries that are also hypermedia oriented, such as unpoly.
it is a different approach to building web applications than the JSON/RPC style that is popular today
i encourage you to read the linked article, and, if it is interesting to you, the essays at https://htmx.org/essays, and then potentially https://hypermedia.systems
for your example, you wouldn't have a div, you'd use an anchor, or, more likely, just boost it (rough sketches below); then on the server side you'd check the `HX-Request` header to determine whether to render an entire page or just a partial bit of HTML.
If you go down the progressive enhancement route you need to think carefully about each feature you implement and how to make it compatible w/ no-JS. Some patterns (e.g. active search) work well. Others (drag and drop) don't, and you'll have to forgo them in the name of supporting no-JS.
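Rough sketches of the two options (URL and target id are hypothetical; htmx assumed loaded):

    <!-- plain anchor, enhanced: fetch the fragment and swap it into #main -->
    <a href="/contacts" hx-get="/contacts" hx-target="#main" hx-push-url="true">Contacts</a>

    <!-- or just boost the ordinary link and let htmx take over the navigation -->
    <a href="/contacts" hx-boost="true">Contacts</a>

The plain href still works with JS disabled, which is what makes the progressive enhancement story possible at all.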
nb: unpoly is a more seamless progressive enhancement experience due to its design goals
The only thing that might cause trouble is non-standard (as in HTML standard) HTTP methods, which basically means any method other than GET and POST, I admit that. However, the fact that these methods are not supported even in HTML5 is a huge miss.
[1]: https://htmx.org/reference/#request_headers
They don't work fine if the users/stakeholders don't find it acceptable to render the result to the full window instead of the hx-swap style area, or to spend extra time on the backend making it render the whole thing.
Actually, this is one area were SvelteKit has it beat, because the backend is done by SvelteKit, and you don't have to manually deal with hx-swap not taking effect.
We've moved back to an MPA structure with decorated markup to add interactivity like scroll views, fetching data, tabs and other common UX use cases. If you view the source on yahoo.com and look for "wafer," you can see some examples of how this works. It helps to avoid bundle size bloat from having to download and compile tons of JS for functionality to work.
For a more complex, data-driven site, I still think the SPA architecture or "islands" approach is ideal instead of MPA. For our largely static site, going full MPA with a simple client-side library based on HTML decorations has worked really well for us.
At all of Yahoo? I imagined such a big company would have a variety of front-end frameworks and patterns.
What library are you using?
This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).
[edit]
I also just noticed:
> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
The part about "slow and unreliable internet connections" is not specific to SPAs If anything a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
[edit2]
> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.
This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
--
I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?
I agree with everything else you said, but having followed the development of Kotlin/JS and WASM closely I have to disagree with this statement.
JavaScript is a very bad compilation target for any language that wasn't designed with JavaScript's semantics in mind. It can be made to work, but the result is enormous bundle sizes (even by JS standards), difficult sourcemaps, and terrible performance.
WASM has the potential to be great, but to get useful results it's not just a matter of changing the compilation target, there's a lot of work that has to be done to make the experience worthwhile. Rust's wasm_bindgen is a good example: a ton of work has gone into smooth JS interop and DOM manipulation, and all of that has to be done for each language you want to port.
Also, GC'd languages still have a pretty hard time with WASM.
The word "slow" here is unclear. Thick clients work poorly on low bandwidth connections, as the first load takes too long to download the JS bundle. JS bundles can be crazy big and may get updated regularly. A user may give up waiting. Thin clients may load faster on low bandwidth connections as they can use less javascript (including zero javascript for sites that support progressive enhancement, my favorite as a NoScript user). Both thin and thick clients can use fairly minimal data transfer for follow-up actions. An HTMX patch can be pretty small, although I agree the equivalent JSON would be smaller.
If "slow" means high latency, then you're right, a thick client can let the user interact with local state and the latency is only a concern when state is being synchronized (possibly with a spinner, or in the background while the user does other things).
Unreliable internet is unclear to me. If the download of the JS bundle fails, then the thick client never loads. A long download time may increase the likelihood of that happening. Once both are loaded, the thick client wins as the user can work with local state. Both need to sync state sometimes. The thin client probably needs the user to initiate retry (a poor experience) and the thick client could support retry in the background (although many don't support this).
A highly complex stock-trading application should absolutely not be using Htmx.
But a configuration page? A blog? Any basic app that doesn't require real-time updates? Htmx makes much more sense for those than React. And those simple needs are a much bigger part of the internet than the Hacker News crowd realizes or wants to admit.
If I could make one argument against SPA's it's not that they don't have their use, they obviously do, it's that we're using them for too much and too often. At some point we decided everything had to be an SPA and it was only a matter of time before people sobered up and realized things went too far.
It's like with static websites - we went from static to blogs rendered in php and then back to jekyll...
The thing that keeps holding me back from htmx is that it breaks Content Security Policy (CSP), which means you lose an effective protection against XSS.[0] When I last asked the maintainer about this, the response was that this was unlikely to ever change.[1]
Alpine.js, a similar project to htmx, claims to have a CSP-compatible version,[2] but it's not actually available in any official builds.
[0] https://htmx.org/docs/#security
[1] https://news.ycombinator.com/item?id=32158352
[2] https://alpinejs.dev/advanced/csp
[3] https://github.com/alpinejs/alpine/issues/237
Too much abstraction (especially leaky abstraction the way web frameworks are) makes it difficult to reason about your application.
But if you optimize for absolute minimal abstraction, then you can get stuck with code that's very repetitive where it's hard to pick apart the business logic from all the boilerplate.
Am I misunderstanding? If I can use htmx without sacrificing the benefits of CSP, I'd really love to use htmx.
[0] https://htmx.org/docs/#security
Suppose your website displays user-generated content (like HN posts). If the attacker finds a way to bypass encoding and instead injects JS, then without CSP, the attacker gets XSS at that point. With CSP, even if the attacker can get user-generated content to render as JS, the browser will refuse to execute it.
My understanding of htmx is that the browser would still refuse to execute standard JS, but the attacker can achieve XSS by injecting htmx attributes that are effectively arbitrary JS.
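A hypothetical illustration of that style of injection (the path is made up): markup like this contains no script tag at all, so a script-src policy alone doesn't stop htmx from acting on it once the library is loaded.

    <!-- injected into a user-generated comment; fires a state-changing request as soon as it renders -->
    <div hx-post="/account/delete" hx-trigger="load"></div>

htmx's own security docs suggest wrapping untrusted content in hx-disable for this reason, but that's an opt-out you have to remember, rather than the default-deny posture CSP gives you for script.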
What do you see as the difference?
This article makes its case about Htmx, but points out that its argument applies equally to Hotwired (formerly Turbolinks). Both Htmx and Hotwired/Turbolinks use custom HTML attributes with just a little bit of client-side JS to allow client-side requests to replace fragments of a page with HTML generated on the server side.
But Turbolinks is more than ten years old. React was born and rose to popularity during the age of Turbolinks. Turbolinks has already lost the war against React.
The biggest problem with Turbolinks/Htmx is that there's no good story for what happens when one component in a tree needs to update another component in the tree. (Especially if it's a "second cousin" component, where your parent component's parent component has subcomponents you want to update.)
EDIT: I know about multi-swap. https://htmx.org/extensions/multi-swap/ It's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client. If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
Furthermore, in Turbolinks/Htmx, it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background. This means that the user always has to wait for a server round trip to create a TODO item, hurting the user experience. It's unacceptable on mobile web in particular.
When predicting the future, I always look to the State of JS survey https://2022.stateofjs.com/en-US/libraries/front-end-framewo... which asks participants which frameworks they've heard of, which ones they want to learn, which ones they're using, and, of the framework(s) they're using, whether they would use it again. This breaks down into Awareness, Usage, Interest, and Retention.
React is looking great on Usage, and still pretty good on Retention. Solid and Svelte are the upstarts, with low usage but very high interest and retention. Htmx doesn't even hit the charts.
The near future is React. The further future might be Svelte or Solid. The future is not Htmx.
> no good story for what happens when one component in a tree needs to update another component in the tree
HTMX has a decent answer to this. Any component can target replacement for any other component. So if the state of everything on the page changes then re-render the whole page, even if what the user clicked on is a button heavily nested.
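A rough sketch of that pattern (ids and endpoint are made up): the control is buried in the tree, but it targets and replaces the whole page region with whatever HTML the server sends back.

    <main id="page">
      <section>
        <ul>
          <li>
            <!-- deeply nested control that swaps out #page entirely -->
            <button hx-post="/todos/7/toggle"
                    hx-target="#page"
                    hx-swap="outerHTML">
              Done
            </button>
          </li>
        </ul>
      </section>
    </main>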
> it's impossible to implement "optimistic UI," ... hurting the user experience
Do we actually need optimistic UI? Some apps need to work in offline mode sure, like offline maps or audiobooks or something. The HTMX author agrees, this is not the solution for that. Most of the stuff I have worked on though ... is useless without an internet connection.
In the case of "useless without an internet connection", do we really need optimistic UI? The actual experience of htmx is incredibly fast. There is no overhead of all the SPA stuff. No virtual DOM, hardly any JS. It's basically the speed of the network. In my limited practice I've actually felt the need to add delays because the update happens _too fast_.
I'm still evaluating htmx but not for any of the reasons you've stated. My biggest concern is ... do I want my api to talk in html?
> It's basically the speed of the network.
Does your stuff work on mobile web? Mobile web requests can easily take seconds, and on a dodgy connection, a single small request can often take 10+ seconds.
The difference between optimistic UI and non-optimistic UI on mobile web is the difference between an app that takes seconds to respond, on every click, and one that responds instantly to user gestures.
Optimistic UI probably isn't necessary for a web site, but you'll certainly want it for a web app (which is what Htmx claims to be good for).
In the real world on the mobile web we actually have, TODO apps (which is what TFA is about), calendars, notes apps, etc. all work better with client-side state synchronized to the server in the background.
React has a bunch of good libraries for this, especially TanStack React Query.
Htmx doesn't.
Hah! Good luck getting React even to start up in that environment. Meanwhile oldschool HN will still be snappy. Speaking from experience.
In my experience optimistic UI updates don't actually make much sense if you expect users to regularly see large delays. Optimistic updates are great, though, to avoid the jank of a loading state that pops in/out of view for a fraction of a second.
https://htmx.org/essays/splitting-your-apis/
Must be why they're called “Recursive Doubts”.
Of course, I've not used Turbolinks, so I don't know what issues applied there.
Edit: I'm not saying htmx is the future either. I'd love to see how they handle offline-first (if at all) or intermittent network connectivity. Currently most SPAs are bad at that too...
Multi-swap is possible, but it's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client.
If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.
https://htmx.org/img/memes/whowillwin.png
Huh, no one told me this before, so I've been very easily doing it with htmx's 'out of band swap' feature. If only I'd known before that it was impossible! ;-)
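For anyone who hasn't run into it: the server's response to a single request can carry extra fragments marked hx-swap-oob, and htmx swaps each of them into the element with the matching id wherever it sits on the page (ids and content here are invented):

    <!-- response to e.g. POST /todos: the main swap target plus two out-of-band updates -->
    <ul id="todo-list">
      ...updated list...
    </ul>

    <span id="todo-count" hx-swap-oob="true">12 items</span>
    <div id="flash-message" hx-swap-oob="true">Saved.</div>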
If it's teams of 10X devs working around the world to make the next great Google-scale app, then yeah, maybe React or something like it is the future.
If it's a bunch of individual devs making small things that can be tied together over the old-school Internet, then something like HTMX moves that vision forward, out of a 90-00s page-link, page-link, form-submit flow.
Of course, the future will be a bit of both. For many of my various project ideas, something like React is serious overkill. Not even taking into account the steep learning curve and seemingly never-ending treadmill of keeping current.
Pretty common patterns for this: just use a sprinkle of client-side JS (one of: hx-on, Alpine, jQuery, hyperscript, vanilla JS, etc.), then trigger an event for htmx to do its thing after a while, or use the debounce feature if it's only a few seconds. Lots of options, actually.
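For instance, the delay modifier on hx-trigger already covers the usual debounce case (field name and endpoint are hypothetical):

    <!-- only fire the request once the user has stopped typing for 500ms -->
    <input type="search" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:500ms"
           hx-target="#results">
    <div id="results"></div>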
React would have to eventually contact the server as well if we're talking about an equivalent app.
You can split a large app into pages and then each page only has to care about its own parts (sub components). If you want some component to be used on multiple pages you just create it with the server technology you use and include it. The other components on the page can easily target it. You may have some problem if you change a shared component in such a way that targeting stops working. You may be able to share the targeting code to make this easier.
From the February 31, 1998 Hacker News archives: "According to state of the web survey, Yahoo and Altavista are looking great on usage, Hotbot and AskJeeves are the upstarts. Google doesn't even hit the charts."
Also, it seems so cyclic, isn't HTMX/Hotwire similar to Java JSP's which was how things were before SPA's got popular?
2) The missing piece is how you can achieve this "collapsing" back of functionality into single SSR deployable(s) while still preserving the ability to scale out a large web application across many teams. Microfrontends + microservices could be collapsed into SSR "microapplications" that are embedded into their hosting app using iframes?
I see a lot of resemblance to http://catalyst.rocks with WebComponents that target other components. I think there's something unspoken here that's really powerful & interesting, which is the declarativization of the UI. We have stuff on the page, but making the actions & linkages of what does what to what has so far been trapped in code-land, away from the DOM. The exciting possibility is that we can nicely encode more of the behavior into the DOM, which creates a consistent learnable/visible/malleable pattern for wiring (and rewiring) stuff up. It pushes what hypermedia can capture into a much deeper zone of behaviors than just anchor-tag links (and listeners, which are jump points away from the medium into codespace).
Yes! There's always going to be some range of client behavior that's difficult to reduce to declarations, but so much of what we do is common that if it isn't declarative we're repeating a lot of effort.
And in general I think you're describing a big part of what made the web successful in the first place; the UI-as-document paradigm was declarative, accessible, readable, repeatable.
> but so much of what we do is common that if it isn't declarative we're repeating a lot of effort.
We're not only repeating effort, we're also using artisanal approaches to wiring things up. Hand-writing handlers is repeated work with low repeatability; there'll be a variety of forms & ways & places people end up writing similar-ish handlers, & creating the data model to pass all the references/targets around.
Being more declarative is not only less work, it also ought to be much higher quality, more predictable, with a lot fewer vagaries of implementation. It'll make it much easier to comprehend & maintain. Less work, for more repeatable/consistent outcomes.
Web “app” development finally catching up to where Visual Basic and Delphi were ~30 years ago, hurrah!
They have a big 2.0 branch that has a ton of internal changes. TypeScript 5.0 finally updated to support modern @decorator syntax & they're rewriting a bunch to support that, which is excellent.
In my opinion, the future of the web as a platform is about viewing the web browser as an operating system with basic composable primitives.
HTMX adds attributes to HTML using JS, and the argument about "no JavaScript" is misleading: with HTMX you can write interactions without writing JS, but HTMX itself uses JS. But, as it forces you to use HTML constructs that will work without scripts (such as forms), the page will fall back. It doesn't mean that the fallback is usable.
The custom HTMX attributes work because the browser supports extensions of its behavior using JS. If we add those attributes to the standard HTML, the result is more fragmentation and an endless race. The best standard is one that eliminates the need for creating more high-level standards. In my view, a possible evolution of WASM could achieve that goal. It means going in the opposite direction of the article, as clients will do more computing work. In a future like that, you could use HTMX, SwiftUI, Flutter, or React to develop web apps. The biggest challenge is to balance a powerful OS-like browser like that with attributes like searchability, accessibility, and learnability (the devtools inspector and console is the closest thing to Smalltalk we have today)... even desktop OSs struggle today to provide that.
I've been on the sidelines for the better part of a decade for frontend stuff, but I was full-stack at a tiny startup in 2012ish that used Rails with partial fragments templates for this. It needed some more custom JS than having a "replacement target" annotation everywhere, but it was pretty straightforward, and provided shared rendering for the initial page load and these updates.
So, question to those who have been active in the frontend world since then: that obviously failed to win the market compared to JS-first/client-first approaches (Backbone was the alternative we were playing with back then). Has something shifted now that this is a significantly more appealing mode?
IIRC, one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
> one of the big downsides of that "partial" approach in comparison with SPA-approaches was that we had to still write those JSON-or-XML-returning versions of the endpoints as mobile clients became more prevalent. That seems like it would still be an issue here too.
Yup. Still, if you're at the scale where you need to support multiple clients, things should be going well enough where you can afford the extra work.
As soon as multiple clients are involved, you're writing SOMETHING to support specifically that client. 10+ years ago, you'd be writing those extra conditionals to return JSON/XML _and_ someone is building out this non-browser client (mobile app, third party API, whatever). But you're not rearchitecting your browser experience so that's the tradeoff.
> Has something shifted now that this is a significantly more appealing mode?
React especially led from one promise to another about _how much less code_ you'd have to write to support a wide range of clients, when in reality there was always another configuration, another _something_ to maintain when new clients were introduced. On top of that, the mobile device libraries (React Native, etc), were always steps behind what a true native app UX felt like.
I think a lot of us seasoned developers just feel burned by the SPA era. Because of how fast it is to iterate in JS, places like npm would seemingly have just the right component needed to avoid having to build custom in-house, and it's simply an `npm add` and an import away. Meanwhile, as the author states, React and company changed a lot under the hood rapidly, so dependencies would quickly become out of date, and maintaining a project full of decaying 3rd-party libs becomes its own tech-debt nightmare. Just for, say, popper.js or something like that.
I'm just glad the community seems to actively be reconsidering "the old ways" as something valuable worth revisiting after learning what we learned in the last decade.
SEO I personally think is a questionable motivation except in very specific use cases.
Speed is almost compelling but the complexity cost and all the considerations around how a page is structured (which components are server, which are client, etc) does not seem worth the complexity cost IMO. Just pop a loading animation up in most cases IMO.
I think I'm stuck somewhere in the middle between old-hacker-news-person yelling "lol were just back at index.html" and freshly-minted-youtube-devs going "this is definitely the new standard".
At this rate, when I'm 80 years old we will still be fucking around with these stupid lines of code, hunched over, ruining our eyesight, becoming ever more atrophied, all to make a fucking text box in a monitor pop some text into a screen on another monitor somewhere else in the world. It's absolutely absurd that we spend this much of our lives to do such a dumb thing, and we've been iterating on it for five decades, and it's still just popping some text in a screen, but we applaud ourselves that we're so advanced now because something you can't even see is doing something different in the background.
Do an Internet search for “Quartex Pascal”, and/or its creator, Jon Aasenden. He has a blog on WordPress, and a Facebook group. His crazy Quartex project is apparently nearing completion. It's an Object Pascal compiler and IDE — kind of a Delphi / Lazarus clone, if you will — that compiles to JavaScript, for the end product to be run in the browser. I think that's as close to “Visual Basic for the web” as one can get.
https://news.ycombinator.com/item?id=18981806
It’s a pleasure to work with so little boilerplate.
A lot of the comments here seem to take the approach that there is a single best stack for building web applications. I believe this comes from the fact that, as web engineers, we have to choose which tech to invest our careers in, which is inherently risky. Spend a couple of years on something that becomes defunct and it feels like a waste. Also, startup recruiters are always looking for the tech experience that matches the choice of their companies. VCs want to strike while the iron is hot.
Something that doesn't get talked about enough (which the author does mention near the end of article) is that different web apps have different needs. There is 100% a need for SPAs for certain use cases. Messaging, video players, etc. But there are many cases where it is overkill, like the many many CRUD resource apps I've built over the years. Say you have a couple hundred users that need to manage the state of a dozen interconnected resources. The benefits of an MPA are great here. Routing is free, no duplication of FE / BE code. Small teams of devs can ship code and fix bugs very fast which keeps the user feedback loop tight.
A hypermedia approach is the nice happy medium between a very static website and an SPA, not sure why so many people are close-minded about this possibility.
> without the annoying full-page load refresh.
This fixation on the page refresh needs to stop. Nearly every single website which has purportedly "saved" page refreshes has brutalized every other aspect of the UX.
This is a good article, and I agree that Htmx brings sanity back to the frontend, but somewhere along the line frontend folks got it in their head that page refreshes were bad, which is incorrect for essentially all CRUD / REST APIs. Unless you're specifically making a complex application that happens to be served through the web, like Kibana or Metabase, then stop harping on page refreshes.
Even this article calls it the annoying refresh. Not the impediment refresh, or the derisive refresh, or the begrieved refresh. Moreover, what exactly is annoying about page refreshes? That there's a brief flash? That it takes ~0.3 seconds to completely resolve?
Users don't care about page refreshes, and in fact they are an indication of normalcy. Upending the entire stack and simultaneously breaking expected functionality to prevent them is madness.
The killer feature of Htmx is that it doesn't upend the entire stack, and you can optimize page refreshes relatively easily. That's great! But even then I'm still not convinced the tradeoff is worth it.
I'm not seeing it. SPAs can be overly complex and have other issues, but I'm not seeing HTMX as a particular improvement.
Also, a bunch of this article doesn't make sense to me.
E.g., one of the listed costs of SPAs is managing state on the client and server... but (1) you don't have to -- isn't it rather common to keep your app server stateless? -- and (2) HTMX certainly allows for client-side and server-side state, so I'm not sure how it's improving things. That is, if you want to carefully manage app state, you're going to need a mechanism to do that, and HTMX isn't going to help you.
It also doesn't somehow prevent a rat's nest of tooling or dependencies. It isn't an application framework, so this all depends on how you choose to solve that.
SPAs also aren't inherently "very easy to make [...] incorrectly".
Also, the suggested HTMX approach to no browser-side javascript is very crappy. Your app would have to be very specifically designed to not be utterly horrible w/o JS with such an approach and instead be just pretty horrible. There are just so many more straightforward ways to make apps that work well without JS. Also, this isn't exactly a mainstream requirement in my experience.
I could go on and on. "caching" - htmx doesn't address the hard part of caching. "seo-friendliness" - Like all the benefits here attributed to htmx, htmx doesn't particularly help with this and there are many other available ways to achieve it.
IDK. These kinds of over-promising hyped up articles give me the feeling the thing being hyped up probably doesn't have a lot of real merit to be explored or else they'd talk about that instead. It also feels dishonest to me, or at least incompetent, to make all of these claims and assertions that aren't really true or aren't really especially a benefit of htmx vs numerous other options.
Judging from the ones one encounters in the wild, yes they are.
I mean, aren't these baseline "get computers to do stuff" things?
There are many use cases out there where not treating a browser as a container to run an actual application is the right way to go. On the other hand, there are many use cases where you want the browser to be, basically, a desktop app container.
The big bold letters at the top of the article declaring htmx is the future are a bit much. It has its place and maybe people are re-discovering it, but it's certainly not the future of web development IMO. The article gives me a kind of web dev career whiplash.
We had the same and worse problems with "thick clients" that came before the web grew. With the right requirements, team, tools etc., you could sometimes build great apps. This was incredibly difficult and the number of great apps was relatively small. Building with earlier server-side web tech, like PHP, isolated everything on the server and it was easier to iterate well than with the "thick clients" model.
SPAs reinvent "thick clients" to some degree and bring back many of the complications. No one should claim you can't build a great SPA, or that they have few advantages, but the probability of achieving success is frequently lower. Frameworks try to mitigate these concerns, but you are still only closing some of the gaps and the probability of failure remains higher. Depending on the app you can move the success metrics, but we often end up fudging on items like performance.
We get to a point where the current model is fraying and energy builds to replace it with something else. We end up going back to old techniques, but occasionally we learn from what was done before.
I find that it's surprisingly rare for people with 1-2 years of experience to be able to give an accurate overview of the last 10 years of web development. A better understanding of this history can help with avoiding (or targeting) problems old timers have encountered and complain about in comments.
There is a big downside though: weak error handling. It just assumes that your call will get a response.
[1]: https://htmx.org/events/#htmx:timeout
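For what it's worth, htmx does fire events you can hook for failures; a minimal sketch (the event names are htmx's own, the showToast helper is a stand-in):

    // catch failed, dropped, or timed-out htmx requests globally
    document.body.addEventListener('htmx:responseError', (evt) => {
      // evt.detail.xhr is the underlying XMLHttpRequest
      showToast('Request failed with status ' + evt.detail.xhr.status);
    });
    document.body.addEventListener('htmx:sendError', () => showToast('Network error'));
    document.body.addEventListener('htmx:timeout', () => showToast('Request timed out'));

You still have to decide what to do about the half-updated page, of course, so the point about weak error handling stands.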
HTMX is cool. HTMX may fit your needs. But it’s not enough for providing the best possible user experience.
Indeed, the creator of htmx has created another library called hyperscript which he's described as a companion to htmx.
https://hyperscript.org/
But I'll be honest. I'll believe it when I see it. It's not that htmx is bad, but given the complexity of client-side interactions on the modern web, I can't see it ever becoming really popular.
Some of the specifics in the comparisons are always weird, too.
> Instead of one universal client, scores of developers create bespoke clients, which have to understand the raw data they fetch from web servers and then render controls according to the data.
This is about client side apps fetching arbitrary JSON payloads, but your htmx backend needs to do the same work, right? You have to work with the raw data you get from your DB (or another service) and then render based on that data.
You're still coupled to the data, and your htmx endpoint is just as "bespoke" as the client code which uses it. It's not wrong to prefer that work be done on the server instead of the client, or vice versa, but we're really just shuffling complexity around.
In your analogy, the client JS code is like the serverside code, fetching over the network instead of directly from the DB, and then doing essentially the same work from there... materializing html and a set of controls for the user to interact with.
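To make the parallel concrete, a rough sketch of the two endpoint styles (Express used purely for illustration; the db helper and routes are made up):

    // SPA style: ship raw data, let client-side JS build the markup
    app.get('/api/contacts', async (req, res) => {
      res.json(await db.listContacts());
    });

    // htmx style: do the same lookup, but render the fragment on the server
    app.get('/contacts', async (req, res) => {
      const contacts = await db.listContacts();
      res.send('<ul>' + contacts.map(c => `<li>${c.name}</li>`).join('') + '</ul>');
    });

Either way, somebody has to turn rows into markup; the question is just which side of the wire does it.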
In a sense, I see your point.
But there's a difference: When you materialize the html on the server and send that over the wire, the browser does all the work for you. When you take the SPA approach, you must re-implement much of what the browser does in JS, and hence the well-known trouble with routing, history, and so on. You can argue that React/Angular/whatever takes care of this for you at this point, and to some extent it's true, but you're still cutting against the grain. And even as mature as the frameworks are, you will hit weird edge cases sometimes that you'd never have to worry about with the browser itself.
There are definitely advantages to using a multi-page application architecture, which htmx is going to get by default.
But I really don't see a big difference between using JS to replace DOM fragments with server generated HTML, compared to using JS to replace DOM fragments with client generated HTML.
- https://htmx.org/essays/a-real-world-react-to-htmx-port/ by https://github.com/David-Guillot - a SaaS product migrated to htmx from React.
- https://zorro.management/ by https://twitter.com/Telroshan - a kanban project management tool. This one is particularly interesting IMO, it implements quite advanced UI with htmx and some custom JS.
These kinds of takes fall in the bullseye of "I don't want to program with Javascript". The subtext is all about this.
Perhaps.. maybe.. Htmx won't be the future because there are a lot of people that like programming in Javascript?
I've seen these architectures quickly ruined by 'can-do' people who butcher everything to get a feature done _and_ get a bonus from the management for quick delivery.
This seems like the real problem we need to solve, but not sure how?
I don't see the point by the way, I think htmx is here to stay and a good choice for many, but it's clearly not a silver bullet. You make decently fast UIs, not blazing fast ones; there are no (proper) offline-first apps with htmx; caching is likely more difficult or sometimes impossible; and the load on your server is inevitably greater (of course it could be more than acceptable in some cases, so why not?), which also means more bandwidth through your cloud provider as opposed to your CDN. You will still have to write javascript sooner or later.
It depends on what you're doing. Nothing is a priori "the future"; the future is "the future", and it has yet to come.
If anyone is looking to discuss making Hypermedia Driven Applications with HTMX in Python, head over to the discussions there!
The future is whatever works best for your use-case.
So, I posit that the churn, while definitely real, is not actually intrinsic.
Right now, at Latacora, we're writing a bunch of Clojure. That includes Clerk notebooks, some of which incorporate React components. That's an advantage I think we shouldn't ignore: not needing to write my own, say, Gantt chart component, is a blessing. So, specifically: not only do I think the churn is incidental to the problem, I don't even believe you need to give up compatibility to get it.
Fun fact: despite all of this, a lot of what we're writing is in Clerk, and while that's still fundamentally an SPA-style combination of frontend and backend if you were to look at the implementation, it absolutely _feels_ like an htmx app does, in that it's a visualization of your backend first and foremost (React components notwithstanding).
Imagine 10s of thousands of clients requesting millions of HTML fragments be put together by a single server maintaining all the state, while all the powerful high-end computing power at the end user's fingertips goes completely to waste.
Not convinced.
HTML is more verbose, so I would guess that JSON serialization is slightly faster, but I doubt there's an order of magnitude difference. (I could be proven wrong though)
I agree that taking HTMX to the extreme where _all_ interactions require a request to the server is too much overhead for interactive web apps. But there's likely a good middle ground where I can have mainly server side rendering of HTML fragments with a small amount of client side state/code that doesn't incur particularly more or less server load.
HTTP is stateless. This is the whole point of the hypermedia paradigm.
If you have a page with many partial UI page changes over htmx, then yes, this paradigm puts increased load on the server, but your DB will almost certainly be your bottleneck before this will be, just as in the SPA case.
Yes, in HTMX the server is handling client app state, even things as little as whether a todo is in read or edit state.
That just seems absurd to me, let the client take care of that.
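For context, the read/edit toggle being discussed is htmx's click-to-edit pattern, roughly (URLs and markup are illustrative):

    <!-- read state, as returned by GET /todos/42 -->
    <div id="todo-42">
      <span>Buy milk</span>
      <button hx-get="/todos/42/edit" hx-target="#todo-42" hx-swap="outerHTML">Edit</button>
    </div>

    <!-- edit state, as returned by GET /todos/42/edit -->
    <form id="todo-42" hx-put="/todos/42" hx-target="this" hx-swap="outerHTML">
      <input name="title" value="Buy milk">
      <button type="submit">Save</button>
    </form>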
What's unnecessary to me however is sending bytes thousands of miles across the wire to some server to do the same.
I was specifically thinking of modern smartphones in fact, which are pretty damn fast at executing a little bit of JS.
(Though I agree that some of the bloated bundles resulting from modern frameworks or their poor usage definitely go too far)
Bloated JS frameworks like Angular, React, Vue, and Electron have big learning curves and a jillion gotchas because they have to reinvent long-known and loved GUI idioms from scratch, and the DOM, meant for static documents, is inherently defective for that need. There are just too many GUI needs that HTML/DOM lacks or can't do right: https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...
Let's byte the bullet and create a GUI markup standard. Perhaps base it off Tk or Qt kits to avoid starting from scratch.
Heh, fun pun.
> Perhaps base it off Tk or Qt kits to avoid starting from scratch.
VCL / LCL.
The concept is great but why has it taken so long?
https://unpoly.com/
Backend engineers are now able to write management tools and experimental products faster - and then pass the winning products off to a Flutter team to code for all environments. The backend could be converted into a Django REST API if the code is properly refactored.
Moreover, REST APIs - and I mean the simple ones people actually want to use, none of that HATEOAS BS - are ubiquitous for all sorts of interactions between web and nonweb clients. Are you going to ship an MPA as your mobile apps, or are you going to just use REST plus whatever clients make sense?
It also makes a lot of sense in terms of organization. Your backend developers probably suck at design, your frontend developers suck at databases.
https://unpoly.com/tutorial
https://hotwired.dev/
However, I'm not sure if this is actually a problem or rather depends on how much interaction the user does (so where is the "turning point" of the overhead of having all in the bundle vs full HTML responses). What does everyone think?
With that being said, I imagine it would become unmaintainable very quickly. The problems htmx is solving are better solved with other solutions in my opinion, but I do think there's something that can be learned or leveraged with the way htmx goes about the solution.
Per se. It's Latin for “in itself”; has nothing to do with saying anything. Think about it: What would “per-say” even mean?
Certainly it's possible to take on that burden and execute it well, but I think a lot of teams and businesses don't fully account for the fact that they are doing so and properly deciding if that extra burden is really necessary. The baseline for nailing performance and correctness is higher with an SPA.
I think what is needed is to recognize that the SPA architecture isn't actually just a view processor. IMO it is a very shittily designed:
View rendered <--> client process <--> server process
So it seems that SPA apps load an absolute mountain of javascript into the view (the tab/page) and then that starts (crudely IMO) running as a client-side daemon tracking messy state and interfacing with local storage, with javascript (opinion: yuck) ferreted away in a half dozen divs.
IMO, what has been needed since you have local storage and local session state and all that is ... a client daemon that the web page talks to that offers data services, and then that client daemon if it needs server data calls to the internet.
That way local state tracking, transformation, and maintenance can be isolated away from the code of the view. Large amounts of javascript (or maybe, with some CSS wizardry, all of it) can be dropped. The "client daemon" can be coded in webassembly, so you aren't stuck with javascript (opinion: yuck).
You can even have more efficient many views/tabs interfacing with the single client daemon, and the client daemon can track and sync data between different tabs/views/windows.
Now, of course that is fucking ripe as hell for abuse, tracking. Not sure how to solve it.
But "separation of concerns" in current web frameworks is a pipe dream.
Luckily, after 3 decades, there is some sobering realization that a typesetting engine is not a good foundation for modern apps. https://news.ycombinator.com/item?id=34612696
Web development without HTML/CSS/JS is the future.
When I compare this to Phoenix LiveView I much prefer LiveView, because it both provides the markup templating engine and tracks the meaning of the relationship, with server-side tokens and methods.
There's no "the future" in this area, because demands are very different; a heavily interactive SPA like GMail or Jira has requirements unlike an info page that needs a few bits of interactivity, etc.
Yes, you could make a CORBA or DCOM object almost indistinguishable from a local object, except for the latency when it was actually remote. And since it looked like a normal object it encouraged "chatty" interfaces, which exacerbated the latency cost.
Htmx seems pretty chatty to me, which I’m sure works OK over the LAN, but what about the “real” internet?
The js-ajax-button has a similar approach: add a class to a button that has a data-url and it will make a request to it. This is a small func I use, but uajax is so powerful I don't need react or htmx.
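For readers wondering what that looks like, a minimal sketch of such a function (the js-ajax-button class and data-url attribute follow the description above; the data-target attribute is an assumption about where the response goes):

    // one delegated listener handles every .js-ajax-button on the page
    document.addEventListener('click', async (e) => {
      const btn = e.target.closest('.js-ajax-button');
      if (!btn) return;
      const res = await fetch(btn.dataset.url);
      // swap the returned HTML into the element named by data-target (or <main> by default)
      document.querySelector(btn.dataset.target || 'main').innerHTML = await res.text();
    });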
But it is hard to sell something that eliminates using javascript.
Talk about the positives of YOUR approach, don't tear down a different approach that half the industry is using. You're not going to say anything new or interesting to the person you are trying to convince this way. Experienced engineers already know the trade-offs between an SPA and a server rendered experience.
AIUI TFA wasn't by the creators of HTMX, so it isn't the author's approach.
Ideas aside, the web app future belongs to those with the resources to sustain a response, or those who can restore the ability to capture/monetize developers and users in a closed system.
The scope of web apps is broad enough that many technologies arguably have their place. The open javascript ecosystem reduced the cost of creating candidates, but has no real mechanism to declare winners for the purpose of consolidating users, i.e., access to resources.
Careers and companies are built on navigating this complexity, but no one really has the incentive to reduce it unless they can capture that value.
I really appreciate Cloudflare because they are open about both their technology and their business model. They thus offer a reasonable guarantee that they can sustain their service and their technology, without basing that guarantee on the fact that they are a biggie like AWS, Microsoft, or Google (i.e., eating their own dog food, so we can join them at the trough).
The biggest cost in IT is not development or operating fees but reliance and opportunity.
Relevant code:
https://github.com/ldyeax/jimm.horse/blob/master/j/j.php
https://github.com/ldyeax/jimm.horse/blob/master/j/component...
The JS would be a bit more elegant if script tags didn't need special handling to execute on insertion.
The experience is very seamless this way - I'm very pleased with it. It's live at https://jimm.horse - the dynamic behavior can be found clicking on the cooking icon or N64 logo.
On reading the article, I'll definitely make use of this if it becomes well-supported. It does exactly what I wanted here.
The LiveView/Hotwire/Livewire way of building applications makes a really great tradeoff—the ease of building websites with the speed and power of webapp UX.
I wanted something simple to use with Express and it's been very productive.
There's a few things to get used to, but overall like it and plan to keep using it in my projects.
The build for the system took about 20 minutes, and part of the complexity was that every new task (form where somebody had to make a judgement) had to be built twice since both a front end and back end component had to be built so React was part of the problem and not part of the solution. Even in a production environment this split would have been a problem because a busy system with many users might still need a new task added from time to time (think AMZN's MTurk) and forcing people to reload the front end to work on a new task defies the whole reason for using React.
It all was a formula for getting a 20 person team to be spinning its wheels, struggling to meet customer requirements and keeping our recruiters busy replacing developers that were getting burnt out.
I've built several generations of my own train-and-filter system since then and the latest one is HTMX powered. Each task is written once on the back end. My "build" process is click the green button on the IDE and the server boots in a second or two. I can add a new task and be collecting data in 5-10 minutes in some cases, contrasted to the "several people struggling for 5 days" that was common with the old system. There certainly are UIs that would be hard to implement with HTMX, but for me HTMX makes it possible to replace the buttons a user can choose from when they click a button (implement decision trees), make a button get "clicked" when a user presses a keyboard button and many other UI refinements.
I can take advantage of all the widgets available in HTML 5 and also add data visualizations based on d3.js. As for speed, I'd say contemporary web frameworks are very much "blub"
http://www.paulgraham.com/avg.html
On my tablet via tailscale with my server on the wrong end of an ADSL connection I just made a judgement and timed the page reload in less than a second with my stopwatch. On the LAN the responsiveness is basically immediate, like using a desktop application (if the desktop application wasn't always going out to lunch and showing a spinner all the time.)
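For the curious, the "make a button get clicked on a keypress" refinement mentioned above maps directly onto htmx trigger modifiers; something along these lines (endpoint and key are hypothetical):

    <!-- fires on click, or when "a" is pressed anywhere on the page -->
    <button hx-post="/judgements/approve"
            hx-target="#task"
            hx-trigger="click, keyup[key=='a'] from:body">
      Approve (a)
    </button>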
It described the general steps and seemed to be able to describe how htmx works pretty well, including hx-get and hx-target, etc., but then said "As an AI language model, I am not able to write full applications with code".
I replied "do the same thing in bash" (which I knew would be different in significant ways, but just to check) and it provided the code.
I wonder, is this a function of recency of htmx or something else? Do other htmx developers encounter this? I imagine it's at least a little bit of a pain for these boilerplate cases, if it's consistent vs. access to the same GPT tooling for other languages.
https://www.wunderground.com/forecast/us/ak/north-pole
It isn't clear what you were asking ChatGPT to provide, therefore not surprised it didn't come up with the exact answer you expected. I'd suggest learning HTMX by reading the docs, the majority is just a single page.
It was like...if you've ever been mansplained before, when you know how to code something, but are asking a specific question that interests you, related to the process of having a service do that part for you--and someone comes along like, "well, it's easy to code that yourself!"
Not sure if you've experienced that before but it's very similar.
> I'm sorry, but it's not possible to write a weather forecast app using bash and htmx as htmx is a client-side technology and bash is a command-line shell. htmx is typically used in conjunction with HTML and JavaScript to create dynamic web applications.
Cuz that didn't work. It straight up forged ahead and wrote a bash script to show the weather though.
I really wonder today if ChatGPT is going to cause a bash renaissance
(If you are telling me I can do this myself plz reread posts thx)
Still looks to be missing something aside from just the api key...
...maybe 4 is way better. I thought I had a Plus account but it looks like API only. Edit: Just tried 4 and it is better in the sense that it writes a more complete app for you, but it strangely separates the sections of the front end into discrete textareas w/ code, so if you are new to front end it'll be confusing for sure. "Where do I put all this" "oh actually it all gets run together; specifically these two sections of code are concatenated and placed in the body area of the first code section."
It also writes the endpoint code in Python and Flask by default, but with one more prompt it seems to have fixed all that.
I really wonder why 3.5, the common public interface, did what it did, when "actually writes code for more languages by default" isn't exactly on the Plus features list.
Hell, the term “frontend developer” exists only because they are writing JS! Tell them it’s better to write HTM?, and you are removing the “developer” from their titles!
Same reason why backend developers use K8s. There’s little money on wiring together bash scripts.
Now, if you’re working on your side project alone, then sure HTMX is nice.
Backend devs just lampoon it because they assume it must be simple.
Sorry, I read a load of stuff about React, before I came to any explanation of HTMX. Turns out, it's loading fragments of HTML into the DOM (without reload), instead of loading fragments of JSON, converting them to HTML fragments client-side, and injecting the resulting HTML into the DOM (without reload).
So I stopped reading there; perhaps the author explained why HTMX solves this at the end (consistent with the general upside-down-ness), but the "is the future" title was also offputting, so excuse me if I should have read the whole article before commenting.
I never bought into the SPA thing. SPAs destroy the relationship between URLs and the World Wide Web.
Sorry, but that's just... Silly. HTML is what the Web is all about.
Is this suggesting writing any language we want in the browser? I have wondered for a couple decades why Python or some other open source scripting language wasn't added to browsers. I know Microsoft supported VBScript as an alternative to JavaScript in Internet Explorer, and had it not been a security nightmare (remember the web page that would format your hard drive, anyone?) and a proprietary language, it might have been a rival to JavaScript in the browser. In those days it wouldn't have taken much to relegate JavaScript to non-use. Today we just get around it by compiling to WASM.
Nope, server
A conceptual roadmap of where this journey could take us and, ideally, some production quality examples of solving important problems in a productive and fun way would increase the fan base and mindshare. Even better if it show how to solve problems we didn't know we had :-). I mean the last decade has been pretty boring in terms of opening new dimensions.
Just my two cents.
i read most htmx threads on hn and it's clear that people are looking for alternatives from react et al. they have a quick look, maybe implement an example and they are angry that it can't do everything they want cause the js ecosystem fatigue is real.
the centerpiece of the htmx site is an actual in-production app that was converted from react and it's better because of that. again, it will not be everybody's case.
htmx will let a lot of developers go all the way without bringing node into their ruby/python/php world for certain workloads. for them it is the future. the rest should stop reading.
If htmx wants to be the future it needs to be wrapped in a SSR framework. One that performs well.
What I mean is, different websites work differently, and do different things. For example, you might imagine a single desktop app that replaces multiple news aggregators, like Reddit or HN. It would ignore the style of both sites, and replace it with a single, uniform way of displaying posts and threads. But, what features from each does it implement? Does it have both upvoting and downvoting, like Reddit, or just upvoting, like HN? Does it support deeply nested threads, like Reddit, or only a couple levels, like HN? You'd run into limitations like this when trying to have a single app do everything, so you'd end up having to have n applications, one for each website you were replacing...
I'm also not with you on the "gnashing of teeth" point. I've never struggled to install, uninstall, or upgrade a website I was browsing.
I'm genuinely curious what OS and tooling you use that you find so much better, because every time I've tried desktop development I eventually give up and go back to the web. It might be because Linux support is always a requirement for me.
The one you want to run your software on.
> what GUI framework?
Assuming you aren't making a game, each OS has a different answer. Making an iPadOS or macOS app? I'd probably go with SwiftUI. Linux? I'm partial to GTK. Windows? Probably WinUI (although I also still like MFC extended with raw Win32 API calls).
There are cross platform tools, but none of them are very good. If you want to make something really great, target the most important OS and make the absolute best thing you can with the native features of that OS. Follow the conventions of the platform and you get a lot of stuff (like accessibility features) for little effort.
> The web is a single platform
I agree. The web as presented by the browser is its own distinct platform. A well written web app will almost always use more battery, memory, CPU, and network bandwidth than a similar well written native app. Sometimes, despite all the problems with web technologies, the web is where something belongs.
I'm still a big believer in the personal computer. The web takes power from individuals and is a step back to the days of dumb terminals and centralized computing.
Is this really the future?
our policy is that for widgets that are like browser components e.g. search as you type with keyboard shortcuts, we just use the off the shelf react component for that purpose and use it from htmx like it's a browser input element. for all other business logic (almost all of which has no low latency requirements and almost always involves a server request), we use htmx in our server side language of choice.
our designer who knows a bit of react is not happy, but the 12 engineers on our team who are experts in $backend_lang and who are tired of debugging react race conditions, cache errors, TypeScript front end exceptions, js library churn, serialisation bugs, etc. are very happy indeed.
it doesn’t fit every app, but it fits our app like a glove and many others that I’ve considered writing that I didn’t feel like bothering to do so before discovering htmx.
this doesn’t worry me, though. those in the react crowd that insist on this arbitrary and newfangled “frontend/backend” stratification and are dogmatic about it are by definition going to stick with what they know and won’t come and bother us who choose tools based on real experience and their practical merits. better off they make themselves easy to spot from a distance.
took until maybe 2015 before i remember seeing any job ads of “frontend” positions, it takes a while before a job market develops around a technology.
i didn’t say it was pointless. i said the stratification of “dev” into “frontend dev” and “backend dev” is newfangled and arbitrary. you could also split devs into other classes (DB only, CSS only, etc.).
it is funny when people who don’t know you accuse you of incompetence because you don’t like their tools or methods. dogmatic. i prefer some other tools and methods and i am delivering value to customers. pragmatic.
This approach has been implemented in most popular programming languages used for backend development: https://github.com/liveviews/liveviews
Using custom html attributes as the base for complex client-side interactions is arguably a step backwards when considering the story around maintenance.
Right now, if you are building a robust component library - it's much easier to maintain using a template language with strong Typescript / IDE support, like JSX or similar.
> SPAs have allowed engineers to create some great web applications, but they come with a cost:
> Hugely increased complexity both in terms of architecture and developer experience. You have to spend considerable time learning about frameworks.
Yes, better quality software usually packages a bit more complexity.
SPAs are popular, just like native apps, because people don't like jarring reloads. Webviews in native apps are panned for a reason; turning your whole app into a series of webviews would be stupid, right?
> Tooling is an ever-shifting landscape in terms of building and packaging code.
I've used these 4 libraries to build apps since 2015:
* React
* MobX
* D3
* Webpack
The only one I have had pain with is react-router-dom, which has had 2 or 3 "fuck our last approach" refactors in this time. And I added TypeScript in 2018.
PEBCAK
> Managing state on both the client and server
It's a lie that a thin client isn't managing state; it's just doing a static, dumb job of it.
Imagine some cool feature like... collaborative editing.
How would you pull that off in HTMX?
> Frameworks, on top of libraries, on top of other libraries, on top of polyfills. React even recommend using a framework on top of their tech:
Yes, React is famously not a batteries-included library, while Angular is. But, as addressed, you need about 3 other libraries.
Besides, did you know: HTMX is also a framework. Did you know: HTMX also has a learning curve. Did you know: HTMX forces you to be able to manipulate and assemble HTML strings in a language that might not have any typing or tooling for that?
Anyways, I've said enough. I should've just said what I really think: someone who can't even get their nested HTML lists to actually indent the nesting shouldn't give advice on building UIs.
That view (+quality = +complexity) is actually backwards, isn't it? [1]
[1] https://www.infoq.com/news/2014/10/complexity-software-quali...
What are you gaining by writing something like that in JavaScript/TypeScript rather than in, say, Rust and WebAssembly?
To me javascript is in a sort of uncanny valley where you probably want to be either making a real app and compiling it to wasm or using something like htmx.
WASM can't even interact with the DOM, how exactly are these languages positioned better to give me access to a ton of UI primitives?
The backend can be a fast language, certainly, but the browser is a premier UI platform... and it's powered by JS...
To me, Rust developers thinking they know how to build UIs is the real uncanny valley. What they produce looks like it should work, but the more you look at it, the more you realize they don't know what UX stands for
In my opinion, the whole point of the article and for everyone who is backing htmx is that SPA frameworks are too complex (and a liability) for solo/small teams or projects that don't need `collaborative editing`(or other advanced stuff).
Well, in my opinion, the article claims "HTMX is the future" as its title, and so it's impossible to interpret the arguments in any other way than "the user experience benefits of the SPA might matter to the users, but I think it's stupid because I dislike JS"
Titles are by necessity summary in nature. If you want the whole complexity and nuance of the article in the title, the whole article would have to be the title. It's a bit too long for that. (And then everyone would complain that there was no additional meat to the body text.)
> and so it's impossible to interpret the arguments in any other way than "the user experience benefits of the SPA might matter to the users, but I think it's stupid because I dislike JS"
No, sorry, that's BS: It's eminently possible to interpret the arguments differently. All you have to take into account is that a less abbreviated (and frankly still a bit too long) version of the title could have been "HTMX is the future for solo/small teams or projects that don't need `collaborative editing`(or other advanced stuff)".
HTH!
Ok, that's why I made arguments against the body of the text. Did you read my initial post? I'm sure you must've, which makes it so strange you'd come attack this argument while pretending that the other ones don't exist
> It's eminently possible to interpret the arguments differently.
It's really not. The whole thing is a series of strawmen aiming to justify degrading user experience to avoid developer "complexity", but then that developer complexity is grossly overstated, as if someone who has never dipped their toe in it had decided that each complaint they had heard about it was spoken by God himself.
Indeed, my whole first comment is thoroughly discarding each of these strawmen before I grew frustrated by the fact that the author and the thread were both reciting this 2015-era anti-JS dogma.
I suggest you go try to make counterarguments to those points instead of my exasperated reply to someone adding nothing to the convo.
HTH!
The more realistic and practical read is https://htmx.org/essays/when-to-use-hypermedia/#hypermedia-n...
That said for my next hobby project I will probably go even simpler than HTMX, and use classic server side rendering. Then add some Vanilla JS where needed.
I keep going around in circles, but I have tried NextJS and while pretty cool there are a class of problems you need to deal with that simply don't exist in simpler apps.
> Working with HTMX has allowed me to leverage things I learned 15-20 years ago that still work, like my website.
Yes, a website is different than a webapp and has different requirements.
The piece missing here is that most people do not stop to think which they are building before they reach for a JS-heavy SPA framework and start spinning up microservices in whatever AWS calls their Kube implementation.
In a quote:
> "So much complexity in software comes from trying to make one thing do two things." - Ryan Singer (Basecamp/37Signals)
[1] https://github.com/cheatcode/joystick
Okay, now you have half the code base, but need a round trip to the server for every interaction.
You could also remove the server and let people download your blog, where they can only post locally. No server-side input validation needed!
I’d really rather see strongly typed markup that can easily be checked for correctness and whose behavior is well defined. Something modular, too, with profiles, and designed for extensibility.
At one point we experimented with the server returning the stuff to replace the HTML with. We support that in our framework natively (through a mechanism called “slots”).
That said, I have come to believe that people this decade will (and should) invert the idea of progressive enhancement to be client-first.
Imagine your site being composed of static files (eg served on Hypercore or IPFS via beaker browser). As more people use it, the swarm grows. No worries about being DDOSed. No having to trust a server to not send you the wrong interface one day. The software is yours. It just got delivered via a hypercore swarm or whatever, and your client checked the Merkle trees to prove it hasn’t been tampered with.
Then you just interact with a lot of headless web services. Rather than bearer tokens and cookies, you can use webauthn and web crypto to sign requests with private keys. You can do a lot more. And you store your data WHERE YOU WANT.
Sure, htmx can be used there. But there, too, it’s better to fetch JSON and interpret it on the client.
Returning a markup language like HTML actually mixes data with presentation. Consider what happens when you want to return a lot of rows. With JSON, you return just the structured content. With HTML, there is a lot of boilerplate <li class="foo" onclick="…"> mixed in there, which is a lot of extra weight on the wire.
If you are interested in learning more, I gave a whole talk about it:
https://qbix.com/blog/2020/01/02/the-case-for-building-clien...
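To make the weight argument concrete, the same two rows in each format (purely illustrative):

    [{"id":1,"name":"Ann"},{"id":2,"name":"Bob"}]

    <li class="row" hx-get="/rows/1">Ann</li>
    <li class="row" hx-get="/rows/2">Bob</li>

How much this matters in practice depends on compression and on how much markup each row actually carries.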
I don't get this. To use htmx one has to load 14 KB of gzipped JS. How does this make it easy to support clients that don't support JS?
For example, you can apply HTMX to a standard anchor tag and be able to tell if a request has come from HTMX on the server to tailor the response. Then, if the client supports HTMX, it'll prevent the default action and swap the content out, otherwise it'd do exactly what an anchor normally does.
The same goes for form elements.
If you're just a little bit careful about how you use HTMX, it gracefully falls back to standard behaviour very easily.
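A sketch of that fallback: the same anchor works as a plain link without JS, and the server can branch on the HX-Request header that htmx sends (Express shown for illustration; the render helpers are made up):

    <a href="/contacts?page=2"
       hx-get="/contacts?page=2"
       hx-target="#contact-list"
       hx-push-url="true">Next page</a>

    // server side
    app.get('/contacts', (req, res) => {
      const fromHtmx = req.get('HX-Request') === 'true';
      res.send(fromHtmx ? renderListFragment(req.query.page) : renderFullPage(req.query.page));
    });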
Please just use the damn full-stack JS frameworks, they make life simpler, just wait for WebAssembly to allow us to have full-stack Rust/Go/whatever frameworks, and then you can abandon JavaScript, otherwise you get the mess of websites like this one where the developer has not written JavaScript, but the website still needs it for me to be able to read a _damn blog post_.
Otherwise, stick with RoR, Django, Laravel, or whatever ticks your fancies, but HTMX just ain't for everyone and everything, it's supposed to be used for hypermedia, not for web apps or anything else, just that: hypermedia.
And no, JavaScript libraries aren't all "complicated" and "full of churn", React is. Stop using React, or otherwise accept the nature of its development, and stop complaining. There are hundreds of different JavaScript libraries and yet every single time I see people bashing on full stack JavaScript they just keep repeating "React" like it's the only library in the world and the only way developers have written code for the last decade.
Also, tangentially related, can we as an industry stop acting like kids and stop following these "trends"? The author talks about a "SPA-craze" but what I've been seeing more and more now is the contrary movement, however it's based on the same idea of a hype cycle, with developers adopting technology because it's cool or whatever and not really considering what are their actual needs and which tools will provide them with that.
Rant over.
Strongest possible disagree. I’ve been doing web dev for a long time, and the last 10 years has seen a massive, ridiculous increase in complexity across the board.
I personally took my company back to good old server rendered apps with a turbolinks overlay because I was sick of dealing with the full stack frameworks, and we saw a huge increase in productivity and developer happiness.
I wonder which full-stack JS framework you used that you thought made life harder? One of the things that gets me mad is the idea of putting it all in one single box, as React is indeed very (needlessly) complex and so can be other libraries, but that doesn't mean the paradigm of JavaScript front-to-back is fundamentally flawed.
edit: Something else I should've added to my comment is that the HTMX approach is terrible if you ever need more than just the web-client (i.e. a mobile app, native or otherwise) since you will now have to implement an API anyway, which you could've done in the first place by taking the usual approach to development.
Unless your new client is a hybrid/webview/electron application, it's a trap. APIs for web, for native and public API have different sets of constraints in terms of authentication, versioning, even features are probably gonna be different.
And it's not like building an API after you built web would be complicated. Unless you put your business logic right into controllers/handlers, making an API would be just making a new endpoint that calls to existing application services. It's not free, it may not be trivial, but like other comment said, if you can afford a new client, you probably can afford an API for it
I’ve used Angular, Dart, Backbone, Ember, Elm, React, Vue, Svelte, and a few others I can’t remember anymore. All in production systems, not demo projects. Also some of the “build it once” platforms like Meteor.
They’re all cool until you have to actually maintain them. My favorite part is having to build my data models twice, one for the producer and one for the consumer. That’s totally never caused any headaches or slowed anyone down at all.
I must admit I've used Hotwire much less than I should've, but I still feel comfortable with full-stack JavaScript (or rather, TypeScript).
You said you migrated back to Turbo, what backend framework do you use? RoR with Hotwire is so nice, but I personally avoid it because of Ruby (not personally a fan), to be fair the same is true for most full-stack frameworks I avoid (such as Django/Python).
I really like the idea of systems like Phoenix’s LiveView as well, but I ask a lot of questions before I implement something like that. The majority of projects I’ve ever worked on didn’t really need that much interactivity and I strongly prefer the “sprinkle on JS when you actually need it” approach to web apps.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
https://news.ycombinator.com/newsguidelines.html
The one single strong point of the front/back split is the famous Strangler Fig Pattern which takes away a lot of stress when making decisions.
* NextJS provide a holistic solution to backend code. Right now it's missing an ORM that works with serverless postgres. Given their recent additions of serverless postgres to Vercel I expect this will happen in 6-12 months.
* RedwoodJS become more mature.
The issues with SPAs IMO come from having to cobble together your full stack app, which requires making a ton of hard decisions in predicting the future of each major library you use. And then there's limited coherence between your ORM, API, and client without extra work. A mature, well designed, and financially supported full stack JS solution that truly rivals Rails would be perfect.
Having a separation of concerns between server and client is the whole point, and replacing JSON APIs with data trapped in HTML fragments is a massive step backwards.
> the new API is simply reflected in the new HTML returned by the server
Whereas with SPAs (my words)
> the new API is simply reflected in the new json returned by the server
I think I prefer the mental model of the api serving data and client rendering it.
HTMX doesn't address any of that. YAGNI is often only true for the original developers, not everyone else who has to maintain it long term.
The data API can address all of the concerns you have.
The hypermedia API, being consumed by an htmx front end, can then take advantage of the strengths of hypermedia (e.g. the uniform interface giving you a lot of flexibility in aggressively refactoring your API.)
Please, read the article.
In the linked essay I am riffing on someone already recommending splitting your JSON APIs into a general purpose API for clients, and a specialized one for your web application, in order to separate these two concerns and remove the pressure that the latter puts on the former.
I recommend going further with that and adopting hypermedia for your application API, since no one else should be depending on it. I recommend this because I like the hypermedia paradigm, but it only makes sense as part of a complete hypermedia system. Trying to reuse a hypermedia API for other clients isn't a good idea and that's not what I'm recommending.
Does that make sense?
>Here we are getting a bit fancy and only allowing one row at a time to be edited, using hyperscript. https://hyperscript.org
(i don't actually think this article is largely AI-generated)
it does not work for resource-restricted (i.e. embedded) devices where you just can't do server-side rendering; a CSR SPA is the future there, as the device side just needs to return some JSON data for the browser to render.
The idea is easily extendable to any template engine, so you can keep your device response minimal while enjoying the simplicity of htmx. I will admit though, this approach gets funky much faster than returning HTML fragments, so you probably shouldn't exclusively build your app with this client-side-templates extension.
[1]: https://htmx.org/extensions/client-side-templates/ [2]: https://htmx.org/extensions/json-enc/
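Roughly what that looks like with the client-side-templates extension (assuming mustache.js is loaded alongside it; the ids and fields are made up):

    <div hx-ext="client-side-templates">
      <button hx-get="/api/status" hx-target="#status"
              mustache-template="status-tpl">Refresh</button>
      <div id="status"></div>
      <template id="status-tpl">
        <p>Temperature: {{ temp }} °C</p>
      </template>
    </div>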
You have to learn something. You can claim bloat in JS frameworks, but that isn’t solved by simply moving it to the server.
Is htmx lean and nice today? Probably! But does it handle the same use cases that the React users have? What happens to it under pressure of feature bloat? Small-core frameworks like Elm who resisted this pressure were abandoned by big shops. You can’t just take something immature (however good) and simply extrapolate a happy future.
> Tooling is an ever-shifting landscape in terms of building and packaging code.
Yes. JS is not the only language with churn issues and dependency hell.
> Managing state on both the client and server
Correct me if I’m wrong, but state can change for something outside of an htmx request, meaning you can end up with stale state in element Y in the client after refreshing element X. The difference is that your local cache is in the DOM tree instead of a JS object.
> By their nature, a fat client requires the client to execute a lot of JavaScript. If you have modern hardware, this is fine, but these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
On unreliable connections you want as thick of a client as possible. If you have server-in-the-loop for UI updates, you quite obviously have latency/retry issues. It’s much preferable to show stale state immediately and update in the background.
> It is very easy to make an SPA incorrectly, where you need to use the right approach with hooks to avoid ending up with abysmal client-side performance.
Bloat comes from reckless software development practices, and is possible in any technology. Angular and React have a shitton of features and ecosystem around them, whereas, say, Svelte is more lean. Enterprisey shops tend to prioritize features and not give a flying fuck about performance. This is a business choice, not a statement about technology.
> Some SPA implementations of SPA throw away progressive enhancement (a notable and noble exception is Remix). Therefore, you must have JavaScript turned on for most SPAs.
Finally, we cut to the chase. This is 100% true, and we should be talking about this, because it’s still not settled: do we want web pages or web apps? If both, where is the line? Can you expect something like Slack to work without JavaScript? What about a blog with interactive graphs? Should everything degrade or should some things require JS/WASM?
I love that htmx exists. I have absolutely nothing against it. It honors some of the early web philosophy in an elegant and simple manner. It may be a better model for server-centric apps and pages, which don’t need offline or snappy UIs. But it cannot magically solve the inherent complexities of many modern web apps.
Overall, I believe most applications do well with a graceful degradation approach similar to what Remix offers and then everyone copied (the idea of using form actions webforms-style for every interaction, so it works with and without JavaScript). I do agree that things like Slack, Discord, Element, or otherwise things we would call web apps are acceptable to be purely SPAs or not gracefully degrade without it enabled; the biggest problem I have with these is that they exist as web clients in the first place: the world would be a different place if approaches such as wxWidgets had paid off and gotten adopted; imagine how many slow and bloated web apps could've been beautiful and fast native applications. One can dream. I'm not that pessimistic, not yet.
Glad to hear. Yes, it seems like the post and the comments are largely missing the functional issue at play.
> the blog post we are talking about which displays awful without running JavaScript
Yeah, case in point, perhaps.. I mean if you have two paths (incremental and full) to reach the same state, you better be careful to ensure those are functionally equivalent. This is surface area for bugs, so the very least you need to do is turn off JS and test all flows. To me, the value add of SPAs is the snappy UI, and offline-capabilities, so if you’re gonna roundtrip to the server anyway, then you may as well just re-render the entire page old-school to greatly reduce complexity.
> the biggest problem I have with these is that they exist as web clients in the first place: the world would be a different place if approaches such as wxWidgets has paid off and gotten adopted, imagine how many slow and bloated web apps could've been beautiful and fast native applications.
I actually disagree with this (the opinion, not the problem statement). The main players, Apple, Microsoft, Google, have known about the cross-platform issues and haven’t done jack in decades (perhaps Flutter deserves an honorary mention though). Meanwhile, the web, with all its problems, has gotten so much better. Getting the web to the point of native standards seems much more feasible than establishing new open standards for app development. The bloat issue is largely a red herring imo. A well made web app is snappy, and importantly, can be sandboxed. The issue is that people don’t care. You can find equally shitty and bloated apps in the sea of crap on the app stores. With webview support in the OS, bundles can be small. I have an app based on Tauri which is web based, and the msi is 10Mb. It’s never had any perf issues.
I agree with you that a well-made web app is snappy and works well. I don't think we'll go back to native apps (for now), what I hope for is simply that it could become an _option_ as right now it's just basically trying to swim upstream.
Web apps do a lot of things really well that native applications always had difficulty with, such as accessibility and distribution. Still, I feel that even when doing web apps that are snappy they miss out on some cool features of native applications such as consistent theming, reduced memory usage, reduced CPU usage (regardless of how well the web app is written), and just the simplicity of it.
Perhaps the ship has already sailed, and native applications will never make a comeback, in which case I hope lighter weight engines purpose-made for web applications get adopted as an alternative to shipping the entirety of Chromium with each program I download. Projects like Servo [0] show me that's possible, it's just that there is currently no interest from the big players to keep funding these developments and provide them as an alternative to Chromium.
[0]: https://servo.org/
The browser engine differences isn’t that big of a deal, because you already have standard ways of dealing with it on the web. I think today the main reasons people pick Electron today are tooling, desktop integration features and Node. Tauri is catching up insanely fast though. I am very happy that I jumped on the bandwagon fairly early.
> in which case I hope lighter weight engines purpose-made for web applications get adopted as an alternative to shipping the entirety of Chromium with each program I download
Absolutely, shipping a browser engine is insanity and should always have been a stopgap at best imo. Even servo is way too heavyweight to ship with each app. The good news is that with OSes having native webviews we’re 80-95% there already, so we don’t even have to deal with that tradeoff – this lets an app have a perf overhead of ~1 typical browser tab. It’ll take a moment to iron out all the kinks but it’s already perfectly acceptable on most deployment targets. Honestly, Linux is just as bad when it comes to distribution. Distros have been in some perpetual siloing mindset and have not been able to get behind a decent and unified distribution story. It has nothing to do with the web though.
> it's just that there is currently no interest from the big players to keep funding these developments
The big players have been so awful that even if you disregard the abysmal interop story, distributing native apps is still orders of magnitude more hassle than the web ecosystem. Imo Electron was born as an escape hatch from that disaster, not because frontend developers conspired to eat all the worlds RAM for fun. I absolutely agree that the big players should passively fund projects like servo and tauri. I don’t want them anywhere near strategic decision making though.
back to the future
Two years ago I distinctly remember server side rendering piped thru websocket was future.
I also like how the demo is visibly ugly, jQuery-style ugly. It's nostalgic in a way. And I swear to gods this approach will break the back button. And resending the form after a network error. And an expired cookie will of course result in your input being lost.
https://htmx.org/essays/a-real-world-react-to-htmx-port/