Nice, but how do those services combine with each other? How do you combine Notion, Slack, your git hosting, Linear, and your CI/CD? If there are only URLs between them, it's hard to link all the work together.
econner 6 hours ago [-]
It's weird that one of the reasons you endorse AWS is that you had regular meetings with your account manager, but then you regret premium support, which is the whole reason you had those regular meetings in the first place.
unsnap_biceps 5 hours ago [-]
If you spend enough (or they think you'll spend enough), you'll get an account manager without the premium support contract, especially early in the onboarding
necubi 4 hours ago [-]
Or if you’re a newish startup who they hope will eventually spend enough to justify it.
dangus 4 hours ago [-]
As a counterpoint, I find our AWS support team to be a mix of 40% helpful, 40% “things we say are going over their heads,” and 20% attempting to upsell and expand our dependence. It’s nice that we have humans, but I don’t think it’s a reason to choose AWS or not.
GCP’s architecture seems clearly better to me especially if you are looking to be global.
Every organization I’ve ever witnessed eventually ends up with some kind of struggle with AWS’ insane organizations and accounts nightmare.
GCP’s use of folders makes way more sense.
GCP having global VPCs is also potentially a huge benefit if you want your users to hit servers that are physically close to them. On AWS you have to architect your own solution with global accelerator which becomes even more insane if you need to cross accounts, which you’ll probably have to do eventually because of the aforementioned insanity of AWS account/organization best practices.
0xbadcafebee 11 minutes ago [-]
There's a very large gap between "seems" and reality. GCP is a huge PITA. It's not even stable to use, as the console is constantly unresponsive and buggy, the UX is insane, finding documentation is like being trapped in hell.
Know how you find all the permissions a single user in GCP has? You have to make 9+ API calls, then filter/merge all the results. They finally added a web tool to try and "discover" the permissions for a user... you sit there and watch it spin while it madly calls backend APIs to try to figure it out. Permissions for a single user can be assigned to users, groups, orgs, projects, folders, resources, (and more I forget), and there's inheritance to make it more complex. It can take all day to track down every single place the permissions could be set for a single user in a single hierarchical organization, or where something is blocking some permission. The complexity increases as you have more GCP projects, folders, orgs. But, of course, if you don't do all this, GCP will fight you every step of the way.
Compare that to AWS, where you just click a user, and you see what's assigned to it. They engineered it specifically so it wouldn't be a pain in the ass.
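Even via the API it's only a handful of calls. A rough sketch with the AWS SDK for JavaScript v3 (the user name is a placeholder; pagination and the contents of group policies are left out):

    import {
      IAMClient,
      ListAttachedUserPoliciesCommand,
      ListUserPoliciesCommand,
      ListGroupsForUserCommand,
    } from "@aws-sdk/client-iam";

    const iam = new IAMClient({});

    async function describeUserPermissions(userName: string) {
      // Managed policies attached directly to the user
      const attached = await iam.send(
        new ListAttachedUserPoliciesCommand({ UserName: userName })
      );
      // Inline policies embedded in the user
      const inline = await iam.send(
        new ListUserPoliciesCommand({ UserName: userName })
      );
      // Groups the user belongs to (each group's policies need one more call)
      const groups = await iam.send(
        new ListGroupsForUserCommand({ UserName: userName })
      );
      return {
        attachedPolicies: attached.AttachedPolicies ?? [],
        inlinePolicyNames: inline.PolicyNames ?? [],
        groups: groups.Groups?.map((g) => g.GroupName) ?? [],
      };
    }

    describeUserPermissions("alice").then(console.log);

Group policies still need a call per group, but it's one flat model per user rather than inheritance across orgs, folders, and projects.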
> Every organization I’ve ever witnessed eventually ends up with some kind of struggle with AWS’ insane organizations and accounts nightmare.
This was an issue in the early days, but it's well solved now with newer integrations/services. Follow their Well Architected Framework (https://docs.aws.amazon.com/wellarchitected/latest/framework...), ask customer support for advice, implement it. I'm not exaggerating when I say this is the best description of the best information systems engineering practice in the world, and it's achievable by startups. It just takes a long time to read. If you want to become an excellent systems engineer/engineering manager/CTO/etc, this is your bible. (Note: you have to read the entire thing, especially the appendixes; you can't skim it like StackOverflow)
SkiFire13 24 minutes ago [-]
> Every organization I’ve ever witnessed eventually ends up with some kind of struggle with AWS’ insane organizations and accounts nightmare.
What are these struggles? The product I work on uses AWS and we have ~5 accounts (I hear they used to be more, TBF), but nowadays all the infrastructure is on one of them and the others are for some niche stuff (tech support?). I could see how going overboard with many accounts could be an issue, but I don't really see issues having everything on one account.
danpalmer 3 hours ago [-]
Similar to my experience with the two. We didn't have regular meetings with our GCP account manager, but they did help us and we had a technical support rep there we were in contact with sometimes. We rarely heard from anyone at AWS, and a friend had some horror stories of reporting security issues to AWS.
Architecturally I'd go with GCP in a heartbeat. BigQuery was also one of the biggest wins in my previous role. Completely changed our business for almost everyone, vs Redshift, which cost us a lot of money to learn that it sucked.
You could say I'm biased as I work at Google (but not on any of this), but for me it was definitely the other way around: I joined Google in part because of the experience of using GCP and migrating AWS workloads to it.
UltraSane 2 hours ago [-]
Global VPCs are very nice but they feel like a single blast radius.
dangus 1 hours ago [-]
Whether or not your VPC can have subnets in multiple regions is entirely unrelated to security.
UltraSane 45 minutes ago [-]
I meant failure blast radius. Having isolated regions is a core part of the AWS reliability design. AWS has had entire regions fail, but these failures have always been isolated to a single region. Global VPCs must rely on globally connected routers that can all fail in ways AWS regional VPCs can't.
calmbonsai 6 hours ago [-]
This is the best post to HN in quite some time. Kudos to the detailed and structured break-down.
If the author had a Ko-Fi they would've just earned $50 USD from me.
I've been thinking of making the leap away from JIRA, and I concur on RDS, Terraform for IaC, and FaaS whenever possible. Google support is non-existent and I only recommend GCP for pure compute. I hear good things about Bigtable, but I've never used it in production.
I disagree on Slack usage aside from the postmortem automation. Slack is just gonna' be messy no matter what policies are put in place.
xyzzy_plugh 4 hours ago [-]
Anecdotally I've actually had pretty good interactions with GCP, including fast turnarounds on bugs that couldn't possibly affect many other customers.
unethical_ban 6 hours ago [-]
What do you use if not Slack? OP's advice is standard best practice: respect people's time by not expecting an immediate response, and use team- or function-based channels as much as possible.
Other options are email, of course, and what, Teams for instant messages?
jasonpeacock 5 hours ago [-]
I’ve always thought that forums are much better suited to corporate communications than email or chat.
Organized by topic, threaded, and asynchronous by default. You can still opt in to notifications, and the history is well organized and preserved.
jasonpeacock 5 hours ago [-]
The bullet points for using Slack basically describe email (and distribution lists).
It’s funny how we get an instant messaging platform and derive best practices that try to emulate a previous technology.
Btw, email is pretty instant.
unethical_ban 4 hours ago [-]
I get it, email accomplishes a lot. But it "feels" like a place these days for one-off group chats, especially for people from different organizations. Realtime chat has its places and can also step in to that email role within a team. All my opinion, none too strongly held.
kstrauser 6 hours ago [-]
> Picking Terraform over Cloudformation: Endorse
I, too, prefer McDonald's cheeseburgers to ground glass mixed with rusty nails. It's not so much that I love Terraform (spelled OpenTofu) as that it's far and away the least bad tool I've used in the space.
orwin 4 hours ago [-]
Terraform/OpenTofu is more than OK. The fact that you can use it to configure your Cisco products as well as AWS is honestly great for us. It's also a bit like Ansible: if you don't manage it carefully and separate things as much as possible early, it starts bloating, so you have to curate early.
Terragrunt is the only sane way to deploy terraform/openTofu in a professional environment though.
kstrauser 58 minutes ago [-]
I curse at Terraform at least once a week, usually right after I’ve discovered some weird arbitrary limitation or surprising misfeature. It’s still what I reach for when I need to manage a whole organization. And compared to CloudFormation, it’s the freaking Sistine Chapel of IaC.
MrDarcy 2 hours ago [-]
We can also use expect to configure Cisco routers and AWS infrastructure, doesn’t mean we should.
walt_grata 4 hours ago [-]
I've been very happy using cdk for interacting with aws. Much better than terraform and the like.
etothet 2 hours ago [-]
I second this. I do use some terraform, but for most of our stacks, CDK has been fantastic.
MrDarcy 2 hours ago [-]
CDK is far better than Terraform.
jauntywundrkind 1 hours ago [-]
If you're any good at all at CDK, cdk8s is also a very solid, clear & clean way to do Kubernetes too. https://cdk8s.io/
I'm trying to make the decision for where to go with my home lab, and while Pulumi and Cue look neat, cdk8s seems so predictable & has such clear structure & form to it.
That said, the L1/L2/L3 distinction can be a brute to deal with. There's significant hidden complexity there.
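For a taste, a minimal cdk8s chart looks something like this (the ./imports/k8s module is the typed file that `cdk8s import` generates; image and names are made up):

    import { App, Chart } from "cdk8s";
    import { KubeDeployment } from "./imports/k8s"; // generated by `cdk8s import`

    const app = new App();
    const chart = new Chart(app, "homelab");

    new KubeDeployment(chart, "web", {
      spec: {
        replicas: 2,
        selector: { matchLabels: { app: "web" } },
        template: {
          metadata: { labels: { app: "web" } },
          spec: {
            containers: [
              { name: "web", image: "nginx:1.27", ports: [{ containerPort: 80 }] },
            ],
          },
        },
      },
    });

    app.synth(); // writes plain Kubernetes YAML to dist/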
shepherdjerred 55 minutes ago [-]
I use cdk8s for my homelab and absolutely love it. 100% recommend.
Homelab CDKs: https://github.com/shepherdjerred/monorepo/tree/main/package...
Script I wrote to generate types from Helm charts: https://github.com/shepherdjerred/monorepo/tree/main/package...
Not an opinion on Pulumi specifically, but an opinion on using imperative programming languages for infrastructure configuration: don't do it. (This includes using things like CDKTF)
Infrastructure needs to be consistent, intuitive and reproducible. Imperative languages are too unconstrained. Particularly, they allow you to write code whose output is unpredictable (for example, it'd be easy to write code that creates a resource based on the current time of day...).
With infrastructure, you want predictability and reproducibility. You want to focus more on writing _what_ your infra should look like, less _how_ to get there.
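A contrived sketch of that time-of-day example in plain AWS CDK (bucket name is made up): nothing in the language stops the synthesized stack from depending on ambient state, so two runs of identical code can demand different infrastructure.

    import { App, Stack } from "aws-cdk-lib";
    import * as s3 from "aws-cdk-lib/aws-s3";

    const app = new App();
    const stack = new Stack(app, "ReportsStack");

    // Anti-pattern: the synthesized template depends on when you happen to run it.
    // Deploy in the morning and the bucket exists; deploy at night and the next
    // diff wants to destroy it.
    if (new Date().getHours() < 12) {
      new s3.Bucket(stack, "MorningReports");
    }

    app.synth();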
nothrabannosir 8 minutes ago [-]
Couldn't disagree more.
I have written both TF and then CDKTF extensively (!), and I am absolutely never going back to raw TF. TF vs CDKTF isn't declarative vs imperative, it's "anemic untyped slow feedback mess" vs "strong typesystem, expressive builtins and LSP". You can build things in CDKTF that are humanly intractable in raw TF and it requires far less discipline, not more, to keep it from becoming an unmaintainable mess. Having a typechecker for your providers is a "cannot unsee" experience. As is being able to use for loops and defining functions.
That being said, would I have preferred a CDKTF in Haskell, or a typed Nix dialect? Hell yes. CDKTF was awful, it was just the least bad thing around. Just like TF itself, in a way.
But I have little problems with HCL as a compilation target. Rich ecosystem and the abstractions seem sensible. Maybe that's Stockholm syndrome? Ironically, CDKTF has made me stop hating TF :)
Now that Hashicorp put the kibosh on CDKTF though, the question is: where next...
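For readers who haven't seen it, a minimal CDKTF stack looks roughly like this (assuming the prebuilt @cdktf/provider-aws bindings; bucket names are made up). The typed props and the plain for loop are most of the appeal:

    import { Construct } from "constructs";
    import { App, TerraformStack } from "cdktf";
    import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
    import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

    class Buckets extends TerraformStack {
      constructor(scope: Construct, id: string) {
        super(scope, id);
        new AwsProvider(this, "aws", { region: "us-east-1" });

        // A plain for loop instead of count/for_each gymnastics, and the
        // props object is typechecked against the provider schema.
        for (const env of ["dev", "staging", "prod"]) {
          new S3Bucket(this, `artifacts-${env}`, {
            bucket: `example-artifacts-${env}`,
          });
        }
      }
    }

    const app = new App();
    new Buckets(app, "buckets");
    app.synth(); // emits Terraform JSON that `cdktf deploy` (or plain terraform) applies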
kstrauser 54 minutes ago [-]
Thanks for saving me the trouble of writing exactly that. I want my IaC to be roughly as Turing complete as JSON. It’s sooo tempting to say “if only I could write this part with a for loop…” and down that path lies madness.
There are things I think Terraform could do to improve its declarative specs without violating the spirit. Yet, I still prefer it as-is to any imperative alternatives.
vanviegen 49 minutes ago [-]
> Particularly, they allow you to write code whose output is unpredictable
Is that an easy mistake to make and a hard one to recover from, in your experience?
The way you have to bend over backwards in Terraform just to instantiate a thing multiple times based on some data really annoys me..
popalchemist 4 hours ago [-]
yes. IaC is a misnomer. IaC implementations should have a spec (some kind of document) as the source of truth; not code.
orthecreedence 1 hours ago [-]
Pulumi is superior to Terraform for my use cases. It's actually Infrastructure as Code. Terraform pretends to be, but uses a horrible config language that tries to skirt the line between programming language and config spec, and skirts it horribly. Reorganizing modules is a huge pain. I dreaded using Terraform and I spin things up and down in Pulumi all day. No contest.
Granted, I'm a programmer, have been for a long time, so using programming tools is a no brainer for me. If someone wants to manage infra but doesn't have programming skills, then learning the Terraform config language is a great idea. Just kidding, it's going to be just as confusing and obnoxious as learning the basic skills you need in python/js to get up and running with Pulumi.
kstrauser 52 minutes ago [-]
I disagree with that. I think it’s satisfying to find a way to express my intent in HCL, and I don’t think I could do it as well without a strong programming background.
x3n0ph3n3 4 hours ago [-]
My opinion is there are not enough good software developers doing DevOps, and HCL is simple enough and can have pretty good guardrails on it. My biggest concern is people shooting themselves in the foot because the static analysis tools available for HCL don't work with Pulumi.
SlightlyLeftPad 2 hours ago [-]
It’s an unfortunate truth that good software developers aren’t crazy enough to want to do it.
easterncalculus 4 hours ago [-]
You can honestly do a lot of what people do with Terraform now just using Docker and Ansible. I'm surprised more people don't try to. Most clouds are supported, even private clouds and stuff like MAAS.
x3n0ph3n3 3 hours ago [-]
Yeah, but Ansible is one of the nine circles of hell, and its support for various AWS services beyond EC2 and S3 is near nonexistent.
Tostino 1 hours ago [-]
I have mixed feelings about it. At my first startup, I used Ansible to automate all of the manual workflows and server setup that we had done. Everything was just completely manual and in people's heads before, and translating it to Ansible was a pain in the ass to say the least. I don't think it would have been any easier to translate it to something else though. It ended up working fine and we had a solid system where I could re-set up our environment from scratch on a set of VPSes provided by some terraform scripts. We were originally on DigitalOcean, and had to migrate to Azure because of acquisition BS.
For my current startup I ended up not going a direction where I needed ansible. I've now got everything in helm charts and deployable to K8S clusters, and packaged with Dockerfiles. Not really missing ansible, but not exactly in love with K8S either. It works well enough I guess.
kolja005 4 hours ago [-]
>Since the database is used by everyone, it becomes cared for by no one. Startups don’t have the luxury of a DBA, and everything owned by no one is owned by infrastructure eventually.
This post was a great read.
Tangent to this, I've always found "best practices" to be a bit of a misnomer. In most cases in software and especially devops I have found it means "pay for this product that constrains the way that you do things so you don't shoot yourself in the foot". It's not really a "practice" if you're using a product that gives you one way to do something. That said my company uses a very similar tech stack and I would choose the same one if I was starting a company tomorrow, despite the fact that, as others have mentioned, it's a ton to keep in your head all at once.
SoftTalker 1 hours ago [-]
After listing dozens of infrastructure products/projects: "My general infrastructure advice is 'less is better'."
That made me laugh. Yes I get that they probably didn't use all of these at the same time.
rixed 1 hours ago [-]
Sure, let's take advice about infrastructure from the guy who needs a tool to automate postmortems.
> Regret: Not adopting an identity platform early on. I stuck with Google Workspace at the start...
I've worked with hundreds of customers to integrate IdPs with our application, and Google Workspace was by far the worst of the big players (Entra ID, Okta, Ping). It's extremely inflexible for even the most basic SAML configuration. Stay far, far away.
0xbadcafebee 37 minutes ago [-]
And it's a horrible moat. I've gotten locked out of a Google Workspace permanently because the person who set it up left, used a personal email/phone to do it, and despite us owning/controlling the domain, Google wouldn't unlock admin access to the Workspace for us, they would only delete it. Unacceptable business risk.
Grimburger 5 hours ago [-]
> There are no great FaaS options for running GPU workloads
Knative on k8s works well for us; there are some oddities about it, but in general it does the job.
nevalainen 2 hours ago [-]
That's a lot of "stuff" for someone who likes to keep things simple. Great article though!
bigiain 28 minutes ago [-]
I initially read this wrong as "Almost every infrastructure decision I make I regret after 4 years", and I nodded my head in agreement.
I've been working mostly at startups most of my career (for Sydney Australia values of "start up" which mostly means "small and new or new-ish business using technology", not the Silicon Valley VC money powered moonshot crapshoot meaning). Two of those roles (including the one I'm in now) have been longer than a decade.
And it's pretty much true that almost all infrastructure (and architecture) decisions are things that 4-5 years later become regrets. Some standouts from 30 years:
I didn't choose Macromind/Macromedia Director in '94 but that was someone else's decision I regretted 5 years later.
I shouldn't have chosen to run a web business on ISP web hosting and Perl4 in '95 (yay /cgi-bin).
I shouldn't have chosen globally colocated desktop pc linux machines and MySQL in '98/99 (although I got a lot of work trips and airline miles out of that).
I shouldn't have chosen Python2 in 2007, or even worse Angular2 in 2011.
I _probably_ shouldn't have chosen Arch Linux (and a custom/bastardised Pacman repo) for a hardware startup in 2013.
I didn't choose Groovy on Grails in 2014 but I regretted being recruited into being responsible for it by 2018 or so.
I shouldn't have chosen Java/MySQL in 2019 (or at least I should have kept a much tighter leash on the backend team and their enterprise architecture astronaut).
The other perspective on all those decisions, though, is that each of them allowed a business to do the things it needed to take money off customers (I know, I know, that's not the VC startup way...). Although I regretted each of those later, even in retrospect I think I made decent pragmatic choices at the time. And at this stage of my career I've become happy enough knowing that every decision is probably going to have regrets over a 4 or 5 year timeframe, but that most projects never last long enough for you to get there - either the business doesn't pan out and closes the project down, or a major ground-up rewrite happens for reasons often unrelated to 5 year old infrastructure or architecture choices.
jmward01 3 hours ago [-]
You will never agree 100% with someone else when it comes to decisions like this, but clearly there is a lot of history behind these decisions and they are a great starting point for conversations internally I think.
jbmsf 3 hours ago [-]
Thanks. I've been meaning to write one of these for a long time, but you went into detail in a very effective, organized way.
I also reached a lot of similar decisions and challenges; even where we differ (ECS vs EKS), I completely understand your conclusions.
gnarbarian 1 hours ago [-]
given enough time you may regret every single one of them.
robszumski 7 hours ago [-]
Thanks for sharing, really helpful to see your thinking. I haven't fully embraced FaaS myself but never regretted it either.
Curious to hear more about Renovate vs Dependabot. Is it complicated to debug _why_ it's making a choice to upgrade from A to B? Working on a tool to do app-specific breaking change analysis so winning trust and being transparent about what is happening is top of mind.
When were you using quay.io? In the pre-CoreOS years, CoreOS years (2014-2018), or the Red Hat years?
dangoodmanUT 4 hours ago [-]
> There are no great FaaS options for running GPU workloads, which is why we could never go fully FaaS.
modal.com???
jrjeksjd8d 6 hours ago [-]
I see you regret Datadog but there's no alternative - did you end up homebrewing metrics, or are you just living with their insane pricing model? In my experience they suck but not enough to leave.
stackskipton 6 hours ago [-]
Not the author, but Prometheus is a perfectly acceptable alternative if you don't want to go the whole OTel route.
t-writescode 3 hours ago [-]
Prometheus + … what? Datadog is a visualization platform; Prometheus is data-gathering infrastructure.
stackskipton 3 hours ago [-]
Grafana is most common one.
jpgvm 2 hours ago [-]
VictoriaMetrics stack. Better, cheaper, faster queries, more k8s native, etc.
Easy to run with budget saved from not being on Datadog + attracts smart and observability minded engineers to your team.
lelandbatey 4 hours ago [-]
Currently going through leaving DD at work. Many potential options, many companies trying to break in. The one that calls to me spiritually is: throw it all in Clickhouse (hosted Clickhouse is shockingly cheap) with a hosted HyperDX (logs and metrics UI) instance in front of it. HyperDX has its issues, but it's shocking how cheap it is to toss a couple hundred TB of logs/metrics into Clickhouse per month (compared to the kings ransom DD charges). And you can just query the raw rows, which really comes in handy for understanding some in-the-weeds metrics questions.
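For a flavour of the "just query the raw rows" part, a rough sketch with the official ClickHouse Node client (connection details, table, and columns are all made up):

    import { createClient } from "@clickhouse/client";

    const clickhouse = createClient({ url: "http://localhost:8123" });

    async function topErrorServices() {
      // Dig into raw log rows directly -- the kind of in-the-weeds question
      // that's awkward to answer through a vendor's metrics UI.
      const result = await clickhouse.query({
        query: `
          SELECT service, count() AS errors
          FROM logs
          WHERE level = 'error' AND timestamp > now() - INTERVAL 1 HOUR
          GROUP BY service
          ORDER BY errors DESC
          LIMIT 10`,
        format: "JSONEachRow",
      });
      console.table(await result.json());
    }

    topErrorServices().finally(() => clickhouse.close());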
mwcampbell 6 hours ago [-]
I disagree on Kubernetes versus ECS. For me, the reasons to use ECS are not having to pay for a control plane, and not having to keep up with the Kubernetes upgrade treadmill.
x3n0ph3n3 3 hours ago [-]
This. k8s is primarily resume driven development in most software shops. Hardly any product or service really needs its complexity.
karolist 32 minutes ago [-]
To replace Kubernetes, you inevitably have to reinvent Kubernetes. By the time you build in canaries, blue/green deployments, and rolling updates with precise availability controls, you've just built a bespoke version of k8s. I'll take the industry standard over a homegrown orchestration tool any day.
jauntywundrkind 1 hours ago [-]
The amount of tools and systems here that work because of k8s is significant. K8s is a control plane and an integration plane.
I wish luck to the imo fools chasing the "you may not need it" logic. The vacuum that attitude creates in its wake demands many many many complex & gnarly home-cooked solutions.
Can you? Sure, absolutely! But you are doing that on your own, gluing it all together every step of the way. There's no other glue layer anywhere remotely as integrative, that can universally bind to so much. The value is astronomical, imho.
weedhopper 6 hours ago [-]
Great post. I wouldn't even mind more details, especially about Datadog, or, as others pointed out, the apparent contradiction with AWS support.
kaycey2022 5 hours ago [-]
Feels like a minor glimpse into what's involved in running tech companies these days. Sure this list could be much simpler, but then so would the scope of the company's offerings. So AI would offer enough accountability to replace all of this? Agents juggling million token contexts? It's kind of hard to wrap my head around.
nine_k 4 hours ago [-]
Agents run tools, too. You can make an LLM count by means of language processing, but it's much more efficient to let it run a Python script.
By the same token, it's more efficient to let an LLM operate all these tools (and more) than to force an LLM to keep all of that on its "mind", that is, context.
kaycey2022 3 hours ago [-]
Agents are just not deterministic. You will see weird things, like in one thread an agent simply says it cannot access a CLI tool for whatever reason. You inspect the call; it worked just fine in another thread. You eventually shrug your shoulders and close the thread and pick up from another, instead of having the agent flail around some obvious BS for hours and hours.
Just because they can run tools doesn't mean they run them reliably. Running tools is not the be-all and end-all of the problem.
Amdahl's law is still in play when it comes to agents orchestrating entire business processes on their own.
zem 6 hours ago [-]
I would love to read more about the pros and cons of using a single database, if anyone has pointers to articles
rawgabbit 3 hours ago [-]
The things that have the biggest impact are locking/blocking, data duplication (ghosting due to race conditions), and poor performance. The best advice is to RTFM the documentation for your database; yes, it is a lot to digest, and that is why DBAs exist. Most of these footguns are due to poor architecture. You have to imagine multiple users/processes literally trying to write to the same record at the same time; once you realize this, a single table with simple key-values is completely inadequate.
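A sketch of that exact race with node-postgres (table and column names are made up): without the explicit row lock, two writers can both read the same value and one of the updates is silently lost.

    import { Pool } from "pg";

    const pool = new Pool(); // connection settings come from the usual PG* env vars

    // Two services both try to decrement the same inventory row. SELECT ... FOR UPDATE
    // serializes them; without it, both can read 5, both compute 4, and one sale vanishes.
    async function reserveItem(itemId: number) {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        const { rows } = await client.query(
          "SELECT stock FROM inventory WHERE id = $1 FOR UPDATE",
          [itemId]
        );
        if (rows[0].stock <= 0) throw new Error("out of stock");
        await client.query("UPDATE inventory SET stock = stock - 1 WHERE id = $1", [itemId]);
        await client.query("COMMIT");
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      } finally {
        client.release();
      }
    }

    reserveItem(42).catch(console.error).finally(() => pool.end());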
Everything in the article is an excellent point, but another big one is that schema changes become extremely difficult, because you have unknown applications possibly relying on that schema.
Also, at a certain point the database becomes absolutely massive and you will need teams of DBAs for its care and feeding.
x3n0ph3n3 3 hours ago [-]
Not only will you need a team of DBAs caring for it, but you'll never be able to hire them.
hobs 2 hours ago [-]
No organization I have seen prioritizes a DBA's requirements, concerns, or approach. They certainly don't pay them enough to deal with that bullshit, so I was out.
sgarland 4 hours ago [-]
Pro: every team probably needs user information, so don’t duplicate it in weird ways with uncertain consistency.
Con: it’s sadly likely that no one on your staff knows a damn thing about how an RDBMS works, and is seemingly incapable of reading documentation, so you’re gonna run into footguns faster. To be fair, this will also happen with isolated DBs, and will then be much more effort to rein in.
brandmeyer 3 hours ago [-]
They are very similar to the pros and cons of having a monorepo. It encourages information sharing and cross-linkage between related teams. This is simultaneously its biggest pro and its biggest con.
ink_13 4 hours ago [-]
(2024)
Just FYI, the article is two years old.
0xbadcafebee 6 hours ago [-]
Using GCP gives me the same feeling as vibe-coded source code. Technically works but deeply unsettling. Unless GCP is somehow saving you boatloads of cash, AWS is much better.
RDS is a very quick way to expand your bill, followed by EC2, followed by S3. RDS for production is great, but you should avoid the bizarre HN trope of "Postgres for everything" with RDS. It makes your database unnecessarily large, which expands your bill. Use it strategically and your cost will remain low while also being very stable and easy to manage. You may still end up DIYing backups. Aurora Serverless v2 is another useful way to reduce your bill. If you want to do custom fancy SQL/host/volume things, RDS Custom may enable it.
I'm starting to think Elasticache is a code smell. I see teams adopt it when they literally don't know why they're using it. Similar to the "Postgres for everything" people, they're often wasteful, causing extra cost and introducing more complexity for no benefit. If you decide to use Elasticache, Valkey Serverless is the cheapest option.
Always use ECR in AWS. Even if you have some enterprise artifact manager with container support... run your prod container pulls with ECR. Do not enable container scanning, it just increases your bill, nobody ever looks at the scan results.
I no longer endorse using GitHub Actions except for non-business-critical stuff. I was bullish early on with their Actions ecosystem, but the whole thing is a mess now, from the UX to the docs to the features and stability. I use it for my OSS projects but that's it. Most managed CI/CD sucks. Use Drone.io for free if you're small, use WoodpeckerCI otherwise.
Buying an IP block is a complicated and fraught thing (it may not seem like it, but eventually it is). Buy reserved IPs from AWS, keep them as long as you want, you never have to deal with strange outages from an RIR not getting the correct contact updated in the correct amount of time or some foolishness.
He mentions K8s, and it really is useful, but as a staging and dev environment. For production you run into the risk of insane complexity exploding, and the constant death march of upgrades and compatibility issues from the 12 month EOL; I would not recommend even managed K8s for prod. But for staging/dev, it's fantastic. Give your devs their own namespace (or virtual cluster, ideally) and they can go hog wild deploying infrastructure and testing apps in a protected private environment. You can spin up and down things much easier than typical AWS infra (no need for terraform, just use Helm) with less risk, and with horizontal autoscaling that means it's easier to save money. Compare to the difficulty of least-privilege in AWS IAM to allow experiments; you're constantly risking blowing up real infra.
Helm is a perfectly acceptable way to quickly install K8s components, big libraries of apps out there on https://artifacthub.io/. A big advantage is its atomic rollouts which makes simple deploy/rollback a breeze. But ExternalSecrets is one of the most over-complicated annoying garbage projects I've ever dealt with. It's useful, but I will fight hard to avoid it in future. There are multiple ways to use it with arcane syntax, yet it actually lacks some useful functionality. I spent way too much time trying to get it to do some basic things, and troubleshooting it is difficult. Beware.
I don't see a lot of architectural advice, which is strange. You should start your startup out using all the AWS well-architected framework that could possibly apply to your current startup. That means things like 1) multiple AWS accounts (the more the better) with a management account & security account, 2) identity center SSO, no IAM users for humans, 3) reserved CIDRs for VPCs, 4) transit gateway between accounts, 5) hard-split between stage & prod, 6) openvpn or wireguard proxy on each VPC to get into private networks, 7) tagging and naming standards and everything you build gets the tags, 8) put in management account policies and cloudtrail to enforce limitations on all the accounts, to do things like add default protections and auditing. If you're thinking "well my startup doesn't need that" - only if your startup dies will you not need it, and it will be an absolute nightmare to do it later (ever changed the wheels on a moving bus before?). And if you plan on working for more than one startup in your life, doing it once early on means it's easier the second time. Finally if you think "well that will take too long!", we have AI now, just ask it to do the thing and it'll do it for you.
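Even point 1 is a couple of SDK calls rather than a project these days. A rough sketch with the AWS SDK for JavaScript (account name and email are placeholders):

    import {
      OrganizationsClient,
      CreateAccountCommand,
      DescribeCreateAccountStatusCommand,
    } from "@aws-sdk/client-organizations";

    const org = new OrganizationsClient({});

    async function createMemberAccount(name: string, email: string) {
      // Kick off creation of a new member account in the organization.
      let { CreateAccountStatus: status } = await org.send(
        new CreateAccountCommand({ AccountName: name, Email: email })
      );
      // Account creation is asynchronous; poll until it settles.
      while (status?.State === "IN_PROGRESS") {
        await new Promise((r) => setTimeout(r, 5000));
        ({ CreateAccountStatus: status } = await org.send(
          new DescribeCreateAccountStatusCommand({ CreateAccountRequestId: status.Id! })
        ));
      }
      return status; // SUCCEEDED with an AccountId, or FAILED with a reason
    }

    createMemberAccount("security", "aws+security@example.com").then(console.log);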
zbentley 4 hours ago [-]
> Do not enable container scanning, it just increases your bill, nobody ever looks at the scan results.
God I wish that were true. Unfortunately, ECR scanning is often cheaper and easier to start consuming than buying $giant_enterprise_scanner_du_jour, and plenty of people consider free/OSS scanners insufficient.
Stupid self-inflicted problems, to be sure, but far from “nobody uses ECR scanning”.
themafia 2 hours ago [-]
> “This EC2 instance type running 24/7 at full load is way less expensive than a Lambda running”.
For the same amount of memory they should cost _nearly_ the same. Run the numbers. They're not significantly different services. Aside from this, you do NOT pay for IPv4 when using Lambda but you do on EC2, so Lambda is almost always less expensive.
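Back-of-envelope version of "run the numbers" (every price below is an assumption for illustration, roughly us-east-1 list prices; plug in your own workload):

    // Rough monthly cost comparison for a workload that needs ~2 GB of memory.
    const HOURS_PER_MONTH = 730;

    // Lambda: billed per GB-second plus per request (x86 list prices assumed).
    const lambdaGbSecond = 0.0000166667;
    const lambdaPerRequest = 0.0000002;
    const memoryGb = 2;
    const requestsPerMonth = 2_000_000;
    const avgDurationSec = 0.2;
    const lambdaMonthly =
      requestsPerMonth * avgDurationSec * memoryGb * lambdaGbSecond +
      requestsPerMonth * lambdaPerRequest;

    // EC2: a small always-on instance with ~2 GB RAM (t4g.small-ish price assumed),
    // plus the public IPv4 charge that Lambda doesn't incur.
    const ec2Hourly = 0.0168;
    const ipv4Hourly = 0.005;
    const ec2Monthly = (ec2Hourly + ipv4Hourly) * HOURS_PER_MONTH;

    console.log(`Lambda ~ $${lambdaMonthly.toFixed(2)}/mo, EC2 ~ $${ec2Monthly.toFixed(2)}/mo`);
    // Tweak requestsPerMonth and avgDurationSec to model your own utilization;
    // the crossover depends almost entirely on how busy the box actually is.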