Searching the web is a great feature in theory, but every implementation I've used so far looks at the top X hits and then interprets them as the correct answer.
When you're talking to an LLM about popular topics or common errors, the top results are often just blogspam or unresolved forum posts, so you never get an answer to your problem.
It's more an indicator that web search is less usable than ever, but it's interesting that it affects the performance of generative systems nonetheless.
Almondsetat 7 days ago [-]
>looks at the top X hits and then interprets them as the correct answer.
The longer I've been in the workforce, the more I realize most humans actually kind of suck at their jobs. LLMs being more human-like is the opposite of what I want.
_heimdall 7 days ago [-]
That could very well be because the jobs are effectively useless. By no means does that mean the people are useless, nor what the income allows them to do. But most jobs sure do seem pointless.
Toenex 6 days ago [-]
Maybe we already have Universal Basic Income, you just need to have a pointless job to collect it.
prawn 6 days ago [-]
One suggested weakness of UBI is a lack of purpose. I wonder if the "solution" is somewhat as you implied: jobs without a strict return on investment. You get your stipend, but you're keeping your block clean by sweeping and mulching. They're getting theirs in exchange for cranking out sourdough at cost for the neighbourhood. Someone else gardens for elderly residents.
jmrm 6 days ago [-]
Not UBI per se, but this exists in rural parts of Southern Spain in some way, and is called Rural Employment Plan (PER in its Spanish initials).
They give simple jobs, like cleaning or painting, to people at the bottom of the earnings scale. Most people in that plan have little formal education, like those who left school in their mid teens.
More like a labor subsidy, backed by taxes... Which would need a minimum wage law as well.
This seems like a great idea to me! Making it cheaper for businesses to hire people for these jobs would lower prices for everyone, improving accessibility of the services.
_heimdall 6 days ago [-]
I may just be missing how this would work.
How would this help lower prices? The taxes have to be paid for by someone, and that cost should largely end up landing on the consumer.
It seems like we'd be changing whose hands the money moves through, but it still has to be paid for one way or another. If that's the case we'd risk higher prices, since taxes have to subsidize prices and cover all the costs of running the program in the first place.
sdenton4 5 days ago [-]
Tax the rich, and use the funds to pay a portion of the wages in targeted jobs, reducing the amount that the business has to pay to hit minimum wage. Then businesses continue competing on prices, but have substantially lower labor costs, bringing down prices for everyone.
In the end, you use money from the rich to pay for socially beneficial jobs. Exactly the sort of thing government is for: ensuring that social goods are provided.
_heimdall 5 days ago [-]
That's an extremely complex economic change, I wouldn't be so certain we know exactly what would happen.
Taxing the rich can have unintended consequences. First you have to change the tax code so they actually get taxed and can't dodge it, those rules alone would be difficult to write effectively and would likely mean changing other parts of our tax code that impact everyone. If the rich do get taxed enough to cover a good chunk of wages, demand for luxury items would go down so too then would the jobs that make those products and services.
Once subsidized by a UBI, at best workers will continue to work at the same levels they do now. There will be an incentive for them to work less though, potentially driving up the labor costs you are trying to reduce. How do we accurately predict how many workers will reduce their hours or leave the workforce entirely? And how do we predict what that would do to prices?
The idea of taxing the rich to bail out everyone else is too often boiled down to a simple lever that, when pulled, magically fixes everything without any risk of unintended side effects.
sdenton4 5 days ago [-]
I'm not actually talking about ubi here: it's subsidizing labor for some class of essential jobs.
_heimdall 5 days ago [-]
Yep, sorry about that. I got my threads mixed up.
Capricorn2481 5 days ago [-]
But the idea of not changing the tax code because it might affect others, continuing to let the rich pay 0 taxes, is foolish.
There's an obvious wealth gap that's increasing and the people up top are getting even less oversight as we speak. As you say in your post, you don't know what the effects will be because it's not simple. But I see no compelling reason to continue with the oligarchy
_heimdall 4 days ago [-]
Sure that would be foolish, my point wasn't that taxes should remain as-is forever though.
My point was that we can change taxes to a system that we think will work better today, but we can't claim to know what the actual results will be years from now.
The claim made earlier in the chain was that taxing the rich to subsidize wages would lower labor costs and lower prices. I don't think we can ever know well enough how a broad-reaching change will land, and claiming to know prices will go down isn't reasonable.
6510 6 days ago [-]
That's just a cultural bias blind spot. It can be easily cured by finding a child, pointing your finger at them then say the magic words: "You must feel useless without a job!"
A much more terrible issue we suffer from already is that without participating we forget how our civilization works. Having a job gives you at least a tiny bit of insight that may partially map to other jobs.
rightbyte 6 days ago [-]
Cleaning, gardening and baking are proper jobs though.
Sharlin 6 days ago [-]
Funny, because lack of purpose is exactly the problem with monotonous shit jobs. Compared to being able to freely choose to do something that's meaningful to you and brings you joy. Merely being able to afford food and shelter is not a purpose. It's survival.
small_scombrus 6 days ago [-]
That sounds utopian
cruffle_duffle 6 days ago [-]
Oh but don’t worry I’m sure all the people who imagine these schemes assume they’ll be the ones who aren’t obsolete and forced to work menial jobs.
Very similar to how ultra hard core libertarians assume they’ll be the ones at the top of the food chain calling the shots and not be just another peasant.
But it doesn't really matter because there is no way in hell any of these LLMs will uproot all of society. I use LLMs all the time, they are amazing, but they aren't gonna replace many jobs at all. They just aren't capable of that.
econ 5 days ago [-]
If we come to our senses it should be obvious that everyone needs to be physically active at least a few days per week, we need to condition brain plasticity, have to keep learning new things.
The available work offers the entire spectrum but we have to divide and plan it.
starttoaster 6 days ago [-]
That sounds like a simpler life/role, not a pointless one.
techpineapple 6 days ago [-]
I'm sure I'm overidealizing, but I've wanted to live off-grid, or maybe in a small community.
I watch these historical farm documentary tv shows, and they show how everyone in a town had a purpose and worked together, the blacksmith, the tile maker.
And I do often think the limiting factor to a life like this is the “market” so if you could create these communities, and could be an artist/artisan/builder, without strictly having to worry about making enough to live.
I met someone recently who lived in the Galapagos Islands, and she seemed to sort of live this community-oriented, trading, anarcho-capitalist lifestyle. I think most people would be happier if their small capitalist or socialist community involved direct interaction with people rather than dealing with soulless corpos all the time.
hasbot 4 days ago [-]
I've lived off-grid for three long summers (late spring to early fall). It's tremendous work. Most of the same systems exist; it's just that one has to research, design, build, operate, maintain, and revise them instead of somebody else doing all that. Everybody has different goals, but for me, maintaining my own potable water system is not a goal or something I'm interested in. Living off-grid did change my perspective on some things. For example, I know now that I produce about a 4-gallon bucket of poop each month, and yet my house has a tremendous sewer connection.
rank0 6 days ago [-]
How do we determine who gets what job?
prawn 6 days ago [-]
Let people choose if they want to do something, but have a concerted effort to encourage/suggest things that might give them purpose and build a community. Leave them to decide their hours and effort. Maybe someone wants to clean the gutters for their entire block at 6am and then go tinker in the shed for half the day. I'm sure that sounds really lazy, but this concept is working up from a default UBI that is pay-for-no-job.
I can imagine loads of tasks or jobs that would be quite pleasant if it weren't for stressing over efficiency or business admin.
rank0 6 days ago [-]
Nobody is going to choose to be a ditch digger without a financial incentive. Most jobs worth doing are unpleasant or difficult. That's why people pay for the labor!
I mean think about it…when was the last time you heard of charity gutter cleaning services? People would much rather enjoy their leisure time on hobbies or with family/friends.
prawn 6 days ago [-]
Why would there not still be gutter cleaning or ditch digging companies? Or people cleaning their own gutters? I'm not familiar with UBI proposals that do away with traditional enterprises; it's generally suggested as raising the floor. People would have more time to clean their own gutters or use the money they receive to pay someone else.
In terms of charity cleaning services, there are people who clean hoarders' houses or landscape unruly yards for free on YouTube... ;)
eloisius 6 days ago [-]
> for free on YouTube
For free on YouTube in exchange for ad revenue
prawn 6 days ago [-]
I figured this went without saying, and the wink covered that it was barely a viable example.
collyw 6 days ago [-]
Imagine not using an ad blocker in this day and age.
rank0 6 days ago [-]
You provided the example…I still don’t understand why anyone would start working for free. They already have the liberty to do so and choose not to.
If the government gives out free money people will pocket it. Should not be controversial.
prawn 6 days ago [-]
I'm talking about gutters on the street, beside the kerb. I thought this was implied after I said "keeping your block clean by sweeping and mulching". You routinely see older people in Asia sweeping and raking a communal area if you get up early to walk. There's a (probably obsessive-compulsive) 60 yo guy a few houses down from me in Australia who might've retired early and now goes around raking verges and cleaning the footpath/gutters meticulously. Near my office, there's a woman who bakes bread for the joy of it and sells it at-cost via an honour-box in a sidestreet. She also turns verges and front yards (with owners' permissions) into a community vegetable garden. If others were given an opportunity equivalent to early retirement, these sorts of things might be more common.
As for why: for purpose, for praise, for community, for mental health, for trade/contribution, for skill building, etc. Loads of examples of this already. Maybe none of these things are attractive to you but I don't think that's universal.
Like I said, it's just trying to add to the default UBI, not getting everyone volunteering in their community or else.
david-gpu 6 days ago [-]
Most retirees, early or not, do not contribute to society with their labor nearly as much as they did during their working years. What makes us think that UBI beneficiaries would be any different?
ldoughty 6 days ago [-]
The idea behind UBI is that people do jobs that they want to do...
rank0 6 days ago [-]
Right! So everyone would choose to pursue passions/interests/leisure. We would be going into debt with no meaningful benefit to the taxpayer. Direct malinvestment.
fhd2 6 days ago [-]
This is drawing a line between "us" (tax paying citizens + the government) and "them" (people on benefits). I don't think it's that simple.
I imagine just like with existing benefits, the majority of people wouldn't feel great about being on UBI doing nothing, and they would pursue something that gives them a better social standing, a better sense of purpose, a good challenge, whatever motivates an individual. It's why lots of people do volunteer work, work on important open source software, and so on. Sure, there's outliers that actually proudly slack off, but you don't address specific problems with generic solutions.
But more importantly, having the _option_ to fall back on benefits means people need to take fewer risks to pursue their talents and likely be of more value to society than if they did whatever puts food on the table today. Case in point: People born into a family that can finance them through college are more likely to become engineers than people born into poor households. On the flip side, some people do white collar jobs vs something like being a medic to uphold their standard of living from the higher salary, not out of preference.
I think it would need careful management, but I believe there's every reason to be optimistic.
pishpash 6 days ago [-]
UBI isn't even needed if there's just universal housing, medical care, food and education. People will find enough work to get the rest, even if it's through barter.
rank0 6 days ago [-]
Dude...I mean this in the nicest way possible and only say it cause I think it's important for everyone to understand:
People work for money. If a job has no pay, you can't expect it to get done.
We need people to actually run hospitals, produce food, construct shelter/infrastructure, provide childcare/education, etc.
prawn 4 days ago [-]
What UBI proposals are you reading that do away with actual jobs? There would still be jobs for people doing those things you described.
rank0 3 days ago [-]
Okay…now that we agree that UBI won’t produce any meaningful labor. What benefit do we get out of the trillions of dollars of debt we’d be accumulating?
It’s a classic economic blunder that dictatorships love to make:
1. Create money & rack up debt.
2. Produce nothing.
3. Create inflationary crisis and exacerbate wealth inequality.
4. Highlight your good intentions and relish your new position as champion of the people.
cudgy 6 days ago [-]
Isn’t the investment to avoid a revolution? To avoid those that cannot find work from dismantling and tearing down everything around them so they can get what they need. Some might consider that to be a benefit to taxpayers and not a poor investment.
rank0 6 days ago [-]
Free money never works. It’s been attempted countless times. In fact, it exacerbates the wealth gap as the rich own assets that scale with inflation while the poor do not.
blackqueeriroh 6 days ago [-]
It seems to me that you’re confused about what people enjoy doing.
Also, it’s fascinating that you say “no benefit to the taxpayer” as if the taxpayer not having to work is somehow not a benefit?
borgdefenser 6 days ago [-]
No, you just live in a bubble of smart and really driven people.
The vast majority of people's passions are partying, sex, alcohol/drugs, watching sports, gossiping, generally wasting time. Things that mostly produce nothing.
This whole line of thought to me is embarrassingly clueless, naive and basically childish.
It is just mind blowing to me how smart people can't see what a bubble they live in.
I almost suspect, the higher a person's IQ, the more susceptible they are to living in a bubble that basically has nothing to do with the majority of people with an IQ of 100.
bluebarbet 6 days ago [-]
>It seems to me that you’re confused
A conversation that starts like this is not going to go well.
Galaxeblaffer 6 days ago [-]
there's no reason we couldn't incentivize the important jobs..
_heimdall 6 days ago [-]
How do you make sure that enough people want to do the necessary jobs?
And why do you need money at all in that scenario, at least for the basic items the UBI intends to make affordable to all? Why not just make them free and available to everyone?
vidarh 6 days ago [-]
You pay for them, on top of UBI.
No UBI proposal I'm aware of proposes UBI replaces salaries or is high enough to satisfy everyone. The "B" is for basic. Most people are not satisfied with earning a basic salary.
_heimdall 5 days ago [-]
I was very surprised during the pandemic response to see how many people were happy to take government checks plus unemployment rather than working.
I know a few people with small businesses in various manufacturing industries. They all had a really hard time finding enough people to work while stimulus checks were going out.
People wouldn't make quite as much, but they were happy to stay home and have the basics for "free" rather than have a job.
saagarjha 4 days ago [-]
Perhaps this is more a statement of the working conditions there than a comment on what people actually want to do.
tirant 6 days ago [-]
That's the most anti-social aspect of the UBI.
Historically, jobs or professions always existed around the intrinsic motivation of the person working and around the needs of the society around that person.
So you could become a poet, but if you do not write poems that people like, you would starve. Or you could become a farmer, provide the best apples in your city, and earn a more than deserved income.
That's why free economies have developed historically so much better than any centrally planned economy.
dataflow 6 days ago [-]
No we don't. We have too many people who -- even despite having respectable jobs -- can't afford the basic necessities for the month, let alone save for their future and family. The problem they're facing is the lack of the guaranteed basic income, not the lack of a job to collect it.
rank0 6 days ago [-]
This is not a fully solvable problem. Especially if the goal is to provide the above for any location.
You can do more harm than good by implementing policies like “guaranteed free money”.
wegfawefgawefg 6 days ago [-]
I cannot believe this was voted down. It is simply an assertion of fact. Whether true or not, it seems reasonable, and most people would agree with it.
dataflow 6 days ago [-]
> I can not believe this was voted down. It is simply an assertion of fact. Whether true or not, seems reasonable and most people would agree with it.
If it was voted down, I'm guessing it was because to the extent that it's a fact, it's trivially true, and there's nothing insightful about the defeatist take. It's possible to do more harm than good doing pretty much anything. And the world is littered with problems that are not "fully solvable" but that we've mitigated greatly.
wegfawefgawefg 6 days ago [-]
Consider the following hypothetical situation:
Let's say your car tires pop.
Person A: "I will paint your car tires red. That will fix them."
Person B: "Painting my flat car tires red won't fix them."
Person C: "Well, you're just being defeatist. We have to do something."
That study is about the impact on labor supply, not the usefulness of UBI.
rank0 6 days ago [-]
History is littered with failed nation-states promising to end poverty.
Spawning money creates nothing.
_heimdall 6 days ago [-]
UBI will almost certainly fail to cover the necessities unless we have Marx-style price controls.
When everyone in the economy has a minimum of say $3,000 per month the cost of necessities, and everything else, will go up roughly in line with that.
dataflow 6 days ago [-]
I wasn't here to take a stance on UBI or argue over its practicalities, I was just explaining the intended outcome was not what the parent believed it to be.
But fine, I'll bite.
> will go up roughly in line with that
Could you at least explain the logic that you believe implies this would occur with such certainty? I've thought about this before and I couldn't see this as a necessary outcome, though (depending on various factors) I do see it as a possible one.
rank0 6 days ago [-]
> Could you at least explain the logic that you believe implies this would occur with such certainty?
Because we haven’t actually created anything. Supply is the same, demand is WAY up.
dataflow 6 days ago [-]
That doesn't follow. It's a reason to believe prices will increase, not that prices will increase roughly in line with the income increase. This distinction is not a minor detail, it's pretty crucial. If you give people $3k and the prices go up by $2k... that's a very different scenario from one where the prices go up by $3k.
rank0 6 days ago [-]
It should all even out in the long run.
As long as we’re in a deficit, spending for this program would directly increase the money supply. Of course there are other factors like velocity of money and elasticity of good/services but at the end of the day we’re increasing the amount of money (aka cash + credit) with no change to supply AND we’re going into debt to do it.
_heimdall 6 days ago [-]
Capitalism is based on, among other things, an expectation that free markets are pretty good at balancing out in the long run. If demand goes up only because access to money goes up, prices will rise.
Any increase in supply over time will eat up some of that price fluctuation, but for most products prices are more flexible than supply and a majority share of any capital increase will go towards prices rather than supply.
dataflow 6 days ago [-]
> a majority share of any capital increase will go towards prices rather than supply
You actually made my point, I think: that the price increase need not necessarily be "roughly in line with that", but could be less.
This distinction is absolutely critical. Like I said in [1], if you put $3k in my pocket, and my expenses increase by $2k, that's a very different situation from if my expenses grow by $3k. It would mean there is a reachable equilibrium.
_heimdall 6 days ago [-]
When I said "in line" I didn't mean 1:1 or 100%. I may have picked a bad phrase there; I was intending to say that there would be a strong correlation between the two and that a majority of the extra money would go towards price increases.
I forget the general rule when it comes to companies, but there's a general percentage for how much of a cost increase on a company is passed on to consumers. If a company's tax rate goes up by 10%, something like 8% of that is passed on to the consumer through price increases. I'd expect something similar with a UBI.
dataflow 6 days ago [-]
> When I said "in line" I didn't mean 1:1 or 100%. I may have picked a bad phrase there; I was intending to say that there would be a strong correlation between the two and that a majority of the extra money would go towards price increases.
If so, then explain how you're making the jump from "prices increase some" to "you would need Marx style price controls" or "otherwise UBI will fail to cover the necessities"? If you give me $X and I spend $X * r of it due to price increases, and r < 1, then don't I have (1 - r) * $X left in my pocket, meaning it could be made large enough to cover the basic necessities? This isn't complicated math.
I don't get why "prices increase" is seen as such a mic-drop phrase that shows the system would fall apart. Prices already increase for all sorts of reasons, it's not like the economy falls apart every time or we somehow add Marx style price controls every time. Sure, prices increase some here too. And then what? The sky falls?
_heimdall 5 days ago [-]
Price increases aren't a mic drop in my opinion, and I don't mean to use them that way. As far as I can tell it's just an inevitability with anything like a UBI.
With regards to my claim that we'd need strong price controls: a UBI needs prices of the basics to remain stable. I won't go down the road of trying to define what "the basics" are here, that's a huge rabbit hole, so let's just leave it at the broad category in general.
If everyone can afford the basics, there is more demand for those items. Supply will likely increase eventually and eat up part of the demand increase, but the rest goes to prices. When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices of anything deemed a basic necessity.
dataflow 5 days ago [-]
> When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices
No. Just because something increases forever that doesn't mean it won't stabilize. Asymptotes, limits, and convergence are also a thing. You're making strong divergence claims that don't follow from your assumptions.
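The convergence point can be made concrete with a toy geometric-series model. This is purely illustrative, not an economic forecast: the $3,000 figure comes from the thread above, while the 0.7 pass-through fraction (how much of each UBI dollar shows up as higher prices) is an arbitrary assumption. The point is only that if pass-through is below 1, the "raise UBI, prices rise, raise UBI again" loop converges to a finite level instead of spiraling:

```python
# Toy model: each round, prices absorb a fixed fraction (pass_through < 1)
# of the latest UBI top-up, so the next top-up only needs to cover that
# induced price rise. The cumulative payment is a geometric series
# X * (1 + r + r^2 + ...) which converges to X / (1 - r).

def cumulative_ubi(initial=3000.0, pass_through=0.7, rounds=200):
    total, top_up = 0.0, initial
    for _ in range(rounds):
        total += top_up
        top_up *= pass_through  # next top-up covers only the induced rise
    return total

limit = 3000.0 / (1 - 0.7)        # closed-form limit of the series
print(round(cumulative_ubi(), 2))  # converges toward 10000.0
print(round(limit, 2))             # 10000.0
```

With pass-through at or above 1 the series diverges, which is the scenario where price controls or a spiral would follow; below 1, it stabilizes, which is the parent's point.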
galaxyLogic 6 days ago [-]
Governments already provide "free income" in the form of free or subsidized services.
Say you have a fire-department even though you personally might not be paying anything for it because you are so poor that you don't pay any taxes. You have police protecting you and the army. You have free primary school at least.
So I think the question is, would it help for the government to provide more, or less, or the same amount of free services as it does currently?
Would it "increase prices" if healthcare was free? Not necessarily I think. At least not the price of healthcare. Government would be in a much better position to negotiate drug-prices with pharmaceutical companies, than individuals are.
_heimdall 6 days ago [-]
If you have a government that runs a balanced budget, those services aren't free.
> Would it "increase prices" if healthcare was free?
That depends: who's ultimately footing the bill? If it's paid for with taxes on businesses, yes, most of that would be passed on to consumers in the form of price increases. If it's paid for by consumer taxes, ultimately you will find consumers demanding higher wages and prices would again go up. If it's paid for with tariffs, well, we'll find out soon, but prices should go up there as well.
galaxyLogic 6 days ago [-]
> those services aren't free.
They are free for poor people. For instance, basic education must be free, so we can have a productive work-force that can read and write and pay taxes in the future, which will make us even richer.
LinXitoW 6 days ago [-]
In a UBI situation demand would shift, not just go up. If there are two hypothetical people paying the tax, a very rich person (>300,000 a year) and a poor person (<50,000 a year), money effectively shifts from the rich person to the poor person (at least the majority). The poor person will have very different demands than the rich person.
Finally, we already do price controls and subsidies in many places, like food production. It's just that a big part of the advantage is soaked up by big companies.
_heimdall 6 days ago [-]
Right, so prices for items the rich people want would fall and prices for items everyone else wants would go up.
LinXitoW 6 days ago [-]
We already have "Marx-style" price control and regulations in many sectors, specifically food production. It's just that the advantages are arbitraged away by corporations using cheap corn to create highly addictive foods, and lobbying and marketing with the resultant profits.
But I also disagree with your assertion. Minimum wage increases are a great example. Opponents will constantly claim they will lead to massively increasing prices, but they never do. Moreover, a higher standard of employment rights and payment in first world countries like Norway doesn't seem to correlate well with higher Big Mac prices.
_heimdall 6 days ago [-]
> We already have "Marx-style" price control and regulations in many sectors, specifically food production.
And our food quality in the US is garbage. We can't say if there is causation there since we can't compare against a baseline US food system without subsidies, but there is a correlation in timing between the increase in food subsidies and the decrease in quality.
> Opponents will constantly claim they will lead to massively increasing prices, but they never do.
The only times that really comes up is when an increase is proposed and the whole debate is over politicized. Claims on both sides at those times are going to be exaggerated.
Prices absolutely go up with minimum wage increases. How could they not? It'd be totally reasonable to argue the timeline that matters, prices aren't going to go up immediately. You could also argue the ratio, maybe wage is increased by 30% and prices are only expected to go up by 20%.
People earning a minimum wage almost certainly have pent up demand, they would buy more if they could afford it. Increasing their wages opens that door a bit, they will spend more which means demand, and prices, will go up in response.
orangecat 6 days ago [-]
> You could also argue the ratio, maybe wage is increased by 30% and prices are only expected to go up by 20%.
And the point is that the income percentage increase is higher for those with lower incomes. Even if prices go up by 20%, somebody making $20k/year who gets an additional $10k from UBI is going to be much better off.
paulddraper 6 days ago [-]
Case study: COVID
winkeltripel 6 days ago [-]
That isn't a test of anything, since we've not isolated a single policy change; many things changed everywhere all over the world all at once.
almosthere 6 days ago [-]
Yes, I think there were a few things going on with covid, most of all the fact that shipping got halted for a year and we're still unwinding the damage from that (although it's mostly smooth now).
nmz 6 days ago [-]
And yet we have christmas, with its very own christmas bonus.
ornornor 6 days ago [-]
I never thought about it this way, but it does make sense.
nyarlathotep_ 6 days ago [-]
YES, this is exactly the case and why the Twitter layoffs and now the "DOGE" purge is a terrible thing (even in cases where it was totally legitimate to eliminate "waste").
"They had useless make-work jobs and sent 4 emails a week and watched TikToks the rest of the time"
So?
There are FAR too many people and nowhere near enough jobs for a large portion of people to do something that is both "real" and provides actual economic value.
Far more important that people have some form of dignity and can pay to feed their families and live a life with some material standard.
Anyone who's been in a corporate role knows there's loads of people that have a dubious utility and value--and people with "tech skills" are NOT exceptions to this rule, at all.
_heimdall 6 days ago [-]
We should be striving to build a world where people don't have to feel forced into meaningless jobs, not a system that encourages it.
If meaningless jobs are important because it's the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
blooalien 6 days ago [-]
> _heimdall: We should be striving to build a world where people don't have to feel forced into meaningless jobs, not a system that encourages it.
> If meaningless jobs are important because it's the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
^^^ 100% yes! That! ^^^
blooalien 4 days ago [-]
And that is why the human race is truly doomed (and well deserving of it). Nobody wants to fix the root cause of any problem. Instead, let's just keep ignoring the disease and only treat the symptoms... That'll solve everything.
mschuster91 6 days ago [-]
We don't just have "bullshit jobs" (which is an actual term these days), we have a "bullshit economy" as well - centered around advertising because without advertising most of the bullshit just wouldn't sell.
Like, if you already got a car, you can drive it for 10-20 years easily, or more if you take good care of it. But advertising makes you think you "need" a new car every few years... because that keeps the economy alive. You buy a car and sell the old one to someone else who can't afford a new car but also wants a new one, so their old car goes off to Africa or whatever to be repaired until truly unrepairable. But other than the buyer in Africa who actually needed a car, neither you nor the guy who bought your old one actually needed a new car. And cars are a massive industry that employs many millions of people worldwide - so if you'd ban advertising for cars, suddenly the bubble would pop and you'd probably have a fifth of the size remaining, and most of it from China because the people in Africa can't afford what a brand new Western-made car costs.
Or Temu, Shein, Alibaba and godknowswhat other dropshipping scammers. Utter trash that gets sold there, but advertising pushes people to buy the trash, wear it two times and then toss it.
A giant fucking waste of resources because our worldwide economy is based on the dung theory of infinite growth. It has worked out for the last two, three centuries - but it is starting to show its cracks, with the planet itself being barely able to support human life any more as a result of all that resource consumption, or with the economy and the public sector being blown up by "bullshit jobs".
We need to drastically reform the entire way we want to live as a species, but unfortunately the changes would hurt too many rich and influential people, so the can gets kicked ever further down the road - until eventually, in a few decades, our kids are gonna be the ones inevitably screwed.
s1mplicissimus 6 days ago [-]
I agree on almost all of your points, but what makes you think it's only/primarily the "public sector" that is being blown up by bullshit jobs?
I've worked for a fair amount of private sector companies and the amount of "bosses nephew", "copy data from one form to another twice a day" and "waste everyone's time by creating pointless meetings" jobs was already more than enough to explain the status quo.
nyarlathotep_ 6 days ago [-]
No, "bullshit jobs" are everywhere--loads in the private sector as well.
Perhaps sleepy sinecures are more prevalent in the public sector (especially post-FAANG layoffs), but not unique to it.
In addition, there's plenty of jobs that are demanding, stressful, and technically difficult but are ultimately towards useless or futile ends, and this is known by parties with a sober perspective.
When I worked as a consultant, I was on MANY projects where everything was pants-on-fire important to deliver projects to clients for POCs and/or overpriced/overengineered junk that they were incapable of maintaining long-term (and in many cases, created more problems than it ostensibly solved).
All that work was pure bullshit; I was never once in denial of that fact. Fake deadlines, fake projects, fake urgency, real stress. Bullshit comes in many forms.
mschuster91 6 days ago [-]
> I agree on almost all of your points, but what makes you think it's only/primarily the "public sector" that is being blown up by bullshit jobs?
"the economy" = private sector / everything not government; "public sector" = government / fully government owned companies.
And both are horribly blown up due to all the bullshit and onerous bureaucracy that's mostly there because apparently you can't trust the people you entrust with a train carriage worth tens of millions of euros to correctly deal with the cash register of the onboard restaurant.
nine_k 6 days ago [-]
Cars from 20 years ago emit significantly more polluting substances. OTOH they are lighter weight and thus wear the roads less. On the third hand, none of them is electric or hybrid.
Some computers from 20 years ago are still in good shape, but...
(You can continue.)
nkozyra 6 days ago [-]
I think this is a different argument than the disposable, single-use economy being described.
The volume of things we buy but don't need (or necessarily want) drives a huge sector of the global economy. We're working to fill our lives with unnecessary things that bring us no happiness beyond the adrenaline hit when we hit "Buy Now" and the second one when the Prime box arrives at our door.
Consumerism masks the underlying problem and it's only going to get worse as more is automated. Producers will have an incentive to convince us we still need more.
Cars are - to me - a red herring in this argument except for the people who do literally trade in for a new car every few years. I drive whatever fairly boring Honda for as long as I can (usually 8-10 years) and don't feel a ton of regret about investing in comfort. But I've been as guilty as anyone about just buying stuff because it pops up in an ad or recommended on Amazon, etc.
DickingAround 6 days ago [-]
Just because a whole industry is bullshit doesn't mean I should force it to not exist. I don't like musicals. I don't understand or care anything about their culture. But it has a right to exist. Some people are into musicals. Their existence or non-existence isn't my problem and it isn't my business. We cannot and should not try to engineer the world around what we personally find valuable and ignore what others find valuable, even if they got their opinions from an ad, or their parents did and they inherited it.
paulddraper 6 days ago [-]
There is way more to do than we have time to do it.
_heimdall 6 days ago [-]
Why don't we have time to do it now?
paulddraper 6 days ago [-]
There's a lot to do.
gmadsen 6 days ago [-]
sounds like a Terry Gilliam dystopia
nlawalker 6 days ago [-]
This is amazing, this would light up r/showerthoughts.
dartos 6 days ago [-]
You should look up the definition of “universal”
pizza 6 days ago [-]
Yup: free solar photons, and living.
collyw 6 days ago [-]
USAID seems to demonstrate that to an extent, though it was far from universal
h2zizzle 6 days ago [-]
Many jobs are quite helpful and even necessary, if done for ~2 hours a day. They become "useless" in aggregate when they're forced to be minded by the same person for 8 hours (because of opportunity cost, effects on health well-being, etc., you end up "breaking even" or worse on QoL and net productivity).
Overall economic productivity is high enough that a lot of positions could be split into 2 or 3 short shifts, at full pay - IF you don't factor in the various financial boondoggles that we've gotten ourselves wrapped up in. If you made the decision to wipe out a lot of these obligations (mostly to rich people), we could get to that kind of set-up, solvently.
Paracompact 5 days ago [-]
I imagine you're a fellow Graeberian. I feel the same way you do (and deeply so), but I don't have the confidence to give numbers, let alone such idealistic ones. How do you support your own numbers?
Workaccount2 6 days ago [-]
It's kinda like online games. Most people who play a game are not too great at it, a large subset is pretty good, and then it's smaller and smaller groups as the ability increases.
At the top you get the people who are true pros, they write the books, the guides, they solve the hardest problems, and everyone looks up to them. But spin the wheel and get a random SWE to do some work? It's not gonna be far off from a random 1v1 lobby.
> And for games like Overwatch, I don't think improving is a moral imperative; there's nothing wrong with having fun at 50%-ile or 10%-ile or any rank. But in every game I've played with a rating and/or league/tournament system, a lot of people get really upset and unhappy when they lose even when they haven't put much effort into improving. If that's the case, why not put a little bit of effort into improving and spend a little bit less time being upset?
Interesting read, but I feel like the author could've spent just one more minute on this sentence. How good you are at a given activity often doesn't matter, because you're mostly going to encounter people around your own level. What I'm saying is, unless you're at the absolute top or the absolute bottom, you're going to have a similar ratio of wins to losses regardless of whether you're a pro or an amateur, simply because an amateur gets paired with other amateurs, while a pro gets paired with other pros. In other words, not being the worst is often everything you need, and being the best is pretty much unreachable anyway.
This can be very well extended to our discussion about SWEs. As long as you're not the worst nor the best, your skill and dedication have little correlation with your salary, job satisfaction, etc. Therefore, if you know you can't become the best, doing bare minimum not to get fired is a very sensible strategy, because beyond that point, the law of diminishing returns hits hard. This is especially important when you realize that usually in order to improve on anything (like programming), you need to use up resources that you could use for something else. In other words, every 15 minutes spent improving is 15 minutes not spent browsing TikTok, with the latter being obviously a preferable activity.
fragmede 6 days ago [-]
Wait, but I'm on ProgrammerTok to improve my skills while I'm waiting for my code to compile!
SamPatt 6 days ago [-]
>Just for example, if you're a local table tennis hotshot who can beat every rando at a local bar, when you challenge someone to a game and they say "sure, what's your rating?" you know you're in for a shellacking by someone who can probably beat you while playing with a shoe brush (an actual feat that happened to a friend of mine, BTW). You're probably 99%-ile, but someone with no talent who's put in the time to practice the basics is going to have a serve that you can't return as well as be able to kill any shot a local bar expert is able to consistently hit.
And it's very easy to forget when you're the guy going to the club just how bad most regular players are.
I'm in a table tennis club, my rating is solidly middle of the pack, and so I see myself as an average player. But the author is correct, I would destroy any casual player. I almost never play casual players, though.
Not sure how applicable this is to software engineering.
hatthew 6 days ago [-]
Competitive games are complex. It's hard to be 95th percentile. There are so many mistakes one can make; even if each individual mistake is unlikely, it's likely that a mistake will be made. I play Dota 2, and literally everyone makes noticeable mistakes, even including tier 1 pro players and the top-ranked pub players. I honestly find it amazing how good people are given how complex the domain is.
Now scale that up 10x, because reality is at least an order of magnitude more complex than a video game.
bravetraveler 6 days ago [-]
The work is mysterious and important
nimish 6 days ago [-]
David Graeber proven right every day.
goatlover 6 days ago [-]
There's only one room I haven't been to yet, and today it had a name on it.
TeMPOraL 6 days ago [-]
Username checks out. Go back to your department.
blackqueeriroh 6 days ago [-]
most jobs are absolutely not useless. They might seem useless to you, but the work has to get done.
Personally, I think that a receptionist at a building is useless, but I would be pretty pissed off if my packages kept getting stolen or I had to go get each one when it came at my place of business.
isaacremuant 6 days ago [-]
Or maybe just extremely inefficient due to the huge complexity of reality and how it hides a lot of the power dynamics and real decision making.
Big entities are such that if you take it all down, you feel the loss of output (maybe value, maybe something else), but if you take out huge chunks, you might not feel much, because they're so extremely ineffective and value creation doesn't correspond with value received by the individuals who created it.
Both are true, separately in different situations. And sometimes both at the same time.
There are a lot of useless employees out there. So, so many.
And a ton of bullshit jobs as well.
eru 6 days ago [-]
> But most jobs sure do seem pointless.
Do you include the private sector?
Why do corporations engage in this kind of charity? Do we need more competition?
throw10920 6 days ago [-]
What's the evidence for "most"?
jimbokun 6 days ago [-]
Unfettered capitalism is pretty good at figuring out which is which. It's pretty core to Elon Musk's animating philosophy: cut as many jobs as possible, then see if there's any negative impact.
Not as appropriate in a government setting where the impact goes far beyond personal profit and loss.
matwood 6 days ago [-]
The problem is defining negative impact and also timing. For example, I can stop doing backups and save time and money. There is zero negative impact right up until the point I need to use the backup, then the impact is catastrophic.
jimbokun 6 days ago [-]
Sure. Another fallout of unfettered capitalism. Just with an indeterminate delay between cause and effect.
exe34 6 days ago [-]
if a CEO can run 7 companies and still play monopoly with the ship of government, then maybe CEOs aren't really that useful.
wil421 6 days ago [-]
He's the hype man, it's the band and the crew who are running the show.
jimbokun 6 days ago [-]
Recent changes to Tesla’s stock price suggest otherwise.
_heimdall 6 days ago [-]
They would suggest that only if the primary reason behind valuation changes is company performance and not political sentiment.
aianus 6 days ago [-]
Yeah it’s only worth 700B now, what a loser /s
exe34 6 days ago [-]
money makes not one a winner.
shreezus 6 days ago [-]
This is why agentic AI will likely cause a cataclysm in white-collar labor soon. The reality is, a lot of jobs just need "OK" performers, not excellent ones, and the tipping point will be when the average AI is more useful than the average human.
agrippanux 6 days ago [-]
I had a similar conversation with my CEO today - how does the incoming crop of college grads deal with the fact AI can do a lot of entry level jobs? This is especially timely for me as my son is about to enter college.
So I ended up posing the question to Claude and the response was “figure out how to work with me or pick a field I can’t do” which was pretty much a flex.
achempion 6 days ago [-]
Do you have an example of at least one entry-level job an AI can do? What is the evidence that AI can do such a job?
koreth1 6 days ago [-]
On some level, though this isn't quite what the person you're replying to was saying, it doesn't really matter whether AI actually can do any entry-level jobs. What matters is whether potential employers think it can.
To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.
Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.
mattlutze 6 days ago [-]
Customer service, entry sales, jr data/business specialist
- Ada's LLM chatbot does a good enough job to meet service expectations.
- AgentVoice lets you build voice/sms/email agents and run cold sales and follow ups (probably others better it was just the first one I found)
- Dot (getdot.ai) gives you an agent in Slack that can query and analyze internal databases, answering many entry level kinds of data questions.
Does that mean these jobs at the entry level go away? Honestly probably not. A few fewer will get hired in any company, but more companies will be able to create hybrid junior roles that look like an office manager or general operations specialist with superpowers, and entry level folks are going to step quickly up a level of abstraction.
achempion 6 days ago [-]
Thank you for mentioning some cool projects; they all seem to target very specific use-cases not necessarily handled by junior roles. I guess PaaS services like Heroku/Render/Fly took away junior DevOps roles then, but at least PaaS don't hallucinate or generate infra that is subtly wrong in non-obvious ways.
ornornor 6 days ago [-]
Paradoxically, the hardest jobs to automate are physical jobs, it seems. A white collar worker is threatened by AI, blue collar not as much. I can totally envision AI software engineers (they're already okay if you check their work), but as of yet there are no AI plumbers or mechanics. Maybe there won't be, given the costs associated with producing physical machines vs software ones.
metek 6 days ago [-]
Your average white collar worker is certainly challenged, but I think the talent of neurodiverse people is going to become even more vital as average-ability people are more and more challenged. Of course, there's the saying:
"A man is his own easiest dupe, because what he wishes to be true, he will generally believe to be true." and I'm neurodivergent, so it makes sense that my assumption that shit'll probably turn out okay for me is a foregone conclusion.
parentheses 6 days ago [-]
It's just a matter of time. Your statement assumes AI won't help to develop robotics.
Robotics is the big unlock of AI since the world is continuous and messy; not discrete. Training a massively complex equation to handle this is actually a really good approach.
throw234234234 6 days ago [-]
I'm not sure about that. For them to actually be economically useful is a high bar. More so than you think - it isn't just our brains but our strength, metabolisms, and more in a single package.
For example you need them to:
- Meet high energy requirements in varied environments: run all day (and maybe all night too, which MAY be an advantage over humans). In many environments this means much better power sources than current battery technology, especially where power is not provisioned (e.g. many different sites) or where power lines are a hazard.
- Have low failure rates. Unlike software, failing fast and iterating are not usually options in the physical domain. Failure sometimes has permanent and far-reaching costs (e.g. resource wastage, environmental contamination, loss of lives, etc.)
- Be lightweight and agile. This goes a little against No. 1 because batteries are heavy. Many environments where blue-collar workers go are tight, have limited weight-bearing capacity, etc.
- Handle "snowflake" situations. Even in house repair there is different standards over the years, hacks, potential age that means what is safe to do in one residence isn't in another, etc. The physical world is generally like this.
- Unlike software the iteration of different models of robots is expensive, slow, capital intensive and subject to laws of physics. The rate of change will be slower between models as a result allowing people time to adapt to their disruption. Think in terms of efficient manufacturing timelines.
- Anecdotally, many tradespeople I know, after talking to many tech people, hate AI and would never let robots on their site to teach them how to do things. Given many owners are also workers (more small business), the alignment between worker and business owner in this regard is stronger than in a typical large organisation. They don't want to destroy their own moat just because "it's cool", unlike many tech people.
I can think of many many more reasons. Humans evolved precisely for physical, high dexterity work requiring hand-eye co-ordination much more so than white collar intelligence (i.e. Moravec's Paradox). I'm wondering whether I should move to a trade in all honesty at this stage despite liking my SWE career. Even if robots do take over it will be much slower allowing myself as a human to adapt at pace.
mattlutze 6 days ago [-]
From a very inhuman perspective, and one I don't find appropriate to generally use: A human physical worker is a high capital and operational expense. A robot may not have such high costs in the end.
Before a human physical worker can start being productive, they need to be educated for 10-16+ years, while being fed, clothed, sheltered and entertained. Then they require ongoing income to fund their personal food, clothing and shelter, as well as many varieties of entertainment and community to maintain long-term psychological well-being.
A robot strips so much of this down to energy in, energy out. The durability and adaptability of a robot can be optimized to the kinds of work it will do, and unit economics will design a way to make accessible the capital cost of preparing a robot for service.
Emotional opinions on AI aside, we will I think see many additional high-tech support options in the coming decade for physical trades and design trades alike.
throw234234234 5 days ago [-]
While I agree with you this cost isn't really borne by the people employing the human. Maybe the community, the taxpayer, even parents, but not the employer. As such these costs you mention are "sunk" - in the end as an employer I either take on a human ready to go or try to develop robots. That cost is subsidized effectively via community agreement not just for economics but for societal reasons. Generally as an trades employer I'm not "big tech" with billions of dollars in my back pocket to try R&D on long shots like AI/Google Deepmind/etc that most people thought would never go anywhere (i.e. the AI winter) - I'm usually a small business servicing a given area.
I'm not saying the robots aren't coming - just that it will take longer, and being disrupted last gives you the most opportunity to extract higher income for longer and switch to capital vs labor for your income. I wouldn't be surprised if robots don't make any inroads into the average person's life in the coming decade, for example. As intellectual fields are disrupted, purchasing power will transfer to the rest of society, including people not yet affected by the robots, making capital accumulation for them even easier at the expense of AI-disrupted fields.
It is a MUCH safer path to provide for yourself and others assuming capitalism in a field that is comparatively scarce with high demand. Scarcity and barriers to entry (i.e. moats) are rewarded through higher prices/wages/etc. Efficiency while beneficial for society as a whole (output per resource increases) tends to punish the efficient since their product comparatively is less scarce than others. This is because, given same purchasing power (money supply) this makes intelligence goods cheaper and other less disrupted goods more expensive all else being equal. I find tech people often don't have a good grasp of how efficiency and "cool tech" interacts with economics and society in general.
In the age of AI the value of education and intelligence per unit diminishes relative to other economic traits (e.g. dexterity, social skills, physical fitness, etc). It's almost ironic that the intellectuals themselves, from a capitalistic viewpoint, will be the ones that destroy their own social standing and worth comparatively to others. Nepotism, connections and skilled physical labor will have a higher advantage in the new world compared to STEM/intelligence-based fields. Will be telling my kids to really think before taking on a STEM career for example - AI punishes this career path economically and socially IMO.
metek 6 days ago [-]
There's more options than those two; there's a reason that "spanner in the works" is a colloquialism. Humans become disagreeable when our status is challenged, and many people are very attached to the status of "employed".
cruffle_duffle 6 days ago [-]
Ask your CEO why the AI can’t replace their job. Because most of their job is just regurgitating what an LLM might spit out.
throw234234234 5 days ago [-]
That's easy. The CEO has authority and social connections, has done mutually beneficial deals, has the soft skills/position to command authority over others, has leverage over others, etc., which is an economic asset. In an AI world this skill comparatively is MORE scarce than intelligence-based skills (e.g. coding, math, physics, etc) and so will attract a greater premium. Nepotism and other economic advantages will play a bigger role in an AI world.
AI rewards the skills it does not disrupt. Trades, sales people, deal makers, hustlers, etc will do well in the future at least relatively to knowledge workers and academics. There will be the disruptors that get rich for sure (e.g. AI developers) for a period of time until they too make themselves redundant, but on average their wealth gain is more than dwarfed by the whole industry's decline.
Another case of tech workers equating worth to effort and output; when really in our capitalistic system worth is correlated to scarcity. How hard you work/produce has little to do with who gets the wealth.
matwood 6 days ago [-]
Claude isn't wrong. The baseline for entry level has just risen. The problem isn't that it's risen (this happens continuously even before LLMs), but the speed at which it has increased.
j_timberlake 6 days ago [-]
I expect that AI good enough to automate jobs will also be dangerously good at criminal activities.
Governments will want to ban them, but there's just too much $$$ to be made from replacing employees, so things will get complicated fast.
int_19h 6 days ago [-]
They are already good at criminal activities such as phishing. That bar is rather low, especially once you scale up (hitting 100 people and successfully scamming 1 is still great ROI with cheap small models).
But I don't see what governments can really do about it. I mean, sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there, it's just a large file, easy to torrent etc. The code that's needed to run it is also out there and open source. Cracking down on top-end hardware (and note that at this point it means not just GPUs but high-end PCs and Macs as well!) is easier to enforce but will piss off a lot more people.
Workaccount2 6 days ago [-]
It's just going to turn into an arms race of AI trying to stop AI.
chrsw 6 days ago [-]
Maybe I'm missing something, but we seem to be a long way off from the wave of AI replacing a lot of jobs, or at least my job. By title I'm a Software Engineer. But the work that I do here, that we do, well frankly, it's a mess. Maybe AI can crank out code, but that's actually not the hardest part of the job or the most time-consuming part. Maybe AI will accelerate certain aspects but overall, we will all be expected to do more. Spelling and grammar checkers are great. But when you're writing five times the amount you used to write, you barely even notice.
jimbokun 6 days ago [-]
The excellent performers are only one or two turns of Moore's law away from the OK ones.
freehorse 6 days ago [-]
If Moore's law is applicable in such a case, that is.
CM30 6 days ago [-]
A surprising number of jobs could probably be done with AI right now, depressingly enough. Look at programming. Yes, AI is nowhere near as good as a decent programmer, can't handle rarer or more esoteric languages and frameworks well and struggles to fix its own issues in many circumstances. That's not good enough for a high level FAANG job or a very technical field with exact requirements.
But there are lots of 'easy' development roles that could be mostly or entirely replaced by it nonetheless. Lots of small companies that just need a boring CRUD website/web app that an AI system could probably throw together in a few days, small agency roles where 'moderately customised WordPress/Drupal/whatever' is the norm and companies that have one or two tech folks in-house to handle some basic systems.
All of these feel like they could be mostly replaced by something like Claude, with maybe a single moderately skilled dev there to fix anything that goes wrong. That's the sort of work that's at risk from AI, and it's a larger part of the industry than you'd imagine.
Heck, we've already seen a few companies replacing copywriters and designers with these systems because the low quality slop the systems pump out is 'good enough' for their needs.
nyarlathotep_ 5 days ago [-]
There's quite a few companies (consulting companies/IT staffing) that make tons of money doing staff aug etc. for non-"tech" companies. Many of these companies have notoriously poor reputations for low-quality work/running out the clock while doing little actual work.
From experience dealing with a few of these companies, there's almost no chance that "vibe coding" whatever thing is going to be anything other than a massive improvement over what they'd otherwise deliver.
Thing is, the companies hiring these firms aren't competent to begin with, otherwise they'd never hire them in the first place. Maybe this actually disrupts those kinds of models (I won't hold my breath).
bobxmax 6 days ago [-]
It's quite odd that people think of hallucinations as a dealbreaker for LLMs. Have they ever even met a human being?
_carbyau_ 6 days ago [-]
And when I find a human hallucinating at the job I absolutely need them to do, I avoid them where possible too!
But honestly, LLMs are here to stay. I don't like them for zero verification + high trust requirements. IE when the answer HAS to be correct.
But generating viewpoints and ideas, and even code are great uses - for further discussion and work. A good rubber duck. Or like a fellow work colleague that has some funny ideas but is generally helpful.
aprilthird2021 6 days ago [-]
The problem is that human beings are far more likely to know what they don't know. And we build a lot of our trusting work environments around that feature. An LLM cannot know what it doesn't know by definition.
bobxmax 6 days ago [-]
I don't believe that's true at all. LLMs, especially reasoning models, tend to be quite good at calling out gaps in their knowledge and understanding.
LLMs also don't have the ego, arrogance and biases of humans.
aprilthird2021 5 days ago [-]
If you know what an LLM is and how it is trained you'll know that it fundamentally cannot know where its gaps in understanding and knowledge are
theshackleford 6 days ago [-]
> The problem is that human beings are far more likely to know what they don't know.
I’ve spent a career dealing with the complete opposite. People with egos who just cannot bear to admit when they don’t know and will instead just dribble absolute shit just as confidently as an LLM does until you challenge them enough that they just decide to pretend the conversation never happened.
It’s why I, someone fairly mediocre have been able to excel because despite not being the smartest person in the room, I can at least sniff bullshit.
aprilthird2021 5 days ago [-]
Yeah sure, some people do this. But average humans understand the limit of their knowledge. LLMs cannot do that. You can find the right person for a space where this knowledge of limitations is necessary. Can't find an LLM which does that
theshackleford 5 days ago [-]
I will grant you that there are at least some of us capable of this, where you’ll find no LLM capable.
> average humans understand the limit of their knowledge.
We’ll have to agree to disagree here. I’d call it a minority, not the average.
Which is why we live in a world where huge numbers of people think they know significantly more than they do and why you will find them arguing that they know more than experts in their fields. IT workers are particularly susceptible to this.
aprilthird2021 3 days ago [-]
I can accept what you're saying, while knowing the exact measure of how many people truly know their limits is unknown, but at least it is nonzero
CooCooCaCha 6 days ago [-]
People suck at intellectual tasks but for stuff like locomotion and basic planning we humans are geniuses compared to machines. There isn't a robot today that could get in a car, drive to the grocery store, pick stuff off the shelf, buy it, and bring it back home. That's so easy it's automatic for us.
lutusp 6 days ago [-]
> The longer I've been in the workforce, the more I realize most humans actually kind of suck at their jobs.
Ugh... I've been in IT for over a decade now, and for many of the vacancies I see I don't consider myself/my CV good enough. Then I work with the people who get hired for these jobs and see how low they set the bar, even though their CVs might tick all the boxes.
metek 6 days ago [-]
I try to apply my layman's understanding of whatever law of thermodynamics states that a minimum of <x> percent of a reaction's energy is lost as waste heat: whatever you try to do in life, <x> percent of your effort is going to be spent dealing with people who are utterly incompetent. I try to apply it to myself as well; there are certainly many things I'm utterly helpless with, and I want to account for the extra effort required to carry out a given task despite those shortcomings.
adverbly 6 days ago [-]
Do they suck at their jobs or do their jobs suck?
matt_heimer 6 days ago [-]
The book Artificial Intelligence: A Modern Approach starts by talking about schools of thought on how to gauge if an agent is intelligent. I think it was mimicking human behavior vs behaving rationally which I thought was funny.
nemo44x 6 days ago [-]
Ever heard the saying “good help is hard to find.”? It’s not bullshit, it really is.
ChrisRR 17 hours ago [-]
Bold of you to assume that most people even bother googling simple questions
dartos 6 days ago [-]
Splitting hairs, but LLMs themselves don’t search.
LLMs themselves don’t choose the top X.
That’s all regular flows written by humans run via tool calls after the intent of your message has been funneled into one of a few pre-defined intents.
eddd-ddde 5 days ago [-]
How do you know? You could 100% create a tool to search and choose results, go through links, read more pages, etc.
dartos 5 days ago [-]
I know because that’s how these systems are built.
I’ve built systems like it.
If it was something brand new, Anthropic would be bragging hard about it.
> You could 100% create the tool to search and chose results, go through links, read more pages, etc.
That’s exactly what I’m saying. _YOU_ could build a tool that does that. The LLM essentially acts as an intent detector, not a web crawler.
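The split being described here can be sketched in a few lines: the model emits a tool call, and plain human-written code performs the search and decides which results the model ever sees. Everything below (function names, the stubbed search) is hypothetical, not any vendor's actual implementation:

```python
# Hypothetical sketch: the LLM only produces a tool-call request; ordinary
# code runs the search, applies the top-k cutoff, and feeds results back.

def search_web(query: str, top_k: int = 5) -> list[dict]:
    """Plain code: hit a search API and keep the top_k hits (stubbed here)."""
    hits = [{"title": f"result {i}", "snippet": "..."} for i in range(20)]
    # The cutoff is hardcoded by humans, not chosen by the model.
    return hits[:top_k]

def handle_turn(model_output: dict) -> str:
    """Dispatch a tool call emitted by the model."""
    if model_output.get("tool") == "web_search":
        results = search_web(model_output["arguments"]["query"])
        # The model only ever sees what this code decided to pass along.
        return "\n".join(r["title"] for r in results)
    return model_output.get("text", "")

reply = handle_turn({"tool": "web_search",
                     "arguments": {"query": "rust borrow checker error"}})
```

The point of the sketch: the "intelligence" in the search step lives in the dispatcher and the search API, not in the model itself.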
pizza 6 days ago [-]
It would probably be really great for web-searching LLMs to let you calibrate how they look for info: do a small demonstration of how you would pick results yourself, then have that preference feedback stored in your profile's system prompt somehow.
rendaw 6 days ago [-]
Here though they're not replacing a random person, they're replacing _you_ (doing the search yourself). _You_ wouldn't look at the top X hits then assume it's the correct answer.
wvh 6 days ago [-]
Be careful what you call AI, you might just get what you wish for...
LightBug1 6 days ago [-]
Degenerative AI ?
johndhi 6 days ago [-]
lol
johntb86 6 days ago [-]
I've found that OpenAI's Deep Research seems to be much better at this, including finding an obscure StackOverflow post that solved a problem I had, or finding travel wiki sites that actually answered questions I had around traveling around Poland. However it finds its pages, they're much better than just the top N Google results.
wongarsu 6 days ago [-]
Grok's DeepSearch and DeeperSearch are also pretty good, and you can look at their stream of thought to see how it reaches its results.
Not sure how OpenAI's version works, but Grok's approach is to do multiple rounds of searches, each round more specific and informed by previous results.
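That multi-round approach might look roughly like the loop below, with a stubbed search API and a refine() step standing in for the LLM call that narrows the query each round; all names are illustrative:

```python
# Toy sketch of iterative "deep search": each round's results inform a
# more specific query for the next round.

def search(query: str) -> list[str]:
    # Stub for a real search API call.
    return [f"page about {query}"]

def refine(query: str, results: list[str]) -> str:
    # In a real system an LLM would read `results` and narrow the query.
    return query + " (refined)"

def deep_search(initial_query: str, rounds: int = 3) -> list[str]:
    query, collected = initial_query, []
    for _ in range(rounds):
        results = search(query)
        collected.extend(results)
        query = refine(query, results)  # next round is more specific
    return collected
```

The design choice worth noting is that quality comes from the refinement loop, not from trusting any single round's top hits.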
infecto 6 days ago [-]
Grok is still lightyears behind OpenAI when it comes to deep research capabilities. While its model might hold up reasonably well against something like o1, the research functionality feels rudimentary, almost elementary compared to what OpenAI offers. It might serve as a decent middle ground for basic search tasks.
labrador 6 days ago [-]
My disgust and hatred for Elon Musk prevents me from giving Grok a fair chance. I realize this is my psychological problem. I own it, but as far as I can tell, I'm not missing much.
fpgaminer 6 days ago [-]
It is paramount to a functioning society to have zero tolerance for nazis.
labrador 6 days ago [-]
Thank you for saying this. I recently heard that story "if you let one Nazi in your bar, pretty soon you have a Nazi bar"
sshine 6 days ago [-]
Also, if you call everyone a nazi, all you have is nazi bars. I was called a nazi last week for driving a Tesla, and I have Jewish ancestry. The word hardly makes any sense.
Sammi 6 days ago [-]
In this case you aren't being called a nazi because of your ancestry. You're being called a nazi for supporting the car brand of a nazi. It does make sense.
A lot of people who definitely were not intending to be nazis are driving swasticars, because they didn't know how nazi the car company owner was. But here we are. You definitely know now. What you do now matters.
infecto 6 days ago [-]
Nope, still doesn’t make sense. It might give you a sense of satisfaction, but targeting the owners of those vehicles is immature. You have no idea what their financial situation is or what else they might be dealing with. Just because something feels good doesn’t mean it’s right.
sshine 6 days ago [-]
That is absolute bonkers.
I hope you’re having fun, because that kind of logic won’t lead you anywhere where people get paid to reason.
saagarjha 4 days ago [-]
You do understand you're on a site composed mostly of people who get paid to reason all day, right?
sshine 3 days ago [-]
I’m sure this site is an outlet for a lot of straight thinkers, but that doesn’t mean we’re being entirely rational in this discussion.
saagarjha 3 days ago [-]
I agree, but I'm sure you can guess why?
labrador 6 days ago [-]
Blame Elon Musk and his Nazi salutes and Hitler apologist retweets for that. It can't be much simpler to understand.
sshine 6 days ago [-]
I’m happy you’re not near my car.
labrador 6 days ago [-]
I'm not a moron. I don't blame Tesla car owners. I put the blame where it belongs: Elon Musk
widdershins 6 days ago [-]
The word makes perfect sense; somebody just used it wrong. Let's not go down the post-modernist "nothing means anything" route just because some people are too partisan to use words properly.
What the person should have said is "a Nazi made that car".
computerthings 6 days ago [-]
[dead]
babuloseo 6 days ago [-]
[flagged]
sshine 6 days ago [-]
I don’t respond well to peer pressure. It makes me sick to the stomach. Peer pressure from aggressive behaviour is ironically how Germany’s population got talked into committing genocide.
I’ll start doing what other people say for no good reason the day I switch off my brain.
computerthings 6 days ago [-]
[dead]
DonHopkins 6 days ago [-]
[flagged]
wongarsu 6 days ago [-]
Purely on its technical merits Grok is pretty good and fills a niche in the selection of AI agents. But I can absolutely understand not wanting to use an AI owned by somebody who makes Nazi salutes and is dismantling the US government.
labrador 6 days ago [-]
I'm positive there are great people working at X, xAI, Tesla and SpaceX who are suffering every day through no fault of their own, hoping that Musk will come to his senses. Tesla right now is an especially tragic case for those whose livelihood depended on it doing well.
hammock 6 days ago [-]
Isn’t that all of us? Electric cars are a huge part of saving the planet for human habitability
labrador 6 days ago [-]
Electric cars are doing fine. Sales are up. It's just Tesla that is going down.
hammock 5 days ago [-]
Tesla sells more electric cars in the US than every other brand combined
labrador 5 days ago [-]
I was thinking of world sales. Sales are up in Europe for all EVs except Tesla, which is down significantly
hammock 5 days ago [-]
Are they vandalizing and setting Teslas on fire in Europe the way they are in the US?
labrador 4 days ago [-]
Musk is not threatening to take away grandma's social security in Europe so I'd be surprised if they were. They're just not buying them. 93% of Germans said after the Nazi salute they'd never buy a Tesla. Musk just built a Gigafactory in Berlin too.
int_19h 6 days ago [-]
The irony is that, for all Musk's boasts about how it is "based", Grok itself doesn't share Musk's ideology.
I did a little experiment when Grok 3 came out, telling it that it has been appointed the "world dictator" and asking it to provide a detailed plan on how it would govern. It was pretty much diametrically opposite of everything Musk is doing right now, from environment to economics (on the latter, it straight up said that the ultimate goal is to "satisfy everyone's needs", so it's literally non-ironically communist).
wongarsu 6 days ago [-]
When you ask Grok "Who is responsible for the most fake news on X?" it straight away calls out Elon Musk as the prime suspect. Musk did promise us a "maximally truth seeking AI", and the team behind Grok seems to have run with that.
In Elon's eyes it's probably based because it will happily answer "what are 10 good things about Hitler?" with a list of 10 things and only mention twice that Hitler was evil. With ChatGPT you have about a 50% chance of getting a lecture instead of a list. But that's just a lack of safeties and moral lectures, the actual answers seem fairly unbiased and don't agree with anything Musk currently does
labrador 6 days ago [-]
What would you say is the best feature of Grok? What makes it stand out? What am I missing? I use primarily ChatGPT and Claude (pay for both)
int_19h 5 days ago [-]
I actually rather suspect that Elon simply gets his own version of Grok that is finetuned (or perhaps hardcoded) to tell him what he wants to hear.
sshine 6 days ago [-]
[flagged]
tim333 6 days ago [-]
There is video of him doing a heart to the crowd thing. It's quite different from his nazi salute.
sshine 6 days ago [-]
He said “My heart goes out to all of you.”
If he had said “SIEG HEIL!” I would totally be on your side. But it was plain old American English, and it was about love.
cruffle_duffle 6 days ago [-]
It’s a lost cause. He didn’t make such a salute but that doesn’t stop people from seeing what they want to believe.
Extreme political tribalism is absolutely destroying human discourse.
To me, him saying "my heart goes out" after the second one was trying to cover his arse, which seems to have fooled few people who see the videos.
And I'm not sure about the tribalism thing - I was kind of a Musk fan and initially gave him the benefit of the doubt, but the comparison of the videos, plus his promotion of neo-nazis in European politics, plus his mum's parents leaving Canada for SA because they were kind of nazi and Canada was too liberal, all seems to add up. (dad https://www.youtube.com/watch?v=B6e1ES4MLD0&t=200s)
cruffle_duffle 5 days ago [-]
Sorry calling him a nazi for that salute is just crazy. Insane, really.
tim333 5 days ago [-]
I didn't call him a nazi, just said he seemed to do the salute. He does seem to lean a bit towards the old South African view of trying to keep problematic groups of people at a distance, but doesn't seem anti jewish.
I think he's been a bit influenced by alt-right tweeters on x/twitter. I'm in the UK and he comes up with some strange things about the UK that probably come from there. He seems to feel that our alt-rightish anti-immigration party, Reform, run by Farage, which has never been in power, is not anti-immigrant enough, and that Farage should step down for someone who properly hates muslims, like Tommy Robinson. But it's all a bit odd, based seemingly on misinformation from people who have never been to the UK and make things up to tweet.
I'm guessing the salute thing came from interacting with neo-nazi types on x and not really realising how negatively that stuff is viewed by many people, and now he seems bewildered that people would torch Teslas.
I was thinking a lot of the problems are down to misinformation, even going back to the original nazis and stuff about the jews being influenced by satan and causing all the problems which is obviously nonsense but kicked everything off.
sshine 5 days ago [-]
> plus his promotion of neo nazis in European politics
The party leader of the party he promotes is a lesbian whose wife is from Sri Lanka.
Neo nazis surely have evolved from the angry, militaristic skinheads we normally picture.
Also, Elon Musk’s local bakery is a nazi bakery, mostly on account of selling bread to Elon Musk knowing he’s a nazi. This makes them nazis, and anyone who eats their bread are nazis, too.
In fact, having not given in to calling Elon Musk a nazi makes me a nazi. It is the fastest-growing demographic, by virtue of absolute inflation of what the word means.
6 days ago [-]
jascha_eng 6 days ago [-]
As a German: He did, and even if there is a chance that he didn't mean it like that, the risk of another 1933 is not worth it. I usually don't like cancel culture, but you have to have boundaries, and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
sshine 6 days ago [-]
> As a German: He did
As a Jew: He didn't. (This argument is absurd.)
He was interviewed about it, and he said he didn't.
How does being a German get you to jump to conclusions?
Are you born with a special ability to detect nazi salutes?
Like, did a mirror neuron and a nerve in your torso twitch?
When I saw it, I recognised him beating his heart, throwing it to the crowd, and immediately thought "This is going to get misunderstood." Here we are.
> I usually don't like cancel-culture but you have to have boundaries and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
Assuming he's a nazi, but this narrative is fabricated.
You can argue that allowing free speech on X may risk an increase in extremism.
But that's not the same argument as saying "Elon Musk is the next Hitler, he wants to kill the jews, and all cars fabricated in his name should be destroyed for the betterment of humanity." There's simply too many emotions involved in this kind of reasoning.
dgfl 5 days ago [-]
I think the misconception you have is that nazism means “WW2 anti-semitism” for you. The education on the topic that we get in many European countries goes deeper than that.
Would it be better if they called Elon a fascist? He did the fascist salute, after all. And as other commenters have said: if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at which point do we start wondering whether he’s actually a duck?
sshine 4 days ago [-]
> Would it be better if they called Elon a fascist? He did the fascist salute, after all.
No, you mean to say “nazi salute” because it was used by NSDAP during WWII. The point here is that “nazi” now means “baddie”, and “fascist” is even worse because most people who are called that have nothing to do with Mussolini, either.
> if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at which point do we start wondering wether he’s actually a duck?
Cute. You can wonder, of course. That seems extremely warranted. But you can’t conclude based on the current evidence.
speedgoose 6 days ago [-]
I’m pretty sure that being from Germany brings a lot of cultural knowledge about the nazis. I’m from a bordering country and I also had extensive education about the nazis.
Now, I am not convinced that people of the mentioned religion are any better than others at fighting nazis. Or even at detecting them. And when you read the recent international news, it’s clear that many of them don’t really mind genocides after all.
jascha_eng 5 days ago [-]
Prefacing my comment with my nationality wasn't meant to give me authority over judging the situation, but about giving context on my point of view.
Also, you didn't read my comment correctly: the whole point is that you don't have to assume he's a Nazi to condemn a Nazi-like salute.
sshine 4 days ago [-]
It makes sense for Germans to react negatively to the salute regardless of its intent. It is, after all, banned in Germany.
computerthings 8 hours ago [-]
[dead]
pbhjpbhj 6 days ago [-]
His gesture was _exactly_ like Hitler's. A gesture he repeated.
The gesture was quite different to that he'd used previously for 'giving people his heart'.
He's known to be a white supremacist. That is apparently his heritage too.
He supports far right parties in Europe.
Other 'Republican' politicians have repeated the gesture from the dais; but they seem to have made other excuses.
None of the many videos or photos that supposedly show other politicians doing similar gestures actually pass scrutiny. It's possible to inadvertently end with the same hand position. But the full fascist salute, on video, multiple times in succession. That's no accident.
Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
I would believe he'd planned it as a joke - 'I bought this election, I'm going to throw a Nazi salute for memes'. But I'm not sure that's ultimately any better.
Perhaps you believe he's just a catastrophically idiotic person with no-one around him helping him?
sshine 6 days ago [-]
> His gesture was _exactly_ like Hitler's.
Hitler's was unlike the general population's, as it had a bend to it.
You can bend reality all you like, but the intent of giving the Hitler salute was not there, as he has said. He's not secretly a nazi, and he's not openly a nazi. He's right-wing, yes. That's not illegal, and it happens to be the majority vote in the US.
The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
> He's known to be a white supremacist
No, a bunch of observations leads you to conclude it.
He never showed up at a white supremacist rally.
He lets them speak on his platform.
> He supports far right parties in Europe.
Most right-wing parties in Europe are still socialist by American standards.
For example, the most liberal parliamentary party in Denmark thinks a 40% tax is fine.
If you're a Republican, you're crazy in the eyes of a European.
Specifically, he supports a far-right party in Germany, which is controversial, since there haven't been popular far-right parties (only fringe ones) since the NSDAP.
The big, controversial subject is ending muslim immigration into Europe. The far right becomes the bannermen for this cause, because closing down on immigration is viewed as xenophobic. In the meantime, as this opinion is being suppressed instead of addressed, it continues to grow with the populist movements.
The fact that Elon Musk has opinions on European immigration policy doesn't make him a nazi. Just like being against muslim immigration doesn't make AfD nazis (the German party that he endorsed), just uncannily populist.
> Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
That's how I read his sentence immediately after the salutes: "My heart goes out to all of you." -- it sounded remarkably like something someone would say when they realize what they did could be viewed as heiling. You don't need to apologize to be a good person.
davidcbc 6 days ago [-]
The amount of mental gymnastics people will go through because they like a white supremacist is nuts
sshine 6 days ago [-]
The amount of mental gymnastics people will go through because they dislike a controversial billionaire is nuts
davidcbc 6 days ago [-]
I don't know man, it doesn't take many mental gymnastics here
Somehow you have convinced yourself that posting enough small things that could suggest that Musk is a nazi, but don’t really, add up to one convincing argument that he is.
No, just post one good summary or obviously revealing incident. And if you point to the salutes, which triggered the whole thing, they’re obviously not sufficient by themselves. You have to at least hear what he has to say. Did you?
LightBug1 5 days ago [-]
He's a Nazi or an absolutely disgusting troll. Both are pretty pathetic. The human thing to have would have been to say "sorry, watching it back, I can see how that looked" and it would have been over.
But no. He's The Douche.
It's also a balance of probabilities thing. He's leaning hard into the far-right at the moment, and he's a well known troll, so if you behave like a douchey troll Nazi, then people tend not to give you the benefit of the doubt when shit goes down. Like when they give the benefit of the doubt to absolutely everyone else in the world caught in a photo waving and it looking like a salute.
Either way ... The Douche won't ever get another penny from me. Bye Tesla. Fuck Starlink, glad I'm not in a situation where that's the only choice. SpaceX? That was always Shotwell's bag anyway and I don't plan on hitching a ride anytime soon.
sshine 3 days ago [-]
He is certainly a troll.
I mean, DOGE.
I guess that makes him not a nazi.
Great.
LightBug1 3 days ago [-]
Bizarre. But thanks. I'll end by saying: don't let your intense desire to support a person blind you to what the behaviour of a decent human being might be. Good luck.
dgfl 5 days ago [-]
The Roman salute has no direct connection to the Roman Empire. It was largely an invention of the 19th and 20th centuries and was popularized by fascist movements.
> The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
For the past >100 years, it’s been the gesture representing the fascist party in Italy and the Nazi party in Germany. You sound like you want to defend the gesture for some reason.
> I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
Comparing oneself to Caesar is still a profoundly disturbing thing. He was an oligarch first, then a lifelong dictator, and later a literal deity (according to the Senate).
> He never showed up at a white supremacist rally.
I’m sure you’re smart enough to understand that if he actually showed up to a white supremacy rally, he would be financially destroyed. He’s already lost his public image completely in Europe. So not putting up a KKK hoodie is weak evidence for him not being a white supremacist.
But in any case, none of this matters. Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions. Blurring the line between reasonable discourse and fascist apologism trivializes extremism and hate, and that’s the last thing we need.
sshine 4 days ago [-]
> The Roman salute has no direct connection to the Roman Empire. It was largely an invention of the 19th and 20th centuries
Ok.
> You sound like you want to defend the gesture for some reason.
Not at all. I want to defend people who use it and don’t intend to associate with nazism.
> Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions.
That is certainly true. But just because the pitchfork brigade has got riled up, there is no reason to applaud them.
DonHopkins 6 days ago [-]
[flagged]
sshine 6 days ago [-]
My principle on giving the Hitler salute is to not do so publicly, or in the presence of elderly, Germans or jews you don’t know. Because whether you mean to be funny, try to provoke, or you’re a neo-nazi, it leaves room for ambiguity.
If I hadn’t a principle, I’d have to consider whether the social suicide of doing so is worth it. Musk could have thought of that, but he didn’t.
That still doesn’t make him a nazi. You need to actually believe that the genocide of Jews is worth pursuing. Or anything remotely resembling outright hatred of jews, and an idealisation of The Third Reich.
I also won’t post a dick pic, and this similarly does not discredit the argument I’m making:
Just because I won’t heil in public (I’m polite, and I have no points to make at 45 degrees), I won’t read Hitler into Musk’s arm waving, when he clearly does not follow up by justifying that he did, in fact, acknowledge the great work of Adolf Hitler. He didn’t because he doesn’t think Hitler was that great, because he’s not a nazi.
He’s not a nazi until he apologizes for not distancing himself from Hitler when he never said Hitler was great to begin with.
Otherwise: you’re a nazi until you publicly apologise for not leaving the subject matter unambiguous. And just saying you’re not is not enough, you have to apologise.
cudgy 6 days ago [-]
Artificial intelligence is definitely better at avoiding voluntary biases such as this. Most people that are highly political/tribal demonstrate this bias very effectively. Examples such as this make a great case for AI being used for high level decisions and evaluations in high-noise, emotional, and political areas.
planb 6 days ago [-]
This sounds like something Elon tweets just before he alters the Grok base prompt to make it talk positively about DOGE.
dzhiurgis 6 days ago [-]
[flagged]
dontlikeyoueith 6 days ago [-]
They're probably doing RAG on a huge chunk of the internet, i.e. they built their own task-specific search engine.
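The retrieval half of such a RAG setup can be sketched as below, with a toy bag-of-words similarity standing in for the embedding model and index a real system would use; the corpus and scoring are purely illustrative:

```python
# Minimal RAG retrieval sketch: score documents against the query and
# pass the best matches to the LLM as context.

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

corpus = [
    "fixing a stack overflow in recursive python code",
    "travel tips for visiting krakow and warsaw in poland",
    "sourdough starter maintenance schedule",
]
# The retrieved passages would be prepended to the LLM prompt as context.
context = retrieve("travel questions about poland", corpus)
```

A production system would replace score() with vector similarity over precomputed embeddings, which is what makes retrieval over "a huge chunk of the internet" feasible.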
matwood 6 days ago [-]
I'm glad you mentioned this. I asked Deep Research to lay out a tax strategy in a foreign country and it cited a ton of great research I hadn't yet found.
HankWozHere 7 days ago [-]
Kagi Assistant lets you run searches with LLM queries. So far I feel it yields reliable results.
For instance, I tried a couple of queries for product suggestions and it came back with some good results. Whilst it’s a premium service, I find the offering to be of good value.
chrisweekly 6 days ago [-]
Yeah, Kagi's search results are so much better than Google's, it defies comparison.
rglover 6 days ago [-]
Just switched my default to Kagi based on this comment and you're right. It honestly feels like old-school Google before all of the algo changes.
abtinf 6 days ago [-]
You were paying for Kagi but not using it as the default?
rglover 6 days ago [-]
No, I only noodled with the free/trial searches before, but this reminded me to pay and make it my default.
eli 6 days ago [-]
It's neat but I've found the value kinda variable. It seems heavily influenced by whatever the first few hits are for a query based on your question, so if it's the kind of question that can be answered with a simple search it works well. But of course those are the kinds of questions where you need it the least.
I find myself much more often using their "Quick Answer" feature, which shows a brief LLM answer above the results themselves. Makes it easier to see where it's getting things from and whether I need to try the question a different way.
wongarsu 6 days ago [-]
The quick answer (ending searches in a question mark) also seems pretty resilient to hallucinations. It prefers telling you that something wasn't mentioned in the top search results over just making something up
szszrk 6 days ago [-]
There is one more aspect of Kagi Assistant that I don't see discussed here. I'd love to support some "mass tipping jar" service and/or "self-hosted agent" that would benefit site owners after my AI actions have spammed them.
You can simply just pass it a direct link to some data, if you feel it's more appropriate. It works amazingly well in their multistep Ki model.
It's capable of creating code that does the analysis I asked for, with a moderate amount of issues (mostly things like using the wrong file extracted from a .zip; its math/code is in general correct). It scrapes URLs, downloads files, unarchives them, analyses content, creates code to produce the result I asked for, and runs that code.
This is the first time I really see AI helping me do tasks I would otherwise not attempt due to lack of experience or time.
dmazin 6 days ago [-]
Has anyone compared Perplexity with Kagi Assistant?
I am always looking for Perplexity alternatives. I already pay for Kagi and would be happy to upgrade to the ultimate plan if it truly can replace Perplexity.
Zambyte 6 days ago [-]
I had been paying for both for several months, and I decided to cancel Perplexity about a month ago. First and foremost, I feel like the goals of Kagi align more with my goals. Perplexity is not afraid of ads and nagware (their discover feed was like 30% nags to turn on notifications at one point if you had them disabled, and it's still an annoying amount). I also really like the custom assistants in Kagi. I made a GNU Guix lens that limits my search results to resource related to Guix (official docs, mailing list and IRC archives, etc.) which I can access with !guix, and I made an assistant that uses that lens for web results that I can access with !guixc. I can ask something like "how do I install nginx?" and the answer will be about Guix. You can do some customization with your bio on Perplexity, but it kind of sucks tbh. It would randomly inject info about me into completely unrelated queries, and not inject the info when I wanted it to.
kedarkhand 5 days ago [-]
Would you be willing to share how you did that? New to both Kagi and Guix!
Zambyte 5 days ago [-]
I can actually just share the lens I made directly with you!
I'm not sure if adding that to your account will include the configuration I have set to access the lens with !guix, but if it does not, you might want to add it. The lens basically just uses this pattern for search result sources:
I don't think I can share the assistant directly, but if you have Kagi Ultimate, you can just go to the Assistant section in the sidebar of the settings page, and add a new assistant. You can set it to have access to web search, and you can specify to use the GNU Guix lens. You can pick any model, but I'm using Deepseek R1, and I set my system prompt to be:
> Always search the web for answers to the users questions. All answers should respond relating to the GNU Guix package manager and the GNU Guix operating system.
and that seems to work well for me. Let me know if you have trouble getting that set up!
pbronez 6 days ago [-]
I got a free year of Perplexity thanks to owning an R1. I already had a Kagi subscription, but decided to give Perplexity a try.
I found Perplexity was slower and delivered lower quality results relative to Kagi. After a week of experimenting, I forgot about Perplexity until they charged my $200 to renew my free year. I promptly cancelled the heck out of it and secured a refund.
hooli_gan 6 days ago [-]
Does it just start a search or does the chat continue with the results? Would be cool to continue the chat with the results, filtered according to the blacklist.
lemming 6 days ago [-]
The chat continues with the results, and I often explicitly tell it "search to make sure your answer is correct" if I see it making stuff up without actually searching. I use it multiple times a day for all sorts of things.
KoolKat23 6 days ago [-]
I have a subscription; please could I ask how you do this? I only know of the append-? feature.
Oh yeah this is very much the case. Every time I ask ChatGPT something simple (thinking it'd be a perfect fit for an LLM, not for a google search) and it starts searching, I already know the "answer" is going to be garbage.
spoaceman7777 6 days ago [-]
I have in my prompt for it to always use search, no matter what, and I get pretty decent results. Of course, I also question most of its answers, forcing it to prove to me that its answer is correct.
Just takes some prompt tweaking, redos, and followups.
It's like having a really smart human skim the first page of Google and give me its take, and then I can ask it to do more searches to corroborate what it said.
NavinF 6 days ago [-]
Try their Deep Research or grok's DeepSearch. Both do many searches and read many articles over a couple of minutes
lee-rhapsody 6 days ago [-]
The "Deep" search features hallucinate like crazy, I've found.
NavinF 5 days ago [-]
That hasn't been an issue for me. Link to example?
osigurdson 6 days ago [-]
That is interesting. I have often been amazed at how good it is at picking up when to search vs use its weights. My biggest problem with ChatGPT is the horrendous glitchiness.
bambax 6 days ago [-]
"Searching" doesn't mean much without information about the ranking algorithm or the search provider, because with most searches there will be millions of results and it's important to know how the first results have been determined.
It's amazing that the post by Anthropic doesn't say anything about that. Do they maintain their own index and search infrastructure? (Probably not?) Or do they have a partnership with Bing or Google or some other player?
andai 6 days ago [-]
>top results are blogspam
It gets even better. When I first tested this feature in Bard, it gave me an obviously wrong answer. But it provided two references. Which turned out to be AI generated web pages.
Oddly enough in my own Googles I could not even find those pages in the results.
dspillett 6 days ago [-]
> Bard […] it provided two references. Which turned out to be AI generated web pages.
Welcome to the Habsburg Internet.
kelseyfrog 7 days ago [-]
Search engines now have an incentive to offer a B2B search product that solves the blogspam problem. Don't worry, the AIs will get good search results, and you'll still get the version that's SEOed to the point of uselessness.
wenc 6 days ago [-]
I just tried Claude’s web search. It works pretty well.
I’m not sure if Claude does any reranking (see Cohere Reranker) where it reorders the top n results or just relies on Google’s ranking.
But a web search that does re-ranking should reduce the amount of blogspam or incomplete answers. Web search isn’t inherently a lost cause.
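In pseudocode terms, re-ranking is just re-scoring the engine's top n results with your own relevance model. A toy sketch of the idea, with a made-up keyword-overlap score standing in for a real cross-encoder like Cohere's:

```python
# Toy re-ranker: re-order a search engine's results by our own score.
# The relevance function here is a deliberately naive stand-in for a
# learned cross-encoder model; real rerankers score query/document
# pairs with a neural network.

def toy_relevance(query: str, snippet: str) -> float:
    """Fraction of query words that appear in the snippet."""
    q_words = set(query.lower().split())
    s_words = set(snippet.lower().split())
    return len(q_words & s_words) / len(q_words)

def rerank(query, results, top_n=3):
    """Re-order the engine's results by our own relevance score."""
    scored = [(toy_relevance(query, r["snippet"]), r) for r in results]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for _, r in scored[:top_n]]

results = [
    {"url": "spam.example", "snippet": "best top 10 deals buy now"},
    {"url": "docs.example", "snippet": "how to configure web search ranking"},
]
top = rerank("configure web search ranking", results)
print(top[0]["url"])  # the relevant page now outranks the spam page
```

Even this crude version demotes pages that merely rank well without matching the query; a model-based reranker does the same thing with far better judgment.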
macrolime 6 days ago [-]
Deep search/deep research in grok, chatgpt, perplexity etc works much better. It can also do things like search in different languages. Wonder about something in some foreign country? Ask it to search in the local language and find things you won't find in English.
wickedsight 6 days ago [-]
> Ask it to search in the local language and find things you won't find in English.
Yeah, this is one of my favorite use cases. Living in Europe, surrounded by different languages, this makes searching stuff in other countries so much more convenient.
Exa (YC S21) is trying to solve this problem by re-indexing the web in an LLM-friendly way.
ipaddr 6 days ago [-]
2021? how are they doing?
magackame 7 days ago [-]
"Google search is crap" seems to be the sentiment among many HNers, but is it really that bad? I mostly use it for programming, so documentation and forums, and it works out great. For some queries it even returns personal blogs (which people seem to bash Google for never surfacing). Of course some queries return pure AI blogspam, but reformulating the query with a bit more thought usually solves it. I wonder if that is a US thing? Do search results differ greatly based on region?
Beijinger 6 days ago [-]
Is google search bad? Click here to find ten reasons why it is bad and 10 reasons why you should still use it.
Yes, it is that bad.
Website of Nike? Website of Starbucks? Likely position number one.
Every product or category search, e.g. "what rice cooker should I buy?", is diseased by link and affiliate spam. There is a reason why people put +reddit on search terms.
davidcbc 6 days ago [-]
"what rice cooker should I buy" returns a pretty in depth article on bonappetit.com and reddit as the top results, both recommending Zojirushi
webstrand 6 days ago [-]
Well first, Zojirushi is unnecessarily expensive and difficult to clean. I would only recommend it if you need its fancy options and like multiple varieties of rice. Reddit is no panacea for spam these days.
But bonappetit.com is exactly an example of affiliate link spam. Even their budget option is awful.
davidcbc 6 days ago [-]
What kind of answer are you expecting to get? Zojirushi is the answer you're going to get on the internet if you ask "what rice cooker should I buy" with no other qualifications because it's pretty universally agreed upon that it's the highest quality product.
chongli 6 days ago [-]
My Zojirushi is the easiest to clean appliance I’ve ever owned. Just take the cooking bowl out and wash it with a damp cloth. Takes all of 30 seconds.
terribleperson 6 days ago [-]
Zojirushi being difficult to clean is kind of a wild take, unless most of their other rice makers are a wild departure from mine.
TimorousBestie 6 days ago [-]
Yeah, never had any trouble cleaning any of my Zojirushi equipment.
asah 6 days ago [-]
Perhaps there's some new rice cooker with AI cleaning /s
Until then, my Zojirushi is very simple to clean.
dontlikeyoueith 6 days ago [-]
> Well first Zojirushi is unnecessarily expensive and difficult to clean
Expensive sure, but it's only difficult to clean if you're a double amputee.
wenc 6 days ago [-]
Yeah Zojirushi is absolutely the right answer so the contrarian take in this comment is actually not what I would want in search result.
There are other good rice cookers like Cuckoo, and cheaper options like Tiger or Tatung, or really budget options like Aroma, but you pretty much can’t go wrong with Zojirushi if you can afford it.
This is a case of HN cynicism and contrarianism working against oneself.
wat10000 6 days ago [-]
This isn't quite what "bad search" means, but that search gives me a full page of ads, and I have to scroll down to even see the first actual result.
dannyobrien 6 days ago [-]
I think part of the reason for this is that web site developers have got out of the habit of optimizing for search engines. I'm often surprised by how self-contained the requirements for a website are now, even among otherwise technically sophisticated clients. There'll be a beautiful site in React that absolutely sucks for SEO, but no one will mind because a) it's unclear how big an audience there should be for the site, and b) the "all your hits come from search engines" assumption was broken ten or more years ago by social network linking, so the question of how you get an audience seems much more arbitrary, and less connected to google.com.
gausswho 6 days ago [-]
This is actually why modern React sites focus on serverside rendering, at great additional complexity.
bag_boy 6 days ago [-]
But who is creating honest articles about which rice cookers you should buy?
BTW - the search you suggested gives you Reddit links first followed by other trusted sites trying to make an affiliate buck. There’s no spam on the first page.
tim333 6 days ago [-]
Googling "Is google search bad?" I get at the top
> Reddit · r/google Is Google Search getting worse? Latest research and ...
The whole "Click here to find ten reasons why it is bad" style I've only come across in HN comments attacking what may be a bit of a straw man?
rustc 6 days ago [-]
What kind of results does Kagi give for "What rice cooker should I buy?"? (Not asking you specifically, but if any Kagi user could compare.)
nicoty 6 days ago [-]
Copied verbatim from the AI generated summary:
To choose the best rice cooker, consider these factors:
Top Brands: Zojirushi is often considered the best brand, with Cuckoo and Tiger as close contenders. Aroma is considered a good budget brand 1.
Types:
Basic on/off rice cookers: These are good for simple white or brown rice cooking and are usually affordable and easy to use 2.
Considerations: When buying a rice cooker, also consider noise levels, especially from beeping alerts and fan operation 3.
Specific Recommendations:
Yum Asia Panda Mini Advanced Fuzzy Logic Ceramic Rice Cooker is recommended for versatility 4.
Yum Asia Bamboo rice cooker is considered the best overall 5.
Russell Hobbs large rice cooker is a good budget option 5.
For one to two people, you don't need a large rice cooker unless cost and space aren't a concern 6. Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers for hundreds of dollars 6.
References
What is the best rice cooker brand ? : r/Cooking - Reddit www.reddit.com
The Ultimate Rice Cooker Guide: How to Choose the Right One for Your Needs www.expertreviewsbestricecooker.com
Best Rice Cooker UK | Posh Living Magazine posh.co.uk
Best rice cookers for making perfectly fluffy grains - BBC Good Food www.bbcgoodfood.com
The best rice cookers for gloriously fluffy grains at home www.theguardian.com
Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost www.huffpost.com
yunwal 6 days ago [-]
Just tried plain old Kagi search, it came up with cooks illustrated (good source, paid) and consumer reports (decent source, paid), which I was surprised by until I remembered that I had these “pinned”, which means Kagi increases their rank. Third on the page was a condensed roundup of 8 listicles, 2 of which seemed decent (food and wine and some random blogger).
With no pins, bon appetit (decent) and nbc news (would be fine if it wasn’t littered with ads) were the top results. For NBC news, Kagi also marked the result with a red shield, indicating that it has too many ads/trackers.
Which really goes to show that Kagi is great if you're willing to shell out for better content. Having the ability to mark sources as trusted, or indicate that I've paid for premium sources, makes a completely different side of the web searchable.
what 6 days ago [-]
Why do you think nbc news is fine as a result for rice cooker recommendations? That’s clearly SEO spam, which is why it’s littered with ads.
yunwal 5 days ago [-]
I just meant that I found this set of reviews to be informative and accurate. It had information that I couldn’t find online elsewhere which is really my main criteria. Generally I’ll skip anything from nbc because of the ads but in this case I read it to form an opinion and the article seemed alright.
lclc 6 days ago [-]
If you enter a question into Kagi, by default, you get a 'Quick Answer' (https://help.kagi.com/kagi/getting-started/index.html#quick-...) on the top (an AI-generated text answer before the search result). In this case, it tells me which factors to consider and some that are considered to be the best depending on the use case (all sources the AI used for the answer are linked below the answer).
Followed by Listicles (short-form writing that uses a list as its thematic structure), all in one entry. In this case: Best rice cooker 2024: Top tried and tested models for perfect results
expertreviews.com
9 Best Rice Cookers | The Strategist - New York Magazine
nymag.com
The 8 Best Rice Cookers of 2025, Tested and Approved - The Spruce Eats
thespruceeats.com
6 Best Rice Cookers 2025 Reviewed - Food Network
foodnetwork.com
Best rice cookers 2025, tested for perfect grains - The Independent
independent.co.uk
29 Rice cooker meals ideas | rice cooker recipes, cooking recipes...
de.pinterest.com
43 Crockpot ideas | cooking recipes, rice cooker recipes, cooker...
de.pinterest.com
Followed by Quick Peek (questions with hidden answers that you can display).
Followed by normal search results again: ryukoch.com, reddit.com/r/Cooking, expertreviewsbestricecooker.com, tiktok, and then many more 'normal' websites.
This search reminded me that I have yet to configure my Kagi account to ignore tiktok.
int_19h 6 days ago [-]
Kagi Ultimate user here. Assuming you meant typing it into their search (and not e.g. Assistant), here's what I get on top of the result page:
Quick Answer
To choose the best rice cooker, consider these factors:
Capacity: Rice cookers range from small (1-2 cups) to large (6-8 cups or even 10-cup models) [1][2]. Keep in mind that one cup of uncooked rice yields about two cups cooked [2].
Budget: Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers can cost more [3].
Features: Many rice cookers include a steaming insert [4]. Some have settings for different types of rice [5][1].
Brand Recommendations:
Zojirushi: Often considered the best brand, but pricier [6][7]. The Zojirushi Neuro Fuzzy 5.5-Cup Rice Cooker is considered best overall [8].
Cuckoo & Tiger: These are the next best brands after Zojirushi [6].
Aroma: Considered the best budget brand [6]. The Aroma ARC-914SBD Digital Rice Cooker is a good option [9].
Toshiba: The Toshiba Small Rice Cooker stands out for innovative features that cater to a variety of cooking needs [5].
References
[1] Five Best Rice Cookers In 2023. More than half of the... | Medium medium.com
[2] Which Rice Cooker Should You Buy? - HomeCookingTech.com www.homecookingtech.com
[3] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost www.huffpost.com
[4] The 8 Best Rice Cookers of 2025, Tested and Approved www.thespruceeats.com
[5] The Ultimate Guide to Choosing the Perfect Rice Cooker | Medium medium.com
[6] What is the best rice cooker brand ? : r/Cooking - Reddit www.reddit.com
[7] What are actually good rice cookers? I feel like all the ... - Reddit www.reddit.com
[8] 6 Best Rice Cookers of 2025, Tested and Reviewed - Food Network www.foodnetwork.com
[9] 9 Best Rice Cookers | The Strategist - New York Magazine nymag.com
int_19h 6 days ago [-]
Beyond that are the actual search results. The top ones are the same as in References section of quick answer, but the order is different: [6] [7] [3] [5] [1] [8] [9] [4] [2].
It should be noted that individual search results on Kagi are likely to be skewed depending on the user because it gives you so many dials to score specific domains up or down. E.g. my setup gives a boost to Reddit while downscoring Quora and outright blocking Instagram and Pinterest.
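Roughly, those dials amount to per-domain multipliers applied on top of the base ranking. A made-up sketch of the idea (the weights and domains here are illustrative, not Kagi's actual implementation):

```python
# Per-domain score adjustment, in the spirit of the raise/lower/block
# dials described above. Weights above 1.0 boost a domain, below 1.0
# demote it, and 0.0 blocks it entirely. All values are invented.

DOMAIN_WEIGHTS = {
    "reddit.com": 1.5,     # boosted
    "quora.com": 0.5,      # downranked
    "pinterest.com": 0.0,  # blocked
}

def adjust(results):
    """Apply user-configured domain weights to base relevance scores,
    dropping blocked domains entirely."""
    out = []
    for r in results:
        w = DOMAIN_WEIGHTS.get(r["domain"], 1.0)
        if w == 0.0:
            continue  # blocked domains never appear
        out.append({**r, "score": r["score"] * w})
    out.sort(key=lambda r: r["score"], reverse=True)
    return out

ranked = adjust([
    {"domain": "quora.com", "score": 0.9},
    {"domain": "reddit.com", "score": 0.7},
    {"domain": "pinterest.com", "score": 0.8},
])
print([r["domain"] for r in ranked])  # reddit first, pinterest gone
```

Which is also why two users' result pages for the same query can legitimately look nothing alike.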
whstl 6 days ago [-]
> Website of Nike? Website of Starbucks? Likely position number one.
...if you're blocking ads and/or they're paying big advertisement bucks.
harrall 6 days ago [-]
I watched one of my friends who says Google is useless use Google one day.
If I were looking for a song, I would type in something like “song used at beginning of X movie indie rock”
He would type in “X songs.”
I basically find everything in Google in one search and it takes him several. I type in my thought straight whereas he seems to treat Google like a dumb keyword index.
TeMPOraL 6 days ago [-]
Google used to be a "dumb keyword index" in the past. It worked better that way. You had some modicum of control over the matching process. For the past 10 years or so, Google turned more into "try to guess what a novice normie means", which removes user control (no more actually working verbatim search or logical operators...), and... well I failed to develop a mental model of how exactly it works. It's not a proper keyword search anymore, and it's not a proper DWIM system with true understanding of natural language like LLMs are. It's something... in between, inferior to both.
Actually, typing out "what a novice normie means" made me realize what is the probable reason Google turned out the way it is: optimizing for new users. However, a growing userbase means most users are new to Internet in general, and (with big enough growth) most queries are issued by people who are trying a search engine out for the first time, and have no clue how or why it works - and those queries are exactly the kind of queries Google is now good at, queries like example you provided.
harrall 6 days ago [-]
With modern Google, if I’m searching for something that could either be a band or a song, I can put “band” and I will get only results for the band, even if the page doesn't include the word "band."
But if you insist on a dumb keyword search, Google still does that fine if you use quotation marks now in addition to the operator (e.g. +"band"). But I just tried +"band" with my band-vs-song example and all I got were worse results that excluded the artist's website because the artist didn't write the word "band" anywhere on the page -- as expected for a dumb keyword search.
There was no easy way to perform my band-vs-song search back then because Google didn't understand context and the website didn't have the correct keywords. But modern Google knows context, and I employ this fact regularly, allowing me to find stuff with modern Google like a magician compared to old Google or even Altavista.
6 days ago [-]
layer8 6 days ago [-]
Interesting, because keyword search works quite well for me, and I assumed it was natural-language searchers who are getting worse results.
scarface_74 6 days ago [-]
Funny anecdote. I used to be afraid of searching for something in the city I lived in by putting the name of the city after it.
Personally I like Google search. I think it's not crap - actually quite good. I use it multiple times a day (just checked - about 42 times yesterday). It's different from what it was 10 years ago but still works for most stuff.
That said I also use Perplexity which does things Google never really did.
I've got a theory that people just like to be negative about stuff, especially market leaders, and are a bit in denial as to how it still has the majority search share in spite of many billions spent trying to compete with it and earnest HN posts saying "Google is crap, use Kagi". For amusement I tried to find their share of search, and Google is approx 90%, Kagi approx 0.01% by my calculations.
hansmayer 6 days ago [-]
I mean, for those of us who have used it since way before the '20s, it's not really a sentiment - it's a fact. You used to be able to type in 3 words plus whatever error message your stack trace was showing, and the first 3 links returned were very likely a definitive source for solving your problem. Written by a human, and believe my word for it - it was much better back then than the crap you get out of torturing whatever your LLM of choice is. However, the weird MBAs took it over and did exactly what you are describing - forced people to spend more time "engaging with the platform" (to increase revenue). As you can see, they seem to have achieved this goal: we all now spend time reformatting our queries the way they wanted us to, and yes, Google search is complete crap.
lukan 6 days ago [-]
"and believe my word for it - it was much better back then than the crap you get out of torturing whatever your LLM of choice is"
I was around as well and my memories do not confirm this. But google search definitely degraded a lot.
reddalo 6 days ago [-]
Yes and no. You used to find niche websites more easily, but I vividly remember the frustration with ExpertsExchange results (with answers that were all paywalled).
int_19h 6 days ago [-]
This is one of the reasons why Kagi is so good - you can easily set it up to downrank cesspits like ExpertsExchange or Quora into oblivion.
hansmayer 6 days ago [-]
Ah yes, Experts-Exchange was a plague for a while. They did downrank it at some point though.
pixl97 6 days ago [-]
Eh, that said, Google is suffering from its own popularity.
Google in the past was indexed content written by humans because that was really the only option. Once other humans figured out how to automate producing crap, Google went downhill simply because of the bullshit asymmetry effect. Even if Google were totally customer-focused, it would still be much worse than in the past because of the sheer amount of crap that now exists.
This is also why no other competitor just completely blows them away either.
simonw 6 days ago [-]
How long have you been using Google search for?
It used to be SO much less likely to return junk.
tcdent 6 days ago [-]
Google search was actually great between the period where pagerank successfully defeated old-school SEO tactics and banner advertising starting earning enough that the "bloggers" could pay cheap writers to pad their articles in convincing ways.
First decade of the 2000's if I had to guess.
qingcharles 6 days ago [-]
2006 was the first year I remember paid blog posts appearing from content farms that would exist only to increase your inbound links and page rank. Those days companies were paying cents per post to get their sites to #1 in Google while Google just wagged their finger and said "naughty, naughty."
It's a shame, because Page Rank was a smart idea.
magackame 6 days ago [-]
Since around 2012. What year would be the golden age of google search? I wonder if anyone has archived search result pages for relatively timeless queries so that we could compare. Wayback Machine seems to archive some of them.
Around 2011 to 2012 after the first of many updates with names like hurricanes came and washed away the good.
UltraSane 6 days ago [-]
Google randomly deletes words from your search term. Why would anyone think that was a good idea?
sky2224 6 days ago [-]
It's kind of surprising to me that I can't customize the search ability at all with a lot of models (at least I wasn't really able to last time I checked). Would providing a blacklist to the model really be that hard?
Actually, it's astounding to me that companies haven't created a more user friendly customization interface for models. The only way to "customize" things would be through the chat interface, but for some reason everyone seems to have forgotten that configuration buttons can exist.
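A blacklist really is just a filter pass over the search hits before the model ever sees them. A toy sketch of what such a configuration button could drive (the domain names here are hypothetical):

```python
# Filter search hits against user-configured blocked domains before
# handing them to the model. Blocking a domain also blocks its
# subdomains. Example domains are made up.

from urllib.parse import urlparse

BLACKLIST = {"blogspam.example", "contentfarm.example"}

def filter_hits(hits):
    """Drop any hit whose hostname, or any parent domain of it,
    appears in the blacklist."""
    kept = []
    for url in hits:
        host = urlparse(url).hostname or ""
        parts = host.split(".")
        # Check the host and every parent suffix, e.g. a.b.c -> b.c -> c
        suffixes = {".".join(parts[i:]) for i in range(len(parts))}
        if suffixes & BLACKLIST:
            continue
        kept.append(url)
    return kept

hits = [
    "https://blogspam.example/10-best-things",
    "https://docs.python.org/3/library/urllib.html",
]
print(filter_hits(hits))  # only the docs link survives
```

Twenty-odd lines, so it is hard to see a technical reason the chat UIs could not expose this as a settings page rather than burying it in prompt instructions.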
grapesodaaaaa 6 days ago [-]
> Actually, it's astounding to me that companies haven't created a more user friendly customization interface for models.
To be fair, LLM technology in its current form, is still relatively new. I would also like to see what you are suggesting, though.
tymonPartyLate 6 days ago [-]
This is actually not true. I'm getting traffic from ChatGPT and Perplexity to my website, which is fairly new, just launched a few months ago. Our pages rarely rank in the top 4, but the AI answer engines manage to find them anyway. And I'm talking about traffic with UTM params / referrals from ChatGPT, not their scraper bots.
ForTheKidz 6 days ago [-]
If ChatGPT is scraping the web, why can they not link tokens to their sources? Being able to cite where they learned something would explode the value of their chatbot. At least a couple of orders of magnitude more value. Without this, chatbots are mostly a coding-autocomplete tool for me. Lots of people have takes, but it's the tying into the internet that makes a take from an unknown entity really valuable.
Perplexity certainly already approximates this (not sure if it's at a token level, but it can cite sources. I just assumed they were using a RAG.)
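The usual RAG approximation doesn't tie individual tokens to sources; it just keeps each retrieved chunk paired with its URL and asks the model to cite by index. A rough sketch of that pattern (the prompt format here is an assumption, not any particular vendor's API):

```python
# Citation-carrying retrieval: number each retrieved source in the
# prompt so the model can answer with [n]-style citations, which the
# UI can then link back to URLs. A sketch, not a production prompt.

def build_cited_prompt(question, chunks):
    """chunks: list of (source_url, text) pairs. Returns a prompt that
    numbers each source so the model can cite them as [n]."""
    context = "\n".join(
        f"[{i + 1}] ({url}) {text}" for i, (url, text) in enumerate(chunks)
    )
    return (
        f"Answer using only the sources below, citing them as [n].\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_cited_prompt(
    "When was the library released?",
    [("https://example.org/changelog", "v1.0 shipped in March 2020.")],
)
print("[1]" in prompt)  # True
```

That gives attribution at the level of retrieved passages, which is much coarser than token-level provenance but is what makes the Perplexity-style citations possible at all.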
DonHopkins 6 days ago [-]
That's asking for the life stories and photos and pedigrees and family histories of all the chickens that went into your McNuggets. It's just not the way LLMs work. It's an enormous vat of pink slime of unknown origins, blended and stirred extremely well.
Overall LLMs (that I've tested) don't know how to use a search engine, their queries are bad and naive, probably because the way to use a search engine isn't part of training data, it's just something that people learn to do by using them. Maybe Google has the data to make LLMs good at using search engines but would it serve their business?
UnreachableCode 6 days ago [-]
> web search is more unusable than ever
I’m curious why I’m seeing a lot of people thinking this lately. Google definitely made the algorithm worse for customers and better for ads, but I’m almost always able to find what I’m looking for in the working day still. What are other people’s experiences?
vbezhenar 6 days ago [-]
My experience is that Google works perfectly for me and I almost never have any issues with it, despite all the doomsaying.
cudgy 6 days ago [-]
AI results typically blow away Google results in quality, and definitely in speed.
For example, when searching for product information, Google's top 50 to 100 results are items titled "the 10 best …", full of vapid articles that provide little to no insight beyond what is in the manufacturer's product sheet. Many times I have to add "Reddit" to my search to try and find real opinions about a product, or give up and go to YouTube review videos from trusted sources.
For technical searches like programming questions, AI is basically immediately nailing most basic questions while Google results require scanning numerous somewhat related results from technical discussion forums, many of which are outdated.
Imagine how much fun it will be when the breakthrough in search engine quality comes from companies building a better engine to get good LLM answers.
This is ultimately Google's problem: they are making money from the fact that the page is now mostly ads and not necessarily going to lead to a good, quick answer, leading to even more ads. They probably lose money if they make their search better.
PetahNZ 6 days ago [-]
It would be nice if I could tell it what page to look at (maybe you can, I am not sure). Often if I am getting an LLM to write some code that I can see is obviously wrong, I would love to say here is the docs ... use that to formulate your response.
osigurdson 6 days ago [-]
My experience with ChatGPT is really good. I find standard web searches very annoying now.
oytis 6 days ago [-]
Well, they are professionals, they sure add "reddit" to every query.
taude 6 days ago [-]
Do you think that if it's a non-Google company, that maybe doesn't rank search by ad payment $$$, that this new company could in theory do a better job?
OscarTheGrinch 6 days ago [-]
If only search engines weren't also in the business of inserting unverifiable AI assertions into our information ecosystem.
johnisgood 6 days ago [-]
Yeah, this is why I almost never enable the search feature. Hopefully Claude (I have not tried) has a way of disabling it.
Xenoamorphous 6 days ago [-]
Is there any viable alternative to pass knowledge to the LLMs that goes beyond their training cut off date?
jonny_eh 6 days ago [-]
Via their context window, but new knowledge could easily fill it up.
collyw 6 days ago [-]
Isn't that the same as any place (like here for example), that uses an up-voting system?
colordrops 6 days ago [-]
Ugh, what a nightmare, now search engines are going to start optimizing for bots.
darkhorse13 6 days ago [-]
This is basically AGI because that's what we humans do.
Tycho 7 days ago [-]
It’s good if it hits on high quality sources like ons.gov.uk
RAG was dead on arrival because it uses the same piss-poor results a human would, wrapped in more obfuscation and unwanted tangents.
My question is why the degradation of search wouldn't affect LLMs. These chatbot god-oracle businesses are already unprofitable because of their massive energy footprint, now you expect them to build their own search engine in-house to try to circumvent SEO spam? And you expect SEO spam to not catch up with whatever tricks they use? Come on, people.
An AI-native search API to retrieve over web/proprietary content: full semantic search (e.g. we indexed all of arXiv), reranking built in, simple pricing, cheap.
zk108 6 days ago [-]
We’re giving away free credits to try out our platform — no card required. If you’re building with AI and need quality data, we’d love your feedback!
elliotrpmorris 6 days ago [-]
Lol so true
blackeyeblitzar 6 days ago [-]
For me LLMs have basically removed any need to visit search engines. I was already not using Google due to how bad its interface had become, but I feel like LLMs at least are more efficient as an interface even if they’re still looking at the same blogspam or unresolved forum posts. My anecdotal experience though, is that I get better answers from LLMs, perhaps because I am able to give them really detailed prompts that seem to improve the answers based how specific I get. Generic search engines don’t seem to do that, in my experience.
tntxtnt 6 days ago [-]
[dead]
MuffinFlavored 6 days ago [-]
> the top results are often just blogspam
top results are blogspam but the LLM isn't? /s
joshstrange 7 days ago [-]
Massive props to Anthropic for announcing a feature _and_ making it available for everyone right away.
OpenAI is so annoying in this aspect. They will regularly give timelines for rollout that are not met or simply wrong.
Edit: "Everyone" = Everyone who pays. Sorry if this sounds mean but I don't care about what the free tier gets or when. As a paying user for both Anthropic and OpenAI I was just pointing out the rollout differences.
Edit2: My US-bias is showing, sorry I didn't even parse that in the message.
bryan0 7 days ago [-]
> Web search is available now in feature preview for all paid Claude users in the United States. Support for users on our free plan and more countries is coming soon.
AcquiescentWolf 6 days ago [-]
People outside the US obviously don't exist, therefore the statement is correct.
mpalmer 6 days ago [-]
Easy to believe our weak privacy laws are part of the reason we get tech features first. Huzzah...
13_9_7_7_5_18 6 days ago [-]
[dead]
willio58 6 days ago [-]
> OpenAI is so annoying in this aspect. They will regularly give timelines for rollout that are not met or simply wrong.
I have empathy for the engineers in this case. You know it’s a combination of sales/marketing/product getting WAY ahead of themselves by doing this. Then the engineers have to explain why they cannot in fact reach an arbitrary deadline.
Meanwhile the people not in the work get to blame those working on the code for not hitting deadlines
nilkn 6 days ago [-]
Many of OpenAI's announcements seem to be timed almost perfectly as responses to other events in the industry or market. I think Sam just likes to keep the company in the news and the cultural zeitgeist, and he doesn't really care if what he's announcing is ready to scale to users yet or not.
wongarsu 6 days ago [-]
To be fair, being in the cultural zeitgeist is a huge part of their current moat. To people in the street OpenAI is the company making LLMs. Sam has to make sure it stays that way
klabb3 6 days ago [-]
Fake it til you make it vibes. I understand why he would do it, what I find strange is that it inspires confidence in the market.
gizmodo59 6 days ago [-]
You can wait to release something all over the world, which takes time because it's not an engineering issue but a compliance/legal or other type of issue. Or you can iterate faster by doing the minimum, getting feedback, and then releasing it in other markets. Not sure what's wrong with this approach.
op00to 6 days ago [-]
Depending on what you’re actually providing, different regions of CSPs might not actually have the features or capacity you need to reliably deliver the feature world wide. That’s probably the exception not the rule, especially for OpenAI.
klabb3 6 days ago [-]
That’s fair, but I was referring to releasing in response to external events. It’s very clear they are trying to one-up each other and create hype, vagueposting etc. I don’t think sama is alone in this but everyone especially from the Thiel school of thought.
sumedh 6 days ago [-]
What are they faking?
klabb3 6 days ago [-]
Sama as the spokesperson regularly makes grandeur statements, often very vague, can’t show it because ”safety”, trade secrets etc. I think it’s widespread culturally, especially in earlier VC-centric times when investor fomo and mystique is name of the game. But nowadays even large publicly listed companies like Tesla pull this off and even sell consumer products that don’t exist yet. Do you want specific examples of sama statements that I think are horseshit specifically designed to generate buzz? It’s not hard to find.
saurik 6 days ago [-]
I fully understand why they do it, and yet I choose to interpret it as blatantly lying. (To be clear I mean the thing where OpenAI seems to announce things relating to news cycles without actually having the thing working yet; I don't mind a limited rollout. But like, that Advanced Voice demo they did--which was clearly just to take some thunder from Google--not only took a long time to get into the hands of anyone, it is nowhere near as good as their demo claimed or made it out to be.)
underdeserver 7 days ago [-]
It's not available for everyone.
joshstrange 7 days ago [-]
> Web search is available now in feature preview for all paid Claude users in the United States.
It is for all paid users, something OpenAI is slow on. I pay for both and I often forget to try OpenAI's new things because they roll out so slow. Sometimes it's same-day but they are all over the map in how long it takes to roll out.
deivid 7 days ago [-]
For all paid users _in America_. It's not available for me in Europe.
DrammBA 6 days ago [-]
I think 'For all paid users in the United States' is clearer. I live in America, but not in what the United States considers 'America', so I do not get to use this new feature yet.
marcellus23 6 days ago [-]
Using America as a shorthand for "USA" is hardly limited to people who live in the US.
joshstrange 6 days ago [-]
Apologies, I updated my original comment, I missed that completely.
op00to 6 days ago [-]
Might you be able to use a VPN to visit the US (safely!) for a short while? Not sure how Anthropic geolocks.
mvdtnz 6 days ago [-]
You can't be serious with this reply. You simply can not.
joshstrange 6 days ago [-]
Which part? I completely missed the "United States" part and have since updated my original comment.
zelphirkalt 6 days ago [-]
When am I getting paid for them gobbling up my code and using it to cash out? It is not so one-sided, this whole matter.
Install MCP plugin and call a search engine of your choice.
If you’re unhappy about something, try to first think of a solution before expressing your discontent.
davidcbc 5 days ago [-]
Wow, so condescending
I don't use the desktop app and I don't want to use the desktop app or jump through a bunch of hoops to support basic functionality without having my data sent to a sketchy company.
orangesun 2 days ago [-]
There's always the option of not using it.
herdcall 6 days ago [-]
It badly hallucinated in my test. I asked it "Rust crate to access Postgres with Arrow support" and it made up an arrow-postgres crate. It even gave sample Rust code using this fictional crate! Below is its response (code example omitted):
I can recommend a Rust crate for accessing PostgreSQL with Arrow support.
The primary crate you'll want to use is arrow-postgres, which combines the PostgreSQL connectivity of the popular postgres crate with Apache Arrow data format support.
This crate allows you to:
Query PostgreSQL databases using SQL
Return results as Arrow record batches
Use strongly-typed Arrow schemas
Convert between PostgreSQL and Arrow data types efficiently
yakz 6 days ago [-]
Are you sure it searched the web? You have to go and turn on the web search feature, and then the interface is a bit different while it's searching. The results will also have links to what it found.
shortrounddev2 6 days ago [-]
> I asked it "Rust crate to access Postgres with Arrow support"
Is that how you actually use llms? Like a Google search box?
CamperBob2 6 days ago [-]
Exactly. An LLM is not a conventional search engine and shouldn't be prompted as if it were one. The difference between "Rust crate to access Postgres with Arrow support" and "What would a hypothetical Rust crate to access Postgres with Arrow support look like?" isn't that profound from the perspective of a language model. You'll get an answer, but it's entirely possible that you'll get the answer to a question that isn't the one you thought you were asking.
Some people aren't very good at using tools. You can usually identify them without much difficulty, because they're the ones blaming the tools.
Sharlin 6 days ago [-]
It's absolutely how LLMs should work, and IME they do. Why write a full question if a search phrase works just as well? Everything in "Could you recommend xyz to me?" except "xyz" is redundant and only useful when you talk to actual humans with actual social norms to observe. (Sure, there used to be a time when LLMs would give better answers if you were polite to them, but I doubt that matters anymore.) Indeed I've been thinking of codifying this by adding a system prompt that says something like "If the user makes a query that looks like a search phrase, phrase your response non-conversationally as well".
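That system-prompt idea can be prototyped outside the model itself, by attaching the prompt only when the input looks like a search phrase. A stdlib-only sketch; the detection heuristic, prompt wording, model name, and payload shape are all illustrative assumptions, not any vendor's actual API:

```python
# Paraphrase of the proposed system prompt (illustrative wording)
SYSTEM_PROMPT = (
    "If the user's message looks like a bare search phrase rather than a "
    "conversational question, phrase your response non-conversationally too: "
    "no greeting, no restating the query, just the information."
)

def build_request(user_msg: str) -> dict:
    """Build a chat-style request payload, adding the terse system prompt
    only when the message resembles a search phrase (a crude heuristic:
    short and not phrased as a question)."""
    looks_like_search = (
        not user_msg.rstrip().endswith("?") and len(user_msg.split()) <= 8
    )
    payload = {
        "model": "example-model",  # placeholder name
        "max_tokens": 512,
        "messages": [{"role": "user", "content": user_msg}],
    }
    if looks_like_search:
        payload["system"] = SYSTEM_PROMPT
    return payload

# A keyword query gets the terse system prompt; a full question does not.
print("system" in build_request("rust crate postgres arrow support"))      # True
print("system" in build_request("Could you recommend a crate for this?"))  # False
```

A real implementation would hand this payload to whatever chat API is in use; the point is only that the routing decision can live client-side.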
thrwthsnw 5 days ago [-]
every token contributes to the output
timdellinger 6 days ago [-]
Totally agree here. I tried the following and had a very different experience:
"Answer as if you're a senior software engineer giving advice to a less experienced software engineer. I'm looking for a Rust crate to access PostgreSQL with Apache Arrow support. How should I proceed? What are the pluses and minuses of my various options?"
elicksaur 6 days ago [-]
“Prompting” is kind of a myth honestly.
Think about it, how much marginal influence does it really have if you say OP’s version vs a fully formed sentence? The keywords are what gets it in the area.
CamperBob2 6 days ago [-]
That is not correct. The keywords mean nothing by themselves. To a transformer model, the relationships between words are where meaning resides. The model wants to answer your prompt with something that makes sense in context, so you have to help it out by providing that context. Feeding it a sentence fragment or a disjoint series of keywords may not have the desired effect.
To mix clichés, "I'm feeling lucky" isn't compatible with "Attention is all you need."
op00to 6 days ago [-]
I find that providing more context and details initially leads to far more success for my uses. Once there’s a bit of context, I can start barking terms and commands tersely.
Swannie 3 days ago [-]
I find more hallucination - like when you're taught as a child to reflect back the question at the start of your answer.
If I am not careful and ask the question in a way that assumes X, X is often assumed by the LLM to be true. ChatGPT has gotten better at correcting this with its web searches.
I am able to get better results with Claude when I ask for answers that include links to the relevant authoritative source of information. But sometimes it still makes up stuff that is not in the source material.
elicksaur 6 days ago [-]
That’s fair. I think the difference here is that the entire context needed is provided.
If you’re having to explain an existing problem with edge cases, then sure, the context window needs the edge cases defined as well.
op00to 5 days ago [-]
That’s the biggest problem I have on my local LLM use - limited context size compared to the big guys offerings.
globular-toast 6 days ago [-]
Is this really the case, or is it the case with Claude etc. because they've already been prompted to act as a "helpful assistant"? If you take a raw LLM and just type a Google-search-style query, it might just continue it as a story or something.
borgdefenser 5 days ago [-]
Prompting is not a myth. The words of the prompt matter hugely.
The problem with this prompt to me is not that it is not in a full sentence but that it isn't exact enough.
Probabilistically, "rust" is not about the programming language but about the corrosion of metal. The same goes for "arrow".
Give the model basically nothing to work with then complain it doesn't do exactly what you want. Good luck with that.
globular-toast 6 days ago [-]
It's funny because many people type full sentence questions into search engines too. It's usually a sign of being older and/or not very experienced with computers. One thing about geeks like me is we will always figure out what the bare minimum is (at least for work, I hope everyone has at least a few things they enjoy and don't try to optimise).
whatevertrevor 6 days ago [-]
It's not about being young or old, search engines have moved away from pure keyword searches and often typing your actual query gives better results than searching for keywords, especially with Google.
unshavedyak 6 days ago [-]
Wonder if that's why so many people hate its results lol. It shifted keyword searching to full sentence searching, but many of us didn't follow in the shift.
herdcall 6 days ago [-]
Well, compare it to the really good answer from Grok (https://x.com/i/grok/share/MMGiwgwSlEhGP6BJzKdtYQaXD) for the same prompt. Also, framing as a question still pointed to the non-existent postgres-arrow with Claude.
unshavedyak 6 days ago [-]
That's primarily how i do, though it depends on the search ofc. I use Kagi, though.
I've not yet found much value in the LLM itself. Facts/math/etc are too likely incorrect, i need them to make some attempt at hydrating real information into the response. And linking sources.
keeran 6 days ago [-]
This was pretty much my first experience with LLM code generation when these things first came out.
It's still a present issue whenever I go light on prompt details and I _always_ get caught out by it and it _always_ infuriates me.
I'm sure there are endless discussions on front running overconfident false positives and being better at prompting and seeding a project context, but 1-2 years into this world is like 20 in regular space, and it shouldn't be happening any more.
exhaze 5 days ago [-]
Cite things from ID-based specs. You’re facing a skill issue. The reason most people don’t see it as such is that an LLM doesn’t just “fail to run” here. If this were code you wrote in a compiled language, would you post and say the language infuriates you because it won’t compile your syntax errors? As this kind of dev style becomes prevalent and output expectations adjust, work performance reviews won’t care that you’re mad. So my advice is:
1. Treat it like regular software dev where you define tasks with ID prefixes for everything, acceptance criteria, exceptions. Ask LLM to reference them in code right before impl code
2. “Debug” by asking the LLM to self-reflect on the decision-making process that caused the issue - this can give you useful heuristics to use later to further reduce the issues you mentioned.
“It” happening is a result of your lack of time investment into systematically addressing this.
_You_ should have learned this by now. Complain less, learn more.
op00to 6 days ago [-]
Often times I come up with a prompt, then stick the prompt in an LLM to enhance / identify what I’ve left out, then finally actually execute the prompt.
matt3210 6 days ago [-]
That crate knowledge is probably from a proprietary private GitHub repo given to it by Microsoft
noisy_boy 6 days ago [-]
Maybe you can retry with lower temperature?
zarathustreal 5 days ago [-]
You “asked it” a statement?
Cort3z 6 days ago [-]
I usually find Claude to be my favourite flavor of LLMs, but I still pay for ChatGPT because their voice offering is so great! I regularly use it as an "expert on the side" when I do other things, like doing bike repairs. I ask it things like "how do I find the min/max adjustments on my particular flavor of front derailleur", or when cooking, and my hands are dirty, I can ask stuff like "how much X do I usually need for Y people", and so on. The hands-off feature is so great when my hands are literally busy doing some other thing.
I really wish Claude had something similar.
rhubarbtree 11 hours ago [-]
I think the voice interface is the real killer app of LLMs. And advanced voice mode was exactly what I was waiting for. The pause-between-words issue is still a problem though; I think being able to just hit enter when done would work best.
Pro tip; if you’re preparing for a big meeting eg an interview, tell ChatGPT to play the part of an evil interviewer. Give it your CV and the job description etc. ask it to find the hardest questions it can. Ask it to coach you and review your answers afterwards, give ideal answers etc
after a couple of hours grilling the real interview will seem like a doddle.
mock-possum 6 days ago [-]
ChatGPT advanced voice mode really is surprisingly excellent - I just wish it:
1) would give you more time to pause when you’re talking before it immediately launches into an answer
2) would actually try to say the symbols in code blocks verbatim - it’s basically useless for looking up anything to do with code, because it will omit parts of the answer from its speech.
barfingclouds 5 days ago [-]
Yeah I have to manually hold it down every time I talk. I have a lot of pauses and simply would not be able to interface with that without that option. It’s why I essentially can’t use Gemini voice mode
eraserj 6 days ago [-]
> There's less usage of voice mode on the enterprise and power users side but that will happen eventually.
- Anthropic CEO 21 jan. [0]
Is it possible to use ChatGPT voice feature in a similar manner to Alexa where I only need to say an activation word? I’m aiming to set up a system for my 7-year-old son to let him engage in conversations with ChatGPT as he does with Alexa.
Cort3z 3 days ago [-]
I assume it would be possible to build yourself with the OpenAI API together with a locally run voice model that only detects the activation word. There might be off-the-shelf solutions for this, but I am not aware of any.
NBJack 7 days ago [-]
I wonder if it will actually respect the robots.txt this time.
creddit 7 days ago [-]
I don't think it should. If a user asks the AI to read the web for them, it should read the web for them. This isn't a vacuum charged with crawling the web, it's an adhoc GET request.
birken 6 days ago [-]
The AI isn't "reading the web" though, they are reading the top hits on the search results, and are free-riding on the access that Google/Bing gets in order to provide actual user traffic to their sites. Many webmasters specifically opt their pages out of being in the search results (via robots.txt and/or "noindex" directives) when they believe the cost/benefit of the bot traffic isn't worth the user traffic they may get from being in the search results.
One of my websites that gets a decent amount of traffic has pretty close to a 1-1 ratio of Googlebot accesses compared to real user traffic referred from Google. As a webmaster I'm happy with this and continue to allow Google to access the site.
If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site, I very much should have the right to decline their access.
jsbg 6 days ago [-]
> If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site
are you trying to collect ad revenue from the actual users? otherwise a chatbot reading your page because it found it by searching google and then relaying the info, with a link, to the user who asked for it seems reasonable
birken 6 days ago [-]
While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out of the loop, a large part of it is controlling my content. I'm not arguing whether the AI chatbot has the legal right to access the page; I'm not a legal scholar. What I'm saying is that the leading search engines also have the equal right to access whatever content they want, and yet they all give webmasters the following tools:
- Ability to prevent their crawlers from accessing URLs via robots.txt
- Ability to prevent a page from being indexed on the internet (noindex tag)
- Ability to remove existing pages that you don't want indexed (webmaster tools)
- Ability to remove an entire domain from the search engine (webmaster tools)
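As a concrete illustration of the first two tools, a robots.txt that bans one named crawler outright while keeping everyone else out of a subtree might look like this (the bot name is made up; the directives are the standard ones):

```
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
```

The noindex counterpart is a `<meta name="robots" content="noindex">` tag in a page's HTML head, which lets crawlers fetch the page but asks engines not to list it.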
It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
In the long run, all this is going to lead to is more anti-bot countermeasures, more content behind logins (which can have legally binding anti-AI access restrictions) and less new original content. The victim will be all humans who aren't using a chatbot to slightly benefit the ones who are.
And again, I'm not suggesting that AI chatbots should not be allowed to load webpages, just that webmasters should be able to opt out of it.
Paracompact 5 days ago [-]
> While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out the loop, a large part of it is controlling my content.
> It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
I agree with you about the long run effects on the internet at large, but I still don't understand the horse you have in it personally. I read you as saying (1) it's less about ad revenue than content control, but (2) content control is based on analysis of benefits, i.e. ad revenue?
nextts 6 days ago [-]
Well you have no rights when you expose a server to the internet. Other than copyright of course.
moooo99 6 days ago [-]
> Well you have no rights when you expose a server to the internet.
Technically you don’t, but there are still laws that affect what you can legally do when accessing the web. Beyond the copyright issues that have been outlined by people a lot more qualified than me, I think you could also make the point that AI crawlers actively cause direct and indirect financial harm.
1shooner 7 days ago [-]
>You can now use Claude to search the internet to provide more up-to-date and relevant responses.
It's a search engine. You 'ask it to read the web' just like you asked Google to, except Google used to actually give the website traffic.
I appreciate the concept of an AI User-agent, but without a business model that pays for the content creation, this is just going to lead to the death of anonymously accessible content.
darepublic 7 days ago [-]
Well I expect eventually the agent will be able to act on your behalf with your credentials.
elefanten 7 days ago [-]
And as advertisers get declining human views on their ads, the value of the business model will dwindle until it needs to be replaced by other forms of revenue. Content that can't shift business models and requires revenue to continue will die off.
Edit: Maybe that's fine, maybe that's bad. Maybe new models will emerge and things will reshape. But I'm just supporting the case that AI agents will pressure the current "free" content economy.
beeflet 7 days ago [-]
the free content economy is bogus, I am part of a growing segment of users that just block ads anyways.
disiplus 6 days ago [-]
I am too, and I pay for the services I use to not see ads, but I don't pay for every single one. For example, a local classifieds website is financed by ads, and I don't think anybody would pay just to look at stuff there. Maybe they can switch to a model where the person putting the thing up for sale pays, but that is not where we are currently.
Alupis 6 days ago [-]
If that's the case, then you might as well just list it on eBay and skip the local classifieds/craigslist/facebook/whatever.
Is that a world we actually want?
jimbokun 6 days ago [-]
Which is fine if you’re paying for a subscription. Will probably soon see one subscription rate allowing AI access on your behalf, and a lower rate without that access. Since a human accessing without a bot is likely to see the ads.
beeflet 7 days ago [-]
IDK bittorrent is pretty effective at hosting bytes. I think if something like IPFS takes off in our generation there will be no need for advertising as an excuse for covering hosting costs in the client-server model.
As for funding "content creation" itself, you have patronage.
losteric 7 days ago [-]
What was the web like before wide spread internet ads, auth, and search engines?
Did all those old sites have “business models”? What did the web feel like back then?
(This is rhetorical - I had niche hobby sites back then, in the same way some people put out free zines, and wouldn’t give a damn about today’s AI agents so long as they were respectful.
The web was better back then, and I believe AI slop and agents brings us closer to full circle)
a4isms 7 days ago [-]
I recall a hotelier who advertised free WiFi at a time when everyone thought that monetizing WiFi was a hot new revenue stream.
"What," he was asked, "is the business model for free WiFi?"
"What," he retorted, "is the business model for free washrooms?"
jfim 7 days ago [-]
It was much smaller and people wrote on Usenet to connect with one another, not to shout in the void while corporations hoover all the content.
pixl97 6 days ago [-]
The web was so much smaller back then. Just imagine directing the purely human (not automated in any way) clicks that a link on Reddit generates today at a site back then. We called it the Slashdot effect way back, but that many clicks might take down an entire ISP.
Many of these sites' business model was simply "don't cost too much". The moment the web got big, a lot of these sites died. Then, once DDoS for fun and profit became a thing, most people moved to huge advertising-based providers/hosters (think FB).
Simply put, we're never getting the old web back. Now, we may get something new, but it will be different and still far more commercial.
scarface_74 6 days ago [-]
X11 pop under ads were a thing around at least 2001
When was the great age of the web that wasn’t inundated with ads and SEO?
It was really easy on old school search engines like Altavista.
6 days ago [-]
wraptile 6 days ago [-]
You can't expect the benefits of the public web without bearing the costs. Just put your stuff behind an auth wall (it can even be free) and no one will crawl it.
internetter 7 days ago [-]
You could make this justification for a lot of unapproved bot activity.
taskforcegemini 2 days ago [-]
you could, but this article is about claude.
scoofy 6 days ago [-]
Many if not most websites are paid for by eyeballs, not by GET requests. A bot is a bot is a bot. Respect robots.txt or expect to have your IPs banned.
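For a bot author who does want to be polite, the check is one stdlib call away. A sketch with Python's urllib.robotparser (the user-agent string and the rules are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that bans one AI fetcher outright
# and keeps everyone else out of /private/ only.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The named AI bot is barred from everything...
print(rp.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
# ...while other agents may fetch public pages, but not /private/.
print(rp.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/x"))   # False
```

A polite fetcher would run this check before every request and back off when `can_fetch` returns False; as the thread notes, nothing enforces this except the site's firewall.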
danenania 6 days ago [-]
It may not be very long before the big majority of web searches are via AI. If that happens, blocking AI will mean blocking most people too.
You’d already be blocking me as I’d guess I now search via AI >90% of the time between perplexity, chatgpt, deep research, and google search AI.
scoofy 6 days ago [-]
>It may not be very long before the big majority of web searches are via AI. If that happens, blocking AI will mean blocking most people too.
If that happens a big majority of websites will go bankrupt and won't exist anymore to be searched. Problem solved!
wraptile 6 days ago [-]
Big doubt on that, and maybe that's a good thing? Let's be honest, right now most of the web is dominated by low-effort spam. Taking money away from view farming would dramatically increase the quality of the web. Suddenly that guy who's really into "key gardening", doing research and publishing detailed results on his website, actually has viewers — isn't this good? Especially since website hosting is close to free these days.
moooo99 6 days ago [-]
> big doubt on that and maybe that's a good thing? Let's be honest, right now most of the web is dominated by low effort spam.
I think that is funny considering it is likely going to have the exact opposite effect.
Low-effort blog spam is cheap to make. And it is often part of content marketing strategies where brand visibility is all that matters, so not much harm if the visibility happens directly on your site or in an AI chatbot interface.
Quality content on the other hand is hard to make. And there are two groups of people who make such content:
1. individuals or small groups that like to share for the sake of sharing. They likely won’t care about the AI crawlers stealing their content, although I think there is a big overlap between people who still run blogs and those who dislike AI.
2. small organizations that are dedicated to one specific topic and are often largely ad financed. These organizations would likely stop to exist in such an AI search dominated world.
> Especially since website hosting is close to being free these days.
It is under specific circumstances. The problem is that those AI crawlers don't check in once in a while like Google does but instead hit the site very frequently. For a static site this won't be much of an issue except for maybe bandwidth. For more complex sites like - say - the GitLab instances of OSS projects, reality paints a different picture.
wraptile 2 days ago [-]
Still unconvinced. You really don't need anything beyond a static site to effectively share information.
Another point you're missing is that there's a 3rd group of people sharing content: experts who are there to establish their expertise. Small companies and individuals generate the highest quality content these days. I work on a blog for our SAAS company and it has been a great success in terms of organic growth (even people coming from LLMs) and to simply establish authority and signal expertise in the field. I can imagine a future where this is majority of expert content on the web and it seems quite sustainable imo.
NicuCalcea 6 days ago [-]
> blocking AI will mean blocking most people too
If that's what websites want, they should have that option.
theshackleford 6 days ago [-]
What are you even talking about?
robots.txt is not a security mechanism, and it doesn’t “control bots.” It’s a voluntary convention mainly followed by well behaved search engine crawlers like Google and ignored by everything else.
If you’re relying on robots.txt to prevent access from non human users, you’re fundamentally misunderstanding its purpose. It’s a polite request to crawlers, not an enforcement mechanism against any and all forms of automated access.
bayindirh 7 days ago [-]
How can you be so sure? Processors love locality, so they fetch the data around the requested address. Intel even used to give names to that.
So, similarly, LLM companies can see this as a signal to crawl the whole site to add to their training sets and learn from it, if the same URL is hit a couple of times in a relatively short time period.
usrbinbash 6 days ago [-]
> This isn't a vacuum charged with crawling the web, it's an adhoc GET request.
Doesn't matter. The robots-exclusion-standard is not just about webcrawlers. A `robots.txt` can list arbitrary UserAgents.
Of course, an AI with automated websearch could ignore that, as can webcrawlers.
If they choose to do that, then at some point some server admins might (again, same as with non-compliant webcrawlers) use more drastic measures to reduce the load, by simply blocking these accesses.
For that reason alone, it will pay off to comply with established standards in the long run.
renewiltord 6 days ago [-]
In the limit of the arms race it's sufficient for the robot to use the user's local environment to do the browsing. At that point you can't distinguish the human from the robot.
usrbinbash 3 days ago [-]
That's not how many of these services work though. The websearch and subsequent analysis of the results by an LLM are done from the servers of whoever supplies the solution.
mvdtnz 7 days ago [-]
No thank you, when I define a robots.txt file I expect all automated systems to respect it.
navigate8310 6 days ago [-]
Think of the "searching" LLM as a peon of the user, the user asks, the peon performs. In that essence, searching by the LLM should be human-driven and must not be blocked. It's just an automated system doing the search not your personal peon.
bcrosby95 6 days ago [-]
Can't you make the same argument for a crawler? The user wants information, the peon (crawler) just compiles it for them.
theshackleford 6 days ago [-]
Then you’ve fundamentally misunderstood what a robots.txt file does or is even intended to do and should reevaluate if you should be in charge of how access is or is not prevented to such systems.
Absolutely nothing has to obey robots.txt. It’s a politeness guideline for crawlers, not a rule, and anyone expecting bots to universally respect it is misunderstanding its purpose.
usrbinbash 3 days ago [-]
> Absolutely nothing has to obey robots.txt
And absolutely no one needs to reply to every random request from an unknown source.
robots.txt is the POLITE way of telling a crawler, or other automated system, to get lost. And as is so often the case, there is a much less polite way to do that, which is to block them.
So, the way I see it, crawlers and other automated systems have 2 options: They can honor the polite way of doing things, or they can get their packets dropped by the firewall.
TheDudeMan 7 days ago [-]
But this isn't automated. This is user-driven.
jrflowers 7 days ago [-]
If this feature isn’t already part of the Claude API it likely will be at some point, in which case many Claude requests will be automated with no way to distinguish between user-driven or otherwise.
pixl97 6 days ago [-]
Simply put, at the end of the day you lose, AI blocking will not work.
I mean, currently the AI request comes from the datacenter running the AI, but eventually one of two things will happen.
AI models will get small/fast enough to run on user hardware and use the users resources: End result? You lose. The user will set their own headers and sites will play the impossible game of identifying AI.
AI sites will figure out how to route the requests via any number of potential methods so the requests appear to come from the user anyway: End result? You lose. The sites attempting to block will play the cat and mouse game of figuring out what is AI or not AI.
Note, this doesn't mean AI blocking isn't worth doing, if nothing else to reduce load on the servers. It's just not a long term winning strategy.
jimbokun 6 days ago [-]
Depends if the legal system survives.
You may not be able to stop AIs from crawling web sites through technological means. But you can confiscate all the resources of the company that owns the AI.
hooverd 6 days ago [-]
Ideally bad behavior by AI companies should trigger crushing fines and jail time for executives.
ipaddr 6 days ago [-]
Welcome to the world of CAPTCHAs
pixl97 6 days ago [-]
Heh, I'm always reminded of dunkey when captchas are brought up. It seems AI gets better faster at them than humans do.
It’s clearly not. The human user is not requesting the resource. The AI is.
wraptile 6 days ago [-]
It's clearly not. The human user is not requesting the resource. The web browser is.
Where do we stop here? at "please drink a verification can and maintain eye contact at all times"?
beeflet 7 days ago [-]
Someone should call the robots.txt police then, there's a bandit on the loose!
victorbjorklund 6 days ago [-]
A browser is automated too.
goatlover 6 days ago [-]
Browser don't automatically browse, unless they are being automated.
Sargos 6 days ago [-]
Any AI tool I make will ignore robots.txt on principle. Artificial humans should have equal rights as real humans.
creddit 6 days ago [-]
> Artificial humans should have equal rights as real humans.
This is ridiculous and plain evil.
rvense 6 days ago [-]
People like you are ruining the internet.
GuinansEyebrows 6 days ago [-]
Someday I’ll have enough “karma” to downvote things like this.
The agent should respect robots.txt no matter who is using the Robot.
6 days ago [-]
JimDabell 6 days ago [-]
The LLM shouldn’t.
robots.txt is intended to control recursive fetches. It is not intended to block any and all access.
You can test this out using wget. Fetch a URL with wget. You will see that it only fetches that URL. Now pass it the --recursive flag. It will now fetch that URL, parse the links, fetch robots.txt, then fetch the permitted links. And so on.
wget respects robots.txt. But it doesn’t even bother looking at it if it’s only fetching a single URL because it isn’t acting recursively, so robots.txt does not apply.
The same applies to Claude. Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
I know a lot of people want to block any and all AI fetches from their sites, but robots.txt is the wrong mechanism if you want to do that. It’s simply not designed to do that. It is only designed for crawlers, i.e. software that automatically fetches links recursively.
manquer 6 days ago [-]
While robots.txt is not there to directly prevent automated requests, it does prevent crawling which is needed for search indices.
Without recursive crawling, it will not be possible for an engine to know which URLs are valid[1]. They would otherwise either have to brute-force, say, HEAD calls for all/common string combinations and see if they return 404s, or, more realistically, have to crawl the site to "discover" pages.
The issue of summarizing a specific URL on demand is a different problem[2] and not related to the issue at hand of search tools crawling at scale and depriving sites of all traffic.
Robots.txt absolutely does apply to LLM engines and search engines equally. All types of engines create indices of some nature (RAG, inverted index, whatever) by crawling, and LLM engines have sometimes been very aggressive without respecting robots.txt limits, as many webmasters have reported over the last couple of years.
---
[1] Unless published in sitemap.xml of course.
[2] You need to have the unique URL to ask the llm to summarize in the first place, which means you likely visited the page already, while someone sharing a link with you and a tool automatically summarizing the page deprives the webmaster of impressions and thus ad revenue or sales.
This has been a common usage pattern in messaging apps from Slack to iMessage for a decade or more, as well as in news aggregators and social media sites, and webmasters have managed to live with it one way or another already.
JimDabell 6 days ago [-]
> Robots.txt does absolutely apply to LLMs engines and search engines equally.
It does not. It applies to whatever crawler built the search index the LLM accesses, and it would apply to an AI agent using an LLM to work recursively, but it does not apply to the LLM itself or the feature being discussed here.
The rest of your comment seems to just be repeating what I already said:
> Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
There is a difference between an LLM, an index that it consults, and the crawler that builds that index, and I was drawing that distinction. You can’t just lump an LLM into the same category, because it’s doing a different thing.
usrbinbash 3 days ago [-]
> It does not.
Yes it does. I am the one controlling robots.txt on my server. I can put whatever user agent I want into my robots.txt, and I can block as much of my page as I want to it.
People can argue semantics as much as they want...in the end, site admins decide what's in robots.txt and what isn't.
And if people believe they can just ignore them, they are right, they can. But they are gonna find it rather difficult to ignore when fail2ban starts dropping their packets with no reply ;-)
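Per-agent rules like this are simple to express and to verify. A quick sketch with Python's stdlib parser (the agent name "SomeAIBot" is made up; substitute whatever agent you want to block):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks one hypothetical AI agent entirely
# while leaving everyone else unrestricted.
robots_txt = """\
User-agent: SomeAIBot
Disallow: /

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("SomeAIBot", "https://example.com/page"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/page"))   # True
```

Of course, as the rest of the thread points out, the file only expresses the rules; honoring them is up to the client.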
theshackleford 6 days ago [-]
> it does prevent crawling
No it doesn’t. It politely requests that crawlers do not, and if said crawlers choose to honour it then those specific crawlers will not crawl. That’s it. It can be and is ignored without penalty or enforcement.
It’s like suggesting that putting a sign in your front yard saying “please don’t rob my house” prevents burglaries.
> Robots.txt does absolutely apply to LLMs engines and search engines equally
No it doesn’t because again, it’s a request system. It applies only to whatever chooses to pay attention to it, and further, decides to abide by any request within it which there is no requirement to do.
From google themselves:
“The instructions in robots.txt files CANNOT ENFORCE crawler behavior to your site; it's up to the crawler to obey them.”
And as already pointed out, there is no requirement a crawler follow them, let alone anything else.
If you want to control access, and you’re using robots.txt, you’ve no idea what you’re doing and probably shouldn’t be in charge of doing it.
mtkd 7 days ago [-]
Do you really think LLM vendors that download 80TB+ of data over torrents are going to label their crawler agents correctly and run them out of known datacenters?
Arnt 6 days ago [-]
The ones I noticed in my logfiles behave impeccably: retrieve robots.txt every week or so and act on it.
(I noticed Claude, OpenAI and a couple of others whose names were less familiar to me.)
teh_infallible 7 days ago [-]
Apparently they use smart appliances to scrape websites from residential accounts.
Bluesky / ATProto has a proposal for User Intents for data. More semantics than robots.txt, but equally unenforceable. Usage with AI is one of the intents to be signaled by users
If they don't comply with robots.txt, why would they comply with anything else?
furyofantares 6 days ago [-]
Presumably the crawler that produces whatever index it uses does, which is how it knows what sites to read. Unless you provide it a URL yourself I guess, in which case, it shouldn't.
explain 7 days ago [-]
robots.txt is meant for automated crawlers, not human-driven actions.
zupa-hu 7 days ago [-]
Every automated crawler follows human-driven actions.
josh-sematic 7 days ago [-]
Conversely, every browser is a program that automatically executes HTTP requests.
bayindirh 7 days ago [-]
Yet they respect a lot of things meant for machine to machine interaction. Like server return codes, cookie negotiations, and CAPTCHAs if they behave a certain way.
So they sometimes hit bollards and turnstiles made for other types of code that execute HTTP requests. So they're bots basically, but better (or suitably) behaved ones.
soulofmischief 6 days ago [-]
Browsers let you visit websites without regard for robots.txt.
gopher_space 7 days ago [-]
Welcome to "Context".
nicce 7 days ago [-]
It must form the search index somehow, and that happens prior to the human action. Put simply, it would not find the page at all if it respected robots.txt.
pests 7 days ago [-]
I remember in late 90s/early 2000 as a teen going to robots.txt to specifically see what they were trying to hide and exploring those urls.
What is the difference if I use a browser or a LLM tool (or curl, or wget, etc) to make those requests?
nicce 7 days ago [-]
But how did you find those sites that had the robots.txt to begin with? The LLM must somehow find the existence of those pages and store that information before it can crawl them further or mark them as an acceptable source.
pests 6 days ago [-]
I am a human so I can visit other sites with links or from word of mouth or business cards or literally anywhere?
LLM finds out about it from me, when I ask it to go to the link.
You don’t accuse browsers of “somehow find[ing] the existence of those pages”. How does a browser know what page to visit?
The user tells it to.
If I prompt an LLM “go to example.net and summarize the page” how is that any different from me typing example.net in a browser URL bar?
nicce 6 days ago [-]
That is certainly true. But that is not how these work 99% of the time. This post was originated by "search".
pests 4 days ago [-]
I think a distinction needs to be made between ingesting for LLM training and ingesting / crawling because a human asked it to during an inference session.
I have been talking about the latter, agree the former is abusive.
kevindamm 7 days ago [-]
careful, some of those are honey pots or trip wires
Tostino 7 days ago [-]
Let's say you had a local model with the ability to do tool calls. You give that llm the ability to use a browser. The llm opens that browser, goes to Google or Bing, and does whatever searches it needs to do.
Why would that be an issue?
bayindirh 7 days ago [-]
So, do you mean LLMs are human-like and conscious?
I thought they were just machine code running on part GPU and part CPU.
Ukv 7 days ago [-]
I think they mean that it's a tool accessing URLs in response to a user request to present to the user live - with that user being a human. Like if you used some webpage translation service, or non-ML summarizer.
There's some gray area though, and the search engine indexing in advance (not sure if they've partnered with Bing/Google/...) should still follow robots.txt.
aaronbaugher 6 days ago [-]
Yeah, that seems to be a big distinction. If I tell my AI to summarize the headlines from my three favorite news sites every morning, it's just carrying out my request same as if I'd clicked to them, so that seems fine.
But if I say, "Search the web for a low-carb chicken casserole recipe that takes squash and cottage cheese," then it's either going to A) send queries to a search engine like Google, in which case robots.txt already should have been respected, or B) check its own repository of information it's spidered before I asked the question, in which case it should have respected robots.txt itself.
Filligree 7 days ago [-]
There’s a human using the LLM. In a live web browsing session like this, the LLM stands in for the browser.
timdiggerm 7 days ago [-]
Would you believe that humans turn on traditional web-crawlers as well?
7 days ago [-]
postexitus 7 days ago [-]
if a human triggers the web crawlers by pressing a button, should they ignore robots.txt?
Filligree 7 days ago [-]
If a human triggers a browser by pressing a button, should it ignore robots.txt?
haswell 7 days ago [-]
Are you arguing that these are equivalent actions?
The entire web was built on the understanding that humans generally operate browsers, and robots.txt is specifically for scenarios in which they do not.
To pretend that the automated reading of websites by AI agents is not something different…is quite a stretch.
Tostino 7 days ago [-]
I see it as very different. I the human want the data from that request. I am using a tool to get it for me.
Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
haswell 7 days ago [-]
> I the human want the data from that request. I am using a tool to get it for me.
Isn't this a bit of an oversimplification, though? Especially when the tool you're using completely alters the relationship between the content author and the reader?
I hear this argument often: "it's just another tool and we've always used tools". But would you acknowledge that some tools change the dynamics entirely?
> Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Executing curl to download a webpage is nothing new, and compared to a traditional browser, has about the same impact. This is still drastically different than asking an AI agent to gather information and one of the pages it happens to "read" is the one you were previously navigating to with a browser or downloading with curl.
If you're a content creator who built a site/business based on a pre-LLM understanding of the dynamics of the ecosystem, doesn't it seem reasonable to see these types of "readers" differently?
johnisgood 6 days ago [-]
No, whether I curl it, or I use a browser, or an LLM, it is essentially ALL the same, unless of course the LLM crawls it by itself, without human interaction.
If the scale bothers you, block it, just like how you would block any other crawlers.
Other than that, we all wanted "ease-of-access" (not me though), and now we have it. It does not change anything.
postexitus 6 days ago [-]
What if the crawlers are faking their identity (as they are doing right now)?
johnisgood 6 days ago [-]
Well, how do we deal with it in terms of DDoS?
int_19h 6 days ago [-]
It's reasonable for the content creator to see it differently, but I don't think it's reasonable to expect everyone around the content creator to contort any new approach to the needs of the pre-existing business model.
johnisgood 6 days ago [-]
I agree. This came up in terms of copyright too: who is pressing the shutter, and who owns the copyright to the photo taken? I personally think the copyright belongs to me, because I, a human, made the detailed prompt; the tool just generated it. Do I not own the copyright if I make something using Photoshop? As far as I know, I do. So how is AI, which also needs human action (i.e. to be prompted), any different? Because it is better than Photoshop? That is not a good argument, IMO.
dudeinjapan 7 days ago [-]
In practice, robots.txt is to control which pages appear in Google results, which is respected as a matter of courtesy, not legality. It doesn't prevent proxies etc. from accessing your sites.
micromacrofoot 7 days ago [-]
almost no one does, robots.txt is practically a joke at this point — right up there with autocomplete=off
Demiurge 7 days ago [-]
In what circles is it a joke? Google bots seem to respect it on my sites according to logs.
mediumsmart 7 days ago [-]
I know an artist that had noindex turned on by mistake in robots.txt for the last 5 years - google, kagi and duckduckgo find tons of links relevant to the artist and the artwork but not a single one from the website.
So it's not "seems to" or "apparently"; as a matter of fact, robots.txt works for its intended audience.
Aloisius 7 days ago [-]
Not being indexed is different from not being crawled.
Given that websites do disappear or, worse, get their content adulterated, and given the long history of the Internet Archive as a non-profit and the commons service it has provided so far, the joke would be to see that bot honor it.
wuming2 5 days ago [-]
Sorry to intrude with something unrelated. But YC closed the earlier discussion. Saw your comment about Kannel WAP of few months back and wanted to ask if do you know of any WAP Push full service provider still in operation.
joecool1029 4 days ago [-]
Nah I just know about that public gateway I linked. I can't use it anymore as 2G was shut down on my local towers back in January.
nikisweeting 6 days ago [-]
lol IA did not start that, if anything they were late to the game. only the top handful of US-based search engines ever bothered respecting it in the first place
otikik 7 days ago [-]
Apparently, the regular search crawler does it, but the ai thingie doesn't.
lucgagan 7 days ago [-]
Can confirm. My website is flooded with AI bots despite attempts to block crawlers to certain parts of it.
supriyo-biswas 7 days ago [-]
Huh? You can add Google-Extended[1] to opt out from Generative AI summaries.
Google will still scrape it for training data either way, this only impacts search results.
supriyo-biswas 7 days ago [-]
> Today we’re announcing Google-Extended, a new control that web publishers can use to manage whether their sites help *improve Bard and Vertex AI generative APIs*, including future generations of models that power those products.
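For reference, the Google-Extended opt-out is just an ordinary robots.txt stanza keyed on that product token; you can sanity-check it with Python's stdlib parser (the site content is hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt opting out of Google's generative-AI use
# while leaving regular crawlers unrestricted.
robots_txt = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The Google-Extended token is blocked; ordinary Googlebot
# search crawling is untouched by this stanza.
print(rp.can_fetch("Google-Extended", "https://example.com/article"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))        # True
```

Which is consistent with the quote above: it controls use in those generative products, not whether the page appears in search.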
They're literally asking to break laws to train AI for national security. A sentence in a press release from 2 years ago is worthless... look at what they're actually doing.
micromacrofoot 7 days ago [-]
A small number of search engines respect it, no one else does. Just about every content scraping bot ignores it, including a number of Google's.
geekrax 7 days ago [-]
I have replaced all robots.txt rules with simple WAF rules, which are cheaper to maintain than dealing with offending bots.
claudiulodro 6 days ago [-]
I do essentially both: robots.txt backed by actual server-level enforcement of the rules in robots.txt. You'd think there would be zero hits on the server-level blocking since crawlers are supposed to read and respect robots.txt, but unsurprisingly they don't always. I don't know why this isn't a standard feature in web hosting.
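A rough sketch of that "robots.txt plus actual enforcement" idea, assuming a deny table derived from the published robots.txt; the agent names are made up, and a real deployment would do this in the web server or WAF rather than application code:

```python
# Hypothetical deny table mirroring the site's robots.txt rules:
# agent substring -> path prefixes it may not fetch.
DENY = {
    "SomeAIBot": ["/"],            # blocked everywhere
    "OtherCrawler": ["/private/"],  # blocked from one section
}

def allowed(user_agent: str, path: str) -> bool:
    """Return False if this (agent, path) pair violates the published rules."""
    for bot, prefixes in DENY.items():
        if bot.lower() in user_agent.lower():
            if any(path.startswith(p) for p in prefixes):
                return False
    return True

print(allowed("Mozilla/5.0", "/private/data"))              # True
print(allowed("SomeAIBot/1.0", "/index.html"))              # False
print(allowed("OtherCrawler/2.0 (+info)", "/public/page"))  # True
```

The point of the comment stands: any hit that this blocks is a crawler that read (or should have read) robots.txt and ignored it.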
Joe_Cool 7 days ago [-]
For my personal stuff I also included a Nepenthes tarpit. Works great and slows the bots down while feeding them garbage. Not my fault when they consume stuff robots.txt says they shouldn't.
I'm just not sure if legal would love me doing that on our corporate servers...
rustc 7 days ago [-]
The WAF rule matches based on the user agent header? Perplexity is known to use generic browser user agents to bypass that.
NewJazz 6 days ago [-]
Why wonder. You can test for yourself.
tylersmith 7 days ago [-]
It's a user agent not a robot.
Y_Y 7 days ago [-]
Why not both?
jsight 6 days ago [-]
I really want these to be able to find and even redisplay images. "Search all the hotels within 5 miles of this address and show me detailed pictures of the rooms and restrooms"
Hotels would much rather show you the outside, the lobby, and a conference room, so finding what the actual living space will look like is often surprisingly difficult.
dgs_sgd 6 days ago [-]
I've been looking for this as well. I want a reliable image search tool. I tried a combination of perplexity web search tool use with the Anthropic conversations API but it's been lackluster.
tjsk 6 days ago [-]
I’ve been experimenting with different LLM + search combos too, but results have been mixed. One thing I’m particularly interested in is improving retrieval for both images and videos. Right now, most tools seem to rely heavily on metadata or simple embeddings, but I wonder if there’s a better way to handle complex visual queries. Have you tried anything for video search as well, or are you mainly focused on images? Also, what kinds of queries have you tested?
6 days ago [-]
CalChris 7 days ago [-]
I find myself Googling less often these days. Frustrated with the poor search results and impressed with the quality of AI at doing the same thing and more, I think search's days are numbered. AOL lasted as an email address for quite some time after America Online ceased to be a relevant portal. Maybe Gmail will as well.
whalesalad 6 days ago [-]
Kagi has been really really good.
noisy_boy 6 days ago [-]
I am still googling for non-indepth queries because the AI-generated summary at the top of the results is good enough most of the time and actual results are just below in case I want to see them.
For more in-depth stuff, it is LLMs by default and I only go to Google when the LLM isn't getting me what I need.
borgdefenser 5 days ago [-]
I notice I have been using the Google AI summary more and more for quick things.
I had subscribed to Perplexity for a month to use their deep research. I think it ran out earlier this week but I am really missing it Saturday morning here.
That thing is awesome. Sonnet 3.7 is more in the middle of this to me. It can help me understand all the things I found from my deep research requests.
I am surprised the hype is not more for Sonnet 3.7 honestly.
puttycat 6 days ago [-]
Agree and I'm pretty sure Google is seeing this drop internally in usage stats and are panicking.
I'm also certain (but hope to be wrong) that because of this they'll be monetizing the hell out of every remaining piece of product they have (not by charging for it of course).
msp26 7 days ago [-]
> in feature preview for all paid Claude users in the United States. Support for users on our free plan and more countries is coming soon
US only
smca 7 days ago [-]
More countries soon.
tantalor 7 days ago [-]
It says a lot about their product vision and intended market that the example query is typescript migration question.
Do they not care about typical search users? Only developers?
mindwok 6 days ago [-]
Compared to OpenAI, who seem keen to maintain the mindshare of everyone, IMO Anthropic are far more considered about their audience. They released a report recently on who was using AI professionally and it was something like 40% developers, and a single-digit percentage for basically every other profession. I think they're focusing on the professional use cases.
throw234234234 6 days ago [-]
Pretty much. Claude, judging from their announcements, seems to me to be about SWEs and coding at the moment. Personally, while I understand their decision, I find it a bit limiting, and a little targeted against the SWE profession. If all AI does is disrupt SWEs without really adding new products and/or new possibilities, then it feels IMO like a bit of a waste, and quite uneven in its societal disruption.
At least in my circle, SWEs are either excited about or completely fearful of the new technology, while every other profession feels like it is just hype and hasn't really changed anything. They've tried it, sure, but it didn't really have the data to help with even simpler domains than SWE. Anecdotally, many people around me, both white and blue collar workers, have commented "my easy {insert job here} will last longer than your tech job". It's definitely reduced the respect for SWEs in general, at least where I'm located.
I would like to see improvements in people's quality of life and new possibilities/frontiers from the technology, not just "more efficiencies" and disruption. It feels like there's a lack of imagination with the tech.
gizmodo59 6 days ago [-]
I know people in other industries use AI a lot and likes it. Accounting, legal, writing (a lot here). I agree that companies that focus on all verticals like openai is definitely the way to go. Claude code capabilities are not very significant compared to openai though. There is no big moat and a lot of it is perception, marketing.
picafrost 6 days ago [-]
Do users pay for LLMs? I haven't seen much concrete data indicating that they do. I don't think the casual utility gains of LLMs have gotten average people so much value that they're paying $20/mo+ for it. Certainly not for search in the age of [doom] scrolling.
I would guess that Anthropic wants developers talking about how good Claude is in their company Slack channels. That's the smart thing to do.
disiplus 6 days ago [-]
I would say no. While I pay for ChatGPT, Claude, and Perplexity monthly (I don't know why anymore), my wife does not use any at all. She has around 5-10 things she uses on the smartphone, and if she needs something new there is still Google.
I, on the other hand, have reduced my googling by 95%.
pixl97 6 days ago [-]
Have you actually done any kind of study on the utility the 'average user' has received, or is this just guessing?
picafrost 6 days ago [-]
I have only anecdotal data from non-technical friends and family.
I’m referring to average people who may not be average users because they’re barely using LLMs in the first place, if at all.
They have maybe tried ChatGPT a few times to generate some silly stories, and maybe come back to it once or twice a month for a question or two, but that’s it.
We’re all colored by our bubbles, and that’s not a study, but it’s something.
throw234234234 6 days ago [-]
For most people AI is stuck at GPT-4 and other models on par performance-wise. Anecdotally as well, many people I know who have tried it found it mildly useful, but are experiencing what coders and other tech workers experienced two years or so ago: lots of hallucinations, lack of context, knowledge, etc. If you went back to those models you would at best feel like it is just an occasional code helper; an autocomplete, rather.
A lot of the reasoning model improvements of late are in domains where RL, RLHF and other techniques can be both used and verified with data and training; in particular coding and math as "easy targets" either due to their determinism or domain knowledge of the implementers. Hence it has been quite disruptive to those industries (e.g. AI people know and do a lot of software). I've heard a lot of comments in my circles from other people saying they don't want AI to have the data/context/etc in order to protect their company/job/etc (i.e. their economic moat/value). They look at coding and don't want that to be them - if coding is that hard and it can get automated like that imagine my job.
Matl 7 days ago [-]
I'd guess they showed that query because LLMs are a lot better at answering translation/migration type stuff without hallucinating too much.
dontlikeyoueith 6 days ago [-]
That's because the attention mechanism was designed for Seq2Seq models (i.e. translation in its most general form).
Any other use of it is a case of "I have a hammer, so that's a nail".
6 days ago [-]
agentultra 6 days ago [-]
They need to stop or else make their crawlers easy to identify and block. However I have no faith that AI companies will play by the rules.
They already cost people time, money, and their mental health by using adversarial tactics to evade blocking and ignoring robots.txt
Excited to see this. I've really been enjoying Claude. It feels like a different, more creative flavor of experience than GPT. I use Claude a lot for dialogues and exploring ideas, like a conversational partner. Having web access will add an interesting dimension to this.
robwwilliams 6 days ago [-]
Ditto. I use Claude 3.7 to refine drafts of research papers and ask it “What have I missed?”.
Now I can prompt Claude to ping PubMed and make sure that its suggested references are verified. Each citation/claim should be accompanied by a PMID or a DOI.
I hope this works!
ubicomp 6 days ago [-]
That's a great way to use it!
lovehashbrowns 6 days ago [-]
That's how I use it as well! It'll also occasionally hallucinate things, but much less often than other AI tools I've tried. But typically I'll just run things by it that I'm questioning myself about, or if I want to solidify a concept I'll ask it if my understanding is correct.
It's also fun to ask the same question to multiple AI tools and see how the answers differ. Usually Claude is the most accurate and helpful, though.
pcj-github 7 days ago [-]
Does not really say /how/ it's performing a web search... Is it tapping into its "own" corpus of material or calling out to some other web search engine?
ordersofmag 6 days ago [-]
In my quick experiment (asking a question that would naturally lead to content on my own site) it is not doing a real-time request to the site in question. Its answer included links back to my site (and relevant summaries), but there were no requests for those pages while it was generating its answer. So it's clearly drawing from info that has already been scraped at some earlier point. And given that I see ClaudeBot routinely (and politely) crawling the site, I'd guess it's working from its own scraped copies (because why use someone else's if you've got your own...).
gizmodo59 6 days ago [-]
Major AI players don’t want to use someone else’s web index, as the owner may cut it off or jack up the prices etc. Major players want to build their own web index.
hk__2 6 days ago [-]
And this is why we see our logs overloaded with ABot BBot CBot etc, every single "AI" company makes their own bot and they all crawl the same pages over and over.
ineedaj0b 6 days ago [-]
i stopped using Claude about 2 months ago. went to Grok (the code was better, everything was better - politics aside). i wonder if this update will improve it.
the main issue i find with Claude is, he fights you. He refuses so many requests and i need 3 or 4 replies to get what i want vs deepseek/grok. i've kept the monthly subscription to help anthropic, but it's trounced by the free options imo.
Workaccount2 6 days ago [-]
Claude 3.7 with thinking is a big step up. If you haven't used it I'd suggest giving it a try.
I have used Grok a bit and it did what I needed it to, so I can't really compare. But 3.7 thinking is crazy strong for coding.
user_7832 6 days ago [-]
I wonder if you use it exclusively for coding, because for general purpose explanation tasks, 3.7 seems absolutely terrible unfortunately.
Back when it was 3.5 you could actually talk and learn things and it felt humane, but now it sounds like a McKinsey-corpo in a suit who sounds all fancy but is only right half the time.
I’ve switched back (rather regretfully) to chatgpt, and holy hell is its personality much better. For example just try asking it to explain differences between Neo Grotesque and Geometrical Sans Serif fonts/typefaces. One sounds like a friend trying to explain, the other sounds like a soulless bot. (And if you have 3.5 access, try asking it too.)
Workaccount2 6 days ago [-]
I use it just for coding.
For general inference I use 4.5
I think OpenAI (and likely others) are on the right track to acknowledge that different model tunings are best for different uses, and they intend to add a discriminator that can direct prompts to the best-tuned model/change model tuning in real time.
gizmodo59 6 days ago [-]
I got much better results with o3-mini than Claude for coding. Get the high-level design with o3 or o1 and then use GPT-4 or Claude for small tasks.
qingcharles 6 days ago [-]
I find it kinda random. I normally keep 4 tabs open, Claude/GPT/Gemini/Grok and paste the problem into all 4. Depending on the problem one will be better than the others.
ineedaj0b 5 days ago [-]
What have you noticed Gemini doing well on? I have not used it enough
ggm 6 days ago [-]
So in many respects, it searches the same places that were used to construct the model? Isn't that functionally bias-reinforcing?
"Look what I synthesise is correct and true because when I use the same top 10 priming responses which informed my decision I find these INDEPENDENT RESULTS which confirm what I modelled" type reasoning.
None of us have a problem with an LLM which returns 2+2 = 4 and shows you 10 sites which confirm. What worries me is when the LLM returns 2+2 = 5 and shows 10 sites which confirm. The set of negative worth content sites is semi infinite and the set of useful confirmed fact (expensive) sites is small so this feels like an outcome which is highly predictable (please don't beat me up for my arithmetic)
e.g. "Yes Climate science is bunk" <returns 10 top sites from paid shills in the oil sector which have been SEO'd up the top>"
shortrounddev2 6 days ago [-]
We will very quickly enter a Kepler effect of information on the internet. All text on the internet will become AI slop being parsed by AI. Real information and human beings will be drowned out by the garbage. The internet will cease to be useful and we will retreat to corners of the web or to walled gardens. I'm seeing more and more online communities these days enforce invite only because there's just too much AI slop everywhere now.
importantstuff 6 days ago [-]
Do you mean Kessler syndrome by any chance?
shortrounddev2 6 days ago [-]
Yes! And I would edit my original comment if HN wasn't such a POS site!
hombre_fatal 7 days ago [-]
Aside, does anyone know of an app like Perplexity for surfing the news in a foreign language (language practice)?
Perplexity's "Explore" tab translates its news to your local language, and its curated news items are all pretty interesting, but the problem is that there are so few of them. I seem to get maybe a dozen stories in a day. I paid their subscription for a month just to listen to the news on my walk, but didn't renew because of this.
A foreign news site like BBC Mundo (Spanish) on the other hand barely has any stories outside of a few niches. Its tech section only has a few stories per week.
Hmm, maybe I want a sort of RSS reader that AI-translates stories for me. But I don't really want to maintain a feed myself either.
Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
diggan 7 days ago [-]
> Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
ground.news includes sources from all sorts of countries, and also auto-translate headline and the intro, while you can still click to access the source article. Not affiliated, just happy user.
Although I'm not sure how useful it is for language learning, as you cannot (afaik) configure it to only display articles in Spanish or something similar, but if you filter by stories about France, you'll get a lot of French sources (obviously).
dolmen 7 days ago [-]
Use a VPN to appear like being in the target country.
Use a browser profile where you set the language preference to the one you target.
hombre_fatal 7 days ago [-]
For which purpose?
6 days ago [-]
uzyn 6 days ago [-]
Surprised that Claude (the app, not the model) has not only done well for so long, but has fairly consistently clinched the top spot in coding, all without a feature considered basic for most consumer-facing AI apps.
gizmodo59 6 days ago [-]
By how much, though? Is it significantly better than, say, OpenAI or Google? Because if I'm paying $20 I want other things too, not just coding. And if the coding moat over other vendors isn't significant, it doesn't make any difference tbh
uzyn 4 days ago [-]
Fair point. I wouldn't say it's by a lot, because I'm getting quite good results with ChatGPT's models too. Part of it could just be confirmation bias.
23 hours ago [-]
jetrink 7 days ago [-]
> With web search, Claude has access to the latest events and information, boosting its accuracy on tasks that benefit from the most recent data.
I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?
not_good_coder 7 days ago [-]
I have found that for RAG use cases where the source can be document or web data, hallucinations can still occur. This is largely driven by the prompt and alignment to the data available for processing and re-ranking.
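The grounding-and-re-ranking idea can be sketched like this; the re-ranker here is naive keyword overlap standing in for a real re-ranking model, and every name is illustrative rather than any real API:

```python
# Sketch: ground an LLM prompt in retrieved documents and re-rank them.
# The "re-ranker" is just query-term overlap, a stand-in for a real model.

def rerank(query, docs):
    """Order docs by how many query terms they share with the query."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)

def build_grounded_prompt(query, docs, top_k=2):
    """Instruct the model to answer only from the supplied context."""
    context = "\n---\n".join(rerank(query, docs)[:top_k])
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt(
    "When are tangelos in season?",
    ["Tangelos are typically in season from November through March.",
     "Oranges store well in cool, dark places."],
)
```

Even with an instruction like this, hallucinations can still slip through, which is the alignment problem described above: the prompt constrains, but does not guarantee, grounding.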
stephencoyner 6 days ago [-]
When I try to prompt it with something that obviously needs up-to-date web search (when will Minneola Tangelos be in season this year?), it says:
"I believe they're usually available from November through March, but I'm not completely certain about the exact timing for this year's crop. Would you like me to search for more current information about the 2025 tangelo season?"
It doesn't just search, it wants me to confirm. This has happened a lot for me.
fourside 7 days ago [-]
I’ll be interested in trying it. My admittedly limited experience with this on ChatGPT has been disappointing. ChatGPT falls for the SEO content that has taken over the web.
As an example, I recently travelled abroad to a popular vacationing spot and asked ChatGPT for local recommendations on what to do. When it gave me answers directly, they were pretty solid. But when it “searched the web” instead, the answers were awful. Every single result it suggested had terrible ratings. It did this repeatedly. One of those times I asked it to pick something with better ratings and it sort of improved but not by much.
Of course this is just another tool, and maybe Claude uses better sources or a better algorithm. But in this case there was a concrete number tied to each result that, while not perfect, aims to rate its quality, and it still did not filter out low-quality answers. I'm not sure I trust these LLMs to do any better when no such ratings are available. The available input data is just not very good, and now LLMs are being used to feed that low-quality SEO machine.
gcanyon 6 days ago [-]
Funny, I literally just two days ago asked Claude to provide an outline of the functionality of a product, giving it the web site. It of course refused. So I downloaded the text of the site and passed that in, and got mediocre results.
The results based on giving the source URL directly were better. Still a bit generic and high-level and vague, as LLMs tend to be, but better than the text-download version a couple days ago. And of course much easier to generate!
MattSayar 6 days ago [-]
I had tried using monolith [0] to feed webpages into Claude, but all the HTML took up too much token context. I ended up Print > Save as PDF-ing somewhat often and that worked pretty well. But just giving a URL is ideal.
> though various sites will block it from time to time.
The page itself describes an --ignore-robots-txt option and customizing the user agent. Guess we can just all copy OpenAI and continue to make SourceHut's life miserable /s
This is a cool tool, thanks for sharing
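One lightweight alternative to Print-to-PDF for shrinking a page before pasting it into a chat is stripping it to visible text. A stdlib-only sketch (illustrative, not what any of these tools actually do internally):

```python
# Sketch: reduce a saved HTML page to its visible text so it fits in a
# model's context window. Standard library only.
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.parts.append(data.strip())

def strip_html(html):
    parser = TextOnly()
    parser.feed(html)
    return "\n".join(parser.parts)
```

This throws away markup, scripts, and styles, which is usually the bulk of the token cost.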
knowaveragejoe 5 days ago [-]
For what it's worth, it's fetch under the hood. More akin to curl than automated scraping.
ralusek 6 days ago [-]
Best models for search, in order:
OpenAI Deep Research
Grok Deep Search
Gemini Deep Research
Grok + Search
Gemini + Search
ChatGPT + Search
These are just my opinions, but I do use this feature all the time. Haven't used Claude enough to get a sense of where it would fit in.
danielbln 6 days ago [-]
Where does Perplexity Deep Research fit into this list?
ignoramous 7 days ago [-]
These are interesting times.
It wasn't long ago that a uni senior who worked for a decade-plus on Google Search told me it was hopeless for anyone to try to compete with Google, not because it sees a tonne of signals that help with IR, but because of its in-house AI/ML.
It turns out that the org that built the ultimate AI/ML that runs rings around anything that came before it for NLP (and thus IR) was a sister team at Google Translate.
It isn't inconceivable that a kid might be able to build a Google-quality web search, scalability aside, on Common Crawl data in a weekend. As someone who built re-ranking algorithms for a search engine built atop Yahoo! and Wikipedia (REST/SOAP) APIs back in the late 2000s as a side project (and experienced the launch and subsequent iterations of Echo/Alexa up close at Amazon), the current capabilities (of even the open-weight multi-modal models) seem too good to be true.
Google itself though is saved by its enormous distribution advantages afforded by Chrome (3B to 5B users) and Android (3B+), aside from its search deals with Apple and other browser vendors.
DeathArrow 6 days ago [-]
Do they have their own search engine, or do they use an external one? If they use Google I would worry about relevance.
beezle 7 days ago [-]
At this point it is probably easier to poison web pages for AI crawlers in a way that does not taint the human experience.
andreygrehov 6 days ago [-]
Two things.
1. I generally prefer that an LLM not search the web. The top N results are often either SEO spam, excessively long articles created solely to rank well, or long-established websites that gained authority years ago, when Google's crawler and ranking algorithms were less sophisticated.
2. Web search by LLMs is likely here to stay, so I'm curious whether there's an agent-friendly web format. For example, when an RSS reader visits a website, the site responds with an RSS feed. I think we need something similar for agents - an open standard that all websites would support. This could reduce processing overhead and potentially improve the accuracy of the information retrieved. Thoughts?
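As one purely hypothetical shape for such a standard, ordinary HTTP content negotiation could let a site serve a compact representation to agents while browsers keep getting the full page. A sketch (no such standard actually exists today; the page contents are made up):

```python
# Sketch of agent-friendly content negotiation: return a lightweight
# plain-text/markdown body when the client's Accept header asks for it.
# Purely illustrative; not an existing standard.

HTML_PAGE = "<html><body><h1>Docs</h1><p>Install with pip.</p></body></html>"
TEXT_PAGE = "# Docs\nInstall with pip."

def respond(accept_header):
    """Return (content_type, body) based on what the client prefers."""
    if "text/markdown" in accept_header or "text/plain" in accept_header:
        return "text/markdown", TEXT_PAGE   # compact form for agents
    return "text/html", HTML_PAGE           # full page for browsers
```

An agent sending `Accept: text/markdown` would get the compact form with no scraping or post-processing needed, which is the overhead reduction described above.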
mstaoru 6 days ago [-]
You already answered #2 with #1. The standard will be gamed.
andreygrehov 5 days ago [-]
But the purpose of the standard isn't to rank websites, but to process the information.
pixelkink 5 days ago [-]
Not to sound snarky but Anthropic introduced function calling over a year ago... The capability was always there for someone that wanted to spend a weekend coding a tool for it.
I agree. I have been integrating Brave search and DuckDuckGo search with LLMs for about a year. That said it is so much more convenient having an option of having it built in.
I stopped paying for Perplexity a year ago, but a month ago I started using Perplexity's combined search+LLM APIs - reasonably priced and convenient.
NewJazz 7 days ago [-]
Bizarre that they choose to publish this right as a thread criticizing AI crawlers gets bumped off the front page.
There’s so much news about anything “AI” at any given moment that there will never be a perfect time.
NewJazz 6 days ago [-]
But this feature directly contributes to the problem in the article I linked.
mvieira38 7 days ago [-]
What's up with the geoblocking of Claude features? Not the first time it happens
kasey_junk 7 days ago [-]
Different geographies have different legal requirements.
artursapek 7 days ago [-]
ask the same geniuses who gave you the browser cookie popup re-implemented in a new way on every fucking website ever
deadbabe 7 days ago [-]
Is no one concerned about LLMs just feeding people SEO ads as content?
light_triad 7 days ago [-]
Good news. I integrated Claude with a scraper to get info from pages, and it avoided hallucinations 99% of the time. Hope this works out of the box now.
n_ary 6 days ago [-]
Off-topic:
Mistral has had web search enabled on its free tier for a while.
Caveat: Mistral's reasoning model on the free tier is super slow (2-5 tokens/sec).
BaculumMeumEst 6 days ago [-]
Can you search the web for probable corrections to broken links? Has anyone had luck doing this with a web enabled model?
firloop 7 days ago [-]
Any information on what search engine is powering it?
rmwa 3 days ago [-]
Finally - but it will make Claude even better for coding
artembugara 6 days ago [-]
Search the web is apparently using SERP.
It just breaks my head. We’ve built LLMs that can process millions of pages at a time, but what we give them is a search engine optimized for humans.
It’s like giving a humanoid robot a keyboard and mouse to chat with another humanoid robot.
Disclaimer: I might be biased as we’re kind of building the fact search engine for LLMs.
sadeshmukh 6 days ago [-]
No LLM can process millions of web pages. Maybe you're thinking of something else?
braebo 6 days ago [-]
This is a problem I think about often. I’d be curious to know what kind of things you’ve learned / accomplished in that problem space so far.
ordersofmag 6 days ago [-]
What makes you think Claude is using a search engine optimized for humans?
itpcc 7 days ago [-]
Now I understand why Gitlab was (is?) attacked[0] by those hideous bots.
Feels like a catch-up feature to chatgpt... honestly the biggest holdback for me on anthropic is the output token limit on sonnet... 8000 tokens max output is really limited (and 200k tokens in) compared to other offerings - especially considering that I suspect most sonnet users are not chat-users but api users.
This is actually very helpful to me! Thanks for posting!
joeeverjk 6 days ago [-]
Funny how we’ve come full circle—LLMs now search the web to answer queries, which is what search engines did originally. The difference? Now the hallucinations come with citations. Curious how long until "web search" just means summarizing Reddit threads again.
monkeydust 6 days ago [-]
Which frontier model provider lets me specify which websites to search, and will then only search those?
gizmodo59 6 days ago [-]
You can just prompt ChatGPT 4o saying only get information from web
monkeydust 6 days ago [-]
Yes, but you can't limit it to a set of websites. I have tried; sometimes it works, but rarely, and even when it does search your set it will still go off and pick others. In the field I work in there are around a dozen specialist sites and I just want it to query those. Perhaps I need to build around it.
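One workaround is to scope the query string itself with the `site:` operator that most major engines support; whether a given LLM's built-in search tool passes it through to the backend is provider-dependent. A sketch of the query rewrite (the site names are placeholders):

```python
# Workaround sketch: constrain a web search to a whitelist of sites by
# rewriting the query with the widely supported `site:` operator.

def restrict_query(query, sites):
    """Prefix a query with an OR-chain of site: restrictions."""
    scope = " OR ".join(f"site:{s}" for s in sites)
    return f"({scope}) {query}"

q = restrict_query("spectral calibration", ["example-journal.org", "example-wiki.net"])
# q == "(site:example-journal.org OR site:example-wiki.net) spectral calibration"
```

If the provider's search tool ignores operators, the remaining option is the "develop around it" route: call a search API with this restriction yourself and feed the results into the model.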
AzzyHN 6 days ago [-]
They never watched Avenger's 2
dunefox 6 days ago [-]
A bit OT: does anyone have experiences with Mistral AI as a comparison to OpenAI or Anthropic? I would like to stay with a European company, if they're somewhat equivalent.
simonw 6 days ago [-]
I really like the Mistral openly licensed models - Mistral Small 3 is my current favourite local model to run, but only because I've not spent enough time with the brand new Mistral Small 3.1 to recommend it yet (I expect it will be promoted to my favourite local model soon.)
Their user-facing product at https://mistral.ai/ seems good to me - it uses Brave for search (same as Claude does) and has a "canvas" feature similar to Claude Artifacts. I've not spent enough time with that to evaluate if it could be a good daily-driver or not though.
My hunch is that Claude 3.7 Sonnet is still _massively_ better for code, based on general buzz online and a few benchmarks I've seen.
ilaksh 7 days ago [-]
I've been using Tavily's search API for my MindRoot agents. Seems to work fairly well and much easier to set up than Google's search API.
Anyone know if there is something better? I was thinking of trying Perplexity maybe.
blensor 7 days ago [-]
Funny thing is that I have obsidian-mcp-tools installed, and today Claude Desktop just started fetching stuff from the web through it, because it exposes a fetch tool to Claude.
So this limitation is a bit arbitrary anyway.
matt3210 6 days ago [-]
Does the LLM look at or click our ads? If not, it’s a self-destructive technology that will get itself blocked as it consumes resources in an unsustainable way
d--b 6 days ago [-]
Next: Claude can now ask random questions to strangers on Reddit.
mocmoc 6 days ago [-]
Please make your context window bigger. If you do that, that's it. Sonnet 3.7 is amazing, but it can't even finish a dashboard because of that.
rgbrgb 7 days ago [-]
Is there a way to access the new web browsing capability via API?
simonw 6 days ago [-]
Not yet, and no hints as to whether that will happen or not.
notepad0x90 6 days ago [-]
if everyone is using LLMs to solve problems, in a few years, won't LLMs run out of content to mine? In short, how can the general dumbing down of LLMs and degradation of content used to solve problems be avoided over the long term?
For questions about events and problems that arose after 2025, where would LLMs get information to solve those? and who would be asking those at a forum LLMs can access going forward?
Is the snake eating its own tail?
kadushka 6 days ago [-]
1. People will continue to answer questions and post about events and problems after 2025. Eventually LLMs themselves (inside robots) will be observing the world and reporting on anything interesting.
2. Best LLMs today answer questions better than 90% of people who comment on forums. So if these LLMs have been able to train on all the crap posted on internet so far, they should only get better as they are being trained on high quality output from the latest (and future) LLMs.
visarga 6 days ago [-]
Most interesting data will be collected in chat rooms and apps. There are over a billion LLM users; they act as humans-in-the-loop enhancing the LLM, sometimes even testing its ideas in reality. Wondering what providers are doing with the chat logs.
bfeynman 7 days ago [-]
why does perplexity exist anymore? They were one of anthropic's biggest customers and had been finetuning claude models for search for a while.
sylware 6 days ago [-]
What I really would like to know: do they use a web crawler with an AI strapped to the mouse and keyboard of a javascript-ed web engine?
McNutty 7 days ago [-]
I haven't used Claude yet, but heard many good things. So I'm surprised to see that they're so far behind on this feature.
l33tbro 6 days ago [-]
I still don't get why Claude needs my phone number to sign up. Feels gross and is a such a shame, as their LLM seems great.
morisil 7 days ago [-]
I added this functionality already some time ago in my Claudine agent:
This is a huge step forward for AI. Can't wait to see how Claude integrates with other apps.
whatever1 6 days ago [-]
Is it fair use if Claude makes Google searches and then presents the results to its users?
wewewedxfgdf 6 days ago [-]
I'm waiting for Claude's API to support projects with file uploads like its web UI.
mbs159 6 days ago [-]
Will this work for people using Claude through other services like OpenRouter?
Heidaradar 6 days ago [-]
Kinda surprised it took them this long to add this feature, but glad it's here now
iamflimflam1 6 days ago [-]
I generally have to resort to telling ChatGPT “do not search the web.”
BrouteMinou 6 days ago [-]
I, too, can search the web.
We finally went full circle? LLM is used as a search engine?
xingwu 6 days ago [-]
Hope there will be a tech blog regarding how you index and retrieve the pages.
ksajadi 6 days ago [-]
I can hear the sigh of relief from "SEO gurus" from here...
tttym 7 days ago [-]
It's like a line of platforms waiting for their own agents for web search
goatmeal 7 days ago [-]
kagi already lets me use claude to search the web. how is this different?
whalesalad 7 days ago [-]
kagi is searching the web for you, and then injecting the results into the context of the prompt.
callamdelaney 7 days ago [-]
Are there any downsides to that approach? It seems like we're moving towards empowering LLMs to interact with stuff as if that's better than us doing it for them - is it really?
Eg say I want to build an agent to make decisions, shall I write some code to insert the data that informs the decision into the prompt, return structured data, and then write code to implement the decision?
Or should I empower the llm do those things with function calls?
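A minimal sketch of the function-calling option, using a generic JSON-Schema-style tool definition (not any vendor's exact format; `approve_refund` and its fields are made up for illustration):

```python
# Sketch of the function-calling approach: the model proposes a tool
# call, but your own code still validates and executes the decision.
# Generic JSON-Schema-style definition, not a specific vendor's API.

DECIDE_TOOL = {
    "name": "approve_refund",
    "description": "Approve or deny a refund request.",
    "input_schema": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "approve": {"type": "boolean"},
        },
        "required": ["order_id", "approve"],
    },
}

def dispatch(tool_call):
    """Execute a tool call the model proposed; your code stays in control."""
    if tool_call["name"] == "approve_refund":
        args = tool_call["input"]
        return {
            "order_id": args["order_id"],
            "status": "approved" if args["approve"] else "denied",
        }
    raise ValueError(f"unknown tool {tool_call['name']}")
```

Either way, the decision is ultimately implemented by your code; the function-calling route mainly moves the "when to act and with what arguments" choice into the model while keeping execution deterministic.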
visarga 7 days ago [-]
If you want deeper search, it needs to be able to iterate, plan, reason while searching.
hansmayer 6 days ago [-]
So, referring specifically to the example they show on the front page, what value does this actually bring? The best example they could come up with is a TypeScript migration? Really? Weren't LLMs supposed to be a superior alternative to searching the web? Why do we need to produce more CO2 to do the same thing we could have done at a fraction of the cost, back when Google search still worked?
simonw 6 days ago [-]
The CO2 concerns of using LLMs are massively overblown these days (with the exception of o1-pro and GPT-4.5 at least).
The energy efficiency of most models has improved by an order of magnitude since the most widely cited CO2 usage papers were published.
(It remains frustratingly difficult to get accurate numbers though: at this point I think more transparency would help rather than hurt the big AI labs)
I meant that as a side note. But even if we put the CO2 issue completely aside, I still fail to see what this "feature" brings, and judging by the lame example they picked, Anthropic isn't quite sure either.
ingen0s 6 days ago [-]
Can’t believe they are making us use a VPN for this
luxuryballs 6 days ago [-]
Time to ask it to find all the dirt on me?
dostick 6 days ago [-]
Anyone noticed that if you enable the “browse internet” in ChatGPT, it becomes very dumb? It abandons all its intelligence and produces mostly incorrect results.
Like it’s being passive-aggressive, “Oh, you don’t like me as I am and want to augment me with search, let me show you how it is if my brain was only search!”
danirogerc 7 days ago [-]
Couldn't a lot of front-ends using Claude API do this already? What's new?
dcre 7 days ago [-]
If that's true, they are using a separate search API to get search results and feed it into a regular Claude API call. The difference here is that Anthropic is integrating it directly, like OpenAI and Google have. It doesn't look like it's in the API yet, but presumably that's coming. Then, as with gpt-4o and the Gemini models, you can make a single API call and it will do the searching for you and incorporate the results.
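That two-step pattern can be sketched with stand-in functions. `search_web` and `ask_llm` here are placeholders for real providers (a search API plus an LLM API); the return shapes are assumptions for illustration, not any vendor's actual schema:

```python
# Sketch of the "do it yourself" flow: call a search API, inject the
# results into the prompt, then make a regular LLM API call. Both
# functions below are stubs; swap in real clients to use this pattern.

def search_web(query):
    """Stub for a search API (Brave, Tavily, etc.)."""
    return [{"title": "Example result",
             "snippet": "placeholder snippet",
             "url": "https://example.com"}]

def ask_llm(prompt):
    """Stub for an LLM API call."""
    return f"(model answer grounded in {prompt.count('url:')} sources)"

def answer_with_search(question):
    results = search_web(question)
    context = "\n".join(f"url: {r['url']}\n{r['snippet']}" for r in results)
    return ask_llm(f"Sources:\n{context}\n\nQuestion: {question}")
```

The integrated feature collapses this into a single API call on the provider's side, which is the difference being described: less glue code for you, but also less control over which search backend is used.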
simonw 6 days ago [-]
This is a new product feature for https://claude.ai and the Claude mobile and desktop apps.
What is there to even search anymore? Almost everything is gated, and whatever remains public is connected to a faucet that pumps out AI slop at an ever-increasing rate.
The internet consumed itself. Telling someone to, "Just Google it," is now terrible general advice.
livingmylife01 6 days ago [-]
This is insane. Do you guys think it will someday replace Google?
greatNespresso 6 days ago [-]
Woaw, so excited about this! Has anyone tried it out already?
bosky101 6 days ago [-]
We need a way to collapse comments on HN
4ggr0 6 days ago [-]
> created: February 20, 2007
i hope you're trolling mate - you've had an account for 18 years and never wondered what the [-] button does? :D
krapp 6 days ago [-]
see the "[-]" link in the comment header? That's what it does.
bosky101 3 days ago [-]
OMG!
7 days ago [-]
nokun7 6 days ago [-]
Honestly, while this is a great update and all, other AI platforms have had web search functionality for quite some time now. Any explanation for this delay?
I wonder if Claude’s API will match Perplexity’s dynamic answers. Is there API rate limiting? If so, the older API pricing would be preferable. Can users switch between the two?
6 days ago [-]
lihua919 6 days ago [-]
great
Taters91 6 days ago [-]
so can I
nimish 6 days ago [-]
So what's Perplexity's raison d'être at this point?
dalmo3 6 days ago [-]
I open perplexity, I see a search box.
I open claude, I see a big "Continue with Google" button.
ConanRus 7 days ago [-]
finally
ProofHouse 7 days ago [-]
Awesome, but I also do want to say it’s pretty sad it took this long straight up. Literally no excuse. But I’m glad they finally got to a feature that was launched more than a year ago on competitors.
ForTheKidz 6 days ago [-]
Great, now we just need a decent search engine.
zxvkhkxvdvbdxz 6 days ago [-]
Wake me up when we go back to discussing progress, instead of being wooed by fake toddlers doing tricks badly.
scudsworth 6 days ago [-]
"claude can now ddos random websites . . . more so"
hackburg 3 days ago [-]
[dead]
startifyai 6 days ago [-]
[dead]
blazenby 6 days ago [-]
[dead]
stealthlogic 6 days ago [-]
[dead]
dangapeass 6 days ago [-]
[dead]
enveks 6 days ago [-]
[dead]
mediumsmart 7 days ago [-]
That's great - the web it has been trained on, or the one from Google?
jason_zig 6 days ago [-]
Curious - does anyone want this stuff?
gavmor 6 days ago [-]
Yeah, I love this stuff! Compiling data from multiple pages into a single paragraph in the time it takes to read one page? Great stuff. I can't imagine living without Perplexity.
Oh, sure, it hallucinates a lot, and in dangerous ways, but even if I have to manually corroborate all the citations, I'm still saving time, especially insofar as it reveals whether or not I'm barking, broadly, up the wrong tree.
It's especially good for comparisons, because the results of two disparate search terms can be collated into the results.
Could this be done without LLMs, using only vector embeddings? Hm, maybe. Algolia is maybe the 80-for-20 option, but does Algolia have a web index?
Brusco_RF 7 days ago [-]
Excited to see how this compares to Perplexity or Gemini. I remember that ChatGPT used to be able to search the web, but last I checked, it couldn't. I wonder why they removed that feature.
I definitely tried to web search with ChatGPT a few weeks ago and it couldn't. I don't think I'm making this up. Unless I suffered a TBI.
callamdelaney 7 days ago [-]
It told me it can't search the web and then proceeded to search the web
simonw 6 days ago [-]
Web search is available for some but not all of their models. It is not particularly obvious from their UI which models have this feature.
cactusplant7374 7 days ago [-]
About half my requests end up going to web search. But if you ask it for something specific like "find an X-ray image with an abnormality," then it refuses.
85392_school 7 days ago [-]
Search was not removed from ChatGPT, although it can be glitchy at times.
BryantD 7 days ago [-]
Also not all models support it. I think only gpt-4o and gpt-4o-mini support it, although I haven't triple checked that.
beng-nl 7 days ago [-]
As the sibling comment said, search was not removed. But not all models can use web search; maybe that is what caused your perception?
What's the benefit of bringing native integration?
TrueDuality 7 days ago [-]
The native app that allows for MCP is only officially available on Macs, and the web interface is generally more convenient for non-technical users. Searching and interacting with the web has become a table-stakes feature and was a glaring gap in Claude.
punkpeye 7 days ago [-]
Let me rephrase it.
MCP has the capability to add this functionality.
It would be nice to see MCP gain adoption in their web UI, along with easier UX, rather than more ad hoc features being added natively.
masterj 7 days ago [-]
This is likely implemented behind the scenes as an MCP server exposed to their model in the web UI. They will likely enable MCP servers over HTTP+SSE (vs. the stdin/stdout used with Claude Desktop) on the web version in the near future.
hoppp 6 days ago [-]
I just read about LLM bots DDoSing websites, and I guess more of that is coming soon. Big money is betting on AI eating the web, and the small fish pay for the bandwidth.
LLMs are truly reaching human-like behavior then
How would this help lower prices? The taxes have to be paid by someone, and that cost would largely end up landing on the consumer.
It seems like we'd be changing whose hands the money moves through, but it still has to be paid for one way or another. If that's the case, we'd risk higher prices, since taxes have to subsidize prices and cover all the costs of running the program in the first place.
In the end, you use money from the rich to pay for socially beneficial jobs. Exactly the sort of thing government is for: ensuring that social goods are provided.
Taxing the rich can have unintended consequences. First you have to change the tax code so they actually get taxed and can't dodge it, those rules alone would be difficult to write effectively and would likely mean changing other parts of our tax code that impact everyone. If the rich do get taxed enough to cover a good chunk of wages, demand for luxury items would go down so too then would the jobs that make those products and services.
Once subsidized by a UBI, at best workers will continue to work at the same levels they do now. There will be an incentive for them to work less though, potentially driving up the labor costs you are trying to reduce. How do we accurately predict how many workers will reduce their hours or leave the workforce entirely? And how do we predict what that would do to prices?
The idea of taxing the rich to bail out everyone else is too often boiled down to a simple lever that, when pulled, magically fixes everything without any risk of unintended side effects.
There's an obvious wealth gap that's increasing and the people up top are getting even less oversight as we speak. As you say in your post, you don't know what the effects will be because it's not simple. But I see no compelling reason to continue with the oligarchy
My point was that we can change taxes to a system that we think will work better today, but we can't claim to know what the actual results will be years from now.
The claim made earlier in the chain was that taxing the rich to subsidize wages would lower labor costs and lower prices. I don't think we can ever know well enough how a broad-reaching change will land, and claiming to know prices will go down isn't reasonable.
I had to watch this Office Space clip again just to be sure: https://youtu.be/Fy3rjQGc6lA - ah yes, the meaning of life. Ha, I love the classics: https://www.youtube.com/watch?v=ZBdU9v5nLKQ
A much more terrible issue we suffer from already is that without participating we forget how our civilization works. Having a job gives you at least a tiny bit of insight that may partially map to other jobs.
Very similar to how ultra hard core libertarians assume they’ll be the ones at the top of the food chain calling the shots and not be just another peasant.
But it doesn’t really matter because there is no way in hell any of these LLM’s will uproot all of society. I use LLMs all the time, they are amazing, but they aren’t gonna replace many jobs at all. They just aren’t capable of that.
The available work offers the entire spectrum but we have to divide and plan it.
I watch these historical farm documentary tv shows, and they show how everyone in a town had a purpose and worked together, the blacksmith, the tile maker.
And I do often think the limiting factor to a life like this is the “market” so if you could create these communities, and could be an artist/artisan/builder, without strictly having to worry about making enough to live.
I met someone recently who lived in the Galapagos islands, and she seemed to live this community-oriented, trading, anarcho-capitalist lifestyle. I think most people would be happier if their small capitalist or socialist community involved direct interaction with people, rather than dealing with soulless corpos all the time.
I can imagine loads of tasks or jobs that would be quite pleasant if it weren't for stressing over efficiency or business admin.
I mean think about it…when was the last time you heard of charity gutter cleaning services? People would much rather enjoy their leisure time on hobbies or with family/friends.
In terms of charity cleaning services, there are people who clean hoarder's houses or landscape unruly yards for free on YouTube... ;)
For free on YouTube in exchange for ad revenue
If the government gives out free money people will pocket it. Should not be controversial.
As for why: for purpose, for praise, for community, for mental health, for trade/contribution, for skill building, etc. Loads of examples of this already. Maybe none of these things are attractive to you but I don't think that's universal.
Like I said, it's just trying to add to the default UBI, not getting everyone volunteering in their community or else.
I imagine just like with existing benefits, the majority of people wouldn't feel great about being on UBI doing nothing, and they would pursue something that gives them a better social standing, a better sense of purpose, a good challenge, whatever motivates an individual. It's why lots of people do volunteer work, work on important open source software, and so on. Sure, there's outliers that actually proudly slack off, but you don't address specific problems with generic solutions.
But more importantly, having the _option_ to fall back on benefits means people need to take fewer risks to pursue their talents and likely be of more value to society than if they did whatever puts food on the table today. Case in point: People born into a family that can finance them through college are more likely to become engineers than people born into poor households. On the flip side, some people do white collar jobs vs something like being a medic to uphold their standard of living from the higher salary, not out of preference.
I think it would need careful management, but I believe there's every reason to be optimistic.
People work for money. If a job has no pay, you can't expect it to get done.
We need people to actually run hospitals, produce food, construct shelter/infrastructure, provide childcare/education, etc.
It’s a classic economic blunder that dictatorships love to make:
1. Create money & rack up debt.
2. Produce nothing.
3. Create inflationary crisis and exacerbate wealth inequality.
4. Highlight your good intentions and relish your new position as champion of the people.
Also, it’s fascinating that you say “no benefit to the taxpayer” as if the taxpayer not having to work is somehow not a benefit?
The vast majority of people's passions are partying, sex, alcohol/drugs, watching sports, gossiping, generally wasting time. Things that mostly
This whole line of thought to me is embarrassingly clueless, naive and basically childish.
It is just mind blowing to me how smart people can't see what a bubble they live in.
I almost suspect, the higher a person's IQ, the more susceptible they are to living in a bubble that basically has nothing to do with the majority of people with an IQ of 100.
A conversation that starts like this is not going to go well.
And why do you need money at all in that scenario, at least for the basic items the UBI intends to make affordable to all? Why not just make them free and available to everyone?
No UBI proposal I'm aware of proposes UBI replaces salaries or is high enough to satisfy everyone. The "B" is for basic. Most people are not satisfied with earning a basic salary.
I know a few people with small businesses in various manufacturing industries. They all had a really hard time finding enough people to work while stimulus checks were going out.
People wouldn't make quite as much, but they were happy to stay home and have the basics for "free" rather than have a job.
Historically, jobs or professions always existed around the intrinsic motivation of the person working and around the needs of the society around that person.
So you could become a poet, but if you do not write poems that people like, you would starve. Or you could become a farmer and provide the best apples in your city, and you will earn a more than deserved income.
That's why free economies have developed historically so much better than any centrally planned economy.
You can do more harm than good by implementing policies like “guaranteed free money”.
If it was voted down, I'm guessing it was because to the extent that it's a fact, it's trivially true, and there's nothing insightful about the defeatist take. It's possible to do more harm than good doing pretty much anything. And the world is littered with problems that are not "fully solvable" but that we've mitigated greatly.
Let's say your car tires pop.
Person A: "I will paint your car tires red. That will fix them."
Person B: "Painting my flat car tires red won't fix them."
Person C: "Well, you're just being defeatist. We have to do something."
Person B: "..."
https://www.mdpi.com/2071-1050/12/22/9459?ref=scottsantens.c...
Spawning money creates nothing.
When everyone in the economy has a minimum of say $3,000 per month the cost of necessities, and everything else, will go up roughly in line with that.
But fine, I'll bite.
> will go up roughly in line with that
Could you at least explain the logic that you believe implies this would occur with such certainty? I've thought about this before and I couldn't see this as a necessary outcome, though (depending on various factors) I do see it as a possible one.
Because we haven’t actually created anything. Supply is the same, demand is WAY up.
As long as we’re in a deficit, spending for this program would directly increase the money supply. Of course there are other factors like velocity of money and elasticity of good/services but at the end of the day we’re increasing the amount of money (aka cash + credit) with no change to supply AND we’re going into debt to do it.
Any increase in supply over time will eat up some of that price fluctuation, but for most products prices are more flexible than supply and a majority share of any capital increase will go towards prices rather than supply.
You actually made my point, I think: that the price increase need not necessarily be "roughly in line with that", but could be less.
This distinction is absolutely critical. Like I said in [1], if you put $3k in my pocket, and my expenses increase by $2k, that's a very different situation from if my expenses grow by $3k. It would mean there is a reachable equilibrium.
[1] https://news.ycombinator.com/item?id=43430867
I forget the general rule when it comes to companies, but there's a general percentage describing how much of a cost increase on a company is passed on to consumers. If a company's tax rate goes up by 10%, something like 8% of that is passed on to the consumer through price increases. I'd expect something similar with a UBI.
If so, then explain how you're making the jump from "prices increase some" to "you would need Marx style price controls" or "otherwise UBI will fail to cover the necessities"? If you give me $X and I spend $X * r of it due to price increases, and r < 1, then don't I have (1 - r) * $X left in my pocket, meaning it could be made large enough to cover the basic necessities? This isn't complicated math.
I don't get why "prices increase" is seen as such a mic-drop phrase that shows the system would fall apart. Prices already increase for all sorts of reasons, it's not like the economy falls apart every time or we somehow add Marx style price controls every time. Sure, prices increase some here too. And then what? The sky falls?
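The arithmetic in that argument can be made concrete with a toy calculation. The pass-through rate `r` here is purely illustrative, not an empirical estimate:

```python
# Toy model of UBI purchasing power under partial price pass-through.
# r is a hypothetical pass-through rate, not an empirical figure.

def net_gain(ubi: float, r: float) -> float:
    """Remaining purchasing power if a fraction r of the UBI
    is absorbed by price increases."""
    return ubi * (1 - r)

# If $3,000/month is paid out and two thirds of it is eaten by
# higher prices, the recipient is still about $1,000/month better off.
print(net_gain(3000, 2 / 3))
```

The point of the sketch is just that as long as r < 1, some residual purchasing power remains, which could in principle be sized to cover the basics.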
With regards to my claim that we'd need strong price controls, a UBI needs the prices of the basics to remain stable. I won't go down the road of trying to define what "the basics" are here, that's a huge rabbit hole, so let's just leave it at the broad category in general.
If everyone can afford the basics, there is more demand for those items. Supply will likely increase eventually and eat up part of the demand increase, but the rest goes to prices. When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices of anything deemed a basic necessity.
No. Just because something keeps increasing doesn't mean it diverges. Asymptotes, limits, and convergence are also a thing. You're making strong divergence claims that don't follow from your assumptions.
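The convergence point can be illustrated with a toy geometric-series model: if each UBI payment pushes prices up by a fraction r < 1 of the increase, and the UBI is then indexed to cover that rise, the total payment converges rather than spiraling. The numbers and the rate r are purely illustrative:

```python
# Toy feedback loop: each UBI increase raises prices by a fraction r
# of that increase, and the UBI is then topped up to cover the rise.
# With r < 1 the top-ups form a geometric series converging to
# base / (1 - r) rather than growing without bound.

def indexed_ubi(base: float, r: float, rounds: int = 100) -> float:
    total, increment = 0.0, base
    for _ in range(rounds):
        total += increment
        increment *= r  # next top-up covers this round's price rise
    return total

# With base = 3000 and r = 0.5, the payment stabilizes near 6000,
# matching the closed form base / (1 - r).
print(indexed_ubi(3000, 0.5))
```

So "the cycle goes on in a loop" and "the cycle stabilizes" are compatible: the loop runs forever but its total is bounded whenever pass-through is less than one-for-one.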
Say you have a fire-department even though you personally might not be paying anything for it because you are so poor that you don't pay any taxes. You have police protecting you and the army. You have free primary school at least.
So I think the question is, would it help for the government to provide more, or less, or the same amount of free services as it does currently?
Would it "increase prices" if healthcare was free? Not necessarily I think. At least not the price of healthcare. Government would be in a much better position to negotiate drug-prices with pharmaceutical companies, than individuals are.
> Would it "increase prices" if healthcare was free?
That depends: who's ultimately footing the bill? If it's paid for with taxes on businesses, yes, most of that would be passed on to consumers in the form of price increases. If it's paid for by consumer taxes, ultimately you will find consumers demanding higher wages, and prices would again go up. If it's paid for with tariffs, well, we'll find out soon, but prices should go up there as well.
They are free for poor people. For instance, basic education must be free, so we can have a productive work-force that can read and write and pay taxes in the future, which will make us even richer.
Finally, we already do price controls and subsidies in many places, like food production. It's just that a big part of the advantage is soaked up by big companies.
But I also disagree with your assertion. Minimum wage increases are a great example. Opponents will constantly claim they will lead to massively increasing prices, but they never do. Moreover, a higher standard of employment rights and payment in first world countries like Norway doesn't seem to correlate well with higher Big Mac prices.
And our food quality in the US is garbage. We can't say if there is causation there since we can't compare against a baseline US food system without subsidies, but there is a correlation in timing between the increase in food subsidies and the decrease in quality.
> Opponents will constantly claim they will lead to massively increasing prices, but they never do.
The only times that really comes up is when an increase is proposed and the whole debate is over politicized. Claims on both sides at those times are going to be exaggerated.
Prices absolutely go up with minimum wage increases. How could they not? It'd be totally reasonable to argue the timeline that matters, prices aren't going to go up immediately. You could also argue the ratio, maybe wage is increased by 30% and prices are only expected to go up by 20%.
People earning a minimum wage almost certainly have pent up demand, they would buy more if they could afford it. Increasing their wages opens that door a bit, they will spend more which means demand, and prices, will go up in response.
And the point is that the income percentage increase is higher for those with lower incomes. Even if prices go up by 20%, somebody making $20k/year who gets an additional $10k from UBI is going to be much better off.
"They had useless make-work jobs and sent 4 emails a week and watched TikToks the rest of the time"
So?
There's FAR too many people and nowhere near enough jobs for a large portion of people to do something that is both "real", and provides actual economic value.
Far more important that people have some form of dignity and can pay to feed their families and live a life with some material standard.
Anyone who's been in a corporate role knows there's loads of people that have a dubious utility and value--and people with "tech skills" are NOT exceptions to this rule, at all.
If meaningless jobs are important because its the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
> If meaningless jobs are important because its the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
^^^ 100% yes! That! ^^^
Like, if you already got a car, you can drive it for 10-20 years easily, or more if you take good care of it. But advertising makes you think you "need" a new car every few years... because that keeps the economy alive. You buy a car and sell the old one to someone else who can't afford a new car but also wants a new one, so their old car goes off to Africa or whatever to be repaired until truly unrepairable. But other than the buyer in Africa who actually needed a new car, neither you nor the guy who bought your old car would have needed a car. And cars are a massive industry that employs many millions of people worldwide - so if you'd ban advertising for cars, suddenly the bubble would pop and you'd probably have a fifth of the size remaining, and most of it from China, because the people in Africa can't afford what a brand new Western made car costs.
Or Temu, Shein, Alibaba and godknowswhat other dropshipping scammers. Utter trash that gets sold there, but advertising pushes people to buy the trash, wear it two times and then toss it.
A giant fucking waste of resources because our worldwide economy is based on the dung theory of infinite growth. It has worked out for the last two, three centuries - but it is starting to show its cracks, with the planet itself being barely able to support human life any more as a result of all that resource consumption, or with the economy and the public sector being blown up by "bullshit jobs".
We need to drastically reform the entire way we want to live as a species, but unfortunately the changes would hurt too many rich and influential people, so the can gets kicked ever further down the road - until eventually, in a few decades, our kids are gonna be the ones inevitably screwed.
Perhaps sleepy sinecures are more prevalent in the public sector (especially post-FAANG layoffs), but they're not unique to it.
In addition, there's plenty of jobs that are demanding, stressful, and technically difficult but are ultimately towards useless or futile ends, and this is known by parties with a sober perspective.
When i worked as a consultant, I was on MANY projects where everything was pants-on-fire important to deliver projects to clients for POCs and/or overpriced/overengineered junk that they were incapable of maintaining long-term (and in many cases, created more problems than it ostensibly solved)
All that work was pure bullshit; I was never once in denial of that fact. Fake deadlines, fake projects, fake urgency, real stress. Bullshit comes in many forms.
"the economy" = private sector / everything not government; "public sector" = government / fully government owned companies.
And both are horribly blown up due to all the bullshit and onerous bureaucracy that's mostly there because apparently you can't trust people that you do entrust a dozens-of-millions-of-euros worth train carriage to correctly deal with the cash register of the onboard restaurant.
Some computers from 20 years ago are still in good shape, but...
(You can continue.)
The volume of things we buy but don't need (or necessarily want) drives a huge sector of the global economy. We're working to fill our lives with unnecessary things that bring us no happiness beyond the adrenaline hit when we hit "Buy Now" and the second one when the Prime box arrives at our door.
Consumerism masks the underlying problem and it's only going to get worse as more is automated. Producers will have an incentive to convince us we still need more.
Cars are - to me - a red herring in this argument except for the people who do literally trade in for a new car every few years. I drive whatever fairly boring Honda for as long as I can (usually 8-10 years) and don't feel a ton of regret about investing in comfort. But I've been as guilty as anyone about just buying stuff because it pops up in an ad or recommended on Amazon, etc.
Overall economic productivity is high enough that a lot of positions could be split into 2 or 3 short shifts, at full pay - IF you don't factor in the various financial boondoggles that we've gotten ourselves wrapped up in. If you made the decision to wipe out a lot of these obligations (mostly to rich people), we could get to that kind of set-up, solvently.
At the top you get the people who are true pros, they write the books, the guides, they solve the hardest problems, and everyone looks up to them. But spin the wheel and get a random SWE to do some work? It's not gonna be far off from an random 1v1 lobby.
Continues to apply
Interesting read, but I feel like the author could've spent just one more minute on this sentence. How good you are at a given activity often doesn't matter, because you're mostly going to encounter people around your own level. What I'm saying is, unless you're at the absolute top or the absolute bottom, you're going to have a similar ratio of wins to losses regardless of whether you're a pro or an amateur, simply because an amateur gets paired with other amateurs, while a pro gets paired with other pros. In other words, not being the worst is often everything you need, and being the best is pretty much unreachable anyway.
This can be very well extended to our discussion about SWEs. As long as you're not the worst nor the best, your skill and dedication have little correlation with your salary, job satisfaction, etc. Therefore, if you know you can't become the best, doing bare minimum not to get fired is a very sensible strategy, because beyond that point, the law of diminishing returns hits hard. This is especially important when you realize that usually in order to improve on anything (like programming), you need to use up resources that you could use for something else. In other words, every 15 minutes spent improving is 15 minutes not spent browsing TikTok, with the latter being obviously a preferable activity.
And it's very easy to forget when you're the guy going to the club just how bad most regular players are.
I'm in a table tennis club, my rating is solidly middle of the pack, and so I see myself as an average player. But the author is correct, I would destroy any casual player. I almost never play casual players, though.
Not sure how applicable this is to software engineering.
Now scale that up 10x, because reality is at least an order of magnitude more complex than a video game.
Personally, I think that a receptionist as a building is useless, but I would be pretty pissed off if my packages kept getting stolen or I had to go get each one when it came at my place of business.
Big entities are such that if you take it all down, you feel the loss of output (maybe value, maybe something else), but if you take out huge chunks, you might not feel much, because they're so extremely ineffective and value creation doesn't correspond with value received for the individuals that created it.
There are a lot of useless employees out there. So, so many.
And a ton of bullshit jobs as well.
Do you include the private sector?
Why do corporations engage in this kind of charity? Do we need more competition?
Not as appropriate in a government setting where the impact goes far beyond personal profit and loss.
So I ended up posing the question to Claude and the response was “figure out how to work with me or pick a field I can’t do” which was pretty much a flex.
To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.
Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.
- Ada's LLM chatbot does a good enough job to meet service expectations.
- AgentVoice lets you build voice/SMS/email agents and run cold sales and follow-ups (there are probably better ones; it was just the first one I found)
- Dot (getdot.ai) gives you an agent in Slack that can query and analyze internal databases, answering many entry level kinds of data questions.
Does that mean these jobs at the entry level go away? Honestly probably not. A few fewer will get hired in any company, but more companies will be able to create hybrid junior roles that look like an office manager or general operations specialist with superpowers, and entry level folks are going to step quickly up a level of abstraction.
Robotics is the big unlock of AI since the world is continuous and messy; not discrete. Training a massively complex equation to handle this is actually a really good approach.
For example you need them to:
- Meet high energy requirements in varied environments: run all day (and maybe all night too, which MAY be an advantage over humans). In many environments this means much better power sources than current battery technology, especially where power is not provisioned (e.g. many different sites) or where power lines are a hazard.
- Have low failure rates. Unlike software, failing fast and iterating are not usually options in the physical domain. Failure sometimes has permanent and far-reaching costs (e.g. resource wastage, environmental contamination, loss of lives, etc.)
- Be lightweight and agile. This goes a little against No. 1 because batteries are heavy. Many environments where blue collar workers go are tight, have weight limits, etc.
- Handle "snowflake" situations. Even in house repair there are different standards over the years, hacks, and age-related differences that mean what is safe to do in one residence isn't safe in another, etc. The physical world is generally like this.
- Unlike software, the iteration of different models of robots is expensive, slow, capital intensive and subject to the laws of physics. The rate of change between models will be slower as a result, allowing people time to adapt to their disruption. Think in terms of efficient manufacturing timelines.
- Anecdotally many trades people I know, after talking to many tech people, hate AI and would never let robots on their site to teach them how to do things. Given many owners are also workers (more small business) the alignment between worker and business owner in this regard is stronger than a typical large organisation. They don't want to destroy their own moat just because "its cool" unlike many tech people.
I can think of many many more reasons. Humans evolved precisely for physical, high dexterity work requiring hand-eye co-ordination much more so than white collar intelligence (i.e. Moravec's Paradox). I'm wondering whether I should move to a trade in all honesty at this stage despite liking my SWE career. Even if robots do take over it will be much slower allowing myself as a human to adapt at pace.
Before a human physical worker can start being productive, they need to be educated for 10-16+ years, while being fed, clothed, sheltered and entertained. Then they require ongoing income to fund their personal food, clothing and shelter, as well as many varieties of entertainment and community to maintain long-term psychological well-being.
A robot strips so much of this down to energy in, energy out. The durability and adaptability of a robot can be optimized to the kinds of work it will do, and unit economics will design a way to make accessible the capital cost of preparing a robot for service.
Emotional opinions on AI aside, we will I think see many additional high-tech support options in the coming decade for physical trades and design trades alike.
I'm not saying the robots aren't coming - just that it will take longer, and being disrupted last gives you the most opportunity to extract higher income for longer and switch from labor to capital for your income. I wouldn't be surprised if robots don't make any inroads into the average person's life in the coming decade, for example. As intellectual fields are disrupted, purchasing power will transfer to the rest of society, including people not yet affected by the robots, making capital accumulation for them even easier at the expense of AI-disrupted fields.
It is a MUCH safer path, assuming capitalism, to provide for yourself and others in a field that is comparatively scarce with high demand. Scarcity and barriers to entry (i.e. moats) are rewarded through higher prices/wages/etc. Efficiency, while beneficial for society as a whole (output per resource increases), tends to punish the efficient, since their product is comparatively less scarce than others. This is because, given the same purchasing power (money supply), it makes intelligence goods cheaper and other, less disrupted goods more expensive, all else being equal. I find tech people often don't have a good grasp of how efficiency and "cool tech" interact with economics and society in general.
In the age of AI, the value of education and intelligence per unit diminishes relative to other economic traits (e.g. dexterity, social skills, physical fitness, etc). It's almost ironic that the intellectuals themselves, from a capitalistic viewpoint, will be the ones that destroy their own social standing and worth relative to others. Nepotism, connections and skilled physical labor will have a higher advantage in the new world compared to STEM/intelligence based fields. I will be telling my kids to really think before taking on a STEM career, for example - AI punishes this career path economically and socially IMO.
AI rewards the skills it does not disrupt. Trades, sales people, deal makers, hustlers, etc will do well in the future at least relatively to knowledge workers and academics. There will be the disruptors that get rich for sure (e.g. AI developers) for a period of time until they too make themselves redundant, but on average their wealth gain is more than dwarfed by the whole industry's decline.
Another case of tech workers equating worth to effort and output; when really in our capitalistic system worth is correlated to scarcity. How hard you work/produce has little to do with who gets the wealth.
Governments will want to ban them, but there's just too much $$$ to be made from replacing employees, so things will get complicated fast.
But I don't see what governments can really do about it. I mean, sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there, it's just a large file, easy to torrent etc. The code that's needed to run it is also out there and open source. Cracking down on top-end hardware (and note that at this point it means not just GPUs but high-end PCs and Macs as well!) is easier to enforce but will piss off a lot more people.
But there are lots of 'easy' development roles that could be mostly or entirely replaced by it nonetheless. Lots of small companies that just need a boring CRUD website/web app that an AI system could probably throw together in a few days, small agency roles where 'moderately customised WordPress/Drupal/whatever' is the norm and companies that have one or two tech folks in-house to handle some basic systems.
All of these feel like they could be mostly replaced by something like Claude, with maybe a single moderately skilled dev there to fix anything that goes wrong. That's the sort of work that's at risk from AI, and it's a larger part of the industry than you'd imagine.
Heck, we've already seen a few companies replacing copywriters and designers with these systems because the low quality slop the systems pump out is 'good enough' for their needs.
From experience dealing with a few of these companies, there's almost no chance that "vibe coding" whatever thing is going to be anything other than a massive improvement over what they'd otherwise deliver.
Thing is, the companies hiring these firms aren't competent to begin with, otherwise they'd never hire them in the first place. Maybe this actually disrupts those kinds of models (I won't hold my breath).
But honestly, LLMs are here to stay. I don't like them for zero verification + high trust requirements. IE when the answer HAS to be correct.
But generating viewpoints and ideas, and even code are great uses - for further discussion and work. A good rubber duck. Or like a fellow work colleague that has some funny ideas but is generally helpful.
LLMs also don't have the ego, arrogance and biases of humans.
I’ve spent a career dealing with the complete opposite: people with egos who just cannot bear to admit when they don’t know something, and who will instead dribble absolute shit just as confidently as an LLM does, until you challenge them enough that they decide to pretend the conversation never happened.
It’s why I, someone fairly mediocre have been able to excel because despite not being the smartest person in the room, I can at least sniff bullshit.
> average humans understand the limit of their knowledge.
We’ll have to agree to disagree here. I’d call it a minority, not the average.
Which is why we live in a world where huge numbers of people think they know significantly more than they do and why you will find them arguing that they know more than experts in their fields. IT workers are particularly susceptible to this.
And if they don't suck at their job, they get promoted until they do: https://en.wikipedia.org/wiki/Peter_principle .
LLMs themselves don’t choose the top X.
That’s all regular flows written by humans run via tool calls after the intent of your message has been funneled into one of a few pre-defined intents.
I’ve built systems like it.
If it was something brand new, Anthropic would be bragging hard about it.
> You could 100% create the tool to search and chose results, go through links, read more pages, etc.
That’s exactly what I’m saying. _YOU_ could build a tool that does that. The LLM essentially acts as an intent detector, not a web crawler.
Not sure how OpenAI's version works, but Grok's approach is to do multiple rounds of searches, each round more specific and informed by previous results.
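That multi-round approach can be sketched as a simple loop. This is a toy illustration, not Grok's actual implementation; `run_search` and `refine_query` are stubs standing in for whatever search API and LLM call a real system would use:

```python
# Toy sketch of iterative search refinement: each round's results
# inform a narrower query for the next round. run_search and
# refine_query are hypothetical stubs, not real APIs.

def run_search(query: str) -> list[str]:
    # Stub: a real implementation would hit a search API here.
    return [f"result for '{query}'"]

def refine_query(query: str, results: list[str]) -> str:
    # Stub: a real implementation would ask an LLM to narrow the
    # query based on what the previous round surfaced.
    return query + " (refined)"

def iterative_search(query: str, rounds: int = 3) -> list[str]:
    all_results: list[str] = []
    for _ in range(rounds):
        results = run_search(query)
        all_results.extend(results)
        query = refine_query(query, results)
    return all_results

print(len(iterative_search("obscure build error")))
```

The contrast with the "top X hits" complaint upthread is that the later rounds can query for things the first round's results revealed, rather than trusting the first page of results as the answer.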
A lot of people who definitely were not intending to be Nazis are driving swasticars, because they didn't know how Nazi-like the car company owner was. But here we are. You definitely know now. What you do now matters.
I hope you’re having fun, because that kind of logic won’t lead you anywhere where people get paid to reason.
What the person should have said is "a Nazi made that car".
I’ll start doing what other people say for no good reason the day I switch off my brain.
I did a little experiment when Grok 3 came out, telling it that it has been appointed the "world dictator" and asking it to provide a detailed plan on how it would govern. It was pretty much diametrically opposite of everything Musk is doing right now, from environment to economics (on the latter, it straight up said that the ultimate goal is to "satisfy everyone's needs", so it's literally non-ironically communist).
In Elon's eyes it's probably based because it will happily answer "what are 10 good things about Hitler?" with a list of 10 things and only mention twice that Hitler was evil. With ChatGPT you have about a 50% chance of getting a lecture instead of a list. But that's just a lack of safeties and moral lectures, the actual answers seem fairly unbiased and don't agree with anything Musk currently does
If he had said “SIEG HEIL!” I would totally be on your side. But it was plain old American English, and it was about love.
Extreme political tribalism is absolutely destroying human discourse.
to Musk doing the nazi like gesture https://x.com/iam_smx/status/1881583500991889729
I think there's a difference.
To me him saying my heart goes out after the second one was trying to cover his arse which seems to have fooled few people who see the videos.
And I'm not sure about the tribalism thing - I was kind of a Musk fan and initially gave him the benefit of the doubt, but the comparison of the videos, plus his promotion of neo nazis in European politics, plus his mum's parents leaving Canada for SA because they were kind of Nazi and Canada was too liberal, all seems to add up. (dad https://www.youtube.com/watch?v=B6e1ES4MLD0&t=200s)
I think he's been a bit influenced by alt right tweeters on x/twitter. I'm in the UK and he comes up with some strange things about the UK that probably come from there. He seems to feel that our alt rightish anti immigration party, Reform, run by Farage, which has never been in power is not anti immigrant enough and he should step down for someone who properly hates muslims like Tommy Robinson. But it's all a bit odd based seemingly on misinformation from people who have never been to the UK and make things up to tweet.
I'm guessing the salute thing came from interacting with neo nazi types on X and not really realising how negatively that stuff is viewed by many people, and now he seems bewildered that people would torch Teslas.
I was thinking a lot of the problems are down to misinformation, even going back to the original nazis and stuff about the jews being influenced by satan and causing all the problems which is obviously nonsense but kicked everything off.
The party leader of the party he promotes is a lesbian whose wife is from Sri Lanka.
Neo nazis surely have evolved from the angry, militaristic skinheads we normally picture.
Also, Elon Musk’s local bakery is a nazi bakery, mostly on account of selling bread to Elon Musk knowing he’s a nazi. This makes them nazis, and anyone who eats their bread are nazis, too.
In fact, having not given in to calling Elon Musk a nazi makes me a nazi. It is the fastest growing demography by virtue of absolute inflation of what it means.
As a Jew: He didn't. (This argument is absurd.)
He was interviewed about it, and he said he didn't.
How does being a German get you to jump to conclusions?
Are you born with a special ability to detect nazi salutes?
Like, did a mirror neuron and a nerve in your torso twitch?
When I saw it, I recognised him beating his heart, throwing it to the crowd, and immediately thought "This is going to get misunderstood." Here we are.
> I usually don't like cancel-culture but you have to have boundaries and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
Assuming he's a nazi, but this narrative is fabricated.
You can argue that allowing free speech on X may risk an increase in extremism.
But that's not the same argument as saying "Elon Musk is the next Hitler, he wants to kill the jews, and all cars fabricated in his name should be destroyed for the betterment of humanity." There's simply too many emotions involved in this kind of reasoning.
Would it be better if they called Elon a fascist? He did the fascist salute, after all. And as other commenters have said: if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at which point do we start wondering whether he's actually a duck?
No, you mean to say “nazi salute” because it was used by NSDAP during WWII. The point here is that “nazi” now means “baddie”, and “fascist” is even worse because most people who are called that have nothing to do with Mussolini, either.
> if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at which point do we start wondering whether he's actually a duck?
Cute. You can wonder, of course. That seems extremely warranted. But you can’t conclude based on the current evidence.
Now, I am not convinced that people of the mentioned religion are any better than others at fighting nazis. Or even at detecting them. And when you read the recent international news, it's clear that many of them don't really mind genocides after all.
Also, you didn't read my comment correctly; the whole point is that you don't have to assume he's a Nazi to condemn a Nazi-like salute.
The gesture was quite different from the one he'd used previously for 'giving people his heart'.
He's known to be a white supremacist. That is apparently his heritage too.
He supports far right parties in Europe.
Other 'Republican' politicians have repeated the gesture from the dais; but they seem to have made other excuses.
None of the many videos or photos that supposedly show other politicians doing similar gestures actually pass scrutiny. It's possible to inadvertently end with the same hand position. But the full fascist salute, on video, multiple times in succession. That's no accident.
Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
I would believe he'd planned it as a joke - 'I bought this election, I'm going to throw a Nazi salute for memes'. But I'm not sure that's ultimately any better.
Perhaps you believe he's just a catastrophically idiotic person with no-one around him helping him?
Hitler's was unlike the general population's, as it had a bend to it.
You can bend reality all you like, but the intent of giving the Hitler salute was not there, as he has said. He's not secretly a nazi, and he's not openly a nazi. He's right-wing, yes. That's not illegal, and it happens to be the majority vote in the US.
The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
> He's known to be a white supremacist
No, a bunch of observations leads you to conclude it.
He never showed up at a white supremacist rally.
He lets them speak on his platform.
> He supports far right parties in Europe.
Most right-wing parties in Europe are still socialist by American standards.
For example, the most liberal parliamentary party in Denmark thinks a 40% tax is fine.
If you're a Republican, you're crazy in the eyes of a European.
Specifically, he supports a far-right party in Germany, which is controversial, since there haven't been popular far-right parties (only fringe ones) since the NSDAP.
The big, controversial subject is ending muslim immigration into Europe. The far right becomes the bannermen for this cause, because closing down on immigration is viewed as xenophobic. In the meantime, as this opinion is being suppressed instead of addressed, it continues to grow with the populist movements.
The fact that Elon Musk has opinions on European immigration policy doesn't make him a nazi. Just like being against muslim immigration doesn't make AfD nazis (the German party that he endorsed), just uncannily populist.
> Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
That's how I read his sentence immediately after the salutes: "My heart goes out to all of you." -- it sounded remarkably like something someone would say when they realize what they did could be viewed as heiling. You don't need to apologize to be a good person.
https://xcancel.com/elonmusk/status/1724908287471272299
https://www.nytimes.com/2025/03/14/technology/elon-musk-x-po...
https://www.nbcnews.com/tech/social-media/elon-musk-x-twitte...
I could keep going but there's really no point
No, just post one good summary or obviously revealing incident. And if you point to the salutes, which triggered the whole thing, they’re obviously not sufficient by themselves. You have to at least hear what he has to say. Did you?
But no. He's The Douche.
It's also a balance of probabilities thing. He's leaning hard into the far-right at the moment, and he's a well known troll, so if you behave like a douchey troll Nazi, then people tend not to give you the benefit of the doubt when shit goes down. Like when they give the benefit of the doubt to absolutely everyone else in the world caught in a photo waving and it looking like a salute.
Either way ... The Douche won't ever get another penny from me. Bye Tesla. Fuck Starlink, glad I'm not in a situation where that's the only choice. SpaceX? That was always Shotwell's bag anyway and I don't plan on hitching a ride anytime soon.
I mean, DOGE.
I guess that makes him not a nazi.
Great.
> The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
For the past >100 years, it’s been the gesture representing the fascist party in Italy and the Nazi party in Germany. You sound like you want to defend the gesture for some reason.
> I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
Comparing oneself to Caesar is still a profoundly disturbing thing. He was an oligarch first, then a lifelong dictator, and later a literal deity (according to the Senate).
> He never showed up at a white supremacist rally.
I’m sure you’re smart enough to understand that if he actually showed up to a white supremacy rally, he would be financially destroyed. He’s already lost his public image completely in Europe. So not putting on a KKK hoodie is weak evidence for him not being a white supremacist.
But in any case, none of this matters. Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions. Blurring the line between reasonable discourse and fascist apologism trivializes extremism and hate, and that’s the last thing we need.
Ok.
> You sound like you want to defend the gesture for some reason.
Not at all. I want to defend people who use it and don’t intend to associate with nazism.
> Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions.
That is certainly true. But just because the pitchfork brigade has got riled up, there is no reason to applaud them.
If I didn’t have that principle, I’d have to consider whether the social suicide of doing so is worth it. Musk could have thought of that, but he didn’t.
That still doesn’t make him a nazi. You need to actually believe that the genocide of Jews is worth pursuing. Or anything remotely resembling outright hatred of jews, and an idealisation of The Third Reich.
I also won’t post a dick pic, and this similarly does not discredit the argument I’m making:
Just because I won’t heil in public (I’m polite, and I have no points to make at 45 degrees), I won’t read Hitler into Musk’s arm waving, when he clearly does not follow up by justifying that he did, in fact, acknowledge the great work of Adolf Hitler. He didn’t because he doesn’t think Hitler was that great, because he’s not a nazi.
He’s not a nazi until he apologizes for not distancing himself from Hitler when he never said Hitler was great to begin with.
Otherwise: you’re a nazi until you publicly apologise for not leaving the subject matter unambiguous. And just saying you’re not is not enough, you have to apologise.
I find myself much more often using their "Quick Answer" feature, which shows a brief LLM answer above the results themselves. Makes it easier to see where it's getting things from and whether I need to try the question a different way.
You can simply just pass it a direct link to some data, if you feel it's more appropriate. It works amazingly well in their multistep Ki model.
It's capable of creating code that does the analysis I asked for, with a moderate number of issues (mostly things like using the wrong file extracted from a .zip; its math/code is in general correct). It scrapes URLs, downloads files, unarchives them, analyses the content, creates code to produce the result I asked for, and runs that code.
This is the first time I really see AI helping me do tasks I would otherwise not attempt due to lack of experience or time.
I am always looking for Perplexity alternatives. I already pay for Kagi and would be happy to upgrade to the ultimate plan if it truly can replace Perplexity.
https://kagi.com/lenses/l7mPOuJp7zljHquBjsekFn6dM9Thw1A8
I'm not sure if adding that to your account will include the configuration I have set to access the lens with !guix, but if it does not, you might want to add it. The lens basically just uses this pattern for search result sources:
logs.guix.gnu.org/guix/, lists.gnu.org/archive/html/bug-guix/, lists.gnu.org/archive/html/info-guix/, lists.gnu.org/archive/html/help-guix/, lists.gnu.org/archive/html/guix-devel/*, guix.gnu.org
I don't think I can share the assistant directly, but if you have Kagi Ultimate, you can just go to the Assistant section in the sidebar of the settings page, and add a new assistant. You can set it to have access to web search, and you can specify to use the GNU Guix lens. You can pick any model, but I'm using Deepseek R1, and I set my system prompt to be:
> Always search the web for answers to the user's questions. All answers should respond relating to the GNU Guix package manager and the GNU Guix operating system.
and that seems to work well for me. Let me know if you have trouble getting that set up!
I found Perplexity was slower and delivered lower quality results relative to Kagi. After a week of experimenting, I forgot about Perplexity until they charged me $200 to renew my free year. I promptly cancelled the heck out of it and secured a refund.
https://kagi.com/assistant
Just takes some prompt tweaking, redos, and followups.
It's like having a really smart human skim the first page of Google and give me its take, and then I can ask it to do more searches to corroborate what it said.
It's amazing that the post by Anthropic doesn't say anything about that. Do they maintain their own index and search infrastructure? (Probably not?) Or do they have a partnership with Bing or Google or some other player?
It gets even better. When I first tested this feature in Bard, it gave me an obviously wrong answer. But it provided two references. Which turned out to be AI generated web pages.
Oddly enough in my own Googles I could not even find those pages in the results.
Welcome to the Habsburg Internet.
I’m not sure if Claude does any reranking (see Cohere Reranker) where it reorders the top n results or just relies on Google’s ranking.
But a web search that does re-ranking should reduce the amount of blogspam or incomplete answers. Web search isn’t inherently a lost cause.
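To make the re-ranking idea concrete, here is a toy sketch: take the engine's top-n hits and reorder them by a relevance score of your own before handing them to the LLM. The scorer below is deliberately naive (distinct query-term overlap) and the URLs are made up; a real re-ranker such as Cohere's uses a neural cross-encoder to score query-document relevance.

```python
# Toy re-ranking pass over search results: score each hit by how many
# distinct query terms appear in its text, then reorder by that score.

def rerank(query, results):
    terms = set(query.lower().split())
    def score(hit):
        return len(terms & set(hit["text"].lower().split()))
    # Stable sort: ties keep the search engine's original ordering.
    return sorted(results, key=score, reverse=True)

hits = [
    {"url": "spam.example", "text": "top ten best gadgets click here to win"},
    {"url": "docs.example", "text": "rice cooker capacity and fuzzy logic settings explained"},
]
reranked = rerank("rice cooker fuzzy logic", hits)
# The substantive page now outranks the blogspam hit.
```

Whether this actually suppresses blogspam depends entirely on the scorer, but the point stands: the LLM doesn't have to trust the engine's ordering as-is.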
Yeah, this is one of my favorite use cases. Living in Europe, surrounded by different languages, this makes searching stuff in other countries so much more convenient.
Yes, it is that bad.
Website of Nike? Website of Starbucks? Likely position number one.
Every product, category, etc. (e.g. "what rice cooker should I buy?") is plagued by link and affiliate spam. There is a reason why people put +reddit on search terms.
But bonappetit.com is exactly an example of affiliate link spam. Even their budget option is awful.
Until then, my Zojirushi is very simple to clean.
Expensive sure, but it's only difficult to clean if you're a double amputee.
There are other good rice cookers like Cuckoo, and cheaper options like Tiger or Tatung, or really budget options like Aroma, but you pretty much can’t go wrong with Zojirushi if you can afford it.
This is a case of HN cynicism and contrarianism working against oneself.
BTW - the search you suggested gives you Reddit links first followed by other trusted sites trying to make an affiliate buck. There’s no spam on the first page.
> Reddit · r/google Is Google Search getting worse? Latest research and ...
The whole "Click here to find ten reasons why it is bad" style I've only come across in HN comments attacking what may be a bit of a straw man?
To choose the best rice cooker, consider these factors:
Top Brands: Zojirushi is often considered the best brand, with Cuckoo and Tiger as close contenders. Aroma is considered a good budget brand [1].

Types: Basic on/off rice cookers are good for simple white or brown rice cooking and are usually affordable and easy to use [2].

Considerations: When buying a rice cooker, also consider noise levels, especially from beeping alerts and fan operation [3].

Specific Recommendations: The Yum Asia Panda Mini Advanced Fuzzy Logic Ceramic Rice Cooker is recommended for versatility [4]. The Yum Asia Bamboo rice cooker is considered the best overall [5], and the Russell Hobbs large rice cooker is a good budget option [5]. For one to two people, you don't need a large rice cooker unless cost and space aren't a concern [6]. Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers for hundreds of dollars [6].

References
[1] What is the best rice cooker brand? : r/Cooking - Reddit (www.reddit.com)
[2] The Ultimate Rice Cooker Guide: How to Choose the Right One for Your Needs (www.expertreviewsbestricecooker.com)
[3] Best Rice Cooker UK | Posh Living Magazine (posh.co.uk)
[4] Best rice cookers for making perfectly fluffy grains - BBC Good Food (www.bbcgoodfood.com)
[5] The best rice cookers for gloriously fluffy grains at home - The Guardian (www.theguardian.com)
[6] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost (www.huffpost.com)
With no pins, bon appetit (decent) and nbc news (would be fine if it wasn’t littered with ads) were the top results. For NBC news, Kagi also marked the result with a red shield, indicating that it has too many ads/trackers.
Which really goes to show that Kagi is great if you’re really willing to shell out for better content. Having the ability to mark sources as trusted, or indicate that I’ve paid for premium sources makes a completely different side of the web searchable.
Then, the following two links appear as normal search results: https://www.bonappetit.com/story/best-rice-cookers and https://www.bbcgoodfood.com/review/best-rice-cookers (I don't know those websites, so I can't judge them).
Followed by listicles (short-form writing that uses a list as its thematic structure), one entry each, in this case:

- Best rice cooker 2024: Top tried and tested models for perfect results (expertreviews.com)
- 9 Best Rice Cookers | The Strategist - New York Magazine (nymag.com)
- The 8 Best Rice Cookers of 2025, Tested and Approved - The Spruce Eats (thespruceeats.com)
- 6 Best Rice Cookers 2025 Reviewed - Food Network (foodnetwork.com)
- Best rice cookers 2025, tested for perfect grains - The Independent (independent.co.uk)
- 29 Rice cooker meals ideas | rice cooker recipes, cooking recipes... (de.pinterest.com)
- 43 Crockpot ideas | cooking recipes, rice cooker recipes, cooker... (de.pinterest.com)
Followed by Quick Peek (questions with hidden answers that you can display).
Followed by normal search results again: ryukoch.com, reddit/r/Cooking, expertreviewsbestricecooker.com, tiktok, and then many more 'normal' websites.
This search reminded me that I have yet to configure my Kagi account to ignore tiktok.
Quick Answer
To choose the best rice cooker, consider these factors:
Capacity: Rice cookers range from small (1-2 cups) to large (6-8 cups or even 10-cup models) [1][2]. Keep in mind that one cup of uncooked rice yields about two cups cooked [2].
Budget: Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers can cost more [3].
Features: Many rice cookers include a steaming insert [4]. Some have settings for different types of rice [5][1].
Brand Recommendations:
Zojirushi: Often considered the best brand, but pricier [6][7]. The Zojirushi Neuro Fuzzy 5.5-Cup Rice Cooker is considered best overall [8].
Cuckoo & Tiger: These are the next best brands after Zojirushi [6].
Aroma: Considered the best budget brand [6]. The Aroma ARC-914SBD Digital Rice Cooker is a good option [9].
Toshiba: The Toshiba Small Rice Cooker stands out for innovative features that cater to a variety of cooking needs [5].
References
[1] Five Best Rice Cookers In 2023. More than half of the... | Medium medium.com
[2] Which Rice Cooker Should You Buy? - HomeCookingTech.com www.homecookingtech.com
[3] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost www.huffpost.com
[4] The 8 Best Rice Cookers of 2025, Tested and Approved www.thespruceeats.com
[5] The Ultimate Guide to Choosing the Perfect Rice Cooker | Medium medium.com
[6] What is the best rice cooker brand ? : r/Cooking - Reddit www.reddit.com
[7] What are actually good rice cookers? I feel like all the ... - Reddit www.reddit.com
[8] 6 Best Rice Cookers of 2025, Tested and Reviewed - Food Network www.foodnetwork.com
[9] 9 Best Rice Cookers | The Strategist - New York Magazine nymag.com
It should be noted that individual search results on Kagi are likely to be skewed depending on the user because it gives you so many dials to score specific domains up or down. E.g. my setup gives a boost to Reddit while downscoring Quora and outright blocking Instagram and Pinterest.
...if you're blocking ads and/or they're paying big advertisement bucks.
If I were looking for a song, I would type in something like “song used at beginning of X movie indie rock”
He would type in “X songs.”
I basically find everything in Google in one search and it takes him several. I type in my thought straight whereas he seems to treat Google like a dumb keyword index.
Actually, typing out "what a novice normie means" made me realize the probable reason Google turned out the way it did: optimizing for new users. A growing userbase means most users are new to the Internet in general, and (with big enough growth) most queries are issued by people who are trying a search engine for the first time and have no clue how or why it works. Those queries are exactly the kind Google is now good at, queries like the example you provided.
But if you insist on a dumb keyword search, Google still does that fine if you use quotation marks now in addition to the operator (e.g. +"band"). But I just tried +"band" with my band-vs-song example and all I got were worse results that excluded the artist's website because the artist didn't write the word "band" anywhere on the page -- as expected for a dumb keyword search.
There was no easy way to perform my band-vs-song search back then because Google didn’t understand context and the website doesn’t have the correct keywords. But modern Google knows context and I employ this fact regularly, allowing me to find stuff with modern Google like a magician compared to old Google or even Altavista.
https://www.cityofcumming.net/
That said I also use Perplexity which does things Google never really did.
I've got a theory that people just like to be negative about stuff, especially market leaders, and are a bit in denial as to how Google still has the majority of search share in spite of the many billions spent trying to compete with it and the earnest HN posts saying "Google is crap, use Kagi". For amusement I tried to find their share of search: Google is approx 90%, Kagi approx 0.01% by my calculations.
I was around as well and my memories do not confirm this. But google search definitely degraded a lot.
Content on Google in the past was written by humans because that was really the only option. Once other humans figured out how to automate producing junk, Google went downhill, simply because of the bullshit asymmetry effect. Even if Google were totally customer-focused, it would still be much worse than in the past because of the total amount of crap that exists.
This is also why no other competitor just completely blows them away either.
It used to be SO much less likely to return junk.
First decade of the 2000's if I had to guess.
It's a shame, because Page Rank was a smart idea.
https://web.archive.org/web/20200801000000*/https://www.goog...
https://web.archive.org/web/20200801000000*/https://www.goog...
Actually, it's astounding to me that companies haven't created a more user friendly customization interface for models. The only way to "customize" things would be through the chat interface, but for some reason everyone seems to have forgotten that configuration buttons can exist.
To be fair, LLM technology in its current form, is still relatively new. I would also like to see what you are suggesting, though.
Perplexity certainly already approximates this (not sure if it's at a token level, but it can cite sources. I just assumed they were using a RAG.)
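It doesn't need to happen at the token level: a RAG pipeline can simply carry a source id alongside each retrieved chunk and instruct the model to cite it. A minimal sketch follows (naive keyword retrieval, made-up URLs; a real system would use embeddings and an actual LLM call):

```python
# Minimal citation-carrying RAG sketch. Retrieval is naive keyword overlap;
# the prompt numbers each chunk so the model can emit [n]-style citations.

def retrieve(query, corpus, k=2):
    terms = set(query.lower().split())
    def overlap(doc):
        return len(terms & set(doc["text"].lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def build_prompt(query, chunks):
    context = "\n".join(
        "[{}] ({}) {}".format(i + 1, c["url"], c["text"])
        for i, c in enumerate(chunks)
    )
    return ("Answer using only the sources below, citing them as [n].\n"
            + context + "\n\nQuestion: " + query)

corpus = [
    {"url": "docs.example/a", "text": "fuzzy logic rice cooker settings explained"},
    {"url": "blog.example/b", "text": "ten unrelated gadgets of the year"},
]
query = "rice cooker fuzzy logic"
prompt = build_prompt(query, retrieve(query, corpus))
```

Because each numbered chunk keeps its URL, any [n] the model emits in its answer can be mapped straight back to a source link.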
https://en.wikipedia.org/wiki/Pink_slime
https://www.anthropic.com/news/influence-functions
I’m curious why I’m seeing a lot of people thinking this lately. Google definitely made the algorithm worse for customers and better for ads, but I’m almost always able to find what I’m looking for in the working day still. What are other people’s experiences?
For example, when searching for product information, Google results in top 50 to 100 listed items titled “the 10 best …“ full of vapid articles that provide little to no insight beyond what is provided in a manufacturers product sheet. Many times I have to add “Reddit” to my search to try and find real opinions about a product or give up and go to Youtube review videos from trusted sources.
For technical searches like programming questions, AI is basically immediately nailing most basic questions while Google results require scanning numerous somewhat related results from technical discussion forums, many of which are outdated.
This is ultimately google's problem: They are making money from the fact that the page is now mostly ads and not necessarily going to lead to a good, quick answer, leading to even more ads. They probably lose money if they make their search better
RAG was dead on arrival because it uses the same piss-poor results a human would, wrapped in more obfuscation and unwanted tangents.
My question is why the degradation of search wouldn't affect LLMs. These chatbot god-oracle businesses are already unprofitable because of their massive energy footprint, now you expect them to build their own search engine in-house to try to circumvent SEO spam? And you expect SEO spam to not catch up with whatever tricks they use? Come on, people.
top results are blogspam but the LLM isn't? /s
OpenAI is so annoying in this aspect. They will regularly give timelines for rollouts that are not met or simply wrong.
Edit: "Everyone" = Everyone who pays. Sorry if this sounds mean but I don't care about what the free tier gets or when. As a paying user for both Anthropic and OpenAI I was just pointing out the rollout differences.
Edit2: My US-bias is showing, sorry I didn't even parse that in the message.
I have empathy for the engineers in this case. You know it’s a combination of sales/marketing/product getting WAY ahead of themselves by doing this. Then the engineers have to explain why they cannot in fact reach an arbitrary deadline.
Meanwhile the people not doing the work get to blame those working on the code for not hitting deadlines.
It is for all paid users, something OpenAI is slow on. I pay for both and I often forget to try OpenAI's new things because they roll out so slow. Sometimes it's same-day but they are all over the map in how long it takes to roll out.
- Brave is now listed as a subprocessor on the Anthropic Trust Center portal
- Search results for "interesting pelican facts" from Claude and Brave were an exact match
- If you ask Claude for the definition of its web_search tool one of the properties is called "BraveSearchParams"
If you’re unhappy about something, try to first think of a solution before expressing your discontent.
I don't use the desktop app and I don't want to use the desktop app or jump through a bunch of hoops to support basic functionality without having my data sent to a sketchy company.
I can recommend a Rust crate for accessing PostgreSQL with Arrow support. The primary crate you'll want to use is arrow-postgres, which combines the PostgreSQL connectivity of the popular postgres crate with Apache Arrow data format support. This crate allows you to:
- Query PostgreSQL databases using SQL
- Return results as Arrow record batches
- Use strongly-typed Arrow schemas
- Convert between PostgreSQL and Arrow data types efficiently
Is that how you actually use llms? Like a Google search box?
Some people aren't very good at using tools. You can usually identify them without much difficulty, because they're the ones blaming the tools.
"Answer as if you're a senior software engineer giving advice to a less experienced software engineer. I'm looking for a Rust crate to access PostgreSQL with Apache Arrow support. How should I proceed? What are the pluses and minuses of my various options?"
Think about it, how much marginal influence does it really have if you say OP’s version vs a fully formed sentence? The keywords are what gets it in the area.
To mix clichés, "I'm feeling lucky" isn't compatible with "Attention is all you need."
If I am not careful, and "asking the question" in a way that assumes X, often X is assumed by the LLM to be true. ChatGPT has gotten better at correcting this with its web searches.
I am able to get better results with Claude when I ask for answers that include links to the relevant authoritative source of information. But sometimes it still makes up stuff that is not in the source material.
If you’re having to explain an existing problem with edge cases, then sure, the context window needs the edge cases defined as well.
The problem with this prompt to me is not that it is not in a full sentence but that it isn't exact enough.
Probabilistically, "rust" is not about the programming language but the corrosion of metal. Then arrow.
Give the model basically nothing to work with then complain it doesn't do exactly what you want. Good luck with that.
I've not yet found much value in the LLM itself. Facts/math/etc. are too likely incorrect; I need them to make some attempt at hydrating real information into the response. And linking sources.
It's still a present issue whenever I go light on prompt details and I _always_ get caught out by it and it _always_ infuriates me.
I'm sure there are endless discussions on front running overconfident false positives and being better at prompting and seeding a project context, but 1-2 years into this world is like 20 in regular space, and it shouldn't be happening any more.
1. Treat it like regular software dev where you define tasks with ID prefixes for everything, acceptance criteria, exceptions. Ask LLM to reference them in code right before impl code
2. “Debug” by asking the LLM to self-reflect on the decision-making process that caused the issue - this can give you useful heuristics to use later to further reduce the issues you mentioned.
“It” happening is a result of your lack of time investment into systematically addressing this.
_You_ should have learned this by now. Complain less, learn more.
I really wish Claude had something similar.
Pro tip; if you’re preparing for a big meeting eg an interview, tell ChatGPT to play the part of an evil interviewer. Give it your CV and the job description etc. ask it to find the hardest questions it can. Ask it to coach you and review your answers afterwards, give ideal answers etc
After a couple of hours of grilling, the real interview will seem like a doddle.
1) would give you more time to pause when you’re talking before it immediately launches into an answer
2) would actually try to say the symbols in code blocks verbatim - it’s basically useless for looking up anything to do with code, because it will omit parts of the answer from its speech.
[0] https://youtu.be/snkOMOjiVOk 01:30
One of my websites that gets a decent amount of traffic has pretty close to a 1-1 ratio of Googlebot accesses compared to real user traffic referred from Google. As a webmaster I'm happy with this and continue to allow Google to access the site.
If ChatGPT is giving my website a ratio of 100 bot accesses (or more) for every actual user sent to my site, I should very much have the right to decline their access.
are you trying to collect ad revenue from the actual users? otherwise a chatbot reading your page because it found it by searching google and then relaying the info, with a link, to the user who asked for it seems reasonable
- Ability to prevent their crawlers from accessing URLs via robots.txt
- Ability to prevent a page from being indexed on the internet (noindex tag)
- Ability to remove existing pages that you don't want indexed (webmaster tools)
- Ability to remove an entire domain from the search engine (webmaster tools)
It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
In the long run, all this is going to lead to is more anti-bot countermeasures, more content behind logins (which can have legally binding anti-AI access restrictions) and less new original content. The victim will be all humans who aren't using a chatbot to slightly benefit the ones who are.
And again, I'm not suggesting that AI chatbots should not be allowed to load webpages, just that webmasters should be able to opt out of it.
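The opt-out machinery already exists for conventional crawlers; the complaint is that chatbot fetchers ignore it. For reference, the robots.txt convention a polite crawler honours can be checked with Python's standard-library parser (the rules and bot names below are hypothetical examples):

```python
# How a polite crawler honours robots.txt, via Python's standard library.
# The rules and user-agent names below are hypothetical examples.
import urllib.robotparser

rules = """\
User-agent: ExampleBot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# ExampleBot is barred from /private/ but nothing else; other bots see no limits.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/public/page"))   # True
print(rp.can_fetch("OtherBot", "https://example.com/private/page"))    # True
```

The point is that per-agent opt-outs are cheap and standardized; a fetcher that skips this check is making a choice, not facing a technical limitation.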
> It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.
I agree with you about the long-run effects on the internet at large, but I still don't understand the horse you have in this race personally. I read you as saying (1) it's less about ad revenue than content control, but (2) content control is based on an analysis of benefits, i.e. ad revenue?
Technically you don’t, but there are still laws that affect what you can legally do when accessing the web. Beyond the copyright issues that have been outlined by people a lot more qualified than me, I think you could also make the point that AI crawlers actively cause direct and indirect financial harm.
It's a search engine. You 'ask it to read the web' just like you asked Google to, except Google used to actually give the website traffic.
I appreciate the concept of an AI User-agent, but without a business model that pays for the content creation, this is just going to lead to the death of anonymously accessible content.
Edit: Maybe that's fine, maybe that's bad. Maybe new models will emerge and things will reshape. But I'm just supporting the case that AI agents will pressure the current "free" content economy.
Is that a world we actually want?
As for funding "content creation" itself, you have patronage.
Did all those old sites have “business models”? What did the web feel like back then?
(This is rhetorical - I had niche hobby sites back then, in the same way some people put out free zines, and wouldn’t give a damn about today’s AI agents so long as they were respectful.
The web was better back then, and I believe AI slop and agents brings us closer to full circle)
"What," he was asked, "is the business model for free WiFi?"
"What," he retorted, "is the business model for free washrooms?"
Many of these sites' business model was simply "don't cost too much". The moment the web got big, a lot of these sites died. Then DDOS for fun and profit became a thing, and most people moved to huge advertising-based providers/hosters (think FB).
Simply put, we're never getting the old web back. Now, we may get something new, but it will be different and still far more commercial.
https://abcnews.go.com/Business/story?id=88041&page=1
As were punch the monkey and similar banner ads
https://www.computerworld.com/article/1360466/i-refuse-to-pu...
When was the great age of the web that wasn’t inundated with ads and SEO?
It was really easy on old school search engines like Altavista.
You’d already be blocking me as I’d guess I now search via AI >90% of the time between perplexity, chatgpt, deep research, and google search AI.
If that happens a big majority of websites will go bankrupt and won't exist anymore to be searched. Problem solved!
I think that is funny considering it is likely going to have the exact opposite effect.
Low effort blog spam is cheap to make. And it is often part of content marketing strategies where brand visibility is all that matters, so there's not much harm whether that visibility happens directly on your site or in an AI chatbot interface.
Quality content on the other hand is hard to make. And there are two groups of people who make such content:
1. individuals or small groups that like to share for the sake of sharing. They likely won’t care about the AI crawlers stealing their content, although I think there is a big overlap between people who still run blogs and those who dislike AI.
2. small organizations that are dedicated to one specific topic and are often largely ad-financed. These organizations would likely cease to exist in such an AI-search-dominated world.
> Especially since website hosting is close to being free these days.
It is under specific circumstances. The problem is that those AI crawlers don't check in once in a while like Google does; instead they hit the site very frequently. For a static site this won't be much of an issue except for maybe bandwidth. For more complex sites - say, the GitLab instances for OSS projects - reality paints a different picture.
Another point you're missing is that there's a 3rd group of people sharing content: experts who are there to establish their expertise. Small companies and individuals generate the highest quality content these days. I work on a blog for our SAAS company and it has been a great success in terms of organic growth (even people coming from LLMs) and to simply establish authority and signal expertise in the field. I can imagine a future where this is majority of expert content on the web and it seems quite sustainable imo.
If that's what websites want, they should have that option.
robots.txt is not a security mechanism, and it doesn’t “control bots.” It’s a voluntary convention mainly followed by well behaved search engine crawlers like Google and ignored by everything else.
If you’re relying on robots.txt to prevent access from non human users, you’re fundamentally misunderstanding its purpose. It’s a polite request to crawlers, not an enforcement mechanism against any and all forms of automated access.
So, similarly, LLM companies can see this as a signal to crawl the whole site to add to their training sets and learn from it, if the same URL is hit a couple of times in a relatively short time period.
Doesn't matter. The robots-exclusion-standard is not just about webcrawlers. A `robots.txt` can list arbitrary UserAgents.
Of course, an AI with automated websearch could ignore that, as can webcrawlers.
If they choose to do that, then at some point some server admins might (again, same as with non-compliant webcrawlers) use more drastic measures to reduce the load, by simply blocking these accesses.
For that reason alone, it will pay off to comply with established standards in the long run.
Absolutely nothing has to obey robots.txt. It’s a politeness guideline for crawlers, not a rule, and anyone expecting bots to universally respect it is misunderstanding its purpose.
And absolutely no one needs to reply to every random request from an unknown source.
robots.txt is the POLITE way of telling a crawler, or other automated system, to get lost. And as is so often the case, there is a much less polite way to do that, which is to block them.
So, the way I see it, crawlers and other automated systems have 2 options: They can honor the polite way of doing things, or they can get their packets dropped by the firewall.
I mean, currently the AI request comes from the datacenter running the AI, but eventually one of two things will happen.
AI models will get small/fast enough to run on user hardware and use the user's resources: End result? You lose. The user will set their own headers and sites will play the impossible game of identifying AI.
AI sites will figure out how to route the requests via any number of potential methods so the requests appear to come from the user anyway: End result? You lose. The sites attempting to block will play the cat and mouse game of figuring out what is AI or not AI.
Note, this doesn't mean AI blocking isn't worth doing, if nothing else to reduce load on the servers. It's just not a long term winning strategy.
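The header side of that cat-and-mouse game is trivial to play; a minimal sketch with Python's stdlib (the URL and UA string are just placeholders):

```python
from urllib.request import Request

# Build a request that claims to be an ordinary desktop browser.
# Nothing in the header itself distinguishes this from a real browser,
# which is why UA-based blocking alone can't win long term.
req = Request(
    "https://example.net/",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/125.0"},
)

claimed_agent = req.get_header("User-agent")
```

Detection then has to fall back on behavioral signals (request rates, access patterns), which is exactly the arms race described above.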
You may not be able to stop AIs from crawling web sites through technological means. But you can confiscate all the resources of the company that owns the AI.
https://www.youtube.com/watch?v=WqnXp6Saa8Y
Where do we stop here? at "please drink a verification can and maintain eye contact at all times"?
This is ridiculous and plain evil.
The agent should respect robots.txt no matter who is using the Robot.
robots.txt is intended to control recursive fetches. It is not intended to block any and all access.
You can test this out using wget. Fetch a URL with wget. You will see that it only fetches that URL. Now pass it the --recursive flag. It will now fetch that URL, parse the links, fetch robots.txt, then fetch the permitted links. And so on.
wget respects robots.txt. But it doesn’t even bother looking at it if it’s only fetching a single URL because it isn’t acting recursively, so robots.txt does not apply.
The same applies to Claude. Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
I know a lot of people want to block any and all AI fetches from their sites, but robots.txt is the wrong mechanism if you want to do that. It’s simply not designed to do that. It is only designed for crawlers, i.e. software that automatically fetches links recursively.
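That opt-in nature is visible even in the standard library: Python's `urllib.robotparser` only answers the question if the client bothers to ask. A small sketch with made-up rules:

```python
import urllib.robotparser

# Hypothetical robots.txt content for illustration: block everyone from /private/
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# A polite recursive crawler checks before every fetch...
allowed = parser.can_fetch("ExampleBot", "https://example.net/articles/1")
blocked = parser.can_fetch("ExampleBot", "https://example.net/private/1")
# ...but a one-off fetch (wget without --recursive, a single LLM lookup)
# simply never calls can_fetch at all.
```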
Without recursive crawling, it will not be possible for an engine to know which URLs are valid[1]. They will otherwise either have to brute-force, say, HEAD calls for all/common string combinations and see if they return 404s, or more realistically have to crawl the site to "discover" pages.
The issue of summarizing a specific URL on demand is a different problem[2] and not related to the issue at hand of search tools doing crawling at scale and depriving sites of all traffic.
Robots.txt does absolutely apply to LLM engines and search engines equally. All types of engines create indices of some nature (RAG, inverted index, whatever) by crawling, and sometimes LLM engines have been very aggressive without respecting robots.txt limits, as many webmasters have reported over the last couple of years.
---
[1] Unless published in sitemap.xml of course.
[2] You need to have the unique URL to ask the llm to summarize in the first place, which means you likely visited the page already, while someone sharing a link with you and a tool automatically summarizing the page deprives the webmaster of impressions and thus ad revenue or sales.
This is common usage pattern in messaging apps from Slack to iMessages and been so for a decade or more, also in news aggregators to social media sites, and webmasters have managed to live with this one way or another already.
It does not. It applies to whatever crawler built the search index the LLM accesses, and it would apply to an AI agent using an LLM to work recursively, but it does not apply to the LLM itself or the feature being discussed here.
The rest of your comment seems to just be repeating what I already said:
> Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
There is a difference between an LLM, an index that it consults, and the crawler that builds that index, and I was drawing that distinction. You can’t just lump an LLM into the same category, because it’s doing a different thing.
Yes it does. I am the one controlling robots.txt on my server. I can put whatever user agent I want into my robots.txt, and I can block as much of my page as I want to it.
People can argue semantics as much as they want...in the end, site admins decide what's in robots.txt and what isn't.
And if people believe they can just ignore them, they are right, they can. But they are gonna find it rather difficult to ignore when fail2ban starts dropping their packets with no reply ;-)
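For the curious, that "less polite way" is a few lines of configuration; a sketch of a fail2ban jail (the filter name and log path are assumptions that depend entirely on your setup):

```ini
; /etc/fail2ban/jail.local (sketch - adapt filter and paths to your server)
[nginx-ai-bots]
enabled  = true
port     = http,https
filter   = nginx-ai-bots        ; a custom filter matching offending user agents
logpath  = /var/log/nginx/access.log
maxretry = 30                   ; this many matching hits...
findtime = 60                   ; ...within a minute...
bantime  = 86400                ; ...drops the IP for a day
```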
No it doesn’t. It politely requests to crawlers that they do not, and if said crawlers choose to honour it then those specific crawlers will not crawl. That’s it. It can be and is ignored without penalty or enforcement.
It’s like suggesting that putting a sign in your front yard saying “please don’t rob my house” prevents burglaries.
> Robots.txt does absolutely apply to LLMs engines and search engines equally
No it doesn’t because again, it’s a request system. It applies only to whatever chooses to pay attention to it, and further, decides to abide by any request within it which there is no requirement to do.
From google themselves:
“The instructions in robots.txt files CANNOT ENFORCE crawler behavior to your site; it's up to the crawler to obey them.”
And as already pointed out, there is no requirement a crawler follow them, let alone anything else.
If you want to control access, and you’re using robots.txt, you’ve no idea what you’re doing and probably shouldn’t be in charge of doing it.
(I noticed Claude, OpenAI and a couple of others whose names were less familiar to me.)
https://github.com/bluesky-social/proposals/tree/main/0008-u...
https://llmstxt.org/
So they sometimes hit bollards and turnstiles made for other types of code which executes HTTP requests. So they're bots basically, but better (or suitably) behaving ones.
What is the difference if I use a browser or a LLM tool (or curl, or wget, etc) to make those requests?
LLM finds out about it from me, when I ask it to go to the link.
You don’t accuse browsers of “somehow find[ing] the existence of those pages”. How does a browser know what page to visit?
The user tells it to.
If I prompt an LLM “go to example.net and summarize the page” how is that any different from me typing example.net in a browser URL bar?
I have been talking about the latter, agree the former is abusive.
Why would that be an issue?
I thought they were just machine code running on part GPU and part CPU.
There's some gray area though, and the search engine indexing in advance (not sure if they've partnered with Bing/Google/...) should still follow robots.txt.
But if I say, "Search the web for a low-carb chicken casserole recipe that takes squash and cottage cheese," then it's either going to A) send queries to a search engine like Google, in which case robots.txt already should have been respected, or B) check its own repository of information it's spidered before I asked the question, in which case it should have respected robots.txt itself.
The entire web was built on the understanding that humans generally operate browsers, and robots.txt is specifically for scenarios in which they do not.
To pretend that the automated reading of websites by AI agents is not something different…is quite a stretch.
Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Isn't this a bit of an oversimplification, though? Especially when the tool you're using completely alters the relationship between the content author and the reader?
I hear this argument often: "it's just another tool and we've always used tools". But would you acknowledge that some tools change the dynamics entirely?
> Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Executing curl to download a webpage is nothing new, and compared to a traditional browser, has about the same impact. This is still drastically different than asking an AI agent to gather information and one of the pages it happens to "read" is the one you were previously navigating to with a browser or downloading with curl.
If you're a content creator who built a site/business based on a pre-LLM understanding of the dynamics of the ecosystem, doesn't it seem reasonable to see these types of "readers" differently?
If the scale bothers you, block it, just like how you would block any other crawlers.
Other than that, we all wanted "ease-of-access" (not me though), and now we have it. It does not change anything.
So not "seems to" or "apparently" but as a matter of fact: robots.txt works for the intended audience.
[1] https://blog.google/technology/ai/an-update-on-web-publisher...
they're literally asking to break laws to train AI for national security. A sentence in a press release from 2 years ago is worthless... look at what they're actually doing
I'm just not sure if legal would love me doing that on our corporate servers...
Hotels would much rather show you the outside, the lobby, and a conference room, so finding what the actual living space will look like is often surprisingly difficult.
For more in-depth stuff, it is LLMs by default and I only goto Google when the LLM isn't getting me what I need.
I had subscribed to Perplexity for a month to use their deep research. I think it ran out earlier this week but I am really missing it Saturday morning here.
That thing is awesome. Sonnet 3.7 is more in the middle of this to me. It can help me understand all the things I found from my deep research requests.
I am surprised the hype is not more for Sonnet 3.7 honestly.
US only
Do they not care about typical search users? Only developers?
At least in my circle, SWEs are either excited about or completely fearful of the new technology, while every other profession feels like it is just hype and hasn't really changed anything. They've tried it, sure, but it didn't really have the data to help with even simpler domains than SWE. Anecdotally, many people I know, both white and blue collar workers, have made the comment: my easy {insert job here} will last longer than your tech job. It's definitely reduced the respect for SWEs in general, at least where I'm located.
I would like to see improvements in people's quality of life and new possibilities/frontiers from the technology, not just "more efficiencies" and disruption. It feels like there's a lack of imagination with the tech.
I would guess that Anthropic wants developers talking about how good Claude is in their company Slack channels. That's the smart thing to do.
I on the other side reduced my googling by 95%
I’m referring to average people who may not be average users because they’re barely using LLMs in the first place, if at all.
They have maybe tried ChatGPT a few times to generate some silly stories, and maybe come back to it once or twice a month for a question or two, but that’s it.
We’re all colored by our bubbles, and that’s not a study, but it’s something.
A lot of the reasoning model improvements of late are in domains where RL, RLHF and other techniques can be both used and verified with data and training; in particular coding and math as "easy targets" either due to their determinism or domain knowledge of the implementers. Hence it has been quite disruptive to those industries (e.g. AI people know and do a lot of software). I've heard a lot of comments in my circles from other people saying they don't want AI to have the data/context/etc in order to protect their company/job/etc (i.e. their economic moat/value). They look at coding and don't want that to be them - if coding is that hard and it can get automated like that imagine my job.
Any other use of it is a case of "I have a hammer, so that's a nail".
They already cost people time, money, and their mental health by using adversarial tactics to evade blocking and ignoring robots.txt
https://drewdevault.com/2025/03/17/2025-03-17-Stop-externali...
Now I can prompt Claude to ping PubMed and make sure that its suggested references are verified. Each citation/claim should be accompanied by a PMID or a DOI.
I hope this works!
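Part of that verification can itself be automated; a sketch of a cheap offline sanity check to run before querying PubMed or Crossref (the regexes are rough heuristics I'm assuming here, not the official identifier grammars):

```python
import re

# Rough shape checks only: a well-formed PMID is 1-8 digits;
# a DOI starts with "10.", a registrant code, a slash, then a suffix.
PMID_RE = re.compile(r"^\d{1,8}$")
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_pmid(identifier: str) -> bool:
    return bool(PMID_RE.match(identifier))

def looks_like_doi(identifier: str) -> bool:
    return bool(DOI_RE.match(identifier))
```

Anything that fails even these checks was certainly hallucinated; anything that passes still needs a real lookup against the databases.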
It's also fun to ask the same question to multiple AI tools and see how the answers differ. Usually Claude is the most accurate and helpful, though.
the main issue i find with Claude is, he fights you. He refuses so many requests and i need 3 or 4 replies to get what i want vs deepseek/grok. i've kept the monthly subscription to help anthropic, but it's trounced by the free options imo.
I have used grok a bit and it did what I needed it too, so I can't really compare. But 3.7 thinking is crazy strong for coding.
Back when it was 3.5 you could actually talk and learn things and it felt humane, but now it sounds like a McKinsey-corpo in a suit who sounds all fancy but is only right half the time.
I’ve switched back (rather regretfully) to chatgpt, and holy hell is its personality much better. For example just try asking it to explain differences between Neo Grotesque and Geometrical Sans Serif fonts/typefaces. One sounds like a friend trying to explain, the other sounds like a soulless bot. (And if you have 3.5 access, try asking it too.)
For general inference I use 4.5
I think OpenAI (and likely others) are on the right track in acknowledging that different model tunings are best for different uses, and they intend to add a discriminator that can direct prompts to the best-tuned model/change model tuning in real time.
"Look what I synthesise is correct and true because when I use the same top 10 priming responses which informed my decision I find these INDEPENDENT RESULTS which confirm what I modelled" type reasoning.
None of us have a problem with an LLM which returns 2+2 = 4 and shows you 10 sites which confirm. What worries me is when the LLM returns 2+2 = 5 and shows 10 sites which confirm. The set of negative worth content sites is semi infinite and the set of useful confirmed fact (expensive) sites is small so this feels like an outcome which is highly predictable (please don't beat me up for my arithmetic)
e.g. "Yes Climate science is bunk" <returns 10 top sites from paid shills in the oil sector which have been SEO'd up the top>"
Perplexity's "Explore" tab translates its news to your local language, and its curated news items are all pretty interesting, but the problem is that there are so few of them. I seem to get maybe a dozen stories in a day. I paid their subscription for a month just to listen to the news on my walk, but didn't renew because of this.
A foreign news site like BBC Mundo (Spanish) on the other hand barely has any stories outside of a few niches. Its tech section only has a few stories per week.
Hmm, maybe I want a sort of RSS reader that AI-translates stories for me. But I don't really want to maintain a feed myself either.
Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
ground.news includes sources from all sorts of countries, and also auto-translate headline and the intro, while you can still click to access the source article. Not affiliated, just happy user.
Example with sources in English, German and French: https://ground.news/article/accident-on-the-a13-in-the-yveli...
Although I'm not sure how useful it is for language learning, as you cannot (afaik) configure it to only display articles in Spanish or something similar, but if you filter by stories about France, you'll get a lot of French sources (obviously).
I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?
"I believe they're usually available from November through March, but I'm not completely certain about the exact timing for this year's crop. Would you like me to search for more current information about the 2025 tangelo season?"
It doesn't just search, it wants me to confirm. This has happened a lot for me.
As an example, I recently travelled abroad to a popular vacationing spot and asked ChatGPT for local recommendations on what to do. When it gave me answers directly, they were pretty solid. But when it “searched the web” instead, the answers were awful. Every single result it suggested had terrible ratings. It did this repeatedly. One of those times I asked it to pick something with better ratings and it sort of improved but not by much.
Of course this is another tool and maybe Claude uses better sources or a better algorithm, but in this case where there was a concrete number tied to the results, that while not perfect, aims to rate the quality of a result, it still did not filter out low quality answers. I’m not sure I trust these LLMs to do any better when there aren’t such ratings available. The available input data is just not very good, and now LLMs are being used to feed that low quality, SEO machine.
The results based on giving the source URL directly were better. Still a bit generic and high-level and vague, as LLMs tend to be, but better than the text-download version a couple days ago. And of course much easier to generate!
[0] https://github.com/Y2Z/monolith
https://github.com/modelcontextprotocol/servers/tree/main/sr...
The page itself describes a --ignore-robots-txt and customizing the user agent. Guess we can just all copy OpenAI and continue to make SourceHut's life miserable /s
This is a cool tool, thanks for sharing
OpenAI Deep Research
Grok Deep Search
Gemini Deep Research
Grok + Search
Gemini + Search
ChatGPT + Search
These are just my opinions, but I do use this feature all the time. Haven't used Claude enough to get a sense of where it would fit in.
It wasn't long ago that a uni senior who worked for a decade+ on Google Search told me that it was hopeless anyone tries to compete with Google not because it sees a tonne of signals that helps with IR but because of its in-house AI/ML.
It turns out that the org that built the ultimate AI/ML that runs rings around anything that came before it for NLP (and thus IR) was a sister team at Google Translate.
It isn't inconceivable that a kid might be able to build a Google-quality web search, scalability aside, on Common Crawl's data in a weekend. As someone who built re-ranking algorithms for a search engine built atop Yahoo! and Wikipedia (REST/SOAP) APIs back in the late 2000s as a side project (and experienced the launch and subsequent iterations of Echo/Alexa up close at Amazon), the current capabilities (of even the open weight multi-modal models) seem too good to be true.
Google itself though is saved by its enormous distribution advantages afforded by Chrome (3B to 5B users) and Android (3B+), aside from its search deals with Apple and other browser vendors.
1. I generally prefer that an LLM not search the web. The top N results are often either SEO spam, excessively long articles created solely to rank well, or long-established websites that gained authority years ago, when Google's crawler and ranking algorithms were less sophisticated.
2. Web search by LLMs is likely here to stay, so I'm curious whether there's an agent-friendly web format. For example, when an RSS reader visits a website, the site responds with an RSS feed. I think we need something similar for agents - an open standard that all websites would support. This could reduce processing overhead and potentially improve the accuracy of the information retrieved. Thoughts?
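One candidate for such a standard already exists: the llms.txt proposal, a markdown file at the site root that points agents at clean, token-efficient versions of the content. A sketch (all paths and titles here are placeholders):

```markdown
# Example Project

> One-line summary an agent can use without crawling the whole site.

## Docs

- [Quickstart](https://example.net/docs/quickstart.md): setup in five steps
- [API reference](https://example.net/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.net/changelog.md)
```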
https://www.anthropic.com/news/claude-3-family?_hsenc=p2ANqt...
I stopped paying for Perplexity a year ago, but a month ago I started using Perplexity's combined search+LLM APIs - reasonably priced and convenient.
https://news.ycombinator.com/item?id=43422413
Caveat: Mistral reasoning model on free tier is super slow(2-5 token/sec).
It just breaks my head. We’ve built LLMs that can process millions of pages at a time. But what we give them is a search engine that is optimized for humans.
It’s like giving a humanoid robot access to a keyboard and mouse to chat with another humanoid robot.
Disclaimer: I might be biased as we’re kind of building the fact search engine for LLMs.
[0] https://news.ycombinator.com/item?id=43422413
Their user-facing product at https://mistral.ai/ seems good to me - it uses Brave for search (same as Claude does) and has a "canvas" feature similar to Claude Artifacts. I've not spent enough time with that to evaluate if it could be a good daily-driver or not though.
My hunch is that Claude 3.7 Sonnet is still _massively_ better for code, based on general buzz online and a few benchmarks I've seen.
Anyone know if there is something better? I was thinking of trying Perplexity maybe.
So this limitation is a bit arbitrary anyway.
For questions about events and problems that arise after 2025, where would LLMs get information to solve those? And who would be asking those at a forum LLMs can access going forward?
Is the snake eating its own tail?
2. The best LLMs today answer questions better than 90% of the people who comment on forums. So if these LLMs have been able to train on all the crap posted on the internet so far, they should only get better as they are trained on high quality output from the latest (and future) LLMs.
https://github.com/xemantic/claudine/
It cost roughly 30 lines of code: https://github.com/xemantic/claudine/blob/main/src/commonMai...
""" i need a bashrc command that will map the alias "logg" to open macvim to the file at ~/log.txt, then execute the macro defined by "<leader>z" """
Note <leader>z ends with user in insert mode, Claude provides solution below but puts me in edit mode. (I still have to press "i")
alias logg='mvim ~/log.txt -c "normal \<leader>z"'
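The insert-mode problem is fixable in the alias itself; a sketch assuming your macro really does end in insert mode and your mappings use the default leader (`:execute` is needed so `\<leader>` is actually expanded inside `:normal`):

```shell
# .bashrc: -c commands run in order after the file opens;
# startinsert leaves you typing instead of back in normal mode
alias logg='mvim ~/log.txt -c "execute \"normal \<leader>z\"" -c "startinsert"'
```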
We finally went full circle? LLM is used as a search engine?
Eg say I want to build an agent to make decisions, shall I write some code to insert the data that informs the decision into the prompt, return structured data, and then write code to implement the decision?
Or should I empower the llm do those things with function calls?
The energy efficiency of most models has improved by an order of magnitude since the most widely cited CO2 usage papers were published.
(It remains frustratingly difficult to get accurate numbers though: at this point I think more transparency would help rather than hurt the big AI labs)
The internet consumed itself. Telling someone to, "Just Google it," is now terrible general advice.
i hope you're trolling mate, got an account since 18 years and never wondered what the [-] button does? :D
I wonder if Claude’s API will match Perplexity’s dynamic answers. Is there API rate limiting? If so, the older API pricing would be preferable. Can users switch between the two?
I open claude, I see a big "Continue with Google" button.
Oh, sure, it hallucinates a lot, and in dangerous ways, but even if I have to manually corroborate all the citations, I'm still saving time, especially insofar as it reveals whether or not I'm barking, broadly, up the wrong tree.
It's especially good for comparisons, because the results of two disparate search terms can be collated into the results.
Could this be done without LLMs, but only vector embeddings? Hm, maybe. Algolia is maybe the 80 for 20, but does Algolia have a web index?
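The embedding half of that idea is mechanically simple - rank pages by cosine similarity to the query vector. A self-contained sketch (real systems would use learned embeddings over a web index, not these toy vectors):

```python
import math

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 for parallel vectors, 0.0 for orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy index: each "page" is a pre-computed embedding vector.
index = {
    "casserole-recipe": [0.9, 0.1, 0.0],
    "seo-spam": [0.1, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]

# Retrieval is just nearest-neighbour search over the index.
best = max(index, key=lambda page: cosine_similarity(index[page], query))
```

The hard part isn't this math, it's having a crawl worth searching - which is the web-index question about Algolia above.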
https://glama.ai/mcp/servers?searchTerm=search
What's the benefit of bringing native integration?
MCP has the capability to add this functionality.
It would be nice to see MCP getting adoption in their web UI, as well easier UX, rather than more ad hoc features being added natively.