Introducing deep research (openai.com)
timabdulla 7 hours ago [-]
I just gave it a whirl. Pretty neat, but definitely watch out for hallucinations. For instance, I asked it to compile a report on myself (vain, I know.) In this 500-word report (ok, I'm not that important, I guess), it made at least three errors.

It stated that I had 47,000 reputation points on Stack Overflow -- quite a surprise to me, given my minimal activity on Stack Overflow over the years. I popped over to the link it had cited (my profile on Stack Overflow) and it seems it confused my number of people reached (47k) with my reputation, a sadly paltry 525.

Then it cited an answer I gave on Stack Overflow on the topic of monkey-patching in PHP, using this as evidence for my technical expertise. Turns out that about 15 years ago, I _asked_ a question on this topic, but the answer was submitted by someone else. Looks like I don't have much expertise, after all.

Finally, it found a gem of a quote from an interview I gave. Or wait, that was my brother! Confusingly, we founded a company together, and we were both mentioned in the same article, but he was the interviewee, not I.

I would say it's decent enough for a springboard, but you should definitely treat the output with caution and follow the links provided to make sure everything is accurate.

toasteros 3 hours ago [-]
"Pretty neat, but definitely watch out for hallucinations."

We'd never hire someone who just makes stuff up (or at least keep them employed for long). Why are we okay with calling "AI" tools like this anything other than curious research projects?

Can't we just send LLMs back to the drawing board until they have some semblance of reliability?

kenjackson 6 minutes ago [-]
You can use them for whatever you like, or not use them. Everyone has a different bar for when technology is useful. My dad doesn't think EVs are useful due to the long charge times, but there are others who find them fully acceptable.
cdblades 2 hours ago [-]
> Why are we okay with calling "AI" tools like this anything other than curious research projects?

Because they are a way to launder liability while reducing costs to produce a service.

Look at the AI-based startups Y Combinator has been funding. They match that description.

throwing_away 3 hours ago [-]
> We'd never hire someone who just makes stuff up (or at least keep them employed for long).

This is contrary to my experience.

oldstrangers 3 hours ago [-]
> Can't we just send LLMs back to the drawing board until they have some semblance of reliability?

Well at this point they've certainly proven a net gain for everyone regardless of the occasional nonsense they spew.

majormajor 31 minutes ago [-]
I think as these things get more integrated into customer service workflows - especially for things like insurance claims - there's gonna start being a lot more buyer's remorse on everyone's part.

We've tried for decades to turn people into reliable robots, now many companies are running to replace people robots with (maybe less reliable?) robot-robots. What could go wrong? What are the escalation paths going to be? Who's going to be watching them?

DanHulton 3 hours ago [-]
That is... debatable. You may be entirely inside the bubble, there.
taikahessu 3 hours ago [-]
Not sure if this was posted as humour, but I don't feel that way. In today's world, where I certainly would consider taking the blue pill, I'm having a blast with LLMs!

It has helped me learn things incredibly fast. I especially find them useful for filling gaps in my knowledge and exploring new topics in my own way and language, without needing to wait for an answer from a human (which could also be wrong).

Why does it feel to you that "we are entirely inside the bubble"?

toasteros 52 minutes ago [-]
Are you sure it's helped you learn?

In the early days of ChatGPT where it seemed like this fun new thing, I used it to "learn" C. I don't remember anything it told me, and none of the answers it gave me were anything that I couldn't find elsewhere in different forms - heck I could have flipped open Kernighan & Ritchie to the right page and got the answer.

I had a conversation with an AI/Bitcoin enthusiast recently. Maybe that already tells you everything you need to know about this person, but to hammer the point home, they made a claim similar to yours: "I learn much more and much better with AI". They also said they "fact check" things it "tells" them. Some moments later they told me "Bitcoin has its roots in Occupy Wall Street".

A simple web search tells you that Bitcoin was conceived a full 2 years before Occupy. How can they be related?

It's a simple error that can be fact checked simply. It's a pretty innocuous falsity in this particular case - but how many more falsehoods have they collected? How do those falsehoods influence them on a day-by-day basis?

How many falsehoods influence you?

A very well meaning activist posted a "comprehensive" list of all the programs that were to be halted by the grants and loans freezes last week. Some of the entries on the list weren't real, or not related to the freeze. They revealed they used ChatGPT to help compile the list and then went down one-by-one to verify each one.

With such meticulous attention to detail, incorrect information still filtered through.

Are you sure you are learning?

panarky 34 minutes ago [-]
When your bitcoiner friend told you something that's not true, that's a human who hallucinated, not an LLM.

Maybe we're already at AGI and just don't know it because we overestimate the capabilities of most humans.

squigz 2 hours ago [-]
> without needing to wait an answer from a human (that could also be wrong).

The difference is you have some reassurances that the human is not wrong - their expertise and experience.

The problem with LLMs, as demonstrated by the top-level comment here, is that they constantly make stuff up. While you may think you're learning things quickly, how do you know you're learning them "correctly", for lack of a better word?

Until an LLM can say "I don't know", I really don't think people should be relying on them as a first-class method of learning.

dingnuts 2 hours ago [-]
>It has helped me learn stuff incredibly faster. Especially I find them useful for filling the gaps of knowledge and exploring new topics in my own way and language

and then you verify every single fact it tells you via traditional methods by confirming them in human-written documents, right?

Otherwise, how do you use the LLM for learning? If you don't know the answer to what you're asking, you can't tell if it's lying. It also can't tell if it's lying, so you can't ask it.

If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?

Occasionally an LLM helps me surface unknown keywords that make traditional searches easier, but they can't teach anything because they don't know anything. They can imagine things you might be able to learn from a real authority, but that's it. That can be useful! But it's not useful for learning alone.

And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?

taikahessu 1 hours ago [-]
I guess it all depends on the topic and levels of trust. How can I be certain that I have a brain? I just have to take something for granted, don't I? Of course I will "verify" the "important stuff", but what is important? How can I tell? Most of the time the only thing I need is a pointer in the right direction. Wrong advice? I'll know when I get there, I suppose.

I can remember numerous things I was told while growing up, that aren't actually true. Either by plain lies and rumours or because of the long list of our cognitive biases.

> If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?

What is the old-fashioned way? I mean, people learn "truths" these days from TikTok and YouTube. Some of the stuff is actually very good, you just have to distill it based on the stuff I was taught at school. Nobody has yet declared LLMs a substitute for schools, maybe they soon will, but neither "guarantees" us anything. We could as well be taught political agendas.

I could order a book about construction, but I wouldn't build a house without asking a "verified" expert. Some people build anyway and we get some catastrophic results.

Levels of trust: it's all fun and games until it gets serious, like what to eat or doing something that involves life-threatening physics. I take it as playing with a toy. Surely something great has come out of only a few pieces of Lego?

> And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?

I guess you shouldn't do it that way. But really, so far the topics I've rigorously explored with ChatGPT for example, have been better than your average journalism. What is real?

dingnuts 34 minutes ago [-]
> What is the old fashioned way?

Looking in a resource written by someone with sufficient ethos that they can be considered trustworthy.

> What is real?

I'm not arguing ontology about systems that can't do arithmetic. You're not arguing in good faith at all.

kardos 2 hours ago [-]
The Gell-Mann amnesia effect applies to LLMs as well!

https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect

orangepanda 3 hours ago [-]
You overestimate the importance of being correct
aiono 3 hours ago [-]
No, from the research around it the findings are mixed. There is no consensus that it's a net gain.
hawaiianbrah 25 minutes ago [-]
A net gain for everyone? Tell that to the artists it's screwing over!
kees99 2 hours ago [-]
"Occasional nonsense" doesn't sound great, but would be tolerable.

Problem is - LLMs pull answers from their behind, just like a lazy student in an exam. "Hallucinations" is the word people use to describe this.

Those are extremely hard to spot - unless you happen to know the right answer already, at which point - why ask? And those are everywhere.

One example - recently there was quite a discussion about LLMs being able to understand (and answer) base16 (aka "hex") encoding on the fly, so I went on to try base64, gzipped base64, zstd-compressed base64, etc...

To my surprise, the LLM got most of those encodings/compressions right, decoded/decompressed the question, and answered it flawlessly.

But with a few encodings, the LLM detected base64 correctly, identified the compression algorithm correctly, and then... instead of decompressing, made up a completely different payload, and proceeded to answer that. Without any hint that anything sinister was going on.
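For anyone who wants to poke at this themselves, here's a minimal sketch of how such payloads can be produced (Python standard library only; the exact prompts and model used above aren't specified, so this is just an illustration):

    import base64
    import gzip

    question = "What is the capital of France?"   # any test question

    # Plain base64: text -> bytes -> base64 string
    b64 = base64.b64encode(question.encode()).decode()

    # Gzipped base64: compress first, then base64-encode the compressed bytes
    gz_b64 = base64.b64encode(gzip.compress(question.encode())).decode()

    print("base64:      ", b64)
    print("gzip+base64: ", gz_b64)
    # A zstd variant would look the same, with the third-party `zstandard`
    # package in place of gzip. Paste one of the strings into a chat, ask the
    # model to decode it and answer, then compare its decoded text to `question`.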

We really need LLMs to reliably calculate and express confidence. Otherwise they will remain mere toys.

oldstrangers 2 hours ago [-]
Yeah, what you said represents a 'net gain' over not having any of that at all.
ramon156 1 hours ago [-]
3k a month vs ~500 dollars a month. That's all you need to know. Not saying it's as good, but it's all some managers care about.
deeviant 1 hours ago [-]
Yeah, I used to hire people, but then one of them made a mistake, now I'm done with them forever, they are useless. It is not I, who is directing the workers, who cannot create a process that is resistant to errors, it's definitely the fact that all people are worthless until they make no errors as there truly is no other way of doing things other than telling your intern to do a task then having them send it directly to the production line.
brushfoot 3 hours ago [-]
I disagree that this is a useful springboard. And I say that as an AI optimist.

A report full of factual errors that a careful intern wouldn't make is worse than useless (yes, yes, I've mentored interns).

If the hard part is the language, then do the research yourself, write an outline, and have the LLM turn it into complete sentences. That would at least be faster.

Here's the thing, though: If you do that, you're effectively proving that prose style is the low-value part of the work, and may be unnecessary. Which, as much as it pains me to say as a former English major, is largely true.

giarc 4 hours ago [-]
What's faster? Writing a 500 word report "from scratch" by researching the topic yourself, vs. having AI write it then having to fact check every answer and correct each piece manually?

This is why I don't use AI for anything that requires a "correct" answer. I use it to re-write paragraphs or sentences to improve readability etc, but I stop short of trusting any piece of info that comes out from AI.

vessenes 3 hours ago [-]
Interesting!

I wonder if it’s carried over too much of that ‘helpful’ DNA from 4o’s RLHF. In that case, maybe asking for 500 words was the difficult part — it just didn’t have enough to say based on one SO post and one article, but the overall directives assume there is, and so the model is put into a place where it must publish..

Put another way, it seems this model faithfully replicates the incentives most academics have — publish a positive result, or get dinged. :)

Did it pick up your HN comments? Kadoa claims that’s more than enough to roast me, … and it’s not wrong. It seems like there’s enough detail about you (or me) there to do a better job summarizing.

timabdulla 2 hours ago [-]
I didn't actually give it a goal of writing any particular length, but I do think that perhaps given my not-so-large online footprint, it may have felt "pressured" to generate content that simply isn't there.

It didn't pick up my HN comments, probably because my first and last name are not in my profile, though obviously that is my handle in a smooshed-together form.

mdp2021 4 hours ago [-]
> Then it cited an answer I gave on Stack Overflow [...] using this as evidence for my technical expertise. Turns out that about 15 years ago, I _asked_ a question on this topic, but the answer was submitted by someone else

Artificial dementia...

Some parties are releasing products well before they can ship well-working ones (I am not sure their legal cover will be so solid), but database-aided outputs should and could become a strong check on that phenomenon of remembering badly. Very linearly, like humans: get an idea, then compare it to the data - it is due diligence and part of the verification process in reasoning. It is as if moves outside linear, pure product-progress reasoning are swaying the R&D towards directions outside the primary concerns. It's a form of procrastination.

prof-dr-ir 3 hours ago [-]
> Pretty neat, but definitely watch out for hallucinations.

That would be exactly my verdict of any product based on LLMs in the past few years.

RobinL 4 hours ago [-]
Interesting

You might find it amusing to compare it to: https://hn-wrapped.kadoa.com/timabdulla

(Ref: https://news.ycombinator.com/item?id=42857604)

wholinator2 3 hours ago [-]
This is... very uncomfortable. An (expanded) AI summary of my HN and reddit usage would appear to be a pretty complete representation of my "online" identity/character. I remember when people would browse your entire comment history just to find something to discredit you on reddit, and that behavior was _heavily_ discouraged. Now, we can just run an AI model to follow you and sentence you to a hell of being permanently discredited online. Give it a bunch of accounts to rotate through, send some voting power behind it (reddit or hn), and just pick apart every value you hold. You could obliterate someone's will to discuss anything online. You could effectively silence all but the most stubborn, and those people you would probably drive insane.

It's a very interesting usecase though, filter through billions of comments and give everyone a score on which real life person they probably are. I wonder if say, Ted Cruz hides behind a username somewhere.

throwaway519 2 hours ago [-]
Throwaway/anonymous accounts: not just for when it's discussion of the content, not the personality behind it, that matters.

dlivingston 2 hours ago [-]
I put my profile in [0] and it's mostly silly; a few comments extracted and turned into jokes. No deep insights into me, and my "Top 3 Technologies" are hilariously wrong (I've never written a single line of TypeScript!)

[0]: https://hn-wrapped.kadoa.com/dlivingston

TheGradfather 2 minutes ago [-]
The OpenAI Deep Research graph showing tool calls vs pass rate reveals something fascinating about how these models handle increasing amounts of information. The relationship follows a logistic curve that plateaus around 16% pass rate, even as we allow more tool calls.

This plateau behavior reflects something deeper about our current approach to AI. We've built transformer architectures partly inspired by simplified observations of human cognition - particularly how our brains use attention mechanisms to filter and process information. And like human attention, these models have inherent constraints: each attention layer normalizes scores to sum to 1, creating a fixed "attention budget" that must be distributed across all inputs.

A recent paper (https://arxiv.org/abs/2501.19399) explores this limitation, showing how standard attention becomes increasingly diffuse with longer contexts. Their proposed "Scalable-Softmax" helps maintain focused attention at longer ranges, but still shows diminishing returns - pushing the ceiling higher rather than eliminating it.
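To make the fixed "attention budget" concrete, here is a toy numpy sketch: one relevant token keeps a constant logit advantage over n-1 distractors, yet its softmax weight still collapses as n grows. The length-scaled variant below is only a rough stand-in for the paper's Scalable-Softmax (which uses a learned scale s), but it shows the qualitative effect:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    for n in [16, 256, 4096, 65536]:
        logits = np.zeros(n)
        logits[0] = 5.0                            # one "relevant" token, fixed logit advantage
        standard = softmax(logits)[0]              # standard attention weight on that token
        scaled = softmax(np.log(n) * logits)[0]    # logits scaled by log(n), SSMax-style with s=1
        print(f"n={n:6d}  standard={standard:.3f}  scaled={scaled:.3f}")

    # standard attention dilutes toward 0 as n grows; the scaled version stays near 1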

But here's the deeper question: As we push toward AGI and potentially superintelligent systems, should we remain bound by architectures modeled on our current understanding of human cognition? The human brain's limited attention mechanism evolved under specific constraints and for specific purposes. While it's remarkably effective for human-level intelligence, it might be fundamentally limiting for artificial systems that could theoretically process information in radically different ways.

Looking at the Deep Research results through this lens, the plateau might not just be a technical limitation to overcome, but a sign that we need to fundamentally rethink how artificial systems could process and integrate information. Instead of trying to stretch the capabilities of attention-based architectures, perhaps we need to explore entirely different paradigms of information processing that aren't constrained by biological analogues.

This isn't to dismiss the remarkable achievements of transformer architectures, but rather to suggest that the path to AGI might require breaking free from some of our biologically-inspired assumptions. What would an architecture that processes information in ways fundamentally different from human cognition look like? How might it integrate and reason about information without the constraints of normalized attention?

Would love to hear thoughts from others working on these problems, particularly around novel approaches that move beyond our current biological inspirations.

gorgoiler 9 hours ago [-]
For “deep research” I’m also reading “getting the answers right”.

Most people I talk to are at the point now where getting completely incorrect answers 10% of the time — either obviously wrong from common sense, or because the answers are self contradictory — undermines a lot of trust in any kind of interaction. Other than double checking something you already know, language models aren’t large enough to actually know everything. They can only sound like they do.

What I’m looking for is therefore not just the correct answer, but the correct answer in an amount of time that’s faster than it would take me to research the answer myself, and also faster than it takes me to verify the answer given by the machine.

It’s one thing to ask a pupil to answer an exam paper to which you know the answers. It’s a whole next level to have it answer questions to which you don’t know the answers, and on whose answers you are relying to be correct.

herculity275 6 hours ago [-]
My worry is that all these recent capabilities attempt to minimize hallucinations by relying on extensive web search, however the web itself is being actively degraded by unfiltered LLM output. After a certain point, running your research agent against a ~5-year-old snapshot of the web will be strictly more accurate (for non-current-affairs queries) than querying the live web.
igleria 8 hours ago [-]
It's really worrying to me, even as a self-proclaimed "LLM <-> AI" skeptic, to see what kind of stuff people pretend to get out of an LLM. Typewriter monkeys as a service, almost.

Still useful for the odd task here and there, but not as useful as all the money being invested in this (except for the companies getting that money, that is).

edit: actual example of something I'd expect a real AI to be able to solve by itself, but currently LLMs fail miserably https://x.com/RadishHarmers/status/1885884032220643587

mdp2021 8 hours ago [-]
> Typewriter monkeys as a service almost. // Still useful for the odd task here and there

1) Paramount task: searching in naturally structured language, as opposed to keywords. Odd tasks: oh yes, several tasks of fuzzy sophisticated text processing previously unsolved.

2) They translate NN encodings into natural language! The issue remains about the quality of /what/ they translate into natural language, but one important chunk of the problem* is in a way solved...

Now, I've been probably one of the most vocal here, shouting "That's the opposite of intelligence!" - even in the past 24 hours -, but be objective: there is also progress ...

(* Around five years ago we were still stuck with Hinton's problem of interpreting pronouns as pointers in "the item won't fit in the case: it's too big" vs "the item won't fit in the case: it's too small" - look at it now...)

igleria 8 hours ago [-]
Of course I see progress, but I feel like bridging "thinks" versus "regurgitates" is still far off, if it is even on the horizon with the current approach. IMHO.

edit: furthermore, LLMs probably tackle very little "real estate" in the "make machines THINK" land. But they are a crucial piece of the overall puzzle.

shakes_mcjunkie 5 hours ago [-]
> What I’m looking for is therefore not just the correct answer, but the correct answer in an amount of time that’s faster than it would take me to research the answer myself, and also faster than it takes me to verify the answer given by the machine.

This is why I haven't found AI tools very useful. I find myself spending more time verifying and fixing its answers than I would have spent just doing or learning the darn thing myself.

7thpower 5 hours ago [-]
It is added cognitive load, but there is a lot of value in async tasks if you can trust the output or if the opportunity cost of validating is low.

The challenge with something like this for research, in its current state, is you’ll need to go double check it because you don’t trust it and it will end up effectively being a list of links.

It’s progress though and evidently good enough to find a sweet NSX in Japan, which is all some really need.

chombier 7 hours ago [-]
> and also faster than it takes me to verify the answer given by the machine.

I always thought there was a kind of NP flavor to the problems for which LLM-like AI is helpful in practice, in the sense that solving the problem may be hard but checking the solution must be fast.

Unless the domain can accommodate errors/hallucination, checking the solution (by a human) should be exponentially faster than finding it (by some AI), otherwise there's little practical gain.
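A toy illustration of that asymmetry in Python, using subset-sum purely as a stand-in (the point above is about the general shape of the problem, not this specific one): producing an answer can take exponential work, while checking a proposed answer is cheap.

    from collections import Counter
    from itertools import combinations

    def find_subset(nums, target):
        """Search side: worst case tries every subset (exponential in len(nums))."""
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return list(combo)
        return None

    def verify_subset(nums, target, candidate):
        """Check side: roughly linear -- is candidate a sub-multiset of nums with the right sum?"""
        have, want = Counter(nums), Counter(candidate)
        return all(have[x] >= c for x, c in want.items()) and sum(candidate) == target

    nums = [3, 34, 4, 12, 5, 2]
    answer = find_subset(nums, 9)                   # expensive to produce
    print(answer, verify_subset(nums, 9, answer))   # cheap to check: [4, 5] True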

jeswin 7 hours ago [-]
> Most people I talk to are at the point now where getting completely incorrect answers 10% of the time

A year back that number was 30%, and a couple of years back it was 60%. There will be a point where it'll be good enough. There are also better and better ways to verify answers these days.

It'll never be a solution for everything, but that's similar to many engineering problems we have: for example, ORMs aren't great for all types of queries, but they're sufficient for a good part of them.

dimitri-vs 6 hours ago [-]
It contributes little to discuss a hypothetical future. Maybe we'll have fusion energy, delivery drones, everyone using VR, etc. Maybe we will go into a deep recession due to trade wars, or maybe not.

The meaningful discussion is about how they perform NOW and the edge cases that have persisted since GPT-2 which no one has yet found a good solution for.

infecto 5 hours ago [-]
We already have delivery drones though.

I disagree though, it is useful as this problem has been whittled down, and I think there is an expectation that there will be continued effort. It's of course worth discussing, but I find that for my workflows I rarely encounter issues with hallucinations; they certainly exist, but it's gotten to a point where I don't have a major issue with it.

skywhopper 4 hours ago [-]
At best, a proof of concept of experimental delivery drones exist, but only for small, lightweight items, and only in a few places, only in the right weather, and only if you place a target on your driveway and are there to receive the item in person, and all at the cost of a very high noise level. That's not exactly a real service.
HarHarVeryFunny 53 minutes ago [-]
> language models aren’t large enough to actually know everything

I'd say they don't know anything.

An LLM base model, before it is post-trained with RL, just has access to a sliced and diced corpus of human output. Take the contents of 4chan and Wikipedia, put them in a blender, mix and chop them into "training sample" sized bites, then learn the statistical regularities of this blended mess. It is what it is - not exactly what I'd call a knowledge base, even though there are bits of knowledge in there.
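As a toy picture of "learn the statistical regularities, then sample from them" (a bigram table standing in for a transformer here; nothing like a real base model, just the shape of the idea):

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # "Training": record which word follows which (the statistical regularities)
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # "Inference": sample a continuation word by word
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(follows[word] or corpus)
        output.append(word)
    print(" ".join(output))   # locally fluent, with no model of truth behind it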

When you add RL-based post-training for reasoning, all you are doing is trying to get the model to be more selective when you are sampling from it - encouraging it to suppress some statistics and emphasize others, such that when you sample from it the output looks more like valid reasoning steps and/or conclusions, per the verified reasoning examples you train it on.

I'm well aware of how useful RL-tuned models (whatever the goal) can be, but at the end of the day all they are doing is taking a statistical babbler and saying "try to output patterns more like this". It's not exactly a recipe for factuality or rationality - we've just gone from hallucination-prone base models, to gaslighting-prone RL-tuned "reasoning" models that output stuff that sounds like coherent reasoning.

What's missing from all of this - what makes it different from how animals learn - is that the model has no experience of its own, no autonomy or motivation to explore, learn and verify, and hence no episodic memories of how it learnt something (tried it and ran controlled experiments, or just overheard it on the bus), and what that implies about its trustworthiness.

It's amazing that LLMs work as well as they do, a reflection of how much of what we do can be accomplished by reactive pattern matching, but if you want to go beyond that to something that can learn and discern the truth for itself, this seems the wrong paradigm altogether.

sandos 9 hours ago [-]
I mean this all falls down due to the need for verification:

"Limitations Deep research unlocks significant new capabilities, but it’s still early and has limitations. It can sometimes hallucinate facts in responses or make incorrect inferences"

How do I know which parts are false? It will take as long to verify as to research!

wnurmi 6 hours ago [-]
[dead]
squigz 2 hours ago [-]
> They can only sound like they do.

More importantly, I think, is that they are incapable of not doing so. Have we figured out how to make an LLM realize and answer that it doesn't know an answer?

michaelgiba 15 hours ago [-]
Gemini has had this for a month or two, also named "Deep Research" https://blog.google/products/gemini/google-gemini-deep-resea...

Meta question: what's with all of the naming overlap in the AI world? Triton (Nvidia, OpenAI) and Gro{k,q} (X.ai, groq, OpenAI) all come to mind

shihab 13 hours ago [-]
From the creator of Triton (OpenAI)-

"PS: The name Triton was coined in mid-2019 when I released my PhD paper on the subject. I chose not to rename the project when the "TensorRT Inference Server" was rebranded as "Triton Inference Server" a year later since it's the only thing that ties my helpful PhD advisors to the project."

svara 3 hours ago [-]
> Gemini has had this for a month or two,

Would have loved to try it when they released it, but I'm apparently in the wrong country. I think it's not available outside the US (?). OpenAI and DeepSeek have no such issues. It's a bummer really, I'm happy paying for this but they don't want me to.

samplatt 14 hours ago [-]
>Meta question

I think you have to prefix the query with "@Meta AI", hope this helps

chabes 4 hours ago [-]
> what's with all of the naming overlap in the AI world? Triton (Nvidia, OpenAI) and Gro{k,q} (X.ai, groq, OpenAI) all come to mind

They seem to be ok with outsourcing any and all creativity to a language model, so it’s not surprising that they can’t come up with unique names themselves.

kavalerov 4 hours ago [-]
I am afraid Gemini's version is not really very "deep" - it surfaces a lot of information, but on a quite superficial level. OAI's version seems to take that one step forward to proper depth.

We found in our experience that it is pretty hard to force an LLM to do something in proper depth, and OAI's deep research definitely feels like one of the first examples from the big labs of how this can be done. What we typically see is that it is not even the "agent" part that is hard to do, but how to force the model to not "forget" to go deep...

toomim 13 hours ago [-]
Jon Stewart had something to say about this: https://youtu.be/Byg8VZdKK88?si=pX1WbtRwZCBGpwHS&t=141
justaj 9 hours ago [-]
Without the tracking bits: https://youtu.be/Byg8VZdKK88#t=141
james_promoted 14 hours ago [-]
I've always thought the Triton situation was intentional since the name isn't generic and because the companies are stepping on each others toes here (Nvidia's Triton simplifying owning your inference; OpenAI's Triton eroding the need for familiarity with CUDA). I couldn't figure out who publicly used the name first though.
stonogo 14 hours ago [-]
It's a sort of unofficial trade association where they coalesce on specific redefinitions of terms to meet their sales and PR efforts. First they came for "intelligence," then "open source," then "reason," and it will continue. Any word which the PR wants but they can't achieve gets redefined -- "grok" is a perfect example, since in the original sci-fi book it meant "total understanding." The mythological Triton ruled the deeps, so the "deep learning" sales copy immediately co-opted it.
albert_e 12 hours ago [-]
Also "accuracy" as a measure of model's performance used to mean something objective in the traditional ML world.

Now with LLMs it is what human evaluators feel about the LLM output?

yorwba 11 hours ago [-]
Traditional ML is no stranger to measuring accuracy in terms of agreement with human evaluators.
albert_e 10 hours ago [-]
A customer churn model or revenue forecast did have hard objective data (ground truth) to compare against - didn't it?
ptrrrrrrppr 8 hours ago [-]
You seem to think all classical ML models were supervised, which isn't true. And we have metrics for unsupervised approaches as well.
bilater 11 minutes ago [-]
Not quite the agent they are building, but I have an open source alternative that lets you use a variety of models to generate reports based on links of your choice: https://github.com/btahir/open-deep-research
hi_hi 17 hours ago [-]
This is terrifying. Even though they acknowledge the issues with hallucinations/errors, that is going to be completely overlooked by everyone using this, and then injecting the outputs into their own powerpoints.

Management Consulting was bad enough before the ability to mass produce these graphs and stats on a whim. At least there was some understanding behind the scenes of where the numbers came from, and sources would/could be provided.

The more powerful these tools become, the more prevalent this effect of seepage will become.

autoconfig 17 hours ago [-]
Either you care about being correct or you don't. If you don't care then it doesn't matter whether you made it up or the AI did. If you care then you'll fact check before publishing. I don't see why this changes.
azinman2 16 hours ago [-]
When things are easy, you’re going to take the easy path even if it means quality goes down. It’s about trade offs. If you had to do it yourself, perhaps quality would have been higher because you had no other choice.

Lots of kids don’t want to do homework. That said, previously many would because there wasn’t another choice. But now they can just ask ChatGPT for the answers and write them down verbatim, with zero learning taking place.

Caring isn’t a binary thing, nor does it work in isolation.

simonw 14 hours ago [-]
"Lots of kids don’t want to do homework"

Sure, but if you're a professional you have to care about your reputation. Presenting hallucinated cases from ChatGPT didn't go very well for that lawyer: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-...

PeterStuer 11 hours ago [-]
That's a lawyer in an adversarial situation. Business consultants tell their clients what they want to believe, the facts be damned.
rsanek 11 hours ago [-]
It sounds like AI doesn't really change that situation.
asimpletune 10 hours ago [-]
But the point is that it does, if you count making it worse as changing the situation.
jstummbillig 9 hours ago [-]
I don't think it follows that taking an easier path would mean quality goes down.
financypants 13 hours ago [-]
what about tests?
hi_hi 16 hours ago [-]
Because maybe you want to, but you have a boss breathing down your neck and KPIs to meet and you haven't slept properly in days and just need a win, so you get the AI to put together some graphs and stats that will look impressive in that client showcase that's due in a few hours.

Things aren't quite so black and white in reality.

dauhak 16 hours ago [-]
I mean those same conditions already just lead the human to cutting corners and making stuff up themselves. You're describing the problem where bad incentives/conditions lead to sloppy work; that happens with or without AI.

Catching errors/validating work is obviously a different process when they're coming from an AI vs a human, but I don't see how it's fundamentally that different here. If the outputs are heavily cited, then that might go some way toward being able to more easily catch and correct slip-ups.

tikhonj 12 hours ago [-]
Making it easier and cheaper to cut corners and make stuff up will result in more cut corners and more made up stuff. That's not good.

Same problem I have with code models, honestly. We already have way too much boilerplate and bad code; machines to generate more boilerplate and bad code aren't going to help.

mquander 12 hours ago [-]
The technology also makes it easier and cheaper to make good things, so the direction of the outcome isn't guaranteed.
hi_hi 15 hours ago [-]
Yep, I agree with this to some extent, but I think the difference in the future is all that stress will be bypassed and people will reach for the AI from the start.

Previously there was a lot of stress/pressure which might or might not have led to sloppy work (some consultants are of a high quality). With this, there will be no stress, which will (always?) lead to sloppy work. Perhaps there's an argument for the high-quality consultants using the tools to produce accurate and high-quality work. There will obviously be a sliding scale here. Time will tell.

I'd wager the end result will be sloppy work, at scale :-)

mlsu 8 minutes ago [-]
If 20% of people don't care about being correct, the rest of everyone can deal with that. If 80% of people don't care about being correct, the rest of us will not be able to deal with that.

Same thing as misinformation. A sufficient quantitative difference becomes a qualitative difference at some point.

spaceywilly 17 hours ago [-]
I think a lot about how differentiating facts and quality content is like differentiating signal from noise in electronics. The signal to noise ratio on many online platforms was already quite low. Tools like this will absolutely add more noise, and arguably the nature of the tools themselves make it harder to separate the noise.

I think this is a real problem for these AI tools. If you can’t separate the signal from the noise, it doesn’t provide any real value, like an out of range FM radio station.

WOTERMEON 16 hours ago [-]
Not only that: by publishing noise, you’re lowering the signal/noise ratio.
layer8 16 hours ago [-]
People are much less scrupulous using LLM output than making up stuff themselves, because then they can blame the LLM.
n4r9 10 hours ago [-]
It's a bit like saying "my kids are going to hit themselves anyway, so it doesn't matter if I give them foam rods or metal rods".
ctoth 2 hours ago [-]
Maybe this would make sense if you saw the whole world as "kids" that you had to protect. As an adult who lives in an adult world, I would like adults to have access to metal tools and not just foam ones.
n4r9 2 hours ago [-]
I guess I can replace "kid" with "toddler" and add "unsupervised" at the end.
RainyDayTmrw 13 hours ago [-]
It's possible that you care, but the person next to you doesn't, and external pressures force you to keep up with the person who's willing to shovel AI slop. Most of us don't have a complete luxury of the moral high ground at our jobs.
navigate8310 9 hours ago [-]
It's the higher-ups' fault then for not caring about quality. Either you assimilate into that low-quality, AI-slop-using lower management, or you change jobs.
doomroot 13 hours ago [-]
It looks like the moral high ground just became more in demand.
sbarre 16 hours ago [-]
How hard it is to produce credible-looking bullshit makes a really big difference in these scenarios.

Consultants aren't the ones doing the fact-checking; that falls to the client, who ironically tends to assume the consultants did it.

ADeerAppeared 15 hours ago [-]
> If you care then you'll fact check before publishing.

Doing a proper fact check is as much work as doing the entire research by hand, and therefore, this system is useless to anyone who cares about the result being correct.

> I don't see why this changes.

And because of the above this system should not exist.

lngnmn2 10 hours ago [-]
[dead]
anthonyshort 11 hours ago [-]
Then the hallucinated research is published in an article which is then cited by other AI research, continuing to push the false information until it’s hard to know where the lie started.
scarab92 17 hours ago [-]
Think of it like a vaccine.

The majority of human-written consultant reports are already complete rubbish. Low accuracy, low signal-to-noise, generic platitudes in a quantity-over-quality format.

LLMs are inoculating people to this kind of low-information-value content.

People who produce LLM-quality output are now being accused of using LLMs, and can no longer pretend to be adding value.

The result of this is going to be higher quality expectations from consultants and a shaking out of people who produce word vomit rather than accurate, insightful, contextually relevant information.

randcraw 8 minutes ago [-]
I don't think so. Instead of SEO, I think we'll soon see 'LLMO' dominating such uses, where LLM summaries are reshaped by vendors and etailers to misrepresent facts in ways that favor them over others.

I suspect this can be done simply by poisoning a query with supplemental suggestions of sources to use in a RAG, many of which don't even have to be publicly available but are made accessible to the LLM (perhaps by submitting hidden URLs that mislead the summary along with the query).

But even after such a practice is uncovered and roundly maligned, that won't stop the infinite supply of disreputables on the web from continuing to inject bias in this way into the background that drives deep research, so long as the LLM maker doesn't oppose this practice actively and publicly -- which none of them have been willing to do with any other LLM operational details so far.

DrSiemer 10 hours ago [-]
Exactly what will happen with art. The tolerance for low quality output will decrease.
layer8 16 hours ago [-]
This has been downvoted, but I think there’s actually a chance it might become true (until AGI comes along at least).
_bin_ 14 hours ago [-]
let's be real for a sec, i've done consulting and have a lot of friends who still do. three times in four, your mckinsey report isn't super well-founded in reality and involves a lot of guesstimation.
n144q 12 hours ago [-]
I think that ship has sailed many years ago since Facebook allowed false information to spread freely on their site (if not earlier).
cyanydeez 17 hours ago [-]
[flagged]
tmnvdb 17 hours ago [-]
> At least there was some understanding behind the scenes of where the numbers came from, and sources would/could be provided.

Oh Sweet summer child.

opdahl 3 hours ago [-]
Hi tmnvdb, since you seem to love these super smart LLMs I thought it would be fun to have OpenAI's o3-mini-high analyze your recent comments against the Hacker News Comment Guidelines. Here is the output it gave me, hope it helps you:

------

Hey, I've noticed a few things in your style that are both strengths and opportunities for improvement:

Strengths:

- You clearly have deep knowledge and back up your points with solid data and examples.

- Your confidence and detailed analysis make your arguments compelling.

Opportunities:

- At times, your tone can feel a bit combative, which might shut down conversation.

- Focusing on critiquing ideas rather than questioning someone's honesty can help keep the discussion constructive.

- A clearer structure in longer posts could make your points even more accessible.

Overall, your passion and expertise shine through—tweaking the tone a bit might help foster even more productive debates.

------

Just reply here if you want the full 500+ words analysis that goes into more detail.

DigitalSea 17 hours ago [-]
Not sure if people picked up on it, but this is being powered by the unreleased o3 model. Which might explain why it leaps ahead in benchmarks considerably, and aligns with the claims that o3 is too expensive to release publicly. Seems to be quite an impressive model, and the leader among the offerings from Google, DeepSeek and Perplexity.
lordofgibbons 15 hours ago [-]
> Which might explain why it leaps ahead in benchmarks considerably and aligns with the claims o3 is too expensive to release publicly

It's the only tool/system (I won't call it an LLM) in their released benchmarks that has access to tools and the web. So, I'd wager the performance gains are strictly due to that.

If an LLM (o3) is too expensive to be released to the public, why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model. Most likely o3-mini or o1-mini, combined with 4o-mini for some tasks.

og_kalu 5 hours ago [-]
>why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model.

The same reason a lot of people switched to GPT-4 when it came out even though it was much more expensive than 3 - doesn't matter how cheap it is if it isn't good enough/much worse.

xbmcuser 17 hours ago [-]
It was expensive because they wanted to charge more for it, but DeepSeek has forced their hand.
willy_k 16 hours ago [-]
They’ve only released o3-mini, which is a powerful model but not the full o3 that is being claimed as too expensive to release. That being said, DeepSeek for sure forced their hand to release o3-mini to the public.
shawabawa3 12 hours ago [-]
o3-mini was previewed in December. DeepSeek maybe made them release it a few weeks early, but it was already on its way.
sdesol 10 hours ago [-]
I guess the question is, did DeepSeek force them to rethink pricing? It's crazy how much cheaper it (v3 and R1) is, but considering they (Deepseek) can't keep up with demand, the price is kind of moot right now. I really do hope they get the hardware to support the API again. The v3 and R1 models that are hosted by others are still cheap compared to the incumbents, but nothing can compete with DeepSeek on price and performance.
kandesbunzler 10 hours ago [-]
no they didn't, this was literally all announced in December with a release date for January
Sparkyte 17 hours ago [-]
Rightfully so, some models are getting super efficient.
bitshiftfaced 14 hours ago [-]
> but this is being powered by the unreleased o3 model

What makes you believe that?

_bin_ 14 hours ago [-]
they explicitly stated it in the launch
bitshiftfaced 13 hours ago [-]
The linked article says,

> Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.

If that's what you're referring to, then it doesn't seem that "explicit" to me. For example, how do we know that it doesn't use less thinking than o3-mini? Google's version of deep research uses their "not cutting edge version" 1.5 model, after all. Are you referring to something else?

golol 9 hours ago [-]
o3-mini is not really "a version of the o3 model", it is a different model (fewer parameters). So their language strongly suggests, imo, that Deep Research is powered by a model with the same number of parameters as o3.
ai-christianson 17 hours ago [-]
Has anyone here tried it out yet?
nycdatasci 14 hours ago [-]
Pro user. No access like everyone else.

OpenAI is very much in an existential crisis and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release, right?

maroonblazer 16 hours ago [-]
Per the below, seems it's not available to many yet.

https://news.ycombinator.com/item?id=42913575

mistercheph 16 hours ago [-]
I'm sure o3 will be a generation ahead of whatever deepseek, google and meta are doing today when it launches in 10 months, super impressive stuff.
petesergeant 15 hours ago [-]
I’m not sure if you’re implying this subtly in your comment or not, as it’s early here, but it does of course need to be a generation ahead of where their competitors will be after 10 months of moving forward, too. Nobody is standing still.
bruce511 13 hours ago [-]
I read a fair amount of sarcasm in the parent comment ;)
bbor 17 hours ago [-]
Interesting, thanks for highlighting! Did not pick up on that. Re:"leading", tho:

Effectiveness in this task environment is well beyond the specific model involved, no? Plus they'd be fools (IMHO) to only use one size of model for each step in a research task -- sure, o3 might be an advantage when synthesizing a final answer or choosing between conflicting sources, but there are many, many steps required to get to that point.

xendipity 13 hours ago [-]
I don't believe we have any indication that the big offerings (claude.ai, Gemini, operator, tasks, canvas, chatgpt) use multiple models in one call (other than for different modalities like having Gemini create an image). It seems to actually be very difficult technically and I'm curious as to why.

I wonder how much of an impact it has that we're still so early in the productization phase of all this. Like, it takes a ton of work and training and coordination to get multiple models synced up into an offering, and I think the companies are still optimizing for getting new ideas out there rather than truly optimizing them.

someothherguyy 10 hours ago [-]
...or its all a farce, for now.
elashri 13 hours ago [-]
It is actually interesting for people working in academia. I would like to test it, but there's no way I can afford $200/m right now.

Can someone test it with this prompt?

"As a research assistant with comprehensive knowledge of particle physics, please provide a detailed analysis of next-generation particle collider projects currently under consideration by the international physics community.

The analysis should encompass the major proposed projects, including the Future Circular Collider (FCC) at CERN, International Linear Collider (ILC), Compact Linear Collider (CLIC), various Muon Collider proposals, and any other significant projects as of 2024.

For each proposal, examine the planned energy ranges and collision types, estimated timeline for construction and operation, technical advantages and challenges, approximate costs, and key physics goals. Include information about current technical design reports, feasibility studies, and the level of international support and collaboration.

Present a thorough comparative analysis that addresses technical feasibility, cost-benefit considerations, scientific potential for new physics discoveries, timeline to first data collection, infrastructure requirements, and environmental impact. The projects should be compared in terms of their relative strengths, weaknesses, and potential contributions to advancing our understanding of fundamental physics.

Please format the response as a structured technical summary suitable for presentation at a topical meeting of particle physicists. Where appropriate, incorporate relevant figures and tables to facilitate clear comparisons between proposals. Base your analysis on information from peer-reviewed sources and official design reports, focusing on the most current available data and design specifications.

Consider the long-term implications of each proposal, including potential upgrade paths, flexibility for future modifications, and integration with existing research infrastructure."

sagarpatil 13 hours ago [-]
I’ll do it. AFL right now.
fosterfriends 6 hours ago [-]
Ran it for you using o3-high! Here's a link to the results: https://chatgpt.com/share/67a0b227-8ee4-800f-a8ed-882e7bab97...

Hopefully helpful, happy to test others for you :)

-- Raw output --

Understood. I will conduct a detailed technical analysis of next-generation particle collider projects, focusing on the Future Circular Collider (FCC), International Linear Collider (ILC), Compact Linear Collider (CLIC), Muon Collider proposals, and other significant initiatives.

The report will include: - Planned energy ranges and collision types - Estimated construction and operation timelines - Technical advantages and challenges - Approximate costs - Key physics goals - Review of technical design reports and feasibility studies - International collaboration and support - Comparative analysis on feasibility, cost-benefit, scientific potential, infrastructure needs, and environmental impact - Long-term considerations, including upgrade paths and integration with existing facilities

The analysis will be formatted as a structured technical summary, suitable for presentation at a topical meeting of particle physicists, with comparative tables and figures where relevant.

I will update you when the report is ready.

# Next-Generation Particle Collider Projects: A Comparative Technical Analysis

## Introduction High-energy physics is charting plans for new colliders to extend beyond the LHC and its High-Luminosity upgrade (HL-LHC). The discovery of the 125 GeV Higgs boson has motivated proposals for a dedicated “Higgs factory” lepton collider to study its properties with precision, as well as future energy-frontier machines to search for new physics. Several next-generation collider initiatives are under consideration internationally as of 2024. These include CERN’s proposed *Future Circular Collider (FCC)*, Japan’s *International Linear Collider (ILC)*, CERN’s *Compact Linear Collider (CLIC)*, various designs for a *Muon Collider*, China’s *Circular Electron-Positron Collider (CEPC)* and its successor *Super Proton-Proton Collider (SppC)*, among others. Each proposal differs in collision type (electron-positron, proton-proton, muon-muon, etc.), energy scale, technology, timeline, cost, and physics focus. This summary reviews each project’s key parameters – *planned energy ranges, collision types, timeline, technical advantages/challenges, cost, and physics goals* – based on technical design reports and feasibility studies. A comparative analysis then contrasts their *technical feasibility, cost-benefit, scientific potential for discoveries, timeline to first data, infrastructure needs, and environmental impact*, highlighting the relative strengths and weaknesses of each approach. We also discuss long-term implications such as upgrade paths, flexibility for future modifications, and integration with existing infrastructure.

(Citations refer to official reports and peer-reviewed sources using the format 【source†lines】.)

## Future Circular Collider (FCC) – CERN - *Type and Energy:* The FCC is a *proposed 100 km circular collider* at CERN that would be realized in stages. The first stage, *FCC-ee*, is an electron-positron ($e^+e^-$) collider with center-of-mass energy tunable from ~90 GeV up to 350–365 GeV, covering the Z boson pole, WW threshold, Higgs production (240 GeV), and top-quark pair threshold (~350 GeV). A second stage, *FCC-hh*, would use the same tunnel for a proton-proton collider at up to *100 TeV* center-of-mass energy (an order of magnitude above the LHC’s 14 TeV). Heavy-ion collisions (e.g. Pb–Pb) are also envisioned. An *FCC-eh* option (electron-hadron collisions) is considered by adding a high-energy electron injector to collide with the proton beam. This integrated FCC program thus spans both *precision lepton* collisions and *energy-frontier hadron* collisions.

- *Timeline:* The conceptual schedule foresees *FCC-ee construction in the 2030s* and a start of operations by around *2040* (as the LHC/HL-LHC program winds down). According to the FCC Conceptual Design Report, an $e^+e^-$ Higgs factory could begin delivering physics in ~2040, running for 15–20 years. The *hadron collider FCC-hh* would be constructed subsequently (using the same tunnel and upgraded infrastructure), aiming for *first proton-proton collisions in the late 2050s】. This staged approach (lepton collider first, hadron later) mirrors the successful *LEP–LHC sequence*, leveraging the $e^+e^-$ machine to produce great precision data (and to build infrastructure) before pushing to the highest energies with the hadron machine. ...

(Too long for HN to write more)

fosterfriends 3 hours ago [-]
Honestly, these are the smartest and overall best LLM outputs I've ever seen to date. Loving Deep Research, feels like another level up in the race
spyckie2 16 hours ago [-]
Is this ability really a prerequisite to AGI and ASI?

Reasoning, problem solving, research validation - at the fundamental outset it is all refinement thinking.

Research is one of those areas where I remain skeptical it is that important because the only valid proof is in the execution outcome, not the compiled answer.

For instance you can research all you want about the best vacuum on the internet but until you try it out yourself you are going to be caught in between marketing, fake reviews, influencers, etc. maybe the science fields are shielded from this (by being boring) but imagine medical pharmas realizing that they can get whatever paper to say whatever by flooding the internet with their curated blog articles containing advanced medical “research findings”. At some point you cannot trust the internet at all and I imagine that might be soon.

I worry, especially with the rapidly growing amount of generated text on the internet, that research will lose a lot of value due to massive amounts of information garbage.

It will be a thing we used to do when the internet was still “real”.

BeetleB 13 hours ago [-]
> For instance you can research all you want about the best vacuum on the internet but until you try it out yourself you are going to be caught in between marketing, fake reviews, influencers, etc.

So you wouldn't use this tool for those types of use cases.

But still, a valid point. I recall I once wanted to compare Hydroflask, Klean Kanteen and Thermos to see how they perform for hot/cold drinks. I was looking specifically for articles/posts where people had performed actual measurements. But those were very hard to find, with almost all Google hits being generic comparisons with no hard data. That didn't stop them from ranking ("Hydroflask is better for warm drinks!")

Would I be able to get this to ignore all of those and use only the ones where actual experiments were performed? And moreover, could it filter out duplicates (e.g. one guy does an experiment, and several other bloggers link to his post and repeat his findings in their own posts - it's one experiment but with many search results)?

simonw 14 hours ago [-]
> Is this ability really a prerequisite to AGI and ASI?

That depends entirely on how you choose to define "AGI".

observationist 16 hours ago [-]
It's a direction in a vast landscape, not a feature in itself - being better at different tasks, like search generally, and research in conjunction with reasoning, gets the model closer to AGI. An AGI will be able to do these tasks - so the point of the research is to have more Venn diagrams of capabilities like these to help narrow down the view on what might actually be fundamental mechanisms involved in AGI.

Moravec detailed the idea of a landscape of human capabilities slowly being submerged by AI capabilities; at the point where AI can do anything a human can, in practice or in principle, we'll know for certain we've reached truly general AI. This idea includes things like feeling pain and pleasure, planning, complex social, moral, and ethical dynamics, and anything else you can possibly think of as relevant to human intelligence. Deep Research is just another island being slowly submerged by the relentless and relentlessly accelerating flood.

numba888 16 hours ago [-]
> things like feeling pain and pleasure

Can a machine feel? Without that there is no AGI according to the definition above.

And the second question: are animals "GI"? They don't have language, don't solve math problems, and have never heard of NP-complete.

xwolfi 14 hours ago [-]
Are we not machines anyway? Of course a machine can feel; it just needs to have priorities that are aligned to itself, and strong feedback when that self is either in danger or on the right path to preservation...

Feelings are nothing very special you know...

YmiYugy 17 hours ago [-]
If I understood the graphs correctly, it only achieves 20% pass rate on their internal tests. So I have to wait 30min and pay a lot of money just to sift through walls of most likely incorrect text? Unless the possibility of hallucinations is negligible, this is just way too much content to review at once. The process probably needs to be a lot more iterative.
itkovian_ 16 hours ago [-]
Here's an example of the type of question it is achieving 20% on:

The set of natural transformations between two functors $F, G\colon C \to D$ can be expressed as the end $\mathrm{Nat}(F,G) \cong \int_A \mathrm{Hom}_D(F(A), G(A))$.

Define the set of natural cotransformations from $F$ to $G$ to be the coend $\mathrm{CoNat}(F,G) \cong \int^A \mathrm{Hom}_D(F(A), G(A))$.

Let:
- $F = B_\bullet(\Sigma_4)_{*/}$ be the under-$\infty$-category of the nerve of the delooping of the symmetric group $\Sigma_4$ on 4 letters, under the unique 0-simplex $*$ of $B_\bullet\Sigma_4$.
- $G = B_\bullet(\Sigma_7)_{*/}$ be the under-$\infty$-category of the nerve of the delooping of the symmetric group $\Sigma_7$ on 7 letters, under the unique 0-simplex $*$ of $B_\bullet\Sigma_7$.

How many natural cotransformations are there between $F$ and $G$?

slaterbug 12 hours ago [-]
As someone who doesn't understand anything beyond the word 'set' in that question, can anyone give an indication of how hard of a problem that actually is (within that domain)?

Also I'm curious as to what percentage of the questions in this benchmark are of this type / difficulty, vs the seemingly much easier example of "In Greek mythology, who was Jason's maternal great-grandfather?".

I'd imagine the latter is much easier for an LLM, and almost trivial for any LLM with access to external sources (such as deep research).

Davidzheng 15 hours ago [-]
Btw, isn't this question at least really badly worded (and maybe incorrect)? The definitions they give for F and G are categories, not functors... (and both categories in fact have one object with a contractible space of morphisms...)
baal80spam 10 hours ago [-]
That's easy Dave: 42.
perching_aix 13 hours ago [-]
It's very interesting to think about what kind of "mental model" might it have, if it's capable of "understanding" all this (to me) gibberish, but is then unable to actually work the problem.
rizky05 15 hours ago [-]
[dead]
brokensegue 17 hours ago [-]
26.6% on humanity's last exam is actually impressive.

pass rate really only matters in context of the difficulty of the tasks

16 hours ago [-]
tmnvdb 17 hours ago [-]
Only if you are asking questions at the level of a cutting edge benchmark
rvnx 17 hours ago [-]
This is one of the actual questions:

> In Greek mythology, who was Jason's maternal great-grandfather?

https://www.google.com/search?q=In+Greek+mythology%2C+who+wa...

johnfn 13 hours ago [-]
Did you intentionally flip through all the questions to find the one that seemed the easiest? If so, why? That's question #7, and all other 7 questions in the sample set seem ridiculously difficult to me.
elicksaur 17 hours ago [-]
In Greek mythology, Jason's maternal great-grandfather was Einstein.
tmnvdb 17 hours ago [-]
This is a hard question for language models since it targets one of their known weaknesses.
andyg_blog 16 hours ago [-]
Greek mythology? But seriously please elaborate for my less educated self.
_bin_ 13 hours ago [-]
It tests syllogistic reasoning: Jason's mother was Tyro, whose father was Poseidon, whose father was Kronos. It also tests whether it "eagerly" rather than comprehensively considers something: a maternal great-grandfather could be the father of either one's maternal grandmother or maternal grandfather. So the answer could also be King Aeolus of the Etruscans.

Ideally a model would be able to answer this accurately and completely.

nimithryn 13 hours ago [-]
I think there are more possible answers? Jason's mother differs depending on the author...

For example, Jason's mother was Philonis, daughter of Mestra, daughter of Daedalion, son of Hesporos. So Jason's maternal great-grandfather was Hesporos.

tmnvdb 16 hours ago [-]
LLMs often don't do well on tasks that require composition into smaller subtasks. In this case there is a chain of relations that depend on the previous result.
layer8 17 hours ago [-]
Users don’t care about how hard something is for LLMs if they receive incorrect output.
11101010001100 16 hours ago [-]
It's categorically more than a weakness.
pama 16 hours ago [-]
No it is not an actual question on this exam. From the paper: “To ensure question quality and integrity, we enforce strict submission criteria. Questions should be precise, unambiguous, solvable, and non-searchable, ensuring models cannot rely on memorization or simple retrieval methods. All submissions must be original work or non-trivial syntheses of published information, though contributions from unpublished research are acceptable. Questions typically require graduate-level expertise or test knowledge of highly specific topics (e.g., precise historical details, trivia, local customs) and have specific, unambiguous answers…”. (Emphasis mine)
Neynt 15 hours ago [-]
It's example #7 on https://lastexam.ai/
pama 3 hours ago [-]
This is an example of the submitted questions. Because it is possible to search it on the web, it is not an example of the accepted questions.
freehorse 9 hours ago [-]
I am selling a bridge, it is a great bargain.
roenxi 17 hours ago [-]
Maybe. Not enough data to say. Say it does a day's worth of work in a query. It is sensible to use if it takes less than a day to review ~5 days' worth of work. I don't know if we're near that threshold yet, but conceptually this would work well for actual research where the amount of preparation is large compared to the amount of output written.

And eyeballing the benchmarks, it'll probably reach a >50% rate per query by the end of the year. Seems to double every model or two.

throwaway123lol 3 hours ago [-]
Yeah it can be more iterative. Just use individual queries and build on it yourself. This is all this is doing. It's a trick, and OpenAI is a PR hype company at this stage.
dyauspitr 14 hours ago [-]
On questions even specialists in that field can’t answer correctly.
random_cynic 15 hours ago [-]
The difference is that it takes a few minutes to an hour at most, so it can be run multiple times a day, using the results of previous runs to further refine the search and reasoning process and get better outcomes. Pretty much how any human research works, but much faster and with potentially vastly more world-knowledge and reasoning capability than average humans. And these capabilities will rapidly improve with further RL.
spyckie2 17 hours ago [-]
I mean you want it to grill your steak and eat it for you too?

I mean I too can complain that my iPhone doesn’t automatically screen out spammers and send my mom flowers on Mother’s Day.

scarab92 17 hours ago [-]
Why doesn't the iPhone screen spammers yet? Pixel has had this feature for a decade.
senordevnyc 10 hours ago [-]
Pixel hasn’t even been around for a decade.
scarab92 7 hours ago [-]
The Pixel branding is 12 years old, and IIRC this feature also existed in Nexus before that.
senordevnyc 1 hours ago [-]
Haha, are you referring to the Chromebook Pixel? How is that relevant to stopping spam calls?

Pixel phone launched in 2016.

6gvONxR4sf7o 16 hours ago [-]
There are some people in the blogosphere who are known experts in their niche or even niche-famous because they write popular useful stuff. And there are a ton more people who write useful stuff because they want that 'exposure.' At least, they do in the very broadest sense of writing it for another human to read it. I wonder if these people will keep writing when their readership is all bots. Dead internet here we come.
seanmcdirmid 16 hours ago [-]
I'm all for writing just for the bots, if I can figure it out. A lot of academic papers aren't really read anyways, just briefly glanced at so they can be cited together, large publications like journal pubs or dissertations even less so. But the ability to add to a world of knowledge that is very easy to access by people who want to use it...that is very appealing to me as an author. No more trudging through a bunch of papers with titles that might be relevant to what I want to know about...and no more trudging through my papers, I'm OK with that.
lmm 12 hours ago [-]
Of course they will. Loads of people go around taking hundreds of photos with the biggest camera they can afford even though no-one else will ever willingly look at them.
cye131 17 hours ago [-]
Does anyone actually have access to this? It says available for pro users on the website today - I have pro via my employer but see no "deep research" option in the message composer.
dimitri-vs 5 hours ago [-]
I have access as of ~3 hours ago. Using the Win desktop app too, which is behind on some features (Operator, tasks). I open up any of the models and it shows up as a `(Deep research)` tag on the input field next to the web search option. Didn't clear cache or anything.
fosterfriends 15 hours ago [-]
I have pro, in US, not seeing yet
_bin_ 13 hours ago [-]
what about a full refresh of the page or perhaps jump into the dev tools and check "disable cache"

could also be aggressive caching from cloudflare. could be they're just trying to announce more stuff to maintain cachet and can't yet support all users forking over 200/month.

energy123 11 hours ago [-]
I relogged, disabled cache and reloaded the page with Ctrl+Shift+R but it doesn't show up.
nijaar 11 hours ago [-]
Same here. Pro in the US and still no access. I even logged in using my phone and a different browser.
snewman 17 hours ago [-]
Two different people I know with pro subscriptions report not having access yet.
16 hours ago [-]
greatpostman 17 hours ago [-]
Have pro, can’t see it yet
labanimalster 14 hours ago [-]
same here
chachamatcha 15 hours ago [-]
also US based, have pro and still no access.
fizx 15 hours ago [-]
same same
nycdatasci 14 hours ago [-]
Pro user. No access like everyone else.

OpenAI is very much in an existential crisis and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release right?

kandesbunzler 10 hours ago [-]
How many times are you going to post this exact same comment here? Are you a Chinese bot or something?
adriand 17 hours ago [-]
Feels like only a matter of time before these crawlers are blocked from large swathes of the internet. I understand that they’re already prohibited from Reddit and YouTube. If that spreads, this approach might be in trouble.
scarab92 17 hours ago [-]
I doubt those crawler rules will be honoured for long.

I wouldn’t even be surprised if a law is passed requiring sites to provide equal access to humans whether accessed directly or via these models.

It’s too important an innovation to stall, especially considering the US’s competitors (China) won’t respect robots.txt either.

cj 17 hours ago [-]
Anyone selling anything would want to remain crawlable if people use this to research something that could lead to a purchase.
reaperman 17 hours ago [-]
Not necessarily. Southwest Airlines doesn't allow itself on price comparison sites or Google Flights.

Amazon listings are blocked from google shopping and other price comparison sites.

yencabulator 47 minutes ago [-]
> Amazon listings are blocked from google shopping

I see Amazon results there all the time. 3 of the visible 8 sponsored results are Amazon, in the non-sponsored results an Amazon listing is either first or second in every category.

rsanek 10 hours ago [-]
shlomo_z 14 hours ago [-]
Your point is completely valid, but... Southwest now has an arrangement with Google Flights to allow their listings there.
felindev 9 hours ago [-]
While people might attempt that, it's going to be an arms race, just like ads vs. ad blockers. There are already multiple crawlers that present a fake user agent when their original one is blocked. The temptation of more data is just too irresistible to them.
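
For concreteness, here is a minimal sketch of the server side of that arms race: refusing requests whose User-Agent matches a blocklist of crawler names. The tokens and the tiny WSGI app are illustrative assumptions, and, as the comment notes, a crawler that simply lies about its User-Agent walks straight past this kind of check.

    # Minimal sketch: block requests by User-Agent token (illustrative list).
    # Anything keyed off User-Agent is trivially defeated by a spoofing crawler.
    BLOCKED_UA_TOKENS = ("GPTBot", "CCBot", "Bytespider")  # assumed/illustrative tokens

    def is_blocked(user_agent: str) -> bool:
        ua = (user_agent or "").lower()
        return any(token.lower() in ua for token in BLOCKED_UA_TOKENS)

    def app(environ, start_response):
        # Standard WSGI entry point; rejects matching crawlers with a 403.
        if is_blocked(environ.get("HTTP_USER_AGENT", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted.\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello, human (probably).\n"]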
sumedh 8 hours ago [-]
> Feels like only a matter of time before these crawlers are blocked from large swathes of the internet.

How would you know it's a crawler?

drcode 17 hours ago [-]
I suppose there is an equilibrium, where sites that penalize these types of crawlers will also get less traffic from people reading ai citations, so for many sites the upsides of allowing it will be greater than the downsides.
crazylogger 17 hours ago [-]
This is trivially bypassed by OpenAI asking the user to take control of their computer (or a sandboxed browser within it,) then for all intents and purposes it’s the user themselves accessing your site (with some productivity/accessibility aid from OAI.)
bbor 17 hours ago [-]
TBF OpenAI in particular bought access to Reddit. Otherwise yeah this is my main confusion with all of these products, Perplexity being the biggest -- how do you get around the status-quo of refusing access to bots? Just to start off with, there is no Google Search API, and they work hard to make sure headless browsers can't access the normal service.

They do say "Currently, deep research can access the open web...", so maybe "open" there implies something significant. Like, "websites that have agreements with OpenAI and/or do not enforce norobot policies".

wahnfrieden 16 hours ago [-]
Client-side browsers that crawl for users (and prompt for logins or captcha as needed) won't be as easily blockable
optimalsolver 16 hours ago [-]
Big Tech Podcast listener?
throwaway123lol 3 hours ago [-]
This is so lame. This feels like another desperate attempt to stay relevant cobbled together after the DeepSeek announcement last week. What was the other attempt they made? Skip a version number to seem like more progress was made (o1->o3)? From what I can tell "o3" is just the same as o1 with an extra reasoning-effort parameter.

Oh and "Deep research" is available to people on the $200 per month plan? Lol - cool. I've been using DeepSeek a lot more recently and it's so incredibly good even with all the scaling issues.

Havoc 17 hours ago [-]
The descriptions of the product sounded substantially more impressive than the actual samples tbh.

Still I think there is a big market for this sort of „go away for 30 mins and figure this out“ style agent

TechDebtDevin 14 hours ago [-]
This is 5-10 years out. What OpenAI is displaying here I've been able to do with relatively little code, a bit of scraping and far less capable models for a year. I really don't see what is novel or useful here.
sumedh 8 hours ago [-]
Probably the accuracy.
airstrike 14 hours ago [-]
"Deep research" is now somehow synonymous to searching online for stats and pulling stuff from Statista? And when I want to make changes to that report, do I have to tweak my prompt and get an entirely different document?

Not sure if I'm too tired and can't see it but the lack of images/examples of the resulting report in this announcement doesn't inspire a lot of confidence just yet.

z7 2 hours ago [-]
Business and technical analysis of DeepSeek's entire R&D history with extrapolations:

https://chatgpt.com/share/67a0d59b-d020-8001-bb88-dc9869d52b...

jmount 15 hours ago [-]
I had no idea there was a market for "Compile a research report on how the retail industry has changed in the last 3 years. Use bullets and tables where necessary for clarity." I imagine reading such a result is pure torture.
Alifatisk 3 hours ago [-]
When I was new to LLMs, I used Bing AI in a fun way. When I was writing my report, it was sometimes hard to find discussions or material about a certain topic.

What I did was ask Bing AI about that topic, and it returned information as well as sources for where it found it, so I picked up all those links and researched them myself.

Bing AI was a great resource for finding relevant links, until I found out about Perplexity; my life hasn't been the same since.

DoctorOetker 1 hours ago [-]
Would formalizing Wiles' proof of Fermat's Last Theorem be considered deep research? Is it able to formalize it in say metamath's set.mm?

Or is the position of OpenAI that Wiles' proof is incomplete?

ejang0 16 hours ago [-]
Can anyone confirm if this is available in Canada and other countries? This site says "We are still working on bringing access to users in the United Kingdom, Switzerland, and the European Economic Area." But I'm not sure about other countries. I don't have Pro currently, only Plus.
carbocation 16 hours ago [-]
I don't even see it in the US right now.
carbocation 8 hours ago [-]
(Update: it’s visible for me now.)
VerdisQuo5678 17 hours ago [-]
The accuracy of this tool does not matter. This is exclusively designed for box-ticking "reports" that nobody reads and that are produced for their own sake.
arbywhy 17 hours ago [-]
99% of corpo upper management slide deck work. AI only makes more of this useless pencil-neck board-of-directors slop.
reaperman 16 hours ago [-]
“Pencil-neck” is a strange insult to use here. How are software developers, or hardware design engineers, or finance workers any less “pencil-neck” than “board of directors”?
tomrod 16 hours ago [-]
The new term for this is "AI Loopidity", highlighting the unintelligent ouroboros nature of one side using AI to generate content and the other side using AI to consume it.
sockaddr 14 hours ago [-]
Similar to “Bullshit jobs”

All the AI commercials are designed to appeal to people that don’t produce any actual value but haven’t been detected by the system yet.

Need to send email to boss? Press magic button! Job well done, idiot.

Someone send you big scary email? Press magic button! Good job dummy!

Someone wants to go eat some Italian with you, push magic button for totally not-ad result. Enjoy your Olive Garden, moron.

rsanek 10 hours ago [-]
I think the Apple ads are the poster child here. Hopefully we can see more inventive ones than just serving lazy people.
thefourthchime 17 hours ago [-]
OpenAI has a deep bench. I bet they pushed this out to change the narrative about DeepSeek.
btown 17 hours ago [-]
Also named specifically to muddle the SEO for the term "deep." Nothing that OpenAI does is unintentional.
kevlened 17 hours ago [-]
It's more likely this is a response to Gemini Deep Research released in December

https://blog.google/products/gemini/google-gemini-deep-resea...

petra 17 hours ago [-]
That Google product isn't that good; it can't really replace research done by a person.
nicce 16 hours ago [-]
Just one tool in the toolbox. It helps to see if some sources have been missed.
alvah 13 hours ago [-]
It absolutely can replace the research done by one person, for my use case at least. It’s also available on their $20/month subscription, unlike OpenAI’s $200/month.
sadeshmukh 13 hours ago [-]
Nobody was going to hire a researcher for a quick question.
nicce 17 hours ago [-]
Two birds with one stone: timing for Deepseek and feature for Gemini
xnx 17 hours ago [-]
Google publicly announced a model named "Deep Research" on December 11th: https://blog.google/products/gemini/google-gemini-deep-resea...
dougb5 17 hours ago [-]
Does the naming scheme they've used for models so far suggest that they care about SEO?
leonheld 17 hours ago [-]
Oh God, this is such an astute observation. I think it worked so well on me that I didn't even think about the "deep" portion initially. Goes to show how effective these things are psychologically.
17 hours ago [-]
bbor 17 hours ago [-]
I have never believed a conspiracy theory more instantly. Deep Search vs. DeepSeek is way more than enough to confuse the average layman! Especially when you're googling something you heard about at work a few hours ago, or on Bloomberg TV
bonoboTP 17 hours ago [-]
You might as well say that DeepSeek wanted to cause confusion with DeepMind. Deep isn't such a distinguishing name, deep learning has been a buzzword since 2012.
viraptor 17 hours ago [-]
Deepmind is not a consumer product. Gemini is part of it but nobody calls it deepmind.
bonoboTP 17 hours ago [-]
The point is, "deep" is an extremely generic word in the AI space.
picografix 11 hours ago [-]
I think deep research as a service could be a really strong use case for enterprises, as long as they have access to non-public data. I assume that most of this guarded data is high quality, and seeing progress in these areas might end up being even more impressive than it is now.
usaar333 17 hours ago [-]
Overall impressive.

Though the jump on GAIA relative to SOTA is not that high, especially given that this is o3.

highfrequency 14 hours ago [-]
Can it compile and run (non-Python) code as part of its tool use? Compile-run steps always seemed like they would be a huge value add during reasoning loops - it feels very silly to get output from ChatGPT, try to run it in terminal, get an error and paste the error to have ChatGPT immediately fix it. Surely it should be able to run code during the reasoning loop itself?
simonw 14 hours ago [-]
It sounds like it can run Python, which means it has access to Code Interpreter, which means it can run various other languages as well if you can convince it to do so.

I've used Code Interpreter to compile and run C code - https://simonwillison.net/2024/Mar/23/building-c-extensions-... - and I've managed to get it to run JavaScript (by uploading a Deno binary) and even Lua and PHP in the past as well: https://til.simonwillison.net/llms/code-interpreter-expansio...
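
For anyone curious what that looks like in practice, here is a rough sketch of the Python a Code Interpreter session effectively runs when asked to compile and execute C. The filenames are illustrative, and the presence of gcc in the sandbox is an assumption based on the linked write-up rather than a documented guarantee.

    # Rough sketch of a compile-and-run step inside the sandbox (illustrative).
    import pathlib
    import subprocess

    C_SOURCE = """
    #include <stdio.h>
    int main(void) {
        printf("hello from C inside the sandbox\\n");
        return 0;
    }
    """

    pathlib.Path("hello.c").write_text(C_SOURCE)
    subprocess.run(["gcc", "hello.c", "-o", "hello"], check=True)  # assumes gcc exists
    result = subprocess.run(["./hello"], capture_output=True, text=True, check=True)
    print(result.stdout)  # the program's output gets fed back into the conversation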

joanfihu 6 hours ago [-]
There is no way I'll read all that text from the demos...

AskPandi has a similar feature called "Super Search" that essentially checks more sources and self-validates its own answers.

iT's AgEnTic.

The answers are easier to digest, if you search for products, you'll get a list of products with images, prices and retailers.

sivm 3 hours ago [-]
I used it once to research language learning and had my pro mode taken away pending review for abuse.
pazimzadeh 15 hours ago [-]
> In Nature journal's Scientific Reports conference proceedings from 2012, in the article that did not mention plasmons or plasmonics, what nano-compound is studied?

Isn't there more than one article that did not mention plasmons or plasmonics in Scientific Reports in 2012?

Also, did they pay for access to all journal contents? That would be useful.

nicce 15 hours ago [-]
Maybe that is the only one with open access
smusamashah 7 hours ago [-]

    can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations. It may struggle with distinguishing authoritative information from rumors, and currently shows weakness in confidence calibration, often failing to convey uncertainty accurately
Taken from the limitations section.

These tools are just good at creating pollution. I don't see the point of delegating (not just) research where 1% blatant mistakes are acceptable. These need much better grounding before being handed out to the masses.

I cannot take any output from these tools (Google summaries, comment summaries by Amazon, YouTube summaries, etc.) while knowing for a fact that some of it is a total lie. I cannot tell which part is a lie. E.g., if an LLM says that in a given text the sentiment is divided, it could be just one person with an opposing view.

If the same task were given to a person, I could reason with that person about any conclusion. These tools will reason on their hallucinations.

axpy906 4 hours ago [-]
Don’t most researchers have a local setup plugged into Olama so that they do NOT share their search information?
kenjackson 17 hours ago [-]
If it has access to play by play data for all sports this could be an absolute playground for amateur sports statisticians. The possibilities…
pjs_ 17 hours ago [-]
McKinsey mode
rsanek 10 hours ago [-]
Don't be hyperbolic. Deep research would need to help cause an opioid crisis to get to that level.
TechDebtDevin 14 hours ago [-]
More like high school intern mode.
sockaddr 14 hours ago [-]
Heh
wilg 17 hours ago [-]
I think this looks cool. Apparently unlike everyone else on this website?
tmnvdb 17 hours ago [-]
HN is full of people who want to feel smart by complaining.
RandomWorker 13 hours ago [-]
I’m a researcher and honestly not worried. 1. Developing the right question has always been the largest barrier to great research. Not sure OpenAI can develop the right question without the Human experience. The second biggest part of my role is influencing people that my questions are the right questions. Which is made easier when you have a thorough understanding of the first. That being said, I’m sure there will be many people here that will tell me that algorithms already influence people, and ai can think through much of any issues there are.

I do use these systems from time to time, but it just never renders any specific information that would make it great research.

RayVR 13 hours ago [-]
100% agree.

These systems serve best at augmenting information discovery. When I'm tackling a new area or looking for the right terminology, these models provide a quick shortcut because they have good probabilistic "understanding" of my naive, jargon-free description. This allows me to pull in all of the jargon for the area of research I'm interested in, and move on to actually useful resources, whether that be journal articles, textbooks, or - rarely - online posts/blogs/videos.

the current "meta" is probably something like Elicit + notebookLM + Claude for accelerating understanding of complex topics and extracting useful parts. But, again, each step requires that I am closely involved, from selecting the "correct" papers, to carefully aggregating and grooming the information pulled in from notebookLM, to judging the the usefulness of Claude's attempts to extract what I have asked for

GeoAtreides 10 hours ago [-]
> Developing the right question has always been the largest barrier to great research.

I thought funding was the biggest barrier to great research

11 hours ago [-]
Xuban 7 hours ago [-]
This makes sense. I often use the normal search feature to research a very large amount of information, and it mostly does not work well. If the new search feature increases the number of websites scraped and the relevance of those websites, I'm all in.
titzer 4 hours ago [-]
It's great that none of these AI models are being foisted on us by advertising companies.
sharpshadow 4 hours ago [-]
Are they launching a new feature to get the attention back after some other AI got the attention?
monkeydust 6 hours ago [-]
What's a decent setup to replicate this via an open model and agent framework? One thing I have struggled with is getting comprehensive web searches using an agentic framework.
anon373839 9 hours ago [-]
Setting aside how well it works, I think this is a pretty nice demonstration of how to do UX for an agentic RAG app. I like that the intermediate steps have been pushed out to a sidebar, with updates that both provide some transparency about the process and make the high latency more palatable.
gigatexal 9 hours ago [-]
Ok so I do this as a noob in some field. How do I know or trust the research conclusions? How do I know it hasn't hallucinated its conclusions? I'll likely have to do my own research just to verify it, and if I did, I might as well have done the research myself.
RayVR 14 hours ago [-]
Each release from openAI gives me less hope for them and this whole AI boom. They should be leading the charge of highlighting how the current generation of LLMs fail, not churning out half-baked overhyped products.

Yes, they can do some cool tricks, and tool calling is fun. No one should trust the output of these models, though. The hallucinations are bad, and my experience with the "reasoning" models is that as soon as they fuck up (they always do) they go off the rails worse than the base LLMs.

xt00 17 hours ago [-]
"will find, analyze, and synthesize hundreds of online sources"

Synthesize? Seems like the wrong word -- I think they would want to say something like, "analyze, and synthesize useful outputs from hundreds of online sources"..

pjot 17 hours ago [-]
From New Oxford dictionary:

  > combine (a number of things) into a coherent whole: pupils should synthesize the data they have gathered | Darwinian theory has been synthesized with modern genetics.
nicce 17 hours ago [-]
On the other hand, accurate if it is prone to hallucination…
tmnvdb 17 hours ago [-]
You can synthesize the parts to get the whole. Both uses are correct AFAIK
Bjorkbat 15 hours ago [-]
Actually sounds pretty cool, but the graph on expert level tasks is confusing my expectations. Saying it has a pass rate of less than 20% sounds a lot like saying this thing is wrong most of the time.

Granted, these strike me as difficult tasks and I’d likely ask it to do far simpler things, but I’m not really sure what to expect from looking at these graphs.

Ah, but the fact that it bothers to cite its sources is a huge plus. Between that and its search abilities it sounds valuable to me

random_cynic 15 hours ago [-]
I think that's mostly because of the access to information it has. Much of the highly useful information is not on the public internet or shows up on search engines, only domain experts know about them. Also, the websites may be paywalled or gated by login. So a better comparison would be if the models had the same level of access as an expert.
rajnathani 11 hours ago [-]
I remember about 10-15 years ago that Ray Kurzweil (who still works at Google) or someone at Google had this idea for what Google should be able to do: About doing deep research by itself with a simple search query. I can't find the source. Obviously it didn't pan out without transformers.
gwerbret 17 hours ago [-]
To anyone who's tried it: how does it handle captchas? I can't imagine that OpenAI's IP addresses are anyone's favorites for unfettered access to web properties these days.
feznyng 2 hours ago [-]
You can buy residential proxies to pretend you're a regular person IIRC, some of the browser automation companies do that to bypass rate limiting, captchas, etc.
layer8 16 hours ago [-]
And is it smart enough to use archive.today for paywalled articles. ;)
getnormality 15 hours ago [-]
The demo on global e-commerce trends seems less useful than a Google search, where the AI answer will at least give you links to the claimed information.
jasonjmcghee 17 hours ago [-]
Surprised more comments aren't mentioning deepseek has this feature (for free) already. Assuming this is why OpenAI scrambled to release it.

The examples they have on the page work well on chat.deepseek.com with r1 and search options both enabled.

Do I blindly trust the accuracy of either though? Absolutely not. I'm pretty concerned about these models falling into gaming SEO and finding inaccurate facts and presenting them as fact. (How easy is it to fool / prompt inject these models?)

But has utility if held right.

starchild3001 16 hours ago [-]
Not really accurate. The "Search" functionality you're describing in DeepSeek is comparable to OpenAI's existing "Search GPT." OpenAI's recent announcement refers to a more advanced capability, similar to Gemini's existing "deep research" feature. DeepSeek's current offerings are significantly more limited in scope.
jasonjmcghee 16 hours ago [-]
Doesn't seem like access is available to try "deep research" yet on OpenAI, so I can only speak to what I tried, which was their examples on the blog post (using DeepSeek w/ R1 + Search) and results were pretty similar.

AFAIK OpenAI's current offering uses 4o, and it does a web search and then pipes it into 4o. I'm guessing adding CoT + other R1/o3 like stuff is one of the key effective differences. But time will tell how different it is. Maybe it's a dramatic improvement.

WiSaGaN 16 hours ago [-]
SearchGPT is bad because its underlying model is not a reasoning one. Deepseek one mentioned above is closer to deep research than searchgpt.
TechDebtDevin 14 hours ago [-]
Are you unaware that there is a "Deepthink (R1)" button right next to the "Search" button in DeepSeek's chat app? It's been there for some time, even before all the hype regarding R1.
starchild3001 14 hours ago [-]
I'm well aware of that. That is not what openai calls "deep research".
14 hours ago [-]
nicce 17 hours ago [-]
I wish Kagi would work with similar performance. Their lenses feature is perfect for this and they already filter out most of the SEO spam based on trackers and other typical red flags.
layer8 17 hours ago [-]
From the demo: “Use bullets and tables where necessary for clarity.” It’s weird that it would be necessary to specify that. I suppose they want to showcase that you can influence the output style, but it’s strange that you’d have to explicitly specify the use of something that is “necessary for clarity”. It comes across as either a flaw in the default execution, or as a merely performative incantation.
lolpanda 14 hours ago [-]
"synthesize large amounts of online information" does it heavily depend on the search engine performance and relevance of the search results? I don't see any mention of Google or Bing. Is this using their internal search engine then?
esafak 16 hours ago [-]
Is there a benchmark on which we can compare this against You.com's research mode? It looks like R1 forced them to release o3 prematurely and give it internet access. And they didn't want to say they released o3, so they called it 'Deep Research'.
freehorse 9 hours ago [-]
I love that for the last year or so, when "open"ai releases things, it does not actually release them. So in the meantime we all get the chance to enjoy a bunch of speculative, shilling comments here about this next great thing being miles ahead of competitors / close to AGI / the tool that will actually do the X thing that others complain LLMs have so far failed to do.
dazzaji 7 hours ago [-]
Late Sunday night, I gained access to OpenAI’s newly launched Deep Research and immediately tested it on a draft blog post about Uniform Electronic Transactions Act (UETA) compliance and AI-agent error handling [1]. Here’s what I found:

Within minutes, it generated a detailed, well-cited research report that significantly expanded my original analysis, covering: * Legal precedents & case law interpretations (including a nuanced breakdown of UETA Section 10). * Comparative international frameworks (EU, UK, Canada). * Real-world technical implementations (Stripe’s AI-driven transaction handling). * Industry perspectives & business impact (trust, risk allocation, compliance). * Emerging regulatory standards (EU AI Act, FTC oversight, ISO/NIST AI governance).

What stood out most was its ability to: - Synthesize complex legal, business, and technical concepts into clear, actionable insights. - Connect legal frameworks, industry trends, and real-world case studies. - Maintain a business-first focus, emphasizing practical benefits. - Integrate 2024 developments with historical context for a deeper analysis.

The depth and coherence of the output were comparable to what I would expect from a team of domain experts—but delivered in a fraction of the time.

From the announcement: Deep Research leverages OpenAI’s next-generation model, optimized for multi-step research, reasoning, and synthesis. It has already set new performance benchmarks, achieving 26.6% accuracy on Humanity’s Last Exam (the highest of any OpenAI model) and a 72.57% average accuracy on the GAIA Benchmark, demonstrating advanced reasoning and research capabilities.

Currently available to Pro users (with up to 100 queries per month), it will soon expand to Plus and Team users. While OpenAI acknowledges limitations—such as occasional hallucinations and challenges in source verification—its iterative deployment strategy and continuous refinement approach are promising.

My key takeaway: This LLM agent-based tool has the potential to save hours of manual research while delivering high-quality, well-documented outputs. Automating tasks that traditionally require expert-level investigation, it can complete complex research in 5–30 minutes (just 6 minutes for my task), with citations and structured reasoning.

I don’t see any other comments yet from people who have actually used it, but it’s only been a few hours. I’d love to hear how it’s performing for others. What use cases have you explored? How did it do?

(Note: This review is based on a single use case. I’ll provide further updates as I conduct broader testing.)

[1] https://www.dazzagreenwood.com/p/ueta-and-llm-agents-a-deep-...

timabdulla 7 hours ago [-]
I tried it on a few things I was familiar with just to assess its reliability.

The first was on a topic with which I am deeply familiar -- myself -- and it made three factual errors in a 500-word report: https://news.ycombinator.com/item?id=42916899

The second was a task to do an industry analysis on a space in which I worked for about ten years. I think its overall synthesis was good (it accorded with my understanding of the space), but there were a number of errors in the statistics and supporting evidence it compiled, based upon my random review of the source material.

I think the product is cool and will definitely be helpful, but I would still recommend verifying its outputs. I think the process of verification is less time-consuming than the process of researching and writing, so that is likely an acceptable compromise in many cases.

chrismarlow9 17 hours ago [-]
This smells like when Google released Gemini to have a product in the space.
xnx 14 hours ago [-]
I agree that OpenAI is trying to stay relevant by announcing a lot of half-baked products with little to no availability.

> when Google released Gemini to have a product in the space.

Bard preceded Gemini.

lysace 17 hours ago [-]
Eh, not really. Google failed to launch first out of internal political dysfunction and then made a crash effort to launch something to counter the first ChatGPT release.

I highly doubt that the concerns of internal political commissars were holding up this particular openai release.

chrismarlow9 8 hours ago [-]
That's some fancy words friend.
therealmarv 17 hours ago [-]
I don't know. OpenAI is so bad at naming... the average person on the street will confuse Deepseek with Deep Research. And let's not forget o1, o3 ... 4o.
szvsw 17 hours ago [-]
> the average person on the street will confuse Deepseek with Deep Research.

That’s probably a feature not a bug (from OpenAI’s perspective…).

tmnvdb 17 hours ago [-]
You're not wrong but it feels like bikeshedding at this point.
hipadev23 17 hours ago [-]
Yes.
prng2021 17 hours ago [-]
"Deep research was trained using end-to-end reinforcement learning"

Does this mean they skipped supervised fine tuning like DeepSeek did with R1?

OutOfHere 16 hours ago [-]
No, it just suggests that RL was used over a base SFT model, and moreover that RL here was tuned to this research task. Personally I don't think that RL is strictly necessary for this task at all, but perhaps it helps.
resters 14 hours ago [-]
Still not seeing access on my account.
_bin_ 14 hours ago [-]
they're not giving it to us lowly $20/month users yet :( gotta take out a second mortgage and throw them 200/month if you want it now
resters 3 hours ago [-]
I have the $200/month version. Deep Research arrived this morning.

So far I tried it on one problem and it seems limited by the "front end" being 4o-mini. It ignored most of my initial prompt and also ignored the previous research it asked for which I provided. The final output was high quality and definitely was enriched by the web searching it did, but it left out a crucially important dimension of the problem because it was unable to ingest the background info I provided adequately.

I'd like to see a version of it where the front end model is o1-pro

corentin88 11 hours ago [-]
Curious about the use cases here. Building AI Agents? But which one?
tomrod 17 hours ago [-]
I'm not sure if this is worth a subscription. DSPy and DeepSeek R1 can already move in this direction, if I understand right.
apstls 17 hours ago [-]
What is the current state of DSPy optimizers? When I originally checked it out it appeared to just be optimizing the set of examples used for n-shot prompting.
tmnvdb 17 hours ago [-]
You understand wrong.
auggierose 9 hours ago [-]
The flow reminds me a bit of undermind.ai.
14 hours ago [-]
reader9274 17 hours ago [-]
I think we're all reaching AI fatigue. Fewer and fewer people care anymore
bonoboTP 17 hours ago [-]
Sure if you're viewing this as some kind of spectator thing, or entertainment, maybe it's less interesting. But it doesn't really matter whether "people care". What matters is whether it's useful and has impact. It's enough if the small number of people use it for whom it is useful. It doesn't matter if the average Joe on the street is excited by it.

Few people care or even know about various advances in various specialized fields. It's enough if AI simply seeps into various applications in boring and non-flashy ways for it to have significant effects that will affect a wider range of people, whether they get hyped by the news announcements or not. Jobs etc.

An analogy: the Internet as such is not very exciting nowadays, certainly not in the way it was exciting in the 90s with all the news segments about surfing the information superhighway or whatever. There was a lot of buzz around the web, but then it got normalized. It didn't disappear, it just got taken for granted. No average person got excited around HTML5 or IPv6. It just chugs along in the background. AI will similarly simply build into the fabric of how things get done. Sometimes visibly to the average person, sometimes just behind the scenes.

InkCanon 15 hours ago [-]
Not sure if it's just me, but it looks like all SOTA companies are doubling down to chase the new benchmark, which beyond hype, doesn't seem to translate into real world uses. Why don't these companies just plug it into a popular git repo and say, hey our AI fixed these 100 issues! Or something real? The only people who seem to be doing something real is DeepMind.
khazhoux 12 hours ago [-]
Incorrect. We are not all reaching AI fatigue.
rvnx 17 hours ago [-]
In particular, this is not a breakthrough justifying a 340B USD valuation, but rather the kind of work junior developers can do: implement a loop of Bing searches connected to an LLM.
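
To make the claim concrete, here is a minimal, runnable sketch of the kind of search-plus-LLM loop being described. Both helpers are illustrative stubs rather than any real Bing or OpenAI API, so this only shows the control flow, not the quality of the result.

    # Illustrative stubs: web_search would call a search API, summarize an LLM.
    def web_search(query: str) -> list[str]:
        # Stand-in for a real search API call; returns snippets/URLs for the query.
        return [f"snippet about {query} #1", f"snippet about {query} #2"]

    def summarize(question: str, notes: list[str]) -> tuple[str, list[str]]:
        # Stand-in for an LLM call; returns (answer so far, follow-up queries to run).
        return (f"what the {len(notes)} collected notes say about: {question}", [])

    def deep_research(question: str, max_rounds: int = 3) -> str:
        notes: list[str] = []
        queries = [question]
        answer = ""
        for _ in range(max_rounds):
            for q in queries:
                notes.extend(web_search(q))
            answer, queries = summarize(question, notes)
            if not queries:  # the "model" asked no follow-up questions, so stop
                break
        return answer

    print(deep_research("how has the retail industry changed in the last 3 years?"))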
tmnvdb 17 hours ago [-]
Peak HN comment
rvnx 17 hours ago [-]
Doesn't make it untrue.

Agents that can search the internet have existed for a while now; the problem has been essentially solved and is happily used in platforms like Perplexity.

It's really "meh", very far from revolutionary.

Keep in mind this company is trying to convince everybody they need 500B USD now (through the Stargate project).

alvah 13 hours ago [-]
I haven’t tried the OpenAI version yet, as I’m on their peasant-level $20 plan, but the Google equivalent is way superior to Perplexity (I use both extensively). The web search Perplexity carries out is superficial compared to the Google product; it misses a large percentage of what Gemini Deep Research finds, and for a particular task in my business this makes a huge difference.
spyckie2 17 hours ago [-]
To go from partially automated to fully automated is thousands of non trivial edge cases and unforeseen decision points that must be tamed.

To say this is trivial is like saying the one shot ai prompted twitter clone is the same thing as twitter.

Peak HN indeed.

CamperBob2 14 hours ago [-]
Let us know when your Bing-bot scores over 20% on the HLE benchmark.
rvnx 1 hours ago [-]
It's literally a browsing agent that searches the internet and they know the questions in advance when preparing the agent

Without internet: 10%

With internet: 23%

In addition:

> We found that the ground-truth answers for one dataset were widely leaked online

in very small letters, and they blocked these URLs at runtime but not training time.

It's not bad, but not revolutionary at all compared to the leap from GPT-2 to GPT-3, or from GPT-4o to DeepSeek-R1.

CamperBob2 43 minutes ago [-]
If they "knew the questions in advance," why'd they need Internet access at all? The ability to use the same data sources humans would use is not the insult you seem to think it is.

Again: the assertion was yours, so let us know the results of your own work.

EcommerceFlow 15 hours ago [-]
Can't even get Sunday nights off trying to keep up fml.
17 hours ago [-]
regularjack 4 hours ago [-]
Of course, they had to weasel the word "deep" in there.
17 hours ago [-]
febin 17 hours ago [-]
Is this "deep research" tool exploiting open knowledge creators, using their work without compensation?
tmnvdb 17 hours ago [-]
How is using public information "exploitation"? A human researcher with Google would do the same.
hnisoss 16 hours ago [-]
So its fine for OpenAI to effectively sell your CC BY-NC content to others?
scarab92 17 hours ago [-]
Are you exploiting open knowledge creators, using their work without compensation?
febin 17 hours ago [-]
The creators are aware that a human is using this; can we say the same for AI? Does it have their consent?
handfuloflight 17 hours ago [-]
Then consent is granted by the transitive property, because these AIs are wielded by humans.
hnisoss 16 hours ago [-]
Yea, but a guy paying closedai to get "insights" that basically copy-paste content from my blog is definitely violating my blog's copyright, and in the end no coin comes to me either. What about that?
handfuloflight 16 hours ago [-]
Could you provide an example where OpenAI outputting verbatim quotes actually constitutes the copyright violation? Because mechanically retrieving relevant quotes seems analogous to grep/search - the copyright status would depend on how downstream users transform and use that content. Like how quoting your blog in a technical analysis or critique is fair use, but wholesale republishing isn't. This suggests the violation occurs at usage time, not retrieval time.
protocolture 17 hours ago [-]
You exploited my eyes by making me read this comment. Where's my compensation?
febin 15 hours ago [-]
I see many are offended, but I am genuinely asking a question.

I want to understand: does this mean it's ethical for anyone to create a research AI tool that will go through arXiv and related GitHub repos and use them to solve problems and implement ideas, like Cursor?

rapjr9 13 hours ago [-]
It is also an agent, so it is using you without compensation for your work.
hnisoss 16 hours ago [-]
Of course. It's a child's play for SamA et al.
johnneville 17 hours ago [-]
would it be open knowledge if it required payment to access ?
17 hours ago [-]
17 hours ago [-]
rob_c 9 hours ago [-]
Feels more and more like OpenAI doesn't have "that next big thing".

To be clear, I'm constantly impressed with what they have and what I get as a customer, but the delivery since 4 hasn't exactly been in line with Altman's Musk-tier vaporware promises...

ldjkfkdsjnv 17 hours ago [-]
So much cynicism and hate in these comments, especially as we are likely witnessing AGI come to life. It's still early, but it might be coming. Where is the excitement? This is an interesting time to be alive.

HN has a huge cultural problem that makes this website almost irrelevant. All the interesting takes have moved to X/twitter

roenxi 17 hours ago [-]
We're looking at trends that may well obliterate the economic value of a well-trained human mind sitting behind a keyboard all day. That is a bit of a threat to most people on HN if the trend continues at the current rate and direction.
layer8 16 hours ago [-]
“May you live in interesting times” is usually taken as a curse. ;)

More seriously, it’s unclear why one should be excited by the prospect of AGI, especially when instrumentalized by corporations and authoritarian governments.

crvdgc 9 hours ago [-]
AGI aside, sometimes HN critics/cynicism indeed points out the exact reason why something wouldn't work and is vindicated after the fact, e.g. Apple Vision Pro. I guess it's just hard to predict the future and for me, it's interesting to listen to even pure contrarians.
bonoboTP 17 hours ago [-]
HN is and has always been quite negative/pessimistic/cynical in general. That Dropbox comment was quite a long time ago already.
qgin 16 hours ago [-]
Never underestimate HN's capacity to be cynical about literally everything
rpcope1 16 hours ago [-]
> especially as we are likely witnessing AGI come to life

Man, I've got a great deal on some oceanfront property in Wyoming for you.

rvz 16 hours ago [-]
> "So much cynicism and hate in these comments, especially as we are likely witnessing AGI come to life. Its still early, but it might be coming. Where is the excitement? This is an interesting time to be alive."

Maybe you can define what "AGI" really means and what the end-game and the economic implications are when "AGI" is somewhat achieved? OpenAI somehow believes that they haven't achieved "AGI" yet, which they continue to claim on purpose for obvious reasons.

The first hint I will give you is that it certainly won't be a utopia.

dutchbookmaker 15 hours ago [-]
I would be more excited if it wasn't $200 a month to try.

I don't feel like OpenAI does a good job of getting me excited either.

Find the perfect snowboard? How can that idea get pitched and make the final cut for a $200 a month service? The NFL kicker example is also completely ridiculous.

The business and UX example seems interesting. Would love to see more.

PartiallyTyped 17 hours ago [-]
I feel that a lot of this can already be achieved via aider (not affiliated), and any of the top models.
tmnvdb 17 hours ago [-]
Do you have any benchmarks to back up your 'feelings'?
PartiallyTyped 17 hours ago [-]
I really don't like the snarky tone of the parent comment.

Nonetheless, I don't think this is even something that can easily be benchmarked. I'd recommend you take a look at aider [1], and consider how I drew similarities between it and what's presented here.

Has ClosedAI presented any benchmarks / evaluation protocols?

[1] https://aider.chat/

tmnvdb 16 hours ago [-]
Yes, they show benchmarks in the article linked here. Did you not read it?
PartiallyTyped 3 hours ago [-]
I don’t think you actually read it. The benchmarks are in reference to the model that’s underlying deep-research, and not deep-research itself. For the latter, they have anecdata from scientists.
taran_narat 9 hours ago [-]
isn't this just perplexity?
tmnvdb 17 hours ago [-]
Eating popcorn while the scaling doubters scramble to move the goalposts for the nth time.
elicksaur 17 hours ago [-]
Its number for one of the benchmark has:

**with browsing + python tools

Maybe we have different definitions of scaling?

tmnvdb 16 hours ago [-]
I would consider unsupervised tool usage an achievement
elicksaur 14 hours ago [-]
But it’s not simply scaling. Who is moving the goalposts exactly?
khazhoux 12 hours ago [-]
Trying to parse this. What are you saying?
ldjkfkdsjnv 17 hours ago [-]
Say whatever you want about openAI, they are shipping more than any other company on the planet.
viraptor 17 hours ago [-]
How do you even compare different companies? I'd say the massive farms ship every year more than OpenAI ever did.
kortilla 17 hours ago [-]
What does that even mean? Treating each iterative model as a new product is not any different than Google changing its search or youtube recommendation algorithm.

Different pre-cooked prompts and filters don’t really amount to new products either, despite them being marketed as such. It’s like adobe treating each tool in photoshop as its own product.

tmnvdb 17 hours ago [-]
Have you even watched the video? This is a new capability and not a trivial one.
navigaid 16 hours ago [-]
Name one single open source model released by OpenAI since 2020
rcdwealth 4 hours ago [-]
[dead]
18 hours ago [-]
18 hours ago [-]
jaco6 15 hours ago [-]
I see lots of warranted skepticism about the capabilities of this tool, but the reality is that this is an incremental step toward full automation of white collar labor. No, it will not make all analysts jobless overnight. But it may reduce hiring of said people by 5 or 10 percent. And as people get better at using the tool and the tool itself gets better, those numbers will grow. Remember that it took decades for the giant pool of typing secretaries in Mad Men to disappear, but they did disappear. Gone forever. Interestingly, anger about the diminishment of secretarial male white collar work in Germany due to the spread of the typewriter a few decades earlier was one of the drivers of the Nazi Party’s popularity (see Evans, the Rise of the Third Reich).

AI’s triumph in the white collar workplace will be gradual, not instantaneous. And it will be grimly quiet, because no one likes white collar workers the way they like blue collar workers, for some odd reason, and there’s no tradition of solidarity among white collar workers. Everyone will just look up one day and find that the local Big Corp headquarters is…empty.

lngnmn2 10 hours ago [-]
[dead]
rvz 17 hours ago [-]
It appears that OpenAI is in panic mode after the release of DeepSeek. Before, they were confident competing against Google on any AI model they released.

Now they are scrambling against open source after their disastrous Operator demonstration and using this deep research demo as cover. Nothing that Google or Perplexity could not already do themselves.

By the end of the month, this feature is going to be added by a bunch of other open-source projects, and it won't stay interesting for very long.

blackeyeblitzar 33 minutes ago [-]
I don’t think you’re comparing the right things here. This feature is more like Google’s Deep Research, which basically goes off and does a whole lot of search and compute to produce something more like a full research report. This has nothing to do with open weight models like DeepSeek (note: DeepSeek, Llama, etc are NOT open source). This feature doesn’t just require the research on the model but also enormous compute. Plus anyone using such a feature for real work is not going to be using DeepSeek or whatever, but a product with trustworthy practices and guarantees.
tucnak 17 hours ago [-]
Look, who's copying who now. They added _the_ button!
blackeyeblitzar 17 hours ago [-]
I’m not sure I understand what you mean by “the button”. If you’re comparing this to DeepSeek’s copying, it’s not really the same thing right? DeepSeek essentially stole intellectual property by violating OpenAI’s terms of service. As I understand it, this is a copy of Google’s Deep Research
caspper69 17 hours ago [-]
I chuckle every time I see this. Poor OpenAI.

Meanwhile, their entire training corpus was the result of scraping the intellectual property and copyrighted materials of THE ENTIRE PUBLIC INTERNET.

Woe is them to be sure.

blackeyeblitzar 15 hours ago [-]
OpenAI’s scraping will likely be ruled as fair use.
lompad 17 hours ago [-]
DeepSeek proved that there is no moat, and thus no path to profitability for OpenAI, Anthropic & co.

Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permission of the copyright holders.

Now the same is being done to OpenAI. Well, too bad.

blackeyeblitzar 15 hours ago [-]
> Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permission of the copyright holders.

OpenAI and other LLM makers scraping the internet is probably covered under fair use. DeepSeek’s use of OpenAI’s outputs, on the other hand, is pretty clearly a violation of their terms and not legal.

simion314 12 hours ago [-]
>DeepSeek’s use of OpenAI’s outputs, on the other hand, is pretty clearly a violation of their terms and not legal.

Here is a new thing you learned today: ToS are not laws. You can ignore any ToS, and at worst the company might close your account.

therealpygon 17 hours ago [-]
Care to explain how something that cannot be copyrighted and was not generated by a human is “intellectual property“? Or are you just parroting a narrative?
blackeyeblitzar 14 hours ago [-]
Trade secrets are protected by law; they don’t require copyright.
figers 17 hours ago [-]
Didn't OpenAI steal everyone's data it could consume from the internet? They're actively being sued by the NY Times and others for this...
blackeyeblitzar 14 hours ago [-]
Yes, those cases will be interesting. By default, a lot of copyrighted content may be legal to use for training (in the US, but also many other places) under what’s called fair use. The cases you’re referring to will likely reinforce this, but it isn’t known yet. Note that it’s not just OpenAI on that side of the argument but also other (non-tech) organizations that believe protecting fair use here is both current law and essential.
ADeerAppeared 17 hours ago [-]
I'm sorry but what the fuck is this product pitch?

Anyone who's done any kind of substantial document research knows that it's a NIGHTMARE of chasing loose ends & citogenesis.

Trusting an LLM to critically evaluate every source and to be deeply suspicious of any unproven claim is a ridiculous thing to do. These are not hard reasoning systems; they are probabilistic language models.
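To be concrete about what "probabilistic" means here: at each step the model just samples a next token from a learned distribution, along the lines of this toy sketch (the numbers are invented for illustration):

    import random

    # Toy next-token distribution; real models learn one over ~100k tokens.
    dist = {"Paris": 0.92, "Lyon": 0.05, "Berlin": 0.03}

    def sample_next(distribution):
        tokens = list(distribution.keys())
        weights = list(distribution.values())
        return random.choices(tokens, weights=weights)[0]

    # "The capital of France is ..." -> usually "Paris", occasionally not.
    print(sample_next(dist))

There is no step where a claim gets checked against anything; an unlikely token is just unlikely, not flagged.
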

panarky 17 hours ago [-]
> they are probabilistic language models

This is like arguing an Airbus cannot possibly fly because it is 165 tonnes of aluminum, steel and plastic.

The proof is in the fact that it flies, not what it is constructed from.

ADeerAppeared 15 hours ago [-]
> The proof is in the fact that it flies, not what it is constructed from.

And LLMs do not.

> "But it looks like reasoning to me"

My condolences. You should go see a doctor about your inability to count the number of 'R's in a word.
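(The task in question, done outside an LLM, for reference:)

    # Counting letters is a one-line deterministic check outside an LLM.
    print("strawberry".count("r"))  # 3
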

panarky 13 hours ago [-]
"Even though that Airbus looks like it's flying it's really not because my personal definition of 'flying' requires feathers and flapping wings."
CamperBob2 14 hours ago [-]
OK, what's your next move, now that letter-counting has been solved by the current generation of frontier models?

CoT reasoning is reasoning, whether you like it or not. If you don't understand that, it means the models are already smarter than you.

rizky05 15 hours ago [-]
[dead]
timsh 17 hours ago [-]
This is so precise. I guess we’ll need a global version of https://datacolada.org/ quite soon so we don’t get blindsided in every scientific field.
lukeschlather 16 hours ago [-]
o1 and o3 are definitely not your run-of-the-mill LLMs. I've had o1 correct my logic, and it had correct math to back up why I was wrong. I'm very skeptical, but I do think at some point AI is going to be able to do this sort of thing.
spyckie2 17 hours ago [-]
Why is HN not creating a policy against moral prigotry? There is no useful discussion here anymore.

Seriously begging the mods to take a closer look, or at least PG to not abandon his curated internet space.

esafak 17 hours ago [-]
priggery or bigotry? And what are you referring to?
nycdatasci 14 hours ago [-]
Pro user here. No access, like everyone else.

OpenAI is very much in an existential crisis, and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release, right?

samplatt 14 hours ago [-]
That's the third time in this thread you've stated "OpenAI is in an existential crisis". It looks very suspicious.
_bin_ 13 hours ago [-]
man you work for high flyer or something? i know that's not really a fair question but oai still seems to lead the pack. i know it's a hype-y area but responding to one (1) model that's comparable to o4 but cheaper with "guys it's so over for openai" is excessive.