Gemini 2.5: Our most intelligent models are getting even better (blog.google)
cye131 244 days ago [-]
The new 2.5 Pro (05-06) definitely does not have any sort of meaningful 1 million context window, as many users have pointed out. It does not even remember to generate its reasoning block at 50k+ tokens.

Their new pro model seems to have just traded off fluid intelligence and creativity for performance on closed-ended coding tasks (and hence benchmarks), which unfortunately seems to be a general pattern in LLM development now.

dandiep 243 days ago [-]
I wish Google would provide a WebRTC endpoint for their Live mode like OpenAI does for their Realtime models [1]. Makes it so much easier to deploy without needing something like LiveKit or Pipecat.

1. https://platform.openai.com/docs/guides/realtime#connect-wit...

Aeolun 243 days ago [-]
I think it’s pretty strange how, time and time again, I see the scores for other models go up, but when I actually use them they disappoint, and then I go back to Claude.

It’s also nice that Claude just doesn’t update until they have actual improvements to show.

jacob019 243 days ago [-]
Claude is great for code, if pricey, but when it gets stuck I break out Gemini 2.5 Pro. It's smarter, but it wants to rewrite everything to be extremely verbose and defensive, introducing bugs and stupid comments. 2.5 Flash is amazing for agentic work. Each frontier model has unique strengths.
mchusma 243 days ago [-]
I strongly dislike this habit of updating models in place under the same version number whenever possible. A new version is rarely better in all ways, and silent updates make things harder. Just make it version 2.6.
andrewstuart 243 days ago [-]
I love Gemini.

I just wish they’d give powerful options for getting files out of it.

I’m so sick of cutting and pasting.

It would be nice to git push and pull into AI Studio chats or SFTP.

russfink 244 days ago [-]
Why don’t companies publish hashes of emitted answers so that we (e.g. teachers) could verify whether the AI produced a given result?
perdomon 244 days ago [-]
Hashes of every answer to every question and every variation of that question? If that were possible, you’d still need to account for the extreme likelihood of the LLM providing a differently worded answer (it virtually always will). This isn’t how LLMs or hashing algorithms work. I think the answer is that teachers need to adjust to the changing technological landscape. It’s long overdue, and LLMs have almost ruined homework.
fuddy 243 days ago [-]
Hashing every answer you ever emit is exactly the kind of thing hashing algorithms can do; the trouble is that the user can trivially make an equally good variant with virtually any change (an unlimited number of possible changes, really), and that variant has never been hashed.
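
To make the objection concrete, here is a minimal sketch assuming SHA-256 and a hypothetical provider-published hash set; the example strings and whitespace normalization are purely illustrative, not anything a provider actually does. A one-word paraphrase yields a completely different digest, so exact-match hashing can't catch reworded output.

    import hashlib

    def digest(text: str) -> str:
        # Normalize whitespace before hashing, as a provider plausibly might.
        normalized = " ".join(text.split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    original = "The mitochondria is the powerhouse of the cell."
    paraphrase = "The mitochondria is the power plant of the cell."

    published_hashes = {digest(original)}  # hypothetical published hash set

    print(digest(original) in published_hashes)    # True: verbatim copy is caught
    print(digest(paraphrase) in published_hashes)  # False: a one-word edit evades it
    print(digest(original)[:16], digest(paraphrase)[:16])  # completely different digests
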
haiku2077 244 days ago [-]
Ever heard of the meme:

"can I copy your homework?"

"yeah just change it up a bit so it doesn't look obvious you copied"

evilduck 244 days ago [-]
Local models exist, and nothing in that area of development will ever publish a hash of its output. The huge frontier models are not reasonably self-hosted, but for normal K-12 assignments a model that runs on a decent gaming computer is enough to make a teacher's job harder. Hell, a small model running on a newer phone from the last couple of years could provide pretty decent essay help.
haiku2077 244 days ago [-]
Heck, use a hosted model for the first pass, send the output to a local model with the prompt "tweak this to make it sound like it was written by a college student instead of an AI"
Atotalnoob 244 days ago [-]
There are the issues others mentioned, but also you could write something word for word of what an LLM says.

It’s statistically unlikely, but possible

BriggyDwiggs42 244 days ago [-]
There’s an actual approach from years ago where you have the LLM generate patterns of slightly less likely words, which can then be detected easily. They don’t want to do any of that stuff because cheating students are their users.
subscribed 244 days ago [-]
This is exactly where users of English as a second language get accused of cheating -- we didn't grow up with the living language, but learnt it from movies, classic books, and in school (the luckiest ones).

We use rare or uncommon words because of how we learned and were taught. Weaponising it against us is not just a prejudice, it's idiocy.

You're proposing a metric of how much someone deviates from the bog standard, and that will also discriminate against smart, homegrown erudites.

This approach is utterly flawed.

BriggyDwiggs42 243 days ago [-]
I’m referencing a paper I saw in passing several years ago, so forgive me for not spelling out the exact algorithm. The LLM varies its word selection in a patterned way, e.g. most likely word, 2nd most likely, 1st, 2nd, and so on. It’s statistically impossible for an ESL writer to happen to do this by accident.
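
For flavor, a minimal sketch of one published style of watermarking: partition word choices into a "green" and "red" half keyed on the previous token, bias generation toward green, and have the detector count green hits. This is not necessarily the exact scheme the commenter recalls; the word-level split, the keying function, and the 0.5 baseline are simplifications of real token-level schemes, which use a secret key and report a z-score.

    import hashlib

    def is_green(prev_word: str, word: str) -> bool:
        # Derive a pseudo-random bit from the (previous word, word) pair; a
        # watermarking generator would have biased its sampling toward pairs
        # where this bit is 1. A real scheme keys this with a secret.
        h = hashlib.sha256(f"{prev_word}|{word}".encode("utf-8")).digest()
        return h[0] % 2 == 1

    def green_fraction(text: str) -> float:
        # Detector: count how often consecutive word pairs land on the green side.
        words = text.lower().split()
        if len(words) < 2:
            return 0.0
        hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
        return hits / (len(words) - 1)

    # Unwatermarked text should hover near 0.5; watermarked output would score
    # well above that, and a z-test on the count turns the gap into a p-value.
    print(green_fraction("the quick brown fox jumps over the lazy dog"))
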
haiku2077 244 days ago [-]
I remember when my parents sent me to live with my grandparents in India for a bit, all the English language books available were older books, mostly British authors. I think the newest book I read that summer that wasn't a math book was Through the Looking Glass.
dietr1ch 244 days ago [-]
I see the problem you face, but I don't think it's that easy. It seems you can rely on the hash changing completely with any small edit, and alter the question or answer a little bit to get around an LLM homework naughty list.
staticman2 244 days ago [-]
It would be pretty trivial to paraphrase the output, wouldn't it?
fenesiistvan 244 days ago [-]
Change one character and the hash will not match anymore...
silisili 243 days ago [-]
Just ctrl-f for an em dash and call it a day.