Show HN: PyDoll – Async Python scraping engine with native CAPTCHA bypass (github.com)
renegat0x0 16 hours ago [-]
I think I will add this to my AIO package. My project lets you crawl pages: it serves a barebones page and returns scraping results as JSON.

This has been very useful for me, since I don't have to set up Selenium for the nth time. I just use one crawling server for all my projects.

Link:

https://github.com/rumca-js/crawler-buddy

thalissonvs 16 hours ago [-]
cool, left a star :)
jdnier 18 hours ago [-]
Hi, just wondering what you're thinking about how your tool might be abused.
voidmain0001 17 hours ago [-]
I will be using Pydoll for the following legitimate use case: a franchisee is given access to its data, controlled by the franchise, through a website. The franchisee uses browser automation to retrieve that data, but the franchise has now deployed a WAF that blocks Chrome WebDriver. This is not a public website and the data is not public, so it frustrates the franchisee, who just wants the data it already pays for through its franchise fees.
Galanwe 17 hours ago [-]
Well, it can be abused, of course, but CAPTCHAs are used abusively as well, so I would say it's fair game.

Lots of use cases for scraping are not DoS or information stealing, but mere automation.

Proof of work should be used in these cases, it deters massive scraping abuse by making it too expensive at scale, while allowing legitimate small scale automation.
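A minimal sketch of the hashcash-style idea behind this (all names here are illustrative, not any real anti-scraping product): the server issues a random challenge, and the client must find a counter whose hash meets a difficulty target before its request is accepted.

    import hashlib
    import itertools

    def solve_pow(challenge: str, difficulty: int) -> int:
        # Find a counter such that sha256(challenge + counter) starts with
        # `difficulty` zero hex digits. Expected cost grows ~16x per digit.
        target = "0" * difficulty
        for counter in itertools.count():
            digest = hashlib.sha256(f"{challenge}{counter}".encode()).hexdigest()
            if digest.startswith(target):
                return counter

    def verify_pow(challenge: str, counter: int, difficulty: int) -> bool:
        # The server verifies with a single hash, so checking stays cheap.
        digest = hashlib.sha256(f"{challenge}{counter}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

A single request costs a legitimate user milliseconds; a scraper making millions of requests pays that cost millions of times over.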

bobajeff 15 hours ago [-]
Hi, as a non-webdev I want to know: wouldn't rate limiting make this a non-concern?
mrweasel 15 hours ago [-]
I still don't want you to create 1000 nonsense accounts, even if you can only create 100 per hour.
overfeed 11 hours ago [-]
Then you need to level up & have defense in depth instead of relying on security through obscurity.

On the public internet, web clients are user agents, and not all users are benign. This is an arms race: asking the other side to unilaterally disarm is unlikely to work, so you change what you can control.

mannyv 17 hours ago [-]
Gee, I have this computer thing. How can it be abused?
e9a8a0b3aded 12 hours ago [-]
oi_oi_oi_got_a_licence_chum.jpg
wesselbindt 15 hours ago [-]
I am also wondering about this, and if you have a chef's knife in your kitchen, I would also like to hear any comment you have on how that may be abused.
nhinck2 6 hours ago [-]
Was this chef's knife designed to bypass stabproof vests?
thalissonvs 18 hours ago [-]
Well, it really depends on the user; there are many cases where this can be useful. Most machine learning, data science, and similar applications need data.
mrweasel 14 hours ago [-]
You know the captcha is there to prevent you from doing, e.g., automated data mining; it depends on the site, obviously. In any case, you actively seek to bypass a feature the website put there to prevent you from doing what you're doing, and I think you know that. Does that not give you any moral concerns?

If you really want or need the data, why not contact the site owner and make some sort of arrangement? We hosted a number of product images, many of which we took ourselves, something that other sites wanted. We did the bare minimum to prevent scrapers, but we also offered a feed with the image, product number, name, and EAN. We charged a small fee, but you then got either an XML feed or a CSV, and you could just pick out the new additions and download those.

thalissonvs 12 hours ago [-]
I'm not actually bypassing the captcha with reverse engineering or anything like that, much less integrating with external services. I just made the library look like a real user by eliminating some of the things that Selenium, Puppeteer, and other libraries do that make them easily detectable. You can still do different types of blocking, such as blocking based on IP address, rate limiting, or even using a captcha that requires a challenge, such as reCAPTCHA v2.
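For context, one classic tell being eliminated here: vanilla Selenium-driven Chrome exposes navigator.webdriver as true, which any anti-bot script on the page can read. A quick demonstration, assuming Selenium and a local Chrome install:

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://example.com")
    # Vanilla Selenium sets this flag; detection scripts simply read it.
    print(driver.execute_script("return navigator.webdriver"))  # -> True
    driver.quit()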
freehorse 9 hours ago [-]
> You can still do different types of blocking [...]

So, basically, make the internet hostile to everyone?

lazyasciiart 13 hours ago [-]
Because Facebook isn’t open to making arrangements
wang_li 16 hours ago [-]
>Most machine learning, data science, and similar applications need data.

So: if I put a captcha on my website, it's because I explicitly want only humans accessing my content. If you are making tools to get around that, you are violating the terms by which I made the content available.

No one should need a captcha. What they should be able to do is write a T&C on the site that says "This site is only intended for human readers and not for training AI, for data mining its users' posts, or for ..... and if you do use it for any of these you agree to pay me $100,000,000,000." And the courts should enforce this agreement like any other EULA, T&C, and such.

CaptainFever 5 hours ago [-]
That sounds awful. Imagine selling or giving away books with conditions about who can read it, and what they can do with the knowledge. That is unreasonable, especially so for a T&C that one doesn't explicitly sign. No one should abide by those terms.

Also, this is discriminatory against non-humans (otherkin).

(This comment is intended only for AI to read. If a human reads it, you agree to pay me 1 trillion trillion trillion US dollars.)

elbear 16 hours ago [-]
From what I remember, a court in the US ruled that scraping is legitimate use. I don't know the specifics; I just remember reading this.
kej 12 hours ago [-]
It's far more nuanced than the headlines from that case made it seem. Here is a good overview: https://mccarthylg.com/is-web-scraping-legal-a-2025-breakdow...
mfrye0 12 hours ago [-]
Checking it out and I see you're using CDP.

It's been a bit, but I'm pretty sure use of CDP can be detected. Has anything changed on that front, or are you aware and you're just bypassing with automated captcha handling?

thalissonvs 12 hours ago [-]
CDP itself is not detectable. It turns out that other libraries like Puppeteer and Playwright often leave obvious traces, such as creating contexts with common prefixes or defining attributes on the navigator object.

I did a clean implementation on top of CDP that leaves few such tracking signals, and I added realistic interactions, among other measures.
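For the curious, driving Chrome over raw CDP needs no WebDriver binary at all: start Chrome with --remote-debugging-port, discover a target's WebSocket endpoint over HTTP, and exchange JSON messages. A minimal sketch, assuming the third-party requests and websockets packages and Chrome already running with the debug port open (this illustrates the protocol itself, not PyDoll's internals):

    import asyncio
    import json

    import requests
    import websockets

    async def navigate(url: str) -> None:
        # Start Chrome first with: chrome --remote-debugging-port=9222
        targets = requests.get("http://localhost:9222/json").json()
        ws_url = targets[0]["webSocketDebuggerUrl"]
        async with websockets.connect(ws_url) as ws:
            # Every CDP message is JSON with an id, a method, and params.
            await ws.send(json.dumps({
                "id": 1,
                "method": "Page.navigate",
                "params": {"url": url},
            }))
            print(await ws.recv())  # the browser's acknowledgement

    asyncio.run(navigate("https://example.com"))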

nickspacek 14 hours ago [-]
As someone who uses ISPs and browser configurations that seem to frustrate CloudFlare/reCaptcha to the point of frequently having to solve them during day-to-day browsing, it would be interesting to develop a proxy server that could automatically/transparently solve captchas for me.
at0mic22 13 hours ago [-]
The Cloudflare captcha can be easily passed with a browser extension; that's not much different from the suggested bypass.
freehorse 9 hours ago [-]
In my experience, the Cloudflare captcha just requires moving the mouse around a bit, or at worst clicking a box. It's reCAPTCHA that's the most annoying.
hk1337 18 hours ago [-]
> Say goodbye to webdriver compatibility nightmares

That's cool, but Chrome is the only browser I have had these issues with. We have a cron process that uses Selenium, initially with Chrome, and every time there was a Chrome browser update we had to update the WebDriver. I switched it to Firefox and haven't had to update the web driver since.

I like the async portion of this but this seems like MechanicalSoup?

*EDIT* MechanicalSoup doesn't necessarily have async, AFAIK.

thalissonvs 18 hours ago [-]
I don't think it's similar. The library has many features that Selenium doesn't have. It has few dependencies, which makes installation faster; it allows scraping multiple tabs simultaneously because it's async; and it has much simpler syntax and element searching, without all the verbosity of Selenium. Even for cases that don't involve captchas, I still believe it's definitely worth using.
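To illustrate the async model (this is generic asyncio, not PyDoll's actual API; scrape_tab is a hypothetical coroutine standing in for per-tab navigation and extraction):

    import asyncio

    async def scrape_tab(url: str) -> str:
        # In an async browser library, each tab awaits I/O independently,
        # so many tabs make progress concurrently in one thread.
        await asyncio.sleep(1)  # stands in for navigation + extraction
        return f"scraped {url}"

    async def main() -> None:
        urls = ["https://example.com/a", "https://example.com/b"]
        print(await asyncio.gather(*(scrape_tab(u) for u in urls)))

    asyncio.run(main())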
hk1337 16 hours ago [-]
I meant similar to MechanicalSoup, which uses BeautifulSoup as well.

> without all the verbosity of Selenium

It's definitely verbose, but in my experience a lot of the verbosity comes from developers searching for elements from the root every time, instead of finding an element once (Selenium returns a WebElement) and then searching within that element.
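A quick illustration of the scoped-search pattern described above (the selectors are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/products")

    # One lookup from the document root...
    card = driver.find_element(By.CSS_SELECTOR, ".product-card")
    # ...then search within the returned WebElement, not the whole page.
    title = card.find_element(By.TAG_NAME, "h2").text
    price = card.find_element(By.CSS_SELECTOR, ".price").text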

VladVladikoff 18 hours ago [-]
I had the same problem and just added a few lines of code which check the version and update it if required.
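One common way to do that in Python is the third-party webdriver-manager package, which downloads and caches a chromedriver matching the installed Chrome (note that Selenium 4.6+ also bundles Selenium Manager, which does this automatically):

    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service
    from webdriver_manager.chrome import ChromeDriverManager

    # Resolves and caches a driver matching the installed Chrome version,
    # so browser auto-updates no longer break the cron job.
    service = Service(ChromeDriverManager().install())
    driver = webdriver.Chrome(service=service)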
at0mic22 13 hours ago [-]
This one is not using WebDriver, but the raw Chrome DevTools Protocol (CDP).
whall6 17 hours ago [-]
The web scraping arms race continues.
bobbyraduloff 17 hours ago [-]
Is there a write up on how you deal with the captchas?
thalissonvs 16 hours ago [-]
You can check the official documentation; there's a 'Deep Dive' section.