
Recently we suffered a different kind of subscription bombing: a hacker used our 'change credit card' form to 'clean' a list of thousands of credit cards, checking which ones would go through and approve transactions.

He ran the attack from midnight to 7AM, so there were no humans watching.

IPs were rotated on every single request, so no rate limiter caught it.

We had Cloudflare Turnstile installed on both the sign-up form and all credit card forms. All requests were validated by Turnstile.

We were running with the 'invisible' setting and switched back to the 'recommended' setting after the incident, so I don't know whether the less strict setting was to blame.

Just like OP, our website did not require e-mail validation (to avoid the extra hassle for users), especially because we send very few e-mails.

We never thought this could bite us this way.

Every CC he tried was charged $1 to confirm the card was valid, then immediately refunded; the form errored out if the card did not approve the $1 transaction, and that's the signal he used. About 10% of the ~2k requests went through.

Simply adding a confirmation e-mail won't cut it: the hacker used disposable e-mail address services, even though he did not need them.

This is a big deal. Payment processors can ban you for allowing this to happen.


Being used to validate stolen card numbers has long been a problem; we've had to put in a number of defenses to fight our way off whatever list of "easy sites" these folks maintain. I hadn't thought about the "change card" path though...another bit of time spent away from what our business is really supposed to be doing...

We solved this by introducing a silent block. If the system notices unusual behavior (too many payment attempts per user, for example), it no longer sends the payment attempt to the provider. Instead, it idles for a second or two and then just fails with a generic “payment declined.” Most attackers don’t notice they’re being blocked and just assume all credit cards are bad.
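The silent block described above can be sketched roughly like this. All names, thresholds, and the window length are illustrative assumptions, not details from the comment:

```python
import random
import time

# Hypothetical silent-block gate; thresholds are illustrative only.
ATTEMPT_LIMIT = 5        # max card attempts per account per window
WINDOW_SECONDS = 3600    # look-back window in seconds

def should_silent_block(attempt_times, now=None):
    """True if this account's recent payment attempts look abusive."""
    now = time.time() if now is None else now
    recent = [t for t in attempt_times if now - t < WINDOW_SECONDS]
    return len(recent) >= ATTEMPT_LIMIT

def charge(attempt_times, submit_to_provider):
    """Either forward the charge to the provider or silently fail."""
    if should_silent_block(attempt_times):
        # Never reach the provider: stall briefly, then return a
        # generic decline so a block looks like a real one.
        time.sleep(random.uniform(1.0, 2.0))
        return {"ok": False, "error": "payment declined"}
    return submit_to_provider()
```

The point of the random delay and the generic error is that the attacker gets no feedback distinguishing a blocked attempt from a genuinely declined card.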

Sounds like any per-user detection wouldn't have worked in this case.

the "notice unusual behavior" is the hard part

thousands of $1 charges and refunds in a 7 hour period seems unusual to me. then again, i've never run a site that received thousands of charges ever, so seeing it in a few hours would be obvious.

Genuinely asking, are you a product manager? You’re giving me flashbacks to all of the PMs who suggested a 2-3 branch decision tree for a complex classification problem, because that’s what struck them as intuitive. We are just a few baby steps away from reinventing the entire field of fraud detection within this thread.

Sir, I resent the implication! I do not lie with such swine!

It's easy to say that every site must add protections against every single type of attack, but it's impossible for site owners to be experts in fraud, while credit card processing vendors are expected to be. Of the two, where would be the better place to implement fraud detection? And who's more financially at risk?


:)

I think we’re 100% in agreement: let the payment processors handle the fraud. Except payment processors unfortunately hold all the cards and will shut your site down if you don’t comply with their standards :(



Well, what you can do is notify the card issuer about those cards that went through, so they can mark them as stolen. That surely will make the hacker really happy, and discourage them from doing it again :)

So you mean you are keeping full card numbers somewhere in your logs to... fix some potential security issue...?

>Hey mr processor, the cards for transaction numbers x...y are stolen.

I'm assuming there were transaction IDs provided that can be given to the processor. If they can't do anything with the IDs, then that's a pretty broken system.

If you are the processor, yes, I guess. If you aren't, then you can provide the transaction ID to the processor and let it handle that part.

Pretty sure this goes against the PCI DSS requirement not to store CC numbers.

Ouch. Just one credit card change per account?

This is one of those levels of monitoring that only gets put in place after such an event, e.g. whole-subsystem analysis: the change-card feature being used thousands of times (well, proportional to scale) in 7 hours is a massive red flag.


> This is one of those levels of monitoring that only gets put in place after such an event.

For a website, yes. But honestly the credit card people and their infrastructure should probably _also_ watch out for this. They'd be in a much better place to detect these.


In a perfect world sure but in the real world if a processor catches something they will disable your processing and freeze any funds while making it a nightmare to remedy, so you really want them doing as little as possible.

They do, but they’re also just as aware that you could be the fraudster. So they put the punishment where it’s optimal for them. You are not inside their trust space.

Yeah, it seems like the site's processor should have noticed this one site sending thousands of $1 charges and refunds in a small window much more easily than the site itself could. The processor has much more to lose, multiplied across all customers, making it worth their time.

Cloudflare and any other anti-bot service is only good against people without the willpower and knowledge to bypass them.

The JS can be reverse-engineered, so you can see exactly which data points they use for detection. Anything can be spoofed to look like human behavior.

And if everything fails, you outsource it to AI - Always Indian :D


We had this happen on one of our sign up forms. I added a crappy open source image captcha and it went away. I guess whichever attacker was using us wasn't that motivated!

Did they use the same username/login every time?

I have no experience building this two-pass approach, but I arrived at it intuitively while planning for a new project. Any references to actual implementations?


This is someone's out of control bot.


We've also been running Rails in production for 15+ years (since 2011) in two companies and it has been serving us greatly. Hiring is tough, but I definitely believe the stack makes up for it due to the productivity gains.

In late 2025 we decided to migrate one of them to Inertia. The public-facing pages are already done, and we're 80% through migrating the logged-in area (it's a huge app). We chose Vue.js.

It's amazing how powerful this stack is and how little you have to change in the backend.


I'm surprised hiring is tough. The job market is such trash rn and I feel there are a lot of Rubyists, or ex-Rubyists interested in returning to it, around. Maybe not? (Edit: spelling)


> Maybe not?

Because there are fewer and fewer ruby/rails people available.

It is the simplest explanation - and the one that makes the most sense, too.


Well ya, I'm just saying I'm surprised considering the current job market. I moved on from Rails about 5 years ago now, but have 9 years experience under my belt and still keep up a bit with new things and play with them once in a while. And yet I've applied for several Rails positions in the past few years and always get an outright rejection.


I'm available! On Rails for over 21 years, since version zero. It's strange but true that we Rails devs are hard to find. I apply for Rails-related roles and am competing with over 1,000 other resumes. I'm not sure how I fail to even get a screening call 99% of the time. Perhaps overqualified? Perhaps poor filtering? It is a very strange job market.


Off-topic, but I'm having a terrible experience with Cloudflare and would love to know if someone could offer some help.

All of a sudden, about 1/3 of all traffic to our website is being routed via EWR (New York), me included, even though all our users and our origin servers are in Brazil.

We pay for the Pro plan but support has been of no help: after 20 days of 'debugging' and asking for MTRs and traceroutes, they told us to contact Claro (which is the same as telling me to contact Verizon) because 'it's their fault'.


Do you think cloudflare is responsible for all of the network traffic routing in the entire world and can simply fix any problem even if it's on somebody else's network?


No. I do think that Cloudflare is a great company that got where it is today because it cares about this type of issue, and it has a much better chance of contacting its peering partner than I do: they handle ~20% of all internet traffic, while I handle none.


It is possible that Claro has a bad route that sends all traffic destined for Cloudflare through New York.


Every once in a while we have had Bell Canada route a request that should be going about 6 blocks away across the continent and back.

They are not super helpful fixing it either.


How are you prompting it to draw diagrams in drawio?


Here's the chat I used for the drawing - https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...

Save the XML, import it into drawio


Sometimes it helps to also provide a drawio file that has the elements you want (e.g. cloud service icons or whatever); you just feed it the content you want diagrammed and let it eat.

Even if it's not completely correct, it usually creates something that's much closer to complete than a blank page.


Drawio drawings are just XML; it's possible it can generate that directly
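For reference, a minimal draw.io file looks roughly like this. The structure is from memory and the attributes are illustrative; details may vary by version:

```xml
<!-- Two boxes connected by an arrow. -->
<mxfile>
  <diagram name="Page-1">
    <mxGraphModel>
      <root>
        <mxCell id="0"/>
        <mxCell id="1" parent="0"/>
        <mxCell id="a" value="Client" style="rounded=1" vertex="1" parent="1">
          <mxGeometry x="40" y="40" width="120" height="60" as="geometry"/>
        </mxCell>
        <mxCell id="b" value="Server" style="rounded=1" vertex="1" parent="1">
          <mxGeometry x="240" y="40" width="120" height="60" as="geometry"/>
        </mxCell>
        <mxCell id="e" edge="1" source="a" target="b" parent="1">
          <mxGeometry relative="1" as="geometry"/>
        </mxCell>
      </root>
    </mxGraphModel>
  </diagram>
</mxfile>
```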


hopefully op will answer if that's what he is doing


One Rails app self deployed in VPS can really go a long way.

I'd add:

- Learn tailscale. It's one of those technologies that takes half a day to get used to (claude code will answer all your questions), and you'll never want to live without it in the future. SSH into your VPS, install tailscale and run `tailscale set --ssh`, then shut down the old daemon with `systemctl stop ssh.service`. You don't even have an SSH daemon running anymore: extra safety.

- Use a simple docker compose file to run your dependencies like Postgres. Bind to the tailscale IP, so ports: {{ tailscale_ip_of_this_server }}:5432:5432. This way your database is not exposed to the web, but it is exposed to the tailnet. Use tailscale_ip_of_this_server in your Rails ENV vars to connect; since it's running on the same server, tailscale will route it to localhost (and with localhost latency).

- With claude, set up a second VPS, add it to the same tailnet, and turn off sshd there too. The two servers can now connect directly via tailscale (try pinging or SSHing between them).

- Then, install claude code on this second VPS. Since postgres is bound to the tailscale IP on the main server, the 'slave' can connect to it. Install postgres via docker, and have claude set up streaming replication between server MASTER and server SLAVE over the tailscale IP.
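The compose binding described above might look like this. The tailnet IP (100.64.0.10), image tag, and credentials are placeholders, not values from the comment:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    ports:
      # Bind only to this server's tailscale interface, so Postgres is
      # reachable from other tailnet machines but not the public internet.
      - "100.64.0.10:5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```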

100% secure, encrypted, and now you have postgres replication with zero data loss if something goes wrong.

On the same SLAVE, you can also install barman (claude code will help you set it up).

You then have a postgres BACKUP with point-in-time recovery.

There you go!


How do you reach your server without SSH in the event Tailscale is down or your Tailscale account has a problem?


I redeemed my 50 USD credit to give it a go. In literally less than 10 minutes I spent 10 USD. Insane. I love Claude Code, but this pricing is madness.


What would have been the human labor cost equivalent?


Yes, Internal server error as of 5 minutes ago.


Could you please share more? I can't make dictation work.


Are you using a dedicated pg instance for vector or you keep all your data in a single pg instance (vector and non-vector)?


The biggest selling point of using Postgres over Qdrant or whatever is that you can put all the data in the same db and use joins and CTEs, foreign keys and other constraints, get lower latency, effectively get rid of N+1 cases, and ensure data integrity.


I generally agree that one database instance is ideal, but there are other reasons why Postgres everywhere is advantageous, even across multiple instances:

- Expertise: it's just SQL for the most part
- Ecosystem: same ORM, same connection pooler
- Portability: all major clouds have managed Postgres

I'd gladly take multiple Postgres instances even if I lose cross-database joins.


Yep. If performance becomes a concern, but we still want to exploit joins etc, it's easy to set up replicas and "shard" read only use cases across replicas.


Postgres supports the Foreign Data Wrapper concept from SQL/MED. If you configure this you can do joins across instances, even!

https://www.postgresql.org/docs/current/postgres-fdw.html
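A minimal postgres_fdw setup looks like this; the server name, host, table, and credentials here are illustrative, not from the thread:

```sql
-- Run on the instance that should see the remote tables.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER analytics_db
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'analytics.internal', dbname 'analytics', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER analytics_db
  OPTIONS (user 'app', password 'change-me');

-- Pull in remote table definitions, then join them like local tables.
IMPORT FOREIGN SCHEMA public LIMIT TO (events)
  FROM SERVER analytics_db INTO public;

SELECT u.id, count(*)
FROM users u JOIN events e ON e.user_id = u.id
GROUP BY u.id;
```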


All in one of course. That’s the biggest advantage. And why postgres is great - it covers virtually all standard use cases.

