dizbusters · @dizbusters
33 followers · 14 posts · Server infosec.exchange

What can we learn from #sneakerbots and sneaker culture, as practitioners of security?

The overlap between what is considered security and what is considered not-security gets blurrier every day, and the line is shifting all the time in the consumer- or product-oriented space. It's easy to say "that's a security problem" when your database is breached. It's a bit different when the problem is "some guy in Ohio is buying hundreds of our product and reselling them at a markup".

Sneakerbots are just the latest evolution of this type of problem, and many of the best bots in the world right now overlap with traditional attack tooling in problem space and, in some cases, in methodology too.

As I mentioned in my part 2 post, bots have gotten significantly more advanced over the last 3-5 years. They can now replicate actual consumer device behavior, using statistical models to simulate what a real consumer holding their phone looks like to the app/website. A normal person doesn't buy shoes with a 0px screen size or a completely stationary gyroscope, so the bot simply simulates an iPhone screen size and generates some random-ish gyro noise that passes the test.
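To make the sensor-spoofing idea concrete, here's a minimal sketch of generating plausible gyroscope jitter instead of the suspicious all-zeros reading. The drift/jitter parameters are illustrative assumptions for this sketch, not real bot internals.

```python
import random

def fake_gyro_samples(n: int, drift: float = 0.002, jitter: float = 0.01):
    """Simulate gyroscope readings (x, y, z in rad/s) for a phone in a human hand.

    A real device held by a person always shows small sensor noise and hand
    tremor; exactly 0.0 on every axis, every sample, is a strong bot signal.
    """
    samples = []
    x = y = z = 0.0
    for _ in range(n):
        # slow random-walk drift (hand slowly moving) ...
        x += random.gauss(0, drift)
        y += random.gauss(0, drift)
        z += random.gauss(0, drift)
        # ... plus per-sample jitter (tremor / sensor noise)
        samples.append((x + random.gauss(0, jitter),
                        y + random.gauss(0, jitter),
                        z + random.gauss(0, jitter)))
    return samples
```

Feeding a stream like this to a device-fingerprinting check defeats the naive "is the gyroscope completely flat?" test; catching it requires modeling what human motion statistics actually look like.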

That gets you past the "are you a human" barrier, but what about the "is your account in good standing" barrier? HIBP lists ~645 breaches in its API (thanks Troy!), and I bring that up because it's an indicator of just how many accounts are viable. As the industry is well aware, password reuse is common among consumers. Credential stuffing, the process of pushing compromised credentials from Target A's breach into Target B's login page to find hits, will yield a success rate high enough to fuel mass purchases. A 1% success rate is enough when you have 100m rows to smash against a login page. Even if you bring that down to 0.1% of successful logins, that's still 100,000 accounts in good enough standing to make an online purchase. And if they have a gift card balance or a stored credit card, well, now you're in business for free.
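The back-of-the-envelope math there, written out (the hit rates are the illustrative figures from the text, not measured numbers):

```python
def stuffing_yield(credentials: int, success_rate: float) -> int:
    """Expected number of working logins from a credential stuffing run."""
    return round(credentials * success_rate)

rows = 100_000_000                 # 100m leaked credential pairs
high = stuffing_yield(rows, 0.01)   # 1% hit rate  -> 1,000,000 accounts
low = stuffing_yield(rows, 0.001)   # 0.1% hit rate -> 100,000 accounts
print(high, low)
```

Even the pessimistic end of that range is far more accounts than a botter needs for a single product drop.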

This type of behavior overlaps heavily with gift card fraud/abuse, as well as credit card fraud. If an attacker finds a good account on a poorly designed website, they might be able to shove some CC numbers against its card verification API to see which ones still work for their other theft schemes.

But where are they getting these accounts and CC numbers? The same place your average profit-driven ransomware operator is. It's all the same accounts, the same places, the same methodology. The only difference is the tool they're using to get the $$$. Many sneakerbot operators even run their businesses the way modern malware operators do, using Discord servers and intermediary sellers with control panels that put some enterprise software to shame. Bots that actually work are hoarded and kept secret, sold only to the highest bidder or to trusted individuals, while the lower rungs of the sneakerbot ladder are left with lower-tier bots or are forced into the old-fashioned way of backdooring product out of physical stores. Why bother with a bot when you can pay the manager at Foot Locker to sneak 8 pairs out for you? Oh, and they're probably two decades younger than you.

I'm not entirely sure where I'm going with this, and I might wax on it a bit longer in other posts, but what I wanted to highlight here is that we in #infosec need to pay attention to product more. And that doesn't mean just AppSec. Many of us work for tech-focused companies that have inherently different risk models than a retail company. Take a moment and consider what you would do if you were selling a product with a guaranteed 10% profit for anyone who resold it. How would you make that fair? Do you even care if it's fair? What decisions are you making? What types of users are you concerned about? What are their markers (age, activity, location, IP, etc.)?

Retail thought experiment or not, certain fundamentals stay the same. Auth is becoming increasingly important. The trustworthiness of any particular user matters. People will, given an opportunity for profit, abuse your system. Plan for it. Expect it.

Part 3 of ???

That's all for now, I think I've rambled a bit too long already. Hit me up for questions or comments on this type of topic.

#sneakerbots #security #sneakerbot #malware #cybersecurity #infosec #bots #sneakers #productsecurity

Last updated 3 years ago

dizbusters · @dizbusters

What is a #sneakerbot? Essentially it's no different from the bots that target other scarce consumer products: tickets, PS5s, etc. They are automated processes for skipping parts of the normal checkout flow, in an effort to increase one's odds of obtaining said product in a competitive marketplace. If everyone else is putting in their address and CC info by hand, a simple autofill bot might make the difference between you and the next person. But most websites these days save your payment info, so that's not as valuable.

Instead, what if you could attempt to check out 5 times at once? Rather than running the whole flow of adding to cart, going to checkout, entering your details, confirming the order, and then failing, once, what if you could run multiple attempts in parallel? Even at an 80% failure rate, you'd still expect one success out of five! Now scale that out to 10,000 attempts. A few years ago, that's where most bots were: essentially combinations of scrapers or tools like Selenium, programmatically checking out thousands of times simultaneously.
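The scaling argument here is just independent trials: if each checkout attempt fails with probability p, then n parallel attempts all fail with probability p^n. A quick sketch, using the 80% failure rate from above:

```python
def p_at_least_one_success(fail_rate: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent checkouts succeeds."""
    return 1 - fail_rate ** attempts

print(p_at_least_one_success(0.8, 5))       # ~0.67: better-than-even odds
print(p_at_least_one_success(0.8, 10_000))  # effectively 1.0: a guaranteed pair
```

That's the whole economic case for parallelism: per-attempt odds can stay terrible as long as attempts are cheap.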

Before we start talking about the defensive measures that were put in place, let's talk about companies' motives around bots. Most companies receive the same money whether they sell to a bot or to a human. The company does not care if the Taylor Swift ticket is one of a thousand that goes to a scalper; they only care that they get the market price of the ticket. But from a PR standpoint that looks terrible, and if the complaints get loud enough you may end up with a lawsuit, as Ticketmaster did, for not providing a fair enough marketplace. So the company has two competing motives: a) if every product goes to a bot, they get the same money as if it went to a human, and b) they need to prevent bots from getting products so as not to seem unfair. The reality is that this becomes a "best effort" solution. Yes, they will try to remove as many bots as they can from the situation, but if the last 1% of product goes to a bot, nobody is going to lose sleep over it.

That’s where the defensive mechanisms start coming into play. What do you do if you’re a defender?

You're trying to check out 1,000 times a second? We'll rate limit you by IP.
You're using proxies to scale horizontally and bypass the rate limiting? We'll lock your account to one IP, or start banning the hosting providers you use for proxies.
You start generating thousands of fake accounts so you don't need as many proxies? We'll implement an account reputation score so brand-new accounts can't check out premium products.
You're using stolen credentials for real user accounts? We'll start checking whether the browser details look like a machine or a human. Can a human hold a phone with zero gyroscopic motion and a screen size of 0? I bet not.
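That escalation ladder amounts to a layered gate on checkout. Here's a toy sketch of the idea; the thresholds, field names, and reputation score are all assumptions for illustration, not any vendor's actual checks.

```python
import time
from collections import defaultdict, deque

WINDOW_S = 1.0          # rate-limit window in seconds
MAX_PER_IP = 5          # checkouts allowed per IP per window
MIN_REPUTATION = 0.3    # brand-new accounts score near 0

_recent = defaultdict(deque)  # ip -> timestamps of recent attempts


def allow_checkout(ip: str, account_reputation: float,
                   screen_w: int, screen_h: int, gyro_variance: float) -> bool:
    now = time.monotonic()
    q = _recent[ip]
    # Layer 1: per-IP rate limiting (sliding window)
    while q and now - q[0] > WINDOW_S:
        q.popleft()
    if len(q) >= MAX_PER_IP:
        return False
    q.append(now)
    # Layer 2: account reputation (age, purchase history, prior behavior)
    if account_reputation < MIN_REPUTATION:
        return False
    # Layer 3: device sanity: humans don't have 0px screens or frozen gyros
    if screen_w <= 0 or screen_h <= 0 or gyro_variance == 0.0:
        return False
    return True
```

Each layer maps to one rung of the ladder above: rate limiting kills naive volume, reputation kills fresh fake accounts, and the device check kills headless automation, which is exactly why the next bot generation moved to stolen accounts, residential proxies, and simulated sensors.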

The end game of the modern bot is likely more advanced than anyone outside the community gives it credit for. Bots are simultaneously doing all of the following: scraping breach dumps and hacked accounts to find accounts with good reputation, combining those with residential IP blocks that have been hacked or otherwise act as open proxies, and then feeding all of that into neural network/ML algorithms that generate "human-like" data to make the activity look real. Of course, all of those packages get sent to addresses the bot owner has access to, even if they're fake addresses that just end up in the same place. USPS will still deliver to "123 Oak Street Unit 1" even if there is no Unit 1 because it's a house; same deal with "123 Oak Street Apt 3". All of that makes for a bot that looks exceedingly human: it's using a real account with real prior behavior, the address looks right enough to pass a cursory glance, the IP range is residential so a defender can't just blast the /24, and the device activity looks real enough that the only way to detect it is another ML algorithm.

Right now there are very few companies able to combat this type of behavior at scale, especially if they're running their own infrastructure and it is suddenly being assaulted by billions of HTTP requests from "consumers" all over the world.

Next up, we'll talk about what we in #infosec can learn from this and how to apply it elsewhere.

Part 2 of ??, more to follow later

#sneakerbot #cybersecurity #security #infosec #bots #sneakers #sneakerbots #productsecurity
