Krebs on Security
Last month Yours Truly got snookered by a too-good-to-be-true online scam in which some dirtball hijacked an Amazon merchant’s account and used it to pimp steeply discounted electronics that he never intended to sell. Amazon refunded my money, and the legitimate seller never did figure out how his account was hacked. But such attacks are becoming more prevalent of late as crooks increasingly turn to online crimeware services that make it a cakewalk to cash out stolen passwords.
The item at Amazon that drew me to this should-have-known-better bargain was a Sonos wireless speaker that is very pricey and as a consequence has hung on my wish list for quite some time. Then I noticed an established seller with great feedback on Amazon was advertising a “new” model of the same speaker for 32 percent off. So on March 4, I purchased it straight away — paying for it with my credit card via Amazon’s one-click checkout.
A day later I received a nice notice from the seller stating that the item had shipped. Even Amazon’s site seemed to be fooled because for several days Amazon’s package tracking system updated its progress slider bar steadily from left to right.
Suddenly the package seemed to stall, as did any updates about where it was or when it might arrive. This went on for almost a week. On March 10, I received an email from the legitimate owner of the seller’s account stating that his account had been hacked.
Identifying myself as a reporter, I asked the seller to tell me what he knew about how it all went down. He agreed to talk if I left his name out of it.
“Our seller’s account email address was changed,” he wrote. “One night everything was fine and the next morning our seller account had a email address not associated with us. We could not access our account for a week. Fake electronic products were added to our storefront.”
He couldn’t quite explain the fake tracking number claim, but nevertheless the tactic does seem to be part of an overall effort to delay suspicion on the part of the buyer while the crook seeks to maximize the number of scam sales in a short period of time.
“The hacker then indicated they were shipped with fake tracking numbers on both the fake products they added and the products we actually sell,” the seller wrote. “They were only looking to get funds through Amazon. We are working with Amazon to refund all money that were spent buying these false products.”
As these things go, the entire ordeal wasn’t awful — aside maybe from the six days spent in great anticipation of audiophilic nirvana (alas, after my refund I thought better of the purchase and put the item back on my wish list.) But apparently I was in plenty of good (or bad?) company.
The Wall Street Journal notes that in recent weeks “attackers have changed the bank-deposit information on Amazon accounts of active sellers to steal tens of thousands of dollars from each, according to several sellers and advisers. Attackers also have hacked into the Amazon accounts of sellers who haven’t used them recently to post nonexistent merchandise for sale at steep discounts in an attempt to pocket the cash.”
Perhaps fraudsters are becoming more brazen of late with hacked Amazon accounts, but the same scams mentioned above happen every day on plenty of other large merchandising sites. The sad reality is that hacked Amazon seller accounts have been available for years at underground shops for about half the price of a coffee at Starbucks.
The majority of this commerce is made possible by one or two large account credential vendors in the cybercrime underground, and these vendors have been collecting, vetting and reselling hacked account credentials at major e-commerce sites for years.
I have no idea where the thieves got the credentials for the guy whose account was used to fake sell the Sonos speaker. But it’s likely to have been from a site like SLILPP, a crime shop which specializes in selling hacked Amazon accounts. Currently, the site advertises more than 340,000 Amazon account usernames and passwords for sale.
The price is about USD $2.50 per credential pair. Buyers can select accounts by balance, country, associated credit/debit card type, card expiration date and last order date. Account credentials that also include the password to the victim’s associated email inbox can double the price.
If memory serves correctly, SLILPP started off years ago mainly as a PayPal and eBay accounts seller (hence the “PP”). “Slil” is transliterated Russian for “слил,” which in this context may mean “leaked,” as in password data that has leaked from other now public breaches. SLILPP has vastly expanded his store in the years since: It currently advertises more than 7.1 million credentials for sale from hundreds of popular bank and e-commerce sites.
The site’s proprietor has been at this game so long he probably deserves a story of his own soon, but for now I’ll say only that he seems to do a brisk business buying up credentials being gathered by credential-testing crime crews — cyber thieves who spend a great deal of time harvesting and enriching credentials stolen and/or leaked from major data breaches at social networking and e-commerce providers in recent years.
Fraudsters can take a list of credentials stolen from, say, the Myspace.com breach (in which some 427 million credentials were posted online) and see how many of those email address and password pairs from the MySpace accounts also work at hundreds of other bank and e-commerce sites.
Password thieves often then turn to crimeware-as-a-service tools like Sentry MBA, which can vastly simplify the process of checking a list of account credentials at multiple sites. To make their password-checking activity harder for retailers and banks to identify and block, these thieves often route the traffic from their password-guessing tools through legions of open Web proxies, hacked PCs or even stolen/carded cloud computing instances.
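The credential-stuffing idea behind these tools can be sketched as a toy model. This is purely illustrative Python with invented data; real tools like Sentry MBA automate the same check against live login pages, at scale, through proxies:

```python
# Toy model of credential stuffing: take email/password pairs leaked in
# one breach and count how many also unlock accounts elsewhere.  All of
# the data below is invented for illustration.

leaked = {                       # email -> password from "breach A"
    "alice@example.com": "hunter2",
    "bob@example.com":   "correct horse",
    "carol@example.com": "p@ssw0rd!",
}

other_site = {                   # email -> password on "site B"
    "alice@example.com": "hunter2",         # reused: falls immediately
    "bob@example.com":   "different-pass",  # unique: survives the breach
    "carol@example.com": "p@ssw0rd!",       # reused: falls immediately
}

hits = [email for email, pw in leaked.items()
        if other_site.get(email) == pw]

print(hits)   # the site-B accounts compromised by the breach-A list
```

The only thing that makes this profitable is password re-use: every account whose owner picked a unique password drops out of the hit list.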
PASSWORD RE-USE: THE ENGINE OF ALL ONLINE FRAUD
In response, many major retailers are being forced to alert customers when they see known account credential testing activity that results in a successful login (thus suggesting the user’s account credentials were replicated and compromised elsewhere). However, from the customer’s perspective, this is tantamount to the e-commerce provider experiencing a breach even though the user’s penchant for recycling their password across multiple sites is invariably the culprit.
There are a multitude of useful security lessons here, some of which bear repeating because their lack of general observance is the cause of most password woes today (aside from the fact that so many places still rely on passwords and stupid things like “secret questions” in the first place). First and foremost: Do not re-use the same password across multiple sites. Secondly, but equally important: Never re-use your email password anywhere else.
Also, with a few exceptions, password length is generally more important than password complexity, and complex passwords are difficult to remember anyway. I prefer to think in terms of “pass phrases,” which are more like sentences or verses that are easy to remember.
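The length-versus-complexity point is easy to check with a little arithmetic: for randomly chosen characters, entropy in bits is length times log2 of the alphabet size. A quick sketch (assuming truly random selection, which human-chosen phrases won't fully achieve):

```python
import math

def entropy_bits(length, alphabet_size):
    """Bits of entropy for a password of randomly chosen characters."""
    return length * math.log2(alphabet_size)

# 8 random characters drawn from ~94 printable ASCII symbols:
complex_short = entropy_bits(8, 94)     # about 52 bits
# 20 random lowercase letters (roughly a few short words run together):
simple_long = entropy_bits(20, 26)      # about 94 bits

print(round(complex_short, 1), round(simple_long, 1))
```

The long, "simple" string comes out far stronger than the short, complex one, and it is also the one you stand a chance of remembering.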
If you have difficulty recalling even unique passphrases, a password manager can help you pick and remember strong, unique passwords for each site you interact with, requiring only one strong master password to unlock any of them. Oh, and if the online account in question allows 2-factor authentication, be sure to take advantage of that.
I hope it’s clear that Amazon is just one of the many platforms where fraudsters lurk. SLILPP currently is selling stolen credentials for nearly 500 other banks and e-commerce sites. The full list of merchants targeted by this particularly bustling fraud shop is here (.txt file).
As for the “buyer beware” aspect of this tale, in retrospect there were several warning signs that I either ignored or neglected to assign much weight. For starters, the deal that snookered me was for a luxury product on sale for 32 percent off without much explanation as to why the apparently otherwise pristine item was so steeply discounted.
Also, while the seller had a stellar history of selling products on Amazon for many years (with overwhelmingly positive feedback on virtually all of his transactions) he did not have a history of selling the type of product that thieves tried to sell through his account. The old adage “If something seems too good to be true, it probably is,” ages really well in cyberspace.
It’s hard to understand all the ramifications of Oracle’s undo handling, and it’s not hard to find cases where the resulting effects are very confusing. A recent post on the OTN database forum resulted in one response insisting that the OP was obviously updating a table with frequent commits from one session while querying it from another, thereby generating a large number of undo reads in the querying session.
It’s a possible cause of the symptoms that had been described – although not the only possible cause, especially since the symptoms hadn’t been described completely. It’s actually possible to see this type of activity when there are no updates and no outstanding commits taking place at all on the target table. Unfortunately it’s quite hard to demonstrate this with a quick, simple script in recent versions of Oracle unless you do some insanely stupid things to make the problem appear – but I know how to do “insanely stupid” in Oracle, so here we go; first some data creation:
rem
rem     Script:  undo_rec_apply_2.sql
rem     Author:  Jonathan Lewis
rem     Dated:   March 2017
rem

create table t2(v1 varchar2(100));
insert into t2 values(rpad('x',100));
commit;

create table t1
nologging
pctfree 99 pctused 1
as
with generator as (
        select rownum id
        from dual
        connect by level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator v1,
        generator v2
where
        rownum <= 8e4 -- > comment to bypass WordPress formatting issue
;

alter table t1 add constraint t1_pk primary key(id);

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/
The t2 table is there as a target for a large number of updates from a session other than the one demonstrating the problem. The t1 table has been defined and populated in a way that puts one row into each of 80,000 blocks (though, with ASSM and my specific tablespace definition of uniform 1MB extents, the total space is about 80,400 blocks). I’ve got a primary key declaration that allows me to pick single rows/blocks from the table if I want to.
At this point I’m going to do a lot of updates to the main table using a very inefficient strategy to emulate the type of thing that can happen on a very large table with lots of random updates and many indexes to maintain:
begin
        for i in 1..800 loop
                update t1 set v1 = upper(v1) where id = 100 * i;
                execute immediate 'alter system switch logfile';
                execute immediate 'alter system flush buffer_cache';
                commit;
                dbms_lock.sleep(0.01);
        end loop;
end;
/

set transaction read only;
I’m updating every 100th row/block in the table with single row commits, but before each commit I’m switching log files and flushing the buffer cache.
This is NOT an experiment to try on a production system, or even a development system if there are lots of busy developers or testers around – and if you’re running your dev/test in archivelog mode (which, for some of your systems you should be) you’re going to end up with a lot of archived redo logs. I have to do this switch to ensure that the updated blocks are unpinned so that they will be written to disc and flushed from the cache by the flush buffer cache. (This extreme approach would not have been necessary in earlier versions of Oracle, but the clever developers at Oracle Corp. keep adding “damage limitation” touches to the code that I have to work around to create small tests.) Because the block has been flushed from memory before the commit the session will record a “commit cleanout failures: block lost” on each commit. By the time this loop has run to completion there will be 800 blocks from the table on disc needing a “delayed block cleanout”.
Despite the extreme brute force I use in this loop, there is a further very important detail that has to be set before this test will work (at least in the version of Oracle I used for my test runs). I had to start the database with the hidden parameter _db_cache_pre_warm set to false. If I didn’t have the database started with this feature disabled, Oracle would notice that the buffer cache had a lot of empty space and would “pre-warm” the cache by loading a few thousand blocks from t1 as I updated one row – with the side effect that the update from the previous cycle of the loop would be cleaned out on the current cycle of the loop. If you do run this experiment, remember to reset the parameter and restart the instance when you’ve finished.
I’ve finished this chunk of code with a call to “set transaction read only” – this emulates the start of a long-running query: it captures a point in time (through the current SCN) and any queries that run in the session from now on have to be read-consistent with that point in time. After doing this I need to use a second session to do a bit of hard work – in my case the following:
execute snap_rollstats.start_snap

begin
        for i in 1..10000 loop
                update t2 set v1 = upper(v1);
                update t2 set v1 = lower(v1);
                commit;
        end loop;
end;
/

execute snap_rollstats.end_snap
The calls to the snap_rollstats package simply read v$rollstat and give me a report of the changes in the undo segment statistics over the period of the loop. I’ve executed 10,000 transactions in the interval, which was sufficient on my system to use each undo segment header at least 1,000 times and (since there are 34 transaction table slots in each undo segment header) overwrite each transaction table slot about 30 times. You can infer from these comments that I had only 10 undo segments active at the time; your system may have many more (check the number of rows in v$rollstat), so you may want to scale up that 10,000 loop count accordingly.
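The arithmetic behind those estimates is straightforward:

```python
# Back-of-envelope check on the loop sizing described above.
transactions = 10_000     # commit cycles executed by the second session
undo_segments = 10        # active undo segments (rows in v$rollstat)
slots_per_header = 34     # transaction table slots per segment header

txns_per_segment = transactions / undo_segments            # 1000.0
overwrites_per_slot = txns_per_segment / slots_per_header  # about 29.4

print(round(overwrites_per_slot, 1))
```

If your v$rollstat shows, say, 30 active undo segments, tripling the loop count keeps the per-slot overwrite rate roughly the same.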
At this point, then, the only thing I’ve done since the start of my “long running query” is to update another table from another session. What happens when I do a simple count() from t1 that requires a full tablescan?
alter system flush buffer_cache;

execute snap_filestat.start_snap
execute snap_my_stats.start_snap

select count(v1) from t1;

execute snap_my_stats.end_snap
execute snap_filestat.end_snap
I’ve flushed the buffer cache to get rid of any buffered undo blocks – again an unreasonable thing to do in production but a valid way of emulating the aging out of undo blocks that would take place in a production system – and surrounded my count() with a couple of packaged calls to report the session stats and file I/O stats due to my query. (If you’re sharing your database then the file I/O stats will be affected by the activity of other users, of course, but in my case I had a private database.)
Here are the file stats:
--------------
Datafile Stats
--------------
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs   Max      Writes      Blocks   Avg Csecs   Max File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------   ---      ------      ------   ---------   --- ---------
    1          17          17       1.000        .065          17        .065           0        .000     6           0           0        .000    15 /u01/app/oracle/oradata/TEST/datafile/o1_mf_system_938s4mr3_.dbf
    3         665         665       1.000        .020         665        .020           0        .000     6           0           0        .000    15 /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         631      80,002     126.786        .000           2        .045         629        .000     6           0           0        .000    17 /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__cz1w7tz1_.dbf
As expected I’ve done a number of multiblock reads of my data tablespace for a total of roughly 80,000 blocks read. What you may not have expected is that I’ve done 665 single block reads of the undo tablespace.
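A quick sanity check on the “Avg Size” column for file 5, using the figures from the report above:

```python
# 631 reads against the t1 datafile returned 80,002 blocks in total
# (2 single-block reads plus 629 multiblock reads).
reads = 631
blocks = 80_002
avg_blocks_per_read = blocks / reads

print(round(avg_blocks_per_read, 3))   # matches the reported 126.786
```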
What have I been doing with all those undo blocks? Check the session stats:
Session stats
-------------
Name                                                               Value
----                                                               -----
transaction tables consistent reads - undo records applied        10,014
transaction tables consistent read rollbacks                          10
We’ve been reading undo blocks so that we can create read-consistent copies of the 10 undo segment headers that were active in my instance. We haven’t (and you’ll have to trust me on this, I can’t show you the stats that aren’t there!) reported any “data blocks consistent reads – undo records applied”.
If you want to see a detailed explanation of what has happened you’ll need to read Oracle Core (UK source), chapter 3 (and possibly chapter 2 to warm yourself up for the topic). In outline the following type of thing happens:
- Oracle gets to the first block updated in t1 and sees that there’s an ITL (interested transaction list) entry that hasn’t been marked as committed (we flushed the block from memory before the commit cleanout could take place so the relevant transaction is, apparently, still running and the row is still marked as locked).
- Let’s say the ITL entry says the transaction was for undo segment 34, transaction table slot 11, sequence 999. Oracle reads the undo segment header block for undo segment 34 and checks transaction table slot 11, which is now at sequence 1032. Oracle can infer from this that the transaction that updated the table has committed – but can’t yet know whether it committed before or after the start of our “long running query”.
- Somehow Oracle has to get slot 11 back to sequence 999 so that it can check the commit SCN recorded in the slot at that sequence number. This is where we see “undo records applied” to make the “transaction table read consistent”. It can do this because the undo segment header has a “transaction control” section in it that records some details of the most recent transaction started in that segment. When a transaction starts it updates this information, but saves the old version of the transaction control and the previous version of its transaction table slot in its first undo record, consequently Oracle can clone the undo segment header block, identify the most recent transaction, find its first undo record and apply it to unwind the transaction table information. As it does so it has also wound the transaction control section backwards one step, so it can use that (older) version to go back another step … and so on, until it takes the cloned undo segment header so far back that it takes our transaction table slot back to sequence 999 – and the job is done, we can now check the actual commit SCN. (Or, if we’re unlucky, we might receive an ORA-01555 before we get there)
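The unwind described above can be sketched with a toy model. This is purely illustrative Python, not Oracle’s real structures: every name here is invented, and the real transaction table and transaction control information are far more complex.

```python
# Toy model: each transaction's first undo record saves the previous
# transaction-control pointer and the previous contents of the slot it
# used, so a cloned header can be wound backwards one transaction at a
# time.  All names and structures here are invented for illustration.

def build_history(n_txns, n_slots):
    """Simulate transactions cycling round-robin through the slots."""
    slots = {s: {"seq": 0, "scn": 0} for s in range(n_slots)}
    control = None            # "transaction control": most recent txn
    undo = {}                 # txn id -> its first undo record
    for txn in range(n_txns):
        slot = txn % n_slots
        undo[txn] = {"prev_control": control,
                     "slot": slot,
                     "prev_slot_state": dict(slots[slot])}
        slots[slot] = {"seq": slots[slot]["seq"] + 1,
                       "scn": 1000 + txn}        # fake commit SCN
        control = txn
    return slots, control, undo

def wind_back(slots, control, undo, target_slot, target_seq):
    """Clone the header, apply first undo records until the target slot
    is back at the wanted sequence, then return its commit SCN."""
    clone = {s: dict(v) for s, v in slots.items()}
    ctl, applied = control, 0
    while clone[target_slot]["seq"] > target_seq:
        rec = undo[ctl]                    # most recent txn's first record
        clone[rec["slot"]] = dict(rec["prev_slot_state"])
        ctl = rec["prev_control"]          # transaction control steps back
        applied += 1
    return clone[target_slot]["scn"], applied

# 340 transactions over 34 slots leaves every slot at sequence 10;
# wind slot 11 back to sequence 9 to read the commit SCN recorded there.
slots, control, undo = build_history(n_txns=340, n_slots=34)
scn, applied = wind_back(slots, control, undo, target_slot=11, target_seq=9)
print(scn, applied)
```

Note how many records have to be applied just to take one slot back one sequence number: every younger transaction in the segment has to be unwound first, which is why a busy undo segment can produce a surprising volume of “transaction tables consistent reads - undo records applied” – and why ORA-01555 looms if the chain of undo records runs out before we get there.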
So – no changes to the t1 table during the query, but lots of undo records read because OTHER tables have been changing.
In my example the tablescan used direct path reads – so the blocks that went through delayed block cleanout were in private memory, which means they weren’t in the buffer cache and didn’t get written out to disc. When I flushed the buffer cache (again to emulate the aging out of undo blocks etc.) and repeated the tablescan, Oracle had to go through all that work of creating read-consistent transaction tables all over again.