Saturday, July 10, 2010

Tech-Ed 2010 Session Summaries 3: Mark Minasi on DNSSEC

Tech-Ed 2010 Session Summaries 3: Mark Minasi on DNSSEC: Why you care, What you can do, How Windows can help you.

Mark Minasi was on a tear at Tech Ed this year with several great sessions, including this impressively researched and engagingly delivered one.

The problem underlying this session is that the current design of DNS makes it quite possible for attackers to take over DNS name resolution and masquerade as (for example) banking domains. This is slowly changing, but between 2010 and 2014, as the window of opportunity closes, an intense wave of attacks is likely to take place. More on this at the end.

Background of the Problem: DNS “spoofing” (alternately, “cache poisoning”) has been possible for many years; however, recent research has made such hijacking much easier. In such an attack, an address such as blogspot.com is taken over, and traffic to it is redirected to another location. The implications for commerce need little elaboration. As we will see, there are solutions, but every link in the DNS name-resolution chain must be part of the solution for this to work.

DNSSEC and zone signing are a solution. Now, for most of us, signing our own zone is not critical. However, we _do_ need a DNS infrastructure that allows us to validate the DNS information from, say, our bank. And Windows now supports this on both the server and the client.

So how does this work?

1. When you seek an address, your request goes to your ISP’s DNS server.

2. Your ISP’s DNS server asks the authoritative server for the IP address, indicating the port to be used for the response and designating a transaction ID (TXID) to accompany it (see the sketch after this list).

3. The authoritative server sends the answer to that port, bearing that TXID, and your ISP’s DNS server passes the address along to you.

4. Then your ISP’s DNS server stops listening.
And 99.9% of the time, this is what happens. But there is a risk that goes like this.

1. When you seek an address, your request goes to your ISP’s DNS server.

2. Your ISP’s DNS server asks the authoritative server for the IP address, indicating the response port and designating a TXID, just as before.

3. This time a forged response beats the real one to the designated port, bearing the designated TXID.

4. The bogus address is adopted, cached, and sent to you. The ISP’s DNS server stops listening. You are screwed.

The only form of authentication in this exchange is the TXID and the selected port number. But they were really meant to be traffic codes, not authentication tools, and hence are quite weak.

Brute force FTW: So, then, trial and error guarantees a certain number of successes. Moreover, port assignment has historically been non-random and therefore predictable; traffic between DNS servers for many years always used port 53, across different operating systems. On top of that, until 1997, TXIDs were sequential, not random. This made successful attacks fairly simple. In the past few years Windows has improved the randomization of both TXIDs and port numbers.
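Some back-of-the-envelope arithmetic (mine, not Mark’s) shows why that randomization matters:

    # Rough odds that a single forged response matches what the
    # resolver expects. Illustrative arithmetic, not attack modeling.
    txids = 2 ** 16          # 65,536 possible transaction IDs
    ports = 65536 - 1024     # ~64,512 ephemeral source ports

    # Sequential TXIDs on fixed port 53: effectively no guessing needed.
    # Random TXID but fixed port: one chance in 65,536 per forged packet.
    print(f"fixed port: 1 in {txids:,}")
    # Random TXID and random port: one chance in ~4.2 billion per packet.
    print(f"randomized: 1 in {txids * ports:,}")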

The danger is that a well-organized attack could take control of several large ISPs’ DNS servers for an hour or so, which would be long enough to harvest vast numbers of usernames and passwords for major financial institutions. A bot army of (according to Minasi) 50+ million machines could continually try to poison the caches on the DNS servers of major ISPs; eventually some success would be certain.

Background on RRs (Resource Records): With DNSSEC, we “sign” a zone of ours. This is done by adding new types of resource records to the ones we already know, such as MX, NS, A, etc. In this system, every resource record gets a companion record known as an RRSIG: we create a private/public key pair, hash the record, and encrypt that hash with the private key to produce the RRSIG. Then, to verify the record, visitors/clients use the public key to decode the RRSIG. This public key, for convenience’s sake, is itself stored in another new type of DNS record, the DNSKEY, which is long, ugly, and contains an identifier for the public key used.
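For a rough picture, a signed zone’s entries look something like this (my illustration, with truncated placeholder key and signature data; the field layout follows the DNSSEC RFCs rather than anything shown in the session):

    www.example.com. 3600 IN A      192.0.2.10
    www.example.com. 3600 IN RRSIG  A 5 3 3600 20100810000000 20100711000000 12345 example.com. mBz4Kd...=
    example.com.     3600 IN DNSKEY 256 3 5 AwEAAb...   ; the long, ugly public-key record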

So, how does this work? First we establish that a putative zone is internally consistent. Here’s how.

1. Get the A (address) record for the target, a very standard task. (In fact, all of this applies to other types of resource records as well.)

2. Run this record through a hashing algorithm. This yields a hashed version of the A record.

3. Then we go back to the target, whose RRSIG record contains an encrypted hash of that same A record.

4. Grab that RRSIG and the DNSKEY, which contains the decryption key for the RRSIG.

5. The DNSKEY is run against the RRSIG. This should yield a hash identical to the one from step 2 (see the sketch below). This proves the zone is internally consistent, but that’s all. A fake could be internally consistent, after all.
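Here is a minimal sketch of that verification idea in Python, assuming RSA/SHA-1 (DNSSEC algorithm 5) and the third-party cryptography package; real validators canonicalize the records first, and none of this is code from the session:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    a_record = b"www.example.com. 3600 IN A 192.0.2.10"

    # Zone owner: sign the record with the private key. The result is,
    # in effect, the RRSIG published alongside the A record.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    rrsig = private_key.sign(a_record, padding.PKCS1v15(), hashes.SHA1())

    # Client: hash the fetched record and check it against the RRSIG,
    # using the public key published in the DNSKEY (steps 1-5 above).
    dnskey = private_key.public_key()
    try:
        dnskey.verify(rrsig, a_record, padding.PKCS1v15(), hashes.SHA1())
        print("hashes match: the zone is internally consistent")
    except InvalidSignature:
        print("mismatch: the record or the signature was tampered with")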

Process-wise, this is smart, but slow. And obviously our zones are going to be a _lot_ bigger as a result of this.

The answer then is the DNSKEY. We can verify this against the parent zone, the grandparent zone, etc. Here’s how:

1. Get DNSKEY from target and create a hash from it.

2. This is then verified against the hash the top-level domain holds for this key (its DS record). If the two are consistent, then the DNSKEY is reliable, and matching hashes from it can be trusted.

The top-level domain, if we do not trust it, can itself be verified against the root domain.
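In code terms the check is just hash-and-compare; a simplified sketch (real DS digests also cover the owner name, and the values here are placeholders):

    import hashlib

    def ds_digest(dnskey_rdata: bytes) -> str:
        # A real DS digest covers the owner name plus the DNSKEY RDATA;
        # this collapses it to a bare SHA-1 for illustration.
        return hashlib.sha1(dnskey_rdata).hexdigest()

    # Step 1: get the DNSKEY from the target zone and hash it.
    child_dnskey = b"example.com. DNSKEY 257 3 5 AwEAAb..."  # placeholder
    computed = ds_digest(child_dnskey)

    # Step 2: compare against the DS value the parent zone publishes.
    ds_from_parent = "0123abcd..."  # placeholder, fetched from the parent
    print("chain of trust holds:", computed == ds_from_parent)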

One problem that can come up is that if we do not have a record matching a query, a response is sent nonetheless. This means that an adversary could send numerous requests for invalid server names and learn about our organization by studying the responses. An additional feature of DNSSEC, NSEC3, remedies this by sending back hashes, not hostnames. Strangely, NSEC3 is not supported on Windows Server 2008 R2.
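The NSEC3 trick, in miniature, is a salted and iterated hash of the queried name; this Python sketch follows the RFC 5155 recipe in spirit (real implementations hash the wire-format name):

    import hashlib

    def nsec3_hash(name: bytes, salt: bytes, iterations: int) -> bytes:
        # Hash once, then re-hash the digest the specified number of
        # times, salting each round, so real names cannot be enumerated.
        digest = hashlib.sha1(name + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        return digest

    print(nsec3_hash(b"payroll.example.com", b"\xab\xcd", 10).hex())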

How to Implement? Given these immense benefits for thwarting domain takeovers and cache poisoning, how do we make this work? Well, many of us don’t need it; no one’s going to poison your cache unless you’re worth the effort. So DNSSEC is of special interest to banks, financial organizations, Dead.net, etc. Beyond this, though, all the servers between you and your target must be DNSSEC-aware for this to work. Oh, and your PC’s DNS client has to be DNSSEC-aware too.

The issue, as you can see, is that all the computers between you and your target have to be DNSSEC-aware, and not all are. The trend, though, is for companies increasingly to sign their zones. To do this, you need Windows Server 2008 R2 for your DNS servers, and the zones that need signing are the Internet-facing ones.

Workarounds: Currently, VeriSign is scheduled to implement DNSSEC this year, and the .com top-level domain should be onboard by 2011. Fortunately, while we wait for the rest of the world to catch up, we can use workarounds. Some top-level domains have already been signed, such as .org and .se (Sweden, of all countries!), and these can be used as bases for trust as an interim step; they are known as “islands of trust”. Take a hash of their DNSKEYs and use it on your server. This is quite similar to the way we use root certificates: just as we trust VeriSign, we trust .org or .se.
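On W2K8S R2 such an anchor can be loaded with DNSCMD (or through DNS Manager). The invocation runs along these lines, where the key data is a truncated placeholder and the argument order is my reconstruction rather than anything shown in the session, so check it against the dnscmd documentation:

    dnscmd /trustanchoradd se. DNSKEY 257 3 5 AwEAAb...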

ICANN created a listing of such anchors in February 2009; it will be taken down when the other top-level domains implement DNSSEC themselves. These anchors were not baked into W2K8S R2, but they can be added. The continuing problem is that XP and Vista cannot work with this system; Windows 7 and W2K8S R2 can. With them, DNSCMD does the signing if asked, and it works with static zones; signing offline and serving static zones avoids excessive, bandwidth-sagging traffic.

In addition, the Windows 7 / W2K8S R2 version works by offline signing, in which the zone is signed and then uploaded. One significant wrinkle is that the zone-signing keys should be changed on a monthly basis. However, whenever this is done, the key-changing client needs to update the DS record (an entry in the zone file, remember) held by its upstream DNS parents. This seems like a phenomenal amount of traffic. And it would be. Ideally, we should have some way to support signing, but without the huge traffic surge.


Two Keys Are Better Than One: The solution to this is to have two keys rather than one. The first, the ZSK or Zone-Signing Key, handles most of the zone and does get changed every month, but it puts no DS record upstream, so it can be changed as often as desired without incurring traffic or service charges. The second, the KSK or Key-Signing Key, signs the ZSK, and it is _this_ key that goes in the parent zone; it rarely changes. This gives the reliability of an upstreamed key (i.e., one with a DS record in a parent) together with the convenience of one that can be quickly and easily changed for local purposes.

Creating Them: Mark quickly covers the process of creating the keys, noting the use of DNSCMD for this, and giving examples of the commands:

1. Get a text copy of the zone (from an AD-integrated zone):

    dnscmd /zoneexport zonename exportfilename

2. Create and back up the KSK and the ZSK. For the KSK:

    dnscmd.exe /offlinesign /genkey /alg rsasha1 /length 2048 /flags KSK /zone yourcompany.com /sscert /friendlyname KSK-yourcompany.com

Then, for the ZSK:

    dnscmd.exe /offlinesign /genkey /alg rsasha1 /length 2048 /zone yourcompany.com /sscert /friendlyname ZSK-yourcompany.com

3. Sign the zone (a fuller form of the command follows below):

    dnscmd /offlinesign /signzone
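The bare /signzone shown in the session notes is abbreviated. As best I can reconstruct from the R2 offline-signing guidance (so treat these switches as assumptions to verify), the full invocation names the input file, output file, zone, and both keys:

    dnscmd /offlinesign /signzone /input yourcompany.com.unsigned /output yourcompany.com.dns /zone yourcompany.com /signkey /cert /friendlyname KSK-yourcompany.com /signkey /cert /friendlyname ZSK-yourcompany.com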

Near the end Mark also covered trust anchors. He reiterated that the need for these will fade within two years as more top-level domains come onboard with DNSSEC. That being said, for the current need, he showed how to find them, how to read their nomenclature, and how to use them.

Client-Side: The client-side aspect of this received little discussion until the very end, mainly because there was little to say. Only Windows 7 has a DNSSEC-aware DNS client, and it offers the _option_ of requiring signatures, through the Name Resolution Policy Table. Obviously the reason this is optional is that most sites still do not sign their zones. The Table is enabled through a rule in GPEDIT (under Windows Settings > Name Resolution Policy), followed by a restart of the DNS client.
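Once the rule is in place, you can see what the client will actually enforce. If memory serves, these netsh commands on Windows 7 display the configured and effective NRPT entries:

    netsh namespace show policy
    netsh namespace show effectivepolicy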

At Tech-Ed, Mark discussed the short-term implications of this. The current vulnerability to hijacking and poisoning, exploited today by thousands or millions of bots, will end within two or three years. The natural implication for criminal organizations is that their window of opportunity is closing. As a result, it’s likely that some serious and sustained attacks will take place in the next few years.
