The Domain Name System wasn't designed to work with Internet firewalls. It's a testimony to the flexibility of DNS and of the BIND implementation that you can configure DNS to work with, or even through, an Internet firewall.
That said, configuring BIND to work in a firewalled environment, although not difficult, takes a good, complete understanding of DNS and a few of BIND's more obscure features. Describing it also requires a large portion of this chapter, so here's a roadmap.
We start by describing the two major families of Internet firewall software - packet filters and application gateways. The capabilities of each family have a bearing on how you'll need to configure BIND to work through the firewall. The next section details the two most common DNS architectures used with firewalls, forwarders and internal roots, and describes the advantages and disadvantages of each. Then we introduce a new feature, conditional forwarding, which combines the best of internal roots and forwarders. Finally, we discuss shadow namespaces and the configuration of the bastion host, the host at the core of your firewall system.
Before you start configuring BIND to work with your firewall, it's important you understand what your firewall is capable of. Your firewall's capabilities may influence your choice of DNS architecture and will determine how you implement it. If you don't know the answers to the questions in this section, track down someone in your organization who does know and ask. Better yet, work with your firewall's administrator when designing your architecture to ensure it will coexist with the firewall.
Note that this is far from a complete explanation of Internet firewalls. These few paragraphs only describe the two most common types of Internet firewalls, and only in enough detail to show how the differences in their capabilities impact name servers. For a comprehensive treatment of Internet firewalls, see Brent Chapman and Elizabeth Zwicky's Building Internet Firewalls (O'Reilly & Associates).
The first type of firewall we'll cover is the packet filtering firewall. Packet filtering firewalls operate largely at the transport and network levels of the TCP/IP stack (layers three and four of the OSI reference model, if you dig that). They decide whether to route a packet based upon packet-level criteria like the transport protocol (i.e., whether it's TCP or UDP), the source and destination IP addresses, and the destination port (see Figure 15.1).
What's most important to us about packet filtering firewalls is that you can typically configure them to allow DNS traffic selectively between hosts on the Internet and your internal hosts. That is, you can let an arbitrary set of internal hosts communicate with Internet name servers. Some packet filtering firewalls can even permit your name servers to query name servers on the Internet, but not vice versa. All router-based Internet firewalls are packet filtering firewalls. Check Point's FireWall-1, Cisco's PIX, and Sun's SunScreen are popular commercial packet filtering firewalls.
Application gateways operate at the application protocol level, several layers higher in the OSI reference model than most packet filters (Figure 15.2). In a sense, they "understand" the application protocol in the same way a server for that particular application would. An FTP application gateway, for example, can make the decision to allow or deny a particular FTP operation, like a RETR (a get) or a STOR (a put).
The bad news, and what's important for our purposes, is that most application gateway firewalls handle only TCP-based application protocols. DNS, of course, is largely UDP-based, and we know of no application gateways for DNS. This implies that if you run an application gateway firewall, your internal hosts will likely not be able to communicate directly with name servers on the Internet.
The popular Firewall Toolkit from Trusted Information Systems (TIS) is a suite of application gateways for common Internet protocols like Telnet, FTP, and HTTP. TIS's Gauntlet product is also based on application gateways, as is Raptor's Eagle Firewall.
Note that these two categories of firewall are really just generalizations. The state of the art in firewalls changes very quickly, and by the time you read this, you may have a firewall that includes an application gateway for DNS . Which family your firewall falls into is only important because it suggests what that firewall is capable of; what's more important is whether your particular firewall will let you permit DNS traffic between arbitrary internal hosts and the Internet.
The simplest configuration is to allow DNS traffic to pass freely through your firewall (assuming you can configure your firewall to do that). That way, any internal name server can query any name server on the Internet, and any Internet name server can query any of your internal name servers. You don't need any special configuration.
Unfortunately, this is a really bad idea, for a number of reasons:
The developers of BIND are constantly finding and fixing security-related bugs in the BIND code. Consequently, it's important to run the latest released version of BIND, especially on name servers that are directly exposed to the Internet. If only one or a few of your name servers communicate directly with name servers on the Internet, upgrading them to a new version is easy. If any name server on your network can, you have to upgrade every last one of them, which is quite another story.
Even if you're not running a name server on a particular host, a hacker might be able to take advantage of the fact that you allow DNS traffic through your firewall to attack that host. For example, a co-conspirator working on the inside could set up a Telnet daemon listening on the host's DNS port, allowing the hacker to telnet right in.
For the rest of this chapter, we'll try to set a good example.
Given the dangers of allowing bidirectional DNS traffic through the firewall unrestricted, most organizations elect to limit the internal hosts that can "talk DNS" to the Internet. With an application gateway firewall, or any firewall that can't pass DNS traffic, the only host that can communicate with Internet name servers is the bastion host (see Figure 15.3).
With a packet-filtering firewall, the firewall's administrator can configure the firewall to let any set of internal name servers communicate with Internet name servers. Often, this is a small set of hosts that run name servers under the direct control of the domain administrator (see Figure 15.4).
Name servers that can query name servers on the Internet directly don't require any special configuration. Their hints files contain the Internet's root name servers, which enables them to resolve Internet domain names. Internal name servers that can't query name servers on the Internet, however, need to forward the queries they can't resolve to one of the name servers that can. This is done with the forwarders directive or substatement, introduced in Chapter 10, Advanced Features and Security.
Figure 15.5 illustrates a common forwarding setup, with internal name servers forwarding queries to a name server running on a bastion host.
At Movie U., we put in a firewall to protect ourselves from the Big Bad Internet several years ago. Ours is a packet-filtering firewall, and we negotiated with our firewall administrator to allow DNS traffic between Internet name servers and two of our name servers, terminator.movie.edu and wormhole.movie.edu. Here's how we configured the other internal name servers at the university. For our BIND 8 name servers:
options {
    forwarders { 192.249.249.1; 192.249.249.3; };
    forward only;
};
and for our BIND 4 name servers:
forwarders 192.249.249.3 192.249.249.1
options forward-only
(We vary the order in which the forwarders appear to help spread the load between them.)
When an internal name server receives a query for a name it can't resolve locally, like an Internet domain name, it forwards that query to one of our forwarders, which can resolve the name using name servers on the Internet. Simple!
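Incidentally, the forward only substatement (forward-only in BIND 4) matters here: without it, a BIND server that gets no useful response from its forwarders falls back to attempting iterative resolution on its own, which is futile from behind the firewall. A minimal sketch, for contrast, of the default forward first behavior in BIND 8:
options {
    forwarders { 192.249.249.1; 192.249.249.3; };
    // no "forward only" here, so the server tries the forwarders first,
    // then falls back to iterative resolution -- which the firewall blocks
};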
Unfortunately, it's a little too simple. Forwarding starts to get in the way once you implement subdomains or build an extensive network. To explain what we mean, take a look at part of the configuration file on zardoz.movie.edu:
options {
    directory "/usr/local/named";
    forwarders { 192.249.249.3; 192.249.249.1; };
};
zone "movie.edu" {
    type slave;
    file "db.movie";
    masters { 192.249.249.3; };
};
zardoz.movie.edu is a slave for movie.edu and uses our two forwarders. What happens when zardoz receives a query for a name in fx.movie.edu? zardoz, as an authoritative movie.edu name server, has the NS records that delegate fx.movie.edu to its authoritative name servers. But it's also been configured to forward queries it can't resolve locally to terminator and wormhole. Which will it do?
It turns out that zardoz will ignore the delegation information and forward the query to terminator. That'll work, since terminator will receive the recursive query and ask an fx.movie.edu name server on zardoz's behalf. But it's not particularly efficient, since zardoz could easily have sent the query directly.
Now imagine the scale of the network is much larger: a corporate network that spans many continents, with tens of thousands of hosts and hundreds or thousands of name servers. All of the internal name servers that don't have direct Internet connectivity - the vast majority of them - use a small set of forwarders. What are the problems with this picture?
If the forwarders fail, your name servers lose the ability to resolve both Internet domain names and internal domain names that they don't have cached or in authoritative data.
The forwarders will have an enormous query load placed on them. This is both because of the large number of internal name servers that use them and because the queries are recursive and require a good deal of work to answer.
Imagine two internal name servers, authoritative for west.acmebw.com and east.acmebw.com, respectively, both on the same network segment in Boulder, Colorado. Both are configured to use the company's forwarder in Bethesda, Maryland. For the west.acmebw.com name server to resolve a name in east.acmebw.com, it sends a query to the forwarder in Bethesda. The forwarder in Bethesda then sends a query back to Boulder to the east.acmebw.com name server, the original querier's neighbor. The east.acmebw.com name server replies by sending a response back to Bethesda, which the forwarder sends back to Boulder.
In a traditional configuration with root name servers, the west.acmebw.com name server would quickly have learned that an east.acmebw.com name server was next door, and would favor it (because of its low round-trip time). Using forwarders "short-circuits" the normally efficient resolution process.
The upshot is that forwarding is fine for small networks and simple namespaces, but probably inadequate for large networks and complex namespaces. We found this out the hard way at Movie U. as our network grew, and were forced to implement internal roots.
If you want to avoid the scalability problems of forwarding, you can set up your own root name servers. These internal roots will serve only the name servers in your organization. They'll only know about the portions of the namespace relevant to your organization.
What good are they? By using an architecture based on root name servers, you gain the scalability of the Internet's namespace (which should be good enough for most companies), plus redundancy, distributed load, and efficient resolution. You can have as many internal roots as the Internet has roots - thirteen or so - whereas having that many forwarders may be an undue security exposure and a configuration burden. Most of all, the internal roots don't get used frivolously. Name servers need to consult an internal root only when they time out the NS records for your top-level zones. Using forwarders, name servers may have to query a forwarder once per resolution.
The moral of our story is that if you have, or intend to have, a large name space and lots of internal name servers, internal root name servers will scale better than any other solution.
Since name servers "lock on" to the closest root name server by favoring the one with the lowest round-trip time, it pays to pepper your network with internal root name servers. If your organization's network spans the U.S., Europe, and the Pacific Rim, consider locating at least one internal root name server on each continent. If you have three major sites in Europe, give each of them an internal root.
Here's how an internal root name server is configured. An internal root delegates directly to any domains you administer. For example, on the movie.edu network, the root zone's data file would contain:
movie.edu.            86400 IN NS terminator.movie.edu.
                      86400 IN NS wormhole.movie.edu.
                      86400 IN NS zardoz.movie.edu.
terminator.movie.edu. 86400 IN A  192.249.249.3
wormhole.movie.edu.   86400 IN A  192.249.249.1
                      86400 IN A  192.253.253.1
zardoz.movie.edu.     86400 IN A  192.249.249.9
                      86400 IN A  192.253.253.9
On the Internet, this information would appear in the edu name servers' databases. On the movie.edu network, of course, there aren't any edu name servers, so you delegate directly to movie.edu from the root.
Notice that this doesn't contain delegation to fx.movie.edu or any other subdomain of movie.edu. The movie.edu name servers know which name servers are authoritative for all movie.edu subdomains, and all queries for information in those subdomains will pass through the movie.edu name servers, so there's no need to delegate them here.
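For reference, here's roughly what that delegation looks like in the movie.edu zone data itself (not in db.root); the addresses we've shown for the fx.movie.edu name servers are illustrative, so treat the A records as a sketch:
fx.movie.edu.             86400 IN NS bladerunner.fx.movie.edu.
                          86400 IN NS outland.fx.movie.edu.
                          86400 IN NS alien.fx.movie.edu.
bladerunner.fx.movie.edu. 86400 IN A  192.253.254.2
outland.fx.movie.edu.     86400 IN A  192.253.254.3
alien.fx.movie.edu.       86400 IN A  192.253.254.4
The glue A records are needed in db.movie because the fx.movie.edu name servers' names are inside the very zone being delegated.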
We also need to delegate from the internal roots to the in-addr.arpa domains that correspond to the networks movie.edu uses:
249.249.192.in-addr.arpa. 86400 IN NS terminator.movie.edu.
                          86400 IN NS wormhole.movie.edu.
                          86400 IN NS zardoz.movie.edu.
253.253.192.in-addr.arpa. 86400 IN NS terminator.movie.edu.
                          86400 IN NS wormhole.movie.edu.
                          86400 IN NS zardoz.movie.edu.
254.253.192.in-addr.arpa. 86400 IN NS bladerunner.fx.movie.edu.
                          86400 IN NS outland.fx.movie.edu.
                          86400 IN NS alien.fx.movie.edu.
20.254.192.in-addr.arpa.  86400 IN NS bladerunner.fx.movie.edu.
                          86400 IN NS outland.fx.movie.edu.
                          86400 IN NS alien.fx.movie.edu.
Notice that we did include delegation for the 254.253.192.in-addr.arpa and 20.254.192.in-addr.arpa zones, even though they correspond to the fx.movie.edu zone. We didn't need to delegate to fx.movie.edu, because we'd already delegated to its parent. The movie.edu name servers delegate to fx.movie.edu, so by transitivity the roots delegate to fx.movie.edu. Since neither of the other in-addr.arpa zones is a parent of 254.253.192.in-addr.arpa or 20.254.192.in-addr.arpa, we needed to delegate both zones from the root. As we've covered earlier, we don't need to add address records for the three Special Effects name servers, bladerunner, outland, and alien, because a remote name server can already find their addresses by following delegation from movie.edu.
All that's left is to add an SOA record for the root zone and NS records for this internal root name server and any others:
.   IN SOA rainman.movie.edu. hostmaster.movie.edu. (
                1        ; serial
                86400    ; refresh
                3600     ; retry
                604800   ; expire
                86400 )  ; minimum
    IN NS rainman.movie.edu.
    IN NS awakenings.movie.edu.
rainman.movie.edu.    604800 IN A 192.249.249.254
awakenings.movie.edu. 604800 IN A 192.253.253.254
rainman.movie.edu and awakenings.movie.edu are the hosts running internal root name servers. We shouldn't run an internal root on a bastion host, because if a name server on the Internet accidentally queries it for data it's not authoritative for, the internal root will respond with its list of roots - all internal!
So the whole db.root file (by convention, we call the root zone's data file db.root) looks like this:
.   IN SOA rainman.movie.edu. hostmaster.movie.edu. (
                1        ; serial
                86400    ; refresh
                3600     ; retry
                604800   ; expire
                86400 )  ; minimum
    IN NS rainman.movie.edu.
    IN NS awakenings.movie.edu.
rainman.movie.edu.    604800 IN A 192.249.249.254
awakenings.movie.edu. 604800 IN A 192.253.253.254

movie.edu.            86400 IN NS terminator.movie.edu.
                      86400 IN NS wormhole.movie.edu.
                      86400 IN NS zardoz.movie.edu.
terminator.movie.edu. 86400 IN A  192.249.249.3
wormhole.movie.edu.   86400 IN A  192.249.249.1
                      86400 IN A  192.253.253.1
zardoz.movie.edu.     86400 IN A  192.249.249.9
                      86400 IN A  192.253.253.9

249.249.192.in-addr.arpa. 86400 IN NS terminator.movie.edu.
                          86400 IN NS wormhole.movie.edu.
                          86400 IN NS zardoz.movie.edu.
253.253.192.in-addr.arpa. 86400 IN NS terminator.movie.edu.
                          86400 IN NS wormhole.movie.edu.
                          86400 IN NS zardoz.movie.edu.
254.253.192.in-addr.arpa. 86400 IN NS bladerunner.fx.movie.edu.
                          86400 IN NS outland.fx.movie.edu.
                          86400 IN NS alien.fx.movie.edu.
20.254.192.in-addr.arpa.  86400 IN NS bladerunner.fx.movie.edu.
                          86400 IN NS outland.fx.movie.edu.
                          86400 IN NS alien.fx.movie.edu.
The named.conf file on both of the internal root name servers, rainman and awakenings, contains the lines:
zone "." {
    type master;
    file "db.root";
};
Or, for a BIND 4 server's named.boot file:
primary . db.root
This replaces a zone statement of type hint or a cache directive - a root name server doesn't need a cache file to tell it where the other roots are; it can find that in db.root. Did we really mean that each root name server is a primary for the root domain? Actually, that depends on the version of BIND you're running. BIND versions after 4.9 will let you declare a server a slave for the root domain, but BIND 4.8.3 and earlier insist that all root name servers load db.root as primaries.
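On a post-4.9 server like awakenings, for example, the slave configuration might look something like this (a sketch; the backup file name is our own choice, and we've assumed rainman is the primary):
zone "." {
    type slave;
    file "bak.root";
    masters { 192.249.249.254; };
};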
If you don't have a lot of idle hosts sitting around that you can turn into internal roots, don't despair! Any internal name server (i.e., one that's not running on a bastion host or outside your firewall) can serve double duty as an internal root and as an authoritative name server for whatever other zones you need it to load. Remember, a single name server can be authoritative for many, many zones, including the root.
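For instance, a single named.conf can happily load the root zone alongside an ordinary zone; a minimal sketch:
zone "." {
    type master;
    file "db.root";
};
zone "movie.edu" {
    type slave;
    file "db.movie";
    masters { 192.249.249.3; };
};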
Once you've set up internal root name servers, configure all your name servers on hosts anywhere on your internal network to use them. Any name server running on a host without direct Internet connectivity should list the internal roots in its hints file:
; Internal db.cache file, for movie.edu hosts without direct
; Internet connectivity
;
; Don't use this cache file on a host with Internet connectivity!
;
.                     99999999 IN NS rainman.movie.edu.
                      99999999 IN NS awakenings.movie.edu.
rainman.movie.edu.    99999999 IN A  192.249.249.254
awakenings.movie.edu. 99999999 IN A  192.253.253.254
Name servers running on hosts using this cache file will be able to resolve names in movie.edu and in Movie U.'s in-addr.arpa domains, but not outside of those domains.
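The db.cache file itself is loaded the way any hints file is: with a zone statement of type hint in BIND 8, or a cache directive in BIND 4:
zone "." {
    type hint;
    file "db.cache";
};
or:
cache . db.cache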
To tie together how this whole scheme works, let's go through an example of name resolution on an internal caching-only name server using these internal root name servers. First, the internal name server receives a query for a domain name in movie.edu, say the address of gump.fx.movie.edu. If the internal name server doesn't have any "better" information cached, it starts by querying an internal root name server. If it has communicated with the internal roots before, it has a round-trip time associated with each, which tells it which of the internal roots is responding to it most quickly. It sends a nonrecursive query to that internal root for gump.fx.movie.edu's address. The internal root answers with a referral to the movie.edu name servers on terminator.movie.edu, wormhole.movie.edu, and zardoz.movie.edu. The caching-only name server follows up by sending another nonrecursive query to one of the movie.edu name servers for gump's address. The movie.edu name server responds with a referral to the fx.movie.edu name servers. The caching-only name server sends the same nonrecursive query for gump's address to one of the fx.movie.edu name servers, and finally receives a response.
Contrast this with the way a forwarding setup would have worked. Let's imagine that instead of using internal root name servers, our caching-only name server were configured to forward queries first to terminator and then to wormhole. In that case, the caching-only name server would have checked its cache for the address of gump.fx.movie.edu and, not finding it, would have forwarded the query to terminator. terminator would have queried an fx.movie.edu name server on the caching-only name server's behalf and returned the answer. Should the caching-only name server need to look up another name in fx.movie.edu, it would still ask the forwarder, even though the forwarder's response to the query for gump.fx.movie.edu's address may have contained the names and addresses of the fx.movie.edu name servers.
But wait! That's not all internal roots will do for you. We talked about getting mail to the Internet without changing sendmail's configuration all over the network.
Wildcard records are the key to getting mail to work - specifically, wildcard MX records. Let's say we'd like mail to the Internet to be forwarded through postmanrings2x.movie.edu, the Movie U. bastion host, which has direct Internet connectivity. Then adding these records to db.root:
*      IN MX 5  postmanrings2x.movie.edu.
*.edu. IN MX 10 postmanrings2x.movie.edu.
will get the job done. We need the *.edu MX record in addition to the * record because of the DNS wildcard production rules we described in the wildcards section in Chapter 10. Since there are explicit data for movie.edu in the zone, the first wildcard won't match movie.edu or any other subdomains of edu. We need another, explicit wildcard record for edu to match these domains.
Now mailers on our internal movie.edu hosts will send mail addressed to Internet domains to postmanrings2x for forwarding. For example, mail addressed to nic.ddn.mil will match the first wildcard MX record:
% nslookup -type=mx nic.ddn.mil.      # matches the MX record for *
Server:  rainman.movie.edu
Address:  192.249.249.254

nic.ddn.mil     preference = 5, mail exchanger = postmanrings2x.movie.edu
postmanrings2x.movie.edu        internet address = 192.249.249.20
while mail addressed to vangogh.cs.berkeley.edu will match the second MX record:
% nslookup -type=mx vangogh.cs.berkeley.edu.      # matches the MX record for *.edu
Server:  rainman.movie.edu
Address:  192.249.249.254

vangogh.cs.berkeley.edu preference = 10, mail exchanger = postmanrings2x.movie.edu
postmanrings2x.movie.edu        internet address = 192.249.249.20
Once the mail reaches postmanrings2x, our bastion host, postmanrings2x's mailer will look up the MX records for these addresses itself. Since postmanrings2x will resolve the name using the Internet's namespace instead of the internal namespace, it will find the real MX records for the destination domain and deliver the mail. No changes to sendmail's configuration are necessary.
Another nice perk of this internal root scheme is that it gives you the ability to forward mail addressed to certain Internet domains through particular bastion hosts, if you have more than one. We can choose, for example, to send all mail addressed to uk domain recipients to our bastion host in London first, and then out onto the Internet. This can be very useful if our internal network's connectivity or reliability is better than the U.K.'s section of the Internet.
Movie U. has a private network connection to our sister university in London near Pinewood Studios. As it turns out, sending mail across our private link, and then through the Pinewood host to correspondents in the U.K., is more reliable than sending it directly across the Internet. So we add the following wildcard records to db.root:
; holygrail is at the other end of the U.K. Internet link
*.uk.                  IN MX 10 holygrail.movie.ac.uk.
holygrail.movie.ac.uk. IN A     192.168.76.4
Now, mail addressed to users in subdomains of uk will be forwarded to the host holygrail.movie.ac.uk at our sister university, which presumably has facilities to forward that mail to other domains in the U.K.
Unfortunately, just as forwarding has its problems, internal roots have their limitations. Chief among these is the fact that your internal hosts can't see the Internet namespace. On some networks, this isn't an issue, because most internal hosts don't have any direct Internet connectivity. On others, however, the Internet firewall or other software may require that all internal hosts have the ability to resolve names in the Internet's namespace. For these networks, an internal root architecture won't work.
The solution to this problem may be views, which the ISC hopes to introduce to BIND sometime soon in the version 8 release stream.[2] Views would allow you to specify when during resolution a name server tries its forwarders and under what conditions.[3]
[2] Views haven't been implemented yet, but we were granted a peek at how they may work and are documenting them in the hope that they'll beat this book to production.
[3] Todd Aven's noforward patch for BIND 4.9 was a precursor to this functionality. It's still available from ftp://ftp.isc.org/isc/bind/src/4.9.3/contrib/noforward.tar.gz .
By default, a BIND name server configured to use forwarders consults them before attempting iterative resolution itself; with forward only, it consults them instead of ever attempting iterative resolution. Also, when a BIND name server is configured to use forwarders, it consults those forwarders for queries about any domain name. A view lets you specify whose queries are forwarded and what those queries have to be about (which domain names) in order to be forwarded.
The syntax of the view statement might look something like this:
view view_name {
    [ interface ip_list; ]
    [ domain domain_list; ]
    [ client ip_list; ]
    forward on reasons [ to ip_list ];
};
Here's how the statement works: domain specifies the domains to which the view applies. domain takes a list of domain names as an argument. The client substatement determines which addresses this view applies to. client takes an address match list as an argument (as described in Chapter 10). interface specifies the interfaces on the local host to which the view applies. If the server receives a query on one of the interfaces specified, from a client whose address matches an address in the client substatement, and about a domain name specified in domain, the view applies. The default for interface is the built-in address match list localhost, the default for client is any, and the default for domain is ".", the root, meaning that by default the view applies to queries from any IP address looking up any name.
forward would replace and extend the forwarders substatement of the options statement. It lists the IP addresses of the forwarders to use for queries that match the specifications of this view. The forwarders are listed in the order in which you want them queried. What's new is the reasons clause. reasons might include no-domain and no-answer. These are the conditions under which the forwarders are used:
no-domain corresponds to an NXDOMAIN (no such domain) response.
no-answer corresponds to a NOERROR/no records response (that is, the domain name exists but the record type doesn't).
If we were to implement views in our internal root environment at Movie U., here's how our internal name servers' view statements might look:
view {
    client { 192.249.249/24; 192.253.253/24; 192.253.254/24; };
    domain { "!movie.edu";
             "!249.249.192.in-addr.arpa";
             "!253.253.192.in-addr.arpa";
             "!254.253.192.in-addr.arpa"; };
    forward on no-domain to { 192.249.249.3; 192.249.249.1; };
};
This tells our internal name servers (all except terminator and wormhole, which can resolve Internet domain names directly) to forward queries from our internal IP addresses and about domain names that are not (note the negation operator) in movie.edu or our in-addr.arpa subdomains to terminator and wormhole, in that order.
Please note that we've described just one possible implementation of views. The actual implementation the ISC decides upon may differ, both in features and in syntax.
Many organizations would like to advertise different zone data to the Internet than they do internally. In most cases, much of the internal zone data is irrelevant to the Internet because of the organization's Internet firewall. The firewall may not allow direct access to most internal hosts, and may also translate internal, unregistered IP addresses into a range of IP addresses registered to the organization. Therefore, the organization may need to trim out irrelevant information from the external view of the zone, or change internal addresses to their external equivalents.
Unfortunately, BIND doesn't support automatic filtering and translation of zone data. Consequently, many organizations manually create what have become known as "split namespaces." In a split namespace, the real namespace is available only internally, while a pared-down, translated version of it, called "the shadow namespace," is visible to the Internet.
The shadow namespace contains the name-to-address and address-to-name mappings of only those hosts that are accessible from the Internet, through the firewall. The addresses advertised may be the translated equivalents of real internal addresses. The shadow namespace may also contain one or more MX records to direct email from the Internet through the firewall to a mail server.
Since Movie U. has an Internet firewall that greatly limits access from the Internet to the internal network, we elected to create a shadow namespace. For movie.edu, the only information we need to give out is about the zone (an SOA and a few NS records), the bastion host (postmanrings2x), and the new external name server, ns.movie.edu, which also functions as an external web server, www.movie.edu. The address of the external interface on the bastion host is 200.1.4.2, while the address of the name/web server is 200.1.4.3. The shadow movie.edu zone data file looks like this:
@   IN SOA ns.movie.edu. hostmaster.movie.edu. (
                1        ; Serial
                86400    ; Refresh
                3600     ; Retry
                604800   ; Expire
                86400 )  ; Default TTL
    IN NS ns.movie.edu.
    IN NS ns.isp.net.       ; our ISP's name server
    IN A  200.1.4.3
    IN MX 10  postmanrings2x.movie.edu.
    IN MX 100 mail.isp.net.

www             IN CNAME movie.edu.

postmanrings2x  IN A 200.1.4.2
                IN MX 10  postmanrings2x.movie.edu.
                IN MX 100 mail.isp.net.

; postmanrings2x handles mail addressed to ns
ns              IN A 200.1.4.3
                IN MX 10  postmanrings2x.movie.edu.
                IN MX 100 mail.isp.net.

*               IN MX 10  postmanrings2x.movie.edu.
                IN MX 100 mail.isp.net.
Note that there's no mention of any of the subdomains of movie.edu, including any delegation to the servers for those subdomains. The information simply isn't necessary, since there's nothing in any of the subdomains that you can get to from the Internet, and inbound mail addressed to hosts in the subdomains is caught by the wildcard.
The db.200.1.4 file, which we need to reverse map the two Movie U. IP addresses that hosts on the Internet might see, looks like this:
@   IN SOA ns.movie.edu. hostmaster.movie.edu. (
                1        ; Serial
                86400    ; Refresh
                3600     ; Retry
                604800   ; Expire
                86400 )  ; Default TTL
    IN NS ns.movie.edu.
    IN NS ns.isp.net.

2   IN PTR postmanrings2x.movie.edu.
3   IN PTR ns.movie.edu.
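For completeness, here's a sketch of the zone statements on ns.movie.edu that would load the two shadow zones (the file names just follow our usual conventions):
zone "movie.edu" {
    type master;
    file "db.movie";
};
zone "4.1.200.in-addr.arpa" {
    type master;
    file "db.200.1.4";
};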
One precaution that we need to take is to make sure that the resolver on our bastion host isn't configured to use the server on ns.movie.edu. Since that server can't see the real movie.edu, using it would render postmanrings2x unable to map internal names to addresses or addresses to names.
The bastion host is a special case in a split namespace. The bastion host has a foot in each environment: one network interface connects it to the Internet, and another connects it to the internal network. Now that we have split our name space in two, how can our bastion host see both the Internet name space and our real internal name space? If we configure it with the Internet root name servers in its hints file, it will follow delegation from the Internet's edu name servers to an external movie.edu name server with shadow zone data. It would be blind to our internal name space, which it needs to see to log connections, deliver inbound mail, and more. On the other hand, if we configure it with our internal roots, then it won't see the Internet's name space, which it clearly needs to do in order to function as a bastion host. What to do?
If we have internal name servers that support conditional forwarding, we can simply configure the bastion host's resolver to query those servers, since they can already see both the internal and Internet namespaces. If we use forwarding internally, depending on the type of firewall we're running, we may also need to run a name server on the bastion host itself. If the firewall won't pass DNS traffic, we'll need to run at least a caching-only name server, configured with the Internet roots, on the bastion host, so that our internal name servers will have somewhere to forward their unresolved queries.
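Either way, the bastion host's resolver should point only at name servers that can see the internal namespace. A sketch of what resolv.conf might look like, assuming we run a name server on the bastion host itself and that terminator does conditional forwarding or can otherwise see both namespaces:
# /etc/resolv.conf on postmanrings2x
domain movie.edu
nameserver 127.0.0.1       # the bastion host's own name server
nameserver 192.249.249.3   # terminator, as a backup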
Without conditional forwarding, the simplest solution is to run a name server on the bastion host (if you aren't already doing so). The name server must be configured as a slave for movie.edu and any in-addr.arpa subdomains in which it needs to resolve addresses. This way, if it receives a query for a name in movie.edu, it'll use its local authoritative data to resolve the name. If the name is in a subdomain of movie.edu, it'll follow NS records in the zone data to query an internal name server for the name. Therefore, it doesn't need to be configured as a slave for any movie.edu subdomains, such as fx.movie.edu, just the "top" domain (see Figure 15.6).
The named.conf file on our bastion host looks like this:
options {
    directory "/var/named";
};
zone "movie.edu" {
    type slave;
    file "db.movie";
    masters { 192.249.249.3; };
};
zone "249.249.192.in-addr.arpa" {
    type slave;
    file "db.192.249.249";
    masters { 192.249.249.3; };
};
zone "253.253.192.in-addr.arpa" {
    type slave;
    file "db.192.253.253";
    masters { 192.249.249.3; };
};
zone "254.253.192.in-addr.arpa" {
    type slave;
    file "db.192.253.254";
    masters { 192.253.254.2; };
};
zone "20.254.192.in-addr.arpa" {
    type slave;
    file "db.192.254.20";
    masters { 192.253.254.2; };
};
zone "." {
    type hint;
    file "db.cache";
};
An equivalent named.boot file would look like this:
directory /var/named

secondary movie.edu                 192.249.249.3 db.movie
secondary 249.249.192.in-addr.arpa  192.249.249.3 db.192.249.249
secondary 253.253.192.in-addr.arpa  192.249.249.3 db.192.253.253
secondary 254.253.192.in-addr.arpa  192.253.254.2 db.192.253.254
secondary 20.254.192.in-addr.arpa   192.253.254.2 db.192.254.20
cache     .                         db.cache      ; lists Internet roots
Unfortunately, loading these zones on the bastion host also exposes them to the possibility of disclosure on the Internet, which we were trying to avoid by splitting the name space. But as long as we're running BIND 4.9 or better, we can protect the zone data using the secure_zone TXT record or the allow-query substatement. With allow-query, we can place a global access list on our zone data. Here's the new options statement from our named.conf file:
options {
    directory "/var/named";
    allow-query { 127/8; 192.249.249/24; 192.253.253/24; 192.253.254/24; 192.254.20/24; };
};
With BIND 4.9's secure_zone, we can turn off all external access to our zone data by including these TXT records in each db file:
secure_zone IN TXT "192.249.249.0:255.255.255.0"
            IN TXT "192.253.253.0:255.255.255.0"
            IN TXT "192.253.254.0:255.255.255.0"
            IN TXT "192.254.20.0:255.255.255.0"
            IN TXT "127.0.0.1:H"
Don't forget to include the loopback address in the list, or the bastion host's own resolver may not get answers from the name server!