We'd like to close this chapter on policy formation with a few words about knowledge. In traditional security, derived largely from military intelligence, there is the concept of "need to know." Information is partitioned, and you are given only as much as you need to do your job. In environments where specific items of information are sensitive or where inferential security is a concern, this policy makes considerable sense. If three pieces of information together can form a damaging conclusion and no one has access to more than two, you can ensure confidentiality.
In a computer operations environment, applying the same need-to-know concept is usually not appropriate. This is especially true if you find yourself basing your security on the assumption that something technical is unknown to your attackers. That approach can even hurt your security.
Consider an environment in which management decides to keep the manuals away from the users to prevent them from learning about commands and options that might be used to crack the system. Under such circumstances, the managers might believe they've increased their security, but they probably have not. A determined attacker will find the same documentation elsewhere - from other users or from other sites. Many vendors will sell copies of their documentation without requiring an executed license. Usually all that is required is a visit to a local college or university to find copies. Extensive amounts of UNIX documentation are available as close as the nearest bookstore! Management cannot close down all possible avenues for learning about the system.
In the meantime, the local users are likely to make less efficient use of the machine because they are unable to view the documentation and learn about more efficient options. They are also likely to have a poorer attitude because the implicit message from management is "We don't completely trust you to be a responsible user." Furthermore, if someone does start abusing commands and features of the system, management does not have a pool of talent to recognize or deal with the problem. And if something should happen to the one or two users authorized to access the documentation, there is no one with the requisite experience or knowledge to step in or help out.
Keeping bugs or features secret to protect them is also a poor approach to security. System developers often insert back doors in their programs to let them gain privileges without supplying passwords (see Chapter 11, Protecting Against Programmed Threats ). Other times, system bugs with profound security implications are allowed to persist because management assumes that nobody knows of them. The problem with these approaches is that features and problems in the code have a tendency to be discovered by accident or by determined crackers. The fact that the bugs and features are kept secret means that they are unwatched, and probably unpatched. Once the problem is discovered, all similar systems become vulnerable to attack by the persons who discovered it.
Keeping algorithms secret, such as a locally developed encryption algorithm, is also of questionable value. Unless you are an expert in cryptography, you are unlikely to be able to analyze the strength of your algorithm. The result may be a mechanism that has a gaping hole in it. An algorithm that is kept secret isn't scrutinized by others, and thus someone who does discover the hole may have free access to your data without your knowledge.
Likewise, keeping the source code of your operating system or application secret is no guarantee of security. Those who are determined to break into your system will occasionally find security holes, with or without source code. But without the source code, users cannot carry out a systematic examination of a program for problems.
The key is attitude. If you take defensive measures that are based primarily on secrecy, you lose all your protections after secrecy is breached. You can even be in a position where you can't determine whether the secrecy has been breached, because to maintain the secrecy, you've restricted or prevented auditing and monitoring. You are better served by algorithms and mechanisms that are inherently strong, even if they're known to an attacker. The very fact that you are using strong, known mechanisms may discourage an attacker and cause the idly curious to seek excitement elsewhere. Putting your money in a wall safe is better protection than depending on the fact that no one knows that you hide your money in a mayonnaise jar in your refrigerator.
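The principle at work here is that a mechanism should remain strong even when every detail of its design is public, with only a small, changeable secret (a key) protected. As one illustration of that idea, the following sketch uses Python's standard hmac module; the key value and function names are ours, chosen for the example, not anything prescribed in this chapter:

```python
import hmac
import hashlib

# The algorithm (HMAC-SHA256) is fully public and widely scrutinized.
# An attacker who knows every detail of the mechanism still cannot forge
# a valid tag without the key: the security lives in the key, not in
# keeping the algorithm secret.
SECRET_KEY = b"example-key-kept-private"  # hypothetical key, for illustration


def tag(message: bytes) -> str:
    """Compute an authentication tag with a well-known, public algorithm."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()


def verify(message: bytes, claimed_tag: str) -> bool:
    """Accept the message only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(tag(message), claimed_tag)


t = tag(b"transfer $100")
print(verify(b"transfer $100", t))    # legitimate message is accepted
print(verify(b"transfer $9999", t))   # tampered message is rejected
```

If the key is ever exposed, it can simply be replaced; the mechanism itself loses nothing. That is the wall safe, as opposed to the mayonnaise jar.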
Despite our objection to "security through obscurity," we do not advocate that you widely publicize new security holes the moment that you find them. There is a difference between secrecy and prudence! If you discover a security hole in distributed or widely available software, you should quietly report it to the vendor as soon as possible. We would also recommend that you report it to one of the FIRST teams (described in Appendix F, Organizations ). Those organizations can take action to help vendors develop patches and see that they are distributed in an appropriate manner.
If you "go public" with a security hole, you endanger all of the people who are running that software but who don't have the ability to apply fixes. In the UNIX environment, many users are accustomed to having the source code available to make local modifications to correct flaws. Unfortunately, not everyone is so lucky, and many people have to wait weeks or months for updated software from their vendors. Some sites may not even be able to upgrade their software because they're running a turnkey application, or one that has been certified in some way based on the current configuration. Other systems are being run by individuals who don't have the necessary expertise to apply patches. Still others are no longer in production, or are at least out of maintenance. Always act responsibly. It may be better to circulate a patch without explaining or implying the underlying vulnerability than to give attackers details on how to break into unpatched systems.
We have seen many instances in which a well-intentioned person reported a significant security problem in a very public forum. Although the person's intention was to elicit a rapid fix from the affected vendors, the result was a wave of break-ins to systems where the administrators did not have access to the same public forum, or were unable to apply a fix appropriate for their environment.
Posting details of the latest security vulnerability in your system to the Usenet electronic bulletin board system will not only endanger many other sites, it may also open you to civil action for damages if that flaw is used to break into those sites.[8] If you are concerned with your security, realize that you're a part of a community. Seek to reinforce the security of everyone else in that community as well - and remember that you may need the assistance of others one day.
[8] Although we are unaware of any cases having been filed yet on these grounds, several lawyers have told us that they are waiting for their clients to request such an action.
Some security-related information is rightfully confidential. For instance, keeping your passwords from becoming public knowledge makes sense. This is not an example of security through obscurity. Unlike a bug or a back door in an operating system that gives an attacker superuser powers, passwords are designed to be kept secret and should be routinely changed to remain so.
The key to successful risk assessment is to identify all of the possible threats to your system, but to defend against only those attacks you believe are realistic.
Simply because people are the weak link doesn't mean we should ignore other safeguards. People are unpredictable, but breaking into a dial-in modem that does not have a password is still cheaper than a bribe. So, we use technological defenses where we can, and we improve our personnel security by educating our staff and users.
We also rely on defense in depth: we apply multiple levels of defenses as backups in case some fail. For instance, we buy that second UPS system, or we put a separate lock on the computer room door even though we have a lock on the building door. These combinations can be defeated too, but we increase the effort and cost for an enemy to do that...and maybe we can convince them that doing so isn't worth the trouble. At the very least, you can hope to slow them down enough so that your monitoring and alarms will bring help before anything significant is lost or damaged.
With these limits in mind, you need to approach computer security with a thoughtfully developed set of priorities. You can't protect against every possible threat. Sometimes you should allow a problem to occur rather than prevent it, and then clean up afterwards. For instance, your efforts might be cheaper and less trouble if you let the systems go down in a power failure and then reboot than if you bought a UPS system. And some things you simply don't bother to defend against, either because they are too unlikely (e.g., an alien invasion from space), too difficult to defend against (e.g., a nuclear blast within 500 yards of your data center), or simply too catastrophic and horrible to contemplate (e.g., your management decides to switch all your UNIX machines to a well-known PC operating system). The key to good management is knowing what things you will worry about, and to what degree.
Decide what you want to protect and what the costs might be to prevent those losses versus the cost of recovering from those losses. Then make your decisions for action and security measures based on a prioritized list of the most critical needs. Be sure you include more than your computers in this analysis: don't forget that your backup tapes, your network connections, your terminals, and your documentation are all part of the system and represent potential loss. The safety of your personnel, your corporate site, and your reputation are also very important and should be included in your plans.