The first step in securing a network must be the security policy: to effectively secure any network you must have, and actually implement, a good one.
In forming a security policy it is often useful to consider the idea of security zones - divide your network into security zones and define the boundaries which separate them. Then, make sure nobody can cross any of these boundaries without appropriate authentication (such as passwords). Remember that if you permit someone any access to a system which is not actively maintained, there is a good chance they will be able to obtain superuser access.
Of critical importance is the security of your servers, both because of the data they may contain and the potential for well-connected servers to be broken into and used to attack a third party. This is an area where you cannot be too careful - you must watch security mailing lists, as it may be just a few hours between a security vulnerability being published and millions of sites being attacked.
Just as important, but more often neglected, is the protection of your traffic against "traffic stealing". Open proxy servers and mail relays can pose just as much of a threat to your internet connectivity and profitability as any break-in. Related to this is "spam" prevention - unsolicited e-mail, or "spam", is another form of traffic stealing, using your and your customers' bandwidth to deliver unsolicited advertising. It is important to make sure your network and all network services you provide are secure and well-configured to prevent traffic stealing.
Another important part of security is protecting your clients, their systems and their connection to the internet. While some of this responsibility must fall on the clients, it is up to you to keep them informed and to give them some reasonable protection from outside attacks.
Firewalls and filtering routers may help you secure your network, but they have some downfalls. Often a firewall is installed and it is then assumed that "all is secure, we have a firewall". This is almost never true, and should certainly not be assumed. The firewall itself may even add insecurity to your network - one prominent firewall was recently found to give out its SNMP community names to an SNMP probe on its external interface. Also, there must be holes in a firewall in order for any services to be provided to the outside world, and maintaining these holes is one hassle you take on when you decide to install a firewall. Two more issues to consider if you plan to install a firewall are preventing a false sense of security and remembering that a firewall only protects one side from another - so where do you put your customers? On the outside they could end up victims of hackers; on the inside they may be just as much a threat as any "random internet hacker".
Two other vital areas are physical security and social engineering. Just as the best firewall is an air gap, the best way past a firewall is to just walk past it. Do you have a secure machine room, or could people walk into it? Do you give potential customers office tours and have insecure consoles or passwords written on notes? All security is void if you have no physical security. Similarly, social engineering provides a way to bypass the normal security systems you may design. Phone calls can be an effective way around security. Phrack - a prominent hacking and phreaking magazine - published "standard" social engineering attacks against users of various software packages. Would you double-check the identity of someone who called with a critical update for your system, gave their name as an engineer that you have exchanged e-mail with from the company, and asked that you change the remote administration password back to the default field service one and remind him of the phone number/address (supposedly lost in their last upgrade)? Similarly, do you have strong, positive identification of a client before you change their password?
Good technical input is also important in any security policy. The policy must be workable - most likely you will find that it is you who has to implement and work with it. Unreasonable recommendations result in an unworkable policy, which benefits no-one.
It is important to identify the security zones in your network. The boundaries between these zones are the best locations for any firewalls, filtering or added authentication.
First you must identify key zones. If your web server is of high importance, and your billing system is of critical importance, then these may constitute two zones. Your client base and the internet will of course be two of the zones.
Separate these zones carefully. Who needs protecting from whom? Your client base needs some level of protection from the internet. Your servers need protection from both the clients and the internet. You may decide that the billing system needs protection from everyone - even if someone breaks into the web server, the billing system must remain secure. Draw a chart indicating the zones, and make sure that they do not have inappropriate linkage. For example, it is convenient to provide centralized backup and restore - but make sure that this does not provide a short-cut between a medium security zone and a critical security zone, for example letting someone who has broken into your web server go through the backup system to your billing server, or simply letting them restore your clients' credit card number list.
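The zone chart can be checked mechanically. The following is a minimal sketch - the zone names and the policy table are illustrative, not taken from any real product - which records permitted zone-to-zone flows and flags indirect short-cuts into a critical zone:

```python
# Illustrative zone policy: which direct flows are permitted.
ALLOWED = {
    ("clients", "internet"),   # clients may reach the internet
    ("clients", "servers"),    # clients may use the public services
    ("internet", "servers"),   # the world may reach public servers
    ("servers", "billing"),    # hypothetical backup/app path to billing
}

def is_allowed(src, dst):
    """Return True if direct traffic from zone src to zone dst is permitted."""
    return (src, dst) in ALLOWED

def find_shortcuts(allowed, critical="billing"):
    """Flag any zone that can reach the critical zone indirectly via an
    intermediate zone (e.g. a shared backup host), without direct access."""
    shortcuts = []
    for (src, mid) in allowed:
        if (mid, critical) in allowed and (src, critical) not in allowed:
            shortcuts.append((src, mid, critical))
    return shortcuts
```

With this illustrative table, both the clients and the internet reach the billing zone indirectly through the servers zone - exactly the backup-system short-cut described above.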
What I cover here is just a start on the large topic of securing your hosts. It is impossible to cover the entire list of all security vulnerabilities - systems are still being broken into with security problems discovered and understood a decade ago.
Since most ISPs run Unix or Linux, that is the focus of this section. Securing Unix is best done by consulting internet resources. AUSCert and Cert provide good web sites covering past and present security vulnerabilities, as well as tips for recovering from a break-in. The AUSCert mailing list provides AUSCert members with notification of new problems and current intruder activities. The BUGTRAQ mailing list is a full-disclosure mailing list covering Unix security, and its sister list NTBUGTRAQ covers similar issues under Microsoft Windows NT. Other mailing lists of interest are the Linux security-audit list and vendor security mailing lists, which are provided by most commercial Unix vendors as well as Linux distribution vendors such as Debian and RedHat.
AUSCert also provides incident reporting and follow-up facilities to its members, and works with the other FIRST organizations overseas, which puts them in a much better position than an ISP to inform others whose systems have been broken into and to follow up on security incidents. Very serious security incidents should also be reported to the Australian Federal Police, but remember that they have limited resources and require solid evidence in order to get anywhere.
Running a secure Unix system means not only keeping the system secure but also keeping secured logs which tell you who was on the system and what they were doing. Commercial operating systems such as Digital Unix often provide detailed system call accounting; Linux provides standard process accounting, which can be used to search for people who have attempted to use well-known network attacks. You may be hesitant to turn on process accounting due to the overhead - you shouldn't be. If your system is running too slowly, process accounting can also help you find which programs to profile in order to find places to optimize. The overhead of the accounting is much less than the gain which can typically be found by looking at what it reports, and the data it produces can be invaluable for security monitoring.
System logging via syslog is another item which should be made reliable and secure. Logs should be kept on a remote host which runs nothing but syslog and has separate partitions for each host and service, so that syslog flooding cannot fill the log partition and prevent some item from being logged. Systems which are not used to store logs from the network should have the UDP syslog port disabled, and your log host's UDP syslog port should never be externally accessible, so that the disks can't be filled by people sending floods of syslog packets at random hosts.
Even then, these logs will not be enough. You also need logs of all the traffic flows across all your external network interfaces. These are invaluable in tracing back what happened in an attack or security incident. Some approaches which tend to be useful are using "nnstat" on a Unix host watching the traffic, or using Cisco's NetFlow accounting on your border routers and some software to collect the data. This data should be archived for at least 6 months.
The most common way for local users to gain superuser access is via SUID programs. Many people leave applications SUID-root even when nobody is ever going to want to use them. For example, if you are running a dialup shell machine with Digital Unix, the chance of anyone running the DEC X password program is vanishingly small (similar examples could be found for most Unix distributions). This program contained a root-access security hole which was not present for those who had disabled the unused SUID program. You should check all SUID applications and disable those which will never be used (or will never be used by anyone but those who already have root access). Consider putting users into groups and giving groups access to devices, instead of setting applications SUID to access devices (this is an area where Debian Linux tends to be superior to RedHat Linux).
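Auditing SUID binaries is easy to script. This is a minimal sketch, equivalent to the traditional "find / -perm -4000" check - it walks the filesystem and lists set-uid regular files so you can decide which to disable:

```python
import os
import stat

def find_suid(root):
    """Walk the tree under root and return paths of set-uid regular files.
    On a real system, run this as root over / and review every hit."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable or vanished; skip
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                hits.append(path)
    return hits
```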
The most common way for remote users to gain unauthorized access is via network services (including, of course, web services). Some of the unused services can be disabled, and the rest must be kept current. A network security scanner can be quite useful here; however, some scanners (such as SAINT, SATAN or mscan) may simply produce far too much output for you to process if you have a large number of hosts. I recently looked at a free scanner called "nessus" which looks quite promising: it has a client/server design with both Unix and Windows clients and a Unix daemon, includes over 100 plug-ins for different security problems, and may prove to be a good alternative. Another approach is to use a locally developed scanner to rapidly scan TCP ports and search for specific security problems (well-known web scripts, guest logins, unpassworded printers [a potential denial of service attack by changing IP address to match a router port or server] and open mail relays).
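A locally developed scanner need not be complicated. The following sketch shows only the rapid TCP scan part of the idea - it reports which ports accept a connection; the specific per-service checks (web scripts, guest logins, open relays) would be layered on top:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect to each port on host; return the open ones.
    A minimal sketch, not a full scanner - no parallelism, no banner grabs."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            s.close()
    return open_ports
```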
Another underestimated area of insecurity is network hardware. Just as printers are often left unpassworded and may be used to replace server and router IP addresses, switches and routers may have security problems which their administrators are unaware of. For example, a security vulnerability covering all Cisco routers from IOS version 9.1 to the versions being shipped in the middle of this year (1998) was recently discovered. Various other brands have been discovered to have hard-coded passwords in their routers and switches, which are sometimes stored in plaintext in publicly available boot images. Some hardware puts the SNMP community names in the public SNMP tree. You should be careful to investigate the security aspects of the hardware in your network.
Most ISPs provide some form of shell access on some form of Unix system. One thing which you should rarely if ever assume is that your users cannot get to a Unix shell prompt; if you provide any Unix access they will often find a way. There are countless examples of universities or ISPs which believed they had locked users out of their shell system, only to find that there was a back-door to shell access, or a way to remove or modify the login script via FTP (remember that even if it is owned by the root user it can be removed and replaced). All systems should be securely configured, and any important data appropriately protected.
Probably the most common Unix operating system in use by ISPs is Linux. There are alternatives, still free, which may provide a higher level of security. The most prominent is OpenBSD - the OpenBSD source base has been audited by the OpenBSD team (most notably, Theo de Raadt). A less extreme option is to add the "Secure Linux" kernel patches to your existing Linux systems. These allow some extra security, such as a non-executable stack or automatically opening file descriptors 0 to 2 for SUID processes, and reduce the chance of new vulnerabilities being exploitable on your system. There is some performance penalty in using these patches; however, if you don't have the time to dedicate to being up to the second in security patches for new vulnerabilities, it is most likely worth the trade-off.
There are many other ways to improve the security of a Linux shell system, depending on what level of access you wish to provide to your clients. For example, if you don't want your clients running anything you haven't installed, you may mount home directories, mail spool and temporary space with the "noexec" flag. This makes it impossible to run most local exploit scripts as downloaded; however, it is still possible to exploit most of the problems, just with more work. Less extreme mount options, which could be used instead of or as well as "noexec", include "nosuid" and "nodev". Other restrictions such as quotas, resource limits and per-user process count limits help reduce exposure to denial-of-service attacks from local users, with the obvious side-effect of reducing what local users can do (so set them reasonably, considering both sides of the picture).
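Whether the intended mount options actually took effect can be verified against /proc/mounts. A small sketch - the watched mount points are illustrative and should match your own layout - which reports watched filesystems lacking any of the restricting options:

```python
def missing_options(mounts_text,
                    required=("noexec", "nosuid", "nodev"),
                    watch=("/home", "/tmp", "/var/spool/mail")):
    """Given /proc/mounts-style text, return a dict mapping each watched
    mount point to the list of required options it is missing."""
    problems = {}
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue  # malformed line
        mountpoint, options = fields[1], fields[3].split(",")
        if mountpoint in watch:
            lacking = [o for o in required if o not in options]
            if lacking:
                problems[mountpoint] = lacking
    return problems
```

On a live system you would call this with open("/proc/mounts").read().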
Another area where you can make a choice between usability and security is per-user login control. This is used at UWA on the main student system - to log in from any machine which is not a DNS-registered machine in the uwa.edu.au domain, our students must apply for access from the machine or from the whole ISP. This reduces exposure to unknown hackers doing password guessing attacks, but as with all security measures it is just one more step to reduce the chance of break-ins.
For content providers, the web server is often the most visible way of breaking into the system. Both Unix and Windows NT web servers have had their share of security problems. Under Unix, remember to keep up to date with the latest stable version of Apache, manually inspect all CGI scripts (even those from well-known archives) for common security problems, consider using Perl's taint checking on CGI scripts and don't leave any CGI scripts you don't need installed, including the default ones. Under NT, make sure you watch security lists for vulnerabilities, as many NT product vendors won't admit to having known security problems, and keep up to date with any vendor updates, which may include security fixes for problems which are not revealed.
The final common way into a system is local software running with privilege. You should never install SUID, SGID or remotely accessible software without having a highly security-knowledgeable person check the software first. Even the best internet software authors have let many security problems slip past them, and most people writing custom software and scripts will have less experience and make more mistakes.
Up to this point I have mainly been outlining ways to avoid being broken into without mentioning specific vulnerabilities - there are simply too many to cover a reasonable number of them here. Obviously the problems scanned for by mscan are currently the most popular among hackers looking for random targets - for example, buffer overflows in name serving and imap daemons. However, there are other common problems which are harder to exploit or less significant and so often overlooked: tar files and patches which modify password files (either directly or by making symlinks to disguise what is happening); a writable "nobody" home directory of /tmp (a problem when configuration or remote access permission files aren't checked for the right owner, or when shell initialization files are executed because the superuser tests programs with the command 'su - nobody'); and methods such as certain sshd tunnels and the ftp-bounce attack which may produce connections with an ident of root (and, in the second case, from a secure port). Another traditional problem is mode 711 files - these just can't be handled sanely on NFS (especially with untrusted clients) or on Intel hardware (a simple LD_PRELOAD, for example, can dump a dynamically linked mode 711 binary).
A final point is what to do if you are broken into. There are a few possible approaches, and it may be best to consult the AUSCert or Cert technical tips web pages. Depending on the severity of the situation, one of the best first steps is to turn the system off, place the drives in another system, and make a complete backup copy of them. A less extreme approach is to unplug the system from the network, kill all intruder-related processes (such as linsniffer, mscan, etc. - note that "ps" is often replaced with a modified version, and in more sophisticated attacks a kernel module can be used to hide processes; one simple check that can sometimes reveal a modified "ps" is to run "pstree" and see if anything interesting shows up, though this doesn't detect kernel modules which hide processes), identify the method of break-in, close the security hole, check all recently modified files (note that the intruder may jump the system clock backwards when modifying files to confuse such a check) and then bring the system back online. If the system really cannot go down, it is possible to do all this without removing the system from the network, but this should only be attempted by an expert who is aware of how the system was broken into - and a reboot afterwards is still essential to make certain no processes or modules are lingering from the attack.
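Checking for recently modified files is easily scripted. A sketch - and remember, as noted above, that timestamps can be forged by an intruder who resets the clock, so treat the output as a starting point only:

```python
import os
import time

def recently_modified(root, days=7):
    """List files under root whose mtime or ctime falls within the last
    `days` days. Timestamps are advisory; an intruder can forge mtime."""
    cutoff = time.time() - days * 86400
    recent = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if st.st_mtime >= cutoff or st.st_ctime >= cutoff:
                recent.append(path)
    return recent
```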
This problem has been around in Australia since AARNet introduced international byte charging years ago, and people would abuse FTP servers with the "Alex" FTP virtual file-system and the old archie.au "fetchfile" service. The success of local internet exchanges and the return of split charging has, along with the proliferation of insecure proxy servers and mail relays, escalated this problem dramatically.
Protection is relatively straightforward. The popular Squid proxy server provides easy-to-configure access lists (acls). Paul Vixie's TSI (transport security initiative) web pages provide a good reference on blocking unauthorized mail relaying, which is blocked by default in most recent sendmail (8.9.x) servers. The NEC SOCKS configuration file has a flexible method for listing hosts allowed access, and the latest WinGate now comes configured with reasonable restrictions by default (due to past abuses).
If you have a large network, it is relatively easy to write a custom scanner to check for open mail relays and run this on a local machine with a non-local IP address (e.g., 192.168.x.x). This can identify your open mail relays, which you can then work on closing. This technique is most useful on large networks such as universities, and on your client networks. I have my own "fast scanners" which I have used to scan class B networks in PARNet for open mail relays, which we have then worked on closing.
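The core of such a relay scanner is a single SMTP conversation: offer a message from one non-local address to another and see whether the RCPT TO is accepted. A sketch, with illustrative probe addresses, which aborts with RSET so no mail is actually sent:

```python
import socket

def check_open_relay(host, port=25, timeout=5.0,
                     probe_from="<probe@example.com>",
                     probe_to="<victim@example.net>"):
    """Probe one host: attempt to relay mail for a non-local recipient.
    Returns True if RCPT TO is accepted (likely an open relay).
    The probe addresses are illustrative; RSET/QUIT abort the message."""
    s = socket.create_connection((host, port), timeout=timeout)
    f = s.makefile("rwb")

    def cmd(line):
        f.write(line + b"\r\n")
        f.flush()
        return f.readline()

    try:
        f.readline()                         # 220 greeting
        cmd(b"HELO probe.example.com")
        cmd(b"MAIL FROM:" + probe_from.encode())
        reply = cmd(b"RCPT TO:" + probe_to.encode())
        cmd(b"RSET")                         # abort - never send the message
        cmd(b"QUIT")
        return reply.startswith(b"2")        # 2xx = recipient accepted
    finally:
        f.close()
        s.close()
```

A full scanner would simply loop this over every address in the target network, with a short timeout and some parallelism.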
You should also filter the SMTP ports of your dialup users. At UWA, we have found ourselves in internet mail server blacklists because we are the upstream of students who remain connected roughly 18 hours a day and run software such as WinGate or other mail relaying software without appropriate relay prevention. By default, your clients should not be permitted to receive direct port 25 (SMTP) connections. It may help to do this with your office systems too - have an explicit list of machines permitted to receive e-mail from the internet, rather than a list of those which can't.
A quite different form of traffic stealing is incoming unsolicited bulk e-mail (commonly known as "spam") - where a commercial body sends millions of copies of some e-mail message to internet users. This steals your and your customers' bandwidth in order to send them advertising. Especially in Australia, where almost all traffic is volume-charged and some clients are volume-charged for the e-mail they receive, "spam" e-mail is considered by many a serious problem - and most "spam" messages are only of relevance to Americans anyway. There are four main approaches to blocking "spam" e-mail. Firstly, using your own list of known bad sites and addresses - however, this is high maintenance. Secondly, Paul Vixie's RBL (real-time black-hole list) gives a black-list of known "spam" sites and sites open to relaying; the alternative rbl.dorkslayers.com site provides a much larger list which can be used by sites with a more extreme "anti-spam" stance. Thirdly, SPAMCAN provides a regular-expression based header filter for rejecting e-mail. And finally, Sendmail 8.9.x has the ability to use headers and regular expressions in rulesets, although this is yet to be widely used. If you go for either of these last two methods, I strongly recommend blocking all messages with an X-UIDL or X-PMFLAGS header, and considering blocking all with an X-Advertisement header. The first two of these headers (X-UIDL and X-PMFLAGS) are widely used by "spammers" to attempt to confuse mail reading software into showing their message over and over again to the one client.
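The header check itself is simple. This sketch uses the header names discussed above (in practice the check would sit in sendmail rulesets or a delivery-time filter, not a standalone script) and reports which blocked headers appear in a raw message:

```python
def suspicious_headers(message_text,
                       blocked=("x-uidl", "x-pmflags", "x-advertisement")):
    """Return the blocked header names present in a raw RFC 822 message.
    Only the header block (before the first blank line) is examined."""
    found = []
    header_block = message_text.split("\n\n", 1)[0]
    for line in header_block.splitlines():
        name = line.split(":", 1)[0].strip().lower()
        if name in blocked:
            found.append(name)
    return found
```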
Just as monitoring traffic logs makes you aware of scans and potential break-ins to secure your hosts, it can also make you aware of abnormal traffic flows which may indicate traffic stealing. Make sure you monitor the traffic on all your links, even "friendly" ones.
However education will never reach all your clients, and network filtering is another important way to protect them. You can easily filter the Windows file-sharing ports, which is quite unlikely to inconvenience clients and quite likely to prevent problems for them. While Windows 98 recommends to dialup users that they disable file-sharing on their dialup interface, earlier versions left it open to attacks and unexpected access.
Filtering of large ICMP echo packets also protects those of your clients who may be running old operating systems. A more recent "hot issue" is ICMP echo packets which contain the string +++ATH0, which disconnects clients who do not have a guard time set on their modem and have not disabled the +++ escape. A suggested fix is that after a new client connects via SLIP or PPP you send them a packet which will disable the +++ escape and save their modem configuration. This will usually disconnect them once, but it only affects people with vulnerable modems - you probably don't want to do it to a new client on their very first connection, so some judgement about when to apply this protection is needed.
Yet another measure used to protect clients is the use of procmail to protect against buffer overflows and other problems in mail clients. The latest version of sendmail can also be configured to provide this protection. If you want more information on this use of procmail, search the BUGTRAQ archives.
The final area of protecting your clients is also an area where you protect yourself. It is important to protect your clients against their own (and possibly your support staff's) inadequacy in the area of password selection. Password crackers such as "Crack" and "john" can be used to attempt to crack the passwords in your own password file, and it will often surprise you who has guessable passwords. You cannot tell if someone has discovered an unlogged way to guess passwords and is gradually working away at someone's password - it is important that you check password security yourself. Insecure passwords are an easy entrance through which hackers or crackers can do much more damage.
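The principle behind Crack and john is just a dictionary attack against your own stored hashes. A simplified sketch - the hash function here is an unsalted stand-in for the system's crypt(), injected as a parameter purely so the idea is clear; real password files use salted crypt hashes:

```python
import hashlib

def guessable(passwd_entries, wordlist, hashfn):
    """Dictionary-attack sketch in the spirit of Crack/john: hash each
    candidate word and compare against the stored hash for each user.
    Returns a dict of user -> guessed password for every weak account."""
    weak = {}
    for user, stored in passwd_entries.items():
        for word in wordlist:
            if hashfn(word) == stored:
                weak[user] = word
                break
    return weak

def demo_hash(word):
    # Stand-in for crypt() - unsalted SHA-1, for illustration only.
    return hashlib.sha1(word.encode()).hexdigest()
```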
Standard network filtering consists of a number of rules to validate your traffic - for example, dropping incoming packets which claim a source address inside your own network, and outgoing packets whose source address is not inside it (spoofed-source filtering).
Make sure you apply rules which you deem appropriate on all interfaces, including any local peering or friendly traffic exchange links.
If you want to detect attempted probes and scans, a good way to do this is to put half a dozen genuinely unused IP addresses into the DNS, leave these addresses unfiltered, then watch all traffic logged to them. Traffic logging by methods such as "nnstat" or Cisco's NetFlow has been mentioned already. The most common scanner at the moment is "mscan", which scans for things such as SGI systems (tcpmux scanning), RedHat Linux systems running old DNS servers and imap daemons, and other well-known security holes. Other ports which are commonly scanned include SNMP and SMTP.
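Watching the decoy addresses can be automated against your flow logs. A sketch assuming a simple one-flow-per-line "src dst port" log format - real input would come from nnstat output or a NetFlow collector, and would need its own parsing:

```python
def decoy_hits(log_lines, decoys):
    """Pick out flows whose destination is one of the decoy addresses.
    Each line is assumed to be 'src dst port'; anything touching a decoy
    address is, by construction, a probe worth investigating."""
    alerts = []
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[1] in decoys:
            alerts.append((fields[0], fields[1], fields[2]))
    return alerts
```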
As explained at the beginning of this talk, the most important thing about a firewall is to make sure that the firewall itself is secure and does not provide additional security problems or a false sense of security - remember that if anybody inside your firewall is not completely trusted, you still have to fully secure all your systems just as you would without the firewall. However, there are firewall packages out there which include scanning of e-mail attachments and web content for viruses and known trojan horses. These may be useful for yourself or for paranoid clients, although securing the individual machines against viruses will always be the better alternative.
If you cannot achieve good physical security for a system then it is important that you configure the system so that physical access cannot be used to break into it. On a non-Intel Unix system this usually means configuring a console-secure mode and an EEPROM password, disabling the break key and passwording single-user boot. On an Intel Unix system, you also have to protect against people accessing the BIOS and booting off alternative devices, which is quite difficult as most PCs have well-known BIOS passwords. Even if you solve these problems, you still aren't protected against the denial of service attack resulting from someone disconnecting your system from power or from the internet, and you may be vulnerable to someone reconnecting your system to a local LAN and spoofing another server's address to gain increased access. Obviously all your network ports also need to be secure, to make sure people can't "replace" one of your servers with another system which responds to ARP requests faster and, for example, steals passwords, or has a configurable MAC address and confuses the network hardware - or simply add a traffic sniffer machine.
Basically, if you wish to obtain a good level of security it is critical that your systems are physically secure from everyone but trusted staff at all times.
Social engineering can be used to breach physical security or in a network attack. I mentioned the Phrack article on how to gain access to certain types of computer systems in my introduction. Another use of social engineering is to gain access to a machine room -- for example, "Hi. I'm from CompuFlow machine room air conditioning systems. I'm here to fix the humidity monitoring system, we've been getting alarms from it for the past half hour. I'll be about 2 hours." Would any of your staff let this person in? If there was a technician who legitimately needed to be in your machine room, would you be able to leave someone to watch him without risking physical security elsewhere?
A security policy is important: it forms the basis of your efforts to secure your network, and in designing it you must consider the security zones of your network.
To secure your network you must protect your servers, your clients and your traffic. Security is an ongoing job, and keeping current is essential. Services such as AUSCert and internet resources such as BUGTRAQ, NTBUGTRAQ and rootshell can be very useful in keeping current.
Your network should be covered by at least a filtering router to prevent network attacks. Firewalls can also be useful, but it is important not to over-estimate the usefulness of a firewall.
And last but not least, physical security and social engineering vulnerabilities should never be underestimated.