SY0-501 Section 4.3 Given a scenario, select the appropriate solution to establish host security.
Operating system security and settings
The ability to run the administrative interfaces within the operating system, and the applications associated with them, is often the difference between a standard user and an administrative user. The person running the administrative interfaces can make configuration changes to the system(s) and modify settings in ways that can have wide-ranging consequences. For example, a user who is able to gain access to the administrative tools could delete other users, set their own ID equal to the root user, change passwords, or delete key files.
To protect against this, access to management and administrative interfaces should be restricted to only those administrators who need it. Not only should you protect server utilities, but you should also go so far as to remove users' access to workstation utilities such as regedit and regedt32 that have administrative depth. The System And Security applet beneath the Control Panel (known just as Security in operating systems earlier than Windows 7) is the main interface for security features in Windows operating systems. From here, you can configure Windows Firewall, automatic scans of your computer, and Windows Defender.
One of the best tools to use when looking for possible illicit activity on a workstation is Performance Monitor (known as System Monitor in early versions of Windows). This utility can be used to examine activity on any counter. Excessive processor usage is one counter worth paying attention to if you suspect the workstation is compromised or being illegitimately accessed. It is important that you use password protection to protect the management functionality and consoles on a workstation or server. Just because users are authorized to use that machine does not mean they should be authorized to access all management functions.
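Performance Monitor itself is driven through the Windows GUI, but the idea of watching a counter is easy to script. The following is a minimal Python sketch, assuming the third-party psutil package is available; the 90 percent threshold and five-sample window are illustrative choices, not recommended values.

```python
# Minimal sketch: flag sustained high CPU usage, loosely analogous to
# watching a processor counter in Performance Monitor.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

THRESHOLD = 90.0   # percent; an illustrative cutoff, not an official value
SAMPLES = 5        # consecutive one-second samples

def cpu_looks_suspicious():
    """Return True only if every sample exceeds THRESHOLD."""
    return all(psutil.cpu_percent(interval=1) > THRESHOLD
               for _ in range(SAMPLES))

if __name__ == "__main__":
    if cpu_looks_suspicious():
        print("Sustained high CPU usage: investigate running processes.")
    else:
        print("CPU usage within normal range.")
```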
OS Hardening
The space of OS hardening is vast. It includes new ideas yet to be implemented in widely used operating systems such as Windows and Linux. It includes redesigning and reimplementing existing ideas such as "change roots" (chroot). It also includes having experts analyze the source code of an OS extremely carefully, as well as doing so via software tools based on mathematical proof techniques.
Least Privilege
On Unix/Linux systems, the user called root or superuser, user id 0, can bypass all security restrictions. Windows systems have the "System" and "Administrator" accounts. The superuser privilege should be split into many smaller privileges. For example, a backup process should be able to read any file, but it should not be able to shut down the system, modify files, or send network packets. Processes, not users, should be given privileges. The backup program should be permitted to read all files, even though the administrator user who invokes the program should not be allowed such access. The backup process should not pass down its privileges to processes that it starts. The use of such finely divided abilities instead of sweeping powerful mechanisms is called the least privilege principle.
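As one small ingredient of the principle, the Python sketch below shows a Unix process started as root dropping to an unprivileged account before doing its real work, so that any code it runs afterward cannot exercise root's powers. The account name backupuser is hypothetical, and this is only a sketch, not a full least-privilege design.

```python
# Minimal sketch of privilege dropping on Unix. Must be started as root;
# the account name "backupuser" is hypothetical.
import os
import pwd

def drop_privileges(username="backupuser"):
    entry = pwd.getpwnam(username)
    os.setgroups([])            # shed supplementary groups first
    os.setgid(entry.pw_gid)     # group before user, or setgid would fail
    os.setuid(entry.pw_uid)     # irreversible: no way back to root

if __name__ == "__main__":
    # ... open any files or sockets that genuinely need root here ...
    drop_privileges()
    # From this point on the process runs with ordinary permissions.
    print("Now running as uid", os.getuid())
```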
The traditional Unix model allows for access control of files only, so a number of resources are made to appear as "files": processes, hard disks, network interfaces, and so on. In order to apply the principle of least privilege, we also need to divide resources into finer units (often called objects, though unrelated to OOP). Users and processes are called subjects.
Capabilities
"Capability" is a word used with different meanings in the context of OS design. In the OS research literature, processes hold tokens, called capabilities, denoting resources that can be accessed and what can be done with them. Capabilities can be explicitly passed among processes. Operating systems such as Linux and Windows are not capability based in this sense. This usage of the word is unrelated to the "POSIX capabilities" implemented in Linux.
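To make the research usage concrete, here is a toy Python sketch of a capability as a token naming a resource and the operations permitted on it. Real capability systems make such tokens unforgeable at the OS level, which a plain Python object cannot do; all names here are illustrative.

```python
# Toy sketch of a capability: a token naming a resource plus the
# operations it grants. A holder can pass the token to another function
# (standing in for another process), which can then use only what the
# token allows.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    resource: str              # e.g., a file path
    operations: frozenset      # e.g., frozenset({"read"})

def read_file(cap: Capability):
    if "read" not in cap.operations:
        raise PermissionError("capability does not grant read")
    with open(cap.resource) as f:
        return f.read()

# A read-only capability can be handed to a helper, which can then read
# the file but never write it.
cap = Capability("/etc/hostname", frozenset({"read"}))
print(read_file(cap))
```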
Mandatory Access Control
Newer OS designs add additional security attributes, called sensitivity labels (SLs), to files, processes, network interfaces, host addresses, and other resources, based on how we wish to compartmentalize them. Access control using SLs is called mandatory access control (MAC). It is called mandatory because no inheritance of privileges is assumed. For example, MAC can be applied at the network interface level: incoming packets are assigned SLs based on the source IP address or the network interface, while outgoing packets carry the label of the process that created them. A filtering rule can be formulated so that an incoming or outgoing packet is dropped if its SL does not satisfy some condition. MAC, in its academic form, is not about access control but about information flow.
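The filtering idea can be sketched in a few lines. This Python fragment assumes a simple two-level label lattice (made up for illustration) and drops any incoming packet whose SL does not dominate the label of the receiving interface; an actual MAC policy would be far richer.

```python
# Minimal sketch of label-based filtering. The two-level lattice and the
# dominance condition are illustrative, not a real MAC policy.
LEVELS = {"unclassified": 0, "secret": 1}

def dominates(packet_sl, interface_sl):
    """True if the packet's label is at least as high as the interface's."""
    return LEVELS[packet_sl] >= LEVELS[interface_sl]

def filter_incoming(packet_sl, interface_sl):
    return "accept" if dominates(packet_sl, interface_sl) else "drop"

print(filter_incoming("unclassified", "secret"))  # drop
print(filter_incoming("secret", "secret"))        # accept
```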
There are Linux kernel modifications that accomplish the following:
- Secure policy enforcement
- Support for read, write, append, execute, view, and read-only ptrace object permissions
- Support for hide, protect, and override subject flags
- Support for the PaX flags
- Shared memory protection
- Integrated local attack response on all alerts
- A subject flag that ensures a process can never execute trojaned code
- An intelligent learning mode that produces least-privilege ACLs with no configuration
- Full-featured, fine-grained auditing
- Resource ACLs, socket ACLs, and file/process ACLs
- Capabilities
- Protection against exploit brute-forcing
- /proc/<pid> file descriptor/memory protection
- ACLs that can be placed on nonexistent files/processes
- ACL regeneration on subjects and objects
- An administrative mode for regular sysadmin tasks
- An ACL system that is resealed upon admin logout
- Globbing support on ACL objects
- Configurable log suppression
- Configurable process accounting
- Human-readable configuration
- No filesystem dependence
- No architecture dependence
- Good scaling: supports as many ACLs as memory can handle
- No runtime memory allocation
- SMP safety
- An include directive for specifying additional ACLs
- Enable, disable, and reload capabilities
- A userspace option to test permissions on an ACL
- An option to hide kernel processes
Role-Based Access Control
We can divide the permissions/privileges based on the function ("role") they serve, such as backup, file system integrity checking, or filtering of incoming packets. Each user is permitted a collection of roles. RBAC can implement MAC. There is a considerable amount of discrete mathematics developed for RBAC and MAC.
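A minimal Python sketch of the idea, with made-up role and permission names: permissions attach to roles, users are assigned roles, and an access check walks from user to roles to permissions.

```python
# Toy RBAC tables; all role, permission, and user names are illustrative.
ROLE_PERMISSIONS = {
    "backup":          {"read_any_file"},
    "integrity_check": {"read_any_file", "compute_checksums"},
    "packet_filter":   {"inspect_packets", "drop_packets"},
}

USER_ROLES = {
    "alice": {"backup"},
    "bob":   {"integrity_check", "packet_filter"},
}

def is_permitted(user, permission):
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

print(is_permitted("alice", "read_any_file"))  # True
print(is_permitted("alice", "drop_packets"))   # False
```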
Read-Only File System
An attacker with root privileges has access to any file. He or she can also access raw devices and corrupt the file systems on them. We should mount important file volumes as read-only, but an ordinary mount cannot securely accomplish that because of access to raw devices. A read-only file system must disable file-write system calls, and this would also prevent modifying the file system through raw devices.
Anti-Malware
It is important to stop malware before it ever takes hold of a system. Although tools that identify malware already present on a system are useful, real-time tools that stop it from ever reaching the system are much better. One of the available tools for Windows is Microsoft Security Essentials, which runs on Windows 7 as well as Windows Vista and Windows XP SP2. A beefed-up Windows Defender replaced Microsoft Security Essentials in Windows 8.
Also note that another free tool from Microsoft is the Malicious Software Removal Tool, which helps remove any infection found but is not intended to be a full anti-malware suite. An updated version of this tool is released on the second Tuesday of each month, and once installed, it is included, by default, in Microsoft Update and Windows Update.
Patch management
Patch management is an area of systems management that involves acquiring, testing, and installing multiple patches (code changes) to an administered computer system. Patch management tasks include: maintaining current knowledge of available patches, deciding what patches are appropriate for particular systems, ensuring that patches are installed properly, testing systems after installation, and documenting all associated procedures, such as specific configurations required. A number of products are available to automate patch management tasks, including RingMaster’s Automated Patch Management, PatchLink Update, and Gibraltar’s Everguard.
Like its real-world counterpart, a patch is a "make-do" fix rather than an elegant solution. Patches are sometimes ineffective and can sometimes cause more problems than they fix. Patch management experts, such as Mark Allen, CTO of Gibraltar Software, suggest that system administrators take simple steps to avoid problems, such as performing backups and testing patches on non-critical systems prior to installations. Patch management is a circular process and must be ongoing. The unfortunate reality about software vulnerabilities is that, after you apply a patch today, a new vulnerability must be addressed tomorrow. Develop and automate a patch management process that includes each of the following steps (a minimal detection sketch follows the list):
Detect. Use tools to scan your systems for missing security patches. The detection should be automated and will trigger the patch management process.
Assess. If necessary updates are not installed, determine the severity of the issue(s) addressed by the patch and the mitigating factors that may influence your decision. By balancing the severity of the issue and mitigating factors, you can determine if the vulnerabilities are a threat to your current environment.
Acquire. If the vulnerability is not addressed by the security measures already in place, download the patch for testing.
Test. Install the patch on a test system to verify the ramifications of the update against your production configuration.
Deploy. Deploy the patch to production computers. Make sure your applications are not affected. Employ your rollback or backup restore plan if needed.
Maintain. Subscribe to notifications that alert you to vulnerabilities as they are reported. Begin the patch management process again.
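The Detect step referenced above might be automated along these lines. This Python fragment is only a sketch: the patch IDs are made up, and in practice the installed and required lists would come from a scanner and a vendor feed.

```python
# Minimal sketch of the Detect step: compare patches installed on a host
# against a required list and report the gap. Patch IDs are fictional.
REQUIRED_PATCHES = {"KB000001", "KB000002", "KB000003"}

def detect_missing(installed):
    """Return the required patches not present on the host."""
    return REQUIRED_PATCHES - set(installed)

missing = detect_missing(["KB000001"])
if missing:
    # A finding here would trigger the Assess step above.
    print("Missing patches:", sorted(missing))
```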
Trusted OS
A trusted operating system (TOS) is any operating system that meets the government’s requirements for security. The most common set of standards for security is Common Criteria (CC). This document is a joint effort among Canada, France, Germany, the Netherlands, the United Kingdom, and the United States. The standard outlines a comprehensive set of evaluation criteria, broken down into seven Evaluation Assurance Levels (EALs).
EAL 1: EAL 1 is primarily used when the user wants assurance that the system will operate correctly but threats to security aren't viewed as serious.
EAL 2: EAL 2 requires product developers to use good design practices. Security isn't considered a high priority in EAL 2 certification.
EAL 3: EAL 3 requires conscientious development efforts to provide moderate levels of security.
EAL 4: EAL 4 requires positive security engineering based on good commercial development practices. It is anticipated that EAL 4 will be the common benchmark for commercial systems.
EAL 5: EAL 5 is intended to ensure that security engineering has been implemented in a product from the early design phases. It's intended for high levels of security assurance. The EAL documentation indicates that special design considerations will most likely be required to achieve this level of certification.
EAL 6: EAL 6 provides high levels of assurance of specialized security engineering. This certification indicates high levels of protection against significant risks. Systems with EAL 6 certification will be highly secure from penetration attempts.
EAL 7: EAL 7 is intended for extremely high levels of security. The certification requires extensive testing, measurement, and complete independent testing of every component.
EAL certification has replaced the Trusted Computer System Evaluation Criteria (TCSEC) system for certification, which was popular in the United States. It has also replaced the Information Technology Security Evaluation Criteria (ITSEC), which was popular in Europe. The recommended level of certification for commercial systems is EAL 4. Currently, only a few operating systems have been approved at the EAL 4 level, and even though an operating system straight out of the box may be certified at that level, that doesn't mean your own individual implementation of it is functioning at that level. If your implementation doesn't use the available security measures, then you're operating below that level. As an administrator, you should thoroughly understand that just because the operating system you have is capable of being certified at a high level of security doesn't mean that your implementation is at that level.
Host-based firewall
Host-based firewalls for servers typically use rule sets similar to those of network firewalls. Some host-based firewalls for desktops and laptops also use similar rule sets, but most allow or deny activity based on lists of applications. Activity involving any application not on the lists is either denied automatically, or permitted or denied based on the user's response to a prompt asking for a decision about the activity. To prevent malware incidents, organizations should configure host-based firewalls with deny-by-default rule sets for incoming traffic.
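A deny-by-default rule set can be pictured as an explicit allow list with a fallthrough deny, as in this Python sketch; the ports and rules shown are illustrative, not a recommended policy.

```python
# Minimal sketch of a deny-by-default host firewall: incoming traffic is
# matched against an explicit allow list, and anything unmatched is denied.
ALLOW_RULES = [
    {"proto": "tcp", "port": 22},   # remote administration (example)
    {"proto": "tcp", "port": 443},  # HTTPS to a local service (example)
]

def decide(packet):
    """Return 'allow' only when a rule matches; the default is deny."""
    for rule in ALLOW_RULES:
        if packet["proto"] == rule["proto"] and packet["port"] == rule["port"]:
            return "allow"
    return "deny"

print(decide({"proto": "tcp", "port": 443}))  # allow
print(decide({"proto": "udp", "port": 53}))   # deny (no matching rule)
```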
In addition to restricting network activity based on rules, many host-based firewalls for workstations incorporate antivirus software and intrusion prevention software capabilities, as well as suppressing Web browser pop-up windows, restricting mobile code, blocking cookies, and identifying potential privacy issues within Web pages and e-mails. Host-based firewalls that integrate these functions can be very effective not only at preventing most types of malware incidents, but also at stopping the spread of malware infections. For example, a host-based firewall with antivirus capabilities can monitor inbound and outbound e-mails for signs of mass mailing viruses or worms and temporarily shut off e-mail services if such activity is detected. Accordingly, host-based firewalls for workstations that offer several types of malware prevention capabilities typically offer the best single host-based technical control for malware threat mitigation, as long as they are configured properly and kept up-to-date at all times with the latest signatures and software updates.

Host-based firewalls are particularly important for systems that are network-connected but are not protected by network firewalls and other network-based security controls. Systems that are directly accessible from the Internet should be protected whenever possible through host-based firewalls to prevent network service worms and other threats from connecting to and attacking them.
Host-based Intrusion Detection
A host-based IDS (HIDS) is designed to run as software on a host computer system. These systems typically run as a service or as a background process. An HIDS examines machine logs, system events, and application interactions; it normally doesn't monitor incoming network traffic to the host. An HIDS is popular on servers that use encrypted channels or channels to other servers.
Two major problems with HIDSs aren't easily overcome. The first involves a compromise of the system: if the system is compromised, the log files to which the IDS reports may become corrupt or inaccurate, which may make fault determination difficult or make the system unreliable. The second major problem is that an HIDS must be deployed on each system that needs it, which can create a headache for administrative and support staff. One of the major benefits of HIDSs is the potential to keep checksums on files. These checksums can be used to inform system administrators that files have been altered by an attack; recovery is simplified because it's easier to determine where tampering has occurred. The other advantage is that an HIDS can read memory when a NIDS cannot. HIDSs typically respond in a passive manner to an incident; an active response would theoretically be similar to those provided by a network-based IDS.
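The checksum benefit is straightforward to illustrate. This Python sketch records SHA-256 hashes of watched files and later reports any whose contents changed; the watched paths are examples, and a real HIDS would also protect the baseline itself from tampering.

```python
# Minimal sketch of file-integrity checksums: record hashes, re-hash
# later, and report any file whose contents changed.
import hashlib

def hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(paths):
    return {p: hash_file(p) for p in paths}

def altered(paths, saved):
    return [p for p in paths if hash_file(p) != saved[p]]

WATCHED = ["/etc/passwd", "/etc/hosts"]  # example paths
saved = baseline(WATCHED)
# ... time passes; an attacker may have tampered with a file ...
for path in altered(WATCHED, saved):
    print("Tampering detected:", path)
```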
Hardware Security
Hardware security involves applying physical security modifications to secure the system(s) and preventing them from leaving the facility. Don’t spend all of your time worrying about intruders coming through the network wire while overlooking the obvious need for physical security.
Adding a cable lock between a laptop and a desk prevents someone from picking it up and walking away with a copy of your customer database. Most laptop cases include a built-in security slot in which a cable lock can be inserted to keep the machine from easily being removed from the premises. When it comes to desktop models, adding a lock to the back cover can prevent an intruder with physical access from grabbing the hard drive or damaging the internal components. The lock that connects through that slot can also go to a cable that then connects to a desk or other solid fixture to keep the entire PC from being carried away. In addition to running a cable to the desk, you can choose to run an end of it up to the monitor if theft of peripherals is a problem in your company. You should also consider using a safe and locking cabinets to protect backup media, documentation, and any other physical artifacts that could do harm if they fell into the wrong hands. Server racks should lock the rack-mounted servers into the cabinets to prevent someone from simply pulling one and walking out the front door with it.
Virtualization
An equally popular, and complementary, buzzword to "the cloud" is virtualization. The cost savings promised by virtualization are often offset by the threats to security should the hypervisor be compromised. One reason for the popularity of virtualization is that in order to have cloud computing, you must have virtualization; it is the foundation on which cloud computing is built. At the core of virtualization is the hypervisor, the software/hardware combination that makes it possible. There are two methods of implementation: Type I and Type II. The Type I hypervisor model, also known as bare metal, is independent of the operating system and boots before the OS. The Type II hypervisor model, also known as hosted, is dependent on the operating system and cannot boot until the OS is up and running; it needs the OS to stay up in order to run.
Snapshots
Snapshots allow you to take an image of a system at a particular point in time. With most virtual machine implementations, you can take as many snapshots as you want (provided you have enough storage space) in order to be able to revert a machine to a “saved” state. Snapshots contain a copy of the virtual machine settings (hardware configuration), information on all virtual disks attached, and the memory state of the machine at the time of the snapshot.
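As an illustration, a snapshot can also be requested programmatically. The Python sketch below uses the libvirt bindings against a local QEMU/KVM host; the connection URI, the domain name vm1, and the snapshot name are all assumptions about the environment, and other hypervisors expose similar but different APIs.

```python
# Sketch: take a snapshot of a libvirt-managed VM. Assumes the
# libvirt-python package, a local QEMU/KVM host, and a running domain
# named "vm1" (all environment-specific assumptions).
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-patch</name>
  <description>Saved state before applying updates</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm1")
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
print("Created snapshot:", snap.getName())
conn.close()
```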
Patch Compatibility
As with any server implementation, patch compatibility needs to be factored in before systems are updated. With VMware, patch releases are based on the code from the immediately preceding update, and compatibility for patches is assumed to have the same compatibility as the preceding update release. Although this approach differs for each vendor, most follow similar guidelines.
Host Availability/Elasticity
Host availability is a topic that needs to be addressed in the Service Level Agreement (SLA) with any vendor with whom you contract for cloud services. The goal is always to have minimal downtime: five 9s, or 99.999 percent uptime, is the industry standard. According to NIST, one of the five essential characteristics of the cloud is not just elasticity, but rapid elasticity.
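The five 9s figure translates into a concrete downtime budget; the quick calculation below shows that 99.999 percent uptime allows only about five minutes of downtime per year.

```python
# Downtime budget implied by an uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for pct in (99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{pct}% uptime -> about {downtime:.1f} minutes down per year")
# 99.999% works out to roughly 5.3 minutes per year.
```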
Security Control Testing
Security control testing (SCT) often includes interviews, examinations, and testing of systems to look for weaknesses. It should also include contract reviews of SLAs, a look at the history of prior breaches that a provider has had, a focus on shared resources as well as dedicated servers, and so on.
If this sounds a lot like penetration testing, that is because it is a subset of it. In some organizations, security can be pushed aside in favor of design, and there is a great opportunity for this to happen when transitioning to virtualization and/or cloud computing. As a security professional, it is your responsibility to see that design does not overwhelm security.
Sandboxing
Sandboxing involves running apps in restricted memory areas. By doing so, it is possible to limit the chance that an app's crash will allow a user to access another app or the data associated with it. Without sandboxing, the possibility exists that a crash in another customer's implementation could expose a path by which a user might hop ("server hop") to your data. It is important to know that though this possibility exists, and you should test extensively to keep it from happening, the possibility of it occurring has been greatly exaggerated by some in the media.
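One common sandboxing ingredient on Unix systems is capping the resources a child process may consume, so a runaway app harms only itself. This Python sketch limits a child's address space before it starts; the 256 MB limit and the child command are illustrative choices, and a real sandbox would layer many more restrictions.

```python
# Sketch of one sandboxing ingredient: cap a child's address space so a
# runaway allocation kills only that child. Unix-only.
import resource
import subprocess

LIMIT_BYTES = 256 * 1024 * 1024  # 256 MB; an illustrative limit

def restrict():
    # Runs in the child between fork and exec.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

proc = subprocess.Popen(
    ["python3", "-c", "print('running inside the sandboxed child')"],
    preexec_fn=restrict,  # Unix-only hook
)
proc.wait()
```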