SY0-501 Section 6.2 - Given a scenario, use appropriate cryptographic methods.
WEP vs. WPA/WPA2 and pre-shared key
Wired Equivalent Privacy (WEP) was intended to provide basic security for wireless networks, while many wireless systems also use the Wireless Application Protocol (WAP) for network communications. Over time, WPA and WPA2 have replaced WEP in most implementations.
Wired Equivalent Privacy
Wired Equivalent Privacy (WEP) is a wireless protocol designed to provide a privacy equivalent to that of a wired network. WEP was implemented in a number of wireless devices, including smartphones and other mobile devices. WEP is vulnerable because of weaknesses in the way its encryption algorithm (RC4) is employed. These weaknesses allowed the key to be cracked, potentially in as little as five minutes, using readily available PC software, making WEP one of the more vulnerable security protocols. As an example, the initialization vector (IV) that WEP uses for encryption is only 24 bits, which is quite weak and means that IVs are reused with the same key. By examining the repeating result, it was easy for attackers to recover the WEP secret key. This is known as an IV attack. Because the IV space is so small, IV values must eventually be repeated. To put it in perspective, the attack works because the algorithm used is RC4, the IV is too small, the IV is static, and the IV is part of the RC4 encryption key.
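To see why IV (and therefore keystream) reuse is so damaging, consider the following minimal sketch. It is not real WEP; it simply treats a fixed random byte string as the keystream that a repeated key+IV pair would produce, and shows that XORing two ciphertexts cancels the keystream and leaks the XOR of the plaintexts to an eavesdropper.

```python
# Simplified illustration of keystream reuse (not actual WEP/RC4 output).
import os

keystream = os.urandom(16)          # stands in for RC4 output under a repeated key+IV

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

c1 = xor(b"attack at dawn!!", keystream)
c2 = xor(b"retreat at noon!", keystream)

# The attacker never sees the keystream, yet recovers the XOR of the two plaintexts:
assert xor(c1, c2) == xor(b"attack at dawn!!", b"retreat at noon!")
```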
MD5
MD5 was developed in 1991 and is structured after MD4 but with additional security to overcome the problems in MD4. Therefore, it is very similar to the MD4 algorithm, only slightly slower and more secure. MD5 creates a 128-bit hash of a message of any length. Like MD4, it segments the message into 512-bit blocks and then into sixteen 32-bit words. First, the original message is padded to be 64 bits short of a multiple of 512 bits. Then a 64-bit representation of the original length of the message is added to the padded value to bring the entire message up to a 512-bit multiple.
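As a quick illustration using Python's standard hashlib, MD5 always produces a 128-bit (16-byte) digest regardless of input length; the padding described above happens inside the library.

```python
import hashlib

digest = hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest()
print(digest)                         # 32 hex characters = 128 bits
print(len(bytes.fromhex(digest)))     # 16 bytes
```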
SHA-256
SHA-256 is similar to SHA-1 in that it also accepts input of less than 2^64 bits and reduces that input to a hash. This algorithm reduces to 256 bits instead of SHA-1's 160. Defined in FIPS 180-2 in 2002, SHA-256 is listed as an update to the original FIPS 180 that defined SHA. Like SHA-1, SHA-256 accepts up to 2^64 bits of input and uses 32-bit words and 512-bit blocks. Padding is added until the entire message is a multiple of 512 bits. SHA-256 uses sixty-four 32-bit words and eight working variables, and results in a hash value of eight 32-bit words, hence 256 bits. SHA-256 is more secure than SHA-1; because the two algorithms are structurally similar, there is concern that attack techniques against SHA-1 could eventually carry over to SHA-256, although no practical SHA-256 collisions are known. The SHA standard does have two longer versions, however.
SHA-384
SHA-384 is also similar to SHA-1, but it handles larger sets of data. SHA-384 accepts up to 2^128 bits of input, which it pads until it forms a whole number of 1024-bit blocks. SHA-384 also uses 64-bit words instead of SHA-1's 32-bit words. It uses six 64-bit words to produce the 384-bit hash value.
SHA-512
SHA-512 is structurally similar to SHA-384. It accepts the same 2^128-bit input limit and uses the same 64-bit word size and 1024-bit block size. SHA-512 differs from SHA-384 in that it uses eight 64-bit words for the final hash, resulting in 512 bits.
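The differing output lengths of the SHA family members discussed above are easy to confirm with Python's standard hashlib, as in this short sketch.

```python
import hashlib

message = b"hello world"
for name in ("sha1", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).digest()
    print(name, len(digest) * 8, "bits")   # 160, 256, 384, and 512 bits respectively
```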
RIPEMD
RIPEMD is a cryptographic hash based upon MD4. It has been shown to have weaknesses and has been replaced by RIPEMD-128 and RIPEMD-160. These are cryptographic hash functions designed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel. The name comes from the project they were designed for: EU project RIPE (RACE Integrity Primitives Evaluation, 1988-1992). RIPEMD-160 produces a hash of the same length as SHA-1 but is slightly slower. RIPEMD-128 was designed as a drop-in replacement for MD4/MD5 while avoiding some of the weaknesses shown for those two algorithms (namely, possible hash collisions). It is about half the speed of MD5. Implementations of both these hashing functions are present in Trf, and a tcllib module provides a pure-Tcl version if required. The tcllib version will use Trf if it is available.
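A hedged sketch follows: Python's hashlib exposes RIPEMD-160 only when the underlying OpenSSL build provides it, so availability varies by platform and should be checked first.

```python
import hashlib

if "ripemd160" in hashlib.algorithms_available:
    print(hashlib.new("ripemd160", b"message digest").hexdigest())  # 160-bit digest
else:
    print("RIPEMD-160 is not available in this OpenSSL build")
```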
AES
Because of the advancement of technology and the progress being made in quickly retrieving DES keys, NIST put out a request for proposals for a new Advanced Encryption Standard (AES). It called for a block cipher using symmetric key cryptography and supporting key sizes of 128, 192, and 256 bits.
In the fall of 2000, NIST picked Rijndael to be the new AES. It was chosen for its overall security as well as its good performance on limited-capacity devices. Rijndael's design was influenced by Square, also written by Joan Daemen and Vincent Rijmen. Like Square, Rijndael is a block cipher separating data input into 128-bit blocks. Rijndael can also be configured to use blocks of 192 or 256 bits, but AES has standardized on 128-bit blocks. AES can have key sizes of 128, 192, and 256 bits, with the size of the key affecting the number of rounds used in the algorithm. Like DES, AES works in three steps on every block of input data:
1. Add round key, performing an XOR of the block with a subkey.
2. Perform the number of normal rounds required by the key length.
3. Perform a regular round without the mix-column step found in the normal round.
After these steps have been performed, a 128-bit block of plaintext produces a 128-bit block of ciphertext. As mentioned in step 2, AES performs multiple normal rounds, and the number is determined by the key size. A key size of 128 bits requires 9 normal rounds (10 in total, counting the final round), 192-bit keys require 11 (12 total), and 256-bit keys use 13 (14 total). Four steps are performed in every round:
1. Byte sub. Each byte is replaced by its S-box substitute.
2. Shift row. Bytes are arranged in a rectangle and shifted.
3. Mix column. Matrix multiplication is performed based upon the arranged rectangle.
4. Add round key. This round's subkey is XORed in.
These steps are performed until the final round has been completed, and when the final step has been performed, the ciphertext is output.
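To make the standardized parameters concrete (128-bit blocks, 128/192/256-bit keys), here is a minimal sketch using the third-party Python "cryptography" package, which is assumed to be installed; the key, IV, and plaintext values are arbitrary placeholders.

```python
# AES-128 in CBC mode, encrypting and decrypting exactly one 128-bit block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                  # 128-bit key (10 rounds internally)
iv = os.urandom(16)                   # 128-bit IV for CBC mode
plaintext = b"sixteen byte blk"       # exactly one 128-bit block

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```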
DES
DES, the Data Encryption Standard, was developed in response to the National Bureau of Standards (NBS), now known as the National Institute of Standards and Technology (NIST), issuing a request for proposals for a standard cryptographic algorithm in 1973. NBS received a promising response in an algorithm called Lucifer, originally developed by IBM. The NBS and the NSA worked together to analyze the algorithm’s security, and eventually DES was adopted as a federal standard in 1976.
NBS specified that the DES standard had to be recertified every five years. While DES passed without a hitch in 1983, the NSA said it would not recertify it in 1987. However, since no alternative was available for many businesses, many complaints ensued, and the NSA and NBS were forced to recertify it. The algorithm was then recertified in 1993. NIST has now certified the Advanced Encryption Standard (AES) to replace DES.
DES is what is known as a block cipher; it segments the input data into blocks of a specified size, typically padding the last block to make it a multiple of the block size required. In the case of DES, the block size is 64 bits, which means DES takes a 64-bit input and outputs 64 bits of ciphertext. This process is repeated for all 64-bit blocks in the message. DES uses a key length of 56 bits, and all security rests within the key. The same algorithm and key are used for both encryption and decryption.
3DES
Triple DES (3DES) is a variant of DES. Depending on the specific variant, it uses either two or three keys instead of the single key that DES uses. It also spins through the DES algorithm three times via what's called multiple encryption. Multiple encryption can be performed in several different ways. The simplest method of multiple encryption is just to stack algorithms on top of each other: taking plaintext, encrypting it with DES, then encrypting the first ciphertext with a different key, and then encrypting the second ciphertext with a third key. In reality, this technique is less effective than the technique that 3DES uses, which is to encrypt with one key, then decrypt with a second, and then encrypt with a third.
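The sketch below shows 3DES in use via the third-party pycryptodome package (assumed installed), with three independent 64-bit keys and 8-byte (64-bit) blocks as described for DES above; all values are placeholders.

```python
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes

key = DES3.adjust_key_parity(get_random_bytes(24))   # 3 x 64-bit keys (56 effective bits each)
cipher = DES3.new(key, DES3.MODE_CBC)
ciphertext = cipher.encrypt(b"eight byte blocks only..")   # input must be a multiple of 8 bytes

decipher = DES3.new(key, DES3.MODE_CBC, iv=cipher.iv)
assert decipher.decrypt(ciphertext) == b"eight byte blocks only.."
```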
HMAC
What is HMAC? HMAC is merely a specific type of MAC function. It works by using an underlying hash function over a message and a key. It is currently one of the predominant means to ensure that secure data is not corrupted in transit over insecure channels (like the Internet). Any hashing function can be used with HMAC, although more secure hashing functions are preferable. An example of a secure hash function commonly used in HMAC implementations is SHA-1. (Other common hashing functions include MD5 and RIPEMD-160.) As computers become more and more powerful, increasingly complex hash functions will probably be used. Furthermore, there are several later generations of SHA hashing functions (SHA-256, SHA-384, and SHA-512) which are available but not yet very widely used, as their added security is not believed to be needed in everyday transactions.
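A short sketch using Python's standard hmac module illustrates the idea: both parties share a secret key (a placeholder value here) and compute the same keyed hash over the message; the receiver compares tags to detect tampering.

```python
import hashlib
import hmac

key = b"shared-secret-key"            # placeholder: a secret pre-shared by both parties
message = b"important message"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares it in constant time:
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```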
RSA
RSA is one of the first public key cryptosystems ever invented. It can be used for both encryption and digital signatures. RSA is named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman, and was first published in 1977. This algorithm uses the product of two very large prime numbers and works on the principle of the difficulty of factoring such large numbers. It's best to choose large prime numbers from 100 to 200 digits in length that are of equal length. These two primes will be P and Q. Randomly choose an encryption key, E, so that E is greater than 1, E is less than P * Q, and E is odd. E must also be relatively prime to (P - 1) and (Q - 1). Then compute the decryption key D:
D = E^(-1) mod ((P - 1)(Q - 1))
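The following toy walk-through uses deliberately tiny primes purely for illustration (real keys use primes hundreds of digits long), showing the key setup and the encrypt/decrypt relationship described above.

```python
P, Q = 61, 53
N = P * Q                        # public modulus
phi = (P - 1) * (Q - 1)          # 3120
E = 17                           # odd, 1 < E < N, and relatively prime to phi
D = pow(E, -1, phi)              # modular inverse: D = E^(-1) mod phi -> 2753

message = 65
ciphertext = pow(message, E, N)          # encryption: c = m^E mod N
assert pow(ciphertext, D, N) == message  # decryption: m = c^D mod N
```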
RC4
RC4 was created before RC5 and RC6, but it differs in operation. RC4 is a stream cipher, whereas all the symmetric ciphers we have looked at so far have been block-mode ciphers. A stream-mode cipher works by enciphering the plaintext in a stream, usually bit by bit. This makes stream ciphers faster than block-mode ciphers. Stream ciphers accomplish this by performing a bitwise XOR with the plaintext stream and a generated key stream.
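The sketch below is a compact RC4 implementation for illustration only (RC4 is considered broken and should not be used in production); it shows the key-scheduling step and how encryption and decryption are the same XOR operation against the generated keystream.

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the keystream.
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"
```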
One-time pads
One-time pad (OTP), also called the Vernam cipher or the perfect cipher, is a crypto algorithm where plaintext is combined with a truly random key. It is the only known method of performing mathematically unbreakable encryption. Used by Special Operations teams and resistance groups in WW2, and popular with intelligence agencies and their spies during the Cold War and beyond, the one-time pad gained a reputation as a simple but solid encryption system in the intriguing world of intelligence, espionage, covert operations, and military and diplomatic communications, offering an absolute security that is unmatched by modern crypto algorithms.
We can only talk about one-time pad if four important rules are followed. If these rules are applied correctly, the one-time pad can be proven unbreakable (see Claude Shannon’s “Communication Theory of Secrecy Systems”). Even infinite computational power and infinite time cannot break one-time pad encryption, simply because it is mathematically impossible. However, if only one of these rules is disregarded, the cipher is no longer unbreakable.
– The key is as long as the message or data that is encrypted.
– The key is truly random (not generated by a simple computer function or the like).
– Each key is used only once, and both sender and receiver must destroy their key after use.
– There should only be two copies of the key: one for the sender and one for the receiver (some exceptions exist for multiple receivers).
Important note: one-time pads or one-time encryption is not to be confused with one-time keys (OTK) or one-time passwords (sometimes also denoted as OTP). Such one-time keys, limited in size, are only valid for a single encryption session by some crypto-algorithm under control of that key. Small one-time keys are by no means unbreakable, because the security of the encryption depends on the crypto algorithm they are used for.
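The pad operation itself is simple, as the sketch below shows: the message is XORed with a key of equal length. Here os.urandom stands in for a true random source (an assumption; a genuine one-time pad requires truly random key material, used once and then destroyed, per the rules above).

```python
import os

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))            # rule: key is as long as the message

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
```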
CHAP
CHAP is used to periodically verify the identity of the peer using a 3-way handshake. This is done upon initial link establishment, and MAY be repeated anytime after the link has been established.
1. After the Link Establishment phase is complete, the authenticator sends a “challenge” message to the peer.
2. The peer responds with a value calculated using a “one-way hash” function.
3. The authenticator checks the response against its own calculation of the expected hash value. If the values match, the authentication is acknowledged; otherwise the connection SHOULD be terminated.
4. At random intervals, the authenticator sends a new challenge to the peer, and repeats steps 1 to 3.
CHAP provides protection against playback attack by the peer through the use of an incrementally changing identifier and a variable challenge value. The use of repeated challenges is intended to limit the time of exposure to any single attack. The authenticator is in control of the frequency and timing of the challenges. This authentication method depends upon a "secret" known only to the authenticator and that peer. The secret is not sent over the link. Although the authentication is only one-way, by negotiating CHAP in both directions the same secret set may easily be used for mutual authentication. Since CHAP may be used to authenticate many different systems, name fields may be used as an index to locate the proper secret in a large table of secrets. This also makes it possible to support more than one name/secret pair per system, and to change the secret in use at any time during the session.
CHAP requires that the secret be available in plaintext form. Irreversibly encrypted password databases commonly available cannot be used. It is not as useful for large installations, since every possible secret is maintained at both ends of the link.
The Challenge packet is used to begin the Challenge-Handshake Authentication Protocol. The authenticator MUST transmit a CHAP packet with the Code field set to 1 (Challenge). Additional Challenge packets MUST be sent until a valid Response packet is received, or an optional retry counter expires. A Challenge packet MAY also be transmitted at any time during the Network-Layer Protocol phase to ensure that the connection has not been altered. The peer SHOULD expect Challenge packets during the Authentication phase and the Network-Layer Protocol phase. Whenever a Challenge packet is received, the peer MUST transmit a CHAP packet with the Code field set to 2 (Response). Whenever a Response packet is received, the authenticator compares the Response Value with its own calculation of the expected value. Based on this comparison, the authenticator MUST send a Success or Failure packet.
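The sketch below illustrates the response calculation described above: the peer runs a one-way hash (MD5 in the classic CHAP definition) over the packet identifier, the shared secret, and the challenge, and the authenticator repeats the same calculation to verify. The secret value is a placeholder; this is a conceptual sketch, not a full protocol implementation.

```python
import hashlib
import hmac
import os

secret = b"shared-chap-secret"            # known only to authenticator and peer (placeholder)

# Authenticator: send a random challenge with an identifier.
identifier = 1
challenge = os.urandom(16)

# Peer: compute the response value with a one-way hash.
response = hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator: recompute the expected value and compare.
expected = hashlib.md5(bytes([identifier]) + secret + challenge).digest()
print("Success" if hmac.compare_digest(response, expected) else "Failure")
```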
PAP
PAP can authenticate an identity and password for a peer resulting in success or failure. PAP provides a simple method for the peer to establish its identity using a 2-way handshake. This is done only upon initial link establishment. After the Link Establishment phase is complete, an Id/Password pair is repeatedly sent by the peer to the authenticator until authentication is acknowledged or the connection is terminated. PAP is not a strong authentication method. Passwords are sent over the circuit “in the clear”, and there is no protection from playback or repeated trial and error attacks. The peer is in control of the frequency and timing of the attempts. Any implementations which include a stronger authentication method (such as CHAP) MUST offer to negotiate that method prior to PAP. This authentication method is most appropriately used where a plaintext password must be available to simulate a login at a remote host. In such use, this method provides a similar level of security to the usual user login at the remote host.
NTLM
NTLM is a suite of authentication and session security protocols used in various Microsoft network protocol implementations and supported by the NTLM Security Support Provider ("NTLMSSP"). Originally used for authentication and negotiation of secure DCE/RPC, NTLM is also used throughout Microsoft's systems as an integrated single sign-on mechanism. It is probably best recognized as part of the "Integrated Windows Authentication" stack for HTTP authentication; however, it is also used in Microsoft implementations of SMTP, POP3, IMAP (all part of Exchange), CIFS/SMB, Telnet, SIP, and possibly others. The NTLM Security Support Provider provides authentication, integrity, and confidentiality services within the Windows Security Support Provider Interface (SSPI) framework. SSPI specifies a core set of security functionality that is implemented by supporting providers; the NTLMSSP is such a provider. The SSPI specifies, and the NTLMSSP implements, the following core operations:
1. Authentication — NTLM provides a challenge-response authentication mechanism, in which clients are able to prove their identities without sending a password to the server.
2. Signing — The NTLMSSP provides a means of applying a digital "signature" to a message. This ensures that the signed message has not been modified (either accidentally or intentionally) and that the signing party has knowledge of a shared secret. NTLM implements a symmetric signature scheme (Message Authentication Code, or MAC); that is, a valid signature can only be generated and verified by parties that possess the common shared key.
3. Sealing — The NTLMSSP implements a symmetric-key encryption mechanism, which provides message confidentiality. In the case of NTLM, sealing also implies signing (a signed message is not necessarily sealed, but all sealed messages are signed).
NTLM has been largely supplanted by Kerberos as the authentication protocol of choice for domain-based scenarios. However, Kerberos is a trusted-third-party scheme, and cannot be used in situations where no trusted third party exists; for example, member servers (servers that are not part of a domain), local accounts, and authentication to resources in an untrusted domain. In such scenarios, NTLM continues to be the primary authentication mechanism (and likely will be for a long time).
Blowfish
Blowfish is a keyed, symmetric cryptographic block cipher designed by Bruce Schneier in 1993 and placed in the public domain. Blowfish is included in a large number of cipher suites and encryption products, including SplashID. Blowfish's security has been extensively tested. As a public domain cipher, Blowfish has been subject to a significant amount of cryptanalysis, and full Blowfish encryption has never been broken. Blowfish is also one of the fastest block ciphers in public use, making it ideal for a product like SplashID that functions on a wide variety of processors found in mobile phones as well as in notebook and desktop computers.
Schneier designed Blowfish as a general-purpose algorithm, intended as a replacement for the aging DES and free of the problems associated with other algorithms. Notable features of the design include key-dependent S-boxes and a highly complex key schedule.
The Blowfish algorithm
Blowfish has a 64-bit block size and a key length of anywhere from 32 bits to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. It is similar in structure to CAST-128, which uses fixed S-boxes. The algorithm keeps two subkey arrays: the 18-entry P-array and four 256-entry S-boxes. The S-boxes accept 8-bit input and produce 32-bit output. One entry of the P-array is used every round, and after the final round, each half of the data block is XORed with one of the two remaining unused P-entries. Blowfish's F-function splits the 32-bit input into four 8-bit quarters and uses the quarters as input to the S-boxes. The outputs are added modulo 2^32 and XORed to produce the final 32-bit output.
Since Blowfish is a Feistel network, it can be inverted simply by XORing P17 and P18 with the ciphertext block and then using the P-entries in reverse order. Blowfish's key schedule starts by initializing the P-array and S-boxes with values derived from the hexadecimal digits of pi, which contain no obvious pattern. The secret key is then XORed with the P-entries in order (cycling the key if necessary). A 64-bit all-zero block is then encrypted with the algorithm as it stands. The resultant ciphertext replaces P1 and P2. The ciphertext is then encrypted again with the new subkeys, and P3 and P4 are replaced by the new ciphertext. This continues, replacing the entire P-array and all the S-box entries. In all, the Blowfish encryption algorithm runs 521 times to generate all the subkeys; about 4 KB of data is processed.
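For a quick usage sketch, the third-party pycryptodome package (assumed installed) exposes Blowfish with its 64-bit block size and variable key length; the key and plaintext below are arbitrary placeholders.

```python
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                      # 128-bit key (any length from 4 to 56 bytes)
cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = cipher.encrypt(b"exactly 16 byte!")    # input must be a multiple of 8 bytes

decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=cipher.iv)
assert decipher.decrypt(ciphertext) == b"exactly 16 byte!"
```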
PGP/GPG
PGP/GPG are tools for encrypting and signing files and e-mail messages. Encrypting prevents others from being able to view the content; signing lets the receiver be certain of whom the original file came from and that it was not altered.
Whole disk encryption
Full Disk Encryption or Whole Disk Encryption is a phrase that was coined by Seagate to describe their encrypting hard drive. Under such a system, the entire contents of a hard drive are encrypted. This is different from Full Volume Encryption, where only certain partitions are encrypted. It is a kind of disk encryption software or hardware that encrypts every bit of data that goes onto a disk or disk volume. The term "full disk encryption" is often used to signify that everything on a disk is encrypted, including the programs that can encrypt bootable operating system partitions. Full-disk encryption is in contrast to filesystem-level encryption, which is a form of disk encryption where individual files or directories are encrypted by the filesystem itself. The enterprise-class full-disk encryption for Linux goes by the name dm-crypt. There is also an extension to it called LUKS (Linux Unified Key Setup), which enables fancier features such as key management. dm-crypt and LUKS are both free software working together to provide data encryption on storage media, thus helping to ensure that what is secret stays secret. dm-crypt is a device-mapper target and part of the Linux kernel. LUKS is a hard disk encryption specification; cryptsetup is its actual implementation.
TwoFish
Twofish is a 128-bit block cipher that accepts a variable-length key up to 256 bits. The cipher is a 16-round Feistel network with a bijective F function made up of four key-dependent 8-by-8-bit S-boxes, a fixed 4-by-4 maximum distance separable matrix over GF(2^8), a pseudo-Hadamard transform, bitwise rotations, and a carefully designed key schedule. A fully optimized implementation of Twofish encrypts on a Pentium Pro at 17.8 clock cycles per byte, and an 8-bit smart card implementation encrypts at 1820 clock cycles per byte. Twofish can be implemented in hardware in 14,000 gates. The design of both the round function and the key schedule permits a wide variety of tradeoffs between speed, software size, key setup time, gate count, and memory. The designers extensively cryptanalyzed Twofish; their best attack breaks 5 rounds with 2^22.5 chosen plaintexts and 2^51 effort.
Twofish can:
– Encrypt data at 285 clock cycles per block on a Pentium Pro, after a 12,700 clock-cycle key setup.
– Encrypt data at 26500 clock cycles per block on a 6805 smart card, after a 1750 clock-cycle key setup.
Diffie-Hellman
Whitfield Diffie and Martin Hellman conceptualized the Diffie-Hellman key exchange. They are considered the founders of the public/private key concept; their original work envisioned splitting the key into two parts. This algorithm is used primarily to send keys across public networks. The process isn’t used to encrypt or decrypt messages; it’s used merely for the creation of a symmetric key between two parties. An interesting twist is that Malcolm J. Williamson of the British Intelligence Service had actually developed the method a few years earlier, but it was classified. On the Security+ exam, if you are asked about an algorithm for exchanging keys over an insecure medium, unless the context is IPSec, the answer is always Diffie-Hellman.
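The toy exchange below uses deliberately small public numbers (real deployments use primes of 2048 bits or more) to show how both parties derive the same symmetric key without ever sending it across the wire.

```python
import secrets

p, g = 23, 5                        # public prime modulus and generator (toy values)

a = secrets.randbelow(p - 2) + 1    # Alice's private value
b = secrets.randbelow(p - 2) + 1    # Bob's private value

A = pow(g, a, p)                    # Alice sends A over the insecure channel
B = pow(g, b, p)                    # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)         # Alice computes the shared secret
shared_bob = pow(A, b, p)           # Bob computes the same value
assert shared_alice == shared_bob
```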
DHE
Adding an ephemeral key to Diffie-Hellman turns it into DHE (which, despite the order of the acronym, stands for Ephemeral Diffie-Hellman). Adding an ephemeral key to Elliptic Curve Diffie-Hellman turns it into ECDHE (again, overlook the order of the acronym letters; it is called Ephemeral Elliptic Curve Diffie-Hellman). It is the ephemeral component of each of these that provides the perfect forward secrecy.
Use of algorithms/protocols with transport encryption
SSL and TLS
Secure Sockets Layer (SSL) is used to establish a secure communication connection between two TCP-based machines. This protocol uses the handshake method of establishing a session. The number of steps in the handshake depends on whether steps are combined and/or mutual authentication is included. The number of steps is always between four and nine, inclusive, based on who is doing the documentation. One of the early steps will always be to select an appropriate cipher suite to use. A cipher suite is a combination of methods, such as authentication, encryption, and message authentication code (MAC) algorithms, used together. Many cryptographic protocols, such as TLS, use a cipher suite. Netscape originally developed the SSL method, which has gained wide acceptance throughout the industry. SSL establishes a session using asymmetric encryption and maintains the session using symmetric encryption.
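A short sketch with Python's standard ssl module shows the result of that handshake, printing the protocol version and cipher suite the server negotiates; example.com is used purely as a placeholder host, and network access is assumed.

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())    # e.g. 'TLSv1.3'
        print(tls.cipher())     # (cipher suite name, protocol version, secret bits)
```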
IPSec
IP Security (IPSec) is a security protocol that provides authentication and encryption across the Internet. IPSec is becoming a standard for encrypting virtual private network (VPN) channels and is built into IPv6. It’s available on most network platforms, and it’s considered to be highly secure. One of the primary uses of IPSec is to create VPNs. IPSec, in conjunction with Layer 2 Tunneling Protocol (L2TP) or Layer 2 Forwarding (L2F), creates packets that are difficult to read if intercepted by a third party. IPSec works at layer 3 of the OSI model. The two primary protocols used by IPSec are Authentication Header (AH) and Encapsulating Security Payload (ESP). Both can operate in either the transport or tunnel mode. Protocol 50 is used for ESP, while protocol 51 is used for AH.
SSH
Secure Shell (SSH) is a tunneling protocol originally used on Unix systems. It’s now available for both Unix and Windows environments. The handshake process between the client and server is similar to the process described in SSL. SSH is primarily intended for interactive terminal sessions.
HTTPS
Hypertext Transport Protocol over SSL (HTTPS), also known as Hypertext Transport Protocol Secure, is the secure version of HTTP, the language of the World Wide Web. HTTPS uses SSL to secure the channel between the client and server. Many e-business systems use HTTPS for secure transactions. An HTTPS session is identified by the https in the URL and by a lock or key icon displayed in the web browser.
Cipher suites
The system may be considered weak if it allows weak keys, has defects in its design, or is easily decrypted. Many systems available today are more than adequate for business and personal use, but they are inadequate for sensitive military or governmental applications. Cipher suites, for example, work with SSL/TLS to combine authentication, encryption, and message authentication. Most vendors allow you to set cipher suite preferences on a server to determine the level of strength required by client connections. With Sybase, for example, you set the cipher suite preference to Weak, Strong, FIPS, or All. If you choose Strong, you are limiting the choices to only encryption algorithms that use keys of 64 bits or more. Choosing Weak adds all the encryption algorithms that use fewer than 64 bits, while choosing FIPS requires encryption, hash, and key exchange algorithms to be FIPS-compliant (AES, 3DES, DES, and SHA-1). Apache offers similar choices, but instead of the words Strong and Weak, the names are changed to High, Medium, and Low.
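The same idea of restricting a server or client to stronger cipher suites can be illustrated with Python's standard ssl module (an analogous sketch, not the Sybase or Apache configuration itself); the cipher-string shown is one reasonable example in OpenSSL syntax.

```python
import ssl

context = ssl.create_default_context()
# Restrict TLS 1.2 cipher suites to ECDHE key exchange with AES-GCM, excluding weak options.
context.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")

for suite in context.get_ciphers()[:5]:
    print(suite["name"], suite["protocol"], suite["strength_bits"])
```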
Key stretching
Key stretching refers to processes used to take a key that might be a bit weak and make it stronger, usually by making it longer. The key (or password/passphrase) is input into an algorithm that will strengthen the key and make it longer, thus less susceptible to brute-force attacks. There are many methods for doing this; two are discussed here:
PBKDF2
PBKDF2 (Password-Based Key Derivation Function 2) is part of PKCS #5 v2.0. It applies some function (like a hash or HMAC) to the password or passphrase along with a salt to produce a derived key.
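Python's standard hashlib includes PBKDF2, as in the sketch below; the password and iteration count are placeholder values, and the large iteration count is what provides the stretching.

```python
import hashlib
import os

password = b"correct horse battery staple"   # placeholder passphrase
salt = os.urandom(16)

# 600,000 iterations of HMAC-SHA-256; the iteration count is the stretching factor.
derived_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(derived_key.hex())
```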
Bcrypt
bcrypt is used with passwords, and it essentially uses a derivation of the Blowfish algorithm, converted to a hashing algorithm, to hash a password and add a salt to it.
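A brief sketch using the third-party bcrypt package (assumed installed) shows the typical hash-and-verify flow; the password and cost factor are placeholder choices.

```python
import bcrypt

password = b"hunter2"                               # placeholder password
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))   # cost factor 12 (2^12 key-setup rounds)

# Verification re-derives the hash from the stored salt and compares.
assert bcrypt.checkpw(password, hashed)
```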