Encryption and Strong Authentication for Electronic Commerce

Camillo Särs
Faculty of Computer Science
Helsinki University of Technology
ged@iki.fi

Abstract

Electronic Commerce is not possible if the parties cannot authenticate each other or if a transaction can be altered by a malicious third party. This paper presents some of the available methods for securing transactions over an insecure network and for authenticating the communicating parties. The advantages and drawbacks of the different methods are presented, and the methods are compared.


Table of Contents

1. Introduction
2. Basics of cryptography
2.1 Ciphers
2.2 Symmetric algorithms
2.3 Asymmetric algorithms
2.4 Hybrid ciphers
2.5 Cipher strength
3. Network cryptography applications
3.1 Application layer
3.2 Transport layer
3.3 Network layer
3.4 Other security measures
4. Comparison and Speculation
5. Conclusions
References
Glossary

1. Introduction

The rapid growth of the Internet has led to an increasing demand for secure electronic communication. The demand is most apparent on the Internet itself, but is by no means restricted to it. Companies want to exploit computer networks to their full potential, connecting sites that may be situated on opposite sides of the earth. Individual users want to access remote sites securely without disclosing their identities or activities.

Modern cryptography offers practical solutions to the problems that users in a networked environment are faced with. The next section presents some of the basic techniques of cryptography, but before you apply the solutions, you should understand the problems.

The first thing the term cryptography brings to mind is confidentiality, i.e. the ability to protect information from disclosure. It is immediately obvious that some types of information need adequate protection. Both individuals and companies have information that they do not want the whole world to know, so sending such information over an unprotected network is quite out of the question.

What is less obvious, and more controversial, is that an individual should have the right to protect all the private information he or she wants to protect. We will not go into the details of this politically sensitive area, but you should remember that some governments want to restrict the private citizen's right to use cryptography. Several less democratic countries have legislation that restricts the use or export of cryptographic algorithms in the interest of the government.

The classical form of authentication is a user id and a password transmitted in the clear. This was once barely adequate, but nowadays authentication must be handled with more sophisticated techniques. Modern cryptography offers several techniques for very strong authentication, and they can be used to authenticate almost anything on a network: users, hosts, clients, servers, you name it.

In some contexts where authentication is used today, authorization would be the more appropriate technique. The distinction between the two is clear, but they are nevertheless often confused. When you authenticate yourself, you prove your identity, whereas you use authorization to prove that you are entitled to use some facility. This gets interesting when you realize that cryptography makes it possible to authorize yourself without disclosing your identity.

The value of integrity of a piece of information is often underrated. In a closed system, you can assume that all the information you get is correct, or that you can easily detect that it has been corrupted. In a networked system, you must ensure the integrity of the information you send and receive. If you were to make a payment to the other side of the world, you would most certainly want to ensure that nobody could alter the sum you were paying or redirect it to the wrong account.

Commercial transactions, and many other transactions, require that no party can later claim that the transaction never took place. This principle of nonrepudiation is becoming increasingly important, and it can quite easily be satisfied using appropriate cryptographic techniques.

As with everything else, there is a downside to the use of cryptography. The ability to reliably identify a user can easily invade that user's privacy. Improper application of cryptography can give governments and corporations more power over the lives of ordinary citizens. The balance between anonymity and privacy on one hand and surveillance and authentication on the other is very delicate. When applying cryptography to a problem you should always consider its ramifications.

Users tend to lose their keys, regardless of how hard system administrators try to avoid such situations. There are cryptographic methods that can be used for key recovery, but so far most organizations simply use key escrow. The difference is significant: key escrow means that the key can fall into the wrong hands, whereas key recovery guarantees that only the rightful owner can recover a lost key.

2. Basics of cryptography

Most of the facts in this section are taken from the excellent book [Crypto]. It is generally regarded as one of the leading books in the area, especially for readers who have little or no previous experience with cryptography. This section gives only a brief presentation of the essentials; later sections assume that you have at least taken a brief look at the book.

2.1 Ciphers

Figure 2.1 shows the process of encrypting and decrypting a message. Depending on how you interpret the different parts, the figure actually describes virtually every encryption technique available. The message you want to encrypt is fed to a cryptographic algorithm and encrypted using a key. The output from the algorithm is called ciphertext. The only way to recover the original message is to decrypt the ciphertext with the correct decryption algorithm and key.


Figure 2.1 Encryption and decryption

Some historic ciphers relied on keeping the cryptographic algorithm secret, but all modern ciphers rely only on the key for their security. Auguste Kerckhoffs first stated the fundamental principle that the cipher designer must assume that the cryptanalyst has complete details of the design and implementation of the cryptographic algorithm. A cipher is considered strong only when it has been scrutinized by the collective knowledge of the international cryptography community and no major flaws have been found.

2.2 Symmetric algorithms

When the same key is used both for encryption and decryption, the algorithm is called a symmetric algorithm. The operations are usually denoted as shown in figure 2.2. Most of the fastest algorithms known today are symmetric, and they are part of virtually every cryptographic package currently in use. Using the same key makes things a bit complicated, as the parties must be able to agree on a key without disclosing it to anybody else. This problem can be solved using asymmetric algorithms.

E_K(M) = C
D_K(C) = M
Figure 2.2 Notation for symmetric encryption and decryption
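
To make the notation concrete, here is a toy Python implementation of a symmetric stream cipher in which encryption and decryption are the same keyed operation. The hash-counter keystream is our own stand-in for a real algorithm and offers no actual security; it only illustrates that knowing K lets you compute both E_K and D_K.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream by hashing the key with a
    # counter. This is a toy construction, not a vetted cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.md5(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def E(key: bytes, message: bytes) -> bytes:
    # E_K(M) = C: XOR the plaintext with the keyed stream.
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

def D(key: bytes, ciphertext: bytes) -> bytes:
    # D_K(C) = M: XOR is its own inverse, so decryption reuses E.
    return E(key, ciphertext)

assert D(b"shared key", E(b"shared key", b"attack at dawn")) == b"attack at dawn"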

Symmetric algorithms can be roughly divided into two categories, stream ciphers and block ciphers. A stream cipher operates on very small units, often as little as a bit at a time, whereas a block cipher encrypts constant sized blocks. Many block ciphers can be used in a mode that turns them into stream ciphers. Stream ciphers are suitable for encrypting data on the fly; block ciphers are best used for encrypting data in place.

Modern block ciphers are designed using two basic techniques, confusion and diffusion. Each can be used separately to create quite complex algorithms, but neither is as effective alone as the two combined. Confusion is essentially substitution: patterns of plaintext are exchanged for patterns of ciphertext. Modern substitutions are very complex and vary with each bit of the plaintext and the key. Diffusion spreads the information of the plaintext by transposing the bits, so that patterns in the plaintext are harder to find.

Stream ciphers obviously cannot directly apply diffusion to the plaintext, but often the underlying algorithm uses both confusion and diffusion to produce the bit stream used for encryption.
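
As an illustration of the two techniques, the following Python sketch runs one round of a toy substitution-permutation network on a 16-bit block. The S-box and the bit permutation are invented for the example; real block ciphers iterate many carefully designed rounds, mixing key material into each.

# Toy SP-network round on a 16-bit block: an S-box provides confusion,
# a bit permutation provides diffusion.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def round16(block: int, subkey: int) -> int:
    block ^= subkey                       # key mixing
    nibbles = [(block >> s) & 0xF for s in (12, 8, 4, 0)]
    block = 0
    for n in nibbles:                     # confusion: substitute each nibble
        block = (block << 4) | SBOX[n]
    out = 0
    for i, p in enumerate(PERM):          # diffusion: transpose the bits
        out |= ((block >> p) & 1) << i
    return out

print(hex(round16(0x1234, 0xBEEF)))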

Some of the most popular symmetric algorithms are summarized in table 2.1. The importance of the key length is discussed in section 2.5.
Cipher     Key length (bits)  Comment
DES        56                 The Data Encryption Standard is over 20 years old, and is one of the best known cryptographic algorithms. Cryptanalysts have learned many new techniques from analyzing DES. Designed for hardware implementation.
3DES       112                Triple DES, i.e. DES applied three times. There are variants with two or three keys, but the effective key length is no more than 112 bits.
IDEA       128                The International Data Encryption Algorithm is considerably younger than DES. So far no cryptanalyst has found any significant weaknesses in the algorithm, and it has become increasingly popular.
Blowfish   up to 448          An algorithm designed by Bruce Schneier for implementation on large microprocessors. The key length can be varied for different security requirements. Some weak keys have been discovered, but the algorithm can be considered quite strong.
Table 2.1 Symmetric algorithms

2.3 Asymmetric algorithms

Algorithms that use different keys for encryption and decryption are called asymmetric algorithms, and are often referred to as "public-key algorithms", as one of the keys typically is publicly known. Asymmetric algorithms have several interesting properties and can be used to produce digital signatures for authentication purposes and integrity checks. The major drawback of asymmetric algorithms is their speed; typical implementations may be a thousand times slower than symmetric algorithms. The keys are also considerably larger than keys for symmetric algorithms.

Asymmetric algorithms rely on mathematical problems that are generally considered "hard". Several such problems have baffled mathematicians for centuries and are currently considered very hard to solve. Unfortunately, nobody has been able to prove that they are hard, which means that most asymmetric algorithms are vulnerable to mathematical breakthroughs.

Modular arithmetic is one of the main building blocks of asymmetric algorithms. Calculating discrete logarithms and square roots mod n is hard, whereas raising to a power mod n can be implemented efficiently in binary arithmetic. Factoring large numbers is also time consuming, especially when suitable primes are chosen to generate the large number. If you study the literature, you will find that primes and modular arithmetic are major concerns when designing algorithms.
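
The asymmetry between the easy and hard directions can be demonstrated even at toy scale. The Python sketch below is textbook RSA with tiny primes; real keys are built from primes hundreds of digits long, and real systems add padding, so this illustrates only the modular arithmetic involved.

# Raising to a power mod n is fast (square-and-multiply, built into
# Python's pow), while recovering m from c without d is believed hard.
p, q = 61, 53                # toy primes; real ones are enormous
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

m = 42                       # a "message", must be smaller than n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: m = c^d mod n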

Asymmetric algorithms also often have properties that make them vulnerable to attack if they are used improperly. When you design a cryptosystem, it is not enough to ensure that the algorithm you use is strong enough; you also have to verify that the whole system is strong. For instance, the RSA algorithm is very sensitive to chosen-ciphertext attacks, and the elements of the algorithm should be chosen with care.

Several asymmetric algorithms have been designed for a particular purpose. An algorithm may only produce digital signatures, or be intended only for key exchange. The more general-purpose asymmetric algorithms can be adapted for such uses as well.

Some of the most popular asymmetric algorithms are summarized in table 2.2. The key length of asymmetric algorithms cannot directly be compared to that of symmetric algorithms, but an attempt is made in section 2.5.
Algorithm (purpose; "hard" problem) and comment

RSA (encryption, signatures; factoring large numbers)
Named after its inventors - Ron Rivest, Adi Shamir and Leonard Adleman - RSA is perhaps the most popular of the asymmetric algorithms. It has been extensively analyzed, which suggests that it is quite secure, but this has not been proved.

ElGamal (encryption, signatures; calculating discrete logarithms)
ElGamal was actually designed for digital signatures, but can also be used for encryption. The encryption algorithm is in fact the same as in the Diffie-Hellman key exchange.

DSA (signatures; calculating discrete logarithms)
The Digital Signature Algorithm is part of the U.S. Digital Signature Standard. The basic algorithm is not bad, but several problems, including a subliminal channel, have been identified. The DSS has also met opposition, as RSA can be considered a de facto standard.

Diffie-Hellman (key exchange; calculating discrete logarithms)
This algorithm is interesting in that it is not intended for encryption at all. Instead it provides a secure manner in which two parties can exchange keys; recall that key exchange is one of the basic problems with symmetric algorithms. The algorithm can also be extended to more than two parties. A sketch of the exchange is given below the table.

Table 2.2 Asymmetric algorithms
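
The following Python sketch shows the two-party Diffie-Hellman exchange over a small prime field. The 64-bit modulus is far too small for real use, and the exchange as written is unauthenticated, so it would be open to a man-in-the-middle attack; the point is only the arithmetic.

import secrets

p = 2**64 - 59               # a toy prime; real groups use 1024+ bits
g = 5                        # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)             # Alice transmits g^a mod p
B = pow(g, b, p)             # Bob transmits g^b mod p

# Both sides compute g^(ab) mod p; an eavesdropper sees only A and B.
assert pow(B, a, p) == pow(A, b, p)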

2.4 Hybrid ciphers

Symmetric and asymmetric algorithms are often combined to form hybrid ciphers. Typically an asymmetric algorithm is used to securely transfer a symmetric key to the correct recipient and to provide authentication and integrity. A much faster symmetric algorithm is then used to encrypt the actual message.

Designing a hybrid cipher requires more skill than using the standard algorithms directly, but the result is definitely more flexible and easier to use than ciphers relying only on symmetric or asymmetric algorithms. The very popular cryptographic program "Pretty Good Privacy - PGP" [PGP] uses a hybrid of RSA and IDEA with excellent results. The main drawback of a hybrid cipher is that it relies on the strength of two different algorithms: if either algorithm is broken, the whole hybrid scheme can be attacked as well.
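
A minimal hybrid scheme can be sketched by combining the two toy pieces shown earlier: the slow asymmetric operation wraps a fresh session key, and the fast symmetric cipher encrypts the bulk data. Both halves are the earlier toy constructions and are, of course, not secure.

import hashlib, secrets

n, e, d = 3233, 17, 2753     # the toy RSA key pair from section 2.3

def stream_xor(key: bytes, data: bytes) -> bytes:
    # The toy hash-counter stream cipher; XOR both encrypts and decrypts.
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.md5(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Sender: wrap a fresh session key asymmetrically, then encrypt the
# bulk message symmetrically.
session_key = secrets.randbelow(n - 1) + 1
wrapped = pow(session_key, e, n)
ciphertext = stream_xor(str(session_key).encode(), b"the actual message")

# Recipient: unwrap the session key, then decrypt the message.
recovered = pow(wrapped, d, n)
assert stream_xor(str(recovered).encode(), ciphertext) == b"the actual message"

Note how only the short session key ever passes through the expensive asymmetric operation; the message itself, however long, is handled by the fast symmetric half.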

2.5 Cipher strength

Once you have found a cryptographic algorithm that you consider reasonably strong, you must consider its key length. If the keys are too short, the cipher can be broken with a brute-force attack, i.e. an exhaustive search of the keyspace. Some algorithms are more susceptible to this type of attack than others, but the difference is negligible compared to the impact of key length on the strength of a cipher. The difficulty of a brute-force attack grows exponentially with the number of key bits; adding ten bits to the key length increases the number of keys by a factor of 2^10 = 1024.
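
A few lines of Python show how quickly exhaustive search diverges. The trial rate below is a deliberately generous assumption of a trillion keys per second, far beyond the hardware of the mid-1990s.

rate = 10**12                          # assumed keys tried per second
for bits in (40, 56, 90, 128, 256):
    years = 2**bits / rate / (3600 * 24 * 365)
    print(f"{bits:3d}-bit key: about {years:.3g} years to search")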

In late 1995, an ad hoc group of well-known cryptographers and computer scientists estimated the minimum secure key length for symmetric ciphers. They published their estimates in [Cryptographers], a paper that everyone using cryptography should read. We cite some of the statements from the paper, as they are quite direct and to the point.

Neither corporations nor individuals will entrust their private business or personal data to computer networks unless they can assure their information's security.

This is probably correct for some kinds of information, but in our view experience has shown that corporations' and individuals' views of what is "secure" are often severely misguided. The market is full of cryptographic products that use bad algorithms, too short keys, or sometimes both.

It is a property of computer encryption that modest increases in computational cost can produce vast increases in security. Encrypting information very securely (e.g., with 128-bit keys) typically requires little more computing than encrypting it weakly (e.g., with 40-bit keys).

If you are using cryptography to protect information, there is no reason not to use the strongest cryptography you can afford. Saving a few bits of key length gives you very little gain in efficiency, but may drastically reduce the strength of the encryption. On the other hand, increasing the key length just for the sake of long keys is not always necessary. A brute-force attack on a 256-bit key is infeasible for the foreseeable future; using keys longer than 256 bits serves only to counteract possible weaknesses in the algorithm itself.

The paper shows that a key length of 40 bits is totally inadequate and that 56-bit DES is on the verge of becoming too weak.

Bearing in mind that the additional computational costs of stronger encryption are modest, we strongly recommend a minimum key-length of 90 bits for symmetric cryptosystems.

This statement could easily be interpreted as "90 bits is enough". We would rather interpret it as "use as many bits as possible, but never use fewer than 90", which is probably the intended interpretation. IDEA uses 128 bits, which should be enough for almost any use, and Blowfish can be used with key sizes up to 448 bits if you want to. With a key size of 256 bits, you would be safe even if some cryptanalytic breakthrough halved the effective key length, and that is highly unlikely.

Other factors that you have to take into account when you are selecting a cryptosystem are the value and lifetime of the information you are about to protect. If the cost of breaking the encryption far outweighs the possible gain from it, it is highly unlikely that anyone will even try. If, however, the information you are protecting is valuable or will have to be protected for a very long time, you should definitely use the strongest cryptography possible.

Table 2.4, from [Crypto], compares symmetric and asymmetric key lengths. When reading the table, keep in mind that asymmetric keys usually remain in use much longer than symmetric keys. You should choose longer asymmetric keys to be on the safe side, but the higher computational requirements may restrict you to smaller sizes. The values cannot actually be compared directly, so the numbers are based on several assumptions.
Symmetric   Asymmetric
56          384
64          512
80          768
112         1792
128         2304
Table 2.4 Comparison of symmetric and asymmetric key lengths (in bits)

However strong your cipher, you must always keep in mind that the cryptographic algorithm is only a part of a larger system. The system is never stronger than its weakest link. We won't go into details of why cryptosystems fail, but for the interested reader we strongly recommend [WCF]. To quote the abstract:

It turns out that the threat model commonly used by cryptosystem designers was wrong: most frauds were not caused by cryptanalysis or other technical attacks, but by implementation errors and management failures.

3. Network cryptography applications

Traffic on a network is often modelled according to the seven-layer ISO OSI reference model. When you consider adding cryptography to this model, you will find that there are several ways to do it. Figure 3.1 shows in boldface the layers that are best suited for Internet encryption. Current products are often applications, but some of them use techniques that could actually be considered network elements. Below we present how cryptography can be added to networks and consider the advantages and disadvantages of the different approaches.


Figure 3.1 The ISO OSI reference model

3.1 Application layer

Applications using standard internetworking functions can of course encrypt the traffic if they wish. This can be implemented in any number of ways, so we won't even attempt to give a complete description of application layer encryption. Instead we present some examples that we hope represent typical encrypting applications.

The main advantage of application layer encryption is that the encryption functionality doesn't even need to be network dependent. The encrypted data can be transferred as files, in email or using any other similar medium. On the downside every application may implement the encryption as it wishes, and no other application may be able to read the encrypted data. There is no key infrastructure available for applications, but you will see that this affects almost all levels in the OSI model.

Pretty Good Privacy - PGP

PGP is an example of application layer encryption [PGP]. It doesn't actually have a direct network interface, but is instead used to encrypt and decrypt email. Similarly, it can be used to encrypt arbitrary files, which can then be sent over the network. It uses a hybrid cryptosystem consisting of RSA and IDEA, but also permits encrypting files with plain IDEA.

It would be easy to use PGP to implement a secure encrypted network connection, but the throughput could be quite poor. This is because every packet would have to be encrypted separately, which would make communication using small packets very inefficient.

F-Secure Commerce

F-Secure Commerce [Commerce] is a less typical example. It uses the SSH transport layer protocol to provide an application layer service to its user. When you start Commerce, it creates a local network service point (socket) on your computer. When you connect to this service point with an application, Commerce forwards your connection over a secure encrypted channel.

Most protocols written on top of TCP/IP use only a single network service point and rely on the network to identify the remote host, which means that they can be forwarded over a secure Commerce connection. This forwarding is almost transparent to the user of the Commerce client; the user only has to tell the application to use the Commerce service point instead of the regular one. This creates interesting possibilities, as almost any application can suddenly be used securely over an insecure network.
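
The service-point idea can be illustrated with a bare-bones local port forwarder in Python. An application connects to the local socket and the forwarder relays the byte stream to a remote host; a product like Commerce would encrypt and authenticate the outgoing leg, which this sketch deliberately omits. The addresses are placeholders.

import socket, threading

LOCAL = ("127.0.0.1", 8080)           # local service point (placeholder)
REMOTE = ("remote.example.com", 80)   # real server (placeholder)

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the connection closes.
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

listener = socket.socket()
listener.bind(LOCAL)
listener.listen()
while True:
    client, _ = listener.accept()
    upstream = socket.create_connection(REMOTE)
    # One thread per direction keeps the relay full-duplex.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()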

3.2 Transport layer

The transport layer is perhaps the most intuitive place for network traffic encryption. It provides an end-to-end connection that can be encrypted and offers a clearly distinct interface to the upper layers. Once a secure transport layer has been implemented, any application can use it almost transparently. The transport layer security module can contain the necessary key management procedures so that individual applications do not have to know anything about them.

Despite its significant advantages, the transport layer has some disadvantages as well. It is a higher layer, and is usually implemented in software. This reduces its capacity, and since there are several different implementations, programming errors become more probable. From a cryptographic point of view, it is also obvious that the transport layer cannot protect against traffic analysis, as the addressing information is added by a lower layer.

There are at least two good examples of transport layer encryption, SSH [SSH] and SSL [SSL]. Both were born out of the need for reliable authentication and encrypted communication, but under quite different circumstances. Both are built on top of the existing TCP protocol, and both are also layered internally. PCT [PCTV2], in both its old and new versions, has much in common with SSL, including the Fortezza key exchange algorithm.

Encrypting transport layer protocols are typically layered further internally. The lowest layer handles encryption, authentication and integrity checks; the layers above handle handshake operations such as algorithm negotiation, key exchange and authentication. Often the higher layer protocols are transparent to the application that uses the transport layer.

SSH

The SSH transport layer protocol was developed by Tatu Ylönen to solve what he saw as a major problem with the Internet: insecure remote access. Initially "ssh" stood for "secure shell", a replacement for the Unix "rsh" ("remote shell"), but now the acronym is used as the name of the protocol. The protocol was developed in Finland, which has proven to be a major advantage, as Finnish law does not restrict the use or export of strong cryptography.

The ssh Unix software provides strong authentication and strong encryption, and actually makes some basic tasks even easier than the old rsh software. Ssh rapidly gained in popularity, and has been ported to numerous platforms with the help of programmers who also saw the need for secure connections. Today it is a de-facto standard for secure remote connections between Unix systems and a client for Windows systems is available as a commercial product.

The first SSH protocol has some minor shortcomings, and will soon be replaced by SSH v2.0. The new protocol has been designed to accommodate future public key infrastructures and to increase the security of an already very good protocol.

As the SSH protocol was developed in a country that does not restrict the use and export of cryptographic algorithms, it uses the strongest encryption and authentication methods available. The session key is 256 bits long, although most algorithms use only a part of it. Extra effort has been put into designing a reliable random number generator, something that is often overlooked. The cryptographic algorithms used in SSH are summarized in table 3.1.

Algorithm        Comment
IDEA             128 bits
3DES             168 bits
DES              56 bits (weak)
ARCFOUR          128 bits
Blowfish         128 bits
MD5              Used to calculate the message authentication code (MAC); see the sketch below the table
SHA              Alternative MAC algorithm
RSA              The primary key exchange algorithm
Diffie-Hellman   Reserved for key exchange, but not yet specified
Table 3.1 SSH algorithms
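
As an illustration of how a keyed hash yields a message authentication code, the Python standard library's HMAC construction is shown below with MD5, one of the digest algorithms in the table. The exact MAC construction used by the SSH protocol itself differs; this only shows the principle of keyed integrity checking.

import hmac, hashlib

mac_key = b"integrity key negotiated during the handshake"
packet = b"payload of one transport-layer packet"

# The sender appends the keyed digest to the packet ...
tag = hmac.new(mac_key, packet, hashlib.md5).digest()

# ... and the receiver recomputes it; a mismatch means tampering.
expected = hmac.new(mac_key, packet, hashlib.md5).digest()
assert hmac.compare_digest(tag, expected)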

SSL

The Secure Sockets Layer protocol was developed by Netscape Communications to provide secure access to World Wide Web (WWW) sites. The design in no way limits its use to the Web, but currently it has few other applications. It is by far the most popular way to provide secure transactions on the Web, especially within the U.S.A. Export restrictions force Netscape to export a version that is severely crippled.

The SSL specification includes three proprietary algorithms, RC2, RC4 and Fortezza. This contradicts the principle that the security should rely only on the key, which makes us a bit suspicious. If the designers did not follow that principle, what other principles have they seen fit to discard?

The export version of the SSL protocol uses 40-bit keys, which is ridiculously short, as the [Cryptographers] paper showed. The encryption can be broken by a university student in a week [SSLBREAK], and in less than two days by a group of people. Nevertheless, the export version is advertised as being secure for electronic transactions over the Internet. We feel that this is misleading and can have unexpected repercussions.

In addition, weaknesses have been found in the random number generator of at least one implementation, making it possible to break the cipher in a few hours. The generator was promptly fixed, but this shows yet another way in which a cryptosystem can be far weaker than its cipher and key length suggest.

We are convinced that the export version of SSL is useless, but the US version may still be quite good. The use of proprietary algorithms nevertheless remains a concern. The cryptographic algorithms used in SSL are summarized in table 3.2. Algorithms available in the export version are marked with an asterisk.

Algorithm        Comment                                                   Exportable
IDEA             128 bits
3DES             168 bits
DES              56 bits (weak)
DES              40 bits, very weak                                        *
RC4              128 bits, proprietary
RC4              40 bits, proprietary, very weak                           *
RC2              40 bits, very weak and broken                             *
MD5              Used to calculate the message authentication code (MAC)  *
SHA              Alternative MAC algorithm                                 *
RSA              The primary key exchange algorithm                        *
Diffie-Hellman   Key exchange                                              *
Fortezza         A proprietary asymmetric algorithm on a PCMCIA card
Table 3.2 SSL algorithms

3.3 Network layer

Network layer encryption can be made totally transparent to applications, as the encryption can be done on a link-by-link basis. This also makes key management easier, as only one key or key pair is required per link. If the underlying layers use a synchronous transmission scheme, it is also possible to hide the traffic from analysis by using a cipher in cipher feedback (CFB) mode to generate a continuous, seemingly random bit stream. Network layer encryption can easily be implemented in hardware, which further increases its efficiency.
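
The following Python sketch shows how CFB mode turns a block primitive into a stream: each keystream block is the encryption of the previous ciphertext block, so the output never settles into a repeating pattern. A keyed hash stands in here for a real block cipher; note that CFB needs only the encryption direction of the underlying primitive.

import hashlib

def block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 16-byte "block cipher"; a real system would use e.g. IDEA.
    return hashlib.md5(key + block).digest()

def cfb_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, feedback = b"", iv
    for i in range(0, len(plaintext), 16):
        ks = block_encrypt(key, feedback)
        chunk = bytes(p ^ k for p, k in zip(plaintext[i:i+16], ks))
        out += chunk
        feedback = chunk.ljust(16, b"\x00")   # ciphertext feeds back
    return out

def cfb_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, feedback = b"", iv
    for i in range(0, len(ciphertext), 16):
        ks = block_encrypt(key, feedback)
        chunk = ciphertext[i:i+16]
        out += bytes(c ^ k for c, k in zip(chunk, ks))
        feedback = chunk.ljust(16, b"\x00")   # same feedback on both sides
    return out

iv = b"\x00" * 16
assert cfb_decrypt(b"k", iv, cfb_encrypt(b"k", iv, b"some traffic")) == b"some traffic"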

All of this comes at a cost. The encryption is no longer necessarily end-to-end, as the packets are decrypted at every intermediate node in the network. In small networks this is perhaps of no concern, but on the Internet it is a major problem. There is no guarantee that the node will forward the packet over a link that is encrypted as well, so the security provided by encryption can be broken by just one insecure link. Another problem is that typically all traffic is encrypted, which may be unnecessary. Some packets may only need cryptographic integrity, not confidentiality, not to mention that packets that have several intended recipients may need to be transmitted once for every recipient.

Security Architecture for the Internet Protocol

The Security Architecture for the Internet Protocol, usually known as IPSEC [IPSEC], presents a solution for network layer encryption and authentication. To avoid problems with governments that do not allow encryption, IPSEC clearly separates authentication using Authentication Headers from encryption using Encapsulating Security Payload. The two methods can be used separately or together, according to the need of the user.

In the IPSEC model, the security features can be added to all the links that are considered insecure. This means that a connection may be unencrypted and unprotected in an internal "trusted" network, whereas all external connections are encrypted by a security gateway. Separate features may also be used on different legs of the connection, e.g. so that all legs are authenticated, but only some are encrypted. If necessary, it is even possible to do end-to-end encryption between two distinct parties with a key unique to that connection. This makes IPSEC quite flexible.

You should note that the IPSEC specification does not explicitly define which algorithms should be used, but makes it possible to choose algorithms according to the situation. This means that IPSEC is as strong or as weak as the algorithms and keys chosen. Like most current protocols, the original IPSEC specification leaves the question of key exchange open.

3.4 Other security measures

Implementing encryption and authentication for a network requires more than just implementing the protocols themselves. The cryptosystem that results is a very complex entity, and every single part has to be implemented properly [WCF].

The communicating parties must be reliably authenticated to avoid man-in-the-middle attacks and unauthorized access. The cryptographic algorithms must be chosen with care, so that the correct balance between performance and strong cryptography is achieved. The keys must be generated using a good random number generator, and must be stored securely. And above all, the system should not be too complicated to use; otherwise users will find ways around it.
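
Generating a key from a cryptographically strong random source is a one-liner in modern Python; the secrets module draws from the operating system's entropy pool rather than from a predictable, time-seeded generator.

import secrets

session_key = secrets.token_bytes(32)   # 256-bit symmetric session key
print(session_key.hex())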

Key management is not as easy as it may sound. The Internet is a very dynamic environment, where hosts may change rapidly and registers often are outdated. Regardless of this, proper authentication requires that all public keys can be verified in some manner. This is currently one of the main areas of development in the Internet.

4. Comparison and Speculation

Application layer cryptography is a good solution to many problems, and will definitely remain so for a long time. You may want to encrypt or authenticate specific pieces of application data permanently, not just for the duration of the network session, and this cannot be achieved by the lower layers. Currently some network applications actually perform session cryptography on the application layer, but we believe that this is only a temporary situation. When network encryption becomes established, it will be the only session encryption used.

The practical differences between transport and network layer security are harder to pinpoint. At first glance it seems that end-to-end encryption must be implemented at the transport layer, but IPSEC shows that this is not necessarily the case. The flexibility of the transport layer weighs against the more efficient implementation of network layer encryption, and neither comes out as a clear winner. Time will tell which will become the dominant way to encrypt network sessions, or whether a hybrid of the two will prevail. We think a hybrid system is very probable, where individual links are encrypted according to the "hostility" of the environment and sessions are authenticated and encrypted independently of the link encryption.

The strength of the cryptosystem stands out as a very important factor, perhaps the most important. Implementing cryptography properly is very hard, which leads us to believe that more straightforward solutions will be popular. Complex systems introduce many possibilities for errors, and verifying that they behave as expected is extremely hard. One such system, Kerberos, has been under development for several years and has reached some popularity. We feel that its complexity is the major reason why it has never become widespread [Kerberos].

Some of the most distinguished cryptographers in the world have publicly announced that symmetric cryptography with keys shorter than 90 bits cannot be considered secure [Cryptographers]. It is a generally accepted principle that algorithms should only be considered strong when they have been publicly reviewed. Regardless of this, some companies continue to offer products with proprietary algorithms and keys as short as 40 bits, some with even shorter effective key lengths.

Comparing the current network encryption protocols and applications is relatively easy. The protocols themselves seem to have been designed with care, although further cryptanalysis may yet reveal weaknesses in them. Currently the weaknesses lie not in the protocols but in the other components of the cryptosystems. Some protocols use publicly reviewed, strong algorithms with keys as long as possible, whereas others use proprietary algorithms and frighteningly short keys. When you make a comparison, first drop the candidates with obvious weaknesses; only then should you start analyzing the minor details.

The reason for some of the horribly weak cryptosystems available today is the United States' export restrictions. We cannot recommend using any cryptographic product that can legally be exported from the US for any cryptographic purpose. This includes authentication, as even some of the authentication keys seem to be too short.

5. Conclusions

Electronic Commerce requires that the transactions remain confidential and cannot be modified or repudiated. The current network encryption solutions provide secure authenticated channels, but in practice authentication of the actual transactions will have to be handled separately. This is not a problem, as separate application layer protocols exist for authenticated electronic transactions.

We are frightened by some of the current cryptographic applications. The export version of SSL is actually used for secure transactions by people who have been misled to believe that it is secure. Even worse, some of the companies offering these "secure" services may themselves believe they are secure. This seems to indicate that public awareness of cryptography and its applications needs to improve.

Once you weed out the weak solutions, you are left with some very promising protocols. They have many features in common, and provide both strong encryption and strong authentication. The implementations may still have some flaws, but already you can clearly see that an Internet infrastructure of encrypted connections is forming. Support for cryptographic protocols is rapidly increasing, and with that the awareness of how insecure the earlier connections have been.

In our opinion the Internet already has an established base of cryptographic protocols. You should never again have to make an unencrypted electronic transaction, and if you are faced with that choice, you can require the service provider to offer you a secure alternative. Ask for the strongest possible encryption and authentication, and do not settle for anything less.

Above all, remember that a cryptosystem is never stronger than its weakest link. Find that link, determine how strong it really is, and decide whether it is strong enough.


References

Please note that Internet Drafts are working documents. They may be updated, replaced, or obsoleted by other documents at any time. Refer to http://www.nordu.net/ftp/internet-drafts/1id-abstracts.txt for information on the status of any of the drafts.
[Crypto]
Schneier, B., Applied Cryptography Second Edition: protocols, algorithms, and source code in C, John Wiley & Sons, 1996.
[Cryptographers]
Blaze, M., Diffie, W., Rivest, R.L., Schneier, B., Shimomura, T., Thompson, E., Wiener, M., "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security", January 1996.
[WCF]
Anderson, Ross J., "Why Cryptosystems Fail", Communications of the ACM, November, 1994.
[SSH]
Ylönen, T., "SSH Transport Layer Protocol", IETF draft, June 1996.
[SSL]
Freier, A., Karlton, P., Kocher, P., "The SSL Protocol Version 3.0", IETF draft, March 1996.
[PCT]
Benaloh, J., Lampson, B., Simon, D., Spies, T., Yee, B., Microsoft Corp., "The Private Communication Technology Protocol", IETF draft, October 1995.
[PCTV2]
Simon, D., Microsoft Corp., "The Private Communication Technology Protocol, Version 2", IETF draft, April 1996.
[S-HTTP]
Rescorla, E., Schiffman, A., "The Secure HyperText Transfer Protocol", IETF draft, May 1996.
[IPSEC]
Atkinson, R., "Security Architecture for the Internet Protocol", RFC 1825, August 1995.
[IPAH]
Atkinson, R., "IP Authentication Header", RFC 1826, August 1995.
[IPESP]
Atkinson, R., "IP Encapsulating Security Payload (ESP)", RFC 1827, August 1995.
[Kerberos]
Kohl, J., Neuman, C., "The Kerberos Network Authentication Service (V5)", RFC 1510, September 1993.
[PGP]
The International PGP Home Page, http://www.ifi.uio.no/pgp/
[Commerce]
Data Fellows Ltd., "F-Secure Commerce White Paper", http://www.Europe.DataFellows.com/f-secure/fsecom.htm, May 1996.
[SSLBREAK]
Doligez, D. "I broke Hal's SSL challenge", http://pauillac.inria.fr/~doligez/ssl/, August 1995.


Glossary

Asymmetric algorithm - Also known as a public key algorithm; uses different keys for encryption and decryption, one of which is usually public
Authentication - Reliable identification of something, e.g. a user or a host, and verification of its privileges
Authorization - Reliable verification of privileges, perhaps without identification
Cipher - A cryptographic algorithm or set of algorithms that allows encryption and decryption
Confidentiality - Protecting something from disclosure
Cryptanalysis - The art of breaking encrypted messages
Cryptographic algorithm - An algorithm that transforms its input using a key, so that the original input is almost impossible to recover without the correct key
Cryptography - The art of encrypting messages
Cryptology - The branch of mathematics encompassing both cryptography and cryptanalysis
Digital signature - A piece of data calculated from a message in a way that verifies both the message's integrity and the identity of the person who signed the message
Integrity - Verifying that something is unaltered, intact
Key (cryptographic) - A large, apparently random number that is used with a cryptographic algorithm to encrypt or decrypt data
Key escrow - A scheme where copies of cryptographic keys are handed over to a trusted party or parties so that they can later be retrieved
Key recovery - A scheme where copies of cryptographic keys are stored so that only their rightful owner can reclaim them if the original keys are lost
Nonrepudiation - Nondeniability; the capability to prove that something actually happened
Public key algorithm - See asymmetric algorithm
Subliminal channel - A covert channel that allows information transfer to go undetected
Symmetric algorithm - An algorithm that uses the same key for encryption and decryption, or two keys that can easily be derived from each other

A PGP signature is available for this document. Please notify me if it is incorrect.
Camillo Särs <ged@iki.fi>
$Revision: 1.45 $ $Date: 1996-12-08 18:21:17+02 $