VHDL Code For The Data Encryption Standard on FPGA Board
A Project Report
BACHELOR OF TECHNOLOGY
Anup Vashisth
(Roll No. – EIC-3047-2k15)
Session: 2015-19
This is to certify that Mr Anup Vashisth has carried out the project work titled
“Implementing Data Encryption Standard on FPGA BOARD”
from 08/06/2018 for the award of the Bachelor of Technology
(Electronics and Instrumentation Control Engineering), during the
internship semester from SAG, DRDO under my supervision.
This report embodies the results of original work and studies carried out by the
student himself, and the contents of the report do not form the basis for
the award of any other degree to the candidate or to anybody else.
ACKNOWLEDGEMENT
Every project, big or small, is successful largely due to the effort of a number of wonderful people who
have always given their valuable advice or lent a helping hand. I sincerely appreciate the inspiration,
support and guidance of all those people who have been instrumental in making this project a success.
I would like to extend my sincere and heartfelt obligation towards all the personages who have helped
me in this endeavor. Without their active guidance, help, cooperation and encouragement, I would not
have made headway in the project. The internship opportunity with Scientific Analysis Group (SAG),
DRDO, Metcalfe House, Delhi was a great chance for learning and professional development.
I am ineffably indebted to Mr. S.P. Mishra, Scientist ‘G’ for conscientious guidance and encouragement
to accomplish this project. I would also like to thank my project mentor, Ms. Purnima Hansda, Scientist
‘D’ for her valuable and helpful feedback and continued support throughout this project.
I extend my gratitude to the Electronics Department, YMCA University of Science and Technology, and the
respected teachers for giving me an opportunity to work on a research project like this. I perceive this
opportunity as a big milestone in my career development.
ABOUT THE ORGANIZATION
DRDO was formed in 1958 from the amalgamation of the then already functioning Technical
Development Establishment (TDEs) of the Indian Army and the Directorate of Technical Development &
Production (DTDP) with the Defence Science Organization (DSO). DRDO was then a small organization with
10 establishments or laboratories. Over the years, it has grown multi-directionally in terms of the variety of
subject disciplines, number of laboratories and achievements. DRDO is a network of more than 50 laboratories
which are deeply engaged in developing defence technologies covering various disciplines, like aeronautics,
armaments, electronics, combat vehicles, engineering systems, instrumentation, missiles, advanced computing
and simulation, special materials, naval systems, life sciences, training, information systems and agriculture.
Vision
Make India prosperous by establishing a world-class science and technology base and provide our Defence
Services a decisive edge by equipping them with internationally competitive systems and solutions.
Mission
Design, develop and lead to production state-of-the-art sensors, weapon systems, platforms and allied
equipment for our Defence Services.
Provide technological solutions to the Defence Services to optimize combat effectiveness and to
promote well-being of the troops.
Develop infrastructure and committed quality manpower and build strong technology base.
Core Competence: Dept. of Defence Research and Development (R&D) is working for indigenous
development of weapons, sensors & platforms required by the three wings of the Armed Forces. To fulfil this
mandate, Dept. of Defence Research and Development (R&D), is closely working with academic institutions,
Research and Development (R&D) Centers and production agencies of Science and Technology (S&T)
Ministries/Depts. in Public & Civil Sector including Defence Public Sector Undertakings & Ordnance
Factories.
Training and Development: DRDO has a dynamic training and development policy which is
executed through the Continuing Educational Programmes (CEP) for all cadre personnel viz DRDS,
DRTC, and Admin & Allied. At the entry level in DRDS, the newly recruited scientists undergo a
16 weeks Induction Course at Institute of Armament Technology (IAT), Pune. Under the Research and
Training (R&T) scheme the scientists are sponsored for ME/M Tech programmes at IITs/IISc and
reputed universities. Fees are also reimbursed by DRDO where scientists undergo a Ph.D.
programme. In addition to this, the Organization, through its two premier institutes, namely the Institute of
Technology Management (ITM) and the Institute of Armament Technology (IAT), a deemed university,
offers courses for scientists and the Armed Forces in the areas of Technology Management, R&D
Management and Armament. Recently, a training Centre at Jodhpur has been established to meet the
training needs of the Admin & Allied cadre. In order to attract future talent, DRDO has Junior
Research Fellow (JRF), Senior Research Fellow (SRF) and Research Associate (RA) schemes for
young and dynamic personnel interested in Defence Research and Development.
ABOUT SCIENTIFIC ANALYSIS GROUP (SAG)
Scientific Analysis Group (SAG) was established in 1963 for evolving new scientific methods
for design and analysis of communication systems. It was situated in Central Secretariat
Complex and consisted of 12 scientists. This group was placed under the direct control of the Chief
Controller (R&D) in 1973 and became a full-fledged Directorate in R&D Headquarters.
In 1976, SAG started undertaking R&D projects on mathematical, communication and speech
analysis. Due to the increasing responsibilities of SAG, the manpower component of SAG
underwent expansion and additional accommodation was allotted to SAG at Metcalfe House
Complex. SAG was further entrusted with R&D work in the field of electronics. Work related to
evaluating communication equipment to be introduced in Services was taken up during 1980.
The manpower structure was again revised and the electronic facilities were enhanced. Presently
SAG is housed in two independent buildings with a total manpower of 140 scientists/technical
and other staff.
Facilities Available
Simulation Laboratory
Speech Laboratory
Hardware Laboratory
System Analysis Support Laboratory
16 Node Distributed Computational Facility
Excellent Tech Library with around 10000 books and National/International Journals
256 Dual Processor Nodes Distributed Computing Infrastructure
ABSTRACT
The Data Encryption Standard (DES) is one of the oldest symmetric-key (same key used to
encrypt/decrypt) cryptosystems. The DES was officially standardized in 1976, becoming the first
encryption system to meet the National Bureau of Standards (NBS) criteria for an encryption system
(Schneier, 1996), and thus the first standardized encryption system. Thirty-four years later, development in
cryptography has been enormous and positive changes have been made. The Advanced Encryption
Standard (AES) (NIST, 2001) has been developed, new directions in cryptography have been followed;
that is asymmetric cryptography (Diffie and Hellman, 1976) (encryption key different from decryption
key), even the NBS is now called National Institute of Standards and Technology (NIST). The DES is
still used today; for example, it is implemented in the Java cryptography library and is still used in the
financial industry. It is also implemented in hardware, for example (Preissig, 2000) is an implementation
of the DES on hardware by Texas Instruments. The DES has received its fair share of criticism and has
been adapted in several ways to meet current industry standards. This report is based on investigative
research on the DES. In the following sections, the basic working principle of the DES, the scrutiny,
adaptation and shortcomings of the DES, and the Triple Data Encryption Algorithm (TDEA), an extension of
the DEA, will be described. Finally, based on the results of the research, it will be determined whether the
DES is a secure enough encryption system to be used to keep our confidential data safe.
The Data Encryption Algorithm is a symmetric block cipher that encrypts data in 64-bit blocks. It has a
key length of 56 bits, which is expressed as a 64-bit number; the last bit in every byte acts as a parity check
for the previous 7 bits. This is used for error detection. The three main operations of the DES are:
XOR, permutation and substitution. According to Claude Shannon, encryption in symmetric ciphers
comprises confusion and diffusion. The aim of confusion is to make the relationship between the plain
text and cipher text complex, while diffusion is aimed at spreading the change in the cipher text to hide
any statistical feature. In the DES, substitution is used to achieve confusion and permutation to achieve diffusion.
Data encryption is also known as the “Forward Cipher Operation” and data decryption as the “Inverse Cipher
Operation”. In the forward cipher operation, each 64-bit block of data (plaintext) is transformed using several
mathematical steps (FIPS, 1999) over 16 rounds. The inverse cipher transformation uses the same
mathematical steps as the encryption algorithm, but we must make sure the same block of key bits used
during each round of encryption is used during decryption. That is, where R16 L16 is the input for
decryption, K16 is used for that iteration, K15 for R15 L15, and so on.
Contents
Acknowledgement
Abstract
1. Introduction
1.2. Objective
1.3. Applications
2. Literature Review
2.1. Cryptography
2.2. RSA Algorithm
3. Requirement Specifications
4. Algorithms
5. Results
CHAPTER 1
INTRODUCTION
In the late 1960s, IBM set up a research project in computer cryptography led by Horst Feistel.
The project concluded in 1971 with the development of an algorithm with the designation
LUCIFER [FEIS73], which was sold to Lloyd’s of London for use in a cash-dispensing system,
also developed by IBM. LUCIFER is a Feistel block cipher that operates on blocks of 64 bits,
using a key size of 128 bits. Because of the promising results produced by the LUCIFER project,
IBM embarked on an effort to develop a marketable commercial encryption product that ideally
could be implemented on a single chip. The effort was headed by Walter Tuchman and Carl
Meyer, and it involved not only IBM researchers but also outside consultants and technical advice
from the National Security Agency (NSA). The outcome of this effort was a refined version of
LUCIFER that was more resistant to cryptanalysis but that had a reduced key size of 56 bits, in
order to fit on a single chip. In 1973, the National Bureau of Standards (NBS) issued a request for
proposals for a national cipher standard. IBM submitted the results of its Tuchman–Meyer project.
This was by far the best algorithm proposed and was adopted in 1977 as the Data Encryption
Standard. Before its adoption as a standard, the proposed DES was subjected to intense criticism,
which has not subsided to this day. Two areas drew the critics’ fire. First, the key length in IBM’s
original LUCIFER algorithm was 128 bits, but that of the proposed system was only 56 bits, an
enormous reduction in key size of 72 bits. Critics feared that this key length was too short to
withstand brute-force attacks. The second area of concern was that the design criteria for the
internal structure of DES, the S-boxes, were classified. Thus, users could not be sure that the
internal structure of DES was free of any hidden weak points that would enable NSA to decipher
messages without benefit of the key. Subsequent events, particularly the recent work on
differential cryptanalysis, seem to indicate that DES has a very strong internal structure.
Furthermore, according to IBM participants, the only changes that were made to the proposal were
changes to the S-boxes, suggested by NSA, that removed vulnerabilities identified in the course of
the evaluation process. Whatever the merits of the case, DES has flourished and is widely used,
especially in financial applications.
In 1994, NIST reaffirmed DES for federal use for another five years; NIST recommended the use of
DES for applications other than the protection of classified information. In 1999, NIST issued a new
version of its standard (FIPS PUB 46-3) that indicated that DES should be used only for legacy
systems and that triple DES (which in essence involves repeating the DES algorithm three times on
the plaintext using two or three different keys to produce the ciphertext) be used. Because the
underlying encryption and decryption algorithms are the same for DES and triple DES, it remains
important to understand the DES cipher.
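The EDE (encrypt-decrypt-encrypt) keying structure of triple DES described above can be sketched as follows. Since no DES primitive is assumed here, a toy XOR "cipher" stands in for the DES block function purely to show the keying; it is not secure and is not DES:

```python
def toy_encrypt(block: int, key: int) -> int:
    return block ^ key          # stand-in for DES encryption (NOT secure)

def toy_decrypt(block: int, key: int) -> int:
    return block ^ key          # stand-in for DES decryption

def tdea_encrypt(block: int, k1: int, k2: int, k3: int) -> int:
    # Triple DES applies encrypt-decrypt-encrypt with three round-key sets.
    # The two-key variant sets k3 = k1; with k1 == k2 == k3 the scheme
    # degenerates to single encryption (backward compatibility).
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

def tdea_decrypt(block: int, k1: int, k2: int, k3: int) -> int:
    # Decryption runs the stages in the opposite order.
    return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)

p = 0x0123456789ABCDEF
c = tdea_encrypt(p, 0x11, 0x22, 0x11)   # two-key EDE: k3 = k1
assert tdea_decrypt(c, 0x11, 0x22, 0x11) == p
```

With a real DES block function substituted for the XOR stand-in, this is exactly the keying arrangement FIPS PUB 46-3 specifies.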
Advantages of DES
By using DES, an input message of 64 bits can be encrypted using a secret key of
64 bits (of which 56 bits are effective).
The cipher key is expanded into a larger key schedule, which is later used for the
round operations.
DES is hard to attack: it is very hard to crack because of the number of rounds used in encrypting a message.
DES is faster when compared to the RSA encryption algorithm.
DES has a high level of security. It is completely specified and very easy to understand. It is
adaptable to different applications. Data rates are high. DES can be validated and is
exportable.
1.2. Objective
The objective of the project is to implement DES cryptography on an FPGA for border security
forces and other secret communications, using a very small device for all the
computations, such that it is portable and energy efficient. An FPGA is used for the
cryptography because it has an array of logic blocks, I/O pads and routing channels,
making it a standalone device for communication, encryption and decryption.
The main objective of using the Data Encryption Standard is to make the transmitted messages secure
and to ensure that encryption and decryption take place in minimum time, while being
least vulnerable to any attack that may be able to decrypt the message without the key.
1.3. Applications
DES algorithm was made mandatory for all financial transactions by the U.S government
which involves electronic fund transfer.
High-speed encryption in ATMs
It is used for secure video teleconferencing
It is used in routers and remote access servers
It can be used by federal departments and agencies when they require cryptographic
protection for sensitive information.
CHAPTER 2
LITERATURE REVIEW
2.1 Cryptography
Symmetric encryption, also referred to as conventional encryption or single-key encryption, was the only
type of encryption in use prior to the development of public-key encryption in the 1970s. It remains by far
the most widely used of the two types of encryption. In a symmetric-key algorithm, both the sender and
receiver share the key. The sender uses the key to hide the message. Then, the receiver will use the same
key in the opposite way to reveal the message. For centuries, most cryptography has been symmetric; the
Advanced Encryption Standard is a widely used example.
A symmetric encryption scheme has five ingredients:
■ Plaintext: This is the original intelligible message or data that is fed into the
algorithm as input.
■ Encryption algorithm: The encryption algorithm performs various substitutions and
transformations on the plaintext.
■ Secret key: The secret key is also input to the encryption algorithm. The key is a value independent of the
plaintext and of the algorithm. The algorithm will produce a different output depending on the specific key
being used at the time. The exact substitutions and transformations performed by the algorithm depend on
the key.
■ Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the
secret key. For a given message, two different keys will produce two different ciphertexts.
■ Decryption algorithm: This is essentially the encryption algorithm run in
reverse. It takes the ciphertext and the secret key and produces the original plaintext.
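The five ingredients can be mapped onto a deliberately trivial Caesar-shift cipher (an illustrative sketch only; the lowercase-letters-only alphabet and the key value are assumptions made for the example, and this toy cipher is of course not secure):

```python
import string

ALPHA = string.ascii_lowercase

def encrypt(plaintext: str, key: int) -> str:        # encryption algorithm
    # Shift every letter forward by `key` positions, wrapping around.
    return "".join(ALPHA[(ALPHA.index(ch) + key) % 26] for ch in plaintext)

def decrypt(ciphertext: str, key: int) -> str:       # decryption algorithm: run in reverse
    return encrypt(ciphertext, -key)

plaintext = "attackatdawn"                           # plaintext (lowercase letters only)
key = 3                                              # secret key shared by sender and receiver
ciphertext = encrypt(plaintext, key)                 # ciphertext
assert decrypt(ciphertext, key) == plaintext
# Two different keys produce two different ciphertexts:
assert encrypt(plaintext, 5) != ciphertext
```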
2.1.2 Asymmetric Cryptography
Asymmetric cryptography is harder to use. Each person who wants to use it has a secret
number (a "private key") that is never shared, and a different number (a "public key") that they can
tell everyone. If someone else wants to send this person a message, they'll use the number
they've been told to hide the message. Now the message cannot be revealed,
even by the sender, but the receiver can easily reveal the message with his
secret or "private key". This way, nobody else needs to know the secret key. Those who publish
updates for their software can sign those updates to prove that the update was
made by them, so that hackers cannot make their own updates that would
cause harm. Computers can also use asymmetric ciphers to give each other the keys for a
symmetric cipher.
An asymmetric scheme involves the same basic ingredients:
■ Plaintext: This is the readable message or data that is fed into the algorithm as input.
■ Ciphertext: This is the encrypted message produced as output. It depends on the plaintext and the key. For
a given message, two different keys will produce two different ciphertexts.
■ Encryption algorithm: The encryption algorithm performs various transformations on the plaintext.
■ Decryption algorithm: This algorithm accepts the matching key and ciphertext and produces the
original plaintext.
2.1.3 Misconceptions: Symmetric vs Asymmetric
A common misconception is that public-key encryption is inherently more secure than symmetric
encryption. In fact, the security of any encryption scheme depends on the length of the key and the
computational work involved in breaking a cipher. Nor does public-key encryption mean that symmetric
encryption will be abandoned. As one of the inventors of public-key encryption has put it, "the
restriction of public-key cryptography to key management and signature applications is almost
universally accepted."
2.2. RSA ALGORITHM
The pioneering paper by Diffie and Hellman [DIFF76b] introduced a new approach to cryptography and,
in effect, challenged cryptologists to come up with a cryptographic algorithm that met the requirements
for public-key systems. A number of algorithms have been proposed for public-key cryptography. Some
of these, though initially promising, turned out to be breakable.4 One of the first successful responses to
the challenge was developed in 1977 by Ron Rivest, Adi Shamir, and Len Adleman at MIT and first
published in 1978 [RIVE78].5 The Rivest-Shamir-Adleman (RSA) scheme has since that time reigned
supreme as the most widely accepted and implemented general-purpose approach to public-key
encryption. The RSA scheme is a block cipher in which the plaintext and ciphertext are integers between
0 and n - 1 for some n. A typical size for n is 1024 bits, or 309 decimal digits. That is, n is less than
21024. We examine RSA in this section in some detail, beginning with an explanation of the algorithm.
Then we examine some of the computational and cryptanalytical implications of RSA.
1. Generate two large random (and distinct) primes p and q, each roughly the same size.
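A hedged sketch of the full RSA flow with deliberately tiny primes (real deployments use primes hundreds of digits long; the values 61, 53 and e = 17 below are textbook toy choices, not recommendations):

```python
from math import gcd

# Step 1 (toy scale): two small, distinct primes of roughly the same size.
p, q = 61, 53
n = p * q                 # modulus: plaintext/ciphertext are integers in [0, n - 1]
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent; must satisfy gcd(e, phi) == 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

m = 65                    # a plaintext block, 0 <= m < n
c = pow(m, e, n)          # encryption: c = m^e mod n
assert pow(c, d, n) == m  # decryption: m = c^d mod n recovers the plaintext
```

Python's three-argument `pow` performs modular exponentiation efficiently, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse that real implementations obtain via the extended Euclidean algorithm.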
The greatest common divisor (GCD) of two whole numbers is the largest natural number that divides
evenly into both without a remainder.
FORMAL NOTATION
In the initial setup, the values a0 and b are the two values we want to find the GCD of, a0 being the
larger of the two. The goal of each step is to find the quotient q and remainder r that make the equation
true.
The Euclidean Algorithm is a k-step iterative process that ends when the remainder is zero. (In other
words, you keep going until there’s no remainder.) The GCD will be the last non-zero remainder.
Suppose we wish to compute gcd(27, 33). First, we divide the bigger one by the smaller
one:
33 = 1 × 27 + 6
Thus gcd(33, 27) = gcd(27, 6). Repeating this trick:
27 = 4 × 6 + 3
and we see gcd(27, 6) = gcd(6, 3). Lastly,
6 = 2 × 3 + 0
Since 6 is a perfect multiple of 3, gcd(6, 3) = 3, and we have found
that gcd(33, 27) = 3.
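The iterative process above takes only a few lines of Python (the function name is an illustrative choice):

```python
def euclid_gcd(a: int, b: int) -> int:
    """Iterative Euclidean algorithm: repeatedly replace (a, b) with
    (b, a mod b) until the remainder is zero; the GCD is the last
    non-zero remainder."""
    while b != 0:
        a, b = b, a % b
    return a

# Reproduces the worked example:
#   33 = 1*27 + 6  -> gcd(27, 6)
#   27 = 4*6  + 3  -> gcd(6, 3)
#    6 = 2*3  + 0  -> gcd = 3
assert euclid_gcd(33, 27) == 3
```

Note that the order of the arguments does not matter: if a < b, the first iteration simply swaps them.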
CHAPTER 3
REQUIREMENT SPECIFICATIONS
Field Programmable Gate Arrays (FPGAs) are pre-fabricated silicon devices that can be
electrically programmed in the field to become almost any kind of digital circuit or system. For
low to medium volume production, FPGAs provide a cheaper solution and faster time to market
compared to Application Specific Integrated Circuits (ASICs), which normally require a lot of
resources in terms of time and money to obtain a first device. FPGAs, on the other hand, take less
than a minute to configure and they cost anywhere from a few hundred dollars to a few
thousand dollars. Also, for varying requirements, a portion of an FPGA can be partially
reconfigured while the rest of the FPGA is still running. Any future updates in the final product
can be easily applied by simply downloading a new application bitstream. However, the main
advantage of FPGAs, i.e. flexibility, is also the major cause of their drawback: the flexible nature of
FPGAs makes them significantly larger, slower and more power-consuming than their ASIC
counterparts.
These disadvantages arise largely because of the programmable routing interconnect of FPGAs,
which comprises almost 90% of the total area of an FPGA. But despite these disadvantages, FPGAs
present a compelling alternative for digital system implementation due to their shorter time to market
and low volume cost. Normally FPGAs comprise:
• Programmable logic blocks which implement logic functions.
• Programmable routing that connects these logic functions.
• I/O blocks that are connected to logic blocks through routing interconnect and
that make off-chip connections. A generalized example of an FPGA is shown in
Fig. 1.1 where configurable logic blocks (CLBs) are arranged in a two-
dimensional grid and are interconnected by programmable routing resources.
I/O blocks are arranged at the periphery of the grid and they are also connected
to the programmable routing interconnect. The “programmable/reconfigurable”
term in FPGAs indicates their ability to implement a new function on the chip
after its fabrication is complete. The reconfigurability/programmability of an
FPGA is based on an underlying programming technology, which can cause a
change in the behavior of a pre-fabricated chip after its fabrication.
In FPGAs, there is no processor to run software; we are the ones designing the circuit. We can
configure an FPGA to be as simple as an AND gate or as complex as a multi-core processor.
To create a design, we write in a Hardware Description Language (HDL), of which there are two main
types, Verilog and VHDL. The HDL is then synthesized and implemented, and a bitstream file is
generated (for example, with BITGEN) to configure the FPGA.
The FPGA stores the configuration in RAM; that is, the configuration is lost when there is no power.
Hence, FPGAs must be configured every time power is supplied.
FPGA Architecture
The FPGA architecture consists of three major components:
Configurable Logic Blocks
The configurable logic blocks (CLBs) implement the logic functions of a user-defined design.
Programmable Routing
The programmable routing establishes connections between logic blocks and input/output blocks to
complete a user-defined design unit.
Programmable I/O
The programmable I/O pads are used to interface the logic blocks and routing architecture to external
components. An I/O pad and its surrounding logic circuit form an I/O cell.
With advancement, the basic FPGA Architecture has developed through the addition of more specialized
programmable function blocks.
The special functional blocks like ALUs, block RAM, multiplexers, DSP-48, and microprocessors have
been added to the FPGA, due to the frequency of the need for such resources for applications.
3.1.2 ML605
Simulation
System-level testing may be performed with ISim or the ModelSim logic simulator, and such test
programs must also be written in HDL. Test benches may include simulated input
signal waveforms, or monitors which observe and verify the outputs of the device under test.
ModelSim or ISim may be used to perform behavioral simulation as well as post-route timing simulation.
Synthesis
Xilinx's patented synthesis algorithms allow designs to run up to 30% faster than competing
programs and allow greater logic density, which reduces project time and costs.
Also, due to the increasing complexity of FPGA fabric, including memory blocks and I/O blocks, more
complex synthesis algorithms were developed that separate unrelated modules into slices, reducing
post-placement errors.
IP Cores are offered by Xilinx and other third-party vendors, to implement system-level functions such
as digital signal processing (DSP), bus interfaces, networking protocols, image processing, embedded
processors, and peripherals. Xilinx has been instrumental in shifting designs from ASIC-based
implementation to FPGA-based implementation.
4.1.1 CREATING A PROJECT
Project Navigator allows you to manage your FPGA and CPLD designs using an ISE®
project, which contains all the source files and settings specific to your design. First, you
must create a project and then, add source files, and set process properties. After you
create a project, you can run processes to implement, constrain, and analyze your design.
Project Navigator provides a wizard to help you create a project as follows. Note: If you
prefer, you can create a project using the New Project dialog box instead of the New
Project Wizard. To use the New Project dialog box, deselect the Use New Project
wizard option in the ISE General page of Preferences dialog box.
To Create a Project:
1. Select File > New Project to launch the New Project Wizard.
2. In the Create New Project page, set the name, location, and project type, and click Next.
3. For EDIF or NGC/NGO projects only: In the Import EDIF/NGC Project page, select
the input and constraint file for the project, and click Next.
4. In the Project Settings page, set the device and project properties, and click Next.
5. In the Project Summary page, review the information, and click Finish to create the
project. Project Navigator creates the project file (project_name.xise) in the directory
you specified. After you add source files to the project, the files appear in the Hierarchy
pane of the Design panel. Project Navigator manages your project based on the design
properties (top-level module type, device type, synthesis tool, and language) you
selected when you created the project. It organizes all the parts of your design and keeps
track of the processes necessary to move the design from design entry through
implementation to programming the targeted Xilinx® device.
Note: For information on changing design properties, see Changing Design
Properties. You can now perform any of the following:
a. Create new source files for your project.
b. Add existing source files to your project.
c. Run processes on your source files.
d. Modify process properties.
3.5 VHDL
VHDL (VHSIC Hardware Description Language) is a hardware description language used
in electronic design automation to describe digital and mixed-signal systems such as field-programmable
gate arrays and integrated circuits. VHDL can also be used as a general-purpose parallel programming
language.
Its main features are:
♦ General Features: documentation, high-level design, simulation, synthesis, test,
automatic hardware generation.
♦ Design Hierarchy: multilevel description, partitioning.
♦ Library Support: standard packages, cell-based design.
♦ Sequential Statements: behavioral, software-like constructs.
♦ Generic Design: binding to specific libraries.
♦ Type Declaration: strongly typed language.
♦ Subprograms.
♦ Timing: delay and concurrency.
♦ Structural Specification: wiring of components.
The key advantage of VHDL, when used for systems design, is that it allows the behaviour of the
required system to be described and verified (simulated) before synthesis tools translate the design into
real hardware (gates and wires).
Another benefit is that VHDL allows the description of a concurrent system. VHDL is a dataflow
language, unlike procedural computing languages such as BASIC, C, and assembly code, which all run
sequentially, one instruction at a time.
3.10 Signals and Variables
A VHDL project is multipurpose. Being created once, a calculation block can be used in many other
projects. However, many formational and functional block parameters can be tuned (capacity
parameters, memory size, element base, block composition and interconnection structure).
A VHDL project is portable. Being created for one element base, a computing device project can be
ported to another element base, for example VLSI with various technologies.
A big advantage of VHDL compared to original Verilog is that VHDL has a full type system.
Designers can use the type system to write much more structured code (especially by declaring record
types).
CHAPTER 4
ALGORITHMS
The Feistel cipher or Feistel network is named after Horst Feistel, who developed it while working at
IBM. He and a colleague, Don Coppersmith, published a cipher called Lucifer in 1973 that was the
first public example of a cipher using a Feistel structure. Due to the benefits of the Feistel structure,
other encryption algorithms based upon the structure and upon Lucifer have been created and adopted
for common use.
Feistel Cipher is not a specific scheme of block cipher. It is a design model from which many
different block ciphers are derived. DES is just one example of a Feistel Cipher. A cryptographic
system based on Feistel cipher structure uses the same algorithm for both encryption and decryption.
Advantages of Feistel Ciphers
Feistel ciphers have two main advantages:
Structural reusability: As we discussed previously, the same structure can be used for
encryption and decryption as long as the key schedule is reversed for decryption. This is
extremely useful for hardware implementations of ciphers since all of the encryption logic does
not have to be reimplemented in reverse for decryption.
Ability to use one-way round functions: The other major advantage of Feistel ciphers is that the
round function, F, does not have to be reversible. Most ciphers require that every
transformation of the plaintext performed in encryption be reversible so that they can be
undone in decryption. Since this is not a requirement for ciphers using the Feistel structure, it
opens up new possibilities for round functions.
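Both advantages can be demonstrated with a toy Feistel network (a Python sketch for illustration; the round function, 16-bit half-block size and key values below are arbitrary assumptions, not DES's):

```python
def F(half: int, key: int) -> int:
    # Deliberately non-invertible toy round function (not DES's F):
    # squaring and shifting discard information, yet the cipher still inverts.
    return ((half * 31 + key) ** 2 >> 3) & 0xFFFF

def feistel(left: int, right: int, keys) -> tuple:
    # One pass through the network: L_i = R_{i-1}, R_i = L_{i-1} XOR F(R_{i-1}, K_i)
    for k in keys:
        left, right = right, left ^ F(right, k)
    return left, right

def encrypt(block: tuple, keys) -> tuple:
    l, r = feistel(*block, keys)
    return r, l                                   # final swap of the halves

def decrypt(block: tuple, keys) -> tuple:
    # Structural reusability: the SAME code decrypts, with the key
    # schedule reversed.
    return encrypt(block, list(reversed(keys)))

keys = [0x0F0F, 0x3C3C, 0x5A5A, 0xA5A5]
pt = (0x1234, 0xABCD)
ct = encrypt(pt, keys)
assert decrypt(ct, keys) == pt
```

Each round only XORs the output of F into one half, so inverting a round never requires inverting F; that is why F is free to be one-way.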
4.2. Algorithm for DES.
DES Encryption
The overall scheme for DES encryption is illustrated in Figure 3.5. As with any encryption scheme,
there are two inputs to the encryption function: the plaintext to be encrypted and the key. In this case,
the plaintext must be 64 bits in length and the key is 56 bits in length.
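As noted earlier, the 56-bit key is conventionally carried in a 64-bit value whose per-byte low bit is an odd-parity check. A small Python sketch (illustrative only, not part of the VHDL design; the helper names are assumptions) checks and repairs that convention:

```python
def has_valid_parity(key: bytes) -> bool:
    """DES key parity convention: every byte of the 64-bit key must have
    odd parity, i.e. an odd number of 1 bits, so the 8th bit of each byte
    acts as a parity check on the previous 7 bits."""
    return all(bin(b).count("1") % 2 == 1 for b in key)

def fix_parity(key: bytes) -> bytes:
    """Set the low (parity) bit of each byte so the byte has odd parity."""
    out = []
    for b in key:
        if bin(b >> 1).count("1") % 2 == 0:
            out.append((b & 0xFE) | 1)   # even 1s in the top 7 bits: set parity bit
        else:
            out.append(b & 0xFE)         # already odd: clear parity bit
    return bytes(out)

key = fix_parity(bytes(range(8)))
assert has_valid_parity(key)
```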
Fig. 3.5. Related permutation tables: Initial Permutation, Final Permutation, Expansion
Permutation, and (d) Permutation Function (P).
DETAILS OF A SINGLE ROUND Figure 3.6 shows the internal structure of a single round.
Again, begin by focusing on the left-hand side of the diagram. The left and right halves of each
64-bit intermediate value are treated as separate 32-bit quantities, labelled L (left) and R (right). As in any
classic Feistel cipher, the overall processing at each round is Li = Ri-1 and Ri = Li-1 XOR F(Ri-1, Ki).
The round key Ki is 48 bits. The R input is 32 bits. This input is first expanded to 48 bits by using a table
that defines a permutation plus an expansion that involves duplication of 16 of the bits (Table 3.2c). The
resulting 48 bits are XORed with Ki. This 48-bit result passes through a substitution function that
produces a 32-bit output, which is permuted as defined by Table 3.2d.
The role of the S-boxes in the function F is illustrated in Figure 3.7. The substitution
consists of a set of eight S-boxes, each of which accepts 6 bits as input and
produces 4 bits as output. These transformations are defined in Table 3.3, which is
interpreted as follows: The first and last bits of the input to box Si form a 2-bit binary
number to select one of four substitutions defined by the four rows in the table for Si.
The middle four bits select one of the sixteen columns. The decimal value in the cell
selected by the row and column is then converted to its 4-bit representation to produce
the output. For example, in S1, for input 011001, the row is 01 (row 1) and the
column is 1100 (column 12). The value in row 1, column 12 is 9, so the output is 1001.
Fig. 3.6. Single round of the DES algorithm.
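The S-box lookup rule just described can be checked with a short sketch using the published DES S1 table (the helper name is an illustrative choice):

```python
# The standard DES S1 table: 4 rows of 16 entries (FIPS PUB 46-3).
S1 = [
    [14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7],
    [ 0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8],
    [ 4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0],
    [15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13],
]

def sbox_lookup(table, six_bits: int) -> int:
    # First and last bits of the 6-bit input form the 2-bit row number;
    # the middle four bits form the column number.
    row = ((six_bits >> 4) & 0b10) | (six_bits & 1)
    col = (six_bits >> 1) & 0b1111
    return table[row][col]

# The worked example: input 011001 -> row 01 (1), column 1100 (12) -> 9 -> 1001
assert sbox_lookup(S1, 0b011001) == 0b1001
```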
4.3. Calculation of F(R,K)
Data security has been a major issue to be dealt with for a few decades. In this report we have reasoned
about the use of cryptographic techniques for securing data, and discussed an efficient data security
algorithm, DES. Such algorithms help in securing data in transit. Since RSA is a linear cryptographic
algorithm and is slow in the encryption and decryption processes, it can put the user's security at risk.
Thus, DES is the cryptographic algorithm chosen here to provide security and authentication.
Authentication of the data is provided with the help of smaller keys. The computational cost as well as
the speed of this algorithm is comparatively better. It also makes use of good exchange protocols, giving
another mark to its security.
DES was finally and definitively proved insecure in July 1998, when the Electronic Frontier
Foundation (EFF) announced that it had broken a DES encryption using a special-
purpose “DES cracker” machine that was built for less than $250,000. The attack took less than
three days. The EFF has published a detailed description of the machine, enabling others to build
their own cracker [EFF98].
And, of course, hardware prices will continue to drop as speeds increase, making DES virtually
worthless. It is important to note that there is more to a key-search attack than simply running through all
possible keys. Unless known plaintext is provided, the analyst must be able to recognize plaintext as
plaintext. If the message is just plaintext in English, then the result pops out easily, although the task
of recognizing English would have to be automated. If the text message has been compressed before
encryption, then recognition is more difficult. And if the message is some more general type of data, such as
a numerical file, and this has been compressed, the problem becomes even more difficult to
automate. Thus, to supplement the brute-force approach, some degree of knowledge about the expected
plaintext is needed, and some means of automatically distinguishing plaintext from garble is also needed.
The EFF approach addresses this issue as well and introduces some automated techniques that would be
effective in many contexts. Fortunately, there are a number of alternatives to DES, the most important of
which are AES and triple DES.
We will briefly mention some weaknesses that have been found in the design of the cipher.
S-boxes At least three weaknesses are mentioned in the literature for S-boxes.
1. In S-box 4, the last three output bits can be derived in the same way as the first output bit by
complementing some of the input bits.
2. Two specifically chosen inputs to an S-box array can create the same output.
3. It is possible to obtain the same output in a single round by changing bits in only three neighbouring
S-boxes.
D-boxes One mystery and one weakness were found in the design of D-boxes:
1. It is not clear why the designers of DES used the initial and final permutations; these have no security
benefits.
2. In the expansion permutation (inside the function F), the first and fourth bits of every 4-bit series are
repeated.
Weakness in the Cipher Key Several weaknesses have been found in the cipher key.
Key Size Critics believe that the most serious weakness of DES is in its key size (56 bits). To do a
brute-force attack on a given ciphertext block, the adversary needs to check 2^56 keys.
a. With available technology, it is possible to check one million keys per second. This means that we
need more than two thousand years to do brute-force attacks on DES using only a computer with one
processor.
b. If we can make a computer with one million chips (parallel processing), then we can test the whole
key domain in approximately 20 hours. When DES was introduced, the cost of such a computer was
over several million dollars, but the cost has dropped rapidly. A special computer was built in 1998 that
found the key in 112 hours.
c. Computer networks can simulate parallel processing. In 1997 a team of researchers used 3500
computers attached to the Internet to find a key challenged by RSA Laboratories in 120 days. The key
domain was divided among all of these computers, and each computer was responsible for checking its
part of the domain.
d. If 3500 networked computers can find the key in 120 days, a secret society with 42,000 members
can find the key in 10 days. The above discussion shows that DES with a cipher key of 56 bits is not
safe enough to be used comfortably. We will see later in the chapter that one solution is to use triple
DES (3DES) with two keys (112 bits) or triple DES with three keys (168 bits).
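The arithmetic behind points (a) and (b) above can be checked directly. A back-of-the-envelope sketch, using exactly the rates assumed in the text:

```python
# Brute-force time for a 56-bit key at the rates assumed in the text.
keys = 2 ** 56

# (a) one processor checking one million keys per second
years_single = keys / 1_000_000 / (3600 * 24 * 365)
print(round(years_single))    # roughly 2285 years, i.e. "more than two thousand"

# (b) one million chips working in parallel
hours_parallel = keys / (1_000_000 * 1_000_000) / 3600
print(round(hours_parallel))  # roughly 20 hours
```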
Weak Keys Four out of 2^56 possible keys are called weak keys. A weak key is one that, after the
parity-drop operation (using Table 6.12), consists either of all 0s, all 1s, or half 0s and half 1s. These
keys are shown in Table 6.18.
The round keys created from any of these weak keys are the same and have the same pattern as the
cipher key. For example, the sixteen round keys created from the first key are all made of 0s; the ones
from the second are made of half 0s and half 1s. The reason is that the key-generation algorithm first
divides the cipher key into two halves, and shifting or permutation of a block does not change the block
if it is made of all 0s or all 1s.
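The reason given above, that a circular shift of an all-0s or all-1s 28-bit half leaves it unchanged, is easy to demonstrate (an illustrative sketch, not the key schedule implementation itself):

```python
def rotate_left(bits: str, n: int) -> str:
    """Circular left shift of a bit-string, as applied to the 28-bit key halves."""
    return bits[n:] + bits[:n]

zeros = "0" * 28
ones = "1" * 28
# Shifting changes an ordinary half, but not an all-0s or all-1s half,
# so every round key derived from such halves looks the same.
print(rotate_left(zeros, 1) == zeros)                     # True
print(rotate_left(ones, 2) == ones)                       # True
print(rotate_left("0" * 27 + "1", 1) == "0" * 27 + "1")   # False
```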
What is the disadvantage of using a weak key? If we encrypt a block with a weak key and subsequently
encrypt the result with the same weak key, we get the original block. The process creates the same
original block if we decrypt the block twice. In other words, each weak key is its own inverse:
Ek(Ek(P)) = P, as shown in Fig. 6.11.
6.3. Different Possible Attacks on DES
6.3.1. Brute Force Attack
A brute force attack involves trying all possible keys until hitting on the one that results in plaintext.
This can involve significant costs related to the amount of processing required to try quadrillions (in the
case of DES) of keys. The time required is a function of how many keys can be tried per unit of time,
which in turn depends on how many computers can be assigned to the task in parallel.
Because computers are getting faster all the time, a normalized unit of measure is used for comparison:
the million-instructions-per-second (MIPS) year (MY), i.e. the number of instructions a MIPS computer
can execute in one year.
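The MY unit works out as follows (simple arithmetic, assuming a 365-day year):

```python
MIPS = 1_000_000                 # instructions per second
SECONDS_PER_YEAR = 3600 * 24 * 365
MY = MIPS * SECONDS_PER_YEAR     # instructions in one MIPS-year
print(f"{MY:.3e}")               # about 3.154e+13 instructions
```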
Moore's Law (Gordon Moore, founder of Intel) states that processing speed doubles every
18 months. As a result, advances in technology and computing performance will always
make brute force an increasingly practical attack on keys of a fixed length.
This table shows the times required for a brute force attack on various key lengths using
"Deep Crack" technology.
Deep Crack technology was developed in 1998 by the EFF (Electronic Frontier Foundation). They
built a machine called Deep Crack capable of trying roughly a million DES keys per microsecond against a
readable ASCII string, so that only about 20 hours are needed to try all possible keys. In theory, its
success in cracking DES makes DES worthless. In practice, however, by using cipher block chaining,
doing an initial scrambling of the data, and/or doing it three times in a row (triple DES), it can still be
fairly difficult to crack.
The only hope against a brute force attack is to have so many possible keys that it is not feasible to try
them all in a reasonable amount of time. Obviously, as the key length grows beyond 100 bits or so, the
number of keys quickly becomes astronomical. The new AES standard, Rijndael, which is intended
to replace DES, supports 128-, 192- and 256-bit keys. Even considering the staggering advances in
computing power and cryptanalysis, 256-bit keys should be pretty safe for the next 100 years or so.
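This scaling claim can be made concrete. At the rate attributed to Deep Crack above (about a million DES keys per microsecond, i.e. 10^12 keys per second, taken as an assumption here), exhaustive search times explode with key length:

```python
RATE = 10 ** 12                      # keys per second, the Deep Crack figure in the text
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (56, 128, 256):
    seconds = 2 ** bits / RATE
    if seconds < SECONDS_PER_YEAR:
        print(bits, round(seconds / 3600), "hours")     # 56-bit: about 20 hours
    else:
        print(bits, f"{seconds / SECONDS_PER_YEAR:.2e}", "years")
```

A 56-bit key falls in under a day, while 128- and 256-bit keys require on the order of 10^19 and 10^57 years respectively, which is the whole argument for longer keys.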
6.3.2. Linear Cryptanalysis
6.3.3 Meet-in-the-middle attack
The meet-in-the-middle attack is one type of known-plaintext attack: the intruder has to know
some parts of the plaintext and their ciphertexts. Using meet-in-the-middle attacks it is possible to break
ciphers that use two or more secret keys for multiple encryption with the same algorithm; the 3DES
cipher, for example, works in this way. The meet-in-the-middle attack was first presented by Diffie and
Hellman for cryptanalysis of the DES algorithm.
A cipher which is to be broken using a meet-in-the-middle attack can be defined as two algorithms, one for
encryption and one for decryption, each composed of two simpler algorithms:
C = Eb(kb, Ea(ka, P))
P = Da(ka, Db(kb, C))
where:
C is a ciphertext,
P is a plaintext,
E is an algorithm for encryption,
D is an algorithm for decryption,
ka and kb are two secret keys.
The first step of the attack is to create a table with all possible values for one side of the equation: one
should calculate all possible ciphertexts of the known plaintext P created using the first secret key,
that is, Ea(ka, P). The number of rows in the table is equal to the number of possible secret keys. It is a
good idea to sort the table by the computed ciphertexts Ea(ka, P), in order to simplify searching it later.
The second step of the attack is to calculate the values Db(kb, C) for the second side of the equation
and compare them with the values of the first side, computed earlier and stored in the table. The intruder
searches for a pair of secret keys ka and kb for which the value Ea(ka, P) found in the table and the
just-calculated value Db(kb, C) are the same.
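The two-step procedure above can be demonstrated on a toy double-encryption scheme. This is purely illustrative: the two 8-bit "ciphers" below are invented for the demo and have nothing to do with DES or 3DES.

```python
# Toy component ciphers on 8-bit blocks with 8-bit keys (invented for the demo).
def Ea(k, p): return (p + 7 * k) % 256
def Da(k, c): return (c - 7 * k) % 256
def Eb(k, p): return p ^ k
def Db(k, c): return c ^ k

ka_true, kb_true = 42, 199
# Two known plaintext/ciphertext pairs, C = Eb(kb, Ea(ka, P)).
pairs = [(p, Eb(kb_true, Ea(ka_true, p))) for p in (17, 203)]

# Step 1: table of the middle values Ea(ka, P) for every possible first key.
p0, c0 = pairs[0]
table = {}
for ka in range(256):
    table.setdefault(Ea(ka, p0), []).append(ka)

# Step 2: for every possible second key, decrypt one step and meet in the middle.
candidates = [(ka, kb)
              for kb in range(256)
              for ka in table.get(Db(kb, c0), [])]

# Filter candidates against the second known pair to remove false matches.
survivors = [(ka, kb) for ka, kb in candidates
             if Eb(kb, Ea(ka, pairs[1][0])) == pairs[1][1]]
print((ka_true, kb_true) in survivors)  # True
```

Note that the attack costs on the order of 2 * 2^8 cipher evaluations here instead of 2^16 for naive search over both keys; this is exactly why double encryption adds far less security than the combined key length suggests.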
The scheme of the meet-in-the-middle attack
It is also possible to attack encryption systems where the two encryption algorithms are different (and
where the keys used do not necessarily have the same lengths). In that case, the table in the first step is
created for the weaker of the two algorithms.
AES data encryption is a more mathematically efficient and elegant cryptographic algorithm, but its main
strength rests in the option for various key lengths. AES allows you to choose a 128-bit, 192-bit or 256-
bit key, making it exponentially stronger than the 56-bit key of DES. In terms of structure, DES uses the
Feistel network, which divides the block into two halves before going through the encryption steps. AES,
on the other hand, uses a substitution-permutation structure, which involves a series of substitution and
permutation steps to create the encrypted block.
steps to create the encrypted block. The original DES designers made a great contribution to data
security, but one could say that the aggregate effort of cryptographers for the AES algorithm has been far
greater.
One of the original requirements by the National Institute of Standards and Technology (NIST) for the
replacement algorithm was that it had to be efficient both in software and hardware implementations
(DES was originally practical only in hardware implementations). Java and C reference implementations
were used to do performance analysis of the algorithms. AES was chosen through an open competition
with 15 candidates from as many research teams around the world, and the total amount of resources
allocated to that process was tremendous. Finally, in October 2000, a NIST press release announced the
selection of Rijndael as the proposed Advanced Encryption Standard (AES).
REFERENCES
PROJECT DETAILS