With advances in science and technology, cloud computing has become the next big thing in the industry. The significant advantages of cloud storage are easy access, reduced hardware, and low maintenance and repair costs, so almost every organization is moving to the cloud. Security is a significant factor in cloud computing, ensuring that client data placed in the cloud stays safe. Cloud computing motivates data owners to outsource their databases to the cloud. However, for privacy reasons, sensitive data must be encrypted before outsourcing, which inevitably poses a challenge for effective data utilization. Existing work either focuses only on keyword searches or suffers from inadequate security guarantees or inefficiency. In this paper, we concentrate on multi-dimensional range queries over dynamic encrypted cloud data.
Cloud computing has been widely deployed and has become indispensable for many companies and institutions, due to salient features such as on-demand services, elasticity, flexibility, multi-tenancy, efficient access to data, and reduced maintenance cost. Cloud-based services and applications are steadily growing richer, serving many governments, educational institutions, medical institutions, and large enterprise groups. As one of the key services, data outsourcing solves the problem of storing large amounts of data at low cost for users with limited storage resources and has been widely adopted. However, if data is stored in the cloud directly, users risk revealing private information. This is critical, especially for high-impact business data or medical records, because most service providers do not guarantee that they will not view or modify clients' data. A cloud server may also act selfishly to save its own computation or transmission overhead. It is therefore essential for an elegant and practical outsourced storage scheme to provide security and privacy guarantees, such as confidentiality, integrity, freshness, authenticity, and verifiability, in addition to a good user experience and conventional operability and management, such as updating, retrieval, and multi-user support.
To ensure data confidentiality without losing operability and manageability of the data, many searchable symmetric encryption (SSE) schemes were proposed at an early stage. However, most of these SSE schemes assume that the cloud server is honest but curious. This assumption is not always valid, because the cloud server may suffer from external attacks, internal configuration errors, software vulnerabilities, and insider threats. The data may be tampered with, or the server may return a cached history or partial results.
1.2.1 SaaS (Software-as-a-Service)
SaaS is the primary delivery model for most commercial software today—there are
hundreds of thousands of SaaS solutions available, from the most focused industry
and departmental applications to powerful enterprise software database and AI
(artificial intelligence) software.
1.2.2 PaaS (Platform-as-a-Service)
PaaS provides software developers with on-demand platform—hardware, complete
software stack, infrastructure, and even development tools—for running,
developing, and managing applications without the cost, complexity, and
inflexibility of maintaining that platform on-premises.
Today, PaaS is often built around containers, a virtualized compute model one step
removed from virtual servers. Containers virtualize the operating system, enabling
developers to package the application with only the operating system services it
needs to run on any platform, without modification and without need for
middleware.
1.2.3 IaaS (Infrastructure-as-a-Service)
In contrast to SaaS and PaaS (and even newer PaaS computing models such as
containers and serverless), IaaS provides the users with the lowest-level control of
computing resources in the cloud.
IaaS was the most popular cloud computing model when it emerged in the early
2010s. While it remains the cloud model for many types of workloads, use of SaaS
and PaaS is growing at a much faster rate.
A public cloud is open to everyone to store and access information via the Internet on a pay-per-use basis. In a public cloud, computing resources are managed and operated by the Cloud Service Provider (CSP).
The main components of a public cloud platform include the hypervisor, management software, deployment software, the network, cloud servers, and cloud storage.
CHAPTER 2
LITERATURE SURVEY
Encrypted Storage Outsourcing. Encrypted storage outsourcing allows clients with limited resources to outsource large amounts of data to cloud service companies at a low cost. Since the data is encrypted and stored directly on the cloud server, users cannot query the encrypted data directly. If a user wants to update the database, the user must download the data locally, update the database, and upload it back to the server.
Searchable Encryption. To query the encrypted data on the cloud server, several relevant technologies have been proposed. SSE is one of the most important schemes to solve this problem. SSE is a search query scheme based on a keyword index; it can perform operations such as querying cloud-encrypted data. In related work, the authors provided a privacy-protection framework for outsourced media search. That work relies on multimedia hashing and symmetric encryption and tries to balance the strength of privacy enforcement, the quality of search, and computational complexity. In general, we summarize the related work as follows.
Verifiable Searchable Symmetric Encryption. Some of the most recent work focuses on forward security or backward security. Most of it considers honest-but-curious servers that follow the defined protocol. Although these schemes can provide forward or backward security, the search results cannot be verified efficiently when the servers perform active attacks. In this review, the verifiability of a symmetric searchable encryption scheme means that the user can verify the integrity and freshness of the search results returned by the server. Integrity verification prevents the server from returning partial or incorrect search results. In this paper, our goal is to design an efficient, secure, verifiable SSE scheme for the three-party model, in which the user can verify the correctness of the result received from the server and can detect whether the server is launching a replay attack. The overall performance can be reasonably improved; for example, when the server receives the user's query, the server can save overall cost through our specific algorithm and data structure. The proposed data structure and algorithm can support efficient dynamic updating.
2.2 Title: Enabling Generic, Verifiable, and Secure Data Search in Cloud
Services
Authors: C. Wang, X. Yuan, Q. Wang, J. Zhu, Q. Li, and K. Ren
SSE schemes only work with honest-but-curious cloud services that do not deviate from the prescribed protocols. However, this assumption does not always hold in practice due to the untrusted nature of storage outsourcing. To alleviate this issue, there have been studies on Verifiable Searchable Symmetric Encryption (VSSE), which works against malicious cloud services by enabling result verification. To the best of our knowledge, however, existing VSSE schemes exhibit very limited applicability, such as only supporting static databases, demanding specific SSE constructions, or only working in the single-user model. In this paper, we propose
GSSE, the first generic verifiable SSE scheme in the single-owner multiple-user
model, which provides verifiability for any SSE schemes and further supports data
updates. To generically support result verification, we first decouple the proof
index in GSSE from SSE. We then leverage Merkle Patricia Tree (MPT) and
Incremental Hash to build the proof index with data update support. We also
develop a timestamp-chain for data freshness maintenance across multiple users.
Rigorous analysis and experimental evaluations show that GSSE is secure and
introduces small overhead for result verification.
2.3 Title: Supervised Learning for Suicidal Ideation Detection in Online User Content
Author: Weng, J.
2.5 Title: DAC-MACS: Effective data access control for multiauthority cloud
storage systems
Authors: K. Yang, X. Jia, K. Ren, and B. Zhang
Data access control is an effective way to ensure data security in the cloud.
However, due to data outsourcing and untrusted cloud servers, the data access
control becomes a challenging issue in cloud storage systems. Existing access
control schemes are no longer applicable to cloud storage systems, because they
either produce multiple encrypted copies of the same data or require a fully trusted
cloud server. Ciphertext-policy attribute-based encryption (CP-ABE) is a
promising technique for access control of encrypted data.
2.6 Title: Expressive, efficient, and revocable data access control for multi-
authority cloud storage
Authors: K. Yang and X. Jia
Abstract: Data access control is an effective way to ensure the data security in the
cloud. Due to data outsourcing and untrusted cloud servers, the data access control
becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-
based Encryption (CP-ABE) is regarded as one of the most suitable technologies
for data access control in cloud storage, because it gives data owners more direct
control on access policies. However, it is difficult to directly apply existing CP-ABE schemes to data access control in cloud storage systems.
2.7 Title: An Efficient Privacy-Preserving Outsourced Calculation Toolkit with Multiple Keys
Authors: X. Liu, R. H. Deng, K.-K. R. Choo, and J. Weng
Abstract: In this paper, we propose a toolkit for efficient and privacy-preserving outsourced calculation under multiple encrypted keys (EPOM). Using EPOM, a large scale of users can securely outsource their data to a cloud server for storage. Moreover, encrypted data belonging to multiple users can be processed without compromising the security of the individual users' (original) data or the final computed results. To reduce the associated key-management cost and private-key exposure risk in EPOM, we present a distributed two-trapdoor public-key cryptosystem.
2.8 Title: Enhanced Clients for Data Stores and Cloud Services
Authors: Arun Iyengar, Fellow, IEEE
This paper presents the design and implementation of enhanced clients for
improving both the functionality and performance of applications accessing data
stores or cloud services. Our enhanced clients can improve performance via
multiple types of caches, encrypt data for providing confidentiality before sending
information to a server, and compress data for reducing the size of data transfers.
Our clients can perform data analysis to allow applications to more effectively use
cloud services. They also provide both synchronous and asynchronous interfaces.
An asynchronous interface allows an application program to access a data store or cloud service and continue execution before receiving a response, which can significantly improve performance.
CHAPTER 3
SYSTEM ANALYSIS
3.1.1. DRAWBACKS
1. Less Security
In our scheme, the data owner uses a B+-Tree to build the index and hashes all nodes to generate a root using several hash functions. The user verifies the integrity of the search results through this root. We leverage two methods to generate the root of the B+-Tree. One is to hash all the nodes to generate the root; the other is to hash only the leaf nodes. Since the leaf nodes contain all key-value pairs, the integrity of the search results can still be verified. Although there are many applications based on the B+-Tree, they cannot be utilized directly in this context while providing strong privacy and security guarantees such as integrity checking and verification. Second, how does a scheme prevent malicious servers from launching replay attacks while supporting the three-party model? An existing timestamp-based scheme solves this problem, but it can only be applied to the two-party model. We address this issue by combining the timestamp mechanism with the root of the B+-Tree and authorized users. Finally, cost and user experience are common issues for most current schemes. In traditional verifiable SSE schemes, whether or not a keyword queried by the user exists, the server always needs to traverse the whole secure index and create the corresponding authenticator, which is inefficient for both the server and the users. A new idea should be adopted to differentiate the two situations to boost performance and user experience. The difficulty is how to make the newly designed scheme compatible with the previous scheme while still supporting dynamic updating. Besides, it is essential to reduce users' costs: some cloud service providers charge not only for the amount of storage requested by the client but also for the amount of computation.
How can the server avoid unnecessary searches?
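The second method described above, hashing only the leaf nodes to derive a root, can be sketched in Java. This is an illustrative sketch only, not the scheme's actual construction: the class and method names are invented, and a single SHA-256 digest chained over the leaf-level key-value pairs stands in for the full hash construction. The user recomputes the root over the returned key-value pairs; any tampering changes the digest.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.LinkedHashMap;
import java.util.Map;

public class LeafRootDemo {
    // Hash only the leaf-level key-value pairs into a single root digest.
    public static byte[] leafRoot(Map<String, String> leaves) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        for (Map.Entry<String, String> e : leaves.entrySet()) {
            // Chain each pair into the running digest in leaf order.
            sha.update(e.getKey().getBytes(StandardCharsets.UTF_8));
            sha.update(e.getValue().getBytes(StandardCharsets.UTF_8));
        }
        return sha.digest();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> leaves = new LinkedHashMap<>();
        leaves.put("k1", "doc1");
        leaves.put("k2", "doc2");
        byte[] root = leafRoot(leaves);   // root published to authorized users
        leaves.put("k2", "tampered");     // server modifies a result
        boolean detected = !MessageDigest.isEqual(root, leafRoot(leaves));
        System.out.println(detected);     // true: the modification changes the root
    }
}
```

Because the leaves contain all key-value pairs, recomputing this root over the returned results suffices for integrity checking, mirroring the second method above.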
3.1.2 ADVANTAGES
1) The user can verify the correctness of the received result from the server.
2) The user can detect whether the server is launching a replaying attack.
3) The overall performance can be reasonably improved. For example, when the
server receives the user’s query, the server can save the overall cost through our
specific algorithm and data structure.
The inverted index stores all keywords, the documents corresponding to each keyword, the keywords corresponding to each hash value, and the trapdoors corresponding to all keywords.
Algorithm description
Most of the parameters used in the Counting Bloom filter are defined the same as in the Bloom filter, such as n and k; m denotes the number of counters in the Counting Bloom filter, an expansion of the m bits of the Bloom filter.
An empty Counting Bloom filter is an array of m counters, all initialized to 0. As with the Bloom filter, there must also be k different hash functions, each of which maps a set element to one of the m counter positions with a uniform random distribution. Likewise, k is a constant much smaller than m, and m is proportional to the number of elements to be appended.
The main generalization over the Bloom filter is how an element is appended: to append an element, feed it to each of the k hash functions to obtain k array positions, and increment the counters at all these positions by 1.
To query an element with a threshold θ (that is, to check whether the count of the element is less than θ), feed it to each of the k hash functions to obtain k counter positions.
If any of the counters at these positions is smaller than θ, the count of the element is definitely smaller than θ; if the count were greater than or equal to θ, all the corresponding counters would also be greater than or equal to θ.
If all counters are greater than or equal to θ, then either the count really is at least θ, or the counters have reached θ by chance. If all counters are at least θ even though the true count is less than θ, this situation is a false positive. As with the Bloom filter, false positives should be minimized.
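The append and threshold-query operations above can be sketched in Java. This is a minimal illustration, not a production implementation: the k hash functions are simulated by salting String.hashCode, and the class and method names are invented for the example.

```java
public class CountingBloomFilter {
    private final int[] counters; // m counters, all initialized to 0
    private final int k;          // number of hash functions

    public CountingBloomFilter(int m, int k) {
        this.counters = new int[m];
        this.k = k;
    }

    // Simulate k independent hash functions by salting the element's hash.
    private int position(String element, int i) {
        return Math.floorMod((element + "#" + i).hashCode(), counters.length);
    }

    // Append: increment the counter at each of the k positions by 1.
    public void add(String element) {
        for (int i = 0; i < k; i++) counters[position(element, i)]++;
    }

    // Threshold query: if any counter is below theta, the true count is
    // definitely below theta; if all are at least theta, the answer may
    // still be a false positive.
    public boolean countAtLeast(String element, int theta) {
        for (int i = 0; i < k; i++) {
            if (counters[position(element, i)] < theta) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        CountingBloomFilter cbf = new CountingBloomFilter(1024, 4);
        cbf.add("apple");
        cbf.add("apple");
        cbf.add("banana");
        System.out.println(cbf.countAtLeast("apple", 2));  // true (no false negatives)
        System.out.println(cbf.countAtLeast("banana", 2)); // false unless a rare collision occurs
    }
}
```

Note that the one-sided guarantee matches the text: a "below θ" answer is always correct, while an "at least θ" answer may be a false positive.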
CHAPTER 4
MODULE DESCRIPTION
Java Technology
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it’s a development
tool or a Web browser that can run applets, is an implementation of the Java VM.
Java byte codes help make “write once, run anywhere” possible. You can compile
your program into byte codes on any platform that has a Java compiler. The byte
codes can then be run on any implementation of the Java VM. That means that as
long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or on an
iMac.
You’ve already been introduced to the Java VM. It’s the base for the Java platform
and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide
many useful capabilities, such as graphical user interface (GUI) widgets. The Java
API is grouped into libraries of related classes and interfaces; these libraries are
known as packages. The next section, What Can Java Technology Do?, highlights the functionality that some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the
figure shows, the Java API and the virtual machine insulate the program from the
hardware. Native code is code that, once compiled, runs only on a specific hardware platform. As a platform-independent environment, the Java
platform can be a bit slower than native code. However, smart compilers, well-
tuned interpreters, and just-in-time byte code compilers can bring performance
close to that of native code without threatening portability.
The most common types of programs written in the Java programming language
are applets and applications. If you’ve surfed the Web, you’re probably already
familiar with applets. An applet is a program that adheres to certain conventions
that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute, entertaining
applets for the Web. The general-purpose, high-level Java programming language
is also a powerful software platform. Using the generous API, you can write many
types of programs.
• The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
• Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
We can’t promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better and to require less effort than other languages. We believe that Java technology will help you do
the following:
• Write better code: The Java programming language encourages good coding
practices, and its garbage collection helps you avoid memory leaks. Its object
orientation, its JavaBeans component architecture, and its wide-ranging, easily
extendible API let you reuse other people’s tested code and introduce fewer bugs.
• Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by avoiding the use of libraries written in other languages. The
100% Pure Java™ Product Certification Program has a repository of historical
process manuals, white papers, brochures, and similar materials online.
• Write once, run anywhere: Because 100% Pure Java programs are compiled
into machine-independent byte codes, they run consistently on any Java platform.
• Distribute software more easily: You can upgrade applets easily from a
central server. Applets take advantage of the feature of allowing new classes to be
loaded “on the fly,” without recompiling the entire program.
Sockets
#include <sys/types.h>
#include <sys/socket.h>

int sockfd = socket(family, type, protocol);

Here "family" will be AF_INET for IP communications, "protocol" will be zero, and "type" will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to the two ends of a pipe, but the actual pipe does not yet exist.
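The same two-socket idea can be expressed in Java, the language used elsewhere in this project, with java.net. The class name and the one-shot echo exchange below are invented for illustration: a server socket accepts one end of the "pipe", a client socket forms the other, and a line of text travels across the loopback interface.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketPairDemo {
    // Create the two ends of the "pipe" and echo one message across it.
    public static String echoOnce(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0: pick any free port
            Thread t = new Thread(() -> {
                try (Socket s = server.accept(); // server end of the pipe
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the line back
                } catch (IOException ignored) { }
            });
            t.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);       // client end of the pipe
                return in.readLine();
            } finally {
                t.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello")); // hello
    }
}
```

Only once both sockets exist and are connected does the "pipe" actually carry data, which is the point the passage above makes about the C API.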
JFree Chart
JFreeChart is a free 100% Java chart library that makes it easy for developers to
display professional quality charts in their applications. JFreeChart's extensive
feature set includes:
A consistent and well-documented API, supporting a wide range of chart types;
A flexible design that is easy to extend, and targets both server-side and client-side
applications;
Support for many output types, including Swing components, image files
(including PNG and JPEG), and vector graphics file formats (including PDF, EPS
and SVG);
1. Map Visualizations
Charts showing values that relate to geographical areas. Some examples include:
(a) population density in each state of the United States, (b) income per capita for
each country in Europe, (c) life expectancy in each country of the world. The tasks
in this project include:
Sourcing freely redistributable vector outlines for the countries of the world,
states/provinces in particular countries (USA in particular, but also other areas);
2. Time Series Chart Interactivity
Implement a new (to JFreeChart) feature for interactive time series charts: display a separate control that shows a small version of all the time series data, with a sliding "view" rectangle that allows you to select the subset of the time series data to display in the main chart.
3. Dashboards
4. Property Editors
The property editor mechanism in JFreeChart only handles a small subset of the
properties that can be set for charts. Extend (or reimplement) this mechanism to
provide greater end-user control over the appearance of the charts.
J2ME uses configurations and profiles to customize the Java Runtime Environment (JRE). As a complete JRE, J2ME comprises a configuration, which determines the JVM used, and a profile, which defines the application by adding domain-specific classes. The configuration defines the basic run-time environment as a set of core classes and a specific JVM that runs on specific types of devices; configurations are discussed in detail below. The profile defines the application; specifically, it adds domain-specific classes to the J2ME configuration to define certain uses for devices. Profiles are covered in depth below. The following graphic depicts the relationship between the different virtual machines, configurations, and profiles. It also draws a parallel with the J2SE API and its Java virtual machine.
While the J2SE virtual machine is generally referred to as a JVM, the J2ME virtual machines, KVM and CVM, are subsets of the JVM. Both the KVM and CVM can be thought of as a kind of Java virtual machine; they are simply shrunken versions of the J2SE JVM, specific to J2ME.
Introduction
In this section, we will go over some considerations you need to keep
in mind when developing applications for smaller devices. We'll take a look at the
way the compiler is invoked when using J2SE to compile J2ME applications.
Finally, we'll explore packaging and deployment and the role preverification plays
in this process.
Developing applications for small devices requires you to keep certain strategies in
mind during the design phase. It is best to strategically design an application for a
small device before you begin coding. Correcting the code because you failed to
consider all of the "gotchas" before developing the application can be a painful
process. Here are some design strategies to consider:
* Minimize run-time memory use. To minimize the amount of memory used at run
time, use scalar types in place of object types. Also, do not depend on the garbage
collector. You should manage the memory efficiently yourself by setting object
references to null when you are finished with them. Another way to reduce run-
time memory is to use lazy instantiation, only allocating objects on an as-needed
basis. Other ways of reducing overall and peak memory use on small devices are to
release resources quickly, reuse objects, and avoid exceptions.
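The lazy-instantiation and null-reference advice above can be sketched in Java. The class below is invented for illustration: the buffer is allocated only on first use, and releasing it sets the reference to null so the garbage collector can reclaim it promptly.

```java
public class LazyHolder {
    private StringBuilder log; // not allocated until first needed (lazy instantiation)

    // Allocate on an as-needed basis.
    public StringBuilder log() {
        if (log == null) log = new StringBuilder();
        return log;
    }

    public boolean isAllocated() {
        return log != null;
    }

    // Set the reference to null when finished, so the object can be collected.
    public void release() {
        log = null;
    }

    public static void main(String[] args) {
        LazyHolder h = new LazyHolder();
        System.out.println(h.isAllocated()); // false: nothing allocated yet
        h.log().append("event");
        System.out.println(h.isAllocated()); // true: allocated on first use
        h.release();
        System.out.println(h.isAllocated()); // false: reference dropped
    }
}
```

On a constrained device this pattern keeps peak memory low: objects exist only between first use and explicit release, rather than for the whole lifetime of the application.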
4. Configurations overview
The configuration defines the basic run-time environment as a set of core classes
and a specific JVM that run on specific types of devices. Currently, two
configurations exist for J2ME, though others may be defined in the future:
5. J2ME profiles
Profile 1: KJava
KJava is Sun's proprietary profile and contains the KJava API. The KJava profile is
built on top of the CLDC configuration. The KJava virtual machine, KVM, accepts
the same byte codes and class file format as the classic J2SE virtual machine.
KJava contains a Sun-specific API that runs on the Palm OS. The KJava API has a great deal in common with the J2SE Abstract Windowing Toolkit (AWT).
MIDP is geared toward mobile devices such as cellular phones and pagers. The
MIDP, like KJava, is built upon CLDC and provides a standard run-time
environment that allows new applications and services to be deployed dynamically
on end user devices. MIDP is a common, industry-standard profile for mobile
devices that is not dependent on a specific vendor. It is a complete and supported
foundation for mobile application
development. MIDP contains the following packages; the first three are core CLDC packages, and the remaining four are MIDP-specific packages.
* java.lang
* java.io
* java.util
* javax.microedition.io
* javax.microedition.lcdui
* javax.microedition.midlet
* javax.microedition.rms
The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, together with the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
Methods for preparing input validations and the steps to follow when errors occur.
OBJECTIVES
2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record-viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided when needed, so that the user is never left confused. Thus, the objective of input design is to create an input layout that is easy to follow.
4.1.2 OUTPUT DESIGN
A quality output is one that meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. Output is the most important and most direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.
The output form of an information system should accomplish one or more of the
following objectives.
Convey information about past activities, current status, or projections of the future.
Trigger an action.
Confirm an action.
4.2 RESOURCE REQUIREMENT
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
DESIGN PROCESS
Data outsourcing solves the problem of storing large amounts of data at low cost for users with limited storage resources and has been widely adopted.
ADVANTAGES
5.2 ALGORITHMS
AES:
The algorithm described by AES is a symmetric-key algorithm, meaning the
same key is used for both encrypting and decrypting the data.
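A minimal Java sketch of this symmetric-key property, using the standard javax.crypto API, is shown below. AES in GCM mode, the class name, and the sample message are choices made for the example, not part of the AES definition itself; the point is that the single shared key both encrypts and decrypts.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesDemo {
    // Encrypt with AES in GCM mode; the same key is used for decryption.
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
        return c.doFinal(plaintext);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                          // AES-128
        SecretKey key = kg.generateKey();      // the single shared secret key
        byte[] iv = new byte[12];              // GCM nonce; must be unique per encryption
        new SecureRandom().nextBytes(iv);
        byte[] ct = encrypt(key, iv, "sensitive record".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(key, iv, ct), StandardCharsets.UTF_8)); // sensitive record
    }
}
```

In the outsourcing setting, the data owner holds this key and encrypts records before uploading, so the cloud server only ever stores ciphertext.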
MODULE DESCRIPTION
2. Cloud Server
OWNER/USER LOGIN
CLOUD SERVER
CLASS DIAGRAM:
SEQUENCE DIAGRAM:
COLLABORATION DIAGRAM:
DEPLOYMENT DIAGRAM:
DATAFLOW DIAGRAM:
LEVEL 0:
LEVEL 1:
LEVEL 2:
LEVEL 3:
CHAPTER 6
IMPLEMENTATION
6.1.1 ODBC
Through the ODBC Administrator in Control Panel, you can specify the database
that is associated with a data source that an ODBC application program is written
to use. Think of an ODBC data source as a door with a name on it. Each door will
lead you to a particular database. For example, the data source named Sales
Figures might be a SQL Server database, whereas the Accounts Payable data
source could refer to an Access database. The physical database referred to by a
data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather,
they are installed when you set up a separate database application, such as SQL
Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control
Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your
ODBC data sources through a stand-alone program called ODBCADM.EXE.
There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources. From a programming perspective, the beauty of ODBC
is that the application can be written to use the same set of function calls to
interface with any data source, regardless of the database vendor. The source code
of the application doesn’t change whether it talks to Oracle or SQL Server. We
only mention these two as an example. There are ODBC drivers available for
several dozen popular database systems.
Excel spreadsheets and plain text files can be turned into data sources. The
operating system uses the Registry information written by ODBC Administrator to
determine which low-level ODBC drivers are needed to talk to the data source
(such as the interface to Oracle or SQL Server). The loading of the ODBC drivers
is transparent to the ODBC application program. In a client/server environment,
the ODBC API even handles many of the network issues for the application
programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn’t as
efficient as talking directly to the native database interface. ODBC has had many
detractors make the charge that it is too slow. Microsoft has always claimed that
the critical factor in performance is the quality of the driver software that is used.
In our humble opinion, this is true. The availability of good ODBC drivers has
improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.
6.1.2 JDBC
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind. JDBC was designed with many goals in mind, and those goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight
as to why certain classes and functionalities behave the way they do. The eight
design goals for JDBC are as follows:
1. SQL Level API
The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough
level for higher-level tools and APIs to be created. Conversely, it is at a high
enough level for application programmers to use it confidently. Attaining this goal
allows for future tool vendors to “generate” JDBC code and to hide many of
JDBC’s complexities from the end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. To
support a wide variety of vendors, JDBC will allow any query
statement to be passed through it to the underlying database driver. This allows the
connectivity module to handle non-standard functionality in a manner that is
suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal
allows JDBC to use existing ODBC level drivers by the use of a software interface.
This interface would translate JDBC calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system.
Because of Java’s acceptance in the user community thus far, the designers felt
that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows more error checking to be done at compile time; consequently,
fewer errors appear at runtime.
7. Keep the common cases simple
Because the SQL calls most programmers use are simple SELECTs, INSERTs,
DELETEs, and UPDATEs, these queries should be simple to perform with JDBC.
However, more complex SQL statements should also be possible.
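As a sketch of how these goals surface in practice, the following minimal example walks through the standard JDBC flow: driver lookup, connection, prepared statement, and result set. The connection URL, table, and column names are hypothetical placeholders; a real vendor driver and URL would be substituted. Without a driver on the classpath, `getConnection` simply reports that no suitable driver was found.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    // Hypothetical URL - replace with your vendor's JDBC URL
    static final String URL = "jdbc:vendor://localhost/demo";

    public static void main(String[] args) {
        // try-with-resources closes the connection, statement, and result set
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name FROM employees WHERE salary > ?")) {
            ps.setInt(1, 50000);                    // bind the query parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {                 // iterate over returned rows
                    System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            // With no vendor driver installed, this reports "No suitable driver found"
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}
```

Note how the prepared statement keeps the common SELECT case simple (goal 7) while the raw SQL string is passed through to the driver untouched (goal 2).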
Finally, we decided to proceed with the implementation using Java networking, and
we use an MS Access database for dynamically updating the cache table.
Java byte codes help make “write once, run anywhere” possible. You can compile
your Java program into byte codes on any platform that has a Java compiler. The
byte codes can then be run on any implementation of the Java VM. For example, the
same Java program can run on Windows NT, Solaris, and Macintosh.
6.1.5 Networking
TCP/IP stack:
IP datagrams
The IP layer provides a connectionless and unreliable delivery system that routes
each datagram independently across the Internet. It is also responsible for
breaking up large datagrams into smaller ones for transmission and reassembling
them at the other end.
UDP
UDP is also connectionless and unreliable. What it adds to IP is a checksum for
the contents of the datagram and port numbers. These are used to support a
client/server model.
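The client/server exchange that UDP's port numbers make possible can be sketched in Java with two `DatagramSocket`s on the loopback interface. The class and method names and the message below are our own illustrative choices:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpLoopback {
    // Sends one datagram to a local receiver socket and returns what arrived.
    public static String sendAndReceive(String message) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // OS picks a free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            DatagramPacket out = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort());
            sender.send(out);                        // fire-and-forget: no delivery guarantee

            byte[] buf = new byte[1024];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.receive(in);                    // blocks until a datagram arrives
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndReceive("hello over UDP"));
    }
}
```

The port number on the receiver socket is exactly the addressing that UDP layers on top of IP: the sender names a (host, port) pair, and the OS delivers the datagram to whichever process is bound to that port.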
TCP
TCP supplies logic to give a reliable, connection-oriented protocol above IP. It
provides a virtual circuit that two processes can use to communicate.
Internet addresses
To use a service, you must be able to find it. The Internet uses an address
scheme for machines so that they can be located. The address is a 32-bit integer
which gives the IP address. This encodes a network ID and further addressing. The
network ID falls into various classes according to the size of the network
address.
Network address
Class A uses 8 bits for the network address, with 24 bits left over for other
addressing. Class B uses 16-bit network addressing, Class C uses 24-bit network
addressing, and Class D uses all 32 bits.
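The class boundaries above follow from the leading bits of the first octet, which a short helper can decode. The class and method names here are our own; by convention, class D addresses are multicast and class E is reserved:

```java
public class AddressClass {
    // Classify a dotted-quad IPv4 address by the leading bits of its first octet.
    public static char classify(String address) {
        int first = Integer.parseInt(address.split("\\.")[0]);
        if (first < 128) return 'A';   // 0xxxxxxx : 8-bit network ID
        if (first < 192) return 'B';   // 10xxxxxx : 16-bit network ID
        if (first < 224) return 'C';   // 110xxxxx : 24-bit network ID
        if (first < 240) return 'D';   // 1110xxxx : multicast
        return 'E';                    // 1111xxxx : reserved
    }

    public static void main(String[] args) {
        System.out.println("10.0.0.1 is class " + classify("10.0.0.1"));
        System.out.println("172.16.0.1 is class " + classify("172.16.0.1"));
        System.out.println("192.168.0.1 is class " + classify("192.168.0.1"));
    }
}
```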
Subnet address
// Algorithm used
private final static String ALGORITHM = "DES";
private final static String HEX = "0123456789ABCDEF";
/**
 * Encrypt data
 * @param secretKey - a secret key used for encryption
 * @param data - data to encrypt
 * @return Encrypted data, hex-encoded
 * @throws Exception
 */
public static String cipher(String secretKey, String data) throws Exception {
    // Key has to be of length 8
    if (secretKey == null || secretKey.length() != 8)
        throw new Exception("Invalid key length - 8 bytes key needed!");
    Cipher cipher = Cipher.getInstance(ALGORITHM);
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(secretKey.getBytes(), ALGORITHM));
    return toHex(cipher.doFinal(data.getBytes()));
}
/**
 * Decrypt data
 * @param secretKey - a secret key used for decryption
 * @param data - hex-encoded data to decrypt
 * @return Decrypted data
 * @throws Exception
 */
public static String decipher(String secretKey, String data) throws Exception {
    // Key has to be of length 8
    if (secretKey == null || secretKey.length() != 8)
        throw new Exception("Invalid key length - 8 bytes key needed!");
    Cipher cipher = Cipher.getInstance(ALGORITHM);
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(secretKey.getBytes(), ALGORITHM));
    return new String(cipher.doFinal(fromHex(data)));
}
// Helper methods: hex encoding and decoding
private static String toHex(byte[] buf) {
    StringBuilder result = new StringBuilder(2 * buf.length);
    for (byte b : buf)
        result.append(HEX.charAt((b >> 4) & 0x0F)).append(HEX.charAt(b & 0x0F));
    return result.toString();
}
private static byte[] fromHex(String hex) {
    byte[] result = new byte[hex.length() / 2];
    for (int i = 0; i < result.length; i++)
        result[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    return result;
}
public static void main(String[] args) {
    try {
        String secretKey = "01234567";
        String data = "test";
        String encrypted = cipher(secretKey, data);
        String decrypted = decipher(secretKey, encrypted);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
/**
 * Usage:
 * <pre>
 * String crypto = SimpleCrypto.encrypt(masterpassword, cleartext)
 * ...
 * String cleartext = SimpleCrypto.decrypt(masterpassword, crypto)
 * </pre>
 * @author ferenc.hechler
 */
public class AES {
    // Derive a 128-bit AES key from the given seed
    private static byte[] getRawKey(byte[] seed) throws Exception {
        KeyGenerator kgen = KeyGenerator.getInstance("AES");
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        sr.setSeed(seed);
        kgen.init(128, sr);
        SecretKey skey = kgen.generateKey();
        return skey.getEncoded();
    }
    // Read the input file, run it through the cipher, and write the output file
    private static File process(int mode, String seed, File in, File out) {
        try {
            byte[] buf = new byte[(int) in.length()];
            DataInputStream dis = new DataInputStream(new FileInputStream(in));
            dis.readFully(buf);
            dis.close();
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(mode, new SecretKeySpec(getRawKey(seed.getBytes()), "AES"));
            FileOutputStream fos = new FileOutputStream(out);
            fos.write(cipher.doFinal(buf));
            fos.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return out;
    }
    public static File encrypt(String seed, File in) {
        // The ".enc"/".dec" output names are illustrative
        return process(Cipher.ENCRYPT_MODE, seed, in, new File(in.getPath() + ".enc"));
    }
    public static File decrypt(String seed, File in) {
        return process(Cipher.DECRYPT_MODE, seed, in, new File(in.getPath() + ".dec"));
    }
}
6.3 SCREENSHOTS
Cloud server
DATA OWNER
UPLOAD FILES
SENDING ENCRYPTION AND SECRET KEY TO CLOUD
DOWNLOADING FILES
CHAPTER 7
CONCLUSION AND FUTURE ENHANCEMENT
7.1 CONCLUSION
As graph data such as social networks and biological networks continue to grow,
current SSE schemes cannot fully satisfy queries over graph data. In the future,
we can study more complex search directions, such as matrix queries, graph
adjacency queries, and so on. With the rise of Network Function Virtualization
(NFV) technology, some studies have applied searchable encryption technology to
the middlebox. Due to the excellent characteristics of the blockchain, combining
the blockchain with searchable encryption is also a research direction. In
summary, this paper presents an efficient SSE scheme based on a B+-Tree and a
counting Bloom filter (CBF) which supports secure verification, dynamic updating,
and multi-user queries. Thanks to the CBF, both the server and the user need only
O(1) complexity to verify whether a token exists. Our CBF also supports efficient
updating. When the token exists, we compose the authenticator by encrypting and
signing the root and timestamp of the B+-Tree. Users can check the integrity of
the results returned by the server through the authenticator. Finally, we
implement our scheme and conduct comprehensive experiments to evaluate it. The
results are reasonable and consistent with our performance analysis, and the time
cost is stable and very low when the queried token does not exist, thanks to the
CBF.
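To illustrate why CBF membership tests, insertions, and deletions all take O(1) time for a fixed number of hash functions, here is a minimal counting Bloom filter sketch. The hashing scheme and class name are our own simplification, not the scheme's actual construction:

```java
public class CountingBloomFilter {
    private final int[] counters;   // counters instead of bits, so deletion works
    private final int numHashes;

    public CountingBloomFilter(int size, int numHashes) {
        this.counters = new int[size];
        this.numHashes = numHashes;
    }

    // Derive the i-th counter position for an element (illustrative hash mixing)
    private int position(String element, int i) {
        int h = element.hashCode() * 31 + i * 0x9E3779B9;
        return Math.floorMod(h, counters.length);
    }

    public void add(String element) {             // O(k) = O(1) for fixed k hashes
        for (int i = 0; i < numHashes; i++) counters[position(element, i)]++;
    }

    public void remove(String element) {          // deletion just decrements counters
        for (int i = 0; i < numHashes; i++) counters[position(element, i)]--;
    }

    public boolean mightContain(String element) { // false positives possible, no false negatives
        for (int i = 0; i < numHashes; i++)
            if (counters[position(element, i)] == 0) return false;
        return true;
    }
}
```

Because a non-existent token almost always hits a zero counter, the filter rejects it after at most k counter lookups, which is why the query cost stays low and stable when the token does not exist.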
7.2 FUTURE ENHANCEMENT
In the future, we can study more complex search directions, such as matrix
queries, graph adjacency queries, and so on. With the rise of Network Function
Virtualization (NFV) technology, some studies have applied searchable encryption
technology to the middlebox, which is another promising direction.
CHAPTER 8
REFERENCES
[1] J. Bethencourt, A. Sahai, and B. Waters. Ciphertext-policy attribute-based
encryption. In IEEE Symposium on Security and Privacy, pages 321–334, 2007.
[2] Tadjer. What is cloud computing? ACM, 51:9–11, 2011.
[3] Arun K. Iyengar. Enhanced clients for data stores and cloud services. IEEE
TKDE, 31:1969–1983, 2019.
[4] Jiyi Wu, Lingdi Ping, Xiaoping Ge, Ya Wang, and Jingqing Fu. Cloud storage
as the infrastructure of cloud computing. In IEEE ICICCI, pages 380–383, 2010.
[5] Li Weng, Laurent Amsaleg, and Teddy Furon. Privacy-preserving outsourced
media search. IEEE TKDE, 28:2738–2751, 2016.
[6] S. Kamara, C. Papamanthou, and T. Roeder. CS2: A semantic cryptographic
cloud storage system. Microsoft Technical Report, 2011.
[7] Kan Yang, Xiaohua Jia, Kui Ren, Bo Zhang, and Ruitao Xie. Effective data
access control for multi-authority cloud storage systems. IEEE TIFS,
8:1790–1799, 2013.
[8] Matthew G. Borlick, Lokesh M. Gupta, and Karl A. Nielsen. Method, system,
and computer program product for distributed storage of data in a heterogeneous
cloud. International Business Machines Corporation, 10:171, 2019.
[9] Hakan Ancin, Xi Chen, Amrit Jassal, Daniel H. Jung, Gregory B. Neustaetter,
Sean H. Puttergill, et al. Systems and methods for facilitating access to
private files using a cloud storage system. Mountain View, CA, page 585, 2019.
[10] Seny Kamara and Kristin Lauter. Cryptographic cloud storage. In LNCS,
pages 136–149, 2010.