PLK - Workshop 3PAR
Workshop
15-16 Nov 2018
The 3PAR way of setting up a Storage Array
Optimal use of resources
"For my five servers I need five 2 TB LUNs with average performance."
"OK, easy! I just create 5 VVs in a RAID 5 CPG striped across all available drives and present the VLUNs to you. Thanks to distributed sparing I don't have to waste drives for sparing."
"Listen, I have two more servers. They also need 2 TB LUNs, but with very high write performance."
"No problem, I just create two more VVs in a RAID 1 CPG striped across the very same drives. These new LUNs have higher write performance."
"I also need to create snapshots of my LUNs."
"Easy, I still have enough space on my set of physical drives to hold your snapshots" - unlike a traditional array, where another set of drives would have to be installed to create a snapshot pool.
[Diagram: 3PAR controller nodes above physical drives 0-6; the same drives carry RAID 5 and RAID 1 chunklets side by side]
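As a rough sketch of the admin's side of this dialogue in the 3PAR CLI (the CPG, volume and host names are made up, and exact options vary by 3PAR OS version):
cli% createcpg -t r5 -ha mag FC_r5        # RAID 5 CPG wide-striped across the available FC drives
cli% createvv -cnt 5 FC_r5 srv_vol 2t     # five 2 TB volumes drawing from that CPG
cli% createvlun srv_vol.0 1 server1       # export the first volume as LUN 1 to host server1
cli% createcpg -t r1 FC_r1                # RAID 1 CPG on the very same drives
cli% createvv -cnt 2 FC_r1 fast_vol 2t    # two volumes with higher write performance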
Terminology 1/2
Chunklets, LD, CPG
• Chunklets - Physical disks (PD) are divided into 1 GiB chunklets. Each chunklet occupies
contiguous space on a physical disk. Chunklets are automatically created by the HPE 3PAR Operating
System and they are used to create Logical Disks (LD). A chunklet is assigned to only one LD.
• Logical Disks (LD) - collection of chunklets arranged as rows of RAID sets. Each RAID set is made
up of chunklets from different physical disks. LDs are pooled together in Common Provisioning Groups
(CPG), which allocate space to virtual volumes.
• Common Provisioning Groups (CPG) - define the LD creation characteristics, such as RAID type,
set size, and disk type, plus a total space warning and limit. A CPG is a virtual pool of LDs that allocates
space to virtual volumes (VV) on demand.
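To see each of these layers on a live system, the 3PAR CLI has one show command per layer (a minimal sketch; output omitted):
cli% showpd -c     # chunklet usage (used, spare, free) per physical disk
cli% showld        # logical disks built from those chunklets
cli% showcpg -d    # CPGs with their LD pools and the volumes drawing from them
cli% showvv        # virtual volumes (use showvlun for their exports to hosts)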
Logical disk types
LDs used by the system
The system sets aside LDs for logging, preserved data, and system administration.
These LDs are multilevel LDs with 3-way mirrors.
• Logging LDs - RAID 10 LDs - temporarily hold data during disk failures and disk replacement.
Created by the system during the initial installation. Depending on the system model, each controller node
in the system has a 20 GiB or 60 GiB logging LD.
• Preserved data LDs - RAID 10 LDs used to hold preserved data.
Created by the system during the initial installation. The size of the preserved data LD is based on the
amount of data cache in the system
• Administration volume LDs - storage space for the admin volume, a single volume (not an LD!) created on each
system during installation. Used to store system administrative data such as the system event log.
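These system LDs can be listed by name pattern with showld; the patterns below (log*, pdsld*, admin*) reflect typical LD names and are an assumption to verify on your system:
cli% showld log*      # logging LDs (typically one per controller node)
cli% showld pdsld*    # preserved data LDs (assumed name pattern)
cli% showld admin*    # LDs backing the admin volume (assumed name pattern)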
Terminology 2/2
Virtual Volumes FPVV & TPVV and CPG space allocation
• Virtual Volumes (VV) - draw their resources from CPG and volumes are exported as LUN to hosts.
Virtual volumes are the only data layers visible to the hosts.
• Fully Provisioned Virtual Volume (FPVV) - fixed size and dedicated LDs.
FPVV reserves the entire amount of space required, whether or not the space is actually used.
• Thinly Provisioned Virtual Volume (TPVV) - uses Logical Disks (LD) that belong to a CPG.
TPVVs associated with the same CPG draw space from the LD pool as needed, allocating space on
demand in 16 KiB increments.
When the volumes require additional storage, the HPE 3PAR OS automatically creates additional LDs.
The CPG can grow until it reaches the user-defined growth limit, which restricts the CPG max. size.
These allocations are adaptive, because subsequent allocations are based on the rate of consumption
for previously allocated space. For example, if a TPVV is initially allocated 1 GiB for each node, but
consumes that space in less than 60 seconds, the next allocation becomes 2 GiB for each node
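A minimal CLI sketch of creating the two volume types (CPG and volume names are illustrative):
cli% createvv FC_r5 full_vol 500g                       # FPVV: the full 500 GiB of LD space is reserved up front
cli% createvv -tpvv -snp_cpg FC_r5 FC_r5 thin_vol 2t    # TPVV: space is allocated on demand in 16 KiB increments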
Virtual Volume Types
• User space - contains the user data and is exported as a LUN to the host.
• Snapshot space - contains copies of user data that changed since the previous snapshot.
• Admin space - contains pointers to copies of user data in the snapshot space.
Virtual Volume Types
TPVV – Warnings & Limits
Allocation warning threshold - The user-defined threshold at which the system generates an alert.
This threshold is a percentage of the virtual size of the volume, the size that the volume presents to the
host.
Allocation limit threshold - The user-defined threshold at which writes fail, preventing the volume from
consuming additional resources. This threshold is a percentage of the virtual size of the volume, the size
that the volume presents to the host.
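As an illustration only - the thresholds can be set per volume from the CLI; the flag names below are quoted from memory, so treat them as an assumption and check the setvv help on your 3PAR OS version:
cli% setvv -usr_aw 75 thin_vol    # assumed flag: alert when 75% of the exported size has been allocated
cli% setvv -usr_al 90 thin_vol    # assumed flag: fail writes beyond 90% of the exported size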
The HPE 3PAR storage system supports the following RAID types:
• RAID 0
• RAID 10 (RAID 1)
• RAID 50 (RAID 5)
• RAID MP (Multi-Parity) or RAID 6 (default)
RAID Types – RAID 0
Set Size, Step Size, Raw Size
RAID 0: the set size is always 1.
The number of sets in a row is the row size.
A RAID 1 set can function with the loss of all but one of the chunklets in the set.
Multiple CPGs can be configured and optionally overlap the same drives
• e.g., a system with 200 drives can have one CPG containing all 200 drives and other CPGs with
overlapping subsets of these 200 drives.
Hit Create.
In the Advanced options you can define and select:
• Set Size
• Cage Availability
• Growth Size
CPG & space management
Growth increments, warnings, and limits
• Within CPG we can create Fully Provisioned Virtual Volumes (FPVV) and Thinly Provisioned Virtual
Volumes (TPVV) that draw space from the LD pool on the CPG
• A CPG is configured to grow new LDs automatically when the amount of available LD space falls below a
configured threshold.
• When creating a CPG, set a growth increment, a growth warning, and a growth limit to restrict the
CPG growth and maximum size. By default, the growth warning and growth limit are set to none.
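A minimal sketch of setting the growth values when the CPG is created (sizes are illustrative):
cli% createcpg -t r6 -sdgs 32g -sdgw 2t -sdgl 4t FC_r6
     # -sdgs = growth increment, -sdgw = growth warning, -sdgl = growth limit
cli% setcpg -sdgl 6t FC_r6    # the warning and limit can be adjusted later with setcpg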
CPG & space management
Growth increments
As volumes that draw from a CPG require additional storage, the system automatically creates additional
LDs according to the CPG growth increment. The default and minimum growth increments vary according
to the number of controller nodes in the system.
Full VVs have their own LDs; thin VVs share the same LDs in a CPG.
3PAR Virtualization – the Logical View
With one drive type
3PAR autonomy: physical drives are formatted into 1 GiB chunklets, and Logical Disks (LD) are built from them autonomically.
User initiated: CPG(s), Virtual Volumes and exported LUNs.
[Diagram: CPG A - device type FC, RAID 50, set size 7+1; CPG B - device type FC, RAID 10, set size 2 - both drawing from the same physical drives]
3PAR Virtualization – the Logical View
With three drive types
The same chain (chunklets -> LDs -> CPGs -> Virtual Volumes -> exported LUNs) applies per drive type: SSD, FC and NL.
AO = Adaptive Optimization
Why are Chunklets so Important?
Think of many Virtual Drives on a single Physical Drive
Ease of use and Drive Utilization
• Array managed by policies, not by administrative planning
• The same drives can service all RAID types at the same time
– RAID10
– RAID50 – 2:1 to 8:1
– RAID60 – 4:2*; 6:2; 8:2*; 10:2; 14:2*
• Transparent mobility between drives and RAID types
thanks to Dynamic and Adaptive Optimization (see the CLI sketch after this list)
Performance
• Enables wide-striping across hundreds of drives
• Avoids hot-spots
• Autonomic data restriping after disk installations
High Availability – selectable by CPG
• HA Drive/Magazine - Protect against drive/magazine failure (Industry standard)
• HA Cage - Protect against a cage failure (complete drive Enclosure)
* Preferred, performance-enhanced R6 set sizes
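Dynamic Optimization in practice is a single CLI command that relocates a volume's regions to another CPG (different RAID type or drive type) while the volume stays online; the CPG and volume names below are illustrative, so check the tunevv help for your 3PAR OS version:
cli% tunevv usr_cpg SSD_r5 -f srv_vol.0    # move the volume's user space to the SSD_r5 CPG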
3PAR Virtualization Concept
Drive ownership – Example 1: 2-Node System with 2 drive enclosures
Node0 Node1
• A Physical Drive (PD) is owned by one node
Node 0 owns the odd drives, Node 1 owns the
even drives
Note: if one node fails, the partner node takes ownership of all PDs
3PAR Virtualization Concept
Drive ownership – Example 2: 4-Node system with 8 drive enclosures
Active-active
Set size (ssz): the size of a RAID set. For example, RAID 5 with set size 4 = 3+1; on RAID 6, set size 8 = 6+2.
Step size: stripe size = how often we jump from one chunklet to another.
• Default RAID 1 = 256 KB for FC/NL, 32 KB for SSD
• Default RAID 5 = 128 KB for FC/NL, 64 KB for SSD
• Default RAID 6 = 64 KB for 6+2, 32 KB for 14+2
3PAR Virtualization Concept
Chunklets, Regions, Virtual Volumes and Steps
Default step sizes:
            Fast Class   Nearline   SSD
RAID 1      256 KB       256 KB     32 KB
RAID 50     128 KB       128 KB     64 KB
RAID 60     32-64 KB     32-64 KB   32-64 KB
[Diagram: host IO to a Virtual Volume is mapped in 128 MB regions onto Logical Disks - e.g. an LD built as RAID 50 (7+1, set size 8) and an LD built as RAID 10 (set size 2) - and each LD stripes its data in steps across chunklets on different physical disks]
3PAR Virtualization Concept
Pages
• The HPE 3PAR OS space consolidation features allow you to change the way that virtual volumes
(VV) are mapped to logical disks (LD) in a CPG.
• Moving VV regions from one LD to another enables you to compact the LDs and frees up disk
space to be reclaimed for use by the system.
• What causes CPG space to be in a non-optimal state:
– Volumes are deleted
– Volume copy space grows and then shrinks
• Compact CPG - allows you to reclaim space from a CPG that has become less efficient in space usage from
creating, deleting, and relocating volumes. Compacting consolidates the LD space in a CPG into as few LDs as
possible.
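A minimal sketch of compacting a CPG from the CLI (the CPG name is illustrative):
cli% compactcpg FC_r5    # consolidates the CPG's LD space into as few LDs as possible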
Logical disks and chunklet initialization
• After deleting logical disks, the underlying chunklets must be initialized before their space is
available to build logical disks. The initialization process for chunklets takes about
1 minute per 1 GiB chunklet.
• Deleting a 1 TB volume means about 16 hours of processing to fully reclaim the space.
• Use the showpd -c command to see the progress. Chunklets that are uninitialized are listed in
the Uninit column.
3PAR Thin Technologies
Adaptive Data Reduction (ADR)
Adaptive Data Reduction – collection of technologies designed to reduce your data footprint
There are four stages of ADR that progressively improve the efficiency of data stored on SSDs:
• Zero Detect
• Deduplication
• Compression
• Data Packing
Each feature uses a combination of a 5th generation hardware ASIC, in-memory metadata, and efficient
processor utilization to deliver optimal physical space utilization for hybrid and all-flash systems.
3PAR Adaptive Data Reduction Overview
Zero Detection
Examines all incoming write streams, identifies extended strings of zeros, and removes them to prevent unnecessary
data from being written to storage. It is performed within the HPE 3PAR ASIC: not only do all operations take place
inline and at wire speed, they also consume no CPU cycles, so they do not impact system operations.
Benefits: buy less storage capacity, reduce tech refresh costs, integrated space reclamation.
3PAR Thin Persistence – Manual thin reclaim
[Diagram: zeros written over deleted blocks on LUN 1 and LUN 2 are detected inline and the corresponding chunklets are returned to the free chunklet pool]
• 3PAR Adaptive Data Reduction Technologies work with and optimize all three formats
• VMware recommends EZT for highest performance; see VMware Whitepaper
• vStorage API for Array Integration (VAAI)
− Thin VMotion
− Active Thin Reclamation (introduced with vSphere 5.0)
• VM Space Reclamation
− Using vmkfstools to manually reclaim VMFS deleted blocks; see KB article
− Leverages industry standard T10 UNMAP*
− Supported with VMware vSphere 5.0 and 3PAR OS 3.1.x
− Introduced with vSphere 5.5: esxcli storage vmfs unmap -l <datastore_label>
Automatically reclaim space with UNMAP in Windows
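On the Windows side, whether automatic UNMAP (delete notification) is active can be checked with a built-in command; a minimal sketch:
C:\> fsutil behavior query DisableDeleteNotify    # 0 = delete notifications (UNMAP/TRIM) are enabled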
Deduplication:
• Per CPG
• Removes and references duplicate 16 KiB pages
Compression:
• Per Virtual Volume
• Based on the LZ4 lossless compression algorithm (focused on compression and decompression speed)
[Diagram: writes to the original VVs are hashed by the 3PAR ASIC 1); unique data in cache is stored once in the Dedup Store (DDS)]
1) For more info also see the 3PAR Adaptive Data Reduction whitepaper
Evolution of 3PAR Dedup Volumes (TDVV)
• 3.2.1 MU1 – TDVV1 – First release of Dedup for 7000 and 10000
• 3.2.2 GA – TDVV2 – Introduced on 8000 and 20000 providing defrag of the Dedup Store
• 3.3.1 GA – TDVV3 – Enhanced design with reduced overhead for 7k, 8k, 9k, 10k and 20k
showcpg –d (on a 3.3.1 system)
----Volumes---- -Usage- ---------(MiB)---------- --LD--- -RC_Usage- -Shared-
Id Name Warn% VVs TPVVs TDVVs Usr Snp Base Snp Free Total Usr Snp Usr Snp Version
4 NL_r6 - 0 0 0 0 0 0 0 0 0 0 0 0 0 -
1 SSD_r1 - 0 0 0 0 0 0 0 0 0 0 0 0 0 -
2 SSD_r5 - 9 4 4 5 4 205952 2048 17280 225280 0 2 0 0 2
3 SSD_r6 - 9 4 4 5 4 66432 2048 17536 86016 0 2 0 0 3
0 system_cpg - 1 0 0 1 1 102400 2560 13824 118784 2 2 0 0 -
Express Scan
[Pipeline: write data in cache -> Express Scan -> compression and deduplication by the Intel CPU -> Data Packing -> resulting data written to SSD]
When used together, duplicate pages are removed first and unique pages are then compressed
Compressed data presents a unique problem: after compression, blocks have odd sizes
(e.g. 16 KB pages compress to 2.3 KB, 8.3 KB, 3.2 KB, 5.2 KB, 10.7 KB, 1.1 KB, ...).
Padding each one to a full backend page means lots of wasted space, resulting in lower total system efficiency.
HPE 3PAR Data Packing
16K optimized
[Diagram: several compressed pages (e.g. 2.3 KB, 8.3 KB, 3.2 KB, 1.1 KB) are packed together into a single 16 KB backend page]
• Compression occurs when pages are flushed from cache to the backend SSDs
• Only pages belonging to a single VV are compressed together
• The pages do not need to belong to contiguous addresses
• Pages belonging to different VVs will not be compressed together
• Pages belonging to different snapshots of same base VV also not compressed together
• When data is re-written to a compressed Virtual page, we try to recompress the data into the existing
compressed page (a "refit"). If the new compressed virtual page does not fit into the original page
then the virtual page is written to a new compressed page
• Up to eight 16 KiB cache pages can be compressed into a single 16 KiB compressed page
• The number will depend upon how compressible the data is
3PAR Compression Estimation
Dry run using the CLI – checkvv -compr_dryrun <VV_name>

The -dedup_compr_dryrun option relies on HPE 3PAR Thin Compression and Deduplication technology to
emulate the total amount of space savings by converting one or more input VVs to compressed TDVVs.
Please note that this is an estimation and the results of a conversion may differ from these results.
The command can be run against live production volumes and will generate a non-intrusive background task.
The task may take some time to run, depending on the size and number of volumes, and can be monitored
via the showtask commands.

cs-3par5 cli% checkvv -compr_dryrun Test_VV
cs-3par5 cli% showtask -d 5646
Id   Type               Name    Status Phase Step -------StartTime-------- -------FinishTime------- -Priority- -User--
5648 dedup_compr_dryrun checkvv done   ---   ---  2017-04-19 17:29:29 CEST 2017-04-19 17:32:17 CEST n/a        pmattei

Detailed status:
2017-04-19 17:29:29 CEST Created task.
2017-04-19 17:29:29 CEST Started checkvv space estimation started with option -dedup_compr_dryrun
2017-04-19 17:32:17 CEST Finished checkvv space estimation process finished

           -(User Data)- -(Compression)- -------(Dedup)------- -(DataReduce)-
Id Name    Size(MB)      Size(MB) Ratio  Size(MB) Ratio        Size(MB) Ratio
39 Test_VV 105250        32089    3.28   --       --           32089    3.28
--------------------------------------------------------------------------------------
 1 total   105250        32089    3.28   5293     19.89        2978     35.34
cs-3par5 cli%
The following example shows the effect of allocation unit on dedup ratios. Five TDVVs were created and then
formatted with NTFS file systems using allocation units of 4 KiB–64 KiB. Four copies of the Linux kernel source
tree, which contains many small files, were unpacked into each file system.
3PAR Autonomic Sets
Simplify Provisioning
Example: a cluster of VMware vSphere servers sharing volumes V1-V10.

Traditional Storage - individual volumes:
– Initial provisioning of the cluster
• Requires 50 provisioning actions (1 per host-volume relationship)
– Add another host/server
• Requires 10 provisioning actions (1 per volume)
– Add another volume
• Requires 5 provisioning actions (1 per host)

Autonomic 3PAR Storage - Autonomic Host Set and Volume Set:
– Initial provisioning of the cluster
• Add hosts to the Host Set
• Add volumes to the Volume Set
• Export the Volume Set to the Host Set
– Add another host/server
• Just add the host to the Host Set
– Add another volume
• Just add the volume to the Volume Set
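A minimal CLI sketch of the autonomic-set workflow (set, host and volume names are illustrative; verify the set: syntax and the auto LUN option against your 3PAR OS version):
cli% createhostset esx_cluster esx01 esx02 esx03      # host set with the vSphere hosts
cli% createvvset vmfs_vols vmfs_vol.0 vmfs_vol.1      # volume set with the shared volumes
cli% createvlun set:vmfs_vols auto set:esx_cluster    # one export: the whole volume set to the whole host set
cli% createhostset -add esx_cluster esx04             # add a host: a single action, all volumes follow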
3PAR Distributed and Adaptive
Sparing
HPE 3PAR High Availability - Sparing
Spare Disk Drives vs. Distributed Sparing
Spare chunklets vs. a dedicated spare drive
Each physical disk in a 3PAR array is initialized with data chunklets (DC) and spare chunklets (SC) of 1 GiB each.
[Diagram: spare chunklets distributed across all physical disks instead of one idle spare drive]
The best case: spare chunklets are on the same node as the failed drive with the same availability
characteristics
When spare chunklets matching these criteria are not available, free chunklets with the same
characteristics are considered. During the sparing process, if the number of free chunklets used
exceeds a threshold set in the HPE 3PAR StoreServ OS, consideration will be given to spare
chunklets on another node. This helps keep the array balanced.
HPE 3PAR High Availability – Sparing policy
• Default: ~2.5 percent with minimums. Small configurations (e.g. 8000s with fewer than 48 disks)
have a spare space target equal to the size of 2 of the largest drives of the type being
spared. setsys SparingAlgorithm Default
• Minimal: ~2.5 percent without minimums. Minimal is the same as Default except for small
configurations. When the configuration is small, Minimal will only set aside spare space equal to
one of the largest drives (compared to two for Default) of the type being spared.
• Maximal: one disk's worth in every cage. Spare space is calculated as the space of the largest
drive of the drive type being spared for each disk cage.
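The policy is a system-wide parameter; a minimal sketch of changing and checking it (output omitted):
cli% setsys SparingAlgorithm Minimal    # or Default / Maximal
cli% showsys -param                     # lists the system parameters, including the sparing algorithm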
HA Enclosure (Cage) vs. HA Drive (Magazine)
Tier 1 availability feature in a modular class array
[Diagram: RAID group 1 and RAID group 2 built as raidlet sets across enclosures without compromising data availability]
Note: Each node owns its vertically protected drives
Max RAID-set sizes supporting HA Enclosure (Cage):
Enclosures   HA Cage   Max R5   Max R6
5            OK        4+1      8+2
6            OK        5+1      10+2
7            OK        6+1      10+2
8            OK        7+1      14+2
9            OK        8+1      14+2
[Diagram: 84x0 with 2 or 4 nodes and 8200 with 2 nodes - host ports, cache, 3PAR ASIC and disk ports per node]
HA Enclosure (Cage) on a 4-node system
Max RAID-set sizes supporting HA Enclosure:
Enclosures   HA Cage   Max R5   Max R6
12 (2 x 6)   OK        5+1      10+2
(Nodes 0 and 1 serve one group of enclosures, nodes 2 and 3 the other)
[Diagram: a RAID 50 volume whose rows A1-A6, B1-B6, C1-C6, D1-D6 are spread across cages, with host MPIO paths through a SAN switch to the 3PAR front-end ports 0:0:1, 0:0:2, 1:0:1, 1:0:2 and the RAID sets on the 3PAR back-end]
"Raidlets" groups built across cages (any RAID level) - Write Cache Re-Mirroring on path or controller loss - transparent handling of T10-PI end-to-end data protection
3PAR High Availability
Guaranteed Drive Enclosure (Drive Cage) availability if desired
[Diagram: a RAID 50 volume and a RAID 10 volume with their raidlet groups (A, B, C, D sets) laid out across four drive enclosures (cages)]
Enclosure-independent RAID: "Raidlets" groups for any RAID level built across enclosures - data access is preserved with HA Enclosure (Cage).
Enclosure-dependent RAID: an enclosure (cage) failure might mean no access to data.
User selectable per CPG.
3PAR High Availability
Write Cache Re-Mirroring
3PAR StoreServ
[Diagram: the write cache of each node is mirrored to a partner node (node 0 <-> node 1, node 2 <-> node 3)]
Remote Write
Remote Write Definition: A write I/O request from a host that comes into a node
that differs from the LD owner of the request.
Local Read (data in cache)
Local Read Definition: A read I/O request from a host that comes into a
node that owns the LD of the request.
Read request from the server to node 2 for an LD owned by node 2 (never read from the backup owner of the LD, node 3).
Step 1: read the data in cache.
Step 2: data returned to host, IO complete.
Note: if the data is not in cache it is first read from disk into cache.
Remote Read
Remote Read Definition: A read I/O request from a host that comes into a node
that does not own the LD of the request
Write with the failed Node
A Write I/O request from a host that comes into a failed node
[Diagram: each node contains multi-core processors, a 3PAR Gen4 ASIC, control and data cache, multifunction controllers and PCIe switches]
Node 3 fails and its LDs become owned by node 2.
Write request from the server: the node signals Xfer Ready, the server sends the data, the processor determines which node's LD the write is for, the data is DMAed to node 2 and placed in node 2's write cache, and IO Complete is returned to the host.
Multipathing of traditional arrays vs. 3PAR Persistent Ports
Path loss and controller maintenance or loss behavior
Traditional arrays:
• Require regular maintenance/patching of MPIO
• MPIO path failover is needed during node maintenance and in case of a node failure
3PAR (each port has a native port ID and a guest port ID, e.g. native 0:0:1 / guest 1:0:1, taken over from the partner node):
• An FC path loss is handled by 3PAR Persistent Ports - all server paths stay online
• A controller maintenance or loss is handled by 3PAR Persistent Ports for all protocols - all server paths stay online
• The server will not "see" the swap of the 3PAR port ID, thus no MPIO path failover is required
If a device attempts to log in with the same port WWN (PWWN) as another device on the same switch, you can
configure whether the new login or the existing login takes precedence:
• 0 - First login takes precedence over second login (default behavior)
• 1 - Second login overrides first login
• Check the current setting: configshow -> switch.login.enforce_login: 0
• Change it:
− switchdisable
− configure -> F-Port Login Parameters -> Enforce FLOGI/FDISC login: (0..2) [0] 2 <==
− switchenable
3PAR 8000 and 20000 Software Details
StoreServ license cap*: 8200 - 48; 8400, 8450 - 168; 8440 - 320
Licensed software suites*:
• Replication SW Suite: Virtual Copy (VC), Remote Copy (RC), Peer Persistence (PP), Cluster Extension Windows (CLX)
• Data Optimization SW Suite: Dynamic Optimization, Adaptive Optimization, Peer Motion, Priority Optimization
• Security SW Suite: Virtual Domains, Virtual Lock
• File Persona SW Suite
• Application SW Suite for MS Exchange
• Application SW Suite for MS Hyper-V
• Recovery Manager Central Suite: vSphere, MS SQL, Oracle, SAP HANA, 3PAR File Persona
• Policy Server: Policy Manager Software
• Data Encryption
• Smart SAN for 3PAR

All-inclusive Single-System Software - 3PAR Operating System SW Suite*:
Thin Provisioning, Thin Deduplication for SSD, Thin Persistence, Thin Conversion, Thin Copy Reclamation,
Full Copy, Rapid Provisioning, Adaptive Flash Cache, Persistent Cache, Persistent Ports, Autonomic Groups,
Autonomic Replication Groups, Autonomic Rebalance, System Tuner, System Reporter, Real Time Performance
Monitor, Scheduler, Online Import license (1 year), Host Explorer, Host Personas, Multi Path IO SW,
LDAP Support, Access Guard, Web Services API, SMI-S, SNMP, VSS Provider, 3PARInfo, Management Console,
CLI client, 3PAR OS Administration Tools
3PAR 8000, 9000 and 20000 Software Details
All-inclusive licensing model
All-inclusive Multi-System Software (optional): Remote Copy (RC), Federation, Peer Motion, Peer Persistence, Cluster Extension Windows (CLX)
Policy Server (optional): Policy Manager Software
Data Encryption (optional): requires encrypted drives
3PAR Full Copy V1 – restorable copy (offline copy)
Part of the base 3PAR OS
[Diagram: the Full Copy of the Base Volume remains restorable via an intermediate snapshot]
CLI cmd:
• createvvcopy -p hmax -s -pri high hmax.phy_copy
• Create Snapshot
1. RO Snapshot
2. RW Snapshot
[Diagram: virtual volumes 1-5 drawing snapshot space from their CPGs]
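A minimal CLI sketch of the two snapshot steps (volume and snapshot names are illustrative):
cli% createsv -ro snap_ro.vol1 vol1        # 1. read-only snapshot of the base volume
cli% createsv snap_rw.vol1 snap_ro.vol1    # 2. read-write snapshot taken from the RO snapshot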
3PAR Virtual Copy writes
Copy on Write (CoW) for fully or thinly provisioned Virtual Volumes (FPVV and TPVV)
• Write to a RW Snapshot:
1. Copy the block to the Snapshot CPG
2. Redirect the snapshots
[Diagram: virtual volume blocks 1-5; redirected blocks written into the snapshot space of the CPGs]
Creating a Virtual Copy in the SSMC
[Diagram: a Base Volume of 2 TB with 24, 48, 72, 96, 120, 144 and 168 copies, ~200 GB shown for each]
Data types
Historical data: charts built from data stored in the database for the defined object.
• Histogram
• Performance
• Capacity
Available for most objects.
Real time data: charts built from data collected on an object within the last 5 seconds.
• Performance
Available for:
• Exported volumes
• Physical drive
• Port data
• Port control
Categories - Histogram
Histograms are available for four objects:
• Enclosure port
• Exported volumes
• Host port
• Physical drive
Categories - Performance
Aggregated historical data collected for the performance object selected
• Node cache
• Node CPU
• Cumulative IO Density - CPG
• Cumulative IO Density - AOCPG
• Exported Volumes
• Ports
• Physical Drive
• Remote Copy Links
Categories - Capacity
System Reporter reports on the following capacity objects:
• Virtual Volume
• System
• CPG space
• Physical drive
Templates 1/2 - a key feature within System Reporter
Templates 2/2 - a key feature within System Reporter
Real Time Data Example
System Reporter Data Base .srdata
The .srdata VV is mounted on the non-master node. The size of the .srdata is
Filetype info:
FileType Count TotalUsage(MB) TypeUsed% EarliestDate EndEstimate
--------------------------------------------------------------------------------------------------
aomoves 2 0.0 --- --- ---
baddb 0 0.0 --- --- ---
daily 2 120.6 1.34 2016-07-12 02:00:00 CEST 55 year(s), 142 day(s) from now
hires 7 13231.9 98.14 2017-01-17 17:05:00 CET Full or almost full
hourly 3 2836.1 25.24 2016-07-11 15:00:00 CEST 2 year(s), 71 day(s) from now
ldrg 2709 24577.4 102.08 2017-03-02 06:30:00 CET Full or almost full
perfsample 3 1.5 --- --- ---
srmain 1 0.0 --- --- ---
system 0 0.0 --- --- ---
unused --- 37251.0 46.41 --- ---
SS8400_3_Local cli%
Increase the size of .srdata, backup, export
CLI command to increase the size by 10%:
cli% controlsr grow -pct 10
Thank you