By Gowthami | apps-dba.com | Oracle Exadata Series
Oracle Exadata Storage Servers use a layered disk architecture that differs fundamentally from the storage behind a conventional Oracle database. Understanding the relationship between Physical Disks, Cell Disks, Grid Disks, and ASM Disk Groups is essential for Exadata storage management and troubleshooting.
Exadata Storage Hierarchy
| Layer | Description | Management Tool |
|---|---|---|
| Physical Disk | Hard disk or flash drive in the cell | CellCLI / Hardware |
| Cell Disk | Representation of physical disk in Exadata software | CellCLI |
| Grid Disk | Partition of a Cell Disk exposed to ASM | CellCLI + ASM |
| ASM Disk | Grid Disk as seen by ASM | ASMCMD + SQL |
| ASM Disk Group | Collection of ASM Disks (DATA, RECO, etc.) | ASMCMD + SQL |
Exadata X8M Storage Configuration
X8M storage cells combine NVMe flash, Persistent Memory (PMEM), and Hard Disk Drive (HDD) storage, depending on the configuration:
- High Capacity (HC) cells: Large HDDs for bulk capacity, fronted by NVMe flash (Smart Flash Cache) and PMEM modules (PMEM Cache) for low-latency access
- Extreme Flash (EF) cells: All-NVMe flash for ultra-low-latency workloads
- Each cell: Typically 12 HDDs (HC) or 8 NVMe drives (EF), each presented to the Exadata software as a Cell Disk
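As a quick sanity check on cell sizing, the raw capacity per cell is just drive count times drive size. A minimal sketch — the drive counts come from the list above, but the per-drive sizes (14 TB HDD, 6.4 TB NVMe) are assumptions based on common X8M-2 configurations, not stated in this post:

```python
# Rough raw-capacity math for one X8M storage cell.
# Drive counts match the text above; the per-drive sizes
# (14 TB HDD, 6.4 TB NVMe) are assumed typical X8M-2 values.
HC_HDD_COUNT, HC_HDD_TB = 12, 14.0
EF_NVME_COUNT, EF_NVME_TB = 8, 6.4

hc_raw_tb = HC_HDD_COUNT * HC_HDD_TB    # raw TB per HC cell
ef_raw_tb = EF_NVME_COUNT * EF_NVME_TB  # raw TB per EF cell

print(f"HC cell raw: {hc_raw_tb} TB")   # 168.0 TB
print(f"EF cell raw: {ef_raw_tb} TB")   # 51.2 TB
```

Remember this is raw capacity only; usable space after ASM redundancy is considerably lower (see the disk group section below).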
Working with Cell Disks (CellCLI)
-- Connect to storage cell (SSH as celladmin)
ssh celladmin@cell01.example.com
-- List all cell disks
CellCLI> LIST CELLDISK
CD_00_cell01 normal
CD_01_cell01 normal
CD_02_cell01 normal
...
-- Get detailed cell disk info
CellCLI> LIST CELLDISK ATTRIBUTES name, deviceName, size, status, diskType
-- Check cell disk health
CellCLI> LIST CELLDISK WHERE status != 'normal'
Working with Grid Disks (CellCLI)
-- List all grid disks on a cell
CellCLI> LIST GRIDDISK
-- Grid disk naming convention: DISKGROUP_CELLDISK
-- Example: DATA_CD_00_cell01 means:
-- DATA = ASM Disk Group name
-- CD_00 = Cell Disk 0 on this cell
-- cell01 = Storage cell hostname
-- Get grid disk details
CellCLI> LIST GRIDDISK ATTRIBUTES name, asmDiskGroupName, asmDiskName, size, asmModeStatus
-- Sample output:
-- DATA_CD_00_cell01 DATA DATA_CD_00_cell01 4T ONLINE
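The naming convention above is mechanical enough to parse in a script, which is handy when auditing grid disks across many cells. A minimal sketch — the `parse_griddisk` helper and its regex are mine for illustration, not part of any Oracle tool:

```python
import re

# Grid disk names follow DISKGROUP_CD_NN_cellname, e.g. DATA_CD_00_cell01.
# parse_griddisk() is a hypothetical helper, not an Oracle utility.
GRIDDISK_RE = re.compile(r"^(?P<diskgroup>[A-Z0-9]+)_(?P<celldisk>CD_\d{2})_(?P<cell>.+)$")

def parse_griddisk(name: str) -> dict:
    """Split a grid disk name into its disk group, cell disk, and cell parts."""
    m = GRIDDISK_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected grid disk name: {name}")
    return m.groupdict()

print(parse_griddisk("DATA_CD_00_cell01"))
# {'diskgroup': 'DATA', 'celldisk': 'CD_00', 'cell': 'cell01'}
```

The same split works for RECO and other disk groups, since the disk group name is always the prefix before the `CD_NN` component.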
Working with ASM Disk Groups
-- Connect to the ASM instance (from a DB node; the instance name is node-specific, e.g. +ASM1 on node 1)
export ORACLE_SID=+ASM1
sqlplus / as sysasm
-- List ASM disk groups
SELECT group_number, name, type, total_mb, free_mb,
ROUND(free_mb/total_mb*100,1) AS pct_free,
state
FROM v$asm_diskgroup;
-- List ASM disks (Grid Disks exposed to ASM)
SELECT group_number, disk_number, name, path,
total_mb, free_mb, mode_status, state
FROM v$asm_disk
ORDER BY group_number, disk_number;
-- Check for offline or failing disks
SELECT name, path, mode_status, state, mount_status
FROM v$asm_disk
WHERE state != 'NORMAL' OR mode_status != 'ONLINE';
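The pct_free arithmetic from the first query, and the usable-space formula that ASM exposes as USABLE_FILE_MB, can be sketched in plain code. The column names mirror v$asm_diskgroup; the redundancy divisors are the standard 2x for NORMAL and 3x for HIGH mirroring:

```python
# Mirrors the pct_free expression in the disk group query above, plus
# the USABLE_FILE_MB calculation: free space minus the reserve needed
# to survive a disk failure, divided by the mirroring factor.
def pct_free(total_mb: float, free_mb: float) -> float:
    return round(free_mb / total_mb * 100, 1)

REDUNDANCY_FACTOR = {"EXTERN": 1, "NORMAL": 2, "HIGH": 3}

def usable_file_mb(free_mb: float, required_mirror_free_mb: float,
                   redundancy: str = "NORMAL") -> float:
    return (free_mb - required_mirror_free_mb) / REDUNDANCY_FACTOR[redundancy]

# Example: a NORMAL-redundancy DATA disk group (illustrative numbers)
print(pct_free(total_mb=1_000_000, free_mb=250_000))  # 25.0
print(usable_file_mb(250_000, 50_000, "NORMAL"))      # 100000.0
```

This is why a disk group that looks 25% free at the FREE_MB level can offer far less space for new files once mirroring and the failure reserve are accounted for.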
Checking Overall Exadata Storage Health
-- From DB node: check cell connectivity
dcli -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list cell attributes name, status"
-- Check for any disk errors
dcli -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk where status != 'normal'"
-- From ASM: check rebalance status (after disk changes; no rows means no rebalance is running)
SELECT * FROM v$asm_operation;
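v$asm_operation reports progress in allocation units via its SOFAR, EST_WORK, and EST_RATE columns (ASM also publishes its own EST_MINUTES estimate). The remaining-time arithmetic behind that estimate can be sketched as:

```python
# Remaining-time math for an ASM rebalance, using the SOFAR /
# EST_WORK / EST_RATE columns from v$asm_operation
# (units: allocation units, and allocation units per minute).
def rebalance_minutes_left(sofar: int, est_work: int, est_rate: int) -> float:
    if est_rate == 0:
        return float("inf")  # no measured progress yet; estimate unavailable
    return (est_work - sofar) / est_rate

# Illustrative values, not from a real rebalance
print(rebalance_minutes_left(sofar=30_000, est_work=90_000, est_rate=2_000))  # 30.0
```

Note that EST_RATE fluctuates with load and rebalance power, so treat any such estimate as indicative rather than precise.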
Summary
Exadata's storage architecture provides multiple layers of abstraction between physical hardware and the database. DBAs should understand all of them, from the Physical Disks in the cell hardware up to the ASM Disk Groups used by Oracle Database. Regular health checks via CellCLI and the ASM views catch storage issues before they impact database performance.
Oracle Exadata - The Complete Guide
Master Exadata architecture, Smart Scan, Storage Indexes, Cell Disks, and all Exadata management skills with Gowthami's complete guide.