

SQL Server Database Private Cloud Deep Dive

09.16.2012

Presented by SQL guru Danny Tambs, the session covers a whitepaper produced by HP and Microsoft and focuses specifically on the DBC (Database Consolidation) Reference Architecture (appliance/recipe).  The following is a summary of the session:

  • Reference Whitepaper
  • Benefits
  • Architecture drilldown
    • Components
    • Manageability
    • Achieving high availability
    • Metering and chargeback

Consolidation

Organisations run a mix of hardware profiles, ages and versions.  Silos and data formats create disparity across organisations and within them.  This becomes costly to maintain, to upgrade regularly, and to find experienced staff to support.  Older hardware is typically more expensive to run.

Private Cloud

NIST (US) put together a loose set of definitions of what to expect from a “private cloud”.  Capabilities include elasticity, resource pooling, self-service, and the ability to control and customise.  Deployment times are much shorter (hours rather than weeks).

MS/HP DBC Reference Architecture

  • Complete – factory built, from virtualization to resource pooling, with consulting and support
  • Optimise – Central console, tuned for SQL Server workload, migrate with near zero downtime
  • Agile – Provision on demand, meter and chargeback usage, modular for scale as you go

Vision

  • Box product (buy, install)
  • Appliances/Reference Architectures (recipes, buy premade or build your own)
  • The Cloud (SQL Azure)

Here is the throughput you are aiming to get.  How do you ensure you have enough I/O?
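
As a rough illustration of that I/O question – not from the session, and with all server names and figures invented – a back-of-envelope sizing sketch in Python might sum per-instance IOPS and apply a RAID write penalty:

```python
# Hypothetical back-of-envelope I/O sizing for a consolidation target.
# The instance figures, RAID-10 write penalty of 2 and 30% headroom are
# illustrative assumptions, not numbers from the HP/Microsoft whitepaper.

instances = [
    # (name, read IOPS, write IOPS) measured at peak on each source server
    ("erp-db01", 800, 300),
    ("crm-db02", 450, 150),
    ("intranet-db03", 120, 60),
]

RAID10_WRITE_PENALTY = 2  # each logical write costs two disk writes on RAID 10
HEADROOM = 1.3            # 30% growth/burst headroom

def required_backend_iops(workloads, write_penalty, headroom):
    reads = sum(r for _, r, _ in workloads)
    writes = sum(w for _, _, w in workloads)
    return (reads + writes * write_penalty) * headroom

print(f"Backend IOPS to provision: "
      f"{required_backend_iops(instances, RAID10_WRITE_PENALTY, HEADROOM):,.0f}")
```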

DBC Platform

Major benefits include reduced configuration steps – the appliances are prebuilt (representing thousands of man hours) and can meet the operational requirements.

  • Reduces power consumption
  • Reduces the complexity to manage
  • Retires older hardware and consolidates

Not particularly recommended for databases over 1 TB; better suited to smaller, gigabyte-sized database solutions.  It appears to be targeted at consolidating hundreds or thousands of disparate servers into a managed virtual environment.  Additional benefits include skillset changes: DBAs can better manage applications, not just services.
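
To make the “gigabyte-sized databases” guidance concrete, here is a small illustrative sketch (the inventory below is invented) that separates consolidation candidates from databases over the 1 TB mark:

```python
# Illustrative only: split a database inventory against the ~1 TB guidance above.
ONE_TB_GB = 1024

inventory_gb = {
    "payroll": 40,
    "warehouse": 2200,   # > 1 TB, better left on dedicated hardware
    "helpdesk": 15,
    "reporting": 600,
}

candidates = {db: size for db, size in inventory_gb.items() if size < ONE_TB_GB}
excluded   = {db: size for db, size in inventory_gb.items() if size >= ONE_TB_GB}

print("Consolidate:", sorted(candidates))
print("Review separately:", sorted(excluded))
```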

Deep Dive: DBC Plug and Play

Base configuration: a very basic configuration with minimal disk, I/O, CPU and memory.


Software stack


Of course you can use your own monitoring and other software packages, but they must be certified for Windows Server.  Many anti-virus packages, for example, have caused blue screens because they were not certified to work with clusters.

Hardware Profile Configuration


Built-in reliability and HA features – RAID controllers, spare disks, etc.  The software to manage the rack is built in; no extra software is required.

DBC Common Usage Scenario

Storage is composed of ‘hardware blocks’ of storage.  iSCSI storage cabling is redundant so that a missing cable doesn’t drop iSCSI availability (following best practice).


Networking – HP ProCurve switches (10 Gb) with redundancy, including iSCSI storage traffic connectivity.  Traffic is partitioned into general network and storage.


The baseline sizing used for the hardware and the solution: a single-rack configuration is optimized for a balanced mix of 200 database instances.  There are tools available to determine socket-to-vCPU translation in the virtual world.

Size depends on workloads, so you need to baseline against your own specific workload needs.  In a four-blade configuration, the majority of the management software sits on Blade 1 and consumes less than 20% of that blade’s capacity.
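
The socket-to-vCPU translation mentioned above is handled by vendor tools; purely as a hedged sketch of the underlying arithmetic (the host list, utilisation figures and 25% headroom factor are assumptions, not guidance from the whitepaper):

```python
# Hypothetical socket/core to vCPU translation - not a vendor sizing tool.
# Assumes peak CPU utilisation has been measured on each source host.

def vcpus_needed(physical_cores, peak_util_pct, headroom=1.25):
    """Estimate vCPUs to reserve for one consolidated source host."""
    busy_cores = physical_cores * (peak_util_pct / 100.0)
    return max(1, round(busy_cores * headroom))

source_hosts = [
    # (host, physical cores, peak CPU %) - invented example data
    ("erp-db01", 16, 35),
    ("crm-db02", 8, 60),
    ("intranet-db03", 4, 20),
]

total = sum(vcpus_needed(cores, util) for _, cores, util in source_hosts)
print(f"Approximate vCPUs required across blades: {total}")
```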

Supports virtualized clusters for reduced downtime and higher availability.  How do you solve disaster recovery scenarios?  What’s your bottleneck?

Why isn’t this solved by hardware?  It is important to note that this is a consolidation reference, not an HA reference.  There are options though: database mirroring, etc.

Do you need HA?  Sometimes no.  Do you need 1 minute failover? Not necessarily.
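
As an aside on the database mirroring option mentioned above, here is a minimal sketch of what establishing a mirroring partnership could look like when scripted from Python with pyodbc.  The server names, database name, endpoint port and connection string are assumptions, and the mirroring endpoints, log backups/restores and permissions are presumed to already be in place:

```python
# Minimal, illustrative database-mirroring setup driven via pyodbc.
# Assumes the database has been restored WITH NORECOVERY on the mirror,
# mirroring endpoints exist on both instances, and service accounts have
# CONNECT permission. Server names, port and driver are hypothetical.
import pyodbc

DB = "SalesDB"
PRINCIPAL = "principal-sql.contoso.local"
MIRROR = "mirror-sql.contoso.local"
PORT = 5022

def run(server, sql):
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
        "Trusted_Connection=yes;", autocommit=True)
    try:
        conn.cursor().execute(sql)
    finally:
        conn.close()

# Point the mirror at the principal first, then the principal at the mirror.
run(MIRROR,    f"ALTER DATABASE [{DB}] SET PARTNER = 'TCP://{PRINCIPAL}:{PORT}'")
run(PRINCIPAL, f"ALTER DATABASE [{DB}] SET PARTNER = 'TCP://{MIRROR}:{PORT}'")
```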

Linking to the customer network – Link Aggregation Control Protocol (LACP).  Twin switches with built-in redundancy.
The only forced downtime is for upgrading/flashing firmware; the workload can automatically fail over or be live migrated before the upgrade.

Performance Figures


No caching!  24/7 full uptime capability, baselined against true random I/O reads.   Want more? Windows Server 2012 brings a lot to the table:


1 million IOPS possible in a virtual environment. 
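
On the “true random I/O reads” baseline: a dedicated benchmarking tool is the right way to measure this, but as a hedged illustration of what a random-read probe looks like (the test file path, block size and duration below are assumptions):

```python
# Rough random-read IOPS probe, illustrative only - a dedicated benchmarking
# tool is what you would really use. The test file path, 8 KB block size and
# 5-second duration are assumptions. OS file caching will inflate the result
# unless the file is much larger than RAM.
import os, random, time

TEST_FILE = r"D:\iotest\testfile.dat"   # hypothetical pre-created large file
BLOCK = 8 * 1024                        # SQL Server page-sized reads
DURATION = 5.0

size = os.path.getsize(TEST_FILE)
ops = 0
with open(TEST_FILE, "rb", buffering=0) as f:
    end = time.monotonic() + DURATION
    while time.monotonic() < end:
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
        ops += 1

print(f"~{ops / DURATION:,.0f} random 8 KB reads/second")
```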

Toolkits

The MAP (Microsoft Assessment and Planning) toolkit will profile existing SQL Server instances, categorize them and build a catalog of your SQL Server estate.  It helps build a consolidation strategy: it captures I/O and performance information and can advise what kind of VM profile is required to consolidate.  It is recommended to run the tool over a few weeks, especially during high-load periods.
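
MAP gathers this data for you; purely as an illustration of the kind of counters worth watching over those weeks, here is a hedged sketch using the third-party psutil package (the sampling interval, duration and output file are arbitrary choices, not part of MAP):

```python
# Illustrative counter sampling only - the MAP toolkit gathers far richer data.
# Requires the third-party psutil package; interval/duration are arbitrary.
import csv, time
import psutil

INTERVAL = 60          # seconds between samples
SAMPLES = 60 * 24      # roughly one day of once-a-minute samples

with open("baseline.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "cpu_pct", "read_iops", "write_iops"])
    prev = psutil.disk_io_counters()
    for _ in range(SAMPLES):
        time.sleep(INTERVAL)
        cur = psutil.disk_io_counters()
        writer.writerow([
            time.strftime("%Y-%m-%d %H:%M:%S"),
            psutil.cpu_percent(),
            (cur.read_count - prev.read_count) / INTERVAL,
            (cur.write_count - prev.write_count) / INTERVAL,
        ])
        prev = cur
```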

Support

A range of HP and Microsoft support is available.

Published at DZone with permission of Rob Sanders, author and DZone MVB.
