Posted by Jamie Gates - 20 August 2018
In 2005, late on a Saturday night, I slid the first of two upgrade CDs into a SQL Server 2000 box and kicked off an in-place upgrade. It completed normally, and all systems ran without a hitch that Monday.
This was the most straightforward upgrade I’ve ever completed. Admittedly we had a copy of that server on standby, and had done a few practice runs beforehand to be sure it would all work out fine!
Popping the disc in and clicking “go” isn’t how these things usually play out!
More typically, databases are migrated to an entirely new server. On the new server the permissions, agent jobs, connection strings, replication settings, log-shipping, disaster recovery, compatibility, licensing and performance questions all need to be answered! A SQL upgrade can quickly become a complicated set of interconnected projects to manage.
Change is good
Skipping one (or even two) versions of SQL Server is the norm, because the cost of managing an upgrade often outweighs the benefit of a slightly newer version of SQL. Completing SQL upgrades alongside scheduled hardware replacements has become a de facto standard. In this scenario, an organisation is only responding to pain-points: the security vulnerabilities in older versions of SQL, or the end of vendor support for them.
Later versions of SQL do, however, offer a lot of great new features: automatic tuning, row-level security, resumable index rebuilds, and the ability to run on Linux. Several features that were previously only available in the pricier Enterprise edition are now included in Standard edition too. In-memory tables, data compression, change data capture and database snapshots are all now available in Standard.
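To give a flavour of one of those features, a resumable online index rebuild (SQL Server 2017 and later) can be paused and picked up again around maintenance windows rather than running start-to-finish in one go. A minimal sketch, using a hypothetical index and table name:

```sql
-- Start a resumable online rebuild, capped at 60 minutes per run
-- (IX_Orders_CustomerId and dbo.Orders are illustrative names only).
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the rebuild if the maintenance window closes early...
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;

-- ...and resume where it left off in the next window.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;
```

Note that online rebuilds themselves remain an Enterprise edition feature, so check the edition feature matrix before planning around this.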
But wait, there’s more!
Options for using hybrid or cloud platforms are also now available, and increasingly organisations are recognising the benefits offered and are adopting cloud-first policies.
A SQL estate today might consist of legacy systems nearing retirement, on-premises systems going through upgrade or migration planning, some SQL 2016/17 servers on Windows or Linux, and a few fledgling cloud systems. Not to mention Reporting and Analysis Services!
When enterprises respond only to pain-points or evolve only organically, a lot of opportunities can be missed. A published roadmap shared among teams instead ensures that dependencies and interconnections between systems are identified and planned for early in upgrade cycles, and that new features offered by cloud platforms or later versions of SQL are put to their best possible use. It can also be an invaluable tool for spotting opportunities to consolidate multiple databases and systems onto fewer servers, and for containing licensing costs across multiple servers and versions during drawn-out migrations.
A good roadmap should show the plan for moving systems forward from older versions of SQL to current or cloud platforms over a timeline, so that each intermediary stage is visible. It should also remain flexible, responsive to opportunities and unexpected issues. Using a roadmap, each incremental change can be planned in detail and then completed, all while advancing the company towards its mid- and long-term goals.
Jamie Gates is our Database Administrator and SQL Server expert - his ridiculous attention to detail is why we keep him on, as he celebrates his 15th anniversary at Sandfield this year.
Fully managed DBA services for SQL Server
Our team of expert Database Administrators make managing and optimising your SQL Server databases easy. With over 20 years’ experience in database administration, our team look after some of New Zealand’s largest databases, tackling complex problems and providing tailored solutions that make a real difference to their business.