
Gaining value from your investment in SQL Server Enterprise


At Sandfield, we have a wide variety of clients, each with different needs in their business. When designing our systems, we take a holistic approach to the solution - which means we often end up in discussions around provisioning hardware and software to support our applications. One question we face regularly is which SQL Server Edition is right for an organisation - and in particular, whether SQL Server Enterprise is worth the money.

There is one obvious deciding factor between editions, and that’s the built-in hardware limits. But with SQL Server Standard supporting 16 CPU cores and 128GB of RAM, few systems ever push past those generous limits. As such, this doesn’t normally play into our planning - but it is worth being aware of.

So when evaluating a step up from Standard to Enterprise, we need to consider the extra features as a value proposition - the benefits gained through capabilities that would otherwise be difficult or impossible to build ourselves. To make it worthwhile, there needs to be a benefit in development time saved, or industry-leading applications that can leverage these features.

Today, we’d like to outline a few of the Enterprise features we’ve chosen to use, and the business benefits our clients have gained from them.

Change Data Capture (CDC)

Change Data Capture is a feature which lets the database keep track of any changes to data within the database, and write those changes into its own change tables, separate from the main database tables. Microsoft typically positions it as a tool for Data Warehousing, giving you a “firehose”-style feed out to another system for batch processing.
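For readers unfamiliar with the mechanics, turning CDC on is only a few lines of T-SQL. A minimal sketch, where the table name is a placeholder rather than one of our actual schemas:

    -- Enable CDC at the database level (requires sysadmin)
    EXEC sys.sp_cdc_enable_db;

    -- Enable CDC on a single table; net change support needs a primary key on the table
    EXEC sys.sp_cdc_enable_table
        @source_schema        = N'dbo',
        @source_name          = N'Orders',
        @role_name            = NULL,
        @supports_net_changes = 1;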

We don’t use it quite like that, though - we leverage CDC as an asynchronous processing queue. The way CDC is designed lets us capture changes to our transactional tables and build “pre-crunched”, near real-time dashboards for our systems. We’ve built a dashboard framework which collates data from a number of disparate systems into a single dashboard, allowing us to optimise how the data is stored for fast retrieval in our front end.

From a technical perspective, this lets us move what would otherwise have been a heavy-hitting dashboard - one that dragged on CPU and storage performance by reading straight from the transactional tables - to a job that runs every couple of minutes to re-synchronise our dashboard data.

It lets us operate in the SQL Server engine’s preferred and most efficient mode: a single-pass batch update. Rather than writing iterative changes to our dashboards, on each run we collate all changes since the last update and apply them at once - eliminating the chance of dashboard updates causing blocking in the database.
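In practice, that batch read is a query against the CDC functions SQL Server generates for each tracked table. A simplified sketch, again with placeholder names, assuming the capture instance was created with net change support:

    DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);

    -- In the real job, @from_lsn is the high-water mark persisted by the previous run
    SET @from_lsn = sys.fn_cdc_get_min_lsn(N'dbo_Orders');
    SET @to_lsn   = sys.fn_cdc_get_max_lsn();

    -- One net row per changed key since the last run, applied to the dashboard in a single pass
    SELECT *
    FROM cdc.fn_cdc_get_net_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');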

It also gives us an isolation layer between the core transactional tables and those responsible for crunching the data into the dashboard. Since CDC writes all of this information down in its change tables, there’s no need for us to “look back” to the original tables - removing the risk of locking the core tables and guaranteeing a level of performance in our clients’ business-critical tables.

Memory Optimised Tables

At Sandfield, we were early adopters of SQL Server 2014 and Memory Optimised Tables - with a system developed against the Technical Preview that went live on the “Gold Master” release as soon as it was available. When Memory Optimised Tables were first announced, we spotted several opportunities in our projects where they could be leveraged.

Loading entire databases into memory has long been the holy grail. With RAM prices decreasing and software improving, what was once possible only for monied-up tech titans is now within reach for the rest of us.

Despite this, Memory Optimised Tables come with a lot of trade-offs, and you find yourself counting individual bytes on each row, looking for every opportunity to optimise - the prospect of holding tens of millions of rows in RAM really hits home the cost of storage.
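As a rough illustration of what that byte counting looks like, here’s a minimal sketch of a memory-optimised table definition - the table, columns and bucket count are invented for the example, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before it will run:

    CREATE TABLE dbo.ShipmentEvent
    (
        ShipmentEventId BIGINT       NOT NULL,
        ShipmentId      INT          NOT NULL,
        EventCode       TINYINT      NOT NULL,   -- a 1-byte code instead of a descriptive string
        EventTimeUtc    DATETIME2(0) NOT NULL,   -- second precision saves two bytes per row over DATETIME2(7)
        CONSTRAINT PK_ShipmentEvent PRIMARY KEY NONCLUSTERED
            HASH (ShipmentEventId) WITH (BUCKET_COUNT = 33554432)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);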

Within these trade-offs also come opportunities - things that would otherwise be plainly impossible. Because reads and writes are so fast, you get the chance to build specialised tables we refer to as ‘Inflated Indexes’: structures that enable features which would be slow or impossible on disk, but are incredibly fast in memory.

One example of a feature we’ve been working on behind the scenes is a highly optimised partial and fuzzy match engine for what we call “References” to items - Order Numbers, Shipment Numbers, etc. While it’s possible to do these kinds of searches easily with built-in SQL Server features over small data sets, these features don’t scale to the data sets we have on hand.

We call these “Inflated” indexes as they can be 5x-10x (or larger!) the size of the data stored on disk, so we rely on those incredibly fast read/write operations to make this kind of index practical.
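The exact structure we use is our own, but to give a flavour of the general idea - pre-computing suffix fragments of each reference so a ‘contains’ search becomes an index seek rather than a scan - a simplified sketch might look something like this (names, collation and sizes are illustrative only):

    -- One row per suffix of each reference: 'SHP12345' is also stored as 'HP12345', 'P12345', ...
    -- which is why the structure ends up many times larger than the source data
    CREATE TABLE dbo.ReferenceFragment
    (
        Fragment    VARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL,
        ReferenceId INT NOT NULL,
        CONSTRAINT PK_ReferenceFragment PRIMARY KEY NONCLUSTERED (Fragment, ReferenceId)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- A partial-match search then becomes a range seek on the leading edge of the index
    DECLARE @searchTerm VARCHAR(50) = 'P123';

    SELECT DISTINCT ReferenceId
    FROM dbo.ReferenceFragment
    WHERE Fragment LIKE @searchTerm + '%';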

Compression

Compression is one of those features that sounds like it’s there to save space on disk - and in some respects, it is. It also sounds like it’s the kind of thing that comes at a CPU cost - decompressing takes extra CPU cycles, and that’s typically a bad thing.

But it also comes with a counter-intuitive side: enabling compression on your tables can improve your overall performance. One of the main bottlenecks in systems is reading data from the disk for large queries. Compression can help in this instance, because there’s less data to read: since the files are smaller, we can bring them in faster.

This in turn can actually reduce CPU overhead too - one of the most CPU-intensive operations you can measure is reading or writing to disk. The CPU is responsible for shuffling all of that data around, which affects overall system performance. With smaller file sizes there’s less information to move, and a higher chance of retaining more of the database’s contents in cache - reducing lookup times.

Applying compression to improve performance is very much a case-by-case decision, but it’s one of the tools to have on hand for particular kinds of data and workloads. It works especially well where repetition is common - tables which repeat the same values over and over again, such as invoices.
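When we do reach for it, the pattern is usually to estimate the saving first and apply second. A minimal sketch against a hypothetical invoice table:

    -- Estimate how much space page compression would save before committing
    EXEC sp_estimate_data_compression_savings
        @schema_name      = N'dbo',
        @object_name      = N'InvoiceLine',
        @index_id         = NULL,
        @partition_number = NULL,
        @data_compression = N'PAGE';

    -- Apply page compression to the table (heap or clustered index)
    ALTER TABLE dbo.InvoiceLine REBUILD WITH (DATA_COMPRESSION = PAGE);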

Conclusion

When considering which SQL Server Edition is most appropriate for your organisation, you’ll need to weigh the potential benefits of the extra features against the additional licensing cost. Our opinion is that unless you have a good use case for one or more of SQL Server Enterprise’s advanced features, there’s probably no good argument for licensing it - unless you’re especially keen to help Redmond gold-plate their bank vaults.

But if you’re in the market to build transformative systems for your customers, lead your industry, and make it look effortless at scale - SQL Server Enterprise provides a toolset which in the right hands can give you a huge jump-start. If you’d like to get an opinion on which SQL Server Edition would best support your business, feel free to drop us a line.

