In-Memory OLTP - Part 1

Watch the Video

You can watch a short video on In-Memory OLTP on YouTube. Watch Here


Overview

In-Memory OLTP was introduced in SQL Server 2014 and greatly enhanced in SQL Server 2016. It is a memory-optimized database engine integrated into SQL Server.

It is designed to improve the performance of transaction processing for high data ingestion, high-contention read/write, and low-latency workloads. High data ingestion workloads are, for example, ETL (Extract, Transform, Load) processes. Read/write workloads are workloads with heavy concurrent read and write requirements. Lastly, low-latency workloads are applications that require predictable response times, such as banking transactions.

In-Memory OLTP is available in the following SQL Server products:

  • SQL Server 2014
  • SQL Server 2016
  • SQL Server vNext CTP1.1
  • Azure SQL Database (Premium tier)

Memory-optimized tables

Memory-optimized tables store all of their data in memory, so there is no need to read pages into the buffer cache from disk. Access to them is lock- and latch-free, which provides great performance.

Things to note:

  • Entire table is stored in memory
  • Can store just schema (non-durable data) or data and schema (durable)
  • A second copy is maintained on disk if durable is chosen
  • Data in the table is read from and written to memory
  • Default is durable (SCHEMA_AND_DATA)
  • Row versioning DOES NOT use tempdb
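One way to see that a table's rows live entirely in memory is to query the sys.dm_db_xtp_table_memory_stats DMV, which reports memory consumption per memory-optimized table. A minimal sketch:

```sql
-- Report memory consumed by each memory-optimized table in the current database.
-- sys.dm_db_xtp_table_memory_stats is available in SQL Server 2014 and later.
SELECT
    OBJECT_NAME(object_id)        AS table_name,
    memory_allocated_for_table_kb,
    memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0;  -- exclude internal system allocations
```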

Durable vs Non-Durable

In memory-optimized tables there are two durability options that you can choose from when creating a table. The comparison below is a point of reference for understanding the difference between durable and non-durable memory-optimized tables.

Durable (DURABILITY=SCHEMA_AND_DATA)
Transactions are written to memory and also to the transaction log.
  • Choose this when you need to recover data after a server failure or failover.

Non-Durable (DURABILITY=SCHEMA_ONLY)
Transactions are written to memory but not written to the transaction log.
  • Choose this when the data is stored in another source and recovery is not needed (e.g., an ETL process).
  • Choose this when you want to eliminate all log I/O operations.
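As an illustration, a non-durable staging table might be declared like this (the table and column names here are hypothetical):

```sql
-- SCHEMA_ONLY: the table definition survives a restart, but the rows do not,
-- so inserts generate no transaction log I/O.
CREATE TABLE dbo.tbl_etl_staging (
    [id]      INT          NOT NULL PRIMARY KEY NONCLUSTERED,
    [payload] VARCHAR(100) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```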

Querying In-Memory Optimized Tables

Microsoft recommends that the best way to benefit from memory-optimized tables is to access them through natively compiled stored procedures. These stored procedures are compiled to native machine code when they are created, so no compilation or interpretation is needed at execution time. Below is an example of a natively compiled stored procedure on the 'dbo.Employee' table in the 'employeedb' database.
USE [employeedb]
GO
-- SQL Server 2016 SP1 and later support CREATE OR ALTER statements
-- On earlier versions, replace it with CREATE or ALTER
CREATE OR ALTER PROCEDURE [dbo].[usp_UpdateEmployeeSalary]
	WITH NATIVE_COMPILATION,
	SCHEMABINDING,
	EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH ( TRANSACTION ISOLATION LEVEL=SNAPSHOT,LANGUAGE = N'us_english')
	UPDATE [dbo].[Employee]
		SET Salary = Salary + (Salary * 0.05)
END;

What's New in SQL Server 2016 and In-Memory OLTP

There were some limitations with In-Memory OLTP when it was first introduced in SQL Server 2014. In SQL Server 2016, Microsoft invested time in eliminating some of those limitations. The list below shows the changes made to In-Memory OLTP in SQL Server 2016.
  • Recommended maximum memory increased from 256 GB to 2 TB
  • All collations are now supported
  • ALTERing a memory-optimized table after it is created is supported
  • Parallel execution plans on memory-optimized tables
  • TDE - Transparent Data Encryption
  • (max) data types
  • Indexes with NULLable columns
  • DML triggers

Demo
Part 1 of our demonstration will cover creating an In-Memory OLTP database, creating a memory-optimized table, and inserting data into it. We will also explore the performance improvements of natively compiled code.

Part 2, which will be in another post, is a demonstration that will explore altering memory-optimized tables and indexing on memory-optimized tables.

Step 1 - Create Memory OLTP Database

In this step we will create a database that will contain memory-optimized tables. It will be created with one MEMORY_OPTIMIZED_DATA filegroup. This filegroup stores the checkpoint data and delta file pairs that SQL Server uses to recover memory-optimized data after a failure.

USE [master]
GO
CREATE DATABASE [memexampledb]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'memexampledb', FILENAME = N'C:\SQLDB\memexampledb.mdf'), 
 FILEGROUP [FG_MEMOP_EXAMPLE] CONTAINS MEMORY_OPTIMIZED_DATA  DEFAULT
( NAME = N'memexampledb_MEMOP_Data', FILENAME = N'C:\SQLDB\memexampledb_MEMOP_Data')
 LOG ON 
( NAME = N'memexampledb_log', FILENAME = N'C:\SQLDB\memexampledb_log.ldf')
GO
					

Step 2 - Create Memory-optimized Table

In this step, we will create two tables. The first table, tbl_disktable, will be created on disk. Our second table, tbl_inmemory, will be created on the MEMORY_OPTIMIZED_DATA filegroup we created in Step 1.

Notice in the T-SQL statement for the tbl_inmemory table that two additional options are passed. WITH (MEMORY_OPTIMIZED=ON) indicates that the table will be created in the memory-optimized filegroup.
The second option, DURABILITY=SCHEMA_AND_DATA, indicates that the table will be durable.

USE [memexampledb]
GO
-- Table on Disk
CREATE TABLE tbl_disktable ([id] [int] NOT NULL primary key nonclustered,
                        [name] [varchar](50) NULL,
                        [salary] [int] NULL,
                        [indicator] [int] NULL);

-- Table in Memory                     
CREATE TABLE tbl_inmemory ([id] [int] NOT NULL primary key nonclustered,
                        [name] [varchar](50) NULL,
                        [salary] [int] NULL,
                        [indicator] [int] NULL) WITH (
				MEMORY_OPTIMIZED=ON
				, DURABILITY=SCHEMA_AND_DATA);
					

Step 3 - Insert Data

In this step, we will insert data into both tables. This is sample data for the demonstration.
Notice the DELETE statement in the code. In SQL Server 2014 and SQL Server 2016 you cannot TRUNCATE a memory-optimized table. To ensure no data exists in the table for this demonstration, or whenever you want to delete all the data in such a table (e.g., in an ETL process), you must use DELETE instead of TRUNCATE.

use [memexampledb]
go
truncate table tbl_disktable
go
-- Can not truncate a Memory Optimized Table
delete from tbl_inmemory
go
declare @pdata int=1
while @pdata <= 5000
begin  
   insert into tbl_disktable(id,name,salary,indicator) 
       values (@pdata,'thomasliddledba-' + cast(@pdata as varchar(1000)),(@pdata - 1)
			,(@pdata % 6))
   select @pdata=@pdata+1
end
go
declare @pdata int=1
while @pdata <= 5000
begin  
   insert into tbl_inmemory(id,name,salary,indicator) 
       values (@pdata,'thomasliddledba-' + cast(@pdata as varchar(1000)),(@pdata - 1)
			,(@pdata % 6))
   select @pdata=@pdata+1
end
go					

Step 4 - Create a Natively Compiled Stored Procedure

For our demonstration we will need to create a natively compiled stored procedure to test the performance difference between the disk-based table and the memory-optimized table.
This compiled stored procedure will be a SELECT statement for row id 2500.
-- If you have SQL Server 2016 SP 1 this code will execute successfully
-- If you do not have SQL Server 2016 SP1 replace the CREATE OR ALTER with CREATE
CREATE OR ALTER PROCEDURE [dbo].[usp_tbl_inmemory_compile]
	WITH NATIVE_COMPILATION,
	SCHEMABINDING,
	EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH ( TRANSACTION ISOLATION LEVEL=SNAPSHOT,LANGUAGE = N'us_english')
	SELECT 
		id
		,name
		,salary
		,indicator
	FROM dbo.tbl_inmemory WHERE [id]=2500
END;	
GO
EXEC dbo.usp_tbl_inmemory_compile					

Step 5 - Testing Performance

In the last step, we will start a Profiler trace and execute the following code to compare the performance of our memory-optimized table against our disk-based table. You will need to start a Profiler trace with the following events:
  • Stored Procedure-RPC:Completed
  • TSQL-SQL:BatchCompleted
You will need to choose the following columns:
  • TextData
  • CPU
  • Reads
  • Writes
  • Duration
  • SPID
  • StartTime
  • EndTime
Profiler Screenshot
With our Profiler trace running, execute the following code and analyze the results.
use [memexampledb]
go
SELECT * FROM tbl_disktable WHERE [id]=2500
GO
exec dbo.usp_tbl_inmemory_compile
go
SELECT * FROM tbl_inmemory  WHERE [id]=2500
GO				

After we run our code, here are the results (see screenshot below). Both the compiled and the inline T-SQL code against our memory-optimized table used almost NO resources to execute. Our disk-based table had a duration of 1 ms and 3 reads. For a highly transactional system, a memory-optimized table would be a perfect candidate.

Profiler Screenshot
Our next test will use a larger result set of 2,500 rows. First we will need to ALTER our compiled stored procedure. We will modify the predicate from "=" to ">=".
-- If you have SQL Server 2016 SP 1 this code will execute successfully
-- If you do not have SQL Server 2016 SP1 replace the CREATE OR ALTER with CREATE
CREATE OR ALTER PROCEDURE [dbo].[usp_tbl_inmemory_compile]
	WITH NATIVE_COMPILATION,
	SCHEMABINDING,
	EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH ( TRANSACTION ISOLATION LEVEL=SNAPSHOT,LANGUAGE = N'us_english')
	SELECT 
		id
		,name
		,salary
		,indicator
	FROM dbo.tbl_inmemory WHERE [id]>=2500  --This line changes from "=" to ">="
END;	
GO
EXEC dbo.usp_tbl_inmemory_compile	

Now we need to change the predicates in our previous test from "=" to ">=" and execute the test.

use [memexampledb]
go
SELECT * FROM tbl_disktable WHERE [id]>=2500 -- This line changes "=" to ">="
GO
exec dbo.usp_tbl_inmemory_compile
go
SELECT * FROM tbl_inmemory  WHERE [id]>=2500 -- This line changes "=" to ">="
GO				
You will see that with larger data sets our memory-optimized table performed better than both our disk-based table AND our inline T-SQL. There could be a number of reasons why the inline T-SQL statement against our memory-optimized table did not perform as well in this test. Either way, the results indicate what we expected: memory-optimized tables perform much faster than disk-based tables, especially when accessed through natively compiled stored procedures.

Profiler Screenshot

Conclusion

In this article, we learned what In-Memory OLTP and memory-optimized tables are, how to create and configure them, and saw the performance benefits of memory-optimized tables over disk-based tables.

Part 2 of our journey will discuss altering memory-optimized tables and indexing on memory-optimized tables. Check back for more fun and Part 2.