{"id":3549,"date":"2025-05-23T07:55:59","date_gmt":"2025-05-23T07:55:59","guid":{"rendered":"https:\/\/www.infobip.com\/developers\/?p=3549"},"modified":"2025-06-10T12:33:38","modified_gmt":"2025-06-10T12:33:38","slug":"delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","status":"publish","type":"post","link":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","title":{"rendered":"Delete without DELETE. Smarter strategies for removing high-volume, short-lived data"},"content":{"rendered":"\n<p>More often than not, we have some kind of logging table with short-lived data that should be deleted after some time, usually after a couple of hours or so. \u00a0<\/p>\n\n\n\n<p>Moreover, the insert rate in such tables can be quite high, with thousands of new rows getting inserted every second.&nbsp;<\/p>\n\n\n\n<p>To keep our system tidy, we regularly delete old rows from such tables.&nbsp;<\/p>\n\n\n\n<p>But that\u2019s where things get tricky.&nbsp;<\/p>\n\n\n\n<p>During that process, we are also faced with different issues to consider, some of them being:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Delete is an intensive DML (Data Manipulation Language) operation, putting additional pressure on the processor, memory, and storage resources. Old rows are usually deleted continuously, making deletion a constant background task.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Various relational database systems often deal with deletes in the form of soft deletes. In other words, a DELETE operation flags rows for removal, while the background process actually deletes them asynchronously. <span style=\"box-sizing: border-box; margin: 0px; padding: 0px;\">The asynchronous process is usually single-threaded, and quite often, it can&#8217;t deal with the sheer number of deleted rows in data-intensive environments<\/span>. 
This leads to bloated storage filled with so-called &#8220;ghost records.&#8221;&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><span style=\"box-sizing: border-box; margin: 0px; padding: 0px;\">For those who want to know more about this process in SQL Server, here is the&nbsp;<a href=\"https:\/\/learn.microsoft.com\/en-us\/sql\/relational-databases\/ghost-record-cleanup-process-guide?view=sql-server-ver16\" target=\"_blank\">MS article<\/a>.<\/span>&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Whenever we execute a delete statement, we inevitably acquire some kind of lock\u2014row, range, or table\u2014and none of them is exactly ideal in a system that thrives on speed and responsiveness. So we all love locks, right?&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>Let\u2019s now explore some smarter, more efficient alternatives.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Partitioning intro<\/h2>\n\n\n\n<p>Partitioning is a technique for dividing a large table into smaller pieces that are <em>more manageable<\/em> than the whole table.&nbsp;<\/p>\n\n\n\n<p>While you can still refer to the original table as you usually do, partitioning offers different options in daily work, like:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>efficient querying,&nbsp;<\/li>\n\n\n\n<li>improved data ingestion,&nbsp;<\/li>\n\n\n\n<li>easier execution of administrative tasks.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>One neat feature is the <em>easy removal<\/em> of one or more partitions from the base table, instantly, using DDL (Data Definition Language) commands.&nbsp;<\/p>\n\n\n\n<p>Three key elements to set up partitioning are:&nbsp;<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Partition function&nbsp;<\/strong><br>This object defines the key used for partitioning.&nbsp;<\/li>\n\n\n\n<li><strong>Partition scheme&nbsp;<br><\/strong>This object defines filegroups (groups of data) to which the aforementioned partition function will 
be applied.&nbsp;<\/li>\n\n\n\n<li><strong>Partitioned table&nbsp;<br><\/strong>This table is assigned to store its data according to the partition scheme.&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>Without further ado, let&#8217;s now move on to the&#8230;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Example<\/h2>\n\n\n\n<p>Let&#8217;s shed a bit of light on partitioning with the following example:&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Starting point<\/strong>&nbsp;<\/h3>\n\n\n\n<p>Let&#8217;s suppose that we have a logging table in our system. We want to keep just three or so hours of logged data in it. Everything older than that should be deleted. This table looks somewhat like this:&nbsp;<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>CREATE TABLE [dbo].[Logs]\n(\n    [Id] [BIGINT] NOT NULL IDENTITY(1, 1),\n    [LogTime] DATETIME NOT NULL\n        CONSTRAINT DF_Logs_LogTime DEFAULT GETUTCDATE(), \/*Log time*\/\n    [Text] [VARCHAR](4000) NULL, \/*Log text*\/\n    CONSTRAINT [PK_Logs] PRIMARY KEY CLUSTERED ([Id]) \/*Primary key*\/\n) ON [PRIMARY];\nGO<\/code><\/pre><\/div>\n\n\n\n<p>As you can see, we haven&#8217;t introduced any partitioning there; it&#8217;s just a simple table. Note that the Text column gets an explicit length, since a bare VARCHAR defaults to a single character.&nbsp;<\/p>\n\n\n\n<p>Let&#8217;s also suppose that, to keep this table neat and tidy, we introduced a continuous cleanup process that deletes stale data.&nbsp;<\/p>\n\n\n\n<p>As our ingestion rate intensifies, so does the cleanup process. This leads to locking, ghost records, pressure on the storage system&#8230; problems are piling up there!&nbsp;<\/p>\n\n\n\n<p>We need some help.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Partitions to the rescue<\/strong>&nbsp;<\/h3>\n\n\n\n<p>So, what if we started using partitioning?&nbsp;<\/p>\n\n\n\n<p>Our data could be partitioned by the hour of row insertion. 
That way we could easily remove partitions with stale data.&nbsp;<\/p>\n\n\n\n<p>As we\u2019ve mentioned before, there are three key elements to set up in order to introduce partitioning.&nbsp;<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>First, the partition function.&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>We agreed to have partitions per hour to keep just the last few hours of data. Let&#8217;s define it.&nbsp;<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>CREATE PARTITION FUNCTION [partFnByHour](TINYINT) AS RANGE RIGHT FOR VALUES (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24); \/*Define range of hours as partition function*\/\nGO<\/code><\/pre><\/div>\n\n\n\n<p>This is quite a nice use case, as our partition function is static (one day has only 24 hours). In other cases (imagine date partitioning), we often deal with dynamic functions that introduce additional administrative overhead (like sliding window scenarios), but let&#8217;s keep it simple here.&nbsp;<\/p>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li><strong>Then, we have to create a partition scheme.&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>To keep this example as simple as possible, we won&#8217;t delve into defining additional filegroups and files. We will use the PRIMARY filegroup instead.&nbsp;<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>CREATE PARTITION SCHEME [partSchLog] AS PARTITION [partFnByHour] ALL TO ([PRIMARY]); \/*Assign your filegroup to use partition function*\/\nGO<\/code><\/pre><\/div>\n\n\n\n<p>You can define a separate filegroup in your partition scheme statement for each partition created by the partition function. 
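<\/p>\n\n\n\n<p>To illustrate that mapping, here is a hypothetical sketch (the demo function, scheme, and filegroup names are not part of our setup, and the filegroups would have to exist first): with RANGE RIGHT, three boundary values create four partitions, so the scheme must list four destinations.<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>\/*Hypothetical: 3 boundary values create 4 partitions, so 4 filegroups are listed*\/\nCREATE PARTITION FUNCTION [pfDemo](INT) AS RANGE RIGHT FOR VALUES (10, 20, 30);\nGO\nCREATE PARTITION SCHEME [psDemo] AS PARTITION [pfDemo] TO ([FG1], [FG2], [FG3], [FG4]);\nGO<\/code><\/pre><\/div>\n\n\n\n<p>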
The KISS principle led me to use the &#8220;ALL TO&#8221; clause.&nbsp;<\/p>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li><strong>Finally, let&#8217;s modify our table to start using partitioning.&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>CREATE TABLE [dbo].[Logs]\n(\n    [Id] [BIGINT] NOT NULL IDENTITY(1, 1),\n    [Text] [VARCHAR](4000) NULL, \/*Log text*\/\n    [LogTime] DATETIME NOT NULL\n        CONSTRAINT DF_Logs_LogTime DEFAULT GETUTCDATE(), \/*Log time*\/\n    [LogHour] [TINYINT] NOT NULL\n        CONSTRAINT [DF_Logs_LogHour]\n            DEFAULT (CONVERT([TINYINT], DATEPART(HOUR, GETUTCDATE()))), \/*Our partition key*\/\n    CONSTRAINT [PK_Logs]\n        PRIMARY KEY CLUSTERED ([Id], [LogHour]) \/*Primary key*\/\n) ON [partSchLog] ([LogHour]); \/*Assign table to partition scheme*\/\nGO<\/code><\/pre><\/div>\n\n\n\n<p>The key is to add a new column, LogHour, that will be used for aligning data according to the values of the partition function. In other words, we divided our log table by hour. 
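<\/p>\n\n\n\n<p>To sanity-check the setup, we can ask the engine which partition a given hour maps to, and inspect per-partition row counts through the system catalog (a quick sketch):<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>\/*Which partition does hour 13 map to?*\/\nSELECT $PARTITION.partFnByHour(13) AS PartitionNumber;\n\n\/*Row count per partition of dbo.Logs*\/\nSELECT p.partition_number, p.rows\nFROM sys.partitions AS p\nWHERE p.object_id = OBJECT_ID(N&#39;dbo.Logs&#39;)\n      AND p.index_id IN (0, 1) \/*heap or clustered index*\/\nORDER BY p.partition_number;<\/code><\/pre><\/div>\n\n\n\n<p>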
This new column has a default constraint, so there&#8217;s no need to change the way we populate the table.&nbsp;<\/p>\n\n\n\n<p>Excellent, you say, but how will this help with those ugly deletes?&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>I delete without DELETE<\/strong>&nbsp;<\/h2>\n\n\n\n<p>Well, you won&#8217;t use the DELETE command anymore to get rid of stale data. Let me introduce you to the TRUNCATE TABLE statement with the PARTITIONS clause.&nbsp;<\/p>\n\n\n\n<p>TRUNCATE TABLE is part of the DDL set of commands, unlike DELETE, which is a DML command. That means we don&#8217;t deal with individual rows anymore. It empties partitions by deallocating whole extents and updating metadata in the system catalog.&nbsp;<\/p>\n\n\n\n<p>This process finishes in milliseconds, whereas a continuously running DELETE spends tens of seconds to get rid of the same stale data.&nbsp;<\/p>\n\n\n\n<p>Here is an example of a cleanup query.<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>DECLARE @dryrun BIT = 1; \/*If you just want to see the statement, you know what to do...*\/\nDECLARE @sql NVARCHAR(MAX);\nDECLARE @hours INT = 2;\nDECLARE @dtEnd SMALLDATETIME = DATEADD(HOUR, DATEDIFF(HOUR, 0, GETUTCDATE()), 0);\nDECLARE @dtStart SMALLDATETIME = DATEADD(HOUR, -@hours, @dtEnd);\nDECLARE @partitions NVARCHAR(MAX) = N&#39;&#39;;\nWITH src \/*Hours to keep*\/\nAS (SELECT @dtStart AS dt,\n           DATEPART(HOUR, @dtStart) loghour\n    UNION ALL\n    SELECT DATEADD(HOUR, 1, src.dt) dt,\n           DATEPART(HOUR, DATEADD(HOUR, 1, src.dt)) 
loghour\n    FROM src\n    WHERE DATEADD(HOUR, 1, src.dt) &lt;= @dtEnd),\n     hrs \/*Hours of day*\/\nAS (SELECT CAST(loghour AS TINYINT) loghour\n    FROM\n    (\n        VALUES\n            (0),(1),(2),(3),(4),(5),(6),(7),(8),\n            (9),(10),(11),(12),(13),(14),(15),\n            (16),(17),(18),(19),(20),(21),(22),(23)\n    ) x (loghour) )\nSELECT @partitions = STRING_AGG($PARTITION.partFnByHour(h.loghour), &#39;,&#39;) \/*Partitions to remove*\/\nFROM hrs h\nWHERE NOT EXISTS\n(\n    SELECT * FROM src s WHERE h.loghour = s.loghour\n);\nSET @sql = CONCAT(N&#39;TRUNCATE TABLE dbo.Logs WITH (PARTITIONS (&#39;, @partitions, &#39;))&#39;);\nIF @dryrun = 1\n    PRINT CONCAT(&#39;CurrentDateTime = &#39;, FORMAT(GETUTCDATE(), N&#39;yyyy-MM-dd hh:mm:ss tt&#39;), &#39;; Command = &#39;, @sql);\nELSE\n    EXEC sys.sp_executesql @sql = @sql;\nGO<\/code><\/pre><\/div>\n\n\n\n<p>As you can see, we dynamically prepare our statement by selecting the appropriate partition numbers according to the time of day. By executing a single command, we can remove several hours of data in milliseconds, without any data movement.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Pros and cons&nbsp;<\/h2>\n\n\n\n<p>Like a coin, this solution has two sides, with its own pros and cons. 
Let\u2019s discuss them below.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Pros<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Reduced pressure on computing power and storage&nbsp;<br><\/strong>While DELETE-based cleanup processes run continuously, this one needs to run only once per hour. No more locks, endless page reads, modifications, and the like.&nbsp;<\/li>\n\n\n\n<li><strong>No more pain caused by soft deletes&nbsp;<br><\/strong>This process actually gets rid of data when it is executed.&nbsp;<\/li>\n\n\n\n<li><strong>Applicable to most RDBMSs&nbsp;<br><\/strong>Yes, you can set up a similar process on PostgreSQL, too.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Cons<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Administrative overhead&nbsp;<br><\/strong>Before you can start using this process, you need to write some additional code and set up some objects. All indexes on a table should be aligned to the same partition function. You can use different partition schemes, but these should be created using the same partition function,&nbsp;which can be a bit complicated and sometimes forgotten.&nbsp;<\/li>\n\n\n\n<li><strong>Quirky query optimizer&nbsp;<br><\/strong>The query optimizer can sometimes produce a bad execution plan while dealing with a partitioned table, so I recommend you review your queries.&nbsp;<\/li>\n\n\n\n<li><strong>Locks are still there&nbsp;<br><\/strong>While you won&#8217;t cause long-running locks anymore, TRUNCATE TABLE WITH PARTITIONS still needs a schema modification (Sch-M) lock to perform the truncation. But its duration is minuscule compared to its DELETE equivalent.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Summary<\/h2>\n\n\n\n<p>The concept described here is a result of several brainstorming sessions organized just to find a solution to all the pain points mentioned in the post. 
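<\/p>\n\n\n\n<p>As noted among the pros, a similar setup is possible on PostgreSQL using declarative partitioning. Here is a minimal sketch (object names are hypothetical, with one LIST partition per hour):<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-sql\" data-lang=\"SQL\"><code>-- Hypothetical PostgreSQL equivalent: LIST-partition the log table by hour\nCREATE TABLE logs (\n    log_time timestamptz NOT NULL DEFAULT now(),\n    log_hour smallint NOT NULL DEFAULT extract(hour FROM now() AT TIME ZONE &#39;utc&#39;),\n    message  varchar(4000)\n) PARTITION BY LIST (log_hour);\n\nCREATE TABLE logs_h00 PARTITION OF logs FOR VALUES IN (0);\nCREATE TABLE logs_h01 PARTITION OF logs FOR VALUES IN (1);\n-- ...and so on, one partition per hour, up to logs_h23\n\n-- Cleanup: truncate a stale partition directly instead of running DELETE\nTRUNCATE TABLE logs_h00;<\/code><\/pre><\/div>\n\n\n\n<p>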
&nbsp;<\/p>\n\n\n\n<p>We\u2019ve been successfully using this solution for several months now, and it works flawlessly. &nbsp;<\/p>\n\n\n\n<p>I hope this story and the examples will help you somehow in your future work!&nbsp;<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>More often than not, we have some kind of [&hellip;]<\/p>\n","protected":false},"author":64,"featured_media":3554,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[28,254,252],"tags":[144,256,291],"coauthors":[302],"class_list":["post-3549","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog-post","category-engineering-practices","category-tools","tag-developer-ecosystem","tag-programming","tag-sql"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Delete without DELETE. Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub\" \/>\n<meta property=\"og:description\" content=\"More often than not, we have some kind of [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\" \/>\n<meta property=\"og:site_name\" content=\"Infobip Developers Hub\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/infobip\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-23T07:55:59+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-10T12:33:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1472\" \/>\n\t<meta property=\"og:image:height\" content=\"832\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Leo Tausanovic\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@InfobipDev\" \/>\n<meta name=\"twitter:site\" content=\"@InfobipDev\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Leo Tausanovic\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\"},\"author\":{\"name\":\"Leo Tausanovic\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/bac81e18e7eee9dbac580d38e2f65cbc\"},\"headline\":\"Delete without DELETE. Smarter strategies for removing high-volume, short-lived data\",\"datePublished\":\"2025-05-23T07:55:59+00:00\",\"dateModified\":\"2025-06-10T12:33:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\"},\"wordCount\":1242,\"publisher\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg\",\"keywords\":[\"developer ecosystem\",\"programming\",\"SQL\"],\"articleSection\":[\"Blog Post\",\"Engineering Practices\",\"Tools\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\",\"url\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\",\"name\":\"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub\",\"isPartOf\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg\",\"datePublished\":\"2025-05-23T07:55:59+00:00\",\"dateModified\":\"2025-06-10T12:33:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage\",\"url\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg\",\"contentUrl\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg\",\"width\":1472,\"height\":832,\"caption\":\"An image of a SQL Server database on a 
computer\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.infobip.com\/developers\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Delete without DELETE. Smarter strategies for removing high-volume, short-lived data\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#website\",\"url\":\"https:\/\/www.infobip.com\/developers\/\",\"name\":\"Infobip Developers Hub\",\"description\":\"Build meaningful customer relationships across any channel\",\"publisher\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.infobip.com\/developers\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#organization\",\"name\":\"Infobip Developers Hub\",\"url\":\"https:\/\/www.infobip.com\/developers\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2023\/03\/Infobip_logo_favicon.png\",\"contentUrl\":\"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2023\/03\/Infobip_logo_favicon.png\",\"width\":696,\"height\":696,\"caption\":\"Infobip Developers 
Hub\"},\"image\":{\"@id\":\"https:\/\/www.infobip.com\/developers\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/infobip\/\",\"https:\/\/x.com\/InfobipDev\",\"https:\/\/www.youtube.com\/channel\/UCUPSTy53VecI5GIir3J3ZbQ\",\"https:\/\/github.com\/infobip-community\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/bac81e18e7eee9dbac580d38e2f65cbc\",\"name\":\"Leo Tausanovic\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/image\/87bd3370bfa75c94f7fc86ca8e4c2d62\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5942ac03ef9a899ec89e4f9906a23bccd271f4c624806ec628d522a59c314661?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5942ac03ef9a899ec89e4f9906a23bccd271f4c624806ec628d522a59c314661?s=96&d=mm&r=g\",\"caption\":\"Leo Tausanovic\"},\"description\":\"Leo is a data aficionado, working as a MSSQL DBA on Big Data projects at Infobip for the last 10+ years. Loves the sea, reading, sea and sea.\",\"url\":\"https:\/\/www.infobip.com\/developers\/blog\/author\/leo-tausanovic\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Delete without DELETE. Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","og_locale":"en_US","og_type":"article","og_title":"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub","og_description":"More often than not, we have some kind of [&hellip;]","og_url":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","og_site_name":"Infobip Developers Hub","article_publisher":"https:\/\/www.facebook.com\/infobip\/","article_published_time":"2025-05-23T07:55:59+00:00","article_modified_time":"2025-06-10T12:33:38+00:00","og_image":[{"width":1472,"height":832,"url":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg","type":"image\/jpeg"}],"author":"Leo Tausanovic","twitter_card":"summary_large_image","twitter_creator":"@InfobipDev","twitter_site":"@InfobipDev","twitter_misc":{"Written by":"Leo Tausanovic","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#article","isPartOf":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data"},"author":{"name":"Leo Tausanovic","@id":"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/bac81e18e7eee9dbac580d38e2f65cbc"},"headline":"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data","datePublished":"2025-05-23T07:55:59+00:00","dateModified":"2025-06-10T12:33:38+00:00","mainEntityOfPage":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data"},"wordCount":1242,"publisher":{"@id":"https:\/\/www.infobip.com\/developers\/#organization"},"image":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage"},"thumbnailUrl":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg","keywords":["developer ecosystem","programming","SQL"],"articleSection":["Blog Post","Engineering Practices","Tools"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","url":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data","name":"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data - Infobip Developers Hub","isPartOf":{"@id":"https:\/\/www.infobip.com\/developers\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage"},"image":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage"},"thumbnailUrl":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg","datePublished":"2025-05-23T07:55:59+00:00","dateModified":"2025-06-10T12:33:38+00:00","breadcrumb":{"@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#primaryimage","url":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg","contentUrl":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2025\/05\/An-image-of-a-SQL-Server-database-on-a-computer.jpg","width":1472,"height":832,"caption":"An image of a SQL Server database on a computer"},{"@type":"BreadcrumbList","@id":"https:\/\/www.infobip.com\/developers\/blog\/delete-without-delete-smarter-strategies-for-removing-high-volume-short-lived-data#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.infobip.com\/developers\/"},{"@type":"ListItem","position":2,"name":"Delete without DELETE. 
Smarter strategies for removing high-volume, short-lived data"}]},{"@type":"WebSite","@id":"https:\/\/www.infobip.com\/developers\/#website","url":"https:\/\/www.infobip.com\/developers\/","name":"Infobip Developers Hub","description":"Build meaningful customer relationships across any channel","publisher":{"@id":"https:\/\/www.infobip.com\/developers\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.infobip.com\/developers\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.infobip.com\/developers\/#organization","name":"Infobip Developers Hub","url":"https:\/\/www.infobip.com\/developers\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.infobip.com\/developers\/#\/schema\/logo\/image\/","url":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2023\/03\/Infobip_logo_favicon.png","contentUrl":"https:\/\/www.infobip.com\/developers\/wp-content\/uploads\/2023\/03\/Infobip_logo_favicon.png","width":696,"height":696,"caption":"Infobip Developers Hub"},"image":{"@id":"https:\/\/www.infobip.com\/developers\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/infobip\/","https:\/\/x.com\/InfobipDev","https:\/\/www.youtube.com\/channel\/UCUPSTy53VecI5GIir3J3ZbQ","https:\/\/github.com\/infobip-community"]},{"@type":"Person","@id":"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/bac81e18e7eee9dbac580d38e2f65cbc","name":"Leo 
Tausanovic","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.infobip.com\/developers\/#\/schema\/person\/image\/87bd3370bfa75c94f7fc86ca8e4c2d62","url":"https:\/\/secure.gravatar.com\/avatar\/5942ac03ef9a899ec89e4f9906a23bccd271f4c624806ec628d522a59c314661?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5942ac03ef9a899ec89e4f9906a23bccd271f4c624806ec628d522a59c314661?s=96&d=mm&r=g","caption":"Leo Tausanovic"},"description":"Leo is a data aficionado, working as a MSSQL DBA on Big Data projects at Infobip for the last 10+ years. Loves the sea, reading, sea and sea.","url":"https:\/\/www.infobip.com\/developers\/blog\/author\/leo-tausanovic"}]}},"_links":{"self":[{"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/posts\/3549","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/users\/64"}],"replies":[{"embeddable":true,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/comments?post=3549"}],"version-history":[{"count":7,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/posts\/3549\/revisions"}],"predecessor-version":[{"id":3568,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/posts\/3549\/revisions\/3568"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/media\/3554"}],"wp:attachment":[{"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/media?parent=3549"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/categories?post=3549"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/tags?post=3549"},{"taxonomy":"author","embeddable":true,"href
":"https:\/\/www.infobip.com\/developers\/wp-json\/wp\/v2\/coauthors?post=3549"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}