Move backout messages to the active source for Pickup Service

This troubleshooting page explains how to replay messages from backout storage to the active source used by the Nodinite Pickup Service.

In other words, you are moving data:

  • From the backout queue, backout container, or backout folder
  • Back to the active source queue, active source container, or active source folder that Pickup Service normally reads from

Use this guide when:

  • Messages were moved to a backout queue or backout container
  • You fixed the root cause (for example malformed payload generation, wrong mapping, temporary downstream outage)
  • You now want to replay data to the source so Pickup Service can process it again

Important

Fix the root cause before replay. If you replay without fixing the underlying issue, messages can return to backout again. For the most predictable result, perform replay in isolation: stop the Pickup Service, or disable the specific configuration entry being replayed, before moving messages from backout to the active source.

Recovery workflow

  1. Identify the failing Pickup Service configuration entry.
  2. Confirm the configured active source and backout names.
  3. Stop the Pickup Service, or disable the specific configuration entry, to avoid race conditions during replay.
  4. Replay a small batch first.
  5. Validate Log Events in Nodinite.
  6. Replay the remaining backlog.
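
Before stopping anything, it helps to capture the current backout depth so you can compare counts after replay. For the Service Bus transport this can be read from the queue's runtime details. The sketch below assumes the Az.ServiceBus module; the resource group and namespace names are placeholders you must replace:

```powershell
# Sketch: read the backout queue depth before replay (assumes Az.ServiceBus;
# "my-rg" and "myns" are placeholder names)
Import-Module Az.ServiceBus
$queue = Get-AzServiceBusQueue `
    -ResourceGroupName "my-rg" `
    -NamespaceName "myns" `
    -Name "pickup-backout"
$queue.CountDetails.ActiveMessageCount
```

Record the same count after replay to confirm the backlog actually drained.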

When Pickup Service is still running during replay, several things can happen at the same time:

  • The replay script copies blobs from the backout container to the active source container
  • Pickup Service immediately starts consuming those newly replayed blobs from the active source
  • If the original root cause is not fully resolved, or if downstream processing is still unstable, Pickup Service can place some of those messages back into backout again

This can make recovery appear inconsistent because the backout count may decrease, then increase again during the same replay session.

Warning

The replay script never writes directly to the backout container. If you see new items appearing there during replay, Pickup Service or another upstream process is creating them.

Recommended isolation order:

  1. Stop Pickup Service, or disable the specific source configuration entry
  2. Replay a small batch from backout to the active source
  3. Confirm the moved items remain in the source and are not recreated in backout
  4. Start Pickup Service again and validate processing
  5. Continue with larger batches only after the small batch succeeds cleanly
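
When Pickup Service runs as a Windows service, steps 1 and 4 can be scripted. The service name below is a placeholder; check the Services console for the exact name in your installation:

```powershell
# Stop the service before replay (service name is an assumed placeholder)
Stop-Service -Name "<pickup-service-name>" -ErrorAction Stop

# ...replay a small batch and validate...

# Start the service again once the replayed items are confirmed in the source
Start-Service -Name "<pickup-service-name>"
```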

Service Bus replay (PowerShell 7)

Use this script to move messages from a backout queue to the active source queue.

Note

This script uses Service Bus REST operations and a SAS key. It replays the message body and content type only; broker and custom message properties are not preserved. Start with a small MaxMessages value. Use -MaxMessages 0 (or -PreflightOnly) for validation-only mode: the script then verifies that both the source and backout queues exist and are accessible, without replaying any message.

#Requires -Version 7.0
param(
    [Parameter(Mandatory = $true)]
    [string]$NamespaceFqdn, # example: myns.servicebus.windows.net

    [Parameter(Mandatory = $true)]
    [string]$SourceQueuePath, # example: pickup

    [Parameter(Mandatory = $true)]
    [string]$BackoutQueuePath, # example: pickup-backout

    [Parameter(Mandatory = $true)]
    [string]$SasKeyName,

    [Parameter(Mandatory = $true)]
    [string]$SasKey,

    [int]$MaxMessages = 100,
    [int]$ReceiveTimeoutSeconds = 5,
    [switch]$PreflightOnly,
    [switch]$DryRun
)

Set-StrictMode -Version Latest
$ErrorActionPreference = "Stop"

function New-SasToken {
    param(
        [string]$ResourceUri,
        [string]$KeyName,
        [string]$Key,
        [int]$MinutesToLive = 60
    )

    $expiry = [DateTimeOffset]::UtcNow.AddMinutes($MinutesToLive).ToUnixTimeSeconds()
    $encodedResource = [System.Web.HttpUtility]::UrlEncode($ResourceUri.ToLowerInvariant())
    $stringToSign = "$encodedResource`n$expiry"

    $hmac = [System.Security.Cryptography.HMACSHA256]::new([Text.Encoding]::UTF8.GetBytes($Key))
    $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign))
    $signature = [System.Web.HttpUtility]::UrlEncode([Convert]::ToBase64String($signatureBytes))

    return "SharedAccessSignature sr=$encodedResource&sig=$signature&se=$expiry&skn=$KeyName"
}

$resource = "https://$NamespaceFqdn/"
$authHeader = New-SasToken -ResourceUri $resource -KeyName $SasKeyName -Key $SasKey

$receiveUri = "https://$NamespaceFqdn/$BackoutQueuePath/messages/head?timeout=$ReceiveTimeoutSeconds"
$sendUri = "https://$NamespaceFqdn/$SourceQueuePath/messages"

if ($MaxMessages -lt 0) {
    throw "MaxMessages cannot be negative. Use 0 for validation-only mode or a positive number for replay."
}

# Preflight checks: verify both queues exist and are accessible.
# Queue metadata read requires a SAS policy with Manage claim.
$sourceCheckUri = "https://$NamespaceFqdn/$SourceQueuePath"
$backoutCheckUri = "https://$NamespaceFqdn/$BackoutQueuePath"

$sourceCheck = Invoke-WebRequest `
    -Method Get `
    -Uri $sourceCheckUri `
    -Headers @{ Authorization = $authHeader } `
    -SkipHttpErrorCheck

$backoutCheck = Invoke-WebRequest `
    -Method Get `
    -Uri $backoutCheckUri `
    -Headers @{ Authorization = $authHeader } `
    -SkipHttpErrorCheck

if ($sourceCheck.StatusCode -ne 200) {
    throw "Source queue '$SourceQueuePath' was not found or is not accessible. HTTP $($sourceCheck.StatusCode)."
}

if ($backoutCheck.StatusCode -ne 200) {
    throw "Backout queue '$BackoutQueuePath' was not found or is not accessible. HTTP $($backoutCheck.StatusCode)."
}

if ($MaxMessages -eq 0) {
    $PreflightOnly = $true
}

if ($PreflightOnly) {
    Write-Host "Preflight passed. Source and backout queues exist and are accessible. No messages replayed."
    return
}

$replayed = 0
for ($i = 1; $i -le $MaxMessages; $i++) {
    if ($DryRun) {
        # Non-destructive peek-lock (POST). The message stays in the backout
        # queue and becomes visible again when its lock expires.
        $peekResponse = Invoke-WebRequest `
            -Method Post `
            -Uri $receiveUri `
            -Headers @{ Authorization = $authHeader } `
            -SkipHttpErrorCheck

        if ($peekResponse.StatusCode -eq 204) {
            Write-Host "No more messages in backout queue."
            break
        }

        if ($peekResponse.StatusCode -ne 201) {
            throw "Peek-lock failed. HTTP $($peekResponse.StatusCode)."
        }

        $replayed++
        Write-Host "DryRun: would replay message #$replayed"
        continue
    }

    # Destructive receive-and-delete (DELETE). After this call the message
    # exists only in this script's memory until the send below succeeds,
    # which is why any send failure stops the script immediately.
    $receiveResponse = Invoke-WebRequest `
        -Method Delete `
        -Uri $receiveUri `
        -Headers @{ Authorization = $authHeader } `
        -SkipHttpErrorCheck

    if ($receiveResponse.StatusCode -eq 204) {
        Write-Host "No more messages in backout queue."
        break
    }

    if ($receiveResponse.StatusCode -ne 200) {
        throw "Receive failed. HTTP $($receiveResponse.StatusCode)."
    }

    # In PowerShell 7 the response header values are string arrays; take the
    # first Content-Type value, falling back to application/json.
    $contentType = @($receiveResponse.Headers["Content-Type"])[0]
    if ([string]::IsNullOrWhiteSpace($contentType)) {
        $contentType = "application/json"
    }

    $sendResponse = Invoke-WebRequest `
        -Method Post `
        -Uri $sendUri `
        -Headers @{ Authorization = $authHeader } `
        -ContentType $contentType `
        -Body $receiveResponse.Content `
        -SkipHttpErrorCheck

    if ($sendResponse.StatusCode -ne 201) {
        throw "Replay failed while sending to source queue. HTTP $($sendResponse.StatusCode)."
    }

    $replayed++
    Write-Host "Replayed message #$replayed"
}

Write-Host "Done. Total replayed: $replayed"

Service Bus examples

# 1) Dry run for first 10 messages
pwsh .\Replay-ServiceBusBackout.ps1 `
  -NamespaceFqdn "myns.servicebus.windows.net" `
  -SourceQueuePath "pickup" `
  -BackoutQueuePath "pickup-backout" `
  -SasKeyName "RootManageSharedAccessKey" `
  -SasKey "<secret>" `
  -MaxMessages 10 `
  -DryRun

# 2) Replay first 100 messages
pwsh .\Replay-ServiceBusBackout.ps1 `
  -NamespaceFqdn "myns.servicebus.windows.net" `
  -SourceQueuePath "pickup" `
  -BackoutQueuePath "pickup-backout" `
  -SasKeyName "RootManageSharedAccessKey" `
  -SasKey "<secret>" `
  -MaxMessages 100

# 3) Validation only (recommended first step)
pwsh .\Replay-ServiceBusBackout.ps1 `
  -NamespaceFqdn "myns.servicebus.windows.net" `
  -SourceQueuePath "pickup" `
  -BackoutQueuePath "pickup-backout" `
  -SasKeyName "RootManageSharedAccessKey" `
  -SasKey "<secret>" `
  -MaxMessages 0

Why Azure login can fail (and how to fix)

If you see errors like "Authentication failed against tenant" and "Please provide a valid tenant or a valid subscription", your account did not get a usable token for that tenant/subscription.

Common causes:

  • Wrong tenant selected
  • Conditional Access or MFA challenge not completed
  • Subscription exists in a different tenant than the one you authenticated to

Use this login pattern:

# Optional reset if you switched tenants/subscriptions multiple times
Disconnect-AzAccount
Clear-AzContext -Scope Process -Force

# Interactive login to the correct tenant (prompts for MFA if required)
# Use AuthScope Storage when you will access Blob data with -UseConnectedAccount
Connect-AzAccount -TenantId "<tenant-id>" -AuthScope Storage

# List available subscriptions in the active context
Get-AzSubscription | Format-Table Name, Id, TenantId

# Set the correct subscription from the list above
# Important: do not paste a subscription Id that is not returned by Get-AzSubscription
Set-AzContext -SubscriptionId "<subscription-id-from-list>" -TenantId "<tenant-id>"

# Verify final context before running replay commands
Get-AzContext | Format-List Account, Subscription, Tenant, Environment

If your Conditional Access policy blocks browser popup/interactive flow, try device code login:

Connect-AzAccount -TenantId "<tenant-id>" -AuthScope Storage -UseDeviceAuthentication

If you sign in with an account that belongs to multiple tenants, Azure PowerShell can show warning messages for tenants where you do not currently have access or where MFA/Conditional Access blocks token acquisition. This is often harmless if the final context is set to the subscription you actually need.

If Get-AzContext shows the expected subscription and tenant, you can continue with the Blob replay script even if warnings were shown for other tenants during sign-in.

Blob Container replay (PowerShell 7)

Use this script to move blobs from a backout container to the active source container.

Note

This script requires the Az modules and signs in with your current Azure identity. Use -MaxBlobs 0 (or -PreflightOnly) for validation-only mode: the script then verifies that both the source and backout containers exist and that your identity can access them, without moving any blob. During replay, the script performs a server-side copy first, verifies that the destination copy succeeded, and only then deletes the blob from backout.

Install Az modules and sign in

Run the following commands in PowerShell 7 before you execute the Blob replay script.

# Install Az modules for current user (PowerShell 7)
Install-Module Az.Accounts -Scope CurrentUser -Repository PSGallery -Force
Install-Module Az.Storage -Scope CurrentUser -Repository PSGallery -Force

# Import modules in current session
Import-Module Az.Accounts
Import-Module Az.Storage

# Interactive login with storage data-plane scope
Connect-AzAccount -AuthScope Storage

# Optional: choose the subscription to use
Set-AzContext -Subscription "<subscription-name-or-id>"

If your environment uses automation identity, you can log in with a service principal instead:

$tenantId = "<tenant-id>"
$appId = "<app-registration-client-id>"
$secret = "<client-secret>"

$secureSecret = ConvertTo-SecureString $secret -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($appId, $secureSecret)

Connect-AzAccount -ServicePrincipal -Tenant $tenantId -Credential $credential -AuthScope Storage
Set-AzContext -Subscription "<subscription-name-or-id>"

Why StorageOAuthEndpointResourceId can fail

If you get an error like this from New-AzStorageContext:

Authentication failed against resource StorageOAuthEndpointResourceId.
User interaction is required.
Please rerun 'Connect-AzAccount' with additional parameter '-AuthScope Storage'

the reason is that your current Azure session only has an Azure Resource Manager token. New-AzStorageContext -UseConnectedAccount also needs a Storage data-plane token for https://storage.azure.com/.

Use this sequence:

Connect-AzAccount -TenantId "<tenant-id>" -AuthScope Storage
Set-AzContext -Subscription "<subscription-id>"
Get-AzContext | Format-List Account, Subscription, Tenant, Environment
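
To reproduce or rule out the token problem without touching any data, you can ask the current session for a Storage data-plane token explicitly. Get-AzAccessToken is part of Az.Accounts; if this call fails, New-AzStorageContext -UseConnectedAccount will fail for the same reason:

```powershell
# Request a Storage data-plane token for the current session.
# Success here means -UseConnectedAccount has the token it needs.
Get-AzAccessToken -ResourceUrl "https://storage.azure.com/"
```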

If interactive sign-in is blocked by Conditional Access or popup restrictions, use device authentication:

Connect-AzAccount -TenantId "<tenant-id>" -AuthScope Storage -UseDeviceAuthentication
Set-AzContext -Subscription "<subscription-id>"

If you still cannot acquire a Storage-scoped token, use one of these alternatives instead of -UseConnectedAccount:

  • Storage account connection string
  • SAS token with the required blob permissions
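
Both alternatives map directly to New-AzStorageContext parameters; the placeholder values are yours to supply:

```powershell
# Context from a storage account connection string
$ctx = New-AzStorageContext -ConnectionString "<connection-string>"

# Context from an account name plus a SAS token with blob read/write/delete rights
$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -SasToken "<sas-token>"
```

With either context in place, replace the -UseConnectedAccount line in the replay script with the matching assignment above.
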

#Requires -Version 7.0
param(
    [Parameter(Mandatory = $true)]
    [string]$StorageAccountName,

    [Parameter(Mandatory = $true)]
    [string]$SourceContainerName, # example: nodinitelogevents

    [Parameter(Mandatory = $true)]
    [string]$BackoutContainerName, # example: nodinitelogeventsbackout

    [string]$Prefix = "",
    [int]$MaxBlobs = 100,
    [int]$CopyTimeoutSeconds = 300,
    [switch]$MoveAll,
    [switch]$PreflightOnly,
    [switch]$DryRun
)

Set-StrictMode -Version Latest
$ErrorActionPreference = "Stop"

Import-Module Az.Accounts -ErrorAction Stop
Import-Module Az.Storage -ErrorAction Stop

# Make sure you are signed in:
# Connect-AzAccount

$ctx = New-AzStorageContext -StorageAccountName $StorageAccountName -UseConnectedAccount

if ($MaxBlobs -lt 0) {
    throw "MaxBlobs cannot be negative. Use 0 for validation-only mode or a positive number for replay."
}

if ($CopyTimeoutSeconds -le 0) {
    throw "CopyTimeoutSeconds must be greater than zero."
}

# Preflight checks: verify both containers exist and are accessible.
$sourceContainer = Get-AzStorageContainer -Name $SourceContainerName -Context $ctx -ErrorAction SilentlyContinue
$backoutContainer = Get-AzStorageContainer -Name $BackoutContainerName -Context $ctx -ErrorAction SilentlyContinue

if (-not $sourceContainer) {
    throw "Source container '$SourceContainerName' was not found or is not accessible in storage account '$StorageAccountName'."
}

if (-not $backoutContainer) {
    throw "Backout container '$BackoutContainerName' was not found or is not accessible in storage account '$StorageAccountName'."
}

if ($MaxBlobs -eq 0) {
    $PreflightOnly = $true
}

if ($PreflightOnly) {
    Write-Host "Preflight passed. Source and backout containers exist and are accessible. No blobs moved."
    return
}

if ($MoveAll) {
    Write-Host "MoveAll enabled: processing until backout container is empty."
}

$moved = 0
do {
    # Build the listing parameters once; add Prefix only when supplied.
    $listParams = @{
        Container   = $BackoutContainerName
        Context     = $ctx
        MaxCount    = $MaxBlobs
        ErrorAction = "Stop"
    }
    if (-not [string]::IsNullOrWhiteSpace($Prefix)) {
        $listParams.Prefix = $Prefix
    }

    $blobs = @(Get-AzStorageBlob @listParams)

    Write-Host "Batch fetched from backout container: $($blobs.Count)"

    if (-not $blobs) {
        if ($moved -eq 0) {
            if ([string]::IsNullOrWhiteSpace($Prefix)) {
                Write-Host "No blobs found in backout container '$BackoutContainerName'."
            }
            else {
                Write-Host "No blobs found in backout container '$BackoutContainerName' with prefix '$Prefix'."
            }
        }
        break
    }

    foreach ($blob in $blobs) {
        $blobName = $blob.Name
        if ($DryRun) {
            Write-Host "DryRun: would move '$blobName'"
            $moved++
            continue
        }

        Write-Host "Processing blob: $blobName"

        # Move direction is always backout -> source
        $copyParams = @{
            SrcContainer  = $BackoutContainerName
            SrcBlob       = $blobName
            DestContainer = $SourceContainerName
            DestBlob      = $blobName
            Context       = $ctx
            DestContext   = $ctx
            Force         = $true
        }

        Start-AzStorageBlobCopy @copyParams | Out-Null

        $copyDeadline = (Get-Date).AddSeconds($CopyTimeoutSeconds)

        do {
            Start-Sleep -Seconds 1
            $destinationBlob = Get-AzStorageBlob `
                -Container $SourceContainerName `
                -Blob $blobName `
                -Context $ctx `
                -ErrorAction SilentlyContinue

            $copyStatus = if ($destinationBlob -and $destinationBlob.ICloudBlob.CopyState) {
                $destinationBlob.ICloudBlob.CopyState.Status.ToString()
            }
            elseif ($destinationBlob) {
                "Success"
            }
            else {
                "Pending"
            }

            Write-Host "Copy status: $copyStatus"

            if ($copyStatus -eq "Pending" -and (Get-Date) -ge $copyDeadline) {
                throw "Copy timed out after $CopyTimeoutSeconds seconds for blob '$blobName'."
            }
        }
        while ($copyStatus -eq "Pending")

        if ($copyStatus -ne "Success") {
            $copyDescription = if ($destinationBlob -and $destinationBlob.ICloudBlob.CopyState -and $destinationBlob.ICloudBlob.CopyState.StatusDescription) {
                $destinationBlob.ICloudBlob.CopyState.StatusDescription
            }
            else {
                "No additional copy status description was returned."
            }

            throw "Copy to source container failed for blob '$blobName'. Status: $copyStatus. $copyDescription"
        }

        $removeParams = @{
            Container = $BackoutContainerName
            Blob      = $blobName
            Context   = $ctx
            Force     = $true
        }

        Remove-AzStorageBlob @removeParams | Out-Null

        $moved++
        Write-Host "Moved blob #$moved : $blobName"
    }
}
# A dry run never deletes blobs from backout, so looping with MoveAll would
# refetch the same batch forever; stop after one pass when DryRun is set.
while ($MoveAll -and -not $DryRun)

Write-Host "Done. Total moved: $moved"

If the output repeatedly shows only "Copy status: Pending", the server-side copy has not completed yet. The script stops waiting after CopyTimeoutSeconds and fails fast with the blob name, so you can retry or investigate throttling or network limits.

Blob examples

# 1) Dry run first 25 blobs
pwsh .\Replay-BlobBackout.ps1 `
  -StorageAccountName "mystorage" `
  -SourceContainerName "nodinitelogevents" `
  -BackoutContainerName "nodinitelogeventsbackout" `
  -MaxBlobs 25 `
  -DryRun

# 2) Validation only (recommended first step)
pwsh .\Replay-BlobBackout.ps1 `
  -StorageAccountName "mystorage" `
  -SourceContainerName "nodinitelogevents" `
  -BackoutContainerName "nodinitelogeventsbackout" `
  -MaxBlobs 0

# 3) Replay 200 blobs with a prefix filter
pwsh .\Replay-BlobBackout.ps1 `
  -StorageAccountName "mystorage" `
  -SourceContainerName "nodinitelogevents" `
  -BackoutContainerName "nodinitelogeventsbackout" `
  -Prefix "2026/03/" `
  -MaxBlobs 200

# 4) Replay all blobs without any prefix filter
pwsh .\Replay-BlobBackout.ps1 `
  -StorageAccountName "mystorage" `
  -SourceContainerName "nodinitelogevents" `
  -BackoutContainerName "nodinitelogeventsbackout" `
  -MaxBlobs 500 `
  -MoveAll

MaxBlobs controls the batch size for each pass.

  • Use -MaxBlobs 1 -MoveAll to move one blob at a time until the backout container is empty
  • Use -MaxBlobs 100 -MoveAll to move up to 100 blobs per pass until the backout container is empty
  • With -MoveAll, MaxBlobs is honored per batch (not as a total cap for the whole run)
  • Omit -MoveAll if you want the script to stop after one batch
  • To cap the total number of moved blobs, omit -MoveAll and run with your wanted -MaxBlobs value

Expected success output includes lines similar to:

Processing blob: 02599847-b31f-4954-...
Copy status: Success
Moved blob #1 : 02599847-b31f-4954-...
Done. Total moved: 1

If you only see a formatted blob listing table and do not see Processing blob: or Moved blob #..., your local script is still running a listing variant and not the move logic shown here.

Validation checklist after replay

  • Confirm the active source queue or active source container receives replayed items
  • Confirm Pickup Service consumes replayed items
  • Confirm the related Log Events appear in Nodinite
  • Confirm backout count decreases as expected
  • Confirm no new malformed events are created
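
For the Blob transport, the remaining backlog can be counted directly. This assumes the $ctx storage context from the replay script and the example backout container name used earlier:

```powershell
# Count blobs left in the backout container after replay
@(Get-AzStorageBlob -Container "nodinitelogeventsbackout" -Context $ctx).Count
```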

Suggested next troubleshooting pages

  • Recovery from Event Hub checkpoint/backout inconsistencies
  • Replay strategy for File Folder backout (with checksum validation)
  • Bulk replay automation with schedule and throttling controls
  • Replay audit logging and operator runbook template

Next Step