Tag Archives: Sitecore

Test Coverage – Sealed Classes

We recently took over an existing Sitecore solution, where the customer had a policy that test coverage must stay above 90%, which is fantastic.

Unfortunately, I found a very big class that had the ExcludeFromCodeCoverage attribute applied to it, without a justification!

You must always specify the Justification to explain why code is excluded from test coverage. There are many valid reasons why you should exclude certain classes from test coverage, e.g., DTOs, program entry code, DI setup, sealed classes, 3rd party services, etc.

The Problem

The following class has a lot of business logic and is excluded from test coverage, as it has the ExcludeFromCodeCoverage attribute. Let's not get into the fact that no class should have multiple responsibilities, and definitely not 750+ lines of code.

But it should definitely have test coverage for the logic it provides.

[ExcludeFromCodeCoverage]
    public class SynchronizeOutlookAppointmentsService 
    {
        private const int MAX_FREEBUSY_ATTENDEES = 100;
        private const int MAX_FREEBUSY_PERIOD = 60;
        private readonly ExchangeService _exchangeService;
        private readonly ILogger<SynchronizeOutlookAppointmentsService> _logger;

        public SynchronizeOutlookAppointmentsService(
            [NotNull]ExchangeService exchangeService, 
            [NotNull]ILogger<SynchronizeOutlookAppointmentsService> logger)
        {
            Assert.ArgumentNotNull(logger, nameof(logger));
            Assert.ArgumentNotNull(exchangeService, nameof(exchangeService));
            _exchangeService = exchangeService;
            _logger = logger;
        }

        public void SynchronizeOutlookAppointments()
        {
            DateTime now = DateTime.Now;
            var availabilities = GetAvailability(now, GetToDate(now), GetEmailAddress());
            CheckWorkSchedules(now, availabilities);
            CheckRules(now, availabilities);
            CheckPlannedMeetings(now, availabilities);

            var delete = ShouldBeDeleted(availabilities);
            var update = ShouldBeUpdated(availabilities);
            var added = ShouldBeAdded(availabilities);

            HandleDeletedAppointments(now, delete);
            HandleUpdatedAppointments(now, update);
            HandleAddedAppointments(now, added);
        }
//700+ more lines, with lots of domain logic and rules

I assume it had the ExcludeFromCodeCoverage attribute applied because it depends on the ExchangeService class from Exchange Web Services (EWS). Unfortunately, most of the classes in EWS are sealed and their constructors are internal.

So it is not possible to mock the class, and the logic therefore can't be unit tested, unless you have a test instance of Exchange Server you can call and set up with the test data relevant for the tests.

Sitecore (and every other piece of software) has classes which are sealed too, which makes testing difficult. When faced with an API that returns sealed classes, how do we minimize what cannot be tested?

Solution

The solution is to isolate/hide the dependency on the sealed classes exposed by EWS. There are five steps:

  • Duplicate the sealed classes.
  • Convert the sealed classes to the duplicated classes.
  • Introduce an interface to abstract/hide the use of the classes.
  • Implement the interface to call ExchangeService.
  • Inject the interface into the class with the business logic.

Step 1 – Duplicate the sealed classes

We need to check availability and the details of any events in the calendar, so the following classes from EWS were duplicated. Luckily, not all the classes are sealed with internal constructors.

public class Availability
{
     public ServiceError ErrorCode { get; set; }
     public string ErrorMessage { get; set; } = string.Empty;
     public IEnumerable<CalendarEvent> Events { get; set; }
}
public class CalendarEvent
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
    public LegacyFreeBusyStatus FreeBusyStatus { get; set; }
}

Step 2 – Convert the sealed classes to the duplicated classes

Introduce a factory class that takes the sealed classes and converts them to the duplicated classes.

public class AvailabilityFactory
{
    public IEnumerable<Availability> Create(GetUserAvailabilityResults getUserAvailabilityResults)
    {
        return getUserAvailabilityResults?
            .AttendeesAvailability?
                .Select(attendeeAvailability => new Availability()
                {
                    ErrorCode = attendeeAvailability.ErrorCode,
                    ErrorMessage = attendeeAvailability.ErrorMessage,
                    Events = Create(attendeeAvailability.CalendarEvents)
                }).ToList();
    }

    private IEnumerable<CalendarEvent> Create([NotNull]IReadOnlyCollection<Microsoft.Exchange.WebServices.Data.CalendarEvent> calendarEvents)
    {
        return calendarEvents
            .Select(calendarEvent =>
                new CalendarEvent()
                {
                    Start = calendarEvent.StartTime,
                    End = calendarEvent.EndTime,
                    FreeBusyStatus = calendarEvent.FreeBusyStatus,
                }).ToList();
    }
}

Step 3 – Introduce an Interface

Introduce an IAvailabilityRepository interface to abstract/hide the use of the ExchangeService class and the related sealed classes it returns.

public interface IAvailabilityRepository
{
    IEnumerable<Availability> Get(
                  IEnumerable<AttendeeInfo> attendees, 
                  TimeWindow timeWindow, 
                  AvailabilityData requestedData);
}

Step 4 – Implement the IAvailabilityRepository to call ExchangeService

Now only this thin class has 2 lines that can't be tested, and this time the exclusion carries a justification.

[ExcludeFromCodeCoverage(Justification = "Can't test as it requires access to outlook, and ExchangeService class is sealed")]
public class AvailabilityRepository : IAvailabilityRepository
{
    private readonly ExchangeService _exchangeService;
    private readonly AvailabilityFactory _availabilityFactory;

    public AvailabilityRepository(
        [NotNull] ExchangeService exchangeService,
        [NotNull] AvailabilityFactory availabilityFactory)
    {
        Assert.ArgumentNotNull(exchangeService, nameof(exchangeService));
        Assert.ArgumentNotNull(availabilityFactory, nameof(availabilityFactory));
        _exchangeService = exchangeService;
        _availabilityFactory = availabilityFactory;
    }
    public IEnumerable<Availability> Get(
        IEnumerable<AttendeeInfo> attendees,
        TimeWindow timeWindow,
        AvailabilityData requestedData)
    {
        var getUserAvailabilityResults = _exchangeService.GetUserAvailability(attendees, timeWindow, requestedData);
        return _availabilityFactory.Create(getUserAvailabilityResults);
    }
}

In addition to the above implementation, you can also make a fake implementation for local debugging and testing, which logs to a file and/or reads from and saves to a database.
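As a minimal sketch (the class name and the canned data are illustrative assumptions, not from the original solution), such a fake could look like this:

// Hypothetical fake for local debugging/testing - returns canned data instead of calling EWS.
public class FakeAvailabilityRepository : IAvailabilityRepository
{
    public IEnumerable<Availability> Get(
        IEnumerable<AttendeeInfo> attendees,
        TimeWindow timeWindow,
        AvailabilityData requestedData)
    {
        // One attendee who is busy tomorrow morning.
        yield return new Availability
        {
            Events = new[]
            {
                new CalendarEvent
                {
                    Start = DateTime.Today.AddDays(1).AddHours(9),
                    End = DateTime.Today.AddDays(1).AddHours(10),
                    FreeBusyStatus = LegacyFreeBusyStatus.Busy
                }
            }
        };
    }
}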

Step 5 – Inject the interface into the class with the business logic.

Now the last step is to inject the IAvailabilityRepository instead of the ExchangeService, and then use the duplicated classes.

It is now possible to test the logic in this class.

    public class SynchronizeOutlookAppointmentsService 
    {
        private const int MAX_FREEBUSY_ATTENDEES = 100;
        private const int MAX_FREEBUSY_PERIOD = 60;
        private readonly IAvailabilityRepository _availabilityRepository;
        private readonly ILogger<SynchronizeOutlookAppointmentsService> _logger;

        public SynchronizeOutlookAppointmentsService(
            [NotNull] IAvailabilityRepository availabilityRepository, 
            [NotNull]ILogger<SynchronizeOutlookAppointmentsService> logger)
        {
            Assert.ArgumentNotNull(logger, nameof(logger));
            Assert.ArgumentNotNull(availabilityRepository, nameof(availabilityRepository));
            _availabilityRepository = availabilityRepository;
            _logger = logger;
        }

        public void SynchronizeOutlookAppointments()
        {
            DateTime now = DateTime.Now;
            var availabilities = GetAvailability(now, GetToDate(now), GetEmailAddress());
            CheckWorkSchedules(now, availabilities);
            CheckRules(now, availabilities);
            CheckPlannedMeetings(now, availabilities);

            var delete = ShouldBeDeleted(availabilities);
            var update = ShouldBeUpdated(availabilities);
            var added = ShouldBeAdded(availabilities);

            HandleDeletedAppointments(now, delete);
            HandleUpdatedAppointments(now, update);
            HandleAddedAppointments(now, added);
        }
// TODO... split into a number of smaller classes where each class has a single responsibility 🙂
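With the repository abstracted away, the business logic can be tested by stubbing IAvailabilityRepository. A rough sketch (assuming xunit and NSubstitute; the test and its assertions are illustrative, not from the original solution):

public class SynchronizeOutlookAppointmentsServiceTests
{
    [Fact]
    public void SynchronizeOutlookAppointments_WithNoAvailabilities_QueriesTheRepositoryOnce()
    {
        // Arrange: a repository stub that returns no availabilities.
        var repository = Substitute.For<IAvailabilityRepository>();
        repository.Get(Arg.Any<IEnumerable<AttendeeInfo>>(), Arg.Any<TimeWindow>(), Arg.Any<AvailabilityData>())
                  .Returns(Enumerable.Empty<Availability>());
        var logger = Substitute.For<ILogger<SynchronizeOutlookAppointmentsService>>();
        var service = new SynchronizeOutlookAppointmentsService(repository, logger);

        // Act
        service.SynchronizeOutlookAppointments();

        // Assert: the service asked the repository for availabilities exactly once.
        repository.Received(1).Get(Arg.Any<IEnumerable<AttendeeInfo>>(), Arg.Any<TimeWindow>(), Arg.Any<AvailabilityData>());
    }
}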

Of course, this class should be refactored and split into a number of smaller classes where each class has a single responsibility, but that is a blog post for another day.

I hope this post will help you ensure more of your code can be tested.

In addition, this pattern is effective for abstracting away dependencies on 3rd party systems, for example SolR, and many other Sitecore APIs.

sc_itemid Query String – Having a detrimental effect on the SEO

If the sc_itemid query string contains a valid Sitecore item ID (or Sitecore path), Sitecore ignores the URL path and sets the current item to the specified item.

This is generally used by the Sitecore client tools (preview, Experience Editor, debugging, etc.) to specify the current item.

Unfortunately, one customer identified that the use of sc_itemid was having a detrimental effect on the SEO for their websites. But why was sc_itemid present on their publicly available websites?

I believed the correct solution was to identify why sc_itemid was present in the URL and correct the issue.

Unfortunately, after identifying several issues relating to editors' content, legacy systems and legacy code, the customer informed me that they could not be resolved/fixed and asked me to find an alternative solution.

Solution

If it is not a Sitecore client request and sc_itemid is present, make a permanent redirect following these rules:

  • If sc_itemid contains a valid item ID or path, make a permanent redirect to the canonical URL for the item.
  • If sc_itemid does not contain a valid item ID or path, strip the sc_itemid query string and make a permanent redirect to the resulting URL.

Step 1 – httpRequestBegin Pipeline Processor

Introduce a pipeline processor to check URLs, and add it to the httpRequestBegin pipeline just before the LayoutResolver processor.

Define which sites should be ignored; for example, the Sitecore client typically uses the shell site context.

Define which URL paths should be ignored; for example, URLs starting with /sitecore are usually for the Sitecore client. See the config below for more typical examples.

<?xml version="1.0"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor type="Feature.ErrorHandling.Infrastructure.Pipelines.QueryStringPermanentlyRedirectHttpRequestProcessor, Feature.ErrorHandling"
				           patch:before="processor[@type='Sitecore.Pipelines.HttpRequest.LayoutResolver, Sitecore.Kernel']"
				           resolve="true">
          <IgnoreSites hint="list:AddIgnoreSite">
            <!--list of all the sites to ignore-->
            <site>shell</site>
            <site>admin</site>
            <site>login</site>
            <site>service</site>
            <site>modules_shell</site>
          </IgnoreSites>
          <IgnorePaths hint="list:AddIgnorePath">
            <!--list of all the paths to ignore-->
            <path>/sitecore</path>
            <path>/api</path>
          </IgnorePaths>
        </processor>
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>

Step 2 – Which Requests/URLs to ignore

The most important part of any pipeline processor is to ensure that it identifies requests to ignore and exits as soon as possible, as all requests go through the httpRequestBegin pipeline.

Therefore, the QueryStringPermanentlyRedirectHttpRequestProcessor must check the following and exit if any of the conditions is true (a sketch of these guard clauses follows the list).

  • Local path is null
  • Http Context is null
  • The pipeline is aborted
  • The query string sc_itemid is not present
  • The Site context is in the Ignore Sites list (see configuration above)
  • The URL path starts with the path in the ignore paths list (see configuration above)
  • Context.PageMode.IsNormal is not true, i.e., it is a Sitecore client request – editing, preview, Experience Editor, debugging, etc.
  • The request is for a physical file
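A minimal sketch of the processor's guard clauses, assuming helper lists populated by the configuration above (the exact member names are illustrative, not the original code):

public class QueryStringPermanentlyRedirectHttpRequestProcessor : HttpRequestProcessor
{
    private readonly List<string> _ignoreSites = new List<string>();
    private readonly List<string> _ignorePaths = new List<string>();

    // Called by the Sitecore config factory for each element in the lists above.
    public void AddIgnoreSite(string siteName) => _ignoreSites.Add(siteName);
    public void AddIgnorePath(string path) => _ignorePaths.Add(path);

    public override void Process(HttpRequestArgs args)
    {
        if (args?.HttpContext == null || args.Aborted)
            return;
        string localPath = args.HttpContext.Request?.Url?.LocalPath;
        if (localPath == null)
            return;
        string scItemId = args.HttpContext.Request.QueryString["sc_itemid"];
        if (string.IsNullOrEmpty(scItemId))
            return;
        if (Sitecore.Context.Site == null || _ignoreSites.Contains(Sitecore.Context.Site.Name))
            return;
        if (_ignorePaths.Any(path => localPath.StartsWith(path, StringComparison.OrdinalIgnoreCase)))
            return;
        if (!Sitecore.Context.PageMode.IsNormal)
            return;
        if (System.IO.File.Exists(args.HttpContext.Server.MapPath(localPath)))
            return;

        Redirect(args, scItemId); // resolve the item or strip the query string, then 301 (see Step 3)
    }

    private void Redirect(HttpRequestArgs args, string scItemId)
    {
        // see Step 3 below
    }
}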

Step 3 – Abort the Pipeline & Permanent Redirect

The last step is simple: make the permanent redirect based on the following logic:

  • If sc_itemid identifies a valid item, get the canonical URL and redirect.
  • If sc_itemid does not identify a valid item, remove the sc_itemid query string and redirect to the resulting URL.

Remember to abort the pipeline before redirecting, and catch the ThreadAbortException; see the code below.

try
{
    args.AbortPipeline();
    args.HttpContext.Response.RedirectPermanent(redirectUrl);
}
catch (ThreadAbortException)
{
    // do nothing, as this is caused by the redirect ending the request thread
}

Hope this helps, Alan

Wrong Language

We have a very big Sitecore solution with over 3,000 editors; unfortunately, they spend a great deal of time creating content in the wrong language.
It is easy enough to fix a single item, but when each page has a lot of renderings, which in turn have a lot of data source items, the task is not so easy and is very time consuming.

Solution

Provide the ability to select a given item, move all the language versions from one language to another, and then delete the source language versions.

I decided to add 2 buttons, and I was lucky as there was already a Data Migration chunk I could add the buttons to.

See this blog for a detailed introduction to adding buttons to the Sitecore client.

In order to know which language to move from and to, I added a source and target parameter to the move language command.

This is very easy to do, as you can add parameters to the “Click” field of the small button. When Sitecore calls the Execute function, the CommandContext has the values in its Parameters list (see the code below), from which the source and target language can be identified.

public override void Execute(CommandContext context)
{
  var currentItem = context?.Items[0];
  if (currentItem == null)
    return;

  string source = context?.Parameters["source"];
  Assert.IsNotNullOrEmpty(source, "Command parameter:source can not be empty or null");
  string target = context?.Parameters["target"];
  Assert.IsNotNullOrEmpty(target, "Command parameter:target can not be empty or null");
  context.Parameters.Add("item", currentItem.ID.ToString());
  context.ClientPage.Start(this, nameof(Move), context.Parameters);
}
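Sitecore then invokes a method with the name passed to ClientPage.Start; a rough sketch of what that Move method can look like (the confirmation flow and member names are assumptions, not the original code):

protected void Move(ClientPipelineArgs args)
{
    if (!args.IsPostBack)
    {
        SheerResponse.Confirm("Move all language versions from the source to the target language?");
        args.WaitForPostBack();
        return;
    }
    if (args.Result != "yes")
        return;

    var database = Sitecore.Context.ContentDatabase;
    var source = LanguageManager.GetLanguage(args.Parameters["source"]);
    var target = LanguageManager.GetLanguage(args.Parameters["target"]);
    var item = database.GetItem(new ID(args.Parameters["item"]), source);
    // ... get the settings item and the excluded fields, find the data source items,
    // and call the Copy method shown below for each item ...
}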

We need some configuration, and in the solution there was already a data migration settings item that could store the following:

  • Modules Folder Template id
    • So I can identify which sub item contains the data source items and don’t iterate over the entire tree.
  • List of fields to ignore
    • i.e. revision, updated, updated by, owner, security, etc.

The code to move the language versions is quite simple.

  1. Get the current item, using the source language.
  2. Get the settings item.
  3. Get the items in the modules folder, by looking for any sub items that derive from the Modules Folder Template, then adding their descendants.
  4. For each item
    1. Get all the source versions (see code below)
    2. For each version create the new target language version
      1. For each field
        1. Skip all shared fields
        2. Skip fields that are in the Fields to Exclude list
        3. Copy the field
    3. Remove all the source language versions

private void Copy(
    [NotNull] Database database,
    [NotNull] Item source,
    [NotNull] Language target,
    [NotNull] IEnumerable<Field> languageMigrationExcludedFields)
{
    Assert.ArgumentNotNull(database, nameof(database));
    Assert.ArgumentNotNull(source, nameof(source));
    Assert.ArgumentNotNull(target, nameof(target));
    Assert.ArgumentNotNull(languageMigrationExcludedFields, nameof(languageMigrationExcludedFields));

    // source is loaded in the source language, so GetVersions() returns the source language versions
    foreach (var sourceVersion in source.Versions.GetVersions())
    {
        if (sourceVersion == null)
            continue;
        var targetItem = database.GetItem(source.ID, target);
        var newTargetVersion = targetItem.Versions.AddVersion();
        Copy(sourceVersion, newTargetVersion, languageMigrationExcludedFields);
        sourceVersion.Versions.RemoveVersion();
    }
}

Hope this helps, cheers Alan

Site Context for ApiControllers

Almost every Sitecore project now has REST APIs, and I am always shocked when the database, language, etc. are hard coded, or when additional configuration is added to define the default language, database, etc.

Wouldn't it be nice if you could define the site context for the controller?

Then you could add a site declaration (or use an existing site) for each controller, and the controller would use the language, database, etc. defined for that site.

Solution

The SiteContextAttribute provides the ability to define which site an ApiController should use; in the example below, it is set up to use the “Person” site.

    public class SiteContextAttribute : ActionFilterAttribute
    {
        protected readonly string SiteName;

        public SiteContextAttribute(string siteName)
        {
            this.SiteName = siteName;
        }

        private SiteContextSwitcher _siteContextSwitcher;
        private LanguageSwitcher _languageSwitcher;
        private DatabaseSwitcher _databaseSwitcher;

        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            base.OnActionExecuting(actionContext);

            var siteContext = SiteContext.GetSite(this.SiteName);

            _siteContextSwitcher = new SiteContextSwitcher(siteContext);
            _databaseSwitcher = new DatabaseSwitcher(siteContext.Database);
            _languageSwitcher = new LanguageSwitcher(LanguageManager.GetLanguage(siteContext.Language));
        }

        public override void OnActionExecuted(HttpActionExecutedContext actionContext)
        {
            _languageSwitcher?.Dispose();
            _databaseSwitcher?.Dispose();
            _siteContextSwitcher?.Dispose();

            base.OnActionExecuted(actionContext);
        }
    }

The code takes the site name, gets the site context, and sets up the language, database and site context for the controller.

For example, as shown below, it is possible to use Context.Database directly, and the language of the items will also be correct.
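A sketch of a controller using the attribute (the controller and action are illustrative; “Person” must match a configured site name):

[SiteContext("Person")]
public class PersonController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get(string id)
    {
        // Context.Database and Context.Language now come from the "Person" site definition.
        var item = Sitecore.Context.Database.GetItem(new ID(id));
        return item == null ? (IHttpActionResult)NotFound() : Ok(item.Name);
    }
}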

I hope this helps, Alan

How SQL Index Fragmentation will kill Sitecore’s Performance

Thought I wrote a blog post about this years ago, but apparently I didn’t.

Problem

Poor index maintenance is a major cause of decreased SQL Server performance, which in turn will impact your Sitecore solution's performance. The Sitecore databases contain tables with numerous entries that get updated frequently, so high index fragmentation will occur.

Detecting SQL Server index fragmentation

The following script displays the average fragmentation per index and, as a help, generates the SQL query to fix it.

SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName,
       ind.name AS IndexName,
       indexstats.index_type_desc AS IndexType,
       indexstats.avg_fragmentation_in_percent,
       'ALTER INDEX ' + QUOTENAME(ind.name) + ' ON ' + QUOTENAME(OBJECT_NAME(ind.object_id)) +
       CASE WHEN indexstats.avg_fragmentation_in_percent > 30 THEN ' REBUILD'
            WHEN indexstats.avg_fragmentation_in_percent >= 5 THEN ' REORGANIZE'
            ELSE NULL END AS [SQLQuery] -- below 5% defragmentation is not required, so no query is generated
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id
    AND ind.index_id = indexstats.index_id
WHERE ind.name IS NOT NULL
-- optionally filter on fragmentation, e.g. AND indexstats.avg_fragmentation_in_percent > 10
ORDER BY indexstats.avg_fragmentation_in_percent DESC
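The generated [SQLQuery] column then contains statements like the following (the index and table names here are just examples):

ALTER INDEX [Items_PK] ON [Items] REBUILD
ALTER INDEX [ndxID] ON [Descendants] REORGANIZE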

Running the script above on my local SQL Server, I was shocked: the majority of indexes were over 99% fragmented.

Solution

The script above generates the SQL statements needed to defragment the affected indexes, so you can automate the defragmentation process using SQL Server Maintenance Plans.

Anyway, I hope this helps keep your Sitecore solution running at its best, Alan

Sitecore and Azure Durable Functions

In this post I will show how Azure Durable Functions can complement your Sitecore solution and help enhance performance.

Problem

We took over a Sitecore solution whose content management server was running very slowly; intermittently, the Sitecore client would become unresponsive and crash.

The problem was caused by a large number of CPU/data/bandwidth-intensive scheduled tasks that retrieved a wide range of data from a number of web services, then aggregated the data and performed complicated calculations, of which only a small subset of the results were presented on the website.

Solution

As the solution was already hosted in Azure, a perfect solution was to offload the heavy lifting from the Content Management server to Azure Functions: doing the data retrieval and calculations, and providing the results for the website. Firstly, a very brief overview of the pros and cons of Azure Functions.

Pros

  • Serverless execution model
  • Dynamic Scaling
  • Micro pricing
  • Security
  • Wide range of triggers
    • Https, Timer (CRON), Azure storage changes, Azure Queue, Message from Service bus, etc.

Cons

  • Stateless
  • Execution time limit (default 5 mins, max 10)
  • Concurrency

The main challenge with Azure Functions is that most of the scheduled tasks could take more than 10 minutes to complete and required state management. But not to worry, as Azure Durable Functions came to the rescue.

Azure Durable Functions

Durable Functions are an extension of Azure Functions and Azure WebJobs that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you, so it is possible to implement code that runs for a long time.

In addition, if an Azure Function fails, for example because a web request times out, you can define whether the durable function should wait and retry X times before failing. Behind the scenes, the Durable Functions extension is built on top of the Durable Task Framework, an open-source library on GitHub for building durable task orchestrations.

Advantages of Durable Functions

  • They define workflows in code. No JSON schemas or designers are needed.
  • They can call other functions either synchronously or asynchronously.
  • Output from called functions can be saved to local variables.
  • They automatically checkpoint their progress whenever the function awaits.
  • Local state is never lost, even if the process recycles or the VM reboots.
  • Easy to Unit Test
  • Can run for a very long time, in theory forever
  • Cost effective, as you do not pay for execution time whilst waiting for tasks to complete.

Here is a brief introduction to the most common Durable Functions patterns.

Pattern 1 – Function chaining

Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function.


The code below is an example of how you could achieve this.
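(The original post showed this as an image; below is a minimal sketch using the standard Durable Functions C# API – the activity names F1–F3 are placeholders. The snippets in this post assume the Microsoft.Azure.WebJobs.Extensions.DurableTask package.)

[FunctionName("Chaining")]
public static async Task<object> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Each activity runs in order; the output of one is the input of the next.
    var x = await context.CallActivityAsync<object>("F1", null);
    var y = await context.CallActivityAsync<object>("F2", x);
    return await context.CallActivityAsync<object>("F3", y);
}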

Pattern 2 – Fan-out/fan-in

Fan-out/fan-in refers to the pattern of executing multiple functions in parallel and then waiting for all of them to finish. Often some aggregation work is done on the results returned from the functions. This is perfect when you want to do a lot of things in parallel to reduce the time taken to complete the task, and then aggregate/process all the results.

Below is an example of how the code could look.
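(Again a minimal sketch; the activity names are placeholders.)

[FunctionName("FanOutFanIn")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Fan out: start one activity per work item, all running in parallel.
    string[] workBatch = await context.CallActivityAsync<string[]>("GetWorkBatch", null);
    var parallelTasks = new List<Task<int>>();
    foreach (string item in workBatch)
        parallelTasks.Add(context.CallActivityAsync<int>("ProcessItem", item));

    // Fan in: wait for all activities to finish, then aggregate the results.
    await Task.WhenAll(parallelTasks);
    int sum = parallelTasks.Sum(t => t.Result);
    await context.CallActivityAsync("Aggregate", sum);
}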

Pattern 3 – Monitoring

The monitor pattern refers to a flexible recurring process in a workflow – for example, polling until certain conditions are met. A regular timer-trigger can address a simple scenario, such as a periodic clean-up job, but its interval is static and managing instance lifetimes becomes complex. Durable Functions enables flexible recurrence intervals, task lifetime management, and the ability to create multiple monitor processes from a single orchestration.

An example: instead of exposing an endpoint for an external client to monitor a long-running operation, the long-running monitor consumes an external endpoint, waiting for some state change. See the example below.
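(A minimal sketch; the activity names, timeout and polling interval are placeholders.)

[FunctionName("MonitorJobStatus")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    DateTime expiryTime = context.CurrentUtcDateTime.AddHours(2);
    while (context.CurrentUtcDateTime < expiryTime)
    {
        bool jobCompleted = await context.CallActivityAsync<bool>("GetJobStatus", null);
        if (jobCompleted)
        {
            await context.CallActivityAsync("SendAlert", null);
            break;
        }
        // Durable timer: the orchestrator is checkpointed and unloaded while it waits.
        DateTime nextCheck = context.CurrentUtcDateTime.AddMinutes(5);
        await context.CreateTimer(nextCheck, CancellationToken.None);
    }
}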

Is Replaying

One thing that catches people out is that the orchestrator code is re-run from the start of the function after each await completes; therefore, for logging and similar code, you need to check IsReplaying so you only log once.
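For example (a minimal sketch; context is the orchestration context and log an ILogger passed into the function):

if (!context.IsReplaying)
{
    // Executes once; on every replay IsReplaying is true and the log line is skipped.
    log.LogInformation("Calling the availability activity");
}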

Durable Functions – Orchestrator code constraints

There are a number of code constraints that must be adhered to when using Durable Functions orchestration.

  • Code must be deterministic.
    • It will be replayed multiple times and must produce the same result each time.
    • For example, no direct calls to get the current date/time, get random numbers, generate random GUIDs, or call into remote endpoints.
  • Non-deterministic operations must be done in activity functions
    • This includes any interaction with other input or output bindings. This ensures that any non-deterministic values will be generated once on the first execution and saved into the execution history. Subsequent executions will then use the saved value automatically.
  • Orchestrator code should be non-blocking.
    • For example, that means no I/O and no calls to Thread.Sleep or equivalent APIs
    • Orchestrator code must never initiate any async operation, except via the IDurableOrchestrationContext API.
    • For example, no Task.Run, Task.Delay or HttpClient.SendAsync.
    • The Durable Task Framework executes orchestrator code on a single thread and cannot interact with any other threads that could be scheduled by other async APIs.
  • Infinite loops should be avoided
    • Because the Durable Task Framework saves execution history as the orchestration function progresses, an infinite loop could cause an orchestrator instance to run out of memory.
    • For infinite loop scenarios, use APIs such as ContinueAsNew to restart the function execution and discard previous execution history.

Result

By migrating all the long-running CPU/data/bandwidth-intensive tasks to Azure Durable Functions, the performance of the Sitecore solution went from painful to fantastic.

Unfortunately, it is very common that Sitecore solutions assume responsibility for tasks that are not the website's responsibility, but pairing with Azure Functions can help mitigate this issue.

An additional benefit was that the website was isolated/protected from 3rd party system changes: when an external system changed, only the Azure Functions had to be modified and deployed – therefore no downtime for the Sitecore solution.

Anyway, I hope Sitecore developers will consider Azure Functions to enhance their Sitecore solutions.


Untangling the Sitecore Search LINQ to SolR queries

Problem

It can be very difficult to identify why you do not get the search results you expect from Sitecore Search, but there is a simple way to help untangle what is going on.

Solution

It is possible to see the query that Sitecore generates and sends to SolR, and then run that query on the SolR instance to see what data is returned to Sitecore.

This is such a huge help when trying to understand why your queries do not work!

Step 1 – Find the Query that was sent to SolR from Sitecore

Sitecore logs all the queries it sends to SolR in the standard Sitecore log folder; look for files named Search.log.xxx.yyy.txt.

Step 2 – Execute the query in your SolR instance

Go to your Solr instance and use the core selector drop-down to select the index your Sitecore Search query is being executed against.

Select Query from the menu.

Then paste the query from the Sitecore log, and you can see the result that is returned to Sitecore.

This has helped me a lot, so I hope this helps others untangling their search results using Sitecore Search 🙂


Reduce Technical Debt Part 2 – Empty Try Catch

Here is the second post in the series on how to reduce technical debt. Please read part one first, as it gives an insight into the scale and challenges we faced and outlines what this blog post is trying to address.

As you are aware, the first part introduced a few code examples to help remove redundant code; this post will continue the focus on removing redundant code by introducing the EmptyTryCatchService class and the IgnoreEmptyTryCatch custom attribute.

But before that, I briefly want to mention integrations, as in my experience this is where a lot of redundant and/or unnecessary code can hide.

Integrations

Therefore, an important concept to reduce technical debt is to identify, separate and isolate dependencies on external systems, especially complex and/or legacy systems.

I have already written a blog series about this, so if you missed it, please read it.

Integrations Platform

I believe that, in an ideal world, most integrations – and especially complex and/or legacy system specific code – should be moved out of the website solution to an integration platform!

Most issues, difficulties, problems and costs relating to code maintenance and technical debt for websites are due to them being responsible for things they should not be.

For example, the website is responsible for aggregating data from several systems to provide a unified view of their data – NO, this is the job of an integration/aggregation platform.

Empty Try Catch

So, let me start by stating – ignoring exceptions is a bad idea, because you are silently swallowing an error condition and then continuing execution.

Occasionally this may be the right thing to do, but often it’s a sign that a developer saw an exception, didn’t know what to do about it, and so used an empty catch to silence the problem.

It’s the programming equivalent of putting black tape over an engine warning light.

It's best to handle exceptions as close as possible to the source, because the closer you are, the more context you have to do something useful with the exception.

Ignore Empty Try Catch – Custom attribute

In some rare cases the empty try catch can be valid, in which case you can use the custom attribute to mark the function and explain why it is OK – and check one last time whether there is a TryParse version of the function and/or code you are calling.
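The attribute itself can be as simple as this minimal sketch (the original implementation may differ):

// Marks a method whose empty try catch has been reviewed and deemed valid.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor)]
public sealed class IgnoreEmptyTryCatchAttribute : Attribute
{
    public string Justification { get; }

    public IgnoreEmptyTryCatchAttribute(string justification)
    {
        // e.g. "Probing for an optional config file; failure simply means it is absent"
        Justification = justification;
    }
}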

Performance

Slightly off topic, but still a type of technical debt: do not use exceptions for program flow!

Throwing exceptions is very expensive (the runtime must capture the registers, the call stack, etc., and whilst doing this the throwing thread is blocked), so it has a big impact on performance.

I have seen sites brought to their knees because of the number of exceptions being thrown.

Redundant Code

In the solution we took over there were over 300 empty try-catch statements ☹

But how can it hide redundant code?

When an exception is thrown it can jump over lots of code, which is therefore never called.

Therefore, all the code after the exception is redundant.

Below is the classic Hello World program. It works as expected: it prints out “Hello World”.

But there is a lot of technical debt. This might look like a contrived example, but I have seen a lot of similar examples in the real world, usually with a lot more code in the try catch, and most often around big complex integrations!

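(The original post showed the example as an image; below is a reconstruction of the idea – the helper methods are hypothetical.)

try
{
    Console.WriteLine("Hello World");
    throw new Exception("Something went wrong");
    // Everything below is unreachable, but the empty catch hides that fact.
    SaveStateToDatabase();
    NotifyOtherSystems();
}
catch (Exception)
{
    // Swallowed: the error - and the dead code above - go unnoticed.
}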

Solution – EmptyTryCatchService

For empty try catches, I would not recommend using Sitecore's standard logging, as it can create enormous log files – which is enough to kill your Sitecore solution if the empty try catch is called a lot.

For tracking down empty try catches, it is good to have a dedicated log file and a way to limit the amount of data written to it.

The EmptyTryCatchService class provides the following features:

  • Report interval – the interval at which exceptions with the same owner, name and exception message are written to the log file.
  • Max log limit – when the number of exceptions with the same owner, name and exception message exceeds this, no more data is written to the log file.
  • A dedicated log file for each day.
  • The ability to disable all logging via configuration.

The EmptyTryCatchService class is a simple class that relies on the MaxUsageLog class for most of its functionality (see the code below).

In addition to finding redundant code, the EmptyTryCatchService will track down hidden errors and problems in your solution, which will result in a reduction of the technical debt.

You must be careful when reviewing the logged exceptions and deciding how best to deal with them; see part 3 in the series on reducing technical debt.

public class EmptyTryCatchService
{
    private readonly MaxUsageLog _maxUsageLog =
        new MaxUsageLog(10000, "EmptyTryCatchService", 1000);

    public void Log(object owner, string message, Exception ex = null)
    {
        _maxUsageLog.Log(owner, message, ex);
    }
}
public class MaxUsageLog
{
    public MaxUsageLog(int maxLogLimit,
        string fileNamePrefix,
        int reportCountInterval = 1000000)
    {
        _maxLogLimit = maxLogLimit;
        _fileNamePrefix = !string.IsNullOrEmpty(fileNamePrefix) ? fileNamePrefix : "MaxUsageLog";
        _reportCountInterval = reportCountInterval;
    }

    public void Log(object owner, string message, Exception ex = null)
    {
        if (!IsEnabled())
            return;

        string type = string.Empty;
        if (owner != null)
        {
            if (owner is Type typeObj)
            {
                type = typeObj.FullName;
            }
            else
            {
                type = owner.GetType().FullName;
            }
        }
        string key = GenerateKey(type, message, ex);
        if (!ShouldLog(owner, key))
            return;
        var count = Count(key);
        WriteToFile(owner, type, message, ex, count);
    }

    private int Count(string key)
    {
        return Usage.ContainsKey(key) ? Usage[key] : 0;
    }

    private void WriteToFile(object owner, string type, string message, Exception exceptionToLog, int count)
    {
        try
        {
            StreamWriter log = File.Exists(FileName) ? File.AppendText(FileName) : File.CreateText(FileName);
            try
            {
                log.AutoFlush = true;
                log.WriteLine($"{DateTime.Now.ToUniversalTime()}: Type:'{type}' Message:'{message}' Count:{count}");
                if (exceptionToLog != null)
                {
                    log.WriteLine($"Exception:{exceptionToLog}");
                }
            }
            finally
            {
                log.Close();
            }
        }
        catch (Exception ex)
        {
            if (!Sitecore.Configuration.ConfigReader.ConfigutationIsSet)
                return;
            Sitecore.Diagnostics.Log.Error(
                $"Failed writing log file {FileName}. The following text may be missing from the file: Type:{type} Message:{message}",
                ex, owner);
        }
    }

    private bool ShouldLog(object owner, string key)
    {
        if (!Usage.ContainsKey(key))
        {
            Usage.Add(key, 1);
            return true;
        }
        var count = Usage[key] = Usage[key] + 1;

        if (count % _reportCountInterval == 0)
        {
            WriteToFile(owner, "******** Report Count Interval ******", $"Key:'{key}'", null, count);
        }

        if (count > _maxLogLimit)
            return false;
        if (count == _maxLogLimit)
        {
            WriteToFile(owner, "******** Usage Max Exceeded ******", $"Key:'{key}' Max Limit:{_maxLogLimit}", null, count);
            return false;
        }
        return true;
    }

    private string GenerateKey(string type, string message, Exception ex)
    {
        return ex != null ?
            $"{_fileNamePrefix}_{type}_{message}_{ex.HResult}" :
            $"{_fileNamePrefix}_{type}_{message}";
    }

    private string FileName
    {
        get
        {
            DateTime date = DateTime.Now;
            string fileName = $@"\{_fileNamePrefix}.{date.Year}.{date.Month}.{date.Day}.log";

            if (!Sitecore.Configuration.ConfigReader.ConfigutationIsSet)
                return ConfigurationManager.AppSettings[Constants.Configuration.Key.LogFolderForApplications] + fileName;

            return Sitecore.MainUtil.MapPath(Sitecore.Configuration.Settings.LogFolder) + fileName;
        }
    }

    private bool IsEnabled()
    {
        if (!Sitecore.Configuration.ConfigReader.ConfigutationIsSet)
            return StringToBool(ConfigurationManager.AppSettings[Constants.Configuration.Key.MaxUsageLogEnabled], false);

        return Sitecore.Configuration.Settings.GetBoolSetting(Constants.Configuration.Key.MaxUsageLogEnabled, true);
    }

    private bool StringToBool(string value, bool defaultValue)
    {
        if (value == null)
            return defaultValue;
        bool result;
        if (!bool.TryParse(value, out result))
            return defaultValue;
        return result;
    }

    private readonly int _maxLogLimit;
    private readonly string _fileNamePrefix;
    private readonly int _reportCountInterval;

    // shared (static) so counts are aggregated across the whole application; access is not synchronized
    private static readonly Dictionary<string, int> Usage = new Dictionary<string, int>();
}
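Usage is then a one-liner in the previously empty catch (a sketch; thirdPartyClient is a placeholder and the service would typically be injected):

try
{
    thirdPartyClient.Synchronize();
}
catch (Exception ex)
{
    // Previously an empty catch - now every swallowed exception is tracked and rate-limited.
    _emptyTryCatchService.Log(this, "Synchronize failed", ex);
}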

Hope this was of help, Alan


Reduce Technical Debt and Redundant Code

A while ago, we at Pentia took over a massive Sitecore solution; after 16 years of upgrades and development, the maintenance cost consumed the entire digital budget of the customer.

In other words, the client was at a crossroads – build new or renovate.

For this client the answer was relatively easy:

  • Firstly, the number of features and functionalities in the platform is vast, and just to scope and specify the entire platform was a massive, if not impossible, undertaking – and one which would claim a large number of resources internally and externally.
  • Secondly, while building a new platform (a massive task), the existing platform would have to be kept alive and slowly (painfully slowly) phased out over time. This means double resources for development, maintenance and operations.
  • Thirdly – and probably the most deterring factor – the change management involved in retraining the thousands of staff involved in and around the platform and across departments was substantial and disruptive to the entire organization.

Therefore, a renovation project was established, and the first task was to reduce technical debt for the solution.

Reducing maintenance cost

One of the best ways to reduce technical debt is to reduce the code base: less code == less maintenance cost. In this case we managed to delete 33% of the code base. Here are a few key figures for the solution when we took it over.

  • 900+ sites (over ½ million items)
  • 15 years old (multiple upgrades from Sitecore 4.x to 8.2 and single migration)
  • 15 integrations
  • 600+ Layouts/sub layouts
  • Many JavaScript applications (Angular/React/Backbone/knockout/native/JQuery)
  • Code
    • 294,030 lines of code
    • Cyclomatic Complexity – main project 9662 average 1200
    • Depth of Inheritance – main project 17 average 8
    • Class Coupling – main project 1400, average 500
  • Single solution multiple roles
    • Content management
    • Content delivery
    • Publishing
    • Utility/API
    • Bot
  • No Access to production (apart from Sitecore client)
  • Manual deploys to Production
  • 2 separate solutions (Intranet & Websites) merged into a single solution 4 years ago
  • Not Helix compliant (sort of n-tier where projects had numbers)

The Challenge

Due to the sheer size of the solution, no one in the client's organization knew which features were used, or how much. There were many clear indications of code not being used or referenced.

So, the initial task was to identify and remove unnecessary parts of the solution.

But how do you identify redundant code?

Visual Studio has tools for that; unfortunately, Sitecore/web applications introduce additional challenges, as un-referenced C# code can still be executed due to the following:

  • Configuration – pipelines, event handlers, custom configuration, etc.
  • Sitecore content – items that define that specific functions on a class should be executed, e.g., WFFM.
  • Sitecore rendering engine that renders the presentation using web controls, layouts, sub layouts, controllers, code, etc.

In addition, we must identify whether the code used by the following is ever called:

  • Layouts
  • Sub Layouts
  • Controllers
  • Web Controls
  • XSLT’s
  • REST APIs
  • SOAP web services

Solution

As in most renovation projects, there is no silver bullet; it requires a long-sighted plan, structured methodology, concepts, code, tools and continuous effort to reduce technical debt.

Ironically to reduce the code base you must introduce more code.

Custom Attributes

We introduced several custom attributes to help mark up the code and identify issues to be addressed.

  • Obsolete
  • Used
  • Refactor
  • Ignore Empty Try Catch (see part 2)

Used

The point of this attribute is to clearly mark that a loosely referenced class, method or interface is indeed needed by the solution.

In other words, it indicates that a class, method or interface is used, even though it has no references. It’s possible to add a text to explain how and where it is used.

Obsolete

Whilst .NET provides the Obsolete custom attribute, it lacks options to indicate that code is obsolete and can be removed when a condition is met:

  • Specific date
  • Specific release is in production
  • 3rd party system is updated to a specific version

The point of this attribute was therefore to allow us to plan the renovation project in stages and remove code when the referring parts were cleaned up.

Refactor

During this project we ran into many pieces of code, classes and structures which were in dire need of refactoring. But because of constraints in time, code not yet deployed, lack of knowledge, dependencies, multiple versions of 3rd party systems, or some other reason, it was not possible at that time.

Therefore, the best we could do was add this attribute and define why the code should be refactored, and why it hadn't been refactored yet.

The purpose of this attribute was therefore documentation and planning of the renovation process.
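As a minimal sketch (the original implementations may differ), such an attribute can be as simple as:

// Marks code that is used even though static analysis finds no references.
[AttributeUsage(AttributeTargets.All)]
public sealed class UsedAttribute : Attribute
{
    public string Description { get; }

    public UsedAttribute(string description)
    {
        // e.g. "Instantiated by the Sitecore config factory via Feature.X.config"
        Description = description;
    }
}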

Introducing an “Ensure Code Is Obsolete” Service

It is very difficult to ensure that code is obsolete and never called, and that is why it is so difficult to delete code.

What we needed was a somewhat conclusive measurement of whether the running code was being executed.

We decided to introduce code in the solution that collected data on the code executed across all running solution instances, aggregated the data and presented the results, to verify that the code was not required.

The IIncrementCountService interface was introduced to provide the ability to count how often a piece of code is executed, and to send the results to the content management server to be aggregated with the results from the other instances.

public interface IIncrementCountService
{
  bool IncrementCount(Type type,string name);
}
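Suspected-obsolete code then reports its own executions (a sketch; the method shown is a placeholder and _incrementCountService is assumed to be injected):

public void LegacyExport()
{
    // Counts how often this code actually runs across all instances.
    _incrementCountService.IncrementCount(GetType(), nameof(LegacyExport));
    // ... the existing legacy logic ...
}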

Implementation Challenges

  • The Content Management, Content Delivery, Utility & API instances are in different network zones without access to each other.
  • The implementation must have minimal impact on performance, network traffic and database storage.
  • It must not introduce any new databases or tables.
  • As we did not have access to the production environment (apart from the Sitecore client), it is not possible to log the data to the file system.

Sitecore Remote Events

Remote events (see this blog for a good introduction) provide the perfect mechanism to allow all instances to send their counter data to the Content Management server, which is responsible for aggregating the data and presenting the results.

You must be careful with events: if you flood the event queue table, it can kill the performance of ALL your Sitecore instances.

The following configuration was introduced (see my blog post on Type Safe Settings), so the IncrementCount function only raises an event when one of the following is true:

  • The count exceeds 1000
  • The threshold of 15 minutes is reached
  • A new day starts

This ensures that the event queue is not overloaded and will minimize performance impact, network & database usage.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:environment="http://www.sitecore.net/xmlconfig/environment" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
	<sitecore>
		<feature>
			<Diagnostics>
				<CounterSettings type="Feature.Diagnostics.Infrastructure.CounterSettings, Feature.Diagnostics" singleInstance="true">
					<ThresholdCount>1000</ThresholdCount>
					<ThresholdTime>15</ThresholdTime>
					<Enabled>true</Enabled>
				</CounterSettings>
			</Diagnostics>
		</feature>
	</sitecore>
</configuration>

The IncrementLocalCountService class is responsible for incrementing the count, caching it locally and raising the event to notify the Content Management server when one of the aforementioned thresholds is met.

   public class IncrementLocalCountService : IIncrementCountService
    {
        private readonly IList<Counter> _counters = new List<Counter>();
        private readonly CounterFactory _counterFactory;
        private readonly CounterUpdateRemoteEventFactory _counterUpdateRemoteEventFactory;
        private readonly CounterSettings _counterSettings;

        public IncrementLocalCountService([NotNull]CounterFactory counterFactory,
            [NotNull]CounterUpdateRemoteEventFactory counterUpdateRemoteEventFactory,
            [NotNull]CounterSettings counterSettings)
        {
            Assert.ArgumentNotNull(counterFactory, nameof(counterFactory));
            Assert.ArgumentNotNull(counterUpdateRemoteEventFactory, nameof(counterUpdateRemoteEventFactory));
            Assert.ArgumentNotNull(counterSettings, nameof(counterSettings));
            _counterFactory = counterFactory;
            _counterUpdateRemoteEventFactory = counterUpdateRemoteEventFactory;
            _counterSettings = counterSettings;
        }

        public bool IncrementCount(Type type,string name)
        {
            if (string.IsNullOrWhiteSpace(name))
                return false;
            if (_counterSettings == null || !_counterSettings.Enabled)
                return false;

            DateTime today = DateTime.Now.Date;
            // counters left over from a previous day are flushed below
            Counter counter = _counters.FirstOrDefault(c => c.Name == name && c.Date == today && c.Type == type);
            if (counter == null)
            {
                counter = _counterFactory.Create(name, today, 0);
                _counters.Add(counter);
            }
            counter.Count++;
            Flush(today);
            return true;
        }

        private void Flush(DateTime today)
        {
            //iterate over all counters, flush that exceed the threshold count or time restriction
            foreach (var counter in GetThresholdExceeded())
            {
                RaiseEvent(counter);
                _counters.Remove(counter);
            }
        }

        private IEnumerable<Counter> GetThresholdExceeded()
        {
            DateTime timeLimit = DateTime.Now.Subtract(new TimeSpan(0, _counterSettings.ThresholdTime, 0));
            return _counters.Where(c => c.Created < timeLimit || c.Count >= _counterSettings.ThresholdCount).ToList();
        }
        }

        private void RaiseEvent(Counter counter)
        {
            if (counter == null)
                return;
            var counterUpdateRemoteEvent = _counterUpdateRemoteEventFactory.Create(counter.Name, counter.Date, counter.Count);
            Sitecore.Eventing.EventManager.QueueEvent(counterUpdateRemoteEvent,true,true);
        }
    }

Who is responsible for aggregating the results?

The Content Management server is responsible for aggregating the results. It requires some extra configuration: an initialize processor that subscribes to the remote event and raises it locally, and an event handler that then handles it (see this blog for more information).

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
	<sitecore role:require="Standalone OR ContentManagement">
		<events>
			<event name="counter:update:remote">
				<handler type="Feature.Diagnostics.Infrastructure.Events.Counter.CounterUpdateRemoteEventHandler, Feature.Diagnostics" method="Update" />
			</event>
		</events>
		<pipelines>
			<initialize>
				<processor type="Feature.Diagnostics.Infrastructure.Pipelines.Counter.SubscribeToCounterRemoteEventService, Feature.Diagnostics" />
			</initialize>
		</pipelines>
	</sitecore>
</configuration>

The code associated with the configuration above.

    public class CounterUpdateRemoteEventHandler
    {
        public void Update(object sender, EventArgs args)
        {
            if (args == null)
                return;

            try
            {
                var countRemoteEventArgs = args as RemoteEventArgs<CounterUpdateRemoteEvent>;
                Assert.IsNotNull(countRemoteEventArgs, $"Unexpected event args: {args.GetType().FullName}");
                Assert.IsNotNull(countRemoteEventArgs.Event, $"Event is null: {args.GetType().FullName}");

                var counterRepository = ServiceLocator.ServiceProvider.GetService<CounterRepository>();
                Assert.IsNotNull(counterRepository, $"Could not resolve type:{typeof(CounterRepository).FullName}");

                var counterFactory = ServiceLocator.ServiceProvider.GetService<CounterFactory>();
                Assert.IsNotNull(counterFactory, $"Could not resolve type:{typeof(CounterFactory).FullName}");

                var @event = countRemoteEventArgs.Event;
                var counter = counterFactory.Create(@event.Name, @event.Date, @event.Count);
                if (counter == null)
                    return;
                counterRepository.Update(counter);
            }
            catch (Exception exception)
            {
                Log.Error($"CounterUpdateRemoteEventHandler.Update - failed", exception);
            }
        }
    }

    public class SubscribeToCounterRemoteEventService
    {
        public void Process(PipelineArgs args)
        {
            Sitecore.Diagnostics.Log.Info("SubscribeToCounterRemoteEventService.Initialize Called",this);
            var action = new Action<CounterUpdateRemoteEvent>(RaiseRemoteEvent);
            EventManager.Subscribe(action);
        }

        public void RaiseRemoteEvent(CounterUpdateRemoteEvent counterUpdateRemoteEvent)
        {
            if (counterUpdateRemoteEvent == null)
                return;
            RemoteEventArgs<CounterUpdateRemoteEvent> remoteEventArgs = new RemoteEventArgs<CounterUpdateRemoteEvent>(counterUpdateRemoteEvent);
            Event.RaiseEvent(counterUpdateRemoteEvent.EventName, remoteEventArgs);
        }
    }

Where is the Data Saved?

Ideally, it should be saved in its own SQL database.

Unfortunately, we were not allowed to introduce any new databases or tables, so we had to use the Sitecore IDTable. The CounterRepository is responsible for retrieving, updating and persisting the counters in the IDTable.

    public class CounterRepository
    {
        private readonly CounterFactory _counterFactory;
        private readonly GenerateKeyService _generateKeyService;

        public CounterRepository([NotNull] CounterFactory counterFactory, 
            [NotNull]GenerateKeyService generateKeyService)
        {
            Assert.ArgumentNotNull(counterFactory, nameof(counterFactory));
            Assert.ArgumentNotNull(generateKeyService, nameof(generateKeyService));
            _counterFactory = counterFactory;
            _generateKeyService = generateKeyService;
        }

        public bool Update([NotNull] Counter counter)
        {
            Assert.ArgumentNotNull(counter, nameof(counter));

            var counterInDatabase = Get(counter.Name, counter.Date);
            if (counterInDatabase == null)
                return Add(counter);
            counter.Count += counterInDatabase.Count;
            Delete(counterInDatabase);
            return Add(counter);
        }

        public IEnumerable<Counter> Get()
        {
            var idTableEntries = IDTable.GetKeys(Constants.IdTable.Prefix);
            return idTableEntries == null ? new List<Counter>() : _counterFactory.Create(idTableEntries);
        }

        private bool Add(Counter counter)
        {
            if (counter == null)
                return false;
            var idTableEntry = IDTable.Add(Constants.IdTable.Prefix,
                _generateKeyService.GenerateKey(counter.Name, counter.Date),new ID(Guid.NewGuid()),
                ID.Null,counter.Count.ToString());
            return idTableEntry != null;
        }

        private void Delete(Counter counter)
        {
            if (counter == null)
                return;

            IDTable.RemoveKey(Constants.IdTable.Prefix, _generateKeyService.GenerateKey(counter.Name, counter.Date));
        }

        private Counter Get(string name, DateTime date)
        {
            if (string.IsNullOrWhiteSpace(name))
                return null;

            var idTableEntry = IDTable.GetID(Constants.IdTable.Prefix, _generateKeyService.GenerateKey(name, date));
            if (idTableEntry == null)
                return null;
            if (!long.TryParse(idTableEntry.CustomData, out var count))
                count = 0;
            return _counterFactory.Create(name, date, count);
        }

      }

Presenting the results

No magic here: a simple counter.aspx page, which reads from the CounterRepository and displays the counters in a table, with the option to clear the database. There is also some code to ensure that only Sitecore administrators can access the page. See part 2 in the series.

Sitecore SolR Sorting Challenge

As promised in my last post (please read it first), here is a solution to address the SolR sorting issues.

The Problem

The issue is that different page types usually have different date fields to represent how they should be sorted, and if we want to adhere to the Helix principles, the SolR feature must NOT KNOW ABOUT PAGE TYPES.

For example, a news page will have a news date, a calendar event might use the start date, and some pages will not have a date field at all and therefore have to use created and/or updated.

Typically, I see solutions that deal with this issue at retrieval time, i.e., they index all the different fields and then have a specific “order by” clause for each page type.

The biggest disadvantage of this approach is that you cannot sort a list with different page types, i.e., get the 10 latest items that are either news, events or articles.
In addition, you have to manage all the different order by clauses, which will destroy the indexing/SolR abstraction, as you will have to expose the IQueryable<T> in order to apply the order by clause.

Solution

I prefer to deal with the sorting issue at indexing time and have a single dedicated SolR field which is used to sort all item types. This allows you to sort news, articles, calendar events, etc. in the same way.

You still must deal with the issue that the SolR implementation should not know which field to use for a given item type. To overcome this, we use a configuration file that defines the mapping between an item of a specific type and the field to use for sorting.

Template to Field Mapping

The following configuration defines which field should be used for sorting for each item template; if a field mapping is not defined, the item's updated value is used.

In Sitecore, it is easy to map the configuration below to a C# class (i.e., SortFieldMappingRepository); for more information about how to do this, see my blog post on Structured, Type Safe Settings in Sitecore.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:environment="http://www.sitecore.net/xmlconfig/environment" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
	<sitecore>
		<feature>
			<SolRIndexing>
				<SortFieldMappingRepository type="Feature.SolRIndexing.Infrastructure.ComputedFields.Sorting.SortFieldMappingRepository, Feature.SolRIndexing" singleInstance="true">
					<mappings hint="raw:Add">
						<!--News, NewsDate -->
						<sortFieldMapping templateId="{AE6B4DF2-DF36-4C6D-ABDA-742EE6B85DE9}" sortFieldIdId="{3D43D709-DFAE-4B4F-8CB2-DF80D9B83857}"/>
						<!-- Calander, StartDate-->
						<sortFieldMapping templateId="{A8DD1F59-08AB-4BF0-BE76-8873A8F00628}" sortFieldIdId="{6369AC75-036B-48D8-95E2-F16998F8E777}"/>
						<!-- Video, VideoDate -->
						<sortFieldMapping templateId="{3D9D8B7A-FCB2-459B-908B-1E31F0C975FB}" sortFieldIdId="{E9993C21-1EF0-4C30-83D4-5F69923CEC3E}"/>
						<!-- Article, ModifiedDate Field -->
						<sortFieldMapping templateId="{F6B599F4-11C4-4C65-B253-95F3C40EBA18}" sortFieldIdId="{DC6C0E49-1705-4F3E-80EF-83176E482DBC}"/>
					</mappings>
				</SortFieldMappingRepository>
			</SolRIndexing>
		</feature>
	</sitecore>
</configuration>

Define the SolR index Field

Then we define the SolR index field used for sorting and specify that the SortComputedIndexField class is responsible for adding the sort date to the index.

<sitecore>
	<contentSearch>
		<indexConfigurations>
			<defaultSolrIndexConfiguration>
				<documentOptions>
					<fields hint="raw:AddComputedIndexField">
							<!-- Sorting-->
							<field fieldName="_sort" returnType="datetime" >Feature.SolRIndexing.Infrastructure.ComputedFields.Sorting.SortComputedIndexField, Feature.SolRIndexing</field>

					</fields>
				</documentOptions>
			</defaultSolrIndexConfiguration>
		</indexConfigurations>
	</contentSearch>
</sitecore>

The SortComputedIndexField class is responsible for providing the value for the sort field and it calls the CalculateSortDateService to determine the sort value.

namespace Feature.SolRIndexing.Infrastructure.ComputedFields.Sorting
{
    public class SortComputedIndexField : AbstractComputedIndexField
    {
        private readonly CalculateSortDateService _calculateSortDateService;

        public SortComputedIndexField(CalculateSortDateService calculateSortDateService)
        {
            _calculateSortDateService = calculateSortDateService;
        }

        public SortComputedIndexField()
        {
            _calculateSortDateService = ServiceLocator.ServiceProvider.GetRequiredService<CalculateSortDateService>();
        }

        public override object ComputeFieldValue(IIndexable indexable)
        {
            Item item = indexable as SitecoreIndexableItem;
            if (item == null)
                return null;

            if (!item.Paths.FullPath.StartsWith(Constants.SitecoreContentRoot))
                return null;
            return _calculateSortDateService.CalculateSortDate(item);
        }
    }
}

The CalculateSortDateService class iterates over the field mappings defined in the configuration and uses the field value for the date if the field is found; otherwise, the updated value for the item is used.

namespace Feature.SolRIndexing.Infrastructure.ComputedFields.Sorting
{
    public class CalculateSortDateService
    {
        private readonly SortFieldMappingRepository _sortFieldMappingRepository;

        public CalculateSortDateService([NotNull]SortFieldMappingRepository sortFieldMappingRepository)
        {
            Assert.ArgumentNotNull(sortFieldMappingRepository, nameof(sortFieldMappingRepository));
            _sortFieldMappingRepository = sortFieldMappingRepository;
        }

 
        public DateTime CalculateSortDate([NotNull] Item item)
        {
            Assert.ArgumentNotNull(item, nameof(item));
            var mappings = _sortFieldMappingRepository.Get();
            if (mappings == null)
                return item.Statistics.Updated;

            foreach (var sortFieldMapping in mappings.Where(m => m != null))
            {
                if (item.TemplateID != sortFieldMapping.TemplateId)
                    continue;

                Field dateField = item.Fields[sortFieldMapping.SortFieldId];
                if (dateField == null || string.IsNullOrWhiteSpace(item[sortFieldMapping.SortFieldId]))
                    continue;

                return new DateField(dateField).DateTime;
            }
            return item.Statistics.Updated;
        }
    }
}

Sorting Extensions

The last part is to provide the ability to sort the result set, and for this we introduce the SortDateSearchResultItem class and a few extension methods for sorting ascending and descending.

namespace Feature.SolRIndexing.Infrastructure
{
    public class SortDateSearchResultItem : SearchResultItem
    {
        [IndexField("_sort")]
        [DataMember]
        public virtual DateTime SortDate { get; set; }
    }
}

namespace Feature.SolRIndexing.Infrastructure.ComputedFields.Sorting
{
    public static class SortingQueryableExtensions
    {
        public static IQueryable<T> SortDescending<T>(this IQueryable<T> query) where T : SortDateSearchResultItem
        {
            return query.OrderByDescending(item => item.SortDate);
        }
        public static IQueryable<T> SortAscending<T>(this IQueryable<T> query) where T : SortDateSearchResultItem
        {
            return query.OrderBy(item => item.SortDate);
        }
    }
}
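Usage then looks something like this (a sketch; the index name is illustrative) – the 10 newest items regardless of page type:

using (var context = ContentSearchManager.GetIndex("sitecore_web_index").CreateSearchContext())
{
    var latest = context.GetQueryable<SortDateSearchResultItem>()
        .SortDescending()
        .Take(10)
        .ToList();
}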

I hope this post will help, Alan