Category Archives: Sitecore

Sitecore config disappeared?

We had a Sitecore 9 solution that required the include files to be rearranged to align with a new Azure deployment/provisioning setup using Terraform, and to fix some duplication and inconsistencies.

Problem – Unicorn Sync Failed

When we enabled Unicorn sync in the CD pipeline (see here for more details about Unicorn), the deployment failed.

Unfortunately (see image above), the configuration relating to Unicorn’s sc.variable definitions was missing.

Don’t have 2 <sitecore> elements

If there are 2 <sitecore> elements in a Sitecore include file, even if they have mutually exclusive require statements, Sitecore’s config merge ignores the file without reporting an error!

Solution

There are 2 solutions to the problem:

  1. Split the configuration into 2 files each with their own <sitecore> element.
  2. Merge the 2 <sitecore> elements into one element and move the require constraints to each sub-element.

We decided on solution 2; see the result below.

Use patch:before="prototypes" so that the new variables are shown at the top when showconfig.aspx is used (see image below), instead of halfway down the page.
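
Below is a minimal sketch of what the merged include file could look like. The variable names, values and role:require constraints are hypothetical (Sitecore 9 rule-based configuration is assumed); the point is that the require constraints move from the two <sitecore> elements down onto each sub-element.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/"
               xmlns:role="http://www.sitecore.net/xmlconfig/role/">
  <sitecore>
    <!-- Hypothetical example: the role:require constraints that previously
         sat on two separate <sitecore> elements now sit on each sub-element. -->
    <sc.variable name="sourceFolder" value="C:\projects\src"
                 role:require="Standalone or ContentManagement"
                 patch:before="prototypes" />
    <sc.variable name="sourceFolder" value="D:\deploy\src"
                 role:require="ContentDelivery"
                 patch:before="prototypes" />
  </sitecore>
</configuration>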

I hope this will help and no one else has to waste hours figuring this out, cheers Alan

Where did my request body data go?

Problem

The solution had several API controllers (see image above) that expected data to be posted in the body. Then, seemingly for no reason, the body data was null; the controllers therefore started throwing argument null exceptions, and the front end of course stopped working.

The issue was caused by a new custom HttpRequestProcessor (see image below), which was called in the httpRequestBegin pipeline.

It needed the data from the request body, but for some reason setting the stream position back to 0 did not work.

Therefore the controller got null instead of the data contained in the body.

Solution

It was therefore necessary to copy the stream contents into a memory stream, read the data from that stream, and then deserialize the class. See the solution below.
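
Below is a minimal sketch of the approach. The processor and class names are hypothetical, JSON in the body and Json.NET are assumed, and the body is read via HttpContext.Current rather than whatever the original processor used.

using System.IO;
using System.Text;
using Newtonsoft.Json;
using Sitecore.Pipelines.HttpRequest;

// Stand-in for whatever data the processor actually needs from the body.
public class MyRequestData
{
    public string Id { get; set; }
}

public class ReadRequestBodyProcessor : HttpRequestProcessor
{
    public override void Process(HttpRequestArgs args)
    {
        var request = System.Web.HttpContext.Current.Request;

        string body;
        using (var memoryStream = new MemoryStream())
        {
            // Copy the body into a MemoryStream and rewind the original
            // InputStream so the API controller can still read it.
            request.InputStream.CopyTo(memoryStream);
            request.InputStream.Position = 0;

            memoryStream.Position = 0;
            using (var reader = new StreamReader(memoryStream, Encoding.UTF8))
            {
                body = reader.ReadToEnd();
            }
        }

        // Deserialize the copy, leaving the original stream untouched.
        var data = JsonConvert.DeserializeObject<MyRequestData>(body);
        // ... use data ...
    }
}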

I hope this helps. This issue was found in a Sitecore 8 solution with a lot of customization, patches, code and modifications accumulated over the past 16 years.

Site Context for ApiControllers

Almost every Sitecore project now has REST APIs, and I am always shocked when the database, language, etc. are hard coded, or when additional configuration is added to define the default language, database, etc.

Wouldn’t it be nice if you could define the site context for the controller?

Then you could add a site declaration (or reuse an existing site) for each controller, and the controller would use the language, database, etc. defined for that site.

Solution

The SiteContextAttribute provides the ability to define which site an ApiController should use; for example, in the picture above it is set up to use the “Person” site.

    using System.Web.Http.Controllers;
    using System.Web.Http.Filters;
    using Sitecore.Data;
    using Sitecore.Data.Managers;
    using Sitecore.Diagnostics;
    using Sitecore.Globalization;
    using Sitecore.Sites;

    public class SiteContextAttribute : ActionFilterAttribute
    {
        protected readonly string SiteName;

        public SiteContextAttribute(string siteName)
        {
            this.SiteName = siteName;
        }

        private SiteContextSwitcher _siteContextSwitcher;
        private LanguageSwitcher _languageSwitcher;
        private DatabaseSwitcher _databaseSwitcher;

        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            base.OnActionExecuting(actionContext);

            // Resolve the named site; fail fast if it is not defined in configuration.
            var siteContext = SiteContext.GetSite(this.SiteName);
            Assert.IsNotNull(siteContext, "Site '" + this.SiteName + "' is not defined");

            // Switch the site, database and language for the duration of the action.
            _siteContextSwitcher = new SiteContextSwitcher(siteContext);
            _databaseSwitcher = new DatabaseSwitcher(siteContext.Database);
            _languageSwitcher = new LanguageSwitcher(LanguageManager.GetLanguage(siteContext.Language));
        }

        public override void OnActionExecuted(HttpActionExecutedContext actionContext)
        {
            // Restore the previous site, database and language in reverse order.
            _languageSwitcher?.Dispose();
            _databaseSwitcher?.Dispose();
            _siteContextSwitcher?.Dispose();

            base.OnActionExecuted(actionContext);
        }
    }

The code gets the site name, then gets the site context, and sets up the language, database and site context for the controller.

For example, as shown below, it is possible to use Context.Database directly, and the language of the item will also be correct.
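
For illustration, a hypothetical controller using the attribute could look like this (the "Person" site is assumed to be declared in the <sites> configuration):

using System.Web.Http;
using Sitecore.Data.Items;

[SiteContext("Person")]
public class PersonController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get(string id)
    {
        // Context.Database and Context.Language now reflect the "Person"
        // site definition, so nothing needs to be hard coded here.
        Item item = Sitecore.Context.Database.GetItem(id);
        return item == null ? (IHttpActionResult)NotFound() : Ok(item.Name);
    }
}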

I hope this helps, Alan

How SQL Index Fragmentation will kill Sitecore’s Performance

Thought I wrote a blog post about this years ago, but apparently I didn’t.

Problem

Poor index maintenance is a major cause of decreased SQL Server performance, which in turn will impact your Sitecore solution's performance. The Sitecore databases contain tables with numerous entries that are updated frequently, so high index fragmentation will occur.

Detecting SQL Server index fragmentation

The following script displays the average fragmentation per index and, as a help, generates the SQL statement to fix it.

SELECT OBJECT_NAME(ind.object_id) AS TableName,
       ind.name AS IndexName,
       indexstats.index_type_desc AS IndexType,
       indexstats.avg_fragmentation_in_percent,
       'ALTER INDEX ' + QUOTENAME(ind.name) + ' ON ' + QUOTENAME(OBJECT_NAME(ind.object_id)) +
       CASE
           WHEN indexstats.avg_fragmentation_in_percent > 30 THEN ' REBUILD'
           WHEN indexstats.avg_fragmentation_in_percent >= 5 THEN ' REORGANIZE'
           ELSE NULL -- below 5% defragmentation is not required, so no query is generated
       END AS [SQLQuery]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind
        ON ind.object_id = indexstats.object_id
       AND ind.index_id = indexstats.index_id
WHERE ind.name IS NOT NULL
  -- optionally filter on fragmentation, e.g. AND indexstats.avg_fragmentation_in_percent > 10
ORDER BY indexstats.avg_fragmentation_in_percent DESC

Below you can see the typical result of running the script above. I was shocked, as the majority of indexes on my local SQL Server were over 99% fragmented.

Solution

The script above generates the SQL statements needed to defragment the affected indexes, so you can automate the defragmentation process using SQL Server Maintenance Plans.
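
The generated statements look something like this (the index and table names below are purely illustrative):

ALTER INDEX [ndxID] ON [Items] REBUILD                  -- fragmentation above 30%
ALTER INDEX [ndxItemId] ON [VersionedFields] REORGANIZE -- fragmentation between 5% and 30%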

Anyway, I hope this helps keep your Sitecore solution running at its best, Alan

Sitecore and Azure Durable Functions

In this post I will show how Azure Durable Functions can complement your Sitecore solution and help enhance its performance.

Problem

We took over a Sitecore solution whose content management server was running very slowly, and intermittently the Sitecore client would become unresponsive and crash.

The problem was caused by a large number of CPU-, data- and bandwidth-intensive scheduled tasks that retrieved a wide range of data from a number of web services, then aggregated the data and performed complicated calculations, of which only a small subset of the results was presented on the website.

Solution

As the solution was already hosted in Azure, a perfect solution was to offload the heavy lifting from the Content Management server to Azure Functions: the data retrieval, the calculations, and providing the results to the website. First, a very brief overview of the pros and cons of Azure Functions.

Pros

  • Serverless execution model
  • Dynamic Scaling
  • Micro pricing
  • Security
  • Wide range of triggers
    • HTTP(S), Timer (CRON), Azure Storage changes, Azure Queue messages, Service Bus messages, etc.

Cons

  • Stateless
  • Execution time limit (default 5 mins, max 10)
  • Concurrency

The main challenge with Azure Functions is that most of the scheduled tasks could take more than 10 minutes to complete and required state management. But not to worry, as Azure Durable Functions came to the rescue.

Azure Durable Functions

Durable Functions are an extension of Azure Functions and Azure WebJobs that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you, so it is possible to implement code that runs for a long time.

In addition, if an Azure Function fails, for example if a web request times out, you can define whether the durable function should wait and retry X times before failing. Behind the scenes, the Durable Functions extension is built on top of the Durable Task Framework, an open-source library on GitHub for building durable task orchestrations.

Advantages of Durable Functions

  • They define workflows in code. No JSON schemas or designers are needed.
  • They can call other functions either synchronously or asynchronously.
  • Output from called functions can be saved to local variables.
  • They automatically checkpoint their progress whenever the function awaits.
  • Local state is never lost, even if the process recycles or the VM reboots.
  • Easy to Unit Test
  • Can run for a very long time, in theory forever
  • Cost effective, as you do not pay for execution time whilst waiting for tasks to complete.

Here is a brief introduction to the most common Durable Functions patterns

Pattern 1 – Function chaining

Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function.

function chaining

The code below shows how you could achieve this.
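
A minimal sketch, following the standard Durable Functions documentation example (the activity names "F1"–"F3" are illustrative, not code from the actual solution):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ChainingOrchestrator
{
    [FunctionName("ChainingOrchestrator")]
    public static async Task<object> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each activity receives the output of the previous one.
        var x = await context.CallActivityAsync<object>("F1", null);
        var y = await context.CallActivityAsync<object>("F2", x);
        return await context.CallActivityAsync<object>("F3", y);
    }
}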

Pattern 2 – Fan-out/fan-in

Fan-out/fan-in refers to the pattern of executing multiple functions in parallel, and then waiting for all to finish. Often some aggregation work is done on results returned from the functions. This is perfect when you want to do a lot of things in parallel, to reduce the time taken to complete the task and then aggregate/process all the results.

Below is an example of how the code could look.
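
This is a minimal sketch of the fan-out/fan-in pattern (the activity names are illustrative, not from the actual solution):

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FanOutFanInOrchestrator
{
    [FunctionName("FanOutFanIn")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: start one activity per work item, all running in parallel.
        var workBatch = await context.CallActivityAsync<int[]>("GetWorkItems", null);
        var parallelTasks = workBatch
            .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
            .ToList();

        // Fan in: wait for all activities to finish, then aggregate the results.
        await Task.WhenAll(parallelTasks);
        int total = parallelTasks.Sum(t => t.Result);
        await context.CallActivityAsync("AggregateResults", total);
    }
}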

Pattern 3 – Monitoring

The monitor pattern refers to a flexible recurring process in a workflow – for example, polling until certain conditions are met. A regular timer-trigger can address a simple scenario, such as a periodic clean-up job, but its interval is static and managing instance lifetimes becomes complex. Durable Functions enables flexible recurrence intervals, task lifetime management, and the ability to create multiple monitor processes from a single orchestration.

For example, instead of exposing an endpoint for an external client to monitor a long-running operation, the long-running monitor can itself consume an external endpoint, waiting for some state change. See the example below.
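
A minimal sketch of a monitor orchestration (the activity names, polling interval and expiry time are illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class MonitorOrchestrator
{
    [FunctionName("MonitorJobStatus")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        int jobId = context.GetInput<int>();
        DateTime expiryTime = context.CurrentUtcDateTime.AddHours(2);

        while (context.CurrentUtcDateTime < expiryTime)
        {
            // Poll the external endpoint via an activity function.
            var status = await context.CallActivityAsync<string>("GetJobStatus", jobId);
            if (status == "Completed")
            {
                await context.CallActivityAsync("SendAlert", jobId);
                break;
            }

            // Sleep durably until the next poll; no compute is used while waiting.
            var nextCheck = context.CurrentUtcDateTime.AddMinutes(5);
            await context.CreateTimer(nextCheck, CancellationToken.None);
        }
    }
}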

IsReplaying

One thing that catches people out is that orchestrator code is re-run from the start of the function each time an await completes; therefore, for logging and other side-effecting code, you need to check IsReplaying so you only log once.
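
For example, a sketch of a replay-safe log statement inside an orchestrator (the function and activity names are illustrative):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class ImportOrchestrator
{
    [FunctionName("ImportOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context,
        ILogger log)
    {
        // Guard side effects such as logging so they run only on the
        // first execution, not on every replay.
        if (!context.IsReplaying)
        {
            log.LogInformation("Import started");
        }

        await context.CallActivityAsync("ImportData", null);
    }
}

In Durable Functions 2.x you can also use context.CreateReplaySafeLogger(log) to get a logger that applies this guard for you.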

Durable Functions – Orchestrator code constraints

There are a number of code constraints that must be adhered to when using Durable Functions orchestration.

  • Code must be deterministic.
    • It will be replayed multiple times and must produce the same result each time.
    • For example, no direct calls to get the current date/time, get random numbers, generate random GUIDs, or call into remote endpoints.
  • Non-deterministic operations must be done in activity functions
    • This includes any interaction with other input or output bindings. This ensures that any non-deterministic values will be generated once on the first execution and saved into the execution history. Subsequent executions will then use the saved value automatically.
  • Orchestrator code should be non-blocking.
    • For example, that means no I/O and no calls to Thread.Sleep or equivalent APIs
    • Orchestrator code must never initiate any async operation, except by using the IDurableOrchestrationContext API.
    • For example, no Task.Run, Task.Delay or HttpClient.SendAsync.
    • The Durable Task Framework executes orchestrator code on a single thread and cannot interact with any other threads that could be scheduled by other async APIs.
  • Infinite loops should be avoided
    • Because the Durable Task Framework saves execution history as the orchestration function progresses, an infinite loop could cause an orchestrator instance to run out of memory.
    • For infinite loop scenarios, use APIs such as ContinueAsNew to restart the function execution and discard previous execution history.

Result

By migrating all the long running CPU/data/bandwidth intensive tasks to Azure Durable Functions, the performance of the Sitecore solution went from painful to fantastic.

Unfortunately, it is very common that Sitecore solutions assume responsibility for tasks that are not the website's responsibility, but pairing with Azure Functions can help mitigate this issue.

An additional benefit was that the website was isolated/protected from 3rd-party system changes: when an external system changed, only the Azure Functions had to be modified and deployed – therefore no downtime for the Sitecore solution.

Anyway, I hope Sitecore developers will consider Azure Functions to enhance their Sitecore solutions.


Name Value List Field

Name Value List – To the Rescue

Challenge

To provide a content API that, depending on the path and language, would return a different set of key/value pairs. The client wanted the ability to define new keys without changing the template and/or introducing new templates.

Typical Sitecore solution

Introduce a content item that could have any number of “Key Value” sub-items, each with a key and a value field.

Unfortunately, the customer wanted to have as flat a structure as possible and different languages would have different keys.

Solution – Name Value List Field

I was surprised that I had never noticed that Sitecore has a field type called Name Value List.

The Name Value list field provides a key/value pair interface where you can add pairs dynamically, see below.

Name Value List Field

How to use with Synthesis

It is easy, as the field is mapped to an IDictionaryField interface which provides basic functionality for working with key/value pairs out of the box.

If you need some more advanced features you can cast it to a DictionaryField, which is the underlying implementation.

How to use with vanilla Sitecore

The values are stored as a query string (for example key1=value1&key2=value2), see image below.

You can then use Sitecore.Web.WebUtil.ParseUrlParameters to convert the raw value to a NameValueCollection and access the key/value pairs.
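
A minimal sketch ("Settings" is a hypothetical field name):

using System.Collections.Specialized;
using Sitecore.Data.Items;
using Sitecore.Web;

public static class NameValueListExample
{
    public static string GetValue(Item item, string key)
    {
        // Raw value of the Name Value List field, e.g. "phone=1234&fax=5678".
        string rawValue = item["Settings"];

        // Convert the query-string-style raw value into a NameValueCollection.
        NameValueCollection pairs = WebUtil.ParseUrlParameters(rawValue);
        return pairs[key];
    }
}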

I was shocked that after years of working with Sitecore I had missed this field (well, maybe I never needed it), but at any rate I hope this blog post will help, Alan


Untangling the Sitecore Search LINQ to Solr queries

Problem

It can be very difficult to identify why you do not get the search results you expected from Sitecore Search, but there is a simple way to help untangle what is going on.

Solution

It is possible to see the query that Sitecore generates and sends to Solr, and then use the query on the Solr instance to see what data is returned to Sitecore.

This is such a huge help when trying to understand why your queries do not work!

Step 1 – Find the query that was sent to Solr from Sitecore

Sitecore logs all the queries it sends to Solr in the standard Sitecore log folder; look for files named Search.log.xxx.yyy.txt.

Step 2 – Execute the query in your Solr instance

Go to your Solr instance, and use the core selector drop down to select the index your Sitecore Search query is being executed against.

Select Query from the menu.

Then paste the query from the Sitecore log, and you can see the result that is returned to Sitecore.

This has helped me a lot, so I hope this helps others untangling their search results using Sitecore Search 🙂


How IQueryable and Take can kill your Sitecore Solution

We had a solution that had severe performance issues when it got a lot of visitors. Sitecore was throwing the following exception, and Solr had similar errors in its logs:

Unable to read data from the transport connection: The connection was closed

We identified that the problem was caused by hitting the network bandwidth limit in Azure!
Yes, there were a lot of visitors, but not enough that the bandwidth limit should have been hit. The customer upgraded the plan to get more network bandwidth, but still the issues continued.

But what could cause this issue?

I started to review the Solr implementation and found the issue quite quickly.

// "queryable" is the IQueryable<Result> obtained from the search context,
// e.g. context.GetQueryable<Result>().
var results = queryable
    .Where(result => result.Date < DateTime.UtcNow)
    .OrderByDescending(result => result.Date)
    .GetResults();      // executes the query: the ENTIRE result set is returned from Solr

return results.Hits
    .Take(count)        // Take is applied in memory, after all the data was transferred
    .Select(hit => hit.Document)
    .ToList();

The Take() was applied after GetResults() was called, so the entire result set was returned to Sitecore from Solr, and only then was Take applied in memory to get the top 5 results.

This simple mistake was what caused all the network and performance issues.

Solution

// With Take applied BEFORE GetResults, the paging is translated into the
// Solr query itself, so only "count" documents are returned to Sitecore.
return queryable
    .Where(result => result.Date < DateTime.UtcNow)
    .OrderByDescending(result => result.Date)
    .Take(count)
    .GetResults()
    .Hits
    .Select(hit => hit.Document)
    .ToList();

It was a simple fix (in 150+ places) to move the Take before GetResults!

This is why I believe that you should always Introduce a (SolR) Sitecore Search Abstraction instead of returning the IQueryable interface – please read my post on this very subject.

Hope this helps, Alan


Reduce Technical Debt Part 3 – Test driven code and PBI tasks

In this blog post I am going to outline how test-driven design/unit tests and having a code-removal task for each Product Backlog Item (PBI) can help reduce technical debt.

If you have not already done so, please read part 1 and part 2 in this series on reducing technical debt, as they set the scene for what this blog post is trying to address.

Test-driven design/unit test

It is a fact that all developers (myself included) tend to ignore and not correct/change comments when modifying code, so comments rot. Unit tests, on the other hand, fail when the code changes. Therefore, test-driven code will reduce technical debt!

Unit tests declaratively define what code should do and are especially useful in describing exceptions that would otherwise lead to misunderstandings.

I could go on all day about the virtues of test-driven design! But I am only going to focus on how it can help reduce technical debt by describing exceptions and/or confusing code.

Unit tests for exceptions and/or confusing code

We have all come across code where we think WTF! and then spend hours refactoring and/or trying to determine why it does what it does.

For example, in France certain types of furniture are taxed differently: a chaiselong and a sofa have different VAT rates.
Therefore, having a test called EnsureChaiselongAndSofaHasdifferentVatRateInFrance will help explain/document why this complexity and/or strange functionality is in the code.
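
A sketch of what such a test could look like (NUnit, with a hypothetical VatCalculator and FurnitureType):

using NUnit.Framework;

[TestFixture]
public class VatRateTests
{
    // Documents WHY the French VAT calculation treats the two furniture
    // types differently, instead of leaving a comment to rot.
    [Test]
    public void EnsureChaiselongAndSofaHasdifferentVatRateInFrance()
    {
        var calculator = new VatCalculator(); // hypothetical class under test

        decimal chaiselongRate = calculator.GetRate("FR", FurnitureType.Chaiselong);
        decimal sofaRate = calculator.GetRate("FR", FurnitureType.Sofa);

        Assert.AreNotEqual(chaiselongRate, sofaRate,
            "In France a chaiselong and a sofa must have different VAT rates.");
    }
}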

Now whilst it does not directly reduce the technical debt and/or code size, it helps explain this code and therefore reduces the cost of maintaining it.

Having a test that confirms the strange code is in fact correct and required, and documents why it is required, has value and will reduce maintenance costs and future bugs.

How do you ensure that each PBI raises the quality of the code?

Ensure that there is a code removal task for each Product Backlog Item.

This ensures that everyone involved in the project is aware that it takes time to identify and then remove redundant code, and that this is an essential part of all new development/modifications.

There should always be a task defined for every PBI, even when it is 100% new functionality and there is definitely no redundant code to be removed.

Therefore, the premise is that the team must prove and establish for every PBI that there is no redundant code.