Occasionally we run into an issue where our page is inaccessible or incredibly slow. In these cases Optimizely support tells us it is a Find issue. They have suggested wrapping all Find-related code in try-catch blocks or implementing a custom request timeout.
Has anyone else run into this problem and implemented some kind of solution?
We typically implement Find as we would any external service we cannot really depend on. But first things first: how do you use Find? Do you measure the number of requests? This can be seen in services like Azure Application Insights and similar, unless you have your own service monitoring software.
Find has a request cap depending on your license size (https://www.optimizely.com/online-order/order-episerver-find).
Optimizely is quite lenient regarding QPS throttling, but if you consistently exceed your quota, the service will eventually shut down for a while; in some cases queries get stacked up and you will experience the service as "slow".
So yeah, I have entered projects where this has been an issue. The first thing is to check how you actually query Find: can some queries be cached? Rebuild your website not to be dependent on external services, and implement them behind a circuit breaker pattern, or at least as components/MVVM-services.
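To make the circuit breaker idea concrete, here is a minimal language-agnostic sketch (written in Java for readability; in an Optimizely solution this would of course be C#, and libraries like Polly already provide this). The `action` supplier stands in for your Find query and `fallback` for whatever you serve when Find is unavailable:

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after `maxFailures` consecutive failures the
// breaker "opens" and short-circuits calls for `openMillis`, giving the
// external service (Find, in this case) time to recover instead of
// hammering it with requests that will time out anyway.
class CircuitBreaker {
    private final int maxFailures;
    private final long openMillis;
    private int failures = 0;
    private long openedAt = -1;

    CircuitBreaker(int maxFailures, long openMillis) {
        this.maxFailures = maxFailures;
        this.openMillis = openMillis;
    }

    synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (openedAt >= 0 && System.currentTimeMillis() - openedAt < openMillis) {
            return fallback.get();              // breaker open: skip the remote call
        }
        try {
            T result = action.get();
            failures = 0;                       // a success closes the breaker again
            openedAt = -1;
            return result;
        } catch (RuntimeException e) {
            if (++failures >= maxFailures) {
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();              // fall back (e.g. database results)
        }
    }
}
```

The key benefit is that a single slow or failing dependency no longer degrades every page: once the breaker is open, pages render immediately from the fallback instead of each waiting for a Find timeout.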
Our website is completely dependent on Find; every single page uses it, and usually even several blocks per page need it.
I cannot answer about QPS, only requests per second. I don't know which plan the company has chosen, but I would assume it is one that is business-appropriate :D
Rebuilding the website seems like a scary scope, though we are discussing adding the option, in some places, to fetch the data ourselves in the catch blocks.
Alright, did you implement proper Query Caching?
No, we are not using it, and I can see a separate bullet point in my current task saying that we will not use it :D We are using a CustomObjectCachingService instead.
I wonder if there is one entry point where we could check if Find is down and take action in that case, or do we need try-catch blocks in all the methods all over the project?
Can't really understand why you wouldn't use it.
Anyway, you should definitely check the availability or response time of Find, break out the implementation, and use a circuit breaker. But your site relies heavily on Find, right?
I'd still give the built-in cache a shot; it would take a very good reason not to. Usually a few seconds (like 10-30s) will give the desired effect.
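The effect of a short TTL cache is easy to see in a sketch (again Java for illustration; Find's own .NET client has caching extensions for this, and the `loader` below is a hypothetical stand-in for the actual Find query). Even a 10-30 second TTL collapses repeated identical queries into one Find request per window:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of "cache query results for a few seconds": identical lookups
// within the TTL window are served from memory, so Find is queried at
// most once per key per window. Production code would also bound the
// map's size and evict stale entries.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> entries = new HashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    synchronized V get(K key, Supplier<V> loader) {
        Entry<V> e = entries.get(key);
        long now = System.currentTimeMillis();
        if (e == null || now >= e.expiresAt) {
            e = new Entry<>(loader.get(), now + ttlMillis);   // one Find query per TTL window
            entries.put(key, e);
        }
        return e.value;
    }
}
```

On a page where several blocks run the same query, this alone can cut the query volume dramatically, which matters once you are near your QPS cap.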
And by not being reliant on Find do you mean you are taking the results from the database via the repositories?
Relying on Find is usually the faster way, especially compared to loading from the database, but it has its own caveats. As Eric mentioned, you definitely want to:
This is a good start: Common Find caching pitfalls | Optimizely Developer Community
QPS is the upper limit on how many queries you can make to Find before your requests start being returned with 429 (Too Many Requests). IIRC it's a 5-second window, so if your QPS is 50 and you reach 250 queries within 5 seconds, the 251st query onwards will get 429 responses. You definitely want to avoid that.
We ended up implementing try-catch blocks with different scenarios: either retrieving the results from the database or displaying a temporary error message.
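For anyone landing here later, the shape of that fallback is roughly this (a sketch in Java rather than C#; `searchWithFind` and `loadFromDatabase` are hypothetical stand-ins for the real query and repository calls):

```java
import java.util.List;
import java.util.function.Supplier;

// Try Find first; on any failure fall back to the database repository.
// Logging the exception is important so outages stay visible in
// monitoring instead of being silently absorbed by the fallback.
class SearchWithFallback {
    static List<String> search(Supplier<List<String>> searchWithFind,
                               Supplier<List<String>> loadFromDatabase) {
        try {
            return searchWithFind.get();
        } catch (RuntimeException e) {
            // Find is down or timing out: serve database results instead.
            return loadFromDatabase.get();
        }
    }
}
```

Centralizing this in one service (rather than scattering try-catch blocks across the project) is also the natural place to hang a circuit breaker later.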
Thank you both for the help!
Is this still valid in CMS12?
We made many changes in CMS 11 to take content from the database when Find is down, but now if we block the Find IPs or configure a wrong Find URL, the solution does not even start and throws an exception.