1. Yes, the service will kick the site by accessing the site URL so that it wakes up.
2. Any web server in a farm takes the job. Communication between the service and the site is done using .NET remoting over named pipes (there's a rough sketch of what that looks like after this post).
3. It prevents the site from registering with the service, hence it never gets asked about any jobs.
4. There were a few changes as of CMS 6 where we started tracking running jobs rather than just fire and forget, but the scheduler never guarantees that jobs are not run in parallel.
5. Yes, see 2).
I would avoid scheduled jobs if you are thinking about running them at a very high frequency; anything below a minute should raise warning flags.
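To give a feel for what "remoting over named pipes" in 2) means in practice: the site side essentially registers an IPC channel and exposes a remotable object the service can call. This is only an illustrative sketch; the channel name, URI and the SchedulerEndpoint class are made up for the example and are not the actual internal types.

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Illustrative only: the kind of object the scheduler service would call into.
public class SchedulerEndpoint : MarshalByRefObject
{
    public void ExecuteJob(Guid jobId)
    {
        // ...look up the job and run it inside the web application...
    }
}

public static class RemotingSetup
{
    // Requires a reference to System.Runtime.Remoting.dll.
    public static void RegisterSiteEndpoint()
    {
        // IPC = named pipes, which is why service and site must be on the same machine.
        ChannelServices.RegisterChannel(new IpcChannel("DemoSchedulerPipe"), false);

        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(SchedulerEndpoint),
            "SchedulerEndpoint.rem",
            WellKnownObjectMode.Singleton);
    }
}
```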
Yeah! Thanks for the answers, they did clarify most of my questions. But I still have some worries.
My idea is not to run the jobs frequently, but instead to let them run in never-ending while loops (with the possibility to stop them). I would still let the scheduler run the job, say once every hour, so that if the job somehow got killed by an exception or something like that, it would automatically start up again and continue the work.
But I need to make sure that multiple jobs for the same process/agent don't run in parallel, so is there anything I can do to avoid this? A static variable or lock is of course useless here because of multiple sites (web farm).
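One approach I am considering is taking a farm-wide lock in the shared database before entering the loop. This is only a rough sketch, assuming all servers point at the same SQL Server; the FarmWideLock class and the lock name are my own invention, built on sp_getapplock.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class FarmWideLock
{
    // Tries to take an exclusive, farm-wide application lock in the shared database.
    // Returns the open connection if the lock was granted, or null if another
    // server already holds it. With @LockOwner = 'Session' the lock lives as long
    // as the connection, so the job keeps the connection open while its loop runs.
    public static SqlConnection TryAcquire(string connectionString, string lockName)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();

        using (var command = connection.CreateCommand())
        {
            command.CommandText = "sp_getapplock";
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Resource", lockName);
            command.Parameters.AddWithValue("@LockMode", "Exclusive");
            command.Parameters.AddWithValue("@LockOwner", "Session");
            command.Parameters.AddWithValue("@LockTimeout", 0); // fail immediately instead of waiting

            var result = command.Parameters.Add("@ReturnValue", SqlDbType.Int);
            result.Direction = ParameterDirection.ReturnValue;

            command.ExecuteNonQuery();

            if ((int)result.Value < 0) // negative = lock not granted
            {
                connection.Dispose();
                return null;
            }
        }

        return connection;
    }
}
```

If TryAcquire returns null the job would simply exit, because another server in the farm is already running the loop.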
By the way, one more very important advantage of the scheduled plugin is the fact that it runs in the context of the website.
It does not sound like the scheduler is the tool for the job, or at least not with these requirements. Maybe it's not clear, but the scheduler does no magic to run the jobs inside the context of the website; you can add an initialization module and start a background thread yourself for the processing.
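Roughly like this; a minimal sketch assuming the CMS 6 initialization system (IInitializableModule), with the actual processing left as a placeholder.

```csharp
using System.Threading;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;

[InitializableModule]
public class AgentPipelineModule : IInitializableModule
{
    private Thread _worker;
    private volatile bool _stopping;

    public void Initialize(InitializationEngine context)
    {
        // Start the processing loop on a background thread when the site starts.
        _worker = new Thread(ProcessLoop) { IsBackground = true };
        _worker.Start();
    }

    public void Uninitialize(InitializationEngine context)
    {
        _stopping = true;
    }

    public void Preload(string[] parameters)
    {
    }

    private void ProcessLoop()
    {
        while (!_stopping)
        {
            // ...do one unit of work here...
            Thread.Sleep(1000);
        }
    }
}
```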
Well, the magic is in the Windows scheduler service that keeps the job alive. There is no easy way for you to keep a background thread alive.
Scheduled jobs in EPiServer are cool and would be well suited, I think, if we only had a single web server. But I still have a hard time seeing it work in web farms.
So could you answer this specific question I thought about on my bike on the way home from work ...
From your answers:
1. Yes, the service will kick the site by accessing the site URL so that it wakes up.
3. It prevents the site from registering with the service, hence it never gets asked about any jobs.
If it uses the site URL there is no way for the service to control which web server in the farm it connects to when using WLBS, right? So what if it connects to a website that is not registered with the service? There must be something more to this ...
And a few extra questions:
The scheduler only talks to sites on the same machine. When a job is due it reserves it in the database, so in a web farm whoever wins gets the job. A new background thread is started whenever a job needs executing. You can have the scheduler on one machine or on all machines.
I don't know of any documentation on this technical level.
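The general idea behind "whoever wins gets the job" is an atomic claim in the shared database: every server runs the same update, but only one gets a row back. A sketch of the pattern only; the table and column names are invented, not the actual schema.

```csharp
using System;
using System.Data.SqlClient;

public static class JobClaim
{
    // Atomically flips a due job to 'Running'. In a farm every server can run
    // this, but only one of them sees a row count of 1 and therefore runs the job.
    public static bool TryClaim(string connectionString, Guid jobId)
    {
        const string sql =
            @"UPDATE DemoScheduledJobs
              SET Status = 'Running', ClaimedBy = @server, ClaimedAt = GETUTCDATE()
              WHERE JobId = @jobId AND Status = 'Due'";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@jobId", jobId);
            command.Parameters.AddWithValue("@server", Environment.MachineName);

            connection.Open();
            return command.ExecuteNonQuery() == 1; // 1 row updated = we won the claim
        }
    }
}
```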
I am about to implement a process based on an extended version of the chain of responsibility pattern, where multiple agents/processors will take my object and do something with it. Think of it as an assembly line.
My idea was to implement the different agents/processors as scheduled plugins. I like the idea of scheduled plugins as they are easy to deploy, control and monitor compared to Windows services.
But I really would like some insights into how scheduled plugins work internally. Here are my questions, mixed with my own assumptions on how it works.
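For reference, this is roughly how I picture one of the agents as a scheduled plugin. A sketch based on how I understand the plug-in model (the ScheduledPlugIn attribute plus a static Execute method); the namespace and the job body are just placeholders for my own pipeline.

```csharp
using EPiServer.PlugIn;

namespace AssemblyLine.Jobs // placeholder namespace
{
    [ScheduledPlugIn(DisplayName = "Assembly line - enrichment agent",
                     Description = "Runs one step of the processing chain")]
    public class EnrichmentAgentJob
    {
        // The scheduler calls this static method when the job is due.
        // The returned string shows up in the job history in admin mode.
        public static string Execute()
        {
            int processed = 0;

            // ...pick up queued items and run this agent's step of the chain...

            return string.Format("Processed {0} item(s).", processed);
        }
    }
}
```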