The default behaviour is to send the events in sequence, in the order the endpoints are defined in the config. My guess is that blocking access to front 2 causes an exception to be thrown, so the message is never sent to front 1 (or vice versa, depending on the order). So it doesn't sound like anything is misconfigured; just make sure both front ends are reachable, I guess?
I understand why it's implemented this way; it's probably the best way to maximize performance. However, in some cases UDP won't be an option, due to network configuration or security constraints (I'm really not into this networking stuff). And this issue can still occur when you're using TCP with three or more servers (or worker processes).
I guess you have two options:
1. EPiServer could provide a way to configure this so it doesn't fail fast, and instead keeps sending the events to the remaining servers even if one of them fails. I don't know whether the sending is implemented to run in parallel, but maybe it should be (and asynchronously, even).
2. We can configure the application pool in IIS to always run and never go into idle/hibernation mode. The service that listens for Remote Events should then, in theory, always be running, and you won't hit this issue anymore. I guess this is the best solution we have without deploying new code or waiting for a new implementation.
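For reference, on IIS 8 (or IIS 7.5 with the Application Initialization module) option 2 can be set up in applicationHost.config, roughly like this — the pool name here is just an example, use your site's own pool:

```xml
<configuration>
  <system.applicationHost>
    <applicationPools>
      <!-- "EPiServerSite" is an example pool name -->
      <add name="EPiServerSite" startMode="AlwaysRunning">
        <!-- disable idle shutdown so the worker process is never suspended -->
        <processModel idleTimeout="00:00:00" />
      </add>
    </applicationPools>
  </system.applicationHost>
</configuration>
```

The same two settings (Start Mode and Idle Time-out) can also be changed per pool in IIS Manager under Advanced Settings.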
There's a bug report for this: http://world.episerver.com/support/Bug-list/bug/CMS-7468
We have one admin server with the UI and two load-balanced front-facing servers. We can't use multicasting, so I have set up the remote service to use TCP. Everything works fine and changes propagate to the front-facing servers.
If I block the remote-service communication to front 2 (in the firewall), changes also stop propagating to front 1. If I then remove the client endpoint for the blocked front 2 on the admin server, updates once again show up on front 1.
Is this the intended behaviour or have I misconfigured something?
I followed the guide at http://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-CMS/8/Event-management/WCF-event-management/
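For context, the client configuration on the admin server looks roughly like this (server names, ports, and endpoint names are anonymized from our setup; I believe the contract name matches the guide):

```xml
<system.serviceModel>
  <client>
    <!-- one client endpoint per front-facing server -->
    <endpoint name="Front1RemoteEventServiceEndPoint"
              address="net.tcp://front1:5000/RemoteEventService"
              binding="netTcpBinding"
              contract="EPiServer.Events.ServiceModel.IEventReplication" />
    <endpoint name="Front2RemoteEventServiceEndPoint"
              address="net.tcp://front2:5000/RemoteEventService"
              binding="netTcpBinding"
              contract="EPiServer.Events.ServiceModel.IEventReplication" />
  </client>
</system.serviceModel>
```

It's when I remove the second endpoint entry above that front 1 starts receiving updates again.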