I'm using the new EPiServer Full-Text Search. Any attempt to search through the FTS facade returns no results. I enabled logging, and I find this in the logs (note: "VERY LONG URL" stands in for an actual URL...)
ERROR  EPiServer.Search.RequestHandler.GetSearchResults - Could not get search results for uri '[VERY LONG URL]'. Message: The remote server returned an error: (400) Bad Request. at System.Net.HttpWebRequest.GetResponse()
I copied out the VERY LONG URL and tried it in a browser. I get this:
The server encountered an error processing the request. See server logs for more details.
Nothing is written to the Event Log.
Now, in my experience, a Bad Request usually means IIS can't match up a host name; however, the host name here is correct and resolves just fine.
I examined VERY LONG URL carefully. It's indeed long at 432 characters, but that should still be valid. There are a lot of odd characters in there -- punctuation and such. Could any of these conflict with IIS from a security perspective?
At the very least, is there a better way to get IIS to spit out an informative log file?
Here is VERY LONG URL:
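As for getting IIS to produce a more informative log: Failed Request Tracing can be enabled per-site. A minimal web.config sketch (the provider areas and status-code range here are illustrative choices, and tracing must also be enabled for the site in IIS Manager or applicationHost.config):

```xml
<system.webServer>
  <tracing>
    <traceFailedRequests>
      <!-- Trace every path; narrow this once the failing request is known -->
      <add path="*">
        <traceAreas>
          <add provider="WWW Server" areas="Security,RequestRouting" verbosity="Verbose" />
        </traceAreas>
        <!-- Capture anything in the 4xx/5xx range, including this 400 -->
        <failureDefinitions statusCodes="400-599" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>
```

The resulting FREB XML logs land under %SystemDrive%\inetpub\logs\FailedReqLogFiles by default.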
I had this or a similar error. I think it was fixed by increasing the setting for the query string length:
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxQueryString="16384" />
    </requestFiltering>
  </security>
</system.webServer>
If the querystring is longer than the limit, it gets truncated (or otherwise mangled) so that the search service can't parse it.
I wondered about this too, but there's already a maxQueryString="65536" in there.
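One thing that might be worth ruling out (an assumption on my part, not something confirmed in this thread): on .NET 4, ASP.NET enforces its own URL and query-string limits via httpRuntime, separately from IIS request filtering, and the ASP.NET defaults are much lower than 65536. A sketch that raises both to match the requestFiltering value:

```xml
<system.web>
  <!-- ASP.NET's own limits; defaults are far smaller than the
       requestFiltering maxQueryString already set in this config -->
  <httpRuntime maxQueryStringLength="65536" maxUrlLength="65536" />
</system.web>
```

If only the IIS-level limit was raised, ASP.NET could still reject the request before the service sees it.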
Doesn't IndexingService.svc support methods other than GET? That querystring can get a lot longer when the user is in more groups than just Everyone and Anonymous.
AFAIK it can only use GET. We ran into the problem for exactly this reason; some of our users had hundreds of roles. But I checked again in our version control, and what I checked in as the fix for the problem is the maxQueryString setting.
In that case the FTS is run as a separate application so its config is very small. The only other thing in there with a limit of some kind is this (but I think that's the standard setup):
<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <binding name="IndexingServiceCustomBinding"
               maxBufferPoolSize="1073741824"
               maxReceivedMessageSize="2147483647"
               maxBufferSize="2147483647">
        <readerQuotas maxStringContentLength="10000000" />
      </binding>
    </webHttpBinding>
  </bindings>
</system.serviceModel>
I don't see the IndexingServiceCustomBinding referenced anywhere so maybe it doesn't even have an effect.
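For what it's worth, a named WCF binding configuration only takes effect when an endpoint references it by name via bindingConfiguration. A hedged sketch of what that reference would look like (the service and contract names here are illustrative, not taken from the actual EPiServer config):

```xml
<system.serviceModel>
  <services>
    <!-- Hypothetical service entry; the real type names may differ -->
    <service name="EPiServer.Search.IndexingService.IndexingService">
      <endpoint binding="webHttpBinding"
                bindingConfiguration="IndexingServiceCustomBinding"
                contract="EPiServer.Search.IndexingService.IIndexingService" />
    </service>
  </services>
</system.serviceModel>
```

If no endpoint names the binding like this, the quotas in IndexingServiceCustomBinding would indeed have no effect and the default webHttpBinding limits would apply.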
I should also say that some browsers cap the querystring length themselves, so when you try it in a browser the URL may already be truncated by the time it hits the service -- meaning you see a different error than the one the application gets when calling the service. See if you can catch the server's response using a defaultProxy in web.config and Fiddler, to make sure it really is the same error.
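To route the application's outgoing calls through Fiddler as suggested, a defaultProxy entry along these lines can be added to the calling site's web.config (127.0.0.1:8888 is Fiddler's default listening port; adjust if yours differs):

```xml
<system.net>
  <defaultProxy>
    <!-- Send all outbound HTTP through the local Fiddler instance -->
    <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="false" />
  </defaultProxy>
</system.net>
```

Remember to remove this once you've captured the failing request, or the site's outbound calls will fail whenever Fiddler isn't running.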
Did some more debugging today. Turned on Failed Request Tracing and captured the failure.
The log is sadly pretty cryptic. I can see the raw request, and can confirm that the inbound URL is completely intact, from beginning to end.
Additionally, the "host" request header is there, so it should map it to the correct website.
The only irregularity in the failed request log is in System.ServiceModel.Activation.ServiceHttpModule. That's where it suddenly decides it's a "Bad Request," for no discernible reason.
I have confirmed this same behavior on two servers.
Here's the record from the Failed Request Log:
MODULE_SET_RESPONSE_ERROR_STATUS (Warning)
ModuleName="ServiceModel-4.0", Notification="AUTHENTICATE_REQUEST", HttpStatus="400", HttpReason="Bad Request", HttpSubStatus="0", ErrorCode="The operation completed successfully. (0x0)", ConfigExceptionInfo=""