(axis OR axes) title:spacing
title:spacing (axis OR axes)
posts:(axis OR axes) title:spacing
(toolbar OR event) title:spacing
None of these works correctly. The engine drops the requirement that "spacing" appear in the title and returns a lot of results that don't contain it.
Yeah, this one is a bit odd. I have min-match (mm) set to 100% (based on your earlier feedback), which works fine for simple lists of terms.
I *think* that in this case each side of the query is being treated as a sub-clause, and perhaps the mm rules aren't enforced across sets of sub-clauses. I've tried several things to force the correct behavior, to no avail. For now I need to move on to another item, but I'll keep the internal bug report Mitch filed for this open for further research. Meanwhile, explicitly using AND between the clauses does coerce the desired behavior.
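For anyone hitting the same issue, the AND workaround described above would look like this against the example queries earlier in the thread (a sketch, not verified against this particular index):

```
title:spacing AND (axis OR axes)
```

With the explicit AND, the title requirement is enforced regardless of how mm is applied to the parenthesized sub-clause.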
Thank you for all the useful feedback you have provided us about the search implementation; we are listening!
Maybe replace the period with a space and treat each part as its own word, or something along those lines.
You are using Solr, right? Reading the docs, it seems all you need to do is switch the tokenizer setting to one that breaks on punctuation not surrounded by whitespace, like WhitespaceTokenizerFactory.
Actually, that tokenizer (WhitespaceTokenizerFactory) only splits on whitespace, so any punctuation ends up included in one of the resulting tokens. I agree that we need a better solution for words containing dot notation. I've done some searching and found some interesting ideas on how to implement this, including a custom tokenizer, modifying the standard tokenizer's grammar, using a post-tokenizing filter, and more. I'm creating an internal report to track this. Thanks again for your input!
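For what it's worth, the replace-the-dot-with-a-space idea suggested above maps fairly directly onto Solr's char filters, which rewrite the input stream before the tokenizer runs. The field type below is a hypothetical sketch (the name `text_dotsplit` and the exact pattern are assumptions, not anything from our schema):

```xml
<fieldType name="text_dotsplit" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- Replace a dot between word characters with a space before tokenizing,
         so e.g. "axis.title.spacing" is indexed as "axis title spacing" -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="(\w)\.(\w)" replacement="$1 $2"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

One caveat with this approach: if the char filter is applied only at index time, a query for the literal dotted form would need the same analysis (or the filter applied on the query analyzer too) to match.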