Pay attention to the priority of the active settings.
When a visitor first visits your website, the Dragonfly CMS Firewall runs the following checks in order: IPs, hostnames, bots, unknown user agents, referers and, last, DNS block lists.
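The order above can be sketched as a first-match pipeline. This is only an illustration in Python; the function and field names are hypothetical, and the actual Dragonfly CMS implementation (in PHP) differs.

```python
# Illustrative sketch of the firewall's priority order; names are
# hypothetical, not the actual Dragonfly CMS (PHP) implementation.

def firewall_verdict(visitor, checks):
    """Run checks in order; the first one returning a verdict wins."""
    for name, check in checks:
        verdict = check(visitor)  # "block", "allow" (protected) or None
        if verdict is not None:
            return name, verdict
    return "default", "allow"

# The documented order: IPs, hostnames, bots, unknown user agents,
# referers and, last, DNS block lists.
checks = [
    ("ip",        lambda v: "block" if v["ip"] == "203.0.113.9" else None),
    ("hostname",  lambda v: None),
    ("bot",       lambda v: None),
    ("useragent", lambda v: "block" if not v["ua"].strip() else None),
    ("referer",   lambda v: None),
    ("dnsbl",     lambda v: None),
]

print(firewall_verdict({"ip": "203.0.113.9", "ua": "Mozilla/5.0"}, checks))
# ('ip', 'block') - the IP check fires before anything else
```

Because the checks run in this fixed order, a match in an earlier check (e.g. a protected IP) decides the visitor's fate before the later, more expensive checks ever run.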
On this page you will find all settings and where to add new data.
Click the [+] image to expand a section and the [-] image to collapse it.
When you expand a section, some data is saved to a cookie to remember your preferences and help you work with it.
When you collapse a section, that data is cleared from the cookie, so for best performance collapse all sections when you are done working with them. Note: terms like "valid*" or "protect*" mean that the visitor will skip flooding protection and any other "blocking" setting.
Cache time to live
How many days the cached data will live in the database.
Ban duration
How long bans last. This setting applies to flooding, blocked unknown user agents and blocked referers.
Block unknown user agents
We maintain a small list of browser and crawler user agents, but keeping such a list complete is impossible.
Previously this setting blocked every user agent not in our list; now it blocks "impossible" user agents, e.g. empty, "-" or malformed user agents.
The DNS server to query for bots and hostname validations.
Enter localhost, a hostname, an IPv4 or an IPv6 address where the DNS server can be found; for local installs and testing, just point it to your router.
It must be a recursive DNS server.
Mainly used when a visitor attempts to register: the registration is blocked if a match is found. Registration becomes possible as soon as the visitor changes email address.
Access will be denied to visitors coming from the specified referer websites.
Results are cached and access will be denied according to your "Ban duration" setting.
You can specify hostnames to be blocked or protected; results are cached according to "Cache TTL".
When enabled, the DNS server will resolve a hostname for every visitor and cache the result, validated or not.
Since hostnames are easy to fake, a hostname is considered valid only when the visitor's IP has a proper reverse DNS entry that resolves back to it: ip -> hostname -> ip.
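A minimal sketch of that ip -> hostname -> ip validation, using Python's standard resolver calls. The lookup functions are injectable here purely so the logic can be demonstrated without a live DNS server; this is not the actual Dragonfly CMS code.

```python
import socket

def fcrdns_hostname(ip, ptr_lookup=None, a_lookup=None):
    """Return the hostname only if it passes ip -> hostname -> ip."""
    # Default to the system resolver; injectable for testing.
    ptr_lookup = ptr_lookup or (lambda addr: socket.gethostbyaddr(addr)[0])
    a_lookup = a_lookup or (lambda host: socket.gethostbyname_ex(host)[2])
    try:
        hostname = ptr_lookup(ip)       # ip -> hostname (PTR record)
        addresses = a_lookup(hostname)  # hostname -> IPs (A records)
    except OSError:
        return None                     # no PTR or no forward record
    # A PTR record alone is easy to fake; only trust the hostname when
    # it resolves back to the original IP.
    return hostname if ip in addresses else None
```

If the forward lookup of the claimed hostname does not contain the original IP, the hostname is rejected and the visitor falls through to the other checks.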
This service protects your website from malicious visitors and malicious crawlers trying to overload it.
The service ignores any protected IP, protected hostname or protected bot that has been successfully validated by the firewall.
When set to Normal, multi-tab browsing or opening a series of links one after another is treated as a "normal" browsing style, allowing visitors to better experience your website even on a very fast connection with a very fast CPU, and even when Dragonfly CMS is installed on a very fast, low-load server.
However, if such a visitor tries to restore a browser session with more than 5 or 6 tabs pointing to your website, they will most likely get banned.
How severe it should be. If your website is hosted on a constantly slow or busy server, set this to High.
The firewall will provide extra details on each flooding event.
With this service you can specify an IP-based access control list.
Block or protect visitors by a plain IPv4 or IPv6 address, or by a CIDR range.
If you are not sure how IP/CIDR notation works, practice with an IPv4/IPv6 CIDR calculator.
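For example, Python's standard ipaddress module can tell you whether an address falls inside a CIDR block. This is shown only to illustrate CIDR membership, not the CMS's own matching code.

```python
import ipaddress

def ip_in_cidr(ip, cidr):
    """Return True if ip falls inside the given CIDR block (v4 or v6)."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

print(ip_in_cidr("192.168.1.42", "192.168.1.0/24"))  # True
print(ip_in_cidr("192.168.2.1",  "192.168.1.0/24"))  # False
print(ip_in_cidr("2001:db8::1",  "2001:db8::/32"))   # True
```

So a single ACL entry like 192.168.1.0/24 covers all 256 addresses from 192.168.1.0 to 192.168.1.255.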
Blocked or protected IP
Something short to remind you what the entry is about.
This is where you can add new user agents to be detected as bots; it overrides any user agent detection already done by Dragonfly CMS, since some bots hide themselves behind common user agents.
Use a unique name for new bots.
The string to search and match in the user agent.
Enter the crawler's hostname only if you want to verify the bot's origin, thus protecting it from the other checks.
A URL where to find information about this user agent, if any.
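Detection by match string amounts to a case-insensitive substring search over the user agent, roughly like the sketch below. This is an illustration with made-up entries, not the actual Dragonfly CMS matcher.

```python
def match_bot(user_agent, bots):
    """Return the name of the first bot whose match string occurs in
    the user agent (case-insensitive), or None if nothing matches."""
    ua = user_agent.lower()
    for name, needle in bots.items():
        if needle.lower() in ua:
            return name
    return None

# Hypothetical entries: unique bot name -> match string.
bots = {"Googlebot": "Googlebot", "Bingbot": "bingbot"}
match_bot("Mozilla/5.0 (compatible; Googlebot/2.1)", bots)  # "Googlebot"
match_bot("Mozilla/5.0 (Windows NT 10.0)", bots)            # None
```

Keeping each match string specific avoids accidentally tagging ordinary browsers as bots.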
In this tab you can add, remove or modify DNS block list servers.
The server hostname.
Basically a whitelist for return codes.
Since return codes are not the same for all DNSBL servers, we try to accommodate most of them.
The first supported format is a comma-separated list of return codes, e.g.: 127.0.0.1, 127.0.0.2, 127.0.0.3.
The second supported format uses bits, e.g.: b:191.
You must check their usage page before enabling this service and understand exactly what you want to filter.
On some DNSBL usage pages you will notice the word "bitmask"; if that is the case, you must treat the last octet of the returned code as a bitmask.
For example, to whitelist 127.0.0.1 and 127.0.0.64, compute 1 + 64 = 65 and use b:65. Note: even though return codes look alike, there is a huge difference between return codes that use a bitmask and those that do not. Read their usage page.
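Both formats can be checked like this. The sketch below is an assumed interpretation of the setting written in Python; the real parsing in Dragonfly CMS may differ.

```python
def code_whitelisted(returned, whitelist):
    """Match a DNSBL return code against the whitelist setting.

    whitelist is either a comma-separated list of return codes
    ("127.0.0.1, 127.0.0.2") or a bit spec ("b:65"), in which case the
    LAST octet of the returned code is tested against the bitmask.
    """
    if whitelist.startswith("b:"):
        mask = int(whitelist[2:])
        last_octet = int(returned.rsplit(".", 1)[1])
        return (last_octet & mask) != 0
    return returned in {c.strip() for c in whitelist.split(",")}

code_whitelisted("127.0.0.2", "127.0.0.1, 127.0.0.2")  # True
code_whitelisted("127.0.0.64", "b:65")                 # 64 & 65 = 64 -> True
code_whitelisted("127.0.0.2", "b:65")                  # 2 & 65 = 0 -> False
```

The bitmask form matches any return code whose last octet shares at least one bit with the mask, which is exactly why bitmask and non-bitmask DNSBLs must not be configured the same way.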
DNS block lists I've tried so far.
sbl-xbl.spamhaus.org: Fast, reliable, no false positives, but misses most spammers.
dnsbl.tornevall.org (exclude "b:191"): Moderately fast, reliable, some false positives due to ghost data, some others due to unjustified, uncontrolled submissions by 3rd-party anti-spam mechanisms; catches most spammers.
safe.dnsbl.sorbs.net (exclude "127.0.0.10"): Fast, reliable, no open-proxy false positives, untested open-proxy miss rate, untested spammer false positive/miss rate.