JuLiA is an automated comment moderation system. It uses natural language processing to detect the semantics of abusiveness in comments. **ENGLISH ONLY**
When a new comment hits your blog, it is sent to the JuLiA web service to be analyzed. Our proprietary AI converts the comment into a mathematical representation and analyzes it to measure how abusive it is. The result is a score, which is sent back to your blog to be displayed along with the comment.
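The round trip looks roughly like this. Note that the request and response field names, the score range, and the scoring itself are illustrative assumptions here, not the actual JuLiA API; the reply is simulated so the sketch is self-contained.

```python
import json

def build_request(comment_text):
    # Package the comment for submission to the web service
    # (hypothetical field name "comment").
    return json.dumps({"comment": comment_text})

def handle_response(raw):
    # Extract the abusiveness score from the service's reply
    # (hypothetical field name "abusiveness", assumed 0.0-1.0).
    payload = json.loads(raw)
    return payload["abusiveness"]

req = build_request("Nice post!")
# In production, req would be POSTed to the JuLiA web service.
# Here we simulate the service's reply for illustration.
simulated_reply = json.dumps({"abusiveness": 0.02})
score = handle_response(simulated_reply)
```

Your blog plugin would then display the comment together with the returned score.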
JuLiA is essentially a machine learning algorithm, which means that she learns to classify objects by example. We have trained the current version with over 10,000 human-tagged comments. In tagging these comments, we used a definition of abusiveness that is in line with the standards of most major online publications.
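To give a feel for what "learning by example" means, here is a toy word-count scorer trained on two tagged comments. JuLiA's real model and features are proprietary; this is purely illustrative.

```python
from collections import Counter

# Tiny stand-in for a human-tagged training set (1 = abusive, 0 = clean).
tagged = [
    ("you are an idiot", 1),
    ("great article thanks", 0),
]

# Count how often each word appears in abusive vs. clean examples.
abusive_words = Counter()
clean_words = Counter()
for text, label in tagged:
    target = abusive_words if label else clean_words
    target.update(text.split())

def score(comment):
    # Fraction of the comment's words seen in abusive training examples.
    words = comment.split()
    hits = sum(abusive_words[w] for w in words)
    return hits / max(len(words), 1)
```

A comment reusing words from abusive examples scores higher than one that does not, which is the basic idea behind classifying by example.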
Actually, no. All that JuLiA does is generate semantic metadata about a comment that is submitted to our service. Each individual user can then decide what to do with this data using the settings menu that we provide along with the plugin. In reality, all we are doing is allowing individual bloggers and publishers to take better control of their user-generated content and maintain their own editorial standards.
None of the techniques that commenters use to get around traditional keyword filters will work with JuLiA. We are currently using a vocabulary of 840,000 unique features taken from live human comments. Things like l33t speak and bro ken wor ds have already been included in the system as abusive examples, so JuLiA will recognize them.
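The reason these tricks fail is that the obfuscated spellings are themselves features in the vocabulary. The tiny vocabulary and matching function below are illustrative only; the real system draws its 840,000 features from live human comments.

```python
# Hypothetical slice of a feature vocabulary: obfuscated variants
# appear alongside the plain spelling, so disguises still match.
vocabulary = {"stupid", "stup1d", "s t u p i d", "stoopid"}

def contains_known_feature(comment):
    # Substring match against known abusive features (simplified).
    text = comment.lower()
    return any(feature in text for feature in vocabulary)
```

With a keyword filter, "stup1d" slips through; here it is simply another known feature.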
One of the unique features of machine learning systems is that they can become more customized and improve over time. Currently we use a single central algorithm that we re-train in house, but later releases will add the ability for users to re-train JuLiA and build their own customized versions. Until then, you can always control the actions JuLiA takes by setting your trust thresholds in the Settings menu.
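Threshold-based control amounts to mapping a score to an action. The cutoff values and action names below are illustrative assumptions, not the plugin's actual settings.

```python
def moderate(score, hold_threshold=0.5, reject_threshold=0.9):
    """Map an abusiveness score (assumed 0.0-1.0) to a moderation action."""
    if score >= reject_threshold:
        return "reject"
    if score >= hold_threshold:
        return "hold for review"
    return "approve"
```

Raising the thresholds makes JuLiA more permissive; lowering them makes her stricter, which is exactly the control the Settings menu exposes.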