I see that Atlassian Answers is facing massive spam attacks, and today it's the same situation...
I have to say this is more than a little annoying. It looks like the same source is attacking all the time. I was wondering: is there any chance to finally prevent this kind of situation?
My suggestion for now is simply to require a CAPTCHA before every question/comment is submitted.
Any other suggestions?
I instaban those accounts as soon as I see them (which removes all their posts).
Everybody with 3000 or more Karma can instaban accounts.
Simply click on the name of the spammer and select instaban underneath the cog icon menu to the right.
But it's indeed very annoying.
We have a number of prevention measures in place to reduce spam content on Answers. The content that actually gets through our multiple filters and becomes visible to you is only about 20%-30% of all the spam content that gets submitted. The other 70%-80% is correctly blocked (there is a small number of false positives that we release manually).
Unfortunately, fighting spammers really is an arms race - things like Captchas and text recognition are not foolproof and can be subverted.
We already have a number of the systems you suggest in place, but I am reluctant to go into detail about them (security through obscurity and all that).
In general I am against the implementation of a whitelist since that could actually negatively impact the experience for real users.
Thanks for being passionate about this and holding us to account. We'll continue to work on this.
Captchas aren't really that good at stopping spam any more. I'd like to see some simpler blocks, like "you can only post one question a day until your Karma is above X", and a similar rule for comments.
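The karma-gated throttle suggested above could be sketched something like this (the threshold, cooldown, and function names are illustrative, not anything Answers actually exposes):

```python
import time

KARMA_THRESHOLD = 50           # assumed karma level above which posting is unthrottled
QUESTION_COOLDOWN = 24 * 3600  # below the threshold: one question per day

def may_post_question(karma, last_question_ts, now=None):
    """Return True if the user is allowed to post another question.

    last_question_ts is the Unix timestamp of the user's previous
    question, or None if they have never posted one.
    """
    now = time.time() if now is None else now
    if karma >= KARMA_THRESHOLD:
        return True
    return last_question_ts is None or now - last_question_ts >= QUESTION_COOLDOWN
```

Spammers start with zero karma, so a rule like this slows a bot to one post a day while leaving established users unaffected.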
One thing I've noticed is that when we instaban someone, they get recreated by the 'bot. Something that remembers the name and automatically kills the repeat accounts off would be nice.
My personal idea is to merge all the small solutions into one big one. You can only add a few questions in a row (less than 10 min apart) if:
* You have more than X karma (50 is enough I think, as Nic suggested...)
* You typed the CAPTCHA correctly (no more than 5 attempts) or correctly answered a simple question (2+2 = ?, a better solution)
* You do not include untrusted links... (the only problem is shortened/tiny links)
* Your question is not similar to the one you created last time (comparing only the text)
* You checked a checkbox such as "I am not spamming" (bots mostly do not check checkboxes, and if it is required then it is even harder for them)
If even one of the above points fails, then your question should, for example, go to a queue where it can be validated by a moderator and published after that.
Another technique is to use field value checks with obfuscated field names or fake form fields...
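The fake-form-field trick (often called a honeypot field) works by adding an input that is hidden from humans via CSS but that naive bots fill in anyway. A minimal sketch, with a made-up field name:

```python
def is_probably_bot(form_data):
    """Reject a submission if the hidden decoy field was filled in.

    'website_url' is a hypothetical field name: it is rendered
    invisible with CSS, so real users leave it empty, while bots
    that blindly populate every input give themselves away.
    """
    return bool(form_data.get("website_url", "").strip())
```

Combined with obfuscated names for the real fields, this is cheap to implement and invisible to legitimate users.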
@Dennis Kromhout Van Der Meer,
I think you know the road map. Are there any plans to reduce the Atlassian Answers spam or maybe there is already something going on?
I do not clearly understand... We moved to Confluence Questions, but I am still seeing new spam questions created by dummy users (the same as when it was powered by OSQA). How do you explain that? Isn't Confluence safe enough to prevent this?
It's still a public-sign-up system, and it's still a doddle to bypass Captcha. At least the postings are now throttled, so the spammer can't post more than a couple of times before an admin can kill them off. (There's a bug in the permissions at the moment - only admins can ban users, those of us with high karma can't do it yet)