Leveraging AI to protect our users and the web

Recent advances in AI are transforming how we combat fraud and abuse and implement new security protections. These advances are critical to meeting our users’ expectations and keeping increasingly sophisticated attackers at bay, but they come with brand new challenges as well.

This week at RSA, we explored the intersection between AI, anti-abuse, and security in two talks.

Our first talk provided a concise overview of how we apply AI to fraud and abuse problems. The talk started by detailing the fundamental reasons why AI is key to building defenses that keep up with user expectations and combat increasingly sophisticated attacks. It then delved into the top 10 anti-abuse specific challenges encountered while applying AI to abuse fighting and how to overcome them. Check out the infographic at the end of the post for a quick overview of the challenges we covered during the talk.

Our second talk looked at attacks on ML models themselves and the ongoing effort to develop new defenses.

It covered attackers’ attempts to recover private training data, to poison a model’s training set with examples that cause it to learn incorrect behaviors, to modify the input a machine learning model receives at classification time to cause it to make a mistake, and more.

Our talk also looked at various defense solutions, including differential privacy, which provides a rigorous theoretical framework for preventing attackers from recovering private training data.
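The talk stayed at the conceptual level, but the workhorse of differential privacy, the Laplace mechanism, fits in a few lines. This is a minimal sketch of an epsilon-differentially-private counting query; the data and epsilon value are made up for illustration:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon: float) -> float:
    """Epsilon-DP count. A counting query has sensitivity 1 (adding or
    removing one record changes the count by at most 1), so Laplace
    noise with scale 1/epsilon is enough to mask any individual."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 52, 37, 61, 45, 28]  # hypothetical training records
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

The noisy answer is still useful in aggregate, but an attacker observing it cannot confidently tell whether any single record was in the data, which is exactly the guarantee that blocks training-data recovery.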

Hopefully you were able to join us at RSA! But if not, here are the recording and slides of our first talk on applying AI to abuse prevention, along with the slides from our second talk about protecting ML models.

The Six-Digit iPhone Passcode Isn’t Secure Anymore; Users Advised to Choose a Longer Alphanumeric Code

Longer passcodes certainly add a layer of security, but the reality is that most users will never pick a 10-digit one; at some point a balance has to be struck between convenience and security. So if you are not in a position to sacrifice security, your best option is to come up with a long alphanumeric passcode.

While the default iOS passcode now stands at six digits (it used to be four a couple of years back), users can opt for a longer alphanumeric code. To reach this option, go to Settings > Touch ID & Passcode. From there, a “Passcode Options” link lets you pick a custom alphanumeric code for your iPhone.
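To see why six digits is considered weak, a back-of-the-envelope calculation helps. The guess rate below (~12 passcodes per second, an estimate for hardware-assisted cracking) is an assumption for illustration, not a measured figure:

```python
# Average brute-force time for different passcode formats, assuming a
# hypothetical hardware-assisted rate of ~12 guesses per second.
GUESS_RATE = 12.0  # guesses/second -- an assumed figure, not measured

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible codes of the given length."""
    return alphabet_size ** length

def avg_crack_seconds(alphabet_size: int, length: int,
                      rate: float = GUESS_RATE) -> float:
    """Expected exhaustive-search time: half the keyspace on average."""
    return keyspace(alphabet_size, length) / 2 / rate

# 10 digits vs. 36 lowercase alphanumeric characters
print(f"4-digit PIN:  {avg_crack_seconds(10, 4) / 60:,.0f} minutes on average")
print(f"6-digit PIN:  {avg_crack_seconds(10, 6) / 3600:,.0f} hours on average")
print(f"8-char alnum: {avg_crack_seconds(36, 8) / 86400 / 365:,.0f} years on average")
```

At that assumed rate, a six-digit PIN falls in under a day on average, while even a short alphanumeric code pushes the search into thousands of years, which is why the longer format is worth the typing inconvenience.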

LinkedIn fixes major bug in AutoFill plugin

LinkedIn joins the data privacy breach club after a researcher detected a major vulnerability in the AutoFill plugin, which lets members automatically fill in their information on forms on other websites. The bug was found by researcher Jack Cable, who also released a proof-of-concept showing how the vulnerability could be exploited through a cross-site scripting flaw on a website.

If exploited by third parties, the bug exposes private personal information kept on user profiles, such as name, email, job, location, and phone number.

“A user’s information can be unwillingly exposed to any website simply by clicking somewhere on the page,” reads Cable’s report. “This is because the AutoFill button could be made invisible and span the entire page, causing a user clicking anywhere to send the user’s information to the website.”

The social network claimed that the AutoFill feature, which allows a website to collect profile data without explicit user consent, was restricted to whitelisted domains approved by LinkedIn, such as Twitter and Microsoft. However, Cable writes that “until my report, any website could abuse this functionality.”

After receiving a notification about the bug, LinkedIn fixed the vulnerability that could have compromised user data.

LinkedIn sent the following statement to TechCrunch:

We immediately prevented unauthorized use of this feature, once we were made aware of the issue. We are now pushing another fix that will address potential additional abuse cases and it will be in place shortly. While we’ve seen no signs of abuse, we’re constantly working to ensure our members’ data stays protected. We appreciate the researcher responsibly reporting this and our security team will continue to stay in touch with them.

For clarity, LinkedIn AutoFill is not broadly available and only works on whitelisted domains for approved advertisers. It allows visitors to a website to choose to pre-populate a form with information from their LinkedIn profile.