The strangest thing happened to me the other day. And I’m wondering why Facebook doesn’t have the (artificial) intelligence to catch it.
This attractive woman who I didn’t know sent me a friend request on Facebook.
Has that ever happened to you? It has? So it’s not just me?
In fact, since I’ve been on Facebook, this has happened hundreds of times. The women in question always have the same characteristics.
- They’re pretty, cute, nice-looking, or attractive in some way.
- They’re single.
- I’ve never seen them before.
- The main thing on their profile is pictures of themselves, often in revealing outfits and poses.
- Other photo posts, if any, are graphics with sayings about love.
- If there are text posts, they say something like “I’m Maryam and I’m lonely. *-*-* I have so much love in my heart! 💖💖💖. I like kissing and snuggling and staying under the covers when the sun comes up!!!”
- If they have a job, it’s as a student.
- They don’t have any friends in common with me.
This is, of course, a scam. It’s a variety of catfishing: posing behind a fake persona on social media. An expert on online dating (yes, there are such things) told me their modus operandi: friend you, get close to you, build an online relationship, and then land in desperate straits and ask for money. Naturally, there is no woman; the person on the other end is often a man, and it’s certainly not the person whose picture you’re looking at.
Men are the main victims, but a similar scam exists for women where the accounts appear to be well-built “silver fox” military types.
A.I. ought to be able to catch this
Once you’ve seen one or two of these, they’re dead easy to spot. So why can’t Facebook spot them?
I know enough about A.I. to know how you’d do this. You’d take a billion cases of friending. You’d look specifically at friend requests from women to men, and at cases where the account doing the asking has very few friends already. You’d then load in all the variables known about those cases, and use them in a vast machine-learning model to predict which requests tend to lead to accepted friendships, and which lead to complaints by the person getting the request.
You ought to be able to generate a nearly foolproof algorithm to identify these fraudsters the moment they start sending friend requests and kick them off the network — or at least delete the requests and put them in purgatory.
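The approach I’m describing can be sketched in miniature. To be clear, everything below is hypothetical: the feature names, the synthetic data, and the numbers are my assumptions based on the red flags listed earlier, not anything Facebook actually measures, and a plain logistic regression stands in for whatever vast system they would really build.

```python
# Hypothetical sketch only: feature names and synthetic data are assumptions,
# not real Facebook signals. A tiny logistic regression stands in for a
# production-scale model.
import math
import random

# Features per friend request: (mutual_friends, sender_friend_count,
# account_age_days, fraction_of_posts_that_are_selfies).
# Label: 1 = later reported as a scam, 0 = legitimate.

def make_dataset(n=1000, seed=42):
    """Synthetic requests mimicking the pattern: scam accounts have no
    mutual friends, few friends, young accounts, and selfie-heavy feeds."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        scam = rng.random() < 0.3
        if scam:
            x = (0, rng.randint(1, 30), rng.randint(1, 60),
                 rng.uniform(0.7, 1.0))
        else:
            x = (rng.randint(1, 50), rng.randint(50, 800),
                 rng.randint(200, 4000), rng.uniform(0.0, 0.5))
        rows.append((x, 1 if scam else 0))
    return rows

def _sigmoid(z):
    # Clamp z for numerical safety before exponentiating.
    return 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

def train_logreg(rows, epochs=100, lr=0.05):
    """Plain logistic regression via stochastic gradient descent."""
    scales = [50.0, 800.0, 4000.0, 1.0]  # bring features to similar ranges
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in rows:
            p = _sigmoid(b + sum(wi * xi / s
                                 for wi, xi, s in zip(w, x, scales)))
            g = p - y  # gradient of log-loss w.r.t. the logit
            b -= lr * g
            w = [wi - lr * g * xi / s for wi, xi, s in zip(w, x, scales)]
    return w, b, scales

def scam_probability(model, x):
    w, b, scales = model
    return _sigmoid(b + sum(wi * xi / s
                            for wi, xi, s in zip(w, x, scales)))

model = train_logreg(make_dataset())
# Zero mutual friends, 5 friends, 10-day-old account, nearly all selfies:
print(scam_probability(model, (0, 5, 10, 0.95)))
# Long-standing, well-connected account with a mixed feed:
print(scam_probability(model, (12, 300, 1500, 0.2)))
```

A request matching the scam profile scores far above 0.5, and an established account scores far below it; a real system would act on those scores by holding suspicious requests in purgatory rather than delivering them.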
Facebook hasn’t done this. Not only that, one of these fakes actually showed up in Facebook’s list of friend suggestions for me, “People you may know.” Its algorithm probably noticed that this account had sent scammy friend requests to a bunch of people who are my actual friends and said, “Oh, don’t forget to include this guy in your scam, too.”
Facebook says it’s trying to be a good citizen. But that’s hard to credit when you read stories like this in TechCrunch:
Facebook pays teens to install VPN that spies on them
Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms. Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits.
Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page.
The logical conclusion
Assume Facebook’s management is logical, wants to maximize profits, and doesn’t care about embarrassment unless it leads to regulation.
It would collect as much data as possible, so long as it’s not clearly violating the law.
It would use that data to increase its addictiveness.
It would ignore catfishing scams rather than put effort into stopping them, because any such effort wouldn’t impact profits at all. Not enough people are complaining.
It would send Mark Zuckerberg to apologize in front of Congress or any other legislative body that persistently demands his testimony, because apologies cost nothing.
And it would not change, because its users, er, members aren’t really complaining enough to slow down their online activities.
This is exactly what we see. It is the most logical possible explanation.
If you think this isn’t what’s going on at Facebook, what’s your explanation?