How to Outsmart the Scammers and Spammers at Their Own Game
Game Theory paradox explains the rise of online fraud from a mind-blowing perspective

Question everything you hear and trust only a fraction of what you read online.
This thought first occurred to me last March, and it turned into something worth writing about.
Since then, misinformation and fraudulent activity have gone haywire, and I feel the need to look at it once again from a different perspective.
In light of recent events, leading social media companies seem determined to open the community floodgates and waive fact-checking at a time when unchecked AI is the number one player on the online gameboard.
AI has pushed fraudulent behavior on the internet to unprecedented levels, and I suspect that what we see is only the tip of the iceberg.
I am a community-builder with decades of experience in complex digital settings, and I’ve always been a player siding with the light side of the Force. So, to use another Star Wars analogy to illustrate my perspective: when you look across the content-economy chessboard, there’s one thing tricksters, fraudsters, and scammers all seem to have in common.
Something I heard the Stranger say in one of the episodes of Star Wars: The Acolyte.
When asked, “What do you want?” the Stranger answered:
“The freedom to wield my power the way I like.”
Sorry, Stranger, but your freedom ends where my nose begins. Your mindset only puts us one step closer to Narcissus’ demise, drowned while staring into his own reflection in a pool of water.
The Prisoner’s Dilemma
Have you heard about one of the great conundrums in game theory known as the prisoner’s dilemma?
The prisoner’s dilemma is a classic non-zero-sum game that can be applied to almost any situation where humans interact toward a goal.
In a perfect scenario, as described by the Nash equilibrium framework, no player has an incentive to unilaterally change their strategy, given the strategies chosen by the other players.
We arrive at a stable point at which every player has made their best choice, given the choices of the others.
Let’s put this in a real-world scenario as presented by the prisoner’s dilemma originally formulated by Merrill Flood and Melvin Dresher.
Two criminals get arrested and taken into custody. The police don’t have enough evidence for a conviction, so the prisoners are taken into separate rooms for interrogation.
Each has two options: confess or deny.
If one of the prisoners confesses (betrays the other) and the other remains silent, the one who confessed earns a get-out-of-jail-free card while the silent accomplice serves the maximum sentence.
If both deny, each serves only a short sentence.
If both confess, each serves a moderate sentence: longer than if both had stayed silent, but shorter than the maximum.
In this scenario, the Nash equilibrium occurs when both prisoners confess. Even though both could benefit more by denying, the uncertainty about the other’s choice leads them to confess, as this is the strategy that minimizes the worst possible outcome (maximum penalty).
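The scenario above can be checked mechanically. Here is a minimal sketch in Python; the specific sentence lengths are illustrative assumptions, not figures from the original formulation:

```python
# Toy model of the prisoner's dilemma described above.
# Payoffs are years in prison (lower is better); the exact
# numbers are hypothetical, chosen only to match the ordering
# in the story: betrayal beats silence whatever the other does.
# (my_choice, other_choice) -> my sentence in years
SENTENCE = {
    ("deny", "deny"): 1,        # both stay silent: short sentence each
    ("deny", "confess"): 10,    # I stay silent, accomplice betrays me: maximum
    ("confess", "deny"): 0,     # I betray a silent accomplice: walk free
    ("confess", "confess"): 5,  # both confess: moderate sentence each
}

def best_response(other_choice):
    """Return the choice that minimizes my sentence, given the other's choice."""
    return min(("deny", "confess"), key=lambda c: SENTENCE[(c, other_choice)])

def is_nash(a, b):
    """A profile is a Nash equilibrium when each choice is a best response
    to the other player's choice."""
    return best_response(b) == a and best_response(a) == b

for a in ("deny", "confess"):
    for b in ("deny", "confess"):
        print(a, b, "->", "Nash equilibrium" if is_nash(a, b) else "not stable")
```

Running the check confirms the text: confessing is the best response no matter what the other prisoner does, so mutual confession is the only stable profile, even though mutual denial would leave both better off.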
In games, as in life, achieving a Nash equilibrium is never easy; natural human behavior dictates otherwise.
We assume each player independently wants to increase their advantage as much as possible, regardless of the other player’s outcome, for personal gain.
Sooner or later, this leads each player to betray the other, even though, curiously, logic shows both players would get a better result by collaborating.
Betrayal seems like the easy way out whenever we are presented with a non-sequential scenario.
Unfortunately, games, and almost every human construct, are designed in ways that incentivize each player to navel gaze and defraud the other, even after having promised to collaborate.
This is the key point of the dilemma.
So how do we control the game from the gamemaster standpoint?
In theory, the path to equilibrium can only be achieved through enforced collaboration.
Let’s imagine a game that is being repeated at regular intervals with rewards being given to players at the end of each iteration.
At each new cycle, the gamemaster offers each player the chance to punish the other for not complying collaboratively.
In this theoretical scenario, fraudsters are less likely to cheat in the next cycle out of fear of punishment, which, again in theory, may put everyone one step closer to the best possible outcome.
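The repeated game with punishment can be sketched in a few lines. The payoff values and the "grim trigger" punishing strategy below are my own illustrative assumptions, chosen to show the mechanism the paragraph describes:

```python
# Sketch of the repeated game above: a "grim trigger" player cooperates
# until cheated once, then punishes by defecting in every later round.
# Per-round reward points (higher is better) are hypothetical.
PAYOFF = {  # (my_move, their_move) -> my points
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they cheat me
    ("D", "C"): 5,  # I cheat a cooperator (one-off temptation)
    ("D", "D"): 1,  # mutual defection
}

def play(strategy_a, strategy_b, rounds=100):
    """Run the iterated game; each strategy sees both histories."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def grim_trigger(own, other):
    return "D" if "D" in other else "C"

def always_defect(own, other):
    return "D"

def always_cooperate(own, other):
    return "C"

# Cheating pays exactly once; the punishment then wipes out the gain.
print(play(always_defect, grim_trigger))     # (104, 99)
print(play(always_cooperate, grim_trigger))  # (300, 300)
```

Over 100 rounds the cheater pockets the one-off temptation and then grinds along at the mutual-defection payoff, finishing far behind the steady cooperators. This is the sense in which the fear of punishment makes defection unattractive in the next cycle.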
If only things were that easy.
New insights from Game Theory
In Star Wars: The Rise of Skywalker, Kylo Ren tries to convince Rey to join him after revealing she’s Palpatine’s granddaughter. He argues the dark side is inescapable and should be embraced.
Rey ultimately proves the dark side isn’t tied to bloodlines; we have been granted free will to pursue the righteous path.
So why do we decide to forsake virtue and seek out personal gain even at the cost of others?
Humans are social by nature and we couldn’t have made it this far without a certain degree of cooperative behavior.
Unfortunately, deep within our core matrix, the moral code that should guide us to a better future through collaboration is often trampled by the navel-gazing desire to increase our own payoff at the cost of the other players sitting at the table.
The strong prey on the weak and even if we deny it in the light of our sacrosanct human virtues, basic human behavior often proves otherwise, with countless examples.
Remember the toilet paper rush during the pandemic? Any situation where we find ourselves in the face of personal demise may push us to go back to the most basic animal instincts.
Recent research in game theory suggests that, in certain scenarios, a player can systematically exploit an equal opponent for a better payoff while the overall strategy remains stable for everyone.
I’ve been looking into ways of extrapolating this into real-world scenarios, namely as applied by social media influencers, Ponzi schemes, and engagement rings.
In each case, the strategy is not based on power but on deceit. The top players exploit others by making them believe there’s something to gain for everyone.
In each of the above, those being exploited play along because they believe they are contributing to a successful game-playing strategy where everyone wins.
However, I have yet to encounter a real-world scenario where this strategy will indeed be stable.
The most likely outcome is that the bubble bursts and the vast majority comes out of the game empty-handed.
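That bursting bubble is easy to quantify with a toy model. All the numbers below (buy-in, promised return, recruitment growth, number of rounds) are hypothetical assumptions for illustration, not data about any real scheme:

```python
# Toy sketch of the "everyone wins" pyramid described above: each
# round, twice as many new people buy in, and their money funds the
# promised payout to the previous cohort until recruitment stops.
def run_ponzi(buy_in=100, promised=150, growth=2, rounds=8):
    """Return (total entrants, entrants paid out, entrants left empty-handed)."""
    cohorts = [1]               # number of entrants per round
    pot = cohorts[0] * buy_in   # money currently held by the scheme
    paid = 0                    # entrants who received the promised return
    for _ in range(rounds - 1):
        new = cohorts[-1] * growth
        pot += new * buy_in     # fresh money from the newest cohort
        cohorts.append(new)
        owed = cohorts[-2] * promised
        if pot >= owed:         # pay the previous cohort out of new money
            pot -= owed
            paid += cohorts[-2]
    total = sum(cohorts)
    return total, paid, total - paid

total, paid, losers = run_ponzi()
print(f"{total} players, {paid} paid out, {losers} left empty-handed")
# 255 players, 127 paid out, 128 left empty-handed
```

With these assumed numbers, the moment recruitment stops, the final and largest cohort (more than half of everyone who ever played) walks away with nothing: the "stable strategy for everyone" was never stable at all.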
If punishment doesn’t solve the problem and collaboration only gets us a reduced penalty, what’s the way forward?
You can’t out-scam a scammer. Just play the game the best way you can. Play by the house rules. Don’t try to come up with your own method to outsmart the jester, or the joke will be on you.
All human ingenuity is a form of play, and in life, everyone can play or be played. No matter what decision you take, you’ll always be faced with choices and left to wonder. When in doubt, the best move is to go by the book. So choose wisely instead of risking losing it all, because then you’re done. Game over.
Rui Alves is a language teacher, published author, international book judge, and publisher. He runs Alchemy Publications and serves as editor-in-chief for The Academic, Portugal Calling, Engage, Rock n’ Heavy, Beloved, Zenite, Poetaph, and Babel.