How to Detect AI-Generated Social Media Content
TL;DR
Detecting AI-generated social media content is getting harder as the tools improve, but there are still tells: overly polished prose with no personal specifics, generated profile photos with subtle artifacts, posting patterns that don't match human behavior, and accounts with no history before a certain date. None of these are definitive alone — you're looking for clusters.
Definitions

- AI-Generated Content: Text, images, audio, or video produced by machine learning models rather than by a human. On social media, this typically means posts written by large language models (like GPT or Claude), profile photos created by image generation models, or video content synthesized from a real person's likeness.
- Deepfake: A synthetic media file, usually video or audio, that depicts a real person saying or doing something they did not actually say or do, generated using AI. The term comes from 'deep learning' + 'fake.' Originally used for video face-swaps, it now broadly refers to any AI-synthesized depiction of a real person.
- AIGC: AI-Generated Content, an umbrella term for any content produced by generative AI systems. Sometimes used as a policy category on platforms attempting to label or restrict such content.
Why This Is Getting Harder
Two years ago, AI-generated text had obvious tells: repetitive sentence structures, strange word choices, a certain breathless enthusiasm about everything. A trained reader could spot it quickly.
That’s less true now. Current language models write well. Not always with personality or specificity, but grammatically correct, contextually appropriate, and plausible. The easy tells are disappearing.
Images followed the same arc. Early GAN-generated faces had obvious artifacts — the “uncanny valley” problem. Modern diffusion models produce photorealistic faces that pass casual inspection. The tells are still there, but you have to look harder.
This matters for social media because platforms are full of accounts built around AI-generated content — not as experiments, but as production operations. Some are influence accounts being built up before being used for political or commercial purposes. Some are pure ad-fraud. Some are content farms posting for algorithmic engagement. All of them look more human than they used to.
Text: What to Look For
Generic polish without personal specifics. Real people write about their actual lives — specific places, specific frustrations, specific things that happened to them. AI text is often confidently general. “I’ve been thinking a lot about authenticity online lately” without any specific incident that prompted the thought. “This is such an important conversation” without a personal stake in it.
The hedge-then-claim pattern. AI text often follows a structure: acknowledge complexity, then make an assertion, then invite dialogue. “While there are many perspectives on [topic], I believe [bland safe take]. What do you think?” Real people are more likely to just have an opinion without the scaffolding.
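The hedge-then-claim scaffolding is formulaic enough that a crude pattern match can flag the most blatant cases. A minimal sketch in Python; the phrase lists are illustrative assumptions, not a vetted detector, and real classifiers would need far broader coverage:

```python
import re

# Crude scaffolding detector: a hedge phrase, followed later by a
# dialogue invitation. Both phrase lists are illustrative assumptions.
HEDGES = r"(while there are many perspectives|it's a nuanced topic|there are valid points on both sides)"
INVITES = r"(what do you think\?|thoughts\?|let's discuss)"

def hedge_then_claim(text: str) -> bool:
    """Return True if the post opens with a hedge and closes with an invite."""
    t = text.lower()
    h = re.search(HEDGES, t)
    i = re.search(INVITES, t)
    return bool(h and i and h.start() < i.start())

post = ("While there are many perspectives on remote work, "
        "I believe balance matters. What do you think?")
print(hedge_then_claim(post))  # True
```

A human post like "Third no-show this week. I'm done with this supplier." matches neither half of the pattern.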
Vocabulary that doesn’t match the claimed identity. A plumber posting about their day should write like a plumber. If the vocabulary and sentence structure read like a business writing course, something is off.
No mistakes. Real humans make typos, use weird abbreviations, and occasionally write something that doesn’t quite make sense. A feed of perfectly proofread posts is a mild signal.
Posting times that don’t make sense. Real people post during lunch, in the evening, when something annoying just happened. AI-generated content farms often post on schedules: every four hours, or every day at 9am.
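Scheduled posting shows up statistically as unnaturally even gaps between posts. One way to quantify this is the coefficient of variation (CV) of inter-post intervals: human posting is bursty (high CV), while a scheduler produces near-identical gaps (CV near zero). A sketch; the interpretation of CV values is an assumption, not a calibrated threshold:

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def interval_regularity(timestamps: list[datetime]) -> float:
    """Coefficient of variation of gaps between consecutive posts.

    Near 0.0 suggests scheduled posting; bursty human activity
    typically lands well above 1.0 (an illustrative rule of thumb).
    """
    gaps = [
        (b - a).total_seconds()
        for a, b in zip(timestamps, timestamps[1:])
    ]
    if len(gaps) < 2:
        return float("inf")  # not enough data to judge
    return stdev(gaps) / mean(gaps)

# A feed posting exactly every four hours has zero variation in its gaps.
scheduled = [datetime(2024, 1, 1) + timedelta(hours=4 * i) for i in range(10)]
print(interval_regularity(scheduled))  # 0.0
```

On real data you would also want to account for time zones and quiet hours; a scheduler with randomized jitter defeats this check, which is part of why no single signal is definitive.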
Images: What to Look For
Hands. This is the most reliable tell for generated images, though it’s improving. AI image models have historically struggled with fingers — too many, too few, fused together, or oddly proportioned. Look at the hands if they’re visible.
Text within images. AI models generate images with text that is usually garbled, illegible, or follows the shape of text without being readable. If an image has a sign, book cover, or label, zoom in.
Background repetition. Diffusion models sometimes create backgrounds with repeated patterns or smeared objects — trees that blend together, crowd scenes where people look copied. It’s subtle.
Too-perfect faces. Real people have asymmetry, pores, occasional blemishes, varied lighting across their face. Generated faces tend to be slightly too smooth and too symmetric. The eyes in particular often have an eerie quality — they look directly at the camera with an intensity that doesn’t match the casual context.
No environmental context. Real people’s photos have backgrounds that make sense — their actual house, their actual street. AI-generated “lifestyle” photos often have generic, vaguely aspirational backgrounds that don’t correspond to any real place.
Accounts: Behavioral Signals
The profile photo and the posts can both be generated with AI. The account history is harder to fake.
Creation date. An account created in the last few months with thousands of posts and followers is suspicious. Growth that fast is either bought or botted.
No birthday posts, no milestones. Real long-term social media accounts have years of mundane human stuff — birthday wishes, holiday references, reacting to world events in real time. An account that only posts on-topic content with no human texture is a flag.
Follower/following ratio anomalies. An account with 50,000 followers following 12 people that only posts motivational content is probably not a real person who organically built an audience. Bought followers tend to come in round numbers and the account age-to-follower ratio doesn’t add up.
Engagement that doesn’t match content quality. High engagement on mediocre content, or engagement that comes immediately after posting from accounts with similar patterns, suggests bot-boosted distribution.
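The metadata-based signals above (account age versus follower count, follower/following ratio, round follower numbers) can be checked mechanically from public profile data. A hedged sketch; every threshold here is an illustrative assumption, not a platform rule:

```python
from datetime import datetime

def behavioral_flags(created: datetime, now: datetime,
                     followers: int, following: int) -> list[str]:
    """Heuristic red flags from account metadata.

    All thresholds are illustrative assumptions for the sketch.
    """
    flags = []
    age_days = max((now - created).days, 1)
    if followers / age_days > 100:           # implausibly fast organic growth
        flags.append("growth_too_fast")
    if followers >= 10_000 and following < 50:
        flags.append("ratio_anomaly")        # big audience, follows almost nobody
    if followers >= 1_000 and followers % 1_000 == 0:
        flags.append("round_follower_count")  # bought followers arrive in round batches
    return flags

# The 50,000-followers-following-12 account from above, two months old:
flags = behavioral_flags(datetime(2024, 9, 1), datetime(2024, 11, 1),
                         followers=50_000, following=12)
print(flags)  # ['growth_too_fast', 'ratio_anomaly', 'round_follower_count']
```

A nine-year-old account with 800 followers and 400 follows trips none of these checks, which is the point: the flags should fire together on farmed accounts and stay quiet on ordinary ones.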
The Honest Limitation
None of these tells are definitive. Some real people write polished, generic prose. Some real people have new accounts. Some real photos look weird.
What you’re looking for is clusters — multiple signals pointing the same direction. One unusual thing is noise. Three unusual things on the same account is worth skepticism.
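The "clusters, not single signals" idea maps naturally onto a simple scoring function: count independent red flags and escalate only when several co-occur. A sketch with illustrative bucket boundaries; a real system would weight signals rather than count them equally:

```python
def suspicion_score(signals: dict[str, bool]) -> str:
    """Bucket an account by how many independent red flags fired.

    One flag is noise; three or more is worth skepticism.
    The bucket boundaries are illustrative, not calibrated.
    """
    hits = sum(signals.values())
    if hits <= 1:
        return "probably noise"
    if hits == 2:
        return "worth a second look"
    return "worth skepticism"

account = {
    "generic_polished_text": True,
    "generated_looking_photo": True,
    "new_account_high_followers": True,
    "scheduled_posting_cadence": False,
}
print(suspicion_score(account))  # worth skepticism
```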
The deeper problem is that the tools generating this content are improving faster than the tools detecting it. The tells described above are accurate today. Some will be obsolete in a year as the models get better at mimicking human imperfection.
The only durable solution is verification at the source — proving there’s a real human behind an account before they can post. Detection after the fact is a game you can’t win long-term.
Truliv requires a liveness check before any account can post — not to collect identity, but to prove there’s a live human present. It’s not a perfect solution to everything wrong with social media, but it does mean every account passed a test a bot cannot pass.
Q&A
How can you tell if content is AI generated?
No single signal is reliable anymore, but clusters of signals are meaningful. For text: watch for polished, generic prose that makes broad claims without personal specifics, avoids contractions or uses them oddly, and keeps a formal register even for casual topics. For images: look for AI artifacts such as hands with the wrong number of fingers, garbled or illegible text, backgrounds with repeated or smeared patterns, or mismatched ears. For accounts: a recent creation date, no personal photos or milestones, posting at inhuman frequency, or posting at consistent intervals that suggest scheduling rather than impulse.
What are signs of a bot account?
Common signs include: an account created recently with no history, a username that is a name plus random numbers (common with auto-generated accounts), a profile photo that looks like a generated face (symmetric, perfect skin, slightly uncanny eyes), a posting cadence too consistent or too frequent for a human, posts that jump on trending topics immediately without any personal angle, and follower/following ratios that look purchased (thousands of followers while following nobody, or follower counts in exact round numbers).
Want to be first on a human-only network?
Try Truliv free — no credit card required.