Reminds me of when Reddit posted their year end roundup https://web.archive.org/web/20140409152507/http://www.reddit... and revealed their “most addicted city” to be the home of Eglin Air Force Base, host of a lot of military cyber operations. They edited the article shortly afterward to remove this inconvenient statistic
Lammy 55 days ago [-]
> host of a lot of military cyber operations
Relevant: “Containment Control for a Social Network with State-Dependent Connectivity” (2014), Air Force Research Laboratory, Eglin AFB: https://arxiv.org/pdf/1402.5644.pdf
jsheard 55 days ago [-]
Did they edit it? I stepped forward a few years and it's still there.
The boring but more likely explanation is that "most addicted" is just a weird statistic that produced weird results.
Eglin has something like 50,000 people, but its actual population as a census-designated place is more like 5,000.
Oak Brook, IL was also "most addicted", but people didn't run with the idea that McDonald's HQ was running psyops.
tracerbulletx 55 days ago [-]
I mean they should. Because corporate influence networks exist just as much as state run ones do.
dialup_sounds 55 days ago [-]
There's a Popeye's at Eglin, maybe all that traffic was a chicken sandwich influence campaign?
tracerbulletx 55 days ago [-]
I'm not saying those trend charts demonstrate anything, just that commercial human astroturfers or bot networks are no less of a thing than intelligence ones. It wouldn't really be a conspiracy theory to think McDonald's, or any other company, trade association, lobbyist, PR firm, etc., is operating a lot of social media accounts that could theoretically show up on a report like that if they were doing a lot of it from a specific place.
ffsm8 55 days ago [-]
Urm. They almost certainly are though?
It was generally called astroturfing when it became more apparent on Reddit in the early 2010s, and it definitely didn't become less common after.
dialup_sounds 55 days ago [-]
The point is that a vaguely defined throwaway line on Reddit's blog is not great evidence for either.
pabs3 55 days ago [-]
> military cyber operations
You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.
mdhb 55 days ago [-]
OP has just completely pulled this analysis out of their ass. They aren't all constantly running cyber operations on Reddit; that bears zero resemblance to what cyber operations look like in real life, including the point that you raised.
Not sure what "most addicted" means other than "over 100k visits total", but it doesn't seem to be pulled out of OP's ass.
adastra22 55 days ago [-]
Tor was literally invented for this use case.
torginus 55 days ago [-]
Daily reminder (for myself especially) to engage as little with social media (reading/commenting) as possible. It's a huge waste of time anyway; it's not like I don't have better things to do.
survirtual 55 days ago [-]
Addiction is hard.
This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.
These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.
Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and direct access to our community-centric neurons. It is a dream come true for these slavers, who evoke a host of expletives in my mind.
Human beings are addicted to community social interaction. It is normally a healthy addiction. It is not any longer in service of us.
The short term solution: reduce reliance on and consumption of corporate captured social media
The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech
When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.
Walk willingly into Plato's cave, pay for Plato's cave verification, sit down, enjoy all the discourse on the wall. Spit your drink out when you figure out that the shadows on the wall are all fake.
andybak 55 days ago [-]
I might have a very different reading of the parable of the cave to you?
Can you elaborate? (At the risk of spoiling the joke)
FloorEgg 55 days ago [-]
I'm not author of parent.
My impression of the joke is that intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.
If the allegory of the cave is describing a journey from ignorant and incorrect beliefs to enlightened realizations, the parent is making a joke about people going in reverse. Perhaps they have seen first hand someone who is educated, knowledgeable and reasonable become deceived by social media, casting away their own values and knowledge for misconceptions incepted into them by persistent deception.
I'm not saying I agree entirely with the point the joke is making but it does sort of make sense to me (assuming I even understand it correctly).
CalChris 55 days ago [-]
> intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.
I also see this with AI answers relying on crap internet content.
FloorEgg 55 days ago [-]
Most content on the internet has been optimized to get attention, not to represent truth.
AI trained on most content will be filled with misconceptions and contradictions.
Recent research has been showing that culling bad training data has a huge positive impact on model outputs. Something like 90% of desirable outputs comes from 10% of the training data (I forget the specifics and don't have time to track down the paper right now).
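The culling idea can be sketched in a few lines. This is a toy illustration only, not the method from any particular paper; the `quality_score` heuristic below is invented for the example (real pipelines typically use trained quality classifiers):

```python
# Toy sketch of training-data culling: score each document with a cheap
# quality heuristic, then keep only the top fraction of the corpus.
# The heuristic (penalizing very short docs and heavy repetition) is
# invented for illustration.

def quality_score(doc: str) -> float:
    words = doc.split()
    if len(words) < 5:
        return 0.0
    unique_ratio = len(set(words)) / len(words)  # low for spammy repetition
    return unique_ratio * min(len(words), 100) / 100

def cull(corpus: list[str], keep_fraction: float = 0.1) -> list[str]:
    ranked = sorted(corpus, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

corpus = [
    "buy now " * 20,                                   # spammy repetition
    "the quick brown fox jumps over the lazy dog " * 3,  # varied text
    "spam",                                            # too short
]
print(cull(corpus, keep_fraction=0.34))  # keeps only the varied document
```

The point is only that a cheap, imperfect filter applied to the whole corpus can shift the distribution of what the model sees far more than it costs to run.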
I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole. If AIs compete with each other based on which best represent truth, then overall things could get a lot better.
The alternative seems dreadful.
Edit: I am curious why this is getting downvoted.
CalChris 55 days ago [-]
A small number of samples can poison LLMs of any size
Yeah I saw that one too, which I would think supports my point that distilling down training data would lead to more truth aligned AI.
I mean it's also just the classic garbage in garbage out heuristic, right?
The more training data is filtered and refined, the closer the model will get to approximating truth (at least functional truths)
It seems we are agreeing and adding to each other's points... Were you one of the people who downvoted my comment?
I'm just curious what I'm missing.
maxlin 54 days ago [-]
Good take. I think someone said (might've been Elon) that building an AI but limiting its training material to 1870-1970 would avoid a lot of this, as arguably that was the period of greatest advancement of humankind not spoiled by bad data: no social networks, and everything printed needed more effort, and therefore more meaning, behind it.
It would be VERY refreshing to see more than one company try to build an LLM that is primarily truth-seeking, avoiding the "waluigi problem". Benevolent or not, progress here should not be led just by one man ...
FloorEgg 54 days ago [-]
To me it looks like there are many people working to make AI truth seeking, and they are taking a variety of approaches. It seems like as time goes on opportunities to build truth seeking AI will only increase as the technology becomes more ubiquitous and accessible. Like if the costs of training a GPT-5 level LLM drop 10,000x.
gdulli 55 days ago [-]
I didn't downvote, but it's naive to the point of irresponsibility not to assume and prepare for LLMs being weaponized in exactly the same way as social media, as you alluded to. It's not like human nature, or the nature of capitalism, has changed recently.
FloorEgg 55 days ago [-]
Are you saying that hope is naive and irresponsible?
intended 55 days ago [-]
What you are hoping for will not occur.
Do hope. But hoping for a unicorn is magic thinking.
For other people, they can either count this as a reason to despair, or figure out a way to get to the next best option.
The world sucks, so what ? In the end all problems get solved if you can figure them out.
FloorEgg 55 days ago [-]
For decades I have continuously studied physics, chemistry, biology, psychology, history, management science, market research, economics, religion, finance, and computer science among many other things. I study for 4-5 hours on average every day, and the rest of my working hours are spent practicing my craft.
The reason I say this is that blind hope and informed hope are two different things.
Media has always relied on novel fear to attract attention. It's always "dramatized"; sacrificing truth for what sells. However AI is like electricity or computation. People make it to get things done. Some of those things may be media, but it will also be applied to everything else people want to get done. The thing about tools is that if they don't work people won't keep using them. And the thing about lies is that they don't work.
For all of human history people have become more informed and capable. More conveniences, more capabilities, more ideas, more access to knowledge, tools, etc.
What makes you think that AI is somehow different than all other human invention that came before it?
It's just more automation. Bad people will automate bad things, good people will automate good things.
I don't have a problem with people pointing out risks and wanting to mitigate them, but I do have a problem with invalid presuppositions that the future will be worse than the past.
So no, I don't think I'm hoping for a unicorn. I think I'm hoping that my intuition for how the universe works is close enough, and the persistent pessimism that seems to permeate from social media is wrong.
defrost 55 days ago [-]
Speaking as someone who has also spent decades both studying and applying STEM and social sciences my commentary is this:
> The thing about tools is that if they don't work people won't use them.
People will and do use tools that don't work. Over time fewer people use bad tools as word spreads. Often "new" bad tools have a halo uptake of popularity.
> And the thing about lies is that they don't work.
History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
FloorEgg 55 days ago [-]
> The thing about tools is that if they don't work people won't use them.
My bad. I meant won't keep using them.
> History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
What do you mean by "work"?
It sounds like you are implying that a lie "works" by convincing people to believe it?
I meant a lie doesn't work in that if you follow the lie you will make incorrect predictions about the future.
If someone acts on a lie which results in a bad decision with a "long shadow" then wouldn't that mean acting out the lie didn't work?
defrost 55 days ago [-]
Lies work in the sense that they can persuade large groups of people to take courses of action based on their belief in those lies.
They are used by bad actors to, say, win elections and then destroy systemic safeguards and monitoring mechanisms that work to spotlight bad actions and limit damage.
There are also lies, such as a common belief in Wagyl, that draw people together and lead them to act in unison as a community to help the less fortunate, preserve the environment and common resources, and do other things not generally perceived as destructive.
FloorEgg 55 days ago [-]
> Lies work in the sense that they can persuade large groups of people to take courses of action based on their belief in those lies.
I don't disagree with this. It's reasonable to assume I was talking about that type of "work", but I wasn't.
> There are also lies, such as a common belief in Wagyl, that draw people to together and act in unison as a community to help the less fortunate, preserve the environment and common resources, and other things not generally perceived as destructive.
I am not familiar with this specific culture but I totally get your point. Most religion works like this. I would just consider that the virtues and principles embedded within the stories and traditions are the actual truths that work, and that Wagyl and the specifics of the stories are just along for the ride. The reason I believe this is because other religions with similar virtues and values will have similar outcomes even though the lie they believe in is completely different.
I said that lies destroy, and that wasn't right. Sometimes they do, but as you have pointed out, often they don't.
intended 55 days ago [-]
I applaud your efforts!
You stated:
> I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole.
The ratio of total hours of human attention available to total hours of content is essentially 0. We have infinite content, which creates unique pressures on our information gathering and consumption ability.
Information markets tend to consolidate, regulating speech is beyond fraught, and competition is on engagement, not factuality.
Competing on accuracy requires either Bloomberg Terminal levels of payment, or you being subsidized by a billionaire. Content competes with content, factual or otherwise.
My neck of the woods is content moderation, misinformation, and related sundry horrors against thought, speech, and human minds.
Based on my experience, I find this hope naive.
I do think it is in the right direction, and agree that measured interventions for the problems we face are the correct solution.
The answer to that, for me, is simply data and research on what actually works for online speech and information health.
FloorEgg 54 days ago [-]
It feels to me like everyone responding to that comment is irrationally pessimistic. However I keep noticing little mistakes in my own wording that alter the meaning away from my intention, and I can't help but think it's my own fault for not making my point more clear.
> I really hope that AI business models don't fall into relying on getting and keeping attention.
What I really meant is that I hope that the economic pressures on media don't naturally also apply to AI. I do think it's naive to hope that AI won't be used in media to compete for attention, I just don't think it's naive to hope that's not the only economic incentive for its development.
I also hope that it becomes a commodity, like electricity, and spills far and wide outside of the control of any monopoly or oligopoly (beyond the "tech giants"), so that hoping tech giants do anything against their incentive structures is moot. I hope that the pressures that motivate AIs development are overwhelmingly demand for truth, so that it evolves overwhelmingly towards providing it.
If this hope is naive, that would imply the universe favors deception over truth, death over life, and ultimately doesn't want us to understand it. To me, that implication seems naive.
The Bloomberg terminal is an interesting example and I see your point. I guess the question is what information is there a stronger incentive to keep scarce. The thing about Bloomberg terminals are that people are paying for immediate access to brand new information to compete in a near-zero-sum game. Most truth is everlasting insight into how to get work done. A counter example are textbooks.
intended 54 days ago [-]
Well, here’s an example of the blind spots we possess. You, and most people, by default privilege “information”. However, in our current reality, everything is “content”. Information is simply content with a flag.
The commodification is towards the production of content, not information.
Mostly, producers of information are producing expensive “luxury goods”, but selling them in a market for disposable, commodified goods. This is why you need to subsidize fact checkers and newspapers.
I believe this is a legacy of our history, where content production was hard and the ratio of information to content was higher.
Consumers of content are solving for not just informational and cognitive needs, they are also solving for emotional needs, with emotional needs being the more fundamental.
Consumers will struggle with so many sources of content, and will eventually look towards bundling or focus only on certain nodes.
Do note - the universe does not need to favor anything for this situation to occur. Deception is a fundamental part of our universe, because it’s part of the predator prey dynamic. This in turn arises out of the inability of any system to perfectly process all signals available to them.
There is always place for predators or prey to hide.
FloorEgg 53 days ago [-]
I agree.
I thought of the predator prey frame shortly after posting my last comment.
Maybe it boils down to game theory and cooperation vs competition, and the free energy principle. Competition (favoring deception) puts pressure on cooperation (favoring truth). Simultaneously life gets better at deceiving and at communicating the truth. They are not mutually exclusive.
When entities are locked into long term cooperation, they have a strong bias to communicate truth with each other. When entities are locked into long term competition, they have a strong bias to deceive each other.
Evolution seems to be this dance of cooperation and competition.
When a person is born, overwhelmingly what's going on between cells inside their body is cooperation. When they die, overwhelmingly what happens between cells is competition.
So one way that AI could increase access to truth, is if most relationships between people and AI are locked into long term cooperation. Not like today where it's lots of people using one model from a tech co, but something more like most people running their own on their own hardware.
I've heard people say we are in the "post truth era" and something in my gut just won't accept that. I think what's going on is the power structures we exist in are dying, which is biasing people and institutions to compete more than cooperate, and therefore deceive more than tell the truth. This is temporary, and eventually the system (and power structures) will reconfigure and bias back to cooperation, because this oscillation back and forth is just what happens over history, with a long term trend of favoring cooperation.
So to summarize... Complexity arises from oscillations between competition and cooperation, competition favors deception and cooperation favors telling the truth. Over the long-term cooperation increases. Therefore, over the long-term truth communication increases more than deception.
intended 53 days ago [-]
We are in a post truth era, and discomfort is a side effect of ideology and lack of information.
I’ve been there too, is what I am saying. But, reality is reality, and feeling bad or good about it is pointless beyond a point.
AI cannot increase access to truth. This is also part of the hangover of our older views on content, truth and information.
In your mental model, I think you should recognize that we had an “information commons” previously, even to an extent during the cable news era.
Now we have a content commons.
The production of Information is expensive. People are used to getting it for free.
People are also now offered choices of more emotionally salient content than boring information.
People will choose the more emotionally salient content.
People producing information, will always incur higher costs of production than people producing content. Content producers do not have to take the step of verifying their product.
So content producers will enjoy better margins, and eventually either crowd out information producers, or buy out information producers.
Information producers must raise prices, which will reduce the market available for them. Further - once information is made, it can always just be copied and shared, so their product does not come with some inherent moat. Not to mention that raising prices results in fewer customers, and goes against the now anachronous techie ethos of “Information should be free”.
I am sure someone will find some way to build a more durable firm in this environment, but it’s not going to work in the way you hoped initially. It will either need to be subsidized, or perhaps via reputation effects, or some other form of protection.
Cooperation is favored if cooperation can be achieved. People will find ways to work together, however the equilibrium point may well be less efficient than alternatives we have seen, imagined or hoped for.
More dark forest, cyberpunk dystopia, than Star Trek utopia.
There’s an assumption of positive drift in your thinking. As I said, this is my neck of the woods, and things are grim.
But - so what? If things are grim, only through figuring it out can it actually be made better.
This is the way the pieces on the board are set up as I see it. If you wish to have agency in shaping the future, and not be a piece that is moved, then hopefully this explanation will help build new insights and potential moves.
FloorEgg 52 days ago [-]
My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.
There's one thing that I just realized hasn't come up in our discussion yet which has a big impact on my perspective.
Everything in the universe seems built on increasing entropy. Life net decreases entropy locally so that it can net increase it globally. There also seems to be this pattern of increasing complexity (particles, atoms, molecules, cells, multicellular organisms, collectives) that unlocks more and more entropy. One extremely important mechanism driving this seems to be the Free Energy Principle, and the emergent ability to predict consequences of actions. Something about it enables evolution, and evolution enables it.
This perspective is what gives me more confidence that within collectives the future will include more shared truth than the past, because at every level of abstraction, and for all known history, that has been the long-term trend.
Cells get better at modelling their external environment, and better at communicating internally.
The reason why I am so confident we are not "post truth" is because lies don't work, not in the sense that people can't be deceived by lies (obviously they can), but dysfunctional lies won't produce accurate predictions. Dysfunctional lies don't help work get done, and the universe seems to be designed for work to get done. There is some force of nature that seems to favor increasingly accurate predictive ability.
Your perspective seems to be very well informed from what feels like the root of the issue, but I think you're missing the big picture. You aren't seeing the forest, just the trees around you. I know you assume the same of me, that I don't see these trees that you see. I believe you, that what you see looks grim. I also agree we need to understand the problems to solve them. I'm not advocating for any lack of action.
Just suggesting that you consider:
- for all history life has gotten better at prediction
- truth makes better predictions than lies
What's more likely? That we are hitting a bump in the road that is an echo of many that came before it, or that something fundamental has materially changed the trajectory of all scientific history up until this point?
Your points about the cost of information and the cost of content are valid. In a sense, content is pollution. It's a byproduct of competition for attention.
I can think of a few ways that the costs and addictive nature of content could become moot.
- AI lowers the cost of truth
- Human psychology evolves to devalue content
- economic systems evolve to rebalance the cost/value of each
- legal systems evolve to better protect people from deception
These are just what come to mind quickly. The main point is that these quirks of our current culture, psychology, economic system, technological stage and value system are temporary, not fundamental, and not permanent. Life has a remarkable ability to adapt, and I think it will adapt to this too.
I really appreciate you engaging with me on this so I could spend time reflecting on your perspective. If I ever came across as dismissive I apologize. You've helped me empathize with you and others with the same concerns and I value that. You haven't fundamentally changed my mind, but you gave me a chance to hone my thinking and more deeply reflect on your main points.
It feels like we agree on a lot, we are just incorporating different contexts into our perspectives.
intended 52 days ago [-]
> I know you assume the same of me, that
Nah. I see it more as there was an information asymmetry, on this specific topic, due to our different lived experiences.
I can feasibly provide more nuanced examples of the mechanics at play as I see them. Their distribution results in a specific map / current state of play.
> - Economic systems evolve
> - legal systems evolve
These types of evolutions take time, and we are far from even articulating a societal position on the need to evolve.
Sometimes that evolution is only after events of immense suffering. A brand seared on humanity’s collective memory.
We are not promised a happy ending. We can easily reach equilibrium points that are less than humanly optimal.
For example - if our technology reaches a point where we can efficiently distract the voting population, and a smaller coterie of experts can steer the economy, we can reach 1984 levels of societal ordering.
This can last a very long time, before the system collapses or has to self correct.
Something fundamental has changed and humanity will adapt. However, that adaptation will need someone to actually look at the problem and treat it on its merits.
One way to think of this is cigarettes, Junk foods and salads. People shifted their diets when the cost of harm was made clear, AND the benefits of a healthy diet were made clear AND we had things like the FDA AND scientists doing sampling to identify the degree of adulteration in food.
——
> My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.
How are you doing this?
gdulli 55 days ago [-]
Hoping that the tech giants will put truth over profit is folly. Hoping that audiences will reject this is viable.
FloorEgg 55 days ago [-]
> Hoping that the tech giants will put truth over profit is folly.
I never said that though?
> Hoping that audiences will reject this is viable.
I have no clue what you mean. What is "this" referring to?
01100011 55 days ago [-]
The number of otherwise intelligent folks I follow on Twitter who occasionally brag about or make note of their follower count, without realizing 80%+ are bots, is way too high.
I think that's by design though. Tolerate bots to get high-value users to participate more after they think real people are actually listening to them.
ninkendo 55 days ago [-]
I’ll take a stab: because Twitter isn’t reality, it’s a microcosm. A tempest in a teapot. It’s something that, if you step outside of it, you realize is not the real world.
Leaving social media can be thought of as emerging from the cave: you interact with people near you who actually have a shared experience to yours (if only geographically) and you get a feel for what real world conversation is like: full of nuance and tailored to the individual you’re talking to. Not blasted out to everyone to pick apart simultaneously. You start to realize it was just a website and the people on it are just like the shadows on the wall: they certainly look real and can be mesmerizing, but they have no effect on anything outside of the cave.
mlrtime 55 days ago [-]
Reddit even more so; that's why you see these 'touch grass' comments littered around.
TJSomething 55 days ago [-]
It matches my usual reading pretty closely. Society gives names to things that aren't real and then argues about them. Twitter is a microcosm of this with their own categories and assemblages of ideas that are even less real than those present in broader society.
thrance 55 days ago [-]
OP's twist on the cave allegory is funny and makes sense if you take the usual modern reading, but that is very much not what Plato meant by it.
It was just a way for him to convey his "theory of forms" in which perfect versions of all things exist somewhere, and everything we see are mere shadows of these true forms. The men in the cave are his fellow Athenians who refuse his "obvious" truth, he who has peeked out of the cave and seen the true forms. All in all, it's very literal.
pixl97 55 days ago [-]
Twitter may really be one of the worse ones, but the whole internet has become CGP Grey's "This Video Will Make You Angry".
tormeh 55 days ago [-]
"It's all fake?"
"Always has been"
BLKNSLVR 55 days ago [-]
I'll try with a Simpsons analogy:
> Walk willingly into platos cave, pay for platos cave verification, sit down, enjoy all the discourse on the wall.
Homer pays to get the crayon put back up his nose
> Spit your drink out when you figure out that the shadows on the wall are all fake.
Homer gets annoyed/surprised if someone calls him stupid.
parineum 54 days ago [-]
> Spit your drink out when you figure out that the shadows on the wall are all fake.
The shadows on the wall aren't fake, they are just... shadows of real things. Plato's cave is about having an incomplete view of reality, not a false view of reality.
mock-possum 55 days ago [-]
This is my sentiment exactly, and you put it a lot more succinctly than I was thinking.
pfannkuchen 55 days ago [-]
You’re talking about public school, right?
array_key_first 55 days ago [-]
No I think he's talking about Twitter. You can tell because this post is about Twitter.
pfannkuchen 54 days ago [-]
Yes, my point is that the consensus reality is also the cave. It’s caves all the way down.
protocolture 54 days ago [-]
No. I am considering home schooling my little one, but mostly due to the whole 2 Sigma problem rather than any perceived falsehood with public schooling.
ragazzina 55 days ago [-]
What did you learn in school that was actually fake?
pfannkuchen 55 days ago [-]
I didn’t say it was fake, per se. What happens is undersampling that conveniently aliases with a supporting story for the present moral zeitgeist. It’s not hard to find samples that contradict the story. This happens primarily in history, but also in auxiliary classes that touch on history or morality such as various humanities courses and what little is covered of economics.
protocolture 54 days ago [-]
The Private Catholic school I attended was forced to teach a particular religious education curriculum by the state. However, that curriculum was highly modular, and the only testing of their methodology was final test scores. So we skipped the entire section on islam (Test was "Pick 2 religions and compare them, we were given catholicism and judaism), and we also skipped every chapter under catholicism that implied any wrongdoing, historically on the part of the catholic church, like the occupations of the holy land.
pfannkuchen 54 days ago [-]
Sure, everyone knows this about religious schools. Does everyone accept this about secular schools? My impression is that they do not.
davkan 55 days ago [-]
And this is in contrast to private schools how? Just that they may diverge from the current moral zeitgeist to insert their own morals in the same places?
pfannkuchen 54 days ago [-]
I’m not contrasting with private schools, I’m just not sure 100% of private schools do this so I focused on what I know.
It’s kind of funny how everyone projects their own dialectic framing on statements, and assumes that a person opposing side A automatically supports whatever is side B in their own mind.
davkan 54 days ago [-]
Yes everyone does do that. Most people will not take a tangential single sentence criticism of an institution to mean that you hold none of the typical accompanying political views and are just narrowly opining on your experience.
I would imagine a large majority of readers read your original post and immediately in their head thought, “are they one of those school voucher people” or something along those lines.
pfannkuchen 54 days ago [-]
I don’t really understand why it’s necessary to assume anything at all. Can the statement not be taken at face value?
If we are all going around assuming 99% of the positions of people we are engaging with, what is the point of discussing anything?
lateforwork 55 days ago [-]
I assume the country of origin is detected based on IP address. These fakers will now create Azure VMs hosted in the US, then login to those VMs and use X from the VM. A lot of scammers disguise their location using this method.
giancarlostoro 55 days ago [-]
Yes and no. It also shows which app store country the account is tied to, and that my friend is a little bit more work. It also shows an icon when it suspects VPN. A lot of these foreign run accounts are in fact not using VPN and their host country matches their app store country. Lots of "e-girl" type of accounts are foreign owned, and there's an insane number of racist accounts LARPing as American run from places like Turkey, and other countries. I think my favorite call out was some Canadian account that nobody realized was Canadian. I think if you're going to inject yourself in the politics of other countries your audience deserves to know if you're not even living there.
01100011 55 days ago [-]
You don't need an app to use X though. I've been on X for over 5 years and never installed the app. In fact, X is far better with Firefox+uBlock on mobile.
giancarlostoro 54 days ago [-]
It points out when they are not using the app, and if they suspect a VPN. I saw one screenshot where it said "Desktop Browser" or something to that effect.
throwaway48476 55 days ago [-]
Country of origin is based on IP. Many British accounts are using VPNs due to the online safety act and this is noted by X. X also shows the country of the app store the app was downloaded from which is more accurate.
Ironically many of the people in favor of banning VPNs are themselves using a VPN.
brookst 55 days ago [-]
> Ironically many of the people in favor of banning VPNs are themselves using a VPN.
It’s ironic but also completely typical.
Same way so many people publicly freaking out about homosexuality turn out to be gay. There’s something in human nature that makes people shout about the dangers of the things they themselves do, some kind of camouflage instinct I guess.
mlrtime 55 days ago [-]
It seems a little self evident? A heroin addict might say they love it and never want to quit, at the same time say it's harmful, should be banned and nobody should ever try it.
dehugger 54 days ago [-]
Your analogy is equating homosexuality to heroin addiction. Was this intentional?
Slava_Propanei 54 days ago [-]
[dead]
ffsm8 55 days ago [-]
Okay, I came across another video talking about Roblox and its pedo problems, and I think that's problematic. And I might talk about that if the topic goes toward "problematic things that are currently on my mind".
And with that statement you ironically insinuate that I'm a pedo.
You're not the first person that made that argument (that the people talking about a problem actually are the real perps!), but from my perspective it feels more like an easy way to make it socially unacceptable to talk about categories of issues. Which is likely intended by the person making this argument, likely because... You see where this is going?
viraptor 55 days ago [-]
Parent said "many" and didn't in any way insinuate it's an implication you can run the other way.
yupyupyups 55 days ago [-]
>Ironically many of the people in favor of banning VPNs are themselves using a VPN.
Remember that China blocks Western social media, yet posts a lot of Chinese government propaganda on Western social media. Making VPNs illegal for the general public does not entail making VPNs inaccessible to government agents.
SoftTalker 55 days ago [-]
Sounds like how Congress exempts themselves from many of the laws they pass.
mlrtime 55 days ago [-]
> Ironically many of the people in favor of banning VPNs are themselves using a VPN.
How do you know this as a fact?
neom 55 days ago [-]
I'm not sure how they are getting the info, but it's not as simple as logged-in IP. Mine says I'm based in Costa Rica; I was on vacation there 2-3 months ago, but it's not primarily where I use my X account, and I've logged in from a phone and a computer in other countries since. CR would account for a relatively small share of my total usage, so I find it strange it thinks my account is based there.
SmirkingRevenge 55 days ago [-]
Interesting. Maybe gps location data snapshots factor into it. You could probably defeat that with app permissions though. GPS spoofing is also possible, but a lot more friction for troll accounts.
Or maybe they are able to link carrier-sourced cellphone location datasets to particular twitter accounts. Those aren't going to be real-time though, so something like that could explain the lag.
neom 55 days ago [-]
I was thinking about it a bit more this morning. In Costa Rica I used a local SIM from a local carrier, but since then I've been traveling using eSIMs from Airalo, which still use a local carrier. I wonder if it's kind of like how the 2-factor auth stuff often won't let you use a VoIP/Twilio number and needs one from a real carrier. Maybe X has a matrix of signals it uses to decide whether to switch it, and within the carrier metric an eSIM reseller is deprioritized relative to a real telco or something? Who knows, but it's kinda fun to think about! :)
Fanofilm 55 days ago [-]
X shows a "LOCK" icon when they are coming in via VPN, to out them.
Also, it shows which country's app store you installed your app from. For this reason, when they use their mobile app, they will be outed that way.
LordDragonfang 54 days ago [-]
Lock means the account is private, not that it's using a VPN.
acheong08 55 days ago [-]
It's not via IP address. I created my account using a US data center IP back in 2022 from Malaysia. I am now in the UK, using a Swiss VPN IP. My location shows up as Japan...
trollbridge 55 days ago [-]
I would suspect they are deducing country of origin via ad targeting, which is far more precise than just geolocating IP addresses.
ebbi 55 days ago [-]
Are Azure VMs different to a VPN? Sorry I'm not the most technical.
Reason I ask is because there are few people I follow that use VPNs but their location is accurate on X.
Also, X also shows where you downloaded the app from, e.g. [Country] App Store, so that one might be a bit more difficult to get around.
mlrtime 55 days ago [-]
It was a bad example, as it's quite easy to detect cloud operator endpoints (their internet gateway). Try it sometime and see how many web sites make you go through some captcha maze.
They would most likely use residential proxy/vpns that show your traffic coming out of a regular household ISP. They can be purchased for cheap.
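A minimal sketch of that kind of datacenter-endpoint check, assuming you've fetched the providers' published ranges (the CIDR blocks below are illustrative placeholders; the real lists are much larger, e.g. AWS publishes ip-ranges.json and Azure publishes Service Tags):

```python
import ipaddress

# Placeholder sample of cloud provider ranges; real checks load the
# providers' published lists, which change regularly.
DATACENTER_RANGES = [
    ipaddress.ip_network("20.33.0.0/16"),  # illustrative "Azure" block
    ipaddress.ip_network("3.0.0.0/9"),     # illustrative "AWS" block
]

def is_datacenter_ip(addr: str) -> bool:
    """True if the address falls inside any known datacenter range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DATACENTER_RANGES)

print(is_datacenter_ip("20.33.1.5"))    # inside the sample block: True
print(is_datacenter_ip("203.0.113.7"))  # documentation range, not listed: False
```

Residential proxies defeat exactly this check, since the exit IP belongs to a regular household ISP and appears in no datacenter list.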
hypeatei 55 days ago [-]
A VPN is just a tunnel to a server somewhere (in this case, an Azure VM) so anywhere you can rent/run a server is a place that you can setup a VPN and pass all your traffic through.
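As a minimal sketch of such a tunnel, here is roughly what a WireGuard client config pointing at a VM you rent might look like (the hostname, keys, and addresses are all placeholders, not anything specific to Azure):

```ini
# Client side: route all traffic through a VM you control.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vm.example.com:51820
# 0.0.0.0/0 sends everything through the tunnel, so remote
# sites see the VM's IP instead of yours.
AllowedIPs = 0.0.0.0/0
```

The `AllowedIPs = 0.0.0.0/0` line is what turns a point-to-point tunnel into a full-traffic VPN; a narrower range would tunnel only selected destinations.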
lateforwork 55 days ago [-]
You can use X through your web browser, avoiding the app store.
giancarlostoro 55 days ago [-]
You can but a lot of people use their phone and the official apps. It also shows if you primarily use a browser. :)
Ajedi32 55 days ago [-]
It looks like this shows where each account was originally created from. So new accounts can get around it, but all existing accounts that didn't have the foresight to be using a VPN from the start are now burned.
Going forward this is going to be a bit of a cat-and-mouse game. There are plenty of other tricks X can do to determine country of origin. Long term I agree the sock puppets have the upper hand here, though forcing them to go through the effort is probably a good thing.
ec109685 55 days ago [-]
App Store country of origin too weighs in.
chrismorgan 55 days ago [-]
Google thinks my account is American for Play Store geo limiting purposes, and if I recall correctly would only let me update it by adding a payment card, which I refuse to do. I don’t know where they even got that idea—they should have known full well I was Australian. My best guess is that for a few years I used a phone I bought while visiting America. But it was neither my first phone nor my most recent, and the account was at least five years old before I even visited the US.
55 days ago [-]
fnord77 55 days ago [-]
You also need a phone number tied to your twitter account.
adastra22 55 days ago [-]
You do? I don't.
ddtaylor 55 days ago [-]
Residential VPNs are already so cheap.
dawnerd 55 days ago [-]
Exactly how anyone still scraping Twitter does it. Dirt cheap. Same with accounts to use to get around api limits.
inesranzo 55 days ago [-]
Blocking datacenter IPs it is then.
lateforwork 55 days ago [-]
Or identify them as using a datacenter IP.
paxys 55 days ago [-]
No need, X has already rolled the feature back. I assume because the boss didn't like what it uncovered.
quotemstr 55 days ago [-]
No, it's live.
55 days ago [-]
duxup 55 days ago [-]
I never quite "got" twitter, it was never fun for me to participate on. It's telling / disturbing that folks had such trust in random accounts ...
afavour 55 days ago [-]
It’s an interesting phenomenon. I don’t think people en masse have trust in any of these accounts. But when your feed is filled with the kind of stuff they say day in day out it still affects your overall perspective of the world. “No smoke without fire” on a subconscious level.
brandensilva 55 days ago [-]
One of the many reasons why I try to bombard myself with different views and perspectives. Foreign vs domestic. Left vs right. Personal vs official.
I don't do this with every topic, only when I'm interested in discussing something, just so I'm more informed and less biased.
laxd 55 days ago [-]
I've felt it through my guilty pleasure of periodically scrolling Instagram reels. They've obviously changed their algorithm from time to time, and it's crazy how I've intermittently gotten endless right-wing stuff and leftist-ridiculing content, thinking there are a lot of good points. Then it's suddenly just convincing leftist material again, or at best you're-all-dumb content.
It's really fucked how the online content providers have moved from letting you seek out whatever you might fancy towards deciding what you're going to see. "Search" doesn't even seem like an important feature anymore many places.
Balgair 55 days ago [-]
We know they're just optimizing for time spent on the site/app, as a proxy for the number of ads they show you and then get paid for.
But the thing that was surprising to me, as someone who remembers the world before the internet, is that anger is the thing that makes people stay on a site.
Before the internet came along, one would have thought that Truth would be the thing. Or funniness, or gossip, or even titillation and smut. Anger would have been quite far down on the list of 'addicting' things. But the proof is obvious: anger drives dollars.
There's no putting this knowledge away now that we know it.
The only question is: what are we going to do about it?
throwaway48476 55 days ago [-]
The internet has created low intentionality people.
hennell 55 days ago [-]
Hacker News, Reddit and similar have always been about following subjects or topics you like, getting the latest discussion in a field of your interest. Twitter was all about following people, not topics, so you'd get a wider range of topics, but you tended to focus on accounts more and give more weight to specific users than you might here.
If you followed a variety of people it was quite addictive - so many celebrities or other notable people meant you got actual "first hand news", and it was fun seeing everyone join in on silly jokes and games and whatever, that doesn't hit quite as hard when it's just random usernames not "people".
But it suffered for that success, individual voices got drowned out in favour of the big names, the main way to get noticed becoming more controversial statements, and the wildly different views becoming less free flowing discussion and more constant arguments.
It was fun for a while if you followed fun people, but I think the incentives of such systems means it was always going to collapse as people worked out how to manipulate it.
mlrtime 55 days ago [-]
Reddit (at least anything that ever shows up on /r/all) is no different than X/Twitter. Even nice tech subs have the same issues occasionally.
X and Reddit are no different.
duxup 55 days ago [-]
I think the fact that you never have to interact with /r/all on reddit makes it quite different.
mlrtime 55 days ago [-]
Isn't X the same? (I don't use it much.) Can't you just look at posts from people you want?
spprashant 55 days ago [-]
For a long time, I did not get twitter either. But it seems to be the only popular platform where the academics and intellectual class want to hangout. Economists, researchers, policy wonks prefer posting on twitter over any other social platform.
shoobiedoo 55 days ago [-]
It is also the only way to get my city's public transport system to reply to queries about why a bus is extremely late, when/if it is coming. I always get a nice polite reply because it's publicly available. If I call I get stonewalled with endless call center rerouting eventually leading to a dial tone
Seattle3503 55 days ago [-]
Righting Wrongs by Kenneth Roth said something along the same lines, except in his case he said that as director of Human Rights Watch he was able to get the attention of despots and change their behavior by posting on Twitter. It's clear there are some benefits. Roth's messaging would probably not be affected by revealing his nation of origin, so it doesn't seem like we have to throw the baby out with the bathwater.
Dan_- 55 days ago [-]
Many of those people have moved to Bluesky. Which has its own issues.
mdhb 55 days ago [-]
That hasn’t been true now for sometime. That crowd is all on Bluesky now.
jeromegv 55 days ago [-]
A lot of those people have left.
gonzobonzo 55 days ago [-]
All social media (including HN) is horrible in some ways. And they all suffer from too many people being overly credulous to random comments.
But the problem with over credulity goes far beyond social media. I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
FranzFerdiNaN 55 days ago [-]
> I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
Yeah, but basically nobody is capable of evaluating those sources themselves, outside of very narrow topics.
Reading a Wikipedia page about Cicero? Better make sure you can read Latin and Greek, and also have a PhD in Roman history and preferably another one in Classical philosophy, or else you will always be stuck with translations and interpretations of other people. And no, reading a Loeb translation from the 1930s doesn't mean you will fully understand what he wrote, because so much of it hinges on specific words and what those words meant in the context they were written, and how you should interpret whole passages and how those passages relate to other authors and things that happened when he was alive and all that fun stuff.
And that's just one small subject in one discipline. Now move on to an article about Florence during the Renaissance and oh hey suddenly there are yet another couple of languages you should learn and another PhD to get.
awesome_dude 55 days ago [-]
Probably the thing I loved about it most was the fact that I could talk directly with people that I felt had a real impact in the world
Scientists/Researchers
Journalists
Activists
Politicians
Subject Matter Experts (for the fields I am interested in)
There were (when I was using it) a large number of "troll" accounts, and bots, but it was normally easy to distinguish the wheat from the chaff
You could also engage in meaningful conversations with complete strangers - because, like Usenet, the rules for debate were widely adopted, and transgression results in shunning (something that I rarely see beyond twitter to be honest)
macintux 55 days ago [-]
Yep, I effectively landed my favorite job by engaging with the Erlang community on Twitter. I miss it, but it just got to be too toxic during the 2016 election cycle (in fairness, everything was too toxic then, and it hasn’t gotten better since).
awesome_dude 55 days ago [-]
I think that ALL communities become toxic as they grow
I often hear that one community or another is "really good, not toxic at all", which is true when it starts (for tech, whilst it's "new" and everyone is still interested in figuring out how it works, sharing their learnings, and actively working to encourage people to also take interest).
Then idealism works its way in: this community is the greatest that ever existed, and whatever it is centred on is the best at whatever.
Then: all other things are bad, and you're <something bad> if you think otherwise.
And, boom, toxicity starts to abound.
For me, I've seen it so many times, whether in motorised transport (Motorcycles vs cars, then Japanese bikes vs British/European/American then individual brands (eg Triumph vs Norton), or even /style/ of bike (Oh you ride a sport bike, when clearly a cruiser is better...))
In the tech scene it's been Unix vs Microsoft, then Microsoft vs Linux or Apple, and then... well no doubt you've seen it too
Uhm, I would rather say it's when the idealists are pushed out by grifters that things get bad for a community.
awesome_dude 54 days ago [-]
That sounds like an ideal (but mildly toxic; "grifters" is a negative label, don't you think?) way for things to be... :P
jeffbee 55 days ago [-]
It is basically two totally distinct products: the "Following" feed that you can make it as you like, and the "For You" that is just a stream of the stupidest posts imaginable by people you don't know.
stephen_g 55 days ago [-]
I stopped using it a few years ago even so - while the Following feed is much better than the other one, the replies of anyone with even a bit of reach would still just be a sewer of bots and trolls. It was impossible to have meaningful dialogue with that. Twitter used to be better at hiding that nonsense, but that changed.
PKop 55 days ago [-]
A lot of people did not have trust and have been asking for this country-of-origin feature for years. Better would be if they bring back country of initial account creation, or some way to identify VPN usage.
mlrtime 55 days ago [-]
Which ones do you use then besides HN obviously? I'm interested to know what you think = anonymous + trust.
duxup 55 days ago [-]
>what you think = anonymous + trust.
I really don't, as far as social media goes. If I see a link here, the account posting it likely doesn't play any part; trust comes from the source of the content more than from a random user.
ebbi 55 days ago [-]
My main use case is to get up-to-date news on things that mainstream media doesn't cover accurately.
And to be fair, a lot of these accounts that are now exposed as grifters had been called out as such for a while. And most of them were so obviously griftery that the only ones that followed them were those already deeply entrenched in their echo chamber.
It's funny that they're explicitly being exposed now though!
JoshTriplett 55 days ago [-]
> My main use case is to get up-to-date news on things that mainstream media doesn't cover accurately.
Or hasn't covered yet. It's interesting to watch the cycle of "shows up on social media" then "shows up in industry-specific press" then "shows up in mainstream press", with lag in each step.
These days, Fediverse is providing the same thing for some industries. You see stuff show up there first, then show up on X and industry press a little later, then mainstream press a little later.
awesome_dude 55 days ago [-]
I think that almost every platform has gone through a period where the latest breaking news was being published.. as it happened
IRC
Usenet
Reddit
Facebook (live)
Twitter
quickthrowman 53 days ago [-]
There was a period of time when it was ‘random phpBB forums’ too, 2000 to 2006 or so?
ok123456 55 days ago [-]
IRC is fine.
ecoled_ame 55 days ago [-]
It’s an online bar to meet girls and flex your creativity.
malfist 55 days ago [-]
If you believe those girls fawning over your creativity are real, then I've got a link for hot single milfs in your area wanting to talk to you.
ecoled_ame 55 days ago [-]
[flagged]
ceejayoz 55 days ago [-]
The sort of people who think "girls need colorful websites and a bunch of them are friends with me on Twitter" are the same ones who think "that stripper really likes me".
lazide 55 days ago [-]
[flagged]
taejavu 55 days ago [-]
Do you realise how patronising it is to say that girls like Instagram because it has better colours? I'm kind of shocked to see this take on HN.
lazide 55 days ago [-]
I like how that is the patronizing part. Don’t worry, it’s also because of the thirst traps and bars/bands.
Same reason why most 20 something dudes are too.
ecoled_ame 54 days ago [-]
he means like “girly girls”. girls is short for girly girls
yupyupyups 55 days ago [-]
It's probably an anecdote.
gnerd00 55 days ago [-]
this is blatantly biased -- most idiots here are not girls at all ! /s
agentifysh 55 days ago [-]
X obviously isn't the only platform where this is taking place, and it's curious why they rolled it back.
How open are you to a US-citizen-verified town square online? You'd have to scan your passport or driver's license to post memes and stuff.
samrus 55 days ago [-]
It doesn't have to be US-citizen only. It just has to be who they are claiming to be. If someone in India or Europe wants to comment on foreign politics, that's fine. They just shouldn't be able to pretend they are from the US or anywhere else.
Barrin92 55 days ago [-]
a town square isn't just a place, it's always a polity that requires common values and a shared culture. Otherwise you at best have an airport lobby.
A town square in Cologne where 90% of participants hail not from Cologne but from London, Mumbai and San Francisco isn't going to solve the problems of Cologne or have any stake in doing so.
Which also reveals of course what Twitter actually is, an entropy machine designed to generate profit that in fact benefits from disorder, not a means of real world problem solving, the ostensible point of meaningful communication.
sagarm 54 days ago [-]
I think it's quite clear to anyone paying attention right now that sharing a polity does not mean sharing values and culture.
mschuster91 55 days ago [-]
> A town square in Cologne where 90% of participants don't hail from Cologne but London, Mumbai and San Francisco aren't going to solve the problems of Cologne or have any stake in doing so.
Upholding at least some utterly basic foundational values of humanity doesn't require holding any stake.
Steltek 55 days ago [-]
And if you're not interested in upholding basic values? What if you're looking to intentionally destroy things instead?
Verified residency is better than nothing for putting real money on the table. Although if you've been to a local town meeting, you'll know it's still not perfect.
embedding-shape 55 days ago [-]
> utterly basic foundational values of humanity
Except humans across the planet don't even agree on those "foundational values". What seems obvious and fundamental to us often isn't to others.
iamnothere 55 days ago [-]
> how open are you to a US citizen verified town square online? You'd have to scan your passport or driver license to post memes and stuff.
I had this same idea before and it’s not terrible. If it guaranteed user privacy by using an external identification service (ID.me?), it might get some attention. You would likely have to reverify accounts every 6 months or so to limit sales of accounts, and you would need to prevent sock puppets somehow.
If you allow pseudonymity you would get some interesting dynamic conversations, while if you enforced a real name policy I think it would end up like a ghost town version of LinkedIn. (Many people don’t want to be honest on a “face” account.) The biggest problem with current pseudonymous networks like X/Twitter is you have no idea if the other person really has a stake in the discussion.
Also, if ID were verified and you could somehow determine that a person has previously registered for the service, bans would have teeth and true bad actors would eventually be expelled. It would be better to have a forgiving suspension/ban policy because of this, with gradually increasing penalties and reasonable appeals in case of moderation mistakes.
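One hypothetical shape for that kind of anonymized verification: the ID service signs a bare assertion (citizen or not) plus a random nonce, and the social site checks the signature without ever seeing the identity. Everything below is illustrative, not any real service's API, and a production design would use asymmetric signatures rather than a shared HMAC key:

```python
import hmac, hashlib, secrets

# Hypothetical signing key held by the ID service. With HMAC the verifier
# must share the key; real systems would use public-key signatures instead.
SERVICE_KEY = b"id-service-signing-key"

def issue_token(is_citizen: bool) -> str:
    """ID service side: sign an assertion plus a nonce; no identity inside."""
    nonce = secrets.token_hex(16)
    payload = f"citizen={is_citizen};nonce={nonce}"
    sig = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload};sig={sig}"

def verify_token(token: str) -> bool:
    """Social site side: check the signature, learn only the boolean."""
    payload, _, sig = token.rpartition(";sig=")
    expected = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and "citizen=True" in payload

print(verify_token(issue_token(True)))   # valid citizen token: True
print(verify_token(issue_token(False)))  # valid signature, but not a citizen: False
```

The nonce is what a reverification cycle would rotate; preventing one person from holding many tokens (the sock-puppet problem) needs something stronger, like blind signatures tied to a one-per-person registry.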
agentifysh 55 days ago [-]
Wouldn't that happen with this too, if you require people to sign up with their US passport? Everything you type would then carry much more weight. I guess Facebook in a sense is already like this, with all the verification required.
The LinkedIn effect seems more due to the nature of corporate culture, where everyone's profile is an extension of their persona optimized for monetary/career outcomes, so you get this vapid, superficial fakeness to it that turns people off.
This X feature does make things interesting: engaging with US politics, for example, shouldn't be closed to commentary from foreigners, but it should definitely limit perception meddling.
iamnothere 55 days ago [-]
That’s why I think verification has to be done with an external service provider using an anonymized token, to verify is/is not citizen. It would have to be very clear that the social site does not have your identity.
> the linkedin effect seems more due to the nature of corporate culture where everyone's profile is an extension of their persona optimized for monetary/career outcomes so you get this vapid superficial fakeness to it that turns people off.
The same would happen if people knew your IRL identity on a social site, see all the attempted “cancellations” on both sides of the aisle these last few years.
mlrtime 55 days ago [-]
You could easily build this, but my guess is people would use it only very sparingly.
My small neighborhood has a non-anonymous chat group, which is 2-3 streets (~50 houses) inside a village which is inside a city. It is basically just a mini nextdoor but without ads or conspiracies.
tracerbulletx 55 days ago [-]
The propaganda apparatus will adapt if that becomes common so its not a permanent solution but it's nice for now.
Barbing 55 days ago [-]
Yar.
I wonder how much more expensive per post it would be for the bad guys if social networks required the most draconian verification technology, like a hardware-based biometric system you have to rent, and touch or sit near when posting on social media. And maybe you have to read comments you want to post to a camera.
Even at such a ludicrous extreme, state actors would still find ways to pay people to astroturf. But how effective would extraordinary countermeasures like that be, I wonder.
(Also I think high global incomes would greatly mitigate the issue by reducing the number of people willing to pretend they genuinely hold views of foreign adversaries and risk treasony kinda charges.)
embedding-shape 55 days ago [-]
> and it is curious as to why they rolled it back.
I took a look at some X profiles where I know where they're based, and a couple of other random ones, and I can see "Account based in" and "Connected via" for all of them, just logged in as a free user.
Is it possible they enabled it back again?
p1necone 55 days ago [-]
I'm not really surprised it was rolled back given Musks political leanings. I am surprised it was even added in the first place though, surely this outcome was obvious?
bpodgursky 55 days ago [-]
It was rolled back temporarily because the first version had an "account created in country [X]" indicator that was found to be unreliable. The new version (which is active now) just has the country the user is currently in.
p1necone 55 days ago [-]
Sounds like this will stay useful for like a few days at best until these accounts work out what VPN to use to spoof the location properly.
embedding-shape 55 days ago [-]
I'm seeing two location fields: current country, and which country's app store they first signed up with, if any; otherwise it says "web".
agentifysh 55 days ago [-]
I don't think so because it seems both sides were engaged with non-American IPs running hugely popular accounts and it makes sense, why wouldn't you play both sides when you are paid for attention?
I'm thinking Nikita is falling out with Elon as they both seem to have diverging goals with the platform. Advertisement revenues on X isn't that great and neither are conversions on X so you can't really get consistent payouts that match Youtube. Premium subscriptions don't bring in as much dough as advertising did during Twitter days.
ceejayoz 55 days ago [-]
> I don't think so because it seems both sides were engaged…
One side has largely left X.
pseudo0 55 days ago [-]
The stats don't bear that out. Bluesky has been losing momentum since the election, with its DAU dropping from around 3.5 million to under 1.5 million today. For comparison Twitter has over 100 million. Right-wing alternative platforms had similar issues sustaining momentum, despite a much stronger push factor (right-wing people kept getting banned). It's hard to overcome the power of Twitter's network effect.
We're on a thread about widespread fake/inauthentic users on Twitter right now. I see very little reason to trust those numbers.
stressback 55 days ago [-]
Is it any more trustworthy than you saying above that "one side has largely left X"?
ceejayoz 54 days ago [-]
I like to consider myself more trustworthy than Musk, yes.
jeromegv 55 days ago [-]
The stats show that Twitter is going down overall. People can’t handle the amount of bots and discourse over there
Threads is basically at the level of Twitter now.
Correct, and it has overtaken Reddit, which has the same issue as X.
SV_BubbleTime 55 days ago [-]
>I'm thinking Nikita is falling out with Elon
Hmm, interesting insight, what did they each say when you talked to them?
abirch 55 days ago [-]
Seems like they would have had the statistics. It's a shame that they rolled it back. I'm not necessarily an Elon fan but I respected this feature immensely.
marginalia_nu 55 days ago [-]
Feature is online for me now. Maybe A/B test, or incomplete rollout?
energy123 55 days ago [-]
It's less about political leanings and more about profits. There's a reason Jack Dorsey didn't do this, or FB or Reddit.
awesome_dude 55 days ago [-]
And why IRC went from default showing IP information to cloaking
Zak 55 days ago [-]
I think that's mostly to do with script kiddies trying to DoS anyone they disagreed with.
mlrtime 55 days ago [-]
Oh yeah, I forgot about that. Anyone remember WinNuke?
mschuster91 55 days ago [-]
That was and still is a choice of the user.
The problem with not using a cloak was that you'd stand a very real chance of getting DDoS'd or, worse, outright hacked (made easier by the fact that in ye olde modem days, your computer was directly exposed to the Internet with no firewall/NAT to protect you), and even with using a cloak and a NAT router you'd still have trolls sending "DCC SEND" [1] into channels, immediately yeeting a bunch of people with old shoddy middleboxes.
Libera has a policy of just handing them out (to anyone that registers)
> Accounts registered after March 2024 that have a verified email address are automatically assigned a generic user cloak. If your account does not currently have a cloak, you may contact staff to receive one.
I’d be willing to believe Musk was actually surprised. Like a lot of people into heavy political ideology he seems to vastly overestimate the number of people who think the same way about things. He seems to inhabit a serious echo chamber.
lazide 55 days ago [-]
When you have that much money, it’s actually hard to find someone who will tell you something you don’t want to hear that is actually true and isn’t doing it just to ragebait you or the like.
And I don’t think he’s been trying all that hard either.
SmirkingRevenge 54 days ago [-]
The most valuable use case for AI might just be to roast billionaires surrounded by sycophants
Instead they built better sycophants
lazide 54 days ago [-]
Why would billionaires pay to be roasted when they could pay to be sucked off instead?
And let’s be honest, you know what you’d do too if it was you.
api 55 days ago [-]
Good point. The same thing happens to heads of state. Putin thought he could take Ukraine in a few weeks because he is surrounded by yes men.
smnthermes 54 days ago [-]
If Putin was that stupid, he'd already have been ousted.
lazide 54 days ago [-]
The beauty of being rich/powerful enough is that when the yes man turns out to be wrong, you can throw them into the meat grinder too.
Do it enough times, and you end up with yes men that also force other people into the meat grinder well enough you don’t have to care, directly.
It’s a type of genius. It works best when you embrace that everyone wants to suck up to you anyway, and there are always more flunkies where they came from, so you’re really helping the world out by filtering down to the somewhat effective ones ASAP.
p3rls 55 days ago [-]
[dead]
mjbale116 55 days ago [-]
All I want to know, is whether I am talking to an actual person.
And I also want that person to have a single account, not multiple ones
JohnTHaller 53 days ago [-]
They rolled it back because the right-wing accounts that Musk et al had been boosting turned out to be foreign actors.
Muromec 55 days ago [-]
If only there was some kind of PKI that could attest the identity of the person. It's a shame that the US doesn't have a government capable of running it.
8note 55 days ago [-]
If a guy in India can make great MAGA posts, is that really a problem?
It's got the followers because the followers want to read and reshare it.
I'd maybe like to see the location of origin as a pie chart on the followers list, as well as on what they're following. But if the idea is good (for whatever definition of good),
is being American even particularly relevant? I don't think the random guy in Indiana's opinions on Mamdani are any more relevant than a random guy in Nigeria's.
mdhb 55 days ago [-]
With the exception of evangelical Christians, where there's obviously a huge amount of overlap, I've never seen a group so eager to be lied to and so lacking in critical thinking as MAGA folks.
ch2026 55 days ago [-]
every conversation of note here on HN is heavily manipulated too. any discussion platform where accounts can promote or demote other messages are all subject to rampant manipulation and propaganda.
throwaway48476 55 days ago [-]
Niche subjects are much nicer because there's often no incentive to manipulate.
mlrtime 55 days ago [-]
Anything to do with money being made will have manipulation, niche or not.
NaomiLehman 55 days ago [-]
there is a big incentive to manipulate HN though. it's not so niche either. lobste.rs is niche
throwaway48476 55 days ago [-]
If you're discussing products sure. But what incentive is there to manipulate discussing the rust language? It's free.
rzerowan 55 days ago [-]
On a technical note, is geolocation ever truly accurate? I guess they are doing this by IP and App Store records, which are generally trivial to change.
IP blocks can shift and get repurposed, so that's not accurate, and the App Store region is just a toggle away.
Is there any forum/app that has successfully geolocked its users despite IP recognition or VPNs? I think only national-level carriers could pull off something like this, as no commercial entity would willingly restrict its global growth.
bawolff 55 days ago [-]
I believe there are companies that offer more accurate geolocation services by essentially having a deal with phone companies where they get secret info where the customer actually is from mobile companies records.
SmirkingRevenge 54 days ago [-]
You can definitely buy cellphone location data, but AFAIK it's anonymized with a mobile identification #.
But there may be ways to link those records to a platform's users.
ChrisArchitect 55 days ago [-]
Related:
X begins rolling out 'About this account' location feature to users' profiles
WeChat does not reveal "user location"; it only applies to "public accounts". That's not what you'd imagine for an "account"; it's more like a "news feed" where an org can publish articles.
However, if you comment on those articles, your provincial location is attached. The CCP's cyber administration mandates that every app reveal the provincial location of authors and commenters.
sunaookami 55 days ago [-]
It does for other social networks like Weibo, Douyin and Xiaohongshu though, you can see the location of the user (province-level for China, country-level for anyone else) on their profile.
comeonbro 55 days ago [-]
It should be noted that this has not revealed anywhere near as much as was being eagerly anticipated. So far nearly all the screenshots I've seen passed around are relatively low-follower barely-known accounts even in each one's own aligned political sphere.
sire404 53 days ago [-]
There were multiple accounts with hundreds of thousands of followers. One with 1M.
NaomiLehman 51 days ago [-]
yeah but barely any engagement
mcintyre1994 55 days ago [-]
I half think the place this ends up is that they make opting out of this part of the ‘verified’ account feature set. Not much point paying for a fake blue check if you can’t use it to get engagement.
afavour 55 days ago [-]
People will read all kinds of political implications into this but IMO it reflects something simpler and perhaps more damning for X: that paying users for the engagement their posts make is a fundamentally bad idea.
If you’re looking to make some money on X you want engagement. If you want engagement you want to say controversial things people will argue about. What better than right wing US politics, especially when the X algorithm seems to amplify it?
sunaookami 55 days ago [-]
Reminds me of a Japanese news article on that topic where they exposed some of these accounts. These are mostly from India and other poor countries, where posting fake engagement makes a LOT more money than any other job. In this case there were some accounts that searched for Japanese earthquake news and posted condolences etc. to drive up their numbers (this was after the 2023 Noto earthquake).
p3rls 55 days ago [-]
[dead]
8note 55 days ago [-]
Yeah, I'd reasonably describe the results as a list of locations where the couple bucks that Twitter will pay is enough to get by.
For Canada, though, I'd like to see the CBC dedicatedly paying Canadians to post Canadian perspectives on social media.
rzerowan 55 days ago [-]
To add to this, Twitter's algo seems primed to amplify controversial topics to boost engagement, which is why some topics always seem to keep getting boosted while others barely trend.
Many enterprising trolls/grifters have become SEO (TEO?) experts, pushing their preferred narratives for clout/profit while drowning entire timelines in a flood of noise.
thomassmith65 55 days ago [-]
While the location now shows US, X notes that the account location might not be accurate due to use of VPN
Just 'now'... not when signing up for their account?
It's cheap and easy to use social media to propagandize, so certainly there are scores of fake American accounts, but it's irritating that this article doesn't address VPN-usage during account creation.
mise_en_place 55 days ago [-]
Their humiliation ritual is just beginning.
cindyllm 55 days ago [-]
[dead]
Meekro 55 days ago [-]
Worth noting that these foreign accounts are pretty small. The biggest foreign pro-MAGA account mentioned in the article is "MAGA NATION" with ~400k subs, with the others being in the 10k-100k subs range.
Contrast that with legit pro-rightwing accounts: @tuckercarlson (17M), @benshapiro (8M), @RealCandaceO (7.5M), @jordanbpeterson (6M), @catturd2 (4M), @libsoftiktok (4.5M), @seanhannity (7M).
majani 54 days ago [-]
Color me shocked that libsoftiktok is an actual US account. I was certain it was a foreign operative
Meekro 54 days ago [-]
Guessing you don't follow politics much? The owner of that account is an American woman, her identity actually leaked years ago. She's sat for several in-person interviews, I think.
DustinEchoes 55 days ago [-]
This is happening all over US social media, even here. If you ever wonder why US hatred has permeated modern culture, this is why.
rurban 55 days ago [-]
Google always thinks I am in Finland, because one of my coworkers on the same VPN is in Finland. Bonkers. Finnish ads suck.
BhavdeepSethi 55 days ago [-]
First, allow anyone to sign up for the blue check mark for $8. Verified accounts lose their value and Twitter gets flooded with fake accounts and foreign-run accounts. Now, try to fix it by showing users' country of origin. Now, these users will try to figure out ways to bypass it.
iamshs 55 days ago [-]
After this change, accounts created in the US will be sought after. Operate them through a US VPN. The voracious appetite for consuming content will be filled by outsiders. The US effs up the world with violence; now the world is riling up Americans with similarly worded content of their own politicians.
iamshs 55 days ago [-]
Why couldn't you get any other news source than HindustanTimes.com, a plainly unhealthy Indian news blog? We Indians have to suffer it because we have few options; it's amazing when these unreliable sites get an SEO boost on here too...
kazinator 54 days ago [-]
I don't use X.
If I made an X account while vacationing in a foreign country, would that then be my country-of-origin for that account, even upon continuing to use X after returning home?
Or is it based on the IP address of last interaction?
blargey 55 days ago [-]
And so the user response is just organic local political provocateurs crowing over examples of their "opponents" on social media being foreign plants, while ignoring all the times their "side" were baited by fake enemies and boosted by fake allies, and then it's back to business as usual. Same game different players - if even that; the success of these accounts already relied on some combination of credulity and wilful ignorance.
Yay politics. Hooray for the engagement-driven internet.
noirchen 54 days ago [-]
There was once news about Fitbit or Garmin displaying workout records for users near classified military bases, and some workout routes were suspected to be on the grounds of the bases.
bdangubic 55 days ago [-]
if proof of identity was required (IAL2) to use social media most political issues in america would be solved (and some social media companies would go bankrupt in a few months :) )
tb_technical 53 days ago [-]
It's wild to me that the DHS account was created in Tel Aviv. Several American nationalists are Indian.
It's absolutely nuts.
Zigurd 55 days ago [-]
This is another piece of a mosaic that is going to reveal that grey zone warfare by Russia against the west has much larger scope than most people are aware of. The UK National Crime Agency uncovered a huge money laundering enterprise based on the kinds of crime that fly under the national security radar in most places.
throwaway48476 55 days ago [-]
Most of the problem here is not gray zone warfare but just the modern equivalent of WoW gold farmers, engagement-baiting for a $5 payout.
anigbrowl 54 days ago [-]
I don't think so. Political trolling is kind of hard work because you have a lot of people who are suspicious and used to deconstructing arguments. If you just want engagement, it's easier to pick a sports team or pop star and bait passionate fans by pretending to support a rival.
That kind of false engagement is also a problem (for advertisers, genuine fans, etc.) but doesn't shape elections or come with policy consequences.
lurk2 55 days ago [-]
> This is another piece of a mosaic that is going to reveal that grey zone warfare by Russia against the west has much larger scope than most people are aware of.
Are you people ever going to let this idea go? Almost all of this activity is coming out of India, Israel, and Nigeria. Russia isn’t mentioned once in the article.
> The network was small: just 49 Facebook accounts, 85 Instagram accounts and 71 Twitter accounts in question.
This is the pattern with all Russian influence operations; they’re always implied to be ominously large and end up being laughably small.
American political polarization had nothing to do with the Russians; this is just the refrain of frustrated Democrats who refuse to acknowledge the consequences of ill-conceived policy. Israel has always had far more sway over American politics.
energy123 55 days ago [-]
This sounds like being wilfully uninformed. Russia organized almost a dozen Black Lives Matter protests, one of them attended by Michael Moore. They ran about half of the largest US identity focused Facebook groups (Christian/Black/etc) during the 2020 US election. I gave you one small example, it's on you to look for the full picture rather than jump to an erroneous conclusion based on god knows what motivations.
The problem in particular is not only the scale but that this propaganda is not solely directed at altering US policy towards Russia, it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country. If the US is fighting itself then it isn't fighting Russia after all.
lurk2 55 days ago [-]
> They ran about half of the largest US identity focused Facebook groups (Christian/Black/etc) during the 2020 US election.
Can you provide any citation for this and the approximate date when this was revealed? I’ve been hearing about this since 2015 and the last report I looked at was entirely unconvincing.
> it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country.
That is likely one of Russia’s goals; it is not likely that the Russians were the origin of these political cleavages. This was the problem with the entire Russian influence narrative; it was a post-hoc rationalization for why exceptionally bad ideas like diversity and multiculturalism were rejected by a subset of the population. In essence: “If they hadn’t been exposed to these Facebook posts, they never would have had these illiberal ideas put into their heads.”
It was also impossible to take seriously because most of the elected officials promoting it were receiving campaign contributions from AIPAC.
This is not pointing to the Russian government trying to influence the election to gain favoritism.
"For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content"
BuzzFeed News investigation "didn't find concrete evidence of a connection" and "Facebook said its investigations hadn't turned up a connection between the IRA and Macedonian troll farms either"
Zigurd 54 days ago [-]
The reason I've mentioned the UK investigation is that they uncovered an elaborate multi-hundred-million-dollar money laundering pipeline, including a bank that was bought by Russian mobsters, that was supporting and harvesting money out of fairly low-level crime in the UK. It's not just sabotage operations and the poisoning of exiled dissidents that are readily understood to be part of espionage. Spies often work with criminals. Sometimes in unexpected ways.
I've been in touch with tech people in Eastern Europe. Grey zone warfare is very real in their countries.
lurk2 55 days ago [-]
This article doesn’t support your conclusions; it states that these pages were being run out of Macedonia and Kosovo, primarily for profit.
> A 2018 BuzzFeed News investigation found that at least one member of the Russian IRA, indicted for alleged interference in the 2016 US election, had also visited Macedonia around the emergence of its first troll farms, though it didn’t find concrete evidence of a connection. (Facebook said its investigations hadn’t turned up a connection between the IRA and Macedonian troll farms either.)
Further, the article supports the point I was making:
> For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content. But because misinformation, clickbait, and politically divisive content is more likely to receive high engagement (as Facebook’s own internal analyses acknowledge), troll farms gravitate to posting more of it over time, the report says.
This isn’t evidence of a concerted influence campaign. It’s not even clear what the article means when it refers to these outfits as troll farms. What I imagine when I hear the phrase is a professionalized state-backed outfit with a specific mandate to influence public opinion in a target country; this isn’t what is being described in the article.
There’s evidence that Russia engaged in these kinds of influence campaigns during the 2016 election, but I’ve never seen evidence that they were particularly effective at it.
throwaway48476 55 days ago [-]
Agitprop cannot create divisions whole cloth, they can only amplify and inflame extant divisions.
seattle_spring 55 days ago [-]
> This sounds like being wilfully uninformed. Russia organized almost a dozen Black Lives Matter protests, one of them attended by Michael Moore. They ran about half of the largest US identity focused Facebook groups (Christian/Black/etc) during the 2020 US election
Maybe it wasn't your intent, but your comment makes it sound like this was an issue with only a single side of the political spectrum. However...
> The Russians weaponized social media to organize political rallies, both in support of and against certain candidates, according to the indictment. Although the Russians organized some rallies in opposition to Trump's candidacy, most were supportive.
Not to mention the recent exposure of the funding source of the fine folks over at Tenet Media.
throwaway48476 55 days ago [-]
Tenet is a good example that they don't pay for specific words to be said, they pay to amplify outlets who are already saying what they want amplified.
CamperBob2 55 days ago [-]
It would be silly if the Russians weren't stirring shit up in the enemy camp.
That's what the Russians do. It's too difficult to improve their own country, their own lives, and their own prospects, so they focus on the next-best strategy for the acquisition of power, which is dragging everybody else down to their level.
throwaway48476 55 days ago [-]
It wouldn't be difficult to improve Russia; the kleptocracy just makes it impossible. "Why go to the moon when we have craters (potholes) here in Samara" -some kid on TikTok.
throwaway48476 55 days ago [-]
In my personal experience a lot of the people involved in Facebook Russian influence operations are post Soviet exodus diaspora boomers. They share the content produced by the troll farms.
lurk2 55 days ago [-]
Interesting. The extent of Russian influence I noticed peaked in the spring of 2016. Lots of self-professed fascists were converting to Eastern Orthodox Christianity and subscribing to the idea that Russia, Iran, North Korea, and China formed some of the last governments on earth not controlled by a Rothschild-owned central bank.
I know of a few defectors who ended up there; one was an American that went by the name of “Texas,” while another one was a Canadian who moved there to be a farmer in hopes of protecting his family from what he saw as degenerate values being propagated by the Canadian education system. Texas was supposedly murdered by Russian soldiers while operating with Kremlin-aligned militias in the Donbas region. The Canadian is still living in Russia and has a YouTube channel.
I suspected a regular rotation of Kremlin agents were on /pol/ during the Syrian Civil War. Russian sentiment was generally far more positive prior to the invasion. It’s possible this was all organic and just collapsed as people saw what they did to Ukraine; I really have no idea.
Frog Twitter for their part pivoted on Russia quite quickly in the early 2020s, around the time Thiel was buying out podcasts.
throwaway48476 55 days ago [-]
Moving to Russia is an extreme outlier. These people exist, but there are only a dozen or so. The numbers are not much different than in the late Soviet era. In the 1920s thousands of Americans moved to Russia to build communism. Many also came back disillusioned, or died in a gulag.
On the other hand there are hundreds of thousands of diaspora Russians, and they're very pro-Russian. Richard Spencer's ex-wife is a good example of this. Overall this is a much bigger impact than the dozen converts or a few thousand half-hearted Harper's.
Obviously before the war Russia was less publicly objectionable. In Syria everyone just hated ISIS.
The /pol/ effect is nostalgia for worlds that no longer exist and were never personally experienced. It's politics-flavored nostalgia instead of Pokémon collecting.
In terms of American Twitter, Russiagate and making Russia a red/blue partisan issue has been the most disastrous. It's simple contrarianism.
prmph 55 days ago [-]
You think IP addresses can't be spoofed rather easily?
What political interest does a Nigerian have in swaying US opinion?
lurk2 55 days ago [-]
> What political interest does a Nigerian have in swaying US opinion?
They’re grifters; their interest in American politics is commercial. Indians were targeting Trump supporters with fake news for ad revenue as early as 2015; this is a continuation of that model.
throwaway48476 55 days ago [-]
It was so successful that they took over the FBI.
add-sub-mul-div 55 days ago [-]
You're literally posting in the comments of an article that's about the ease of hiding geographical origin.
lurk2 55 days ago [-]
There were a number of accounts that got doxxed in the last year that were demonstrated to have Indian owners. Engagement farms have been doing this since Trump’s first term; the goal is primarily ad revenue, not political influence. I didn’t see any that were Israeli but everyone knew those accounts were there.
It’s possible the Russians have contracted influence campaigns out to Indian and Israeli firms, but the simpler explanation is just that India is continuing its long and storied history of using telecomm networks to scam unwitting boomers while Israel is continuing its long and storied history of being the worst greatest ally of all time.
afavour 55 days ago [-]
Eh. I think this is just evidence that if you pay people to have divisive opinions (as X does) then that will incentivize divisive discourse. We’re seeing it come from developing nations because it’s worth their time economically.
Interesting read. Thank you for sharing. Was there ever any evidence that they hit their projected metrics? A million followers after a year seems ambitious.
55 days ago [-]
xorvoid 55 days ago [-]
I'm amused this is a new reveal for some people. Seemed the very likely case for me.
mac-attack 55 days ago [-]
America outsourcing its rage is peak USA. McKinsey would be proud
cortesoft 55 days ago [-]
It's not the USA outsourcing rage, it is foreign actors paying to manufacture rage in the USA.
happosai 55 days ago [-]
While a handful of troll factories are paid to destabilize the USA, the vast majority operate from third-world countries with only profit in mind.
cortesoft 55 days ago [-]
I didn’t say WHY they are manufacturing rage. Some do it for political reasons, some for profit, some for both.
anigbrowl 54 days ago [-]
Why do politics though? Trolling people over sport or entertainment or games is massively easier.
NaomiLehman 51 days ago [-]
politics is the easiest because of outrage
wolrah 55 days ago [-]
whynotboth.gif
I'd make the assumption that posters located in Russia, China, NK, etc. are likely to be in some way tied to the state, where posters in India, random African nations, etc. are more likely to be private actors of which some will be US-based outsourcing to low-cost labor.
spprashant 55 days ago [-]
It's also just financially lucrative? The right tends to have more politically incorrect things to say, and it's no surprise click farms from Asia would want to capitalize on that shock value.
cortesoft 55 days ago [-]
That is still foreign actors manufacturing rage, whether it is for profit or for political motivations.
AnimalMuppet 55 days ago [-]
I think it's more like outsiders stoking US rage on both sides. That's a bit different than "outsourcing".
ants_everywhere 55 days ago [-]
Yeah, rage isn't something the US needs. It's something the US's enemies need us to have.
energy123 55 days ago [-]
Even their political commentary is an entertainment export.
amai 55 days ago [-]
Thank goodness this feature doesn't exist on Hacker News.
OrangeMusic 55 days ago [-]
Care to elaborate?
amai 54 days ago [-]
Without that feature, foreign influence on Hacker News stays hidden.
tehjoker 55 days ago [-]
Is there any technical reason why US accounts might be identified as foreign, or is this really a foreign play, or US people hiring people outside the country?
intothemild 55 days ago [-]
Because they're all from other countries. It's really not that deep
jsheard 55 days ago [-]
Turns out that if you pay users for engagement then users will post whatever gets the most engagement, regardless of whether it's true. Who could have foreseen this.
happosai 55 days ago [-]
To be fair, I don't think many expected making people angry to be the best strategy for engagement. You would think people would get tired of being angry and would stop using websites that keep making them angry.
prmph 55 days ago [-]
Why would people stop using sites that continually provide them a daily dose of righteous anger, making their dreary lives a bit more meaningful?
pixl97 55 days ago [-]
"This Video Will Make You Angry" is 10 years old now.
Hank Green's account was showing Japan. There are, sadly, flaws in the system.
duskwuff 55 days ago [-]
> is there any technical reason why us accounts might be identified as foreign
Speculation: they're resolving historical IP addresses against a current IP geolocation database. An IP which belonged to a US company in 2010 may have since been sold to a Nigerian ISP, but that doesn't mean that the user behind that IP in 2010 was actually in Nigeria.
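If that speculation is right, the failure mode is easy to sketch. Here's a toy illustration (the IP block, countries, and reassignment are invented for this example; real services use commercial databases like MaxMind's): the same signup IP resolves differently depending on which vintage of the database you ask.

```python
import ipaddress

# Hypothetical database snapshots: the same /16 belongs to a US ISP
# in 2010 but has since been reassigned to a Nigerian ISP.
GEO_DB_2010 = {"198.51.0.0/16": "US"}
GEO_DB_2024 = {"198.51.0.0/16": "NG"}

def lookup(ip: str, db: dict) -> str:
    """Return the country of the most specific block containing `ip`."""
    addr = ipaddress.ip_address(ip)
    matches = [
        (net.prefixlen, country)
        for block, country in db.items()
        if addr in (net := ipaddress.ip_network(block))
    ]
    return max(matches)[1] if matches else "??"

signup_ip = "198.51.100.7"  # IP recorded at account creation in 2010
print(lookup(signup_ip, GEO_DB_2010))  # US: correct at the time
print(lookup(signup_ip, GEO_DB_2024))  # NG: wrong for a 2010 signup
```

Resolving the 2010 IP against the 2024 table yields Nigeria even though the user was in the US when the account was created; avoiding this requires archiving the geolocation database alongside the IP, which most systems don't do.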
toast0 55 days ago [-]
I was going to speculate the same thing. Seems pretty likely... a lot of systems record information about when an account was created, and IP location correlations are period sensitive (at least some of them are... my college is most likely still using its /16 and hasn't moved outside the city where it was founded)
neaden 55 days ago [-]
I doubt for the most part it's people being hired. I think mostly it's probably people in low income countries who make a living posting as different identity on social media.
exegete 55 days ago [-]
How are they making a living posting political propaganda if they aren't hired?
energy123 55 days ago [-]
Advertising revenue directly from X, posting affiliate links to products or gambling/crypto sites, and directly asking followers for money. Or they're being paid by clandestine operations such as Internet Research Agency.
55 days ago [-]
throwaway48476 55 days ago [-]
X has revenue share for popular posts.
colechristensen 55 days ago [-]
These aren't like a single real person pretending to be American out of deep interest in American politics.
These are paid astroturfers, probably more like call centers, paid for presumably by all sorts of interests: foreign intelligence services, businesses (or select executives), and internal political groups or politicians trying to manipulate public opinion.
Both political extremes are suffering from this kind of manipulation, where real concerns are twisted and amplified for, let's say, the more gullible half of the population (gullibility knows no exclusive political alignment). The excluded middle is afraid of the people who have been manipulated this way (death threats also know no political boundaries).
disambiguation 55 days ago [-]
VPN?
wnevets 55 days ago [-]
> is there any technical reason why
With the development capability remaining at Twitter, anything is possible.
d0100 55 days ago [-]
Maybe most of these accounts are managed by an offshored social media company?
bdangubic 55 days ago [-]
yup, for sure that is it :)
jiggawatts 55 days ago [-]
To a billionaire, hiring a few hundred Nigerians to upvote and share their propaganda is so cheap that it’s like you buying a cup of coffee.
They use professional paid services from these low labour cost countries all the time for publicity or to control the narrative.
By some estimates 20-60% of everything you see on social media is generated by a bot farm, depending on the forum in question. An analysis of Reddit showed some subreddits are 80% AI generated.
bcoates 55 days ago [-]
It doesn’t have to be and almost certainly isn't some billionaire. Formulaic spicy political nonsense is reliable engagement bait and it's easy to churn eyeballs into (small amounts of) money. It's not even unique, there are similar grinds about sports, religion, cute animals, subculture jokes, etc.
The "control the narrative" stuff is mostly a PR campaign by social media intelligence companies trying to make their services seem more valuable than they are.
55 days ago [-]
lurk2 55 days ago [-]
Anyone who follows the space closely has known about this for at least the last year. It was quite common for these accounts to make slip-ups that revealed their country of origin and to subsequently be doxxed by ideological enemies or rival influencers (they all want to be the leader and they all absolutely hate each other).
Almost all of these accounts are operating out of India or Israel. The Indians are usually in it for the money (though there’s probably some Israeli outsourcing going on there, too), whereas the Israelis were riding off 2010s Islamophobia to prime American Evangelicals for their activities in Gaza.
throwaway48476 55 days ago [-]
Indians pretend to be Israelis because they don't like Muslims who they see as similar to Pakistanis.
stefan_ 55 days ago [-]
Where are all these Israeli accounts? That just seems to be your weird personal bias. Weird, because you can just confirm it for yourself now!
albedoa 55 days ago [-]
> Weird, because you can just confirm it for yourself now!
That is exactly what is happening and what is being reported on. The thing you attribute to "weird personal bias" is being widely exposed.
We should probably examine your weird personal bias. Weird, because you could just read the article!
lurk2 55 days ago [-]
> Where are all these Israeli accounts?
The Department of Homeland Security, for one.
Edit: Link removed as I was disinformed by a /pol/ PsyOp.
energy123 55 days ago [-]
That is a doctored image according to both DHS and X's head of product. What kind of information bubble are you in?
Can't beat X. Such a great feature to see which accounts might be trying to astroturf. Already accounts like Republicans For Trump, and several (mostly no-name) US-flag-toting accounts, were exposed as not even being based in the US.
Now if we could have other platforms do the same, and not just accidentally like with the Reddit case lol
anigbrowl 54 days ago [-]
This information has always been available in the API, but Musk shut down free access to the API shortly after buying the company, erecting a $5,000/mo paywall. You can still use it to post, but not for search or any kind of streaming without paying a fat fee.
nextworddev 55 days ago [-]
HN also needs this feature
55 days ago [-]
jaco6 55 days ago [-]
[dead]
Slava_Propanei 54 days ago [-]
[dead]
ecoled_ame 55 days ago [-]
[flagged]
bparsons 55 days ago [-]
It truly is the sewer of the internet.
seizethecheese 55 days ago [-]
You think this is unique to X? If anything, it’s unique they’re exposing this.
anigbrowl 54 days ago [-]
When the API was public academics and independent scholars used to do this sort of research all the time, but now it's prohibitively expensive. Read up on the search/streaming API and reflect on the fact that it used to be free.
tasty_freeze 55 days ago [-]
"exposing" should have been past tense. They've quickly reverted that feature once they realized how bad it made them look.
SV_BubbleTime 55 days ago [-]
Are you trying to make sure that any criticism of X can be shrugged off by supporters and detractors alike as cheap ideological attacks? Because if you are, excellent job.
ecoled_ame 55 days ago [-]
[flagged]
malfist 55 days ago [-]
Sorry to break it to you bud, but girls on the internet aren't usually girls in real life.
ecoled_ame 55 days ago [-]
You don’t think girls use the internet? I’ve met some in real life, know a bunch who clearly aren’t fake. That’s why they hang out on twitter. Places like HackerNews are cool for guys, but not fun. Twitter may be chaotic, but it’s fun for girls and many feel comfortable there. They can post stuff, kind of like pinterest.
https://web.archive.org/web/20160410083943/http://www.reddit...
Funny nonetheless though.
You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.
Not sure what "most addicted" means except "over 100k visits total", but it doesn't seem to be pulled out of OP's ass.
This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.
These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.
Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and direct access to our community-centric neurons. It is a dream come true for these slavers which evoke a host of expletives in my mind.
Human beings are addicted to community social interaction. It is normally a healthy addiction. It is not any longer in service of us.
The short term solution: reduce reliance on and consumption of corporate captured social media
The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech
When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.
Also don’t reply to this.
Can you elaborate? (At the risk of spoiling the joke)
My impression of the joke is that intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.
If the allegory of the cave is describing a journey from ignorant and incorrect beliefs to enlightened realizations, the parent is making a joke about people going in reverse. Perhaps they have seen first hand someone who is educated, knowledgeable and reasonable become deceived by social media, casting away their own values and knowledge for misconceptions incepted into them by persistent deception.
I'm not saying I agree entirely with the point the joke is making but it does sort of make sense to me (assuming I even understand it correctly).
I also see this with AI answers relying on crap internet content.
AI trained on most content will be filled with misconceptions and contradictions.
Recent research has been showing that culling bad training data has a huge positive impact on model outputs. Something like 90% of desirable outputs comes from 10% of the training data (forget the specifics and don't have time to track down the paper right now)
I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole. If AIs compete with each other based on which best represent truth, then overall things could get a lot better.
The alternative seems dreadful.
Edit: I am curious why this is getting downvoted.
https://www.anthropic.com/research/small-samples-poison
It was discussed a month or so back.
https://news.ycombinator.com/item?id=45529587
I mean it's also just the classic garbage in garbage out heuristic, right?
The more training data is filtered and refined, the closer the model will get to approximating truth (at least functional truths)
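The garbage-in/garbage-out point can be made concrete. Below is a minimal sketch of culling low-quality training documents; the `quality_score` heuristic, threshold, and sample corpus are all made up for illustration (a real pipeline would use a trained quality classifier, not hand rules):

```python
def quality_score(doc: str) -> float:
    """Toy heuristic: reward longer documents, penalize all-caps shouting.
    Purely illustrative -- not from any cited paper."""
    if not doc.strip():
        return 0.0
    words = doc.split()
    length_score = min(len(words) / 100.0, 1.0)
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    shout_penalty = caps / max(len(words), 1)
    return max(length_score - shout_penalty, 0.0)

def cull(corpus: list[str], keep_fraction: float = 0.1) -> list[str]:
    """Keep only the top-scoring fraction of the corpus."""
    ranked = sorted(corpus, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

corpus = [
    "A careful, well-sourced explanation of the topic " * 10,
    "BUY NOW!!! CLICK HERE!!!",
    "ok",
]
filtered = cull(corpus, keep_fraction=0.34)  # keeps the substantive document
```

The interesting empirical claim in the thread is exactly this asymmetry: a small, well-chosen slice of the data accounts for most of the desirable behavior, so aggressive culling can help rather than hurt.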
It seems we are agreeing and adding to each other's points... Were you one of the people who downvoted my comment?
I'm just curious what I'm missing.
It would be VERY refreshing to see more than one company try to build an LLM that is primarily truth-seeking, avoiding the "waluigi problem". Benevolent or not, progress here should not be led by just one man ...
Do hope. But hoping for a unicorn is magic thinking.
For other people, they can either count this as a reason to despair, or figure out a way to get to the next best option.
The world sucks, so what ? In the end all problems get solved if you can figure them out.
The reason I say this is that blind hope and informed hope are two different things.
Media has always relied on novel fear to attract attention. It's always "dramatized"; sacrificing truth for what sells. However AI is like electricity or computation. People make it to get things done. Some of those things may be media, but it will also be applied to everything else people want to get done. The thing about tools is that if they don't work people won't keep using them. And the thing about lies is that they don't work.
For all of human history people have become more informed and capable. More conveniences, more capabilities, more ideas, more access to knowledge, tools, etc.
What makes you think that AI is somehow different than all other human invention that came before it?
It's just more automation. Bad people will automate bad things, good people will automate good things.
I don't have a problem with people pointing out risks and wanting to mitigate them, but I do have a problem with invalid presuppositions that the future will be worse than the past.
So no, I don't think I'm hoping for a unicorn. I think I'm hoping that my intuition for how the universe works is close enough, and the persistent pessimism that seems to permeate from social media is wrong.
> The thing about tools is that if they don't work people won't use them.
People will and do use tools that don't work. Over time fewer people use bad tools as word spreads. Often "new" bad tools have a halo uptake of popularity.
> And the thing about lies is that they don't work.
History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
My bad. I meant won't keep using them.
> History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
What do you mean by "work"?
It sounds like you are implying that a lie "works" by convincing people to believe it?
I meant a lie doesn't work in that if you follow the lie you will make incorrect predictions about the future.
If someone acts on a lie which results in a bad decision with a "long shadow" then wouldn't that mean acting out the lie didn't work?
They are used by bad actors to, say, win elections and then destroy systemic safeguards and monitoring mechanisms that work to spotlight bad actions and limit damage.
There are also lies, such as a common belief in Wagyl, that draw people to together and act in unison as a community to help the less fortunate, preserve the environment and common resources, and other things not generally perceived as destructive.
I don't disagree with this. It's reasonable to assume I was talking about that type of "work", but I wasn't.
> There are also lies, such as a common belief in Wagyl, that draw people to together and act in unison as a community to help the less fortunate, preserve the environment and common resources, and other things not generally perceived as destructive.
I am not familiar with this specific culture but I totally get your point. Most religion works like this. I would just consider that the virtues and principles embedded within the stories and traditions are the actual truths that work, and that Wagyl and the specifics of the stories are just along for the ride. The reason I believe this is because other religions with similar virtues and values will have similar outcomes even though the lie they believe in is completely different.
I said that lies destroy, and that wasn't right. Sometimes they do, but as you have pointed out, often they don't.
> I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole.
The ratio of total hours of human attention available to total hours of content is essentially 0. We have infinite content, which creates unique pressures on our information gathering and consumption ability.
Information markets tend to consolidate, regulating speech is beyond fraught, and competition is on engagement, not factuality.
Competing on accuracy requires either Bloomberg Terminal levels of payment, or you being subsidized by a billionaire. Content competes with content, factual or otherwise.
My neck of the woods is content moderation, misinformation, and related sundry horrors against thought, speech and human minds.
Based on my experience, I find this hope naive.
I do think it is in the right direction, and agree that measured interventions for the problems we face are the correct solution.
The answer to that, for me, is simply data and research on what actually works for online speech and information health.
> I really hope that AI business models don't fall into relying on getting and keeping attention.
What I really meant is that I hope that the economic pressures on media don't naturally also apply to AI. I do think it's naive to hope that AI won't be used in media to compete for attention, I just don't think it's naive to hope that's not the only economic incentive for its development.
I also hope that it becomes a commodity, like electricity, and spills far and wide outside of the control of any monopoly or oligopoly (beyond the "tech giants"), so that hoping tech giants do anything against their incentive structures is moot. I hope that the pressures that motivate AIs development are overwhelmingly demand for truth, so that it evolves overwhelmingly towards providing it.
If this hope is naive, that would imply the universe favors deception over truth, death over life, and ultimately doesn't want us to understand it. To me, that implication seems naive.
The Bloomberg terminal is an interesting example and I see your point. I guess the question is what information is there a stronger incentive to keep scarce. The thing about Bloomberg terminals are that people are paying for immediate access to brand new information to compete in a near-zero-sum game. Most truth is everlasting insight into how to get work done. A counter example are textbooks.
The commodification is towards the production of content, not information.
Mostly, producers of Information, are producing expensive “luxury goods”, but selling them in a market for disposable, commodified goods. This is why you need to subsidize fact checkers, and news papers.
I believe this is a legacy of our history, where content production was hard and the ratio of information to content was higher.
Consumers of content are solving for not just informational and cognitive needs, they are also solving for emotional needs, with emotional needs being the more fundamental.
Consumers will struggle with so many sources of content, and will eventually look towards bundling or focus only on certain nodes.
Do note - the universe does not need to favor anything for this situation to occur. Deception is a fundamental part of our universe, because it’s part of the predator prey dynamic. This in turn arises out of the inability of any system to perfectly process all signals available to them.
There is always place for predators or prey to hide.
I thought of the predator prey frame shortly after posting my last comment.
Maybe it boils down to game theory and cooperation vs competition, and the free energy principle. Competition (favoring deception) puts pressure on cooperation (favoring truth). Simultaneously life gets better at deceiving and at communicating the truth. They are not mutually exclusive.
When entities are locked into long term cooperation, they have a strong bias to communicate truth with each other. When entities are locked into long term competition, they have a strong bias to deceive each other.
Evolution seems to be this dance of cooperation and competition.
When a person is born, overwhelmingly what's going on between cells inside their body is cooperation. When they die, overwhelmingly what happens between cells is competition.
So one way that AI could increase access to truth, is if most relationships between people and AI are locked into long term cooperation. Not like today where it's lots of people using one model from a tech co, but something more like most people running their own on their own hardware.
I've heard people say we are in the "post truth era" and something in my gut just won't accept that. I think what's going on is the power structures we exist in are dying, which is biasing people and institutions to compete more than cooperate, and therefore deceive more than tell the truth. This is temporary, and eventually the system (and power structures) will reconfigure and bias back to cooperation, because this oscillation back and forth is just what happens over history, with a long term trend of favoring cooperation.
So to summarize... Complexity arises from oscillations between competition and cooperation, competition favors deception and cooperation favors telling the truth. Over the long-term cooperation increases. Therefore, over the long-term truth communication increases more than deception.
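The cooperation-favors-truth, competition-favors-deception claim is a standard game-theory intuition, and the usual toy model for it is the iterated prisoner's dilemma (my illustration, not anything from the thread): agents locked into repeated interaction do best by cooperating, while one-shot or adversarial pairings reward defection.

```python
# Payoffs indexed by (my_move, their_move); True = cooperate, False = defect.
PAYOFF = {
    (True, True): 3,    # mutual cooperation
    (True, False): 0,   # I'm exploited
    (False, True): 5,   # I exploit
    (False, False): 1,  # mutual defection
}

def play(rounds, strategy_a, strategy_b):
    """Iterate the game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = True  # assume goodwill on the opening round
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last  # mirror the opponent
always_defect = lambda opponent_last: False

# Two cooperators in a long-term relationship jointly outscore a
# cooperator/defector pairing over many rounds.
coop_total = sum(play(100, tit_for_tat, tit_for_tat))    # 600
mixed_total = sum(play(100, tit_for_tat, always_defect))  # 203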
I’ve been there too, is what I am saying. But, reality is reality, and feeling bad or good about it is pointless beyond a point.
AI cannot increase access to truth. This is also part of the hangover of our older views on content, truth and information.
In your mental mode, I think you should recognize that we had an “information commons” previously, even to an extent during the cable news era.
Now we have a content commons.
The production of Information is expensive. People are used to getting it for free.
People are also now offered choices of more emotionally salient content than boring information.
People will choose the more emotionally salient content.
People producing information, will always incur higher costs of production than people producing content. Content producers do not have to take the step of verifying their product.
So content producers will enjoy better margins, and eventually either crowd out information producers, or buy out information producers.
Information producers must raise prices, which will reduce the market available for them. Further - once information is made, it can always just be copied and shared, so their product does not come with some inherent moat. Not to mention that raising prices results in fewer customers, and goes against the now anachronous techie ethos of “Information should be free”.
I am sure someone will find some way to build a more durable firm in this environment, but it’s not going to work in the way you hoped initially. It will either need to be subsidized, or perhaps via reputation effects, or some other form of protection.
Cooperation is favored if cooperation can be achieved. People will find ways to work together, however the equilibrium point may well be less efficient than alternatives we have seen, imagined or hoped for.
More dark forest, cyberpunk dystopia, than Star Trek utopia.
There’s an assumption of positive drift in your thinking. As I said, this is my neck of the woods, and things are grim.
But - so what? If things are grim, only through figuring it out can it actually be made better.
This is the way the pieces on the board are set up as I see it. If you wish to have agency in shaping the future, and not be a piece that is moved, then hopefully this explanation will help build new insights and potential moves.
There's one thing that I just realized hasn't come up in our discussion yet which has a big impact on my perspective.
Everything in the universe seems built on increasing entropy. Life net decreases entropy locally so that it can net increase it globally. There also seems to be this pattern of increasing complexity (particles, atoms, molecules, cells, multi cells, collectives) that unlocks more and more entropy. One extremely important mechanism driving this seems to be the Free Energy Principle, and the emergent ability to predict consequences of actions. Something about it enables evolution, and evolution enables it.
This perspective is what gives me more confidence that within collectives the future will include more shared truth than the past, because at every level of abstraction and for all known history that has been the long-term trend.
Cells get better at modelling their external environment, and better at communication internally.
The reason why I am so confident we are not "post truth" is because lies don't work, not in the sense that people can't be deceived by lies (obviously they can), but dysfunctional lies won't produce accurate predictions. Dysfunctional lies don't help work get done, and the universe seems to be designed for work to get done. There is some force of nature that seems to favor increasingly accurate predictive ability.
Your perspective seems to be very well informed from what feels like the root of the issue, but I think you're missing the big picture. You aren't seeing the forest, just the trees around you. I know you assume the same of me, that I don't see these trees that you see. I believe you, that what you see looks grim. I also agree we need to understand the problems to solve them. I'm not advocating for any lack of action.
Just suggesting that you consider:
- for all history life has gotten better at prediction
- truth makes better predictions than lies
What's more likely? we are hitting a bump in the road that is an echo of many that have come before it, or something fundamental has materially changed the trajectory of all scientific history up until this point?
Your points about the cost of information and the cost of content are valid. In a sense, content is pollution. It's a byproduct of competition for attention.
I can think of a few ways that the costs and addictive nature of content could become moot.
- AI lowers the cost of truth
- Human psychology evolves to devalue content
- economic systems evolve to rebalance the cost/value of each
- legal systems evolve to better protect people from deception
These are just what come to mind quickly. The main point is that these quirks of our current culture, psychology, economic system, technological stage and value system are temporary, not fundamental, and not permanent. Life has a remarkable ability to adapt, and I think it will adapt to this too.
I really appreciate you engaging with me on this so I could spend time reflecting on your perspective. If I ever came across as dismissive I apologize. You've helped me empathize with you and others with the same concerns and I value that. You haven't fundamentally changed my mind, but you gave me a chance to hone my thinking and more deeply reflect on your main points.
It feels like we agree on a lot, we are just incorporating different contexts into our perspectives.
Nah. I see it more as there was an information asymmetry, on this specific topic, due to our different lived experiences.
I can feasibly provide more nuanced examples of the mechanics at play as I see them. Their distribution results in a specific map / current state of play.
> - economic systems evolve
> - legal systems evolve
These types of evolutions take time, and we are far from even articulating a societal position on the need to evolve.
Sometimes that evolution is only after events of immense suffering. A brand seared on humanity’s collective memory.
We are not promised a happy ending. We can easily reach equilibrium points that are less than humanly optimal.
For example - if our technology reaches a point where we can efficiently distract the voting population, and a smaller coterie of experts can steer the economy, we can reach 1984 levels of societal ordering.
This can last a very long time, before the system collapses or has to self correct.
Something fundamental has changed and humanity will adapt. However, that adaptation will need someone to actually look at the problem and treat it on its merits.
One way to think of this is cigarettes, Junk foods and salads. People shifted their diets when the cost of harm was made clear, AND the benefits of a healthy diet were made clear AND we had things like the FDA AND scientists doing sampling to identify the degree of adulteration in food.
——
> My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.
How are you doing this?
I never said that though?
> Hoping that audiences will reject this is viable.
I have no clue what you mean. What is "this" refering to?
I think that's by design though. Tolerate bots to get high-value users to participate more after they think real people are actually listening to them.
Leaving social media can be thought of as emerging from the cave: you interact with people near you who actually have experiences in common with you (if only geographically) and you get a feel for what real-world conversation is like: full of nuance and tailored to the individual you’re talking to. Not blasted out to everyone to pick apart simultaneously. You start to realize it was just a website and the people on it are just like the shadows on the wall: they certainly look real and can be mesmerizing, but they have no effect on anything outside of the cave.
It was just a way for him to convey his "theory of forms" in which perfect versions of all things exist somewhere, and everything we see are mere shadows of these true forms. The men in the cave are his fellow Athenians who refuse his "obvious" truth, he who has peeked out of the cave and seen the true forms. All in all, it's very literal.
> Walk willingly into platos cave, pay for platos cave verification, sit down, enjoy all the discourse on the wall.
Homer pays to get the crayon put back up his nose
> Spit your drink out when you figure out that the shadows on the wall are all fake.
Homer gets annoyed/surprised if someone calls him stupid.
The shadows on the wall aren't fake, they are just... shadows of real things. Plato's cave is about having an incomplete view of reality, not a false view of reality.
It’s kind of funny how everyone projects their own dialectic framing on statements, and assumes that a person opposing side A automatically supports whatever is side B in their own mind.
I would imagine a large majority of readers read your original post and immediately in their head thought, “are they one of those school voucher people” or something along those lines.
If we are all going around assuming 99% of the positions of people we are engaging with, what is the point of discussing anything?
Ironically many of the people in favor of banning VPNs are themselves using a VPN.
It’s ironic but also completely typical.
Same way so many people publicly freaking out about homosexuality turn out to be gay. There’s something in human nature that makes people shout about the dangers of the things they themselves do, some kind of camouflage instinct I guess.
And with that statement you ironically insinuate that I'm a pedo
You're not the first person that made that argument (that the people talking about a problem actually are the real perps!), but from my perspective it feels more like an easy way to make it socially unacceptable to talk about categories of issues. Which is likely intended by the person making this argument, likely because... You see were this is going?
Remember that China blocks Western social media, yet posts a lot of Chinese government propaganda on Western social media. Making VPNs illegal for the general public does not entail making VPNs inaccessible to government agents.
How do you know this as a fact?
Or maybe they are able to link carrier-sourced cellphone location datasets to particular twitter accounts. Those aren't going to be real-time though, so something like that could explain the lag.
Reason I ask is because there are few people I follow that use VPNs but their location is accurate on X.
Also, X also shows where you downloaded the app from, e.g. [Country] App Store, so that one might be a bit more difficult to get around.
They would most likely use residential proxies/VPNs that make traffic appear to come from a regular household ISP. These can be purchased cheaply.
Going forward this is going to be a bit of a cat-and-mouse game. There are plenty of other tricks X can do to determine country of origin. Long term I agree the sock puppets have the upper hand here, though forcing them to go through the effort is probably a good thing.
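One move in that cat-and-mouse game is for the platform to classify the source IP's autonomous system as datacenter vs residential and treat datacenter origins as proxy-likely. A sketch, with a made-up three-entry ASN table (real systems license large, constantly updated IP-intelligence feeds):

```python
# Hypothetical ASN -> network-type mapping; the entries are examples only.
ASN_TYPE = {
    16509: "datacenter",   # e.g. a large cloud provider
    14061: "datacenter",   # e.g. a VPS host
    7922: "residential",   # e.g. a consumer ISP
}

def proxy_likely(asn: int) -> bool:
    """Flag traffic from datacenter ASNs as likely proxied.
    Residential proxies defeat exactly this check, which is why
    detection escalates to timing, device, and behavioral signals."""
    return ASN_TYPE.get(asn, "unknown") == "datacenter"
```

This also shows why the parent is right that sock puppets have the long-term upper hand: the check is only as good as the ASN labels, and residential proxy traffic is indistinguishable at this layer.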
I don't do this with every topic, just the ones I'm interested in discussing, so that I'm more informed and to reduce bias.
It's really fucked how the online content providers have moved from letting you seek out whatever you might fancy towards deciding what you're going to see. "Search" doesn't even seem like an important feature anymore many places.
But the thing that was surprising to me, as someone who remembers the world before the internet, is that anger is the thing that makes people stay on a site.
Before the internet came along, one would have thought that Truth would be the thing. Or funniness, or gossip, or even titillation and smut. Anger would have been quite far down on the list of 'addicting' things. But the proof is obvious: anger drives dollars.
There's no putting this knowledge away now that we know it.
The only question is: what are we going to do about it?
If you followed a variety of people it was quite addictive - so many celebrities or other notable people meant you got actual "first hand news", and it was fun seeing everyone join in on silly jokes and games and whatever, that doesn't hit quite as hard when it's just random usernames not "people".
But it suffered for that success, individual voices got drowned out in favour of the big names, the main way to get noticed becoming more controversial statements, and the wildly different views becoming less free flowing discussion and more constant arguments.
It was fun for a while if you followed fun people, but I think the incentives of such systems means it was always going to collapse as people worked out how to manipulate it.
X and Reddit are no different.
But the problem with over credulity goes far beyond social media. I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
Yeah, but basically nobody is capable of evaluating those sources themselves, outside of very narrow topics.
Reading a Wikipedia page about Cicero? Better make sure you can read Latin and Greek, and also have a PhD in Roman history and preferably another one in Classical philosophy, or else you will always be stuck with translations and interpretations of other people. And no, reading a Loeb translation from the 1930s doesn't mean you will fully understand what he wrote, because so much of it all hinges on specific words and what those words meant in the context they were written, and how you should interpret whole passages and how those passages relate to other authors and things that happened when he was alive and all that fun stuff.
And that's just one small subject in one discipline. Now move on to an article about Florence during the Renaissance and oh hey suddenly there are yet another couple of languages you should learn and another PhD to get.
Scientists/Researchers
Journalists
Activists
Politicians
Subject Matter Experts (for the fields I am interested in)
There were (when I was using it) a large number of "troll" accounts, and bots, but it was normally easy to distinguish the wheat from the chaff
You could also engage in meaningful conversations with complete strangers - because, like Usenet, the rules for debate were widely adopted, and transgression results in shunning (something that I rarely see beyond twitter to be honest)
I often hear that one community, or another, is "really good, not toxic at all", which is true when it starts (for tech, whilst it's "new" and everyone is still interested in figuring out how it works, sharing their learnings, and actively working to encourage people to also take interest)
Then idealism works its way in - this community is the greatest that ever existed - and whatever it is centred on is the best at whatever
Then - all other things are bad, you're <something bad> if you think otherwise
And, boom, toxicity starts to abound
For me, I've seen it so many times, whether in motorised transport (Motorcycles vs cars, then Japanese bikes vs British/European/American then individual brands (eg Triumph vs Norton), or even /style/ of bike (Oh you ride a sport bike, when clearly a cruiser is better...))
In the tech scene it's been Unix vs Microsoft, then Microsoft vs Linux or Apple, and then... well no doubt you've seen it too
Uhm, I would rather say things get bad for a community when the idealists are pushed out by grifters.
I really don't, as far as social media goes. If I see a link here, the account posting it plays hardly any part; trust comes from the source of the content more than from a random user.
And to be fair, a lot of these accounts now being exposed as grifters had been called out as such for a while. And most of them were so obviously grifty that the only ones who followed them were those already so deeply entrenched in their echo chamber.
It's funny that they're explicitly being exposed now though!
Or hasn't covered yet. It's interesting to watch the cycle of "shows up on social media" then "shows up in industry-specific press" then "shows up in mainstream press", with lag in each step.
These days, Fediverse is providing the same thing for some industries. You see stuff show up there first, then show up on X and industry press a little later, then mainstream press a little later.
IRC
Usenet
Reddit
Facebook (live)
Twitter
Same reason why most 20 something dudes are too.
How open are you to a US-citizen-verified town square online? You'd have to scan your passport or driver's license to post memes and stuff.
A town square in Cologne where 90% of participants don't hail from Cologne but from London, Mumbai, and San Francisco isn't going to solve the problems of Cologne, nor have any stake in doing so.
Which also reveals, of course, what Twitter actually is: an entropy machine designed to generate profit, one that in fact benefits from disorder, not a means of real-world problem solving, the ostensible point of meaningful communication.
Upholding at least some utterly basic foundational values of humanity doesn't require holding any stake.
Verified residency is better than nothing for putting real money on the table. Although if you've been to a local town meeting, you'll know it's still not perfect.
Except humans across the planet don't even agree on those "foundational values". What seems obvious and fundamental to us often isn't to others.
I had this same idea before and it’s not terrible. If it guaranteed user privacy by using an external identification service (ID.me?), it might get some attention. You would likely have to reverify accounts every 6 months or so to limit sales of accounts, and you would need to prevent sock puppets somehow.
If you allow pseudonymity you would get some interesting dynamic conversations, while if you enforced a real name policy I think it would end up like a ghost town version of LinkedIn. (Many people don’t want to be honest on a “face” account.) The biggest problem with current pseudonymous networks like X/Twitter is you have no idea if the other person really has a stake in the discussion.
Also, if ID were verified and you could somehow determine that a person has previously registered for the service, bans would have teeth and true bad actors would eventually be expelled. It would be better to have a forgiving suspension/ban policy because of this, with gradually increasing penalties and reasonable appeals in case of moderation mistakes.
the linkedin effect seems more due to the nature of corporate culture where everyone's profile is an extension of their persona optimized for monetary/career outcomes so you get this vapid superficial fakeness to it that turns people off.
This X feature does make things interesting: for example, with US politics, while it shouldn't stop commentary from foreigners, it definitely should limit perception meddling.
> the linkedin effect seems more due to the nature of corporate culture where everyone's profile is an extension of their persona optimized for monetary/career outcomes so you get this vapid superficial fakeness to it that turns people off.
The same would happen if people knew your IRL identity on a social site, see all the attempted “cancellations” on both sides of the aisle these last few years.
My small neighborhood has a non-anonymous chat group, which is 2-3 streets (~50 houses) inside a village which is inside a city. It is basically just a mini nextdoor but without ads or conspiracies.
I wonder how much more expensive per post it would be for the bad guys if social networks required the most draconian verification technology, like a hardware-based biometric system you have to rent, and touch or sit near when posting on social media. And maybe you have to read comments you want to post to a camera.
Even at such a ludicrous extreme, state actors would still find ways to pay people to astroturf. But how effective would extraordinary countermeasures like that be, I wonder.
(Also I think high global incomes would greatly mitigate the issue by reducing the number of people willing to pretend they genuinely hold views of foreign adversaries and risk treasony kinda charges.)
I took a look at some X profiles where I know where they're based, and a couple of other random ones, and I can see "Account based in" and "Connected via" for all of them, just logged in as a free user.
Is it possible they enabled it back again?
I'm thinking Nikita is falling out with Elon, as they seem to have diverging goals for the platform. Advertising revenue on X isn't that great, and neither are conversions, so you can't really get consistent payouts that match YouTube's. Premium subscriptions don't bring in as much dough as advertising did during the Twitter days.
One side has largely left X.
https://bluefacts.app/bluesky-user-growth?t=3m
We're on a thread about widespread fake/inauthentic users on Twitter right now. I see very little reason to trust those numbers.
https://www.forbes.com/sites/conormurray/2025/11/03/threads-...
Hmm, interesting insight, what did they each say when you talked to them?
The problem with not using a cloak was that you'd stand a very real chance of getting DDoS'd or, worse, outright hacked (made easier by the fact that in ye olde modem days, your computer was directly exposed to the Internet with no firewall/NAT to protect you), and even with using a cloak and a NAT router you'd still have trolls sending "DCC SEND" [1] into channels, immediately yeeting a bunch of people with old shoddy middleboxes.
[1] https://nullroute.lt/~grawity/startkeylogger.html
> Accounts registered after March 2024 that have a verified email address are automatically assigned a generic user cloak. If your account does not currently have a cloak, you may contact staff to receive one.
https://libera.chat/guides/cloaks
And I don’t think he’s been trying all that hard either.
Instead they built better sycophants
And let’s be honest, you know what you’d do too if it was you.
Do it enough times, and you end up with yes men that also force other people into the meat grinder well enough you don’t have to care, directly.
It’s a type of genius. It works best when you embrace that everyone wants to suck up to you anyway, and there are always more flunkies where they came from, so you’re really helping the world out by filtering down to the somewhat effective ones ASAP.
It's got the followers because the followers want to read and reshare it.
I'd maybe like to see the location of origin as a pie chart on the followers list, as well as on what they're following, but if the idea is good (for whatever definition of good)
Is being American even particularly relevant? I don't think a random guy in Indiana's opinions on Mamdani are any more relevant than a random guy in Nigeria's.
But there may be ways to link those records to a platform's users
X begins rolling out 'About this account' location feature to users' profiles
https://news.ycombinator.com/item?id=46024417
Top MAGA Influencers on X/Twitter Accidentally Unmasked as Foreign Trolls
https://news.ycombinator.com/item?id=46024211
https://www.reuters.com/technology/tencents-wechat-reveal-us...
However, if you comment on those articles, your provincial location is attached; the Cyberspace Administration of China mandates that every app reveal the provincial location of authors and commenters.
If you’re looking to make some money on X you want engagement. If you want engagement you want to say controversial things people will argue about. What better than right wing US politics, especially when the X algorithm seems to amplify it?
For Canada, though, I'd like to see the CBC dedicated to paying Canadians to post Canadian perspectives on social media.
Which, for many enterprising trolls/grifters, has seen them become SEO (TEO?) experts, pushing their preferred narratives for clout/profit while drowning entire timelines in a flood of noise.
It's cheap and easy to use social media to propagandize, so certainly there are scores of fake American accounts, but it's irritating that this article doesn't address VPN-usage during account creation.
Contrast that with legit pro-rightwing accounts: @tuckercarlson (17M), @benshapiro (8M), @RealCandaceO (7.5M), @jordanbpeterson (6M), @catturd2 (4M), @libsoftiktok (4.5M), @seanhannity (7M).
If I made an X account while vacationing in a foreign country, would that then be my country-of-origin for that account, even upon continuing to use X after returning home?
Or is it based on the IP address of last interaction?
Yay politics. Hooray for the engagement-driven internet.
It's absolutely nuts.
That kind of false engagement is also a problem (for advertisers, genuine fans, etc.), but it doesn't shape elections and thus doesn't come with policy consequences.
Are you people ever going to let this idea go? Almost all of this activity is coming out of India, Israel, and Nigeria. Russia isn’t mentioned once in the article.
https://www.theguardian.com/technology/2020/mar/13/facebook-...
This is the pattern with all Russian influence operations; they’re always implied to be ominously large and end up being laughably small.
American political polarization had nothing to do with the Russians; this is just the refrain of frustrated Democrats who refuse to acknowledge the consequences of ill-conceived policy. Israel has always had far more sway over American politics.
The problem in particular is not only the scale but that this propaganda is not solely directed at altering US policy towards Russia, it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country. If the US is fighting itself then it isn't fighting Russia after all.
Can you provide any citation for this and the approximate date when this was revealed? I’ve been hearing about this since 2015 and the last report I looked at was entirely unconvincing.
> it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country.
That is likely one of Russia’s goals; it is not likely that the Russians were the origin of these political cleavages. This was the problem with the entire Russian influence narrative; it was a post-hoc rationalization for why exceptionally bad ideas like diversity and multiculturalism were rejected by a subset of the population. In essence: “If they hadn’t been exposed to these Facebook posts, they never would have had these illiberal ideas put into their heads.”
It was also impossible to take seriously because most of the elected officials promoting it were receiving campaign contributions from AIPAC.
"For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content"
BuzzFeed News investigation "didn't find concrete evidence of a connection" and "Facebook said its investigations hadn't turned up a connection between the IRA and Macedonian troll farms either"
I've been in touch with tech people in Eastern Europe. Grey zone warfare is very real in their countries.
> A 2018 BuzzFeed News investigation found that at least one member of the Russian IRA, indicted for alleged interference in the 2016 US election, had also visited Macedonia around the emergence of its first troll farms, though it didn’t find concrete evidence of a connection. (Facebook said its investigations hadn’t turned up a connection between the IRA and Macedonian troll farms either.)
Further, the article supports the point I was making:
> For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content. But because misinformation, clickbait, and politically divisive content is more likely to receive high engagement (as Facebook’s own internal analyses acknowledge), troll farms gravitate to posting more of it over time, the report says.
This isn’t evidence of a concerted influence campaign. It’s not even clear what the article means when it refers to these outfits as troll farms. What I imagine when I hear the phrase is a professionalized state-backed outfit with a specific mandate to influence public opinion in a target country; this isn’t what is being described in the article.
There’s evidence that Russia engaged in these kinds of influence campaigns during the 2016 election, but I’ve never seen evidence that they were particularly effective at it.
Maybe it wasn't your intent, but your comment makes it sound like this was an issue with only a single side of the political spectrum. However...
https://www.businessinsider.com/russians-organized-pro-anti-...
> The Russians weaponized social media to organize political rallies, both in support of and against certain candidates, according to the indictment. Although the Russians organized some rallies in opposition to Trump's candidacy, most were supportive.
Not to mention the recent exposure of the funding source of the fine folks over at Tenet Media.
That's what the Russians do. It's too difficult to improve their own country, their own lives, and their own prospects, so they focus on the next-best strategy for the acquisition of power, which is dragging everybody else down to their level.
I know of a few defectors who ended up there; one was an American that went by the name of “Texas,” while another one was a Canadian who moved there to be a farmer in hopes of protecting his family from what he saw as degenerate values being propagated by the Canadian education system. Texas was supposedly murdered by Russian soldiers while operating with Kremlin-aligned militias in the Donbas region. The Canadian is still living in Russia and has a YouTube channel.
I suspected a regular rotation of Kremlin agents were on /pol/ during the Syrian Civil War. Russian sentiment was generally far more positive prior to the invasion. It’s possible this was all organic and just collapsed as people saw what they did to Ukraine; I really have no idea.
Frog Twitter for their part pivoted on Russia quite quickly in the early 2020s, around the time Thiel was buying out podcasts.
On the other hand, there are hundreds of thousands of diaspora Russians, and they're very pro-Russian. Richard Spencer's ex-wife is a good example of this. Overall this is a much bigger impact than the dozen converts or a few thousand half-hearted Harper's.
Obviously before the war Russia was less publicly objectionable. In Syria everyone just hated ISIS.
The /pol/ effect is nostalgia for worlds that no longer exist and were never personally experienced. It's politically flavored nostalgia instead of Pokémon collecting.
In terms of American Twitter, Russiagate and making Russia a red/blue partisan issue have been the most disastrous. It's simple contrarianism.
What political interest does a Nigerian have in swaying US opinion?
They’re grifters; their interest in American politics is commercial. Indians were targeting Trump supporters with fake news for ad revenue as early as 2015; this is a continuation of that model.
It’s possible the Russians have contracted influence campaigns out to Indian and Israeli firms, but the simpler explanation is just that India is continuing its long and storied history of using telecom networks to scam unwitting boomers, while Israel is continuing its long and storied history of being the worst greatest ally of all time.
See exhibit 8 and such: https://www.justice.gov/opa/media/1366201/dl
Or 10 which specifically talks about Twitter https://www.justice.gov/archives/opa/media/1366191/dl
I'd make the assumption that posters located in Russia, China, North Korea, etc. are likely in some way tied to the state, while posters in India, random African nations, etc. are more likely to be private actors, some of whom will be US-based operations outsourcing to low-cost labor.
https://youtu.be/rE3j_RHkqJc
Anger works wonders online.
Speculation: they're resolving historical IP addresses against a current IP geolocation database. An IP which belonged to a US company in 2010 may have since been sold to a Nigerian ISP, but that doesn't mean that the user behind that IP in 2010 was actually in Nigeria.
These are paid astroturfers probably more like call centers, paid for presumably by all sorts of interests from foreign intelligence services, to businesses (or select executives), to internal political groups or politicians trying to manipulate public opinion.
Both political extremes are suffering from this kind of manipulation, where real concerns are twisted and amplified for, let's say, the more gullible half of the population (gullibility knows no exclusive political alignment). The excluded middle is afraid of the people who have been manipulated this way (death threats also know no political boundaries).
With the development capability remaining at Twitter, anything is possible.
They use professional paid services from these low labour cost countries all the time for publicity or to control the narrative.
By some estimates 20-60% of everything you see on social media is generated by a bot farm, depending on the forum in question. An analysis of Reddit showed some subreddits are 80% AI generated.
The "control the narrative" stuff is mostly a PR campaign by social media intelligence companies trying to make their services seem more valuable than they are.
Almost all of these accounts are operating out of India or Israel. The Indians are usually in it for the money (though there’s probably some Israeli outsourcing going on there, too), whereas the Israelis were riding off 2010s Islamophobia to prime American Evangelicals for their activities in Gaza.
That is exactly what is happening and what is being reported on. The thing you attribute to "weird personal bias" is being widely exposed.
We should probably examine your weird personal bias. Weird, because you could just read the article!
The Department of Homeland Security, for one.
Edit: Link removed as I was disinformed by a /pol/ PsyOp.
https://xcancel.com/nikitabier/status/1992382852328255743
Now if we could have other platforms do the same, and not just accidentally like with the Reddit case lol