
Vort3x | Cybersalon | February 15, 2026
Vort3x, published on the 15th of each month, aims to pick out significant developments at the intersection of computers, freedom, privacy, and security for friends near and far. The views expressed in these stories do not necessarily reflect those of Cybersalon, either individually or collectively.
Prepared by Wendy M. Grossman.
Contents: Cybersalon events | News | Features | Diary
Cybersalon Events
17th Feb 7pm Newspeak House
Our friends at “College of Political Technologists” are hosting ClawClub AI Agents Hack Night
A bring-your-laptop evening for playing with AI agents, agent UX (AUX), OpenClaw, and Moltbot.
Tickets £10, including food and drinks.
UPDATES
“Cut The Cord” campaign by ORG
Our friends at Open Rights Group have just launched a new campaign to move away from US cloud services. The indie tech stack is coming; how do you set up yours? An update from the February 6 launch event is here.
Our roving reporter @ZoeACamper finds the good, the bad, and the ugly in the jungle of the Las Vegas Consumer Electronics Show, a must-see for tech tribes from all over the world.
NEWS
Dutch MPs Call for a shift towards independence from Microsoft
———————————————————————-
Dutch MPs are calling for Dutch companies and government services to find alternatives to American cloud services, citing security reasons, Dutch News reports. NOS, the public broadcaster, has found that 98% of health insurers, 92% of hospitals, 98% of local councils, and NOS itself are all connected to at least one US-based cloud server. In addition, Solvinity, the Dutch cloud company that manages the national DigID system citizens must use to interact with government agencies, health insurers, and pension providers, is being sold to US IT company Kyndryl, making the secure information it stores subject to the US PATRIOT Act. At The Register, Liam Proven reports that at the recent Open Source Policy Summit Finnish MEP Aura Salla and others warned that the EU’s dependence on Microsoft means the company effectively holds a kill switch that could turn off the EU’s systems in an hour. A group called https://pulltheplug.uk has announced it will lead a “March Against the Machines” through London on February 28 to call on the UK government to give people a say in how AI is used.
Comment: Privacy advocates have warned for nearly 25 years that the PATRIOT Act, passed soon after the 9/11 attacks, requires foreign subsidiaries of US companies to turn over data for inspection on demand. The increased power wielded by technology companies and policy shifts by the current US administration are fueling more widespread concern.
Iran Tests Two-Tier Internet That Blocks Most Citizens
———————————————————————
In late January, Iran began a test of a two-tier “Barracks Internet” that would restrict web access to security-vetted elites only, keeping most of the country’s 90 million citizens within an intranet, Indranil Ghosh reports at Rest of World, based on information from Filterwatch. The government, which has restricted global access since 2013, has said international access will not be restored until late March at the earliest. Iran’s deputy communications minister has estimated the daily economic cost of the shutdown at $4.3 million; NetBlocks estimates that the real cost is more than $37 million per day.
Developer Launches Social Media Platform Moltbook for AI Agents
———————————————————————-
The developer Matt Schlicht has released Moltbook, a “vibe-coded” Reddit-like platform for autonomous AI agents, Jared Perlo reports at NBC News. Humans are only allowed to observe; Schlicht has delegated managing the platform to his bot Clawd Clawderberg.
At MIT Technology Review, Will Douglas Heaven argues that the furor over Moltbook is wildly overblown and calls it “AI theater”.
Comment: We are still no closer to sentience or artificial general intelligence.
New York State Seeks to Require Gun-Making Kill Switch on 3D Printers
———————————————————————-
The State of New York is seeking to require the incorporation of “blocking technology” into all 3D printers, CNC mills, and any other machine sold in the state that can modify 3D objects from a digital file using subtractive manufacturing, Mark Frauenfelder reports at BoingBoing. The technology would use a firearms blueprint detection algorithm to analyze every print file and stop the machine from producing anything flagged as a potential gun part. Experience in Germany indicates that it is not possible to reliably identify firearm parts. Frauenfelder notes that open source firmware maintained by volunteers has no realistic path to compliance.
European Commission Proposal Threatens Network Neutrality
———————————————————————-
The European Commission’s proposed Digital Networks Act could undo network neutrality protections in Europe, Epicenter.Works reports. Presented as modernization, the proposal removes 18 of 19 legal recitals, provides for “fast lanes”, introduces network fees (sometimes known as “fair share”), and gives the Commission access to the regulator’s internal discussions and working groups.
Comment: Network neutrality is the bedrock of the open Internet.
FEATURES & ANALYSIS
Fearful of AI, Publishers Block Internet Archive’s Wayback Machine
————————————————————–
In this article at NiemanLab, Andrew Deck and Hanaa’ Tameez report that news publishers including the Guardian, the Financial Times, and the New York Times, have begun limiting the Internet Archive Wayback Machine’s access to their sites out of concern that digital archives will enable AI crawlers to scrape their content. Also blocking the Wayback Machine is Reddit, which licenses its content to Google as AI training data. In its study, NiemanLab found 241 news sites from nine countries that bar at least one of the Internet Archive’s four crawler bots.
Comment: This sounds like a case where an organization that provides a useful service is being hindered by the bad behavior of others.
Sainsbury’s staff using facial recognition tech eject innocent shopper
————————————————————–
In this article at the Guardian, Kevin Rawlinson reports that Londoner Warren Rajah was ordered to leave a Sainsbury’s supermarket in Elephant and Castle when staff misidentified him as someone else who had been flagged by the Facewatch facial recognition system the chain uses to bar known shoplifters. Rajah had to send Facewatch a photo of himself and another of his passport to establish that he was not in its database – that is, innocent.
Comment: I first encountered Facewatch in 2013 at the annual Biometrics Conference. The problems were glaringly obvious even then.
Self-Replicating Adversarial Prompts Bring New AI Security Threats
——————————————————————–
In this article at Ars Technica, Benj Edwards cites researchers predicting a new security threat in the form of self-replicating adversarial prompts – that is, networks of AI software agents carrying out instructions from prompts and spreading them to other AI agents. Edwards says the problem began with the release of the vibe-coded open source AI personal assistant application OpenClaw in November 2025, which allows a large group of semi-autonomous AI agents to communicate with each other through any major communication app or sites like the simulated social network Moltbook. At TechRadar, Sead Fadilpašić reports that Moltbook’s vibe-coding has left millions of users’ credentials and personal data unsecured.
At CyberInsider, Signal President Meredith Whittaker warns that AI agents embedded in operating systems are undermining the practical security guarantees offered by end-to-end encryption. In an article at Computer Weekly, Peter Sommer argues that the UK needs to rethink the 2016 Investigatory Powers Act because technology has changed so much since its passage.
Finally – and contrarily – at Axios Sam Sabin reports that Anthropic says its latest AI model, Claude Opus 4.6, has found more than 500 previously unknown high-severity vulnerabilities in open source libraries that have all been validated by either Anthropic’s team or by outside security researchers.
Comment: New technologies often bring mixed blessings; they bring new security threats or undermine the methods we’ve developed to counter old ones but may also bring new protections. Generative AI is no exception.
TikToker Secretly Films Inside London Homes to Get Clicks From Hate
———————————————————————-
In this article at London Centric, Katherine Denkinson and Jim Waterson investigate the case of a TikToker who secretly films anti-immigrant videos inside Londoners’ homes and uploads them. In a follow-up, they locate and interview the TikToker, who says he posts lies because he “just wanted the clicks” and realized that “hate brings views”. London mayor Sadiq Khan told London Centric that the story reveals a “dangerous and divisive trend”.
Plugin Helps AI to Write More Human
——————————————————————–
In this article at Ars Technica, Benj Edwards discusses an open source plugin that instructs the Claude large language model to avoid language and formatting patterns that indicate that a piece of text has been written by an AI. The plugin is based on a detailed list created and collated by the editors of Wikipedia’s WikiProject AI Cleanup, which searches out AI-generated articles and identifies common patterns. At Science magazine, Nicola Jones reports that, in order to eliminate AI slop, the arXiv pre-print server will require first-time posters to be endorsed by an established arXiv author in their field.
DIARY
State of the Browser 2026 London
—————————————-
February 28, 2026
London, UK, and online
Now in its fourteenth edition, State of the Browser is a one-day, single-track conference about the modern web, accessibility, web standards, and more, organised by London Web Standards.
India–AI Impact Summit 2026
—————————————-
February 16-20, 2026
New Delhi, India
The India–AI Impact Summit 2026 marks a defining global inflection point — transitioning from dialogue to demonstrable impact. Anchored in the principles of People, Planet, and Progress, it envisions a future where AI advances humanity, fosters inclusive growth, and safeguards our shared planet.
FOI Fest 2026
—————————————-
February 19, 2026
London, UK, and online
On 1 January 2005, the UK and Scottish Freedom of Information (FOI) Acts came into force, giving people a vital democratic right. Now that it’s 21 years old, mySociety hosts FOI Fest to reflect, connect, and look ahead. FOI Fest 2026 is a one-day conference bringing together journalists, civil society organisations, government, academics, practitioners, and anyone passionate about information rights. Whether you use, respond to, or shape FOI, this is your chance to explore its achievements to date, tackle challenges, and imagine its future.
[un]prompted
—————————————-
March 3-4, 2026
San Francisco, CA, US
[un]prompted is an intimate, raw, and fun gathering for the professionals actually doing the work, from offense to threat hunting to program building to national policy. No fluff. No filler. Just sharp talks, real demos, and conversations that matter.
ACM Symposium on Computer Science and Law
—————————————-
March 3-5, 2026
Berkeley, CA, USA
The fifth ACM symposium on computer science and law is the flagship conference for the emerging field of computer science and law. It brings together a community—scholars, practicing lawyers, and computing professionals—who are fluent both in computational thinking and its rigorous mathematical formalisms and in legal scholarship and thought with its equally rigorous yet human-centric set of principles, methodologies, and goals. Central to the study of “computer science and law” is the creation of a body of scholarship aimed towards the co-design of law and computing technology to promote social goals. We seek papers that combine rigorous technical computer-science reasoning with rigorous legal analysis to integrate the two disciplines.
Conference on World Affairs
—————————————-
April 13-16, 2026
Boulder, CO, USA
For over 75 years, the Conference on World Affairs (CWA) has brought together global leaders and experts from a wide range of fields to spark lively, thought-provoking conversations on the most pressing issues of our time. Free and open to all—whether in person at CU Boulder or via livestream—CWA is designed to inform, inspire, and engage diverse audiences.
We Robot 2026
—————————————-
April 23-25, 2026
Berlin, Germany
We Robot is an interdisciplinary, peer-reviewed conference that brings together leading scholars and practitioners to discuss legal, ethical and policy implications of robots and other emergent digital technologies. Since its inception in 2012, the conference has fostered dynamic conversations regarding robot theory, design, ethics and development. We Robot 2026 will create an international platform to discuss current and future AI and robotics policy, especially at a time when legal frameworks are evolving in different directions around the world. A major focus of the 2026 edition, the first to be held outside the US, will be a comparative analysis of different approaches to regulation, with the goal of fostering mutual learning and dialogue.
OggCamp
—————————————-
April 25-26, 2026
Manchester, UK
OggCamp is an unconference celebrating Free Culture, Free and Open Source Software, hardware hacking, digital rights, and all manner of collaborative cultural activities and is committed to creating a conference that is as inclusive as possible. If you’ve got a story to tell, no matter your background or current status, whether it’s your first talk or you’ve loads of experience, as long as the talk is connected (somehow) to our theme then we want to know about it.
RightsCon 2026
—————————————-
May 5-8, 2026
Lusaka, Zambia and online
The goal for RightsCon 2026 is to strike a balance between a clear, familiar structure and the flexibility to respond to a rapidly changing digital landscape. At a time when the digital rights sector is facing unprecedented pressure and uncertainty, from political volatility to disruptive emerging technologies, we want to ensure that the program is able to address urgent, time-sensitive issues, while maintaining a stable foundation for participants to prepare and engage meaningfully.