
Vort3x, published on the 15th of each month, aims to pick out significant developments at the intersection of computers, freedom, privacy, and security for friends near and far. The views expressed in these stories do not necessarily reflect those of Cybersalon, either individually or collectively.
Prepared by Wendy M. Grossman.
Contents: Cybersalon events | News | Features | Diary
Cybersalon Events
AI Training and Synthetic Users – London, Newspeak House
Date: 24th March 7pm
A new research technique made possible by LLMs
Potential breakthrough in costs of traditional user research but a challenge for principles of user-centred design.
Email [email protected] to join
EasterCon 2026 – Birmingham, UK
3-6 April
Annual get-together of top UK speculative fiction writers, with many international speakers, workshops, and cosplay sessions
OggCamp
April 25-26, 2026
Manchester, UK
OggCamp is an unconference celebrating Free Culture, Free and Open Source Software, hardware hacking, digital rights, and all manner of collaborative cultural activities
Put forward a proposal and apply here
NEWS
Generative AI Provides Unsound Medical Advice
———————————————————————
At Nature Medicine, a group of US researchers find that in a structured test of triage recommendations, ChatGPT Health, launched in January 2026, failed most dangerously at the extremes: non-dangerous conditions and emergencies. Overall, the system under-triaged 52% of cases. At the Guardian, Andrew Gregory reports that Google AI Overviews, which respond to the two billion health queries Google receives each month, cite YouTube more often than medical sites, according to a Berlin study of more than 50,000 health searches. In a Mastodon posting, “Nömenlōony”, an electrician, shows the life-threatening results of asking ChatGPT to draw the correct way to wire a British plug. In a test at Ars Technica of Google’s agentic AI version of its Chrome browser, Ryan Whitwam probes its limitations across six tasks. His conclusion: more time and improvement are needed before it’s safe to trust the browser to perform tasks on its own. At 404 Media, Emanuel Maiberg reports that Meta’s director of AI safety accidentally let an AI agent loose on her email inbox and had to scramble to prevent it from deleting the whole thing.
Comment: I’d wait a while longer before trusting AI with important decisions.
European Parliament blocks renewal of temporary Chat Control
———————————————————————
The European Parliament has voted down the European Commission’s plan to renew a provision for voluntary chat control, in favour of creating more permanent regulation, Stefan Krempl reports at Heise Online. The temporary measure has been in place since 2021. Critics such as the Chaos Computer Club, Digitale Gesellschaft, and European Digital Rights recently called in an open letter for the immediate end of chat control. In a blog posting, EDRi lays out its position on chat control in more detail.
Comment: “Chat control” is the EU’s Child Sexual Abuse Regulation, which would require online services to detect and report child sexual abuse material via ubiquitous surveillance, which would include scanning private messages and undermining end-to-end encryption.
UK’s Multibillion-Pound Embrace of AI – Phantoms and Ghosts
———————————————————————
Much of the UK’s announced multibillion-pound investment in AI consists of “phantom investments”, including rented data centres and a supercomputer site due for completion in 2026 that is still a scaffolding yard, Aisha Down reports at the Guardian. Board members at Nscale, one of the two companies leading the investments, include Nick Clegg, former head of global policy at Meta and former deputy prime minister, and Sheryl Sandberg, former chief operating officer at Meta. The Department for Science, Innovation and Technology “rejected these assertions”. Nscale, which is based in London, and the other leader, US-based CoreWeave, are both backed by Nvidia. CoreWeave announced in 2024 that it would build two new data centres in the UK and invest £1 billion, but planning records indicate the company has merely become a customer of two existing data centres dating to 2002 and 2015. At his blog, Ed Zitron has been questioning CoreWeave’s finances for the past year.
Comment: Both Rishi Sunak and Keir Starmer have stressed the importance of AI investment in bringing jobs and economic growth to the UK, but so far little of the announced spending can be verified.
US State Department Builds Portal to Bypass Censorship
———————————————————————
The US State Department is developing an online portal at “freedom.gov” to enable those elsewhere, including in Europe, to bypass national restrictions on accessing hate speech and terrorist material, Reuters reports. A source says user activity at the site will not be tracked. The project was due for an official launch in early February, but was delayed.
Comment: Will anyone trust this portal?
Meta Ray-Ban Smart Glasses Send Intimate Data to Kenya for Labeling
———————————————————————
Kenyan workers for subcontractor Sama say that Meta’s Ray-Ban smart glasses are sending intimate images and video to them for analysis and labeling to help train AI models, Svenska Dagbladet reports, based on a joint investigation with Göteborgs-Posten. The deeply private data includes bank cards, sexual activity, naked people, and even bathroom visits. In tests, the newspapers find it’s impossible to use the glasses’ AI without contacting Meta’s servers. At TechCrunch, Sarah Perez reports that plaintiffs in New Jersey and California have filed a lawsuit against Meta over the revelations.
FEATURES & ANALYSIS
X’s Algorithm Steers Users Toward Conservative Content
————————————————————–
In this paper at Nature, a group of researchers from institutions including Bocconi University, the University of St. Gallen, and the Paris School of Economics gives details of a 2023 field experiment testing X’s political influence. After randomly assigning active US-based users to either a chronological or an algorithmic feed for seven weeks, they found that the algorithm promotes conservative content and demotes posts from traditional media, and that even after the algorithm is switched off, users continue to follow the conservative political activist accounts they encountered in the feed, suggesting that initial exposure to X’s algorithm has persistent effects. The researchers believe this explains why earlier studies found that turning off the algorithm had no impact on political attitudes. They conclude that policy makers should embrace greater algorithmic transparency.
Comment: In retrospect, this is logical. People add accounts to follow when they encounter them in their feed, and the effects of that persist even if they change back to a chronological feed. Of course, we all like to believe we are not so easily influenced.
Large Language Models May End Online Anonymity
————————————————————–
In this paper, researchers at ETH Zurich and Anthropic find that large language models can help perform large-scale online deanonymization at high speed. It has always been possible for humans to link profiles and posts across platforms, but AI vastly speeds up this process. The researchers conclude that we need a new threat model for online privacy.
PromptWare: Countering AI-Enabled Malware Attacks
——————————————————————–
In this blog posting, security expert Bruce Schneier introduces a new paper outlining “promptware”, a new class of malware execution mechanisms. Schneier and his co-authors propose a seven-step process, which they dub a “promptware kill chain”, to guide policy makers and security practitioners through this new threat.
Do you feel exhausted? AI’s Speed Brings Cognitive Debt
———————————————————————
In this blog posting, computer science professor Margaret-Anne Storey argues that the extra speed AI is bringing to coding introduces “cognitive debt” by outstripping programmers’ ability to understand what the program does, how developers’ intentions are implemented, and how the program can be changed over time. Computer programs are not just code; they are also a theory in the minds of potentially thousands of developers. Developers, she concludes, need to slow down in order to build shared understanding. At TechCrunch, Connie Loizos finds that the programmers who use AI the most are the ones who are burning out.
Bankrupt Companies Can Leave Connected Cars Stranded
——————————————————————–
In this article at Ars Technica, Matthew MacConnell writes about what happens when the manufacturer of a connected car, or of essential technologies for one, goes bankrupt, as Better Place did in 2013 and Fisker did in 2024. MacConnell warns that bankruptcies can leave these cars bricked, and discusses the industry shift that makes long-term access to proprietary software and manufacturer support matter as much as mechanical durability. Although well-established firms are a better bet, even they may decide not to continue support for the length of time most people expect to own their cars. At TechCrunch, Amanda Silberling reports that Waymo is offering to pay DoorDash drivers to close the doors on Waymos whose passengers have left them open, stranding the cars.
DIARY
—————————————-
March 26-27, 2026
Brussels, Belgium
EuroDIG is a platform for discussion and the exchange of ideas on emerging issues and challenges concerning the Internet. All stakeholders are invited to shape the agenda jointly and take part in the discussion. The inclusive and continuous dialogue, which culminates in an annual event, has taken place in a different European country every year since its inception in 2008. The results are conveyed in the form of ‘Messages’ which are forwarded to policy makers and fed into the annual global UN Internet Governance Forum (IGF).
—————————————-
March 3-5, 2026
Berkeley, CA, USA
The fifth ACM symposium on computer science and law is the flagship conference for the emerging field of computer science and law. It brings together a community—scholars, practicing lawyers, and computing professionals—who are fluent both in computational thinking and its rigorous mathematical formalisms and in legal scholarship and thought with its equally rigorous yet human-centric set of principles, methodologies, and goals. Central to the study of “computer science and law” is the creation of a body of scholarship aimed towards the co-design of law and computing technology to promote social goals. We seek papers that combine rigorous technical computer-science reasoning with rigorous legal analysis to integrate the two disciplines.
—————————————-
April 13-16, 2026
Boulder, CO, USA
For over 75 years, the Conference on World Affairs (CWA) has brought together global leaders and experts from a wide range of fields to spark lively, thought-provoking conversations on the most pressing issues of our time. Free and open to all—whether in person at CU Boulder or via livestream—CWA is designed to inform, inspire, and engage diverse audiences.
—————————————-
April 23-25, 2026
Berlin, Germany
We Robot is an interdisciplinary, peer-reviewed conference that brings together leading scholars and practitioners to discuss legal, ethical and policy implications of robots and other emergent digital technologies. Since its inception in 2012, the conference has fostered dynamic conversations regarding robot theory, design, ethics and development. We Robot 2026 will create an international platform to discuss current and future AI and robotics policy, especially at a time when legal frameworks are evolving in different directions around the world. A major focus of the 2026 edition, the first to be held outside the US, will be a comparative analysis of different approaches to regulation, with the goal of fostering mutual learning and dialogue.
—————————————-
May 5-8, 2026
Lusaka, Zambia and online
The goal for RightsCon 2026 is to strike a balance between a clear, familiar structure and the flexibility to respond to a rapidly changing digital landscape. At a time when the digital rights sector is facing unprecedented pressure and uncertainty, from political volatility to disruptive emerging technologies, we want to ensure that the program is able to address urgent, time-sensitive issues, while maintaining a stable foundation for participants to prepare and engage meaningfully.
—————————————-
Las Vegas, Nevada, USA
DEF CON is one of the oldest continuously running hacker conventions around, and also one of the largest.