Vort3x by Wendy

Police, Cameras, Concern | Vort3x | December 15, 2025

Vort3x, published on the 15th of each month, aims to pick out significant developments in the intersection of computers, freedom, privacy, and security for friends near and far. The views expressed in these stories do not necessarily reflect those of Cybersalon, either individually or collectively.

Prepared by Wendy M. Grossman.

Contents: Cybersalon events | News | Features | Diary

Cybersalon Events

Our hacker friends in Germany are hosting CCC – Chaos Communication Congress 2025

December 27-30, 2025

Leipzig, Germany (fly from London to Berlin, then take a roughly one-hour Deutsche Bahn train to Leipzig; tickets from about £40)

Europe’s largest hacker conference, the Congress, now in its 39th edition, is renowned across Europe and draws more than 17,000 participants annually, including an ever-growing group of international guests.

NEWS

Australia Begins Social Media Ban for Under-16s – non-event or end of the world for teens?

———————————————————————

Australia’s social media ban for those under 16 officially began on December 10, Ange Lavoipierre reports at ABC News. The “ban” is not total; teens may still browse social media but may not hold accounts.

On December 4, Meta began removing Instagram, Facebook, and Threads accounts in Australia; also subject to the law are TikTok, Snapchat, Reddit, X, YouTube, Twitch, and Kick. At the Guardian, Josh Taylor explains the origins of the law, which was inspired by Jonathan Haidt’s book The Anxious Generation. Shredder reports that as a consequence teen Olympic skateboarders are seeing their accounts terminated, potentially costing them their funding and the audiences they’ve built up over a long period.

Also at ABC, Clare Armstrong reports that two 15-year-olds, backed by the Digital Freedom Project, have filed a lawsuit challenging the law, arguing that those most hurt by the ban will be vulnerable kids, LGBTQ+ youth, and kids in rural and remote areas. The suit also questions whether the law is proportionate to its aims, given its likely impact on political communication. DFP believes other means should be used to improve online safety.

Finally, at the Australian Financial Review, Sam Buckingham-Jones reports that Reddit is preparing a court challenge to the law.

Comment: The ban will undoubtedly hurt a range of teens for different reasons. What’s not clear is what the specific goals are and how success or failure will be judged.

Microsoft Warns Users of Agentic AI about Security Risks in Windows 11

———————————————————————-

Microsoft is warning users to enable the agentic AI capabilities coming soon to Windows 11 only if they understand that the agents will have access to their apps and files and introduce novel risks such as cross-prompt injection, malware installation, and data exfiltration, Zac Bowden reports at Windows Central. Users will need administrator privileges to enable the agentic AI, which will be turned off by default.

Comment: Increasing complexity always brings additional vulnerability. Still, it’s good to see Microsoft acknowledging that many users may not instinctively understand the dangers implicit in allowing their computers to make decisions on their behalf.

UK Home Office Fails to Disclose Facial Recognition Algorithm Bias

———————————————————————-

The UK’s data protection watchdog, the Information Commissioner’s Office, has complained that the Home Office failed to disclose significant historical biases in the retrospective facial recognition technology used in the Police National Database, despite frequent engagement between the two organizations, Connor Jones reports at The Register. On December 4, the National Physical Laboratory conducted updated accuracy tests, which examined the currently used Cognitec FaceVACS-DBScan ID v5.5 and the Idemia MBSS FR, intended for future use. The Home Office claims the new algorithm shows no statistically significant bias, and adds that retrospective facial recognition results are never used as evidence without a manual review. Also at The Register, Carly Page reports that the government plans to expand the use of facial recognition to many more situations and across many more police forces.

A consultation document was published in the first week of December and will remain open for comment for ten weeks.

Comment: This is a double blow to citizens’ trust in their government: the ICO has long been criticized for doing too little enforcement, and the Home Office would do better to embrace transparency and rethink the policy.

European Commission Digital Omnibus Undermines Rights

———————————————————————-

On November 19 the European Commission published the Digital Omnibus, a package of tweaks (“simplification”) to the General Data Protection Regulation, the ePrivacy Directive, and the AI Act that could undermine core protections for citizens’ rights and freedoms, European Digital Rights reports at its blog.

EDRi is one of 133 organizations that have signed an open letter to the European Commission opposing the changes; these include trade unions and education groups, as well as scientists, environmentalists, and the Child Rights International Network.

Noyb provides a detailed legal analysis of the package’s provisions, which it has called “the biggest attack on Europeans’ digital rights in years”, and says it offers no benefits to SMEs but many for the largest technology companies.

Founder Rochko Steps Down as Mastodon Converts to Non-Profit

———————————————————————-

Citing burnout and growth beyond what a single person can manage, Eugen Rochko, creator of the ten-year-old open source decentralized social network Mastodon, is stepping down as CEO as the organization transitions to a non-profit structure that will allow it to expand without depending on any single leader, Sarah Perez reports at TechCrunch. Rochko says he will continue to contribute as an advisor, and has received a one-time payment of €1 million as compensation for the many years he worked for the network at below-market rates.

New executive director Felix Hlatky says that as a non-profit based in Belgium, Mastodon, which has 14 employees, will be able to unlock new sources of funding. At Tedium, Ben Werdmuller examines the state of Mastodon, Bluesky, and other open, interoperable social networks.

Comment: Distributed, open networks sorely need a business model to ensure their future turns out differently from email (distributed, but highly consolidated), Usenet (overrun with spam and abusive behavior), and IRC (still active, but widely ignored). The only alternative we have to date is ownership by billionaires, who seem little interested in their wider impact on society beyond promulgating their own values.

FEATURES & ANALYSIS

Embedding Palantir in NHS Care Pathways Poses Risks

————————————————————–

In this article, medConfidential explores the consequences of embedding Palantir in the UK’s National Health Service, as health and social care secretary Wes Streeting wants to do. Among the issues medConfidential raises: treatment pathways, which have traditionally required many meetings and significant clinical input, could come to rest on convincing a single Palantir employee to make a change. In addition, medConfidential writes that because the Department of Health and Social Care has spent its technology budget on Palantir, it is decommissioning its other data systems.

Local NHS decision makers can resist by choosing alternative suppliers or building in-house.

San Francisco Transport Head Offers Experience on Impact of Robotaxis

————————————————————–

In this article at Bloomberg, David Zipper interviews Jeffrey Tumlin, the outgoing head of the San Francisco Municipal Transportation Agency, about the impact of driverless taxis on other travel modes within the city. Tumlin says he’s been surprised to see Google’s Waymos become better than he is at spotting hidden pedestrians and predicting their behavior, but that they provide no benefit he can see for the city’s transportation system, as they mainly assist the privileged while creating wider problems for the overall system.

Waymos’ benefits, such as slow, steady driving, could be replicated by other modes, and their quality could be matched given the necessary funding.

Waymos, like Uber and Lyft before them, add congestion and take mode share from walking, biking, and public transport; this is less of an issue in cities designed around cars. At the Financial Times, Tim Bradshaw and Kana Inagaki report that Waymo will launch driverless taxis in London in 2026, its first European city, taking advantage of the 2024 Automated Vehicles Act. Waymo’s arrival will be subject to approval from the UK Department for Transport and Transport for London.

Comment: London is a much older and more complex city than San Francisco or the other more car-friendly cities where driverless taxis have been launched. If Waymo gets its approvals, this will be a much bigger test of the technology than any we’ve seen to date, and likely poses greater risk to cyclists.

Large Language Models Fuel the “AI Bubble”

——————————————————————–

In this article at TechCrunch, Sarah Perez reports that Clem Delangue, the founder and CEO of Hugging Face, believes we are in a large language model bubble rather than an AI bubble, and that it may burst in 2026.

LLMs, he notes, are just one subset of AI, which also encompasses advances in biology and chemistry and applications to images, audio, and video. He believes cheaper, smaller, faster, and more specialized models are the future. In 2021, Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major made a similar argument about smaller models in their “Stochastic Parrots” paper.

At The Verge, citing a commentary in Nature, Benjamin Riley argues that today’s LLMs are not precursors to superintelligence: given that the neuroscience we have so far suggests human thinking is largely independent of human language, scaling up LLMs cannot get us to AGI.

Comment: The drumbeat about the “AI bubble” keeps getting louder. Delangue is right in the way many commentators were about the Internet in the 1990s: there undeniably was a bubble that would burst, yet most also understood that ten years later the Internet would be far bigger than it was in 1999. AI will be much bigger and more pervasive ten years from now – but it likely will not be used in anything like the way we expect, and LLMs in particular show no sign at present of ever becoming profitable enough to pay back the massive investments being made. Two things can both be true.

Common Crawl Offers AI Companies Paywall-Protected Content

———————————————————————-

For more than a decade, the obscure non-profit Common Crawl has been scraping billions of web pages to build a massive internet archive, which it makes freely available for research, including to AI companies training large language models and machine translation systems; the archive includes articles from major publishers that sit behind paywalls, Alex Reisner reports at The Atlantic.

Common Crawl’s director has said publicly that he believes AI models should be able to access anything on the Internet for free and that any other policy would kill the open web.

Reisner’s research shows that although the organization says it deletes material on request, it doesn’t actually do so, and that although its crawlers are now blocked by the top 1,000 websites, it retains older content already scraped. At Technollama, Andres Guadamuz considers the recent decisions in GEMA v. OpenAI and Kneschke v. LAION, in which German courts confirmed that the text and data mining exception in the EU’s Digital Single Market directive applies to AI training, and argues that it’s no longer tenable to claim that the exception was never meant for this purpose. Finally, at 404 Media, Emanuel Maiberg reports that new research shows the tools researchers use to ensure that survey respondents are human and not AI-generated no longer work, and that the solutions found to date all have tradeoffs.

Comment: The issues around training AI models and copyright continue without (yet) any clear sign of how they may be resolved.

Global Internet Freedom Declines for 15th Straight Year

——————————————————————–

In this year’s Freedom on the Net report, Freedom House finds that global Internet freedom declined for the 15th consecutive year. Conditions deteriorated in 28 of the 72 countries covered, only 17 registered overall gains, and half of the 18 designated “free” have seen declines. Authoritarian leaders are using control over online information to entrench themselves, manipulation in online spaces is increasing, and the future will depend on how governments roll out and regulate new technologies.

DIARY

State of the Net

—————————————-

February 23, 2026

Washington, DC, USA and online

The State of the Net Conference Series is hosted by the Internet Education Foundation, a 501(c)(3) non-profit organization dedicated to educating the public and policymakers about the potential of a decentralized global Internet to promote communications, commerce and democracy. IEF works closely with leaders on Capitol Hill and in the private sector to host the most important debates in Internet policy. IEF’s board of directors comprises public interest groups, corporations, and associations representative of the diversity of the Internet community.

State of the Browser 2026

—————————————-

February 28, 2026

London, UK, and online

Now in its fourteenth edition, State of the Browser is a yearly one-day, single-track conference with widely-varying talks about the modern web, accessibility, web standards, and more, organised by London Web Standards.

CS&Law 2026

—————————————-

March 3-5, 2026

Berkeley, CA, USA

The fifth ACM symposium on computer science and law is the flagship conference for the emerging field of computer science and law. It brings together a community—scholars, practicing lawyers, and computing professionals—who are fluent both in computational thinking and its rigorous mathematical formalisms and in legal scholarship and thought with its equally rigorous yet human-centric set of principles, methodologies, and goals. Central to the study of “computer science and law” is the creation of a body of scholarship aimed towards the co-design of law and computing technology to promote social goals. We seek papers that combine rigorous technical computer-science reasoning with rigorous legal analysis to integrate the two disciplines.

Conference on World Affairs

—————————————-

April 13-16, 2026

Boulder, CO, USA

For over 75 years, the Conference on World Affairs (CWA) has brought together global leaders and experts from a wide range of fields to spark lively, thought-provoking conversations on the most pressing issues of our time. Free and open to all—whether in person at CU Boulder or via livestream—CWA is designed to inform, inspire, and engage diverse audiences.

We Robot

—————————————-

April 23-25, 2026

Berlin, Germany

We Robot is an interdisciplinary, peer-reviewed conference that brings together leading scholars and practitioners to discuss legal, ethical and policy implications of robots and other emergent digital technologies. Since its inception in 2012, the conference has fostered dynamic conversations regarding robot theory, design, ethics and development. We Robot 2026 will create an international platform to discuss current and future AI and robotics policy, especially at a time when legal frameworks are evolving in different directions around the world. A major focus of the 2026 edition will be a comparative analysis of different approaches to regulation, with the goal of fostering mutual learning and dialogue.

OggCamp

—————————————-

April 25-26, 2026

Manchester, UK

OggCamp is an unconference celebrating Free Culture, Free and Open Source Software, hardware hacking, digital rights, and all manner of collaborative cultural activities and is committed to creating a conference that is as inclusive as possible. If you’ve got a story to tell, no matter your background or current status, whether it’s your first talk or you’ve loads of experience, as long as the talk is connected (somehow) to our theme then we want to know about it.

RightsCon 2026

—————————————-

May 5-8, 2026

Lusaka, Zambia and online

The goal for RightsCon 2026 is to strike a balance between a clear, familiar structure and the flexibility to respond to a rapidly changing digital landscape. At a time when the digital rights sector is facing unprecedented pressure and uncertainty, from political volatility to disruptive emerging technologies, we want to ensure that the program is able to address urgent, time-sensitive issues, while maintaining a stable foundation for participants to prepare and engage meaningfully.
