[ SECRET POST #7048 ]

Apr. 23rd, 2026 17:49
case: (Default)
[personal profile] case posting in [community profile] fandomsecrets

⌈ Secret Post #7048 ⌋

Warning: Some secrets are NOT worksafe and may contain SPOILERS.


01.





Notes:

Secrets Left to Post: 01 pages, 08 secrets from Secret Submission Post #1006.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.
[syndicated profile] techdirt_feed

Posted by Leigh Beadon

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this special episode, Mike and Ben reflect on 100 episodes of the podcast, followed by an important announcement: we’re launching a Patreon and making some changes to Ctrl-Alt-Speech!

Starting on May 28th, Patreon members will get early access to extended weekly episodes with in-depth coverage of an extra major story. The free episodes will continue here on this feed, just slightly shorter and released one day later. 

You can become a member now at one of two levels: Supporters get early access to the extended episodes, and for a limited time, Founders get that plus the opportunity to send us news stories that you think we should cover each week. After the new episodes begin at the end of May, the Founder tier will become the Insider tier, with all the same benefits at a slightly higher price, so act now if you don’t want to miss out (you’ll also get bragging rights as a founding member!).

We’re immensely grateful to the incredible audience we’ve found over these past 100 episodes, and this is our way of helping make the podcast sustainable for the next 100!

[syndicated profile] eff_feed

Posted by Sophia Cope, Aaron Mackey

EFF filed an amicus brief for the second time in the U.S. Court of Appeals for the Ninth Circuit, arguing that allowing cases against the Apple, Google, and Facebook app stores to proceed could lead to greater censorship of users’ online speech.

Our brief argues that the app stores should not lose Section 230 immunity for hosting “social casino” apps just because they process payments for virtual chips within those apps. Otherwise, all platforms that facilitate financial transactions for online content—beyond app stores and the apps and games they distribute—would be forced to censor user content to mitigate their legal exposure.

Social casino apps are online games where users can buy virtual chips with real money but can’t ever cash out their winnings. The three cases against Apple, Google, and Facebook were brought by plaintiffs who spent large sums of money on virtual chips and even became addicted to these games. The plaintiffs argue that social casino apps violate various state gambling laws.

At issue on appeal is the part of Section 230 that provides immunity to online platforms when they are sued for harmful content created by others—in this case, the social casino apps that plaintiffs downloaded from the various app stores and the virtual chips they bought within the apps.

Section 230 is the foundational law that has, since 1996, created legal breathing room for internet intermediaries (and their users) to publish third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak, access information, and engage in commerce online, without requiring that we have loads of money or technical skills.

The lower court hearing the case ruled that the companies do not have Section 230 immunity because they allow the social casino apps to use the platforms’ payment processing services for the in-app purchasing of virtual chips.

However, in our brief we urged the Ninth Circuit to reverse the district court and hold that Section 230 does apply to the app stores, even when they process payments for virtual chips within the social casino apps. The app stores would undeniably have Section 230 immunity if sued for simply hosting the allegedly illegal social casino apps in their respective stores. Congress made no distinction—and the court shouldn’t recognize one—between hosting third-party content and processing payments for the same third-party content. Both are editorial choices of the platforms that are protected by Section 230.

We also argued that a rule that exposes internet intermediaries to potential liability for facilitating a financial transaction related to unlawful user content would have huge implications beyond the app stores. All platforms that facilitate financial transactions for third-party content would be forced to censor any user speech that may in any way risk legal exposure for the platform. This would harm the open internet—the unique ability of anyone with an internet connection to communicate with others around the world cheaply, easily, and quickly.

The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases of virtual chips. But the plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by memberships. Platforms like these would lose Section 230 immunity and be exposed to potential liability simply because they processed payments for user content that a plaintiff argues is illegal. That outcome would threaten the entire business models of these services, ultimately harming users’ ability to share and access online speech.

The app stores should be protected by Section 230—a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on—irrespective of their role as payment processors.

Various updates

Apr. 23rd, 2026 17:45
primeideal: Text: "Right, the colors. Whoa! Go away! We're trying to figure out the space-time continuum here." on Ravenclaw banner (ravenclaw)
[personal profile] primeideal
I was feeling pretty optimistic about the sort-of-blank-verse poem I wrote a couple months ago, both in terms of how I felt about it personally and "no news is good news" when other people are getting rejections via Submission Grinder ;) but that didn't pan out. So now I get to try sending it (and some older stuff) to a new journal. (This is a spinoff of another magazine that I generally like and support but have been burned by in that they never responded, not even to the "hey did you get this," the first time I submitted to them. To their credit, the new mag has a policy of "if you don't hear anything after four weeks, assume rejection.")

Fun fact: in undergrad I semi-often wound up writing blank-verse-ish stuff as the result of a tug of war between my professors, who liked pretentious completely free verse, and me, who preferred more formal constraints like sonnets and stuff. ;) This time at least it's more deliberate.

I am out of the country seeing the world for the next few weeks! Not sure what my computer access will look like, I may have some downtime, but no promises--comments on exchange fic, etc. may be delayed. I have stocked up on plenty of reading material so hopefully there will be a couple bingo reviews coming later or sooner.

In the process of stocking up, there was a free giveaway of hardcopy books on a library shelf, and the original "Mistborn" was up for grabs, score! I don't think I need it on the plane, but good for canon review, or to give to someone else to get them into Sanderson :P
cupcake_goth: (Leeches)
[personal profile] cupcake_goth
(The container from the refrigerated section.)

I had some for lunch Monday, felt unwell and had a lot of problems sleeping, then woke up early suffering nausea, chills and a fever, excruciating muscle pain, a bad headache, and overwhelming fatigue. 

I did the right thing and tapped out on Tuesday and Wednesday with the hopes of being back to work today. The overwhelming vertigo and inability to think clearly killed that idea.

I am, of course, worrying about 1) the massive chaos I’ll return to, and 2) that’s three sick days that aren’t part of my intermittent leave, how does that look to management, something something job security?

ugh.

[syndicated profile] techdirt_feed

Posted by Karl Bode

Back when Netflix was proposing a takeover of Warner Brothers, you might recall that director James Cameron had no shortage of critical things to say.  

Cameron went so far as to write a heavily publicized letter to Senator Mike Lee, lamenting the Netflix Warner Brothers merger (and only the Netflix merger) as “disastrous to the motion picture business.” In the letter, Cameron called himself a “humble movie farmer” and repeatedly insisted Netflix would shorten the 45-day theater-to-streaming window (Netflix repeatedly stated the opposite).

Here’s the weird thing: Cameron had absolutely no criticism to offer of the alternative (and now reality) $108 billion Ellison family merger of Paramount and Warner Brothers, despite the deal being exponentially worse across every possible metric.

As we’ve noted previously, the massive debt load and the numerous structural parallels between the studios mean the Paramount/Warner Brothers merger will result in significantly more layoffs than the Netflix deal would have seen. And that’s before you get to the dodgy Saudi and Chinese money backing the bid, or the Ellison family’s gushing enthusiasm for corrupt authoritarianism.

Cameron at first was dead quiet as the Netflix deal faded and the Paramount merger came into view. But now he’s increasingly becoming gushingly supportive of the transaction (quite the contrast to the 3,700+ Hollywood insiders coming out against the deal):

“I know David quite well. And I know that he really cares about movies. And he’s a natural born storyteller and thinks like almost an old school entrepreneurial producer that was a storyteller that loves storytelling and loved putting on spectacular shows,” Cameron said. “He’s the right man for the job to run a major studio, and now it looks like he’s going to have two of them, you know, swept under his leadership, which doesn’t bother me at all.”

So basically Cameron likes the deal, and is willing to overlook the massive layoffs looming just over the horizon due to unprecedented consolidation, because he personally likes the Ellison family. And the Ellison family promised him that they won’t touch the 45-day delay between theatrical runs and home release.

The problem (for James and everyone else) is that pre-merger promises are utterly meaningless. Every single time Warner Brothers has merged (now four times over 20 years), it’s been an abject disaster, preceded by all sorts of empty promises about amazing new synergies. The AT&T merger alone resulted in 50,000 layoffs, and there are indications that AT&T executives could be viewed as immeasurably more competent compared to what we’re seeing out of Ellison-owned Paramount and CBS News.

It’s “funny” because in Cameron’s letter to Lee, he offers this observation about Netflix:

“What administrative body will hold them to task if they slowly sunset their so-called commitment to theatrical releases?”

But the exact same applies to the Ellison family promises. It’s potentially worse given the Ellisons’ close ties to the administration, which will not only mean rubber-stamped federal merger approval, but less accountability later down the line (in a country where Trump has already guaranteed that corporate regulators lack the ability to do this or any other job).

It seems likely that the Ellisons promised other things to Cameron. Time will tell.

But Paramount’s debt from the CBS, MMA, and now Warner Brothers deals is so historically massive, it’s simply inevitable that this results in all manner of layoffs and corner cutting to service it. Denying that this is coming is like trying to debate physics, or have a fist fight with a river. This sort of consolidation is uniformly harmful to labor, consumers, and creatives. We literally just went through all of this.

David Ellison is telling anybody who’ll listen that this merger will be different and will magically result in a bigger, bolder Hollywood, but there’s simply no historical evidence to believe a single word he’s saying. Every Warner Brothers merger to date has been pointless and awful, but this one has the potential to be historically so.

re-doing this...

Apr. 23rd, 2026 16:33
reggiekray: (Default)
[personal profile] reggiekray posting in [community profile] addme_fandom
Name: reggie/reg

Age: 36

I mostly post about: stranger things, billy hargrove, dacre montgomery, joe keery, joseph quinn, fred hechinger, the kray twins.

My hobbies are: drawing, writing, movies, spellwork/tarot/witchcraft.

My fandoms are: stranger things, gladiator ii, the eagle, fantastic four, x-men, anime/manga, rpf.

Before adding me, you should know: i am very gay and very trans, and will not tolerate any form of homophobia and transphobia. i'm also very witchy/pagan, and work with spiritual energy. if that bothers you, i understand! feel free to follow and/or unfollow at your leisure.

Speaking Freely: Lizzie O'Shea

Apr. 23rd, 2026 19:56
[syndicated profile] eff_feed

Posted by Jillian C. York

Lizzie O’Shea is an Australian lawyer, author, and the founder and chair of Digital Rights Watch, which advocates for freedom, fairness, and fundamental rights in the digital age. She sits on the board of Blueprint for Free Speech, and in 2019 was named a Human Rights Hero by Access Now.

Interviewer: Jillian York

Jillian York: Hi, good morning, or rather, good evening for you.

Lizzie O’Shea: Hi Jillian, it's great to be here. 

JY: I'm going to start with asking a question that I try to kick off every interview with, which is, what does free speech or free expression mean to you?

LO: Yes, so Digital Rights Watch, which is the organization I founded and I chair, is focused on fundamental rights and freedoms in the online world. And so freedom of speech is obviously a big part of that. It's obviously a very vexed right, partly because of its heritage and interpretation in places like the United States, which sometimes sits in contrast culturally to other parts of the world. Certainly, if you ask Australians about it, they do not want to have a culture of free speech that looks like the United States. 

Australians understand that freedom of expression is a really important component of democracy. So one of my jobs is to make the claim that curtailing freedom of speech, including in online settings, can have a real impact on democracy. And I think that's fundamentally true, and you don't want to wait until it's too late to be able to make that argument, to ensure that the policies are in place to protect that freedom. So I think it's a really important freedom. It's got a vexed history and expression in the modern online world, but many people still instinctively understand that those in power see speech as something that is important to challenging their authority, and so it can be a really important place to fight back and protect democracy and other rights from being impacted by those who hold power at the moment.

JY: I want to ask you about your book. You're a critic of techno-utopianism. Your book, Future Histories, came out right before the pandemic, if I recall, and it looks to the past for lessons for our technological and cultural future. I really appreciated your take on Elon Musk. So I guess what I want to ask you about is two things. What, in your view, has changed since you wrote it?

LO: Yeah, that's a really interesting question. I must admit, I was thinking about it the other day whether some of what I wrote really holds up. And I think the fundamentals are still true, in the sense that I still believe that a lot of the discussions and debates we have about technology today are presented as fundamentally novel when they are very old, ancient discussions and debates about how power should be distributed through society, and how technology enables that kind of power distribution or works against it, right? So I feel like that fundamental analysis, whatever contribution to the field, is still valid, of course. In some ways though, those technical systems have become more opaque, like the artificial intelligence industry and how that's been built off the back of years of exploitation of personal information and centralization of power in technology companies. Those things have become more powerful and concentrated and difficult to understand—if you're not deep in the weeds—beyond an instinctive understanding that something's going a bit wrong, perhaps. 

So in some ways those trends have exacerbated things in ways that I think many other contributors, yourself included, have brought a really important set of analyses to these discussions. More generally, though, one of my fundamental understandings of how I frame some of these arguments is that there are two sources of power, right? Government power and corporate power that really shape how the online world is developing. And post-pandemic, there's a lot greater skepticism, criticism, and outright distrust of government authorities seeking to do work to protect people from some of those corporate excesses. Now that's obviously something that is much more part of American culture as opposed to European culture, and in Australia, we sit somewhere in between. But that skepticism and that mistrust of institutions, I don't know that that serves us well. I'm somebody who does treat with criticism policies put forward by government, because I think it's our job as civil society people, as people part of a social movement that want to have rights at the center of our society, to be critical of those in power and make sure that they're being held accountable. But that mistrust has fundamentally shifted how possible it is to do that in an effective way. And I think that poses real challenges for people who want to see government policy look different to how it is and how you can bring people into a sense of trust, investing in a democratic rights-based society, rather than rejection and cynicism being the overriding kind of factor in how they shape their political arguments. Which is a real challenge, I think, for people like us who rely on some of that mistrust and skepticism in order to fuel the fire of some of these campaigns, but do want to see people still invested in democratic processes.

JY: Yeah, absolutely. So speaking of policies, you're in Australia, where the government's enacted some of the strictest social media laws for minors in the world, I would say. In one of our most recent interviews, which was with Jacob Mchangama, we talked about how the comparison of social media to Big Tobacco is spreading, and this idea that there's no utility in social media for minors, that it's a net harm. I'm curious what your thoughts are on that, and then we can dive into the more nitty gritty bits of the Australian law.

LO: I think that's a great place to start, because the overwhelming sense in how this policy was presented to the public in Australia is that this is a very dangerous place for young people to be, and that desperate times call for desperate measures. “We don't have time to fix these spaces. We need to just restrict access.” It's described as a delay. Many, including me, describe it as a ban for under 16 year olds. So what has been very interesting in this discussion is who's been left out of the conversation. And if you talk to young people—and there are many organizations working with young people—and you talk to them about what they use social media for, they often say that they wish adults understood that they used it for different reasons, or they're scared about different things than what adults think they might be scared of. And so that kind of fundamental failure of communication, which I suppose is not a surprise, when these people don't actually have the power to vote, have the power to do things a normal legal person would do, is somewhat unsurprising.

But when you're making policy about these people, that can be quite impactful, it can have very detrimental impacts. And if you take a human rights approach, that is your job to think about the negative impact on human rights, and what you're going to do about it, it's not really good enough. And this has been an experiment that Australia has led on, very much, looking for headlines, for a perception of boldness. Some of that claim is legitimate in the sense that they want to be seen to be taking action, and a lot of people feel very concerned that governments aren't prepared to take action against big tech companies. So, some of that is a valid feeling. But I think in this context, we lose so much when we don't actually listen to the people affected, and listen to the myriad ways in which they use social media. Some things they're concerned about, some things they find harmful, some things they're really sick of. But there's so many ways in which they use it to find a sense of community, to find a sense of empowerment, to talk to people they would never otherwise be able to access, sometimes because they're isolated, socially, geographically, whatever it may be, and it's so disappointing to me that that kind of part of the conversation was not had as we debated this particular policy.

JY:  So, what do you think some of the harms are for youth who can't access social media? What are young people losing out on? Who is harmed by these laws?

LO: It's a great question. When we do a human rights analysis, we have to think about who's harmed by a particular policy, even if we think it's overall justified on a utilitarian ground (say it's better off for everyone overall). Who's harmed is a really important question, and so much of that has been absent from this discussion. So it's not just me. It's like hundreds and hundreds of experts in Australia and organizations that represent many, many people, have provided commentary and input into this process and expressed many concerns about this policy, and there's a few different ways in which people are harmed. 

So the first thing, of course, is that if you require that age verification occur, you're engaging in a privacy violation for many people, there are cyber security risks with collecting that kind of information. There's deterrent effects and the like. Now that may not concern you, or you may think that's a justifiable kind of infringement on privacy rights, but I think that's worth mentioning. It is quite significant, especially in a world in which age verification doesn't tend to work very well on any measure. There are very serious cybersecurity risks that have been associated with age verification processes and the like. So it's certainly not nothing. The other set of people that are harmed are particularly vulnerable people. 

There's a variety of people who are still accessing social media. So it looks like about seven in ten young people on the early data who had social media accounts are still accessing social media now. Now these are early figures, so there's a lot to be said for looking at how this works in a year's time, for example. But I think one of the interesting things to think about is when those young people who are still on social media—in breach of this ban or in defiance of this ban, however you want to put it—might need to engage in help seeking behavior, there may be a deterrent there, because they know that the law is they're not supposed to be accessing social media. So that is a selection of young people that we're particularly concerned about. And then, more generally, of course, there's a whole cohort of people who are particularly vulnerable. Maybe they're LGBTIQ, maybe they're in an isolated geographic area, far away from a city. Maybe they're experiencing harm at home and have no one to talk to about it. There's all sorts of ways in which young people use social media to manage their own challenges, harms, difficulties, and very effectively. They find people to talk to about their problems when other people may not be available to them. And that is an issue that is hard to map, right? We know that there's been an increase in calls to things like Kids Helpline, which does what it says on the tin. So those kinds of things have seen an increase. But I think that is something that is harder to map, but still very, very important, and may result in people going to other parts of the internet as well to seek help in different ways that might also not be very safe for them. 

More generally it's worth remembering that if platforms can say with some confidence, from a policy perspective, that young people are no longer on their platform, there is less incentive to design for them as well, which is another associated problem. Now, it remains unclear as to how platforms are dealing with that issue, especially in light of the most recent data, which suggests that a lot of young people remain on the platforms. But that's an issue. Do we then allow platforms to no longer design in a way that respects the autonomy of young people, the safety of them, their security and the like, because they have special needs and interests and all those sorts of things. So that's another problem. There's lots of operational problems. There's lots of conceptual ones. I don't think many of these have been considered or accounted for in the process.

JY: Absolutely, those are the same things that worry me as well. Okay, let's talk about the campaign. So what has the pushback to this, to the law, looked like, and what changes were you calling for?

LO: Well, if I can Jillian, what I might start with is where the push came from. Because I think that's quite instructive. One of the key sets of institutions that were pushing for this ban were mainstream news organizations, and we're learning a bit more about this over time, but the Murdoch press and other large news organizations in Australia—Australia has one of the most concentrated media environments in the world—were pushing for this ban. There was a petition run on one of their websites that was gathering tens of thousands of signatures. There were also others. Then there was a lot of advocacy towards specific kinds of political leaders in the country, and then a kind of competitive race to see who could be the most extreme in terms of putting forward a policy. But it's certainly the case that this very powerful set of actors in our democracy, at least, were a key driver of this campaign for a social media ban for young people. Now, I think there's a sense of moralism about it, a sense of desperation about it, tapping into genuine fears from parents, you know, and the like. And you know, The Anxious Generation, the book by Jonathan Haidt, has obviously been very influential with many people, but the research is still a bit unclear, right? About what this all means. And lots and lots of researchers will tell you that that book isn't making a reasonable argument based on the data that we have, right? So, it's a very febrile environment for this kind of discussion, and those kinds of institutional actors were incredibly important in getting this on the political agenda.

We then had an electoral campaign, definitely a vision that conservative politics would push for this. So Labor politics, you know, center left politics pushed for it, and won the election, right? Not on this issue alone, but it was in that environment in which this policy was developed. There was a very small amount of time for submissions, for policy discussion about it. Initially, the government had said they weren't going to do it because they were concerned that the age verification technology wasn't up to scratch. That changed very, very quickly, and then the policy was introduced. I think it was in six days, some very small amount of time. So many different child rights organizations, academics, institutions, filed policy submissions to discuss this, did a lot of advocacy work, but the passage of time between the announcement of the proposal and the passage of the legislation was extremely short, and what followed has been a year of discussion around whether this was a good thing, a year of testing age verification technology, often finding it wanting, but setting up a set of preferred providers that platforms could use in order to satisfy the legislative requirements. A lot of lobbying from platforms as to whether they're in or out. There was a big discussion about whether YouTube should be in or out. And a lot of back room dealing between relevant politicians and big tech companies. So the whole thing is very unseemly, and we're now in the world where it's been introduced, a lot of failure for it to actually operationalize now. Now, it may be that that changes over time, but that's quite telling, right? 

It's telling also because I don't think all parents particularly like this proposal either. It's very popular, but there's certainly a section of parents that are facilitating their children's continued access to social media. And I think that's interesting in itself. Part of what it is—something we were talking about actually earlier in our conversation—people don't like governments telling them how to parent their children. That has taken some very negative expressions in parts of the world, you know, resistance to things like the availability of medicine and treatment for kids who might be trans. But in this context, it's like, “I'm not going to let the government tell me that I can't let my kid on social media.” So, I don't think it's clarified much in the debate in terms of understanding how platforms behave towards young people, what they could do better, of which there's many things, and then how we get to the world in which children are able to be online but better protected. I'm not sure this proposal has contributed to that. It's really muddied the waters about what the government is capable of doing, what it should be doing, and what platforms, you know, what should be the process that platforms go through when thinking about designing for children.

JY: That's such a great answer. Thank you. And actually, that brings me to another question, which is: so in your ideal world, taking this law and being able to throw it out the window if you want… What would you want to see, not just from social media, but from the platforms, from governments, both for the sake of youth, but also, you know, for all of us?

LO: I think that is the exact right question to be asking, and it's a good time that we've managed to talk now, because actually, in the interim, what's come out is the first draft that we've got of a Children's Online Privacy Code. And to me, that is really revealing, because it is designed to apply to all services that might be accessed by children, like all online services, and it has a really kind of sophisticated understanding of what consent might look like, where you need help with getting consent, when it comes to parents or adults that are supportive in your life. And then at different ages that might look a bit different, like you might get notified if consent has been refused by your caregiver, for example, if you've wanted to do something. So there's a more sophisticated understanding of what consent looks like, and a range of different restrictions on when personal information can be collected and used.

It's got things in it that I don't particularly like. I would like to see a prohibition on the commercial exploitation of children's personal information, because I don't think any targeted advertising is justified, for example. And I think that kind of measure of that commercial exploitation is hugely problematic. I think we have to think about what deletion looks like. I think you should have a right to deletion, for example. But you know, we also have to respect that children grow into young adults, that making decisions at 16 might look quite different to when they're three. So what you do with their personal information, how they carry that forward into their adult lives might be different depending on the age and so that kind of privacy reform actually is the fundamental thing. I’m sure your listeners don’t need reminding of this.

That is my favorite right. Because I think restricting access to personal information is a rights-respecting way to improve the online environment for everybody. And what I think is really interesting about this Children's Online Privacy Code that is still in draft form, is that all these things should be available to adults as well. Like adults in Australia don't have the right to deletion at the moment. We don't have a right to comprehensively know where our information has traveled and to delete it. You know, look, we have fewer rights than Californians, for example, certainly fewer rights than Europeans. What this code has highlighted is that, in fact, all people should be enjoying this kind of protection that comes from restricting access and use of personal information and giving people more control over that, because that personal information is the raw material of the business model, and it leads to a very loose approach to its collection and leads to many negative downstream consequences, I would argue, including business models that prioritize engagement, that prioritize and monetize polarizing, extremist content, mis- and disinformation.

I think we could have a real crack at trying to ameliorate some of these problems, or certainly reduce their impact, if we started with that fundamental raw material that fuels the business model. So that, I think, is a really telling alternative that we're now considering as a society, and I like to think that people will come to an understanding that you can find ways to elevate and improve the online world, particularly for young people, without restricting their access to that online world, in a way that is empowering for them rather than patronizing or infantilizing.

JY: I completely agree, and I think it's funny that people often see privacy and expression at odds with each other, when actually I think privacy enhances expression.

LO: I think it makes spaces safer, makes people freer to be able to say what they think, but also to have those discussions in ways that are more meaningful, that can help find connections, even across divisions, rather than exploiting that division for profit, which is so much of the current business model.

JY: Are there any other things happening in Australia that EFF’s readers should know about?

LO: Well, we're about to go through the second tranche of our privacy reform. We did engage in our first tranche of privacy reform. We have a Privacy Act that was passed in 1988 and hasn't been meaningfully updated in the decades since. So we got a few small changes, which included the enabling provision to allow a Children's Online Privacy Code to be developed, which is why we're getting the benefit of that now. But we're about to see a range of different privacy laws introduced. What the content is, of course, will be the subject of a lot of discussion and debate. We're going to argue for the right to deletion, a private right of action for privacy harms, better processes for consent, and improved definitions of personal information, to really bring Australia in line with lots of other similar jurisdictions around the world. And we're really keen to advance that for all the reasons that I just mentioned.

The other big change that I think is coming, which is perhaps more on topic for this conversation, is that we've had this online safety policy that is constantly being touted as the first in the world, and world leading and this and that, and it's really been a very flawed and vexed process working out how we could develop codes that were designed to govern how certain services were provided in the digital age, in line with safety expectations. There’s been a lot of focus on complaints and takedown notices and things like that. There's obviously been that vexed litigation with Elon Musk, trying to get him to take down a particular video, and ultimately, the failure of our regulators to succeed on that front, which I think is probably correct, because giving a regulator in Australia the right to take down content from anywhere in the world seems to me a very concerning development, if that was allowed to proceed. So this history of online safety has been a big part of successive Australian governments’ identities. We're about to see the introduction of a digital duty of care. That's certainly the stated position of government. What that looks like in practice, I think, will be really interesting.

I like the idea of a digital duty of care. I like the idea of a flexible, overarching concept. What the content is, though, will be really important. So what I would like to see is proactive disclosure of harm or risk of harm, and then actions taken by platforms to address it. So more onus on platforms to provide transparency about what they know about how their online spaces are being used and what might be harmful. I mean, there's a question around whether we'll see the introduction of a civil right, something similar following from the litigation that’s taken place in California and New Mexico, which is leading, really, to multiple claims being made all around the country in the US against companies like Meta and Google and other social media platforms. So I think there may be a flow-on effect from that, as in, it might turn into a civil right to sue for failure to meet the requirements of a digital duty of care. But I'm really interested to hear from any of your listeners, or anyone who's working in this space, about what the content of that digital duty of care should be, because there are obviously limits as well. It could end up not being rights-respecting, and we're interested in making sure that's not the case. And I think there's probably a range in which it could be more protective or less, and working out how to do that—there are examples from around the world, but that's going to be something I reckon we could use help with, that we want to get right, to make use of that opportunity as best we can.

The last thing I'll say, I suppose, is that our government is always looking for ways to deal with mis- and disinformation, and that comes with real risks of censorship. And so I think there's a strong argument to focus on privacy reform, because it's a rights-respecting reform, as an antidote to mis- and disinformation. Greater transparency on platforms—think about how they prioritize content in your feed, for example—can be useful, or reporting on what content is really popular, like ad libraries. There's all sorts of ways in which we can introduce greater transparency, but I do worry that as governments around the world feel emboldened to do so, they might look for more ways to remove content, to be more involved in content moderation policies that have the real potential to become censorship if we're not careful. So that's the other abiding concern I've got about Australian policy at the moment.

JY: One of my big concerns now, too, is all of these authoritarian governments watching Australia, watching the UK, and enacting laws that are modeled on, but much more severe than, the ones in those places. Do you share that concern? 

LO: Yeah. I mean, the other way in which it's come about in Australia is certainly anti-doxxing laws. At the moment, we've got laws on our books that came about attached to a privacy reform. I'm hesitant to say it's a privacy reform, because it's not, but it's very egregious. It's a criminal offense to disclose basic details about someone online, if it's done with a certain set of intents and the like, about their particular status as a group, and that, I think, you could drive a truck through in terms of how you could interpret it, right? There's such a wide variance, and bringing a proceeding against someone, like prosecuting them for that, is such a life-altering experience. And I think about what would happen if governments did want to focus on particular activists. I'm particularly thinking of the way it was framed, which was certainly around the discussion and debate about the genocide unfolding in Gaza. I think, particularly about that movement, they're very vulnerable to crackdowns by government for speech that is perceived to be unacceptable by government. 

And I'm not even trying to debate it. I think there's certainly antisemitic commentary occurring in Australia, and indeed, there have been some people, like genuine Nazis, arrested, which, you know, is a different kettle of fish. But I think progressive movements, not just the defense of Palestine movement, but lots of other progressive movements, are at particular risk from those kinds of laws. But I think mis- and disinformation is the other vehicle. So we have to be very careful about giving platforms, giving regulators, both the mandate and then the authority to police content based on particular criteria. And often what they talk about, or what they talked about in proposals that have now died in Australia, were things like public health issues. So, you know, that's a particular concern that drives a lot of people who are very worried about the years of Covid up the wall. So it inspires a lot of reaction. But I think there's lots of ways in which undermining political stability is put forward as a justification for removing content. That's just so broad that I think you could really start to see censorship. It's just not good enough. I just don't think we can tolerate those kinds of proposals. I like to think that's not the case in Australia, but I just think there's a tendency among governments now to see this as an opportunity. It's an anxiety lots of people have about mis- and disinformation, and so they draw on that as a mandate to act. And I think we should be very cautious about those proposals.

JY: Definitely. Okay, I’m going to ask the final question that I ask everyone. Who is your free speech or free expression hero? Or someone from history, or even someone personal who has influenced you?

LO: There’s a chapter in my book where I talk about the Paris Commune, which happened a long time ago, but I still think it’s a really interesting experiment in applied democracy. This is when a bunch of Communards took over Paris and started doing things differently in a variety of ways. Gustave Courbet is this artist who’s leading the artist collective during this time, and I always found him entertaining because he would paint things that weren’t expected. So, often, nudes that were considered quite scandalous because they were everyday women who weren’t angelic or Madonna-esque in their style, but he’s got a very famous painting of female genitalia—

JY: Yes! Facebook took it down! [laughs]

LO: Exactly. It’s always been a very confrontational image. People find it sexist sometimes, because they think it’s very pornographic. I understood it differently. It’s called “The Origin of the World,” so I sort of see it as a force of giving life. Interpret it however you like; the point is that Facebook couldn’t tolerate it and took it down. There’s a nice little bit of litigation where a schoolteacher had a page where he was teaching people about that art, and Facebook could just not tolerate it. In my mind, it was so telling that a Communard from a century and a half before was basically revealing, as an expert troll almost, how conservatives—someone like Mark Zuckerberg—view the world, and how they shape these platforms. And how they subtly reshape what we think is appropriate, what we think is free, what we think is within the realms of good society. And you really do need artists telling you that that might not be true; they’re some of the most effective actors at revealing that about those who hold power, at reshaping our understanding of what acceptable debate is, and at showing how power is exercised in our online world, where in other circumstances it might be quite okay.

I love that story, and I love the Communards. There’s a lot of beautiful writing about them; there’s a beautiful book called Communal Luxury that talks about all the different ways in which they were trying to reimagine their society and do it collectively, from things like having the first union of women to having the design of clothes and furniture look different. I want to see a world in which people take that power, in both the micro and the macro, and start to reshape their society in really creative ways. And I feel like digital technology has the real capability of allowing that to occur, and I want to revive that sense of concrete democracy rather than just delegated democracy, or deferred representative democracy where you tell someone else what you want but don’t have a say in a lot of decisions. That really grassroots idea of democracy is something I think could genuinely occur with the assistance of digital technology. It’s a matter of working out how to bring it into being. And that’s what I see this movement as doing. People with digital rights as their primary concern are trying to recreate that world, so that there are more communal, collective spaces for discussing what the future should look like.

[syndicated profile] techdirt_feed

Posted by Mike Masnick

Back in 2011 and 2012, one of the central technical objections that helped kill SOPA and PIPA was about DNS blocking. Engineers, internet architects, and cybersecurity experts all lined up to explain, in painstaking detail, why blocking at the DNS layer was a terrible idea. It would break the fundamental architecture of how the internet works. It would have massive collateral damage. It would undermine security protocols designed to protect users from exactly the kind of DNS manipulation that the bill proposed. And it wouldn’t even stop piracy, because anyone who actually wanted to get around DNS blocking could do so easily.

Congress, to its rare credit, actually listened to the technical experts (and widespread protests) and shelved the legislation. But the entertainment industry never gave up on the idea. They just went jurisdiction-shopping. And France, which has never met a maximalist copyright enforcement scheme it didn’t love, has been more than happy to oblige.

As recently reported by TorrentFreak, a Paris Court of Appeal validated DNS blocking orders requiring Google, Cloudflare, and Cisco to block access to pirate sites through their own DNS resolvers. This goes beyond the traditional ISP resolvers France has been ordering to block sites for years — this targets third-party resolvers, the ones that millions of people specifically choose to use because they offer better privacy, better security, and better reliability than their ISP’s default DNS.

But, of course, in France (and to the usual crew of Hollywood lobbyists), “better privacy, security, and reliability” can only mean one thing: used for piracy.

The court rejected all five appeals, and in doing so, articulated a legal principle so sweeping that it has no natural stopping point.

In this case, French pay-TV provider Canal+ went to court under Article L. 333-10 of the “French Sport Code,” which lets rightsholders request “all proportionate measures” against “any online entity in a position to help” block access to pirate sites. Canal+ argued that because users were simply switching to third-party DNS resolvers to circumvent ISP-level blocking, those resolvers should be conscripted into the blocking regime too.

Cloudflare and Cisco pushed back, arguing that their DNS resolvers serve a “neutral and passive function” — they translate domain names into IP addresses and that’s it. They compared their role to a phone book. The court’s response boiled down to: we don’t care.

The DNS resolution service allows its users, via the translation of a domain name into an IP address, to access websites on which sports competitions are broadcast in violation of rights-holders’ rights, and in particular to circumvent the blocking of those sites by ISPs.

The court found that the “neutral and passive” nature of DNS resolvers is “simply irrelevant to Article L. 333-10.” The law isn’t about liability at all — it only cares whether a service can help block access to pirate sites, which DNS resolvers clearly can. If you are technically capable of blocking access, you must.
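To make concrete what the article describes, here is a minimal, purely hypothetical sketch of what resolver-level blocking amounts to. The domain names, addresses, and zone table below are all illustrative; this is not any vendor's implementation. A resolver's job is the translation step the court calls "neutral and passive"; a blocking order splices a blocklist into that step so the resolver answers as if the domain did not exist.

```python
# Hypothetical sketch of resolver-level blocking. All names, addresses,
# and the zone table are illustrative stand-ins, not real infrastructure.

BLOCKLIST = {"pirate-stream.example"}  # domains a court ordered blocked

# Toy zone data standing in for the real recursive-resolution process.
ZONE = {
    "example.com": "93.184.216.34",
    "pirate-stream.example": "203.0.113.7",
}

class Nxdomain(Exception):
    """Signals the 'no such domain' answer a blocking resolver returns."""

def resolve(name: str, blocklist: set[str] = BLOCKLIST) -> str:
    """Translate a domain name into an IP address, honoring a blocklist."""
    if name in blocklist:
        # The domain exists, but this resolver refuses to say where it is.
        raise Nxdomain(name)
    try:
        return ZONE[name]
    except KeyError:
        raise Nxdomain(name)

def resolve_elsewhere(name: str) -> str:
    """Circumvention in one line: ask a resolver with no blocklist."""
    return resolve(name, blocklist=set())
```

The sketch also shows why circumvention is so easy: the real records still exist unchanged, so any resolver outside the order's reach (the `resolve_elsewhere` stand-in) returns the true address.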

Google, meanwhile, tried a different argument: that DNS blocking through third-party resolvers isn’t effective because users can just switch to a VPN or yet another resolver. The court wasn’t moved by that either:

Any filtering measure can be circumvented, and this possibility does not render the measures in question ineffective.

As long as DNS blocking stops some subset of users from reaching pirate sites, the court ruled, it’s “proportionate.” Under that line of thinking, any measure that inconveniences even a fraction of would-be pirates is legally justified, no matter how much collateral damage it causes for everyone else.

And if you think that principle has any limit, Canal+ has made it quite clear that they don’t think it does:

Canal+ said in a statement that the rulings are “more than a victory,” forming part of “a global approach that will be reinforced by the progressive deployment of complementary measures, including IP blocking.”

Canal+ has already been getting courts to order VPN providers to block as well. So now we have ISP DNS blocking mandated, third-party DNS resolver blocking mandated, VPN blocking mandated — and, per the TorrentFreak article, direct automated IP address blocking is coming too. They will not stop until the entire internet is broken.

Each step reaches further down the internet stack, breaks more of the internet for more people, and stops fewer actual pirates, because the people who are determined to pirate content are always one technical maneuver ahead. The people who get caught in the collateral damage are ordinary users who happen to use Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8 for perfectly legitimate reasons like speed, reliability, and privacy.
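For context on how low the circumvention bar is: choosing a resolver is typically a one-line configuration change. A sketch for a common Linux setup (paths and tools vary by system, and machines running systemd-resolved or NetworkManager manage this file through their own tools):

```shell
# Use Cloudflare's and Google's public resolvers instead of the
# ISP-assigned default (classic resolv.conf location shown).
printf 'nameserver 1.1.1.1\nnameserver 8.8.8.8\n' | sudo tee /etc/resolv.conf

# Or ask a specific resolver directly for a single lookup:
dig @1.1.1.1 example.com +short
```

Which is exactly why each new blocking order reaches one layer further down the stack: every layer that complies just points determined users at the next one.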

Cisco, rather than comply with the original order, simply pulled its OpenDNS service out of France entirely. That’s the kind of collateral damage we’re talking about. French users who relied on OpenDNS for entirely lawful purposes completely lost access to the service. Because a copyright holder decided that the DNS layer was the right place to play whack-a-mole with pirate sites.

When Cisco argued on appeal that implementing geo-targeted DNS blocking would require 64 person-weeks of engineering work, the court waved it off, saying the estimate was “not supported by any objective evidence” and pointing out that Cisco already offers DNS filtering to enterprise customers. The fact that enterprise DNS filtering for corporate networks is a fundamentally different thing than mass geo-targeted blocking of domains at the resolver level for an entire country’s users apparently did not register as a meaningful distinction.

The court’s core reasoning — that any entity technically capable of blocking must do so, that circumvention doesn’t make blocking disproportionate, and that the “neutral and passive” function of an intermediary is irrelevant — creates a legal framework that can reach basically anything. If a DNS resolver can be conscripted because it’s “in a position to help,” what about browsers? What about operating systems? What about CDNs, or cloud hosting providers, or certificate authorities? The logic has no brake pedal. Every layer of the internet stack is, in some sense, “in a position to help” block access to content. The question the court’s reasoning cannot answer is: where does it end?

Under this reasoning, what’s to stop a rightsholder from arguing that browsers should block pirate URLs directly? Or that operating systems should refuse to resolve them at all?

That seems bad!

Of course, this kind of maximalist copyright enforcement is something of a French specialty. This is the same country that brought us HADOPI, the graduated response agency that cost French taxpayers €82 million over a decade while imposing a grand total of roughly €87,000 in fines. A staggering return on investment — if the goal was to light money on fire while accomplishing nothing. France has also been at the forefront of copyright exceptionalism that risks undermining the EU legal system more broadly, pushing interpretations of copyright law so aggressive that they threaten to distort the legal frameworks of neighboring countries.

France keeps doing the same thing over and over again: spend enormous sums, conscript more and more intermediaries, break more and more of the internet’s infrastructure, accomplish almost nothing in terms of actually reducing piracy, and then conclude that what’s really needed is… more of the same, but harder. The entertainment industry’s refusal to learn from twenty years of evidence that enforcement-maximalism doesn’t work is genuinely remarkable. Every study and every natural experiment shows the same thing: the most effective anti-piracy tool ever invented is convenient, reasonably priced legal access to content. But that requires adapting your business model, and it’s apparently much more satisfying to get courts to break the internet for you instead.

The ruling’s real danger is the template it sets. Other countries with similar legal frameworks will look at this appeals court validation and think: we can do that too. The “any entity in a position to help” standard, combined with the “doesn’t have to be perfectly effective” standard, combined with the “we don’t care about your neutral role in the architecture” standard, adds up to a legal toolkit for conscripting nearly any internet infrastructure provider into a copyright enforcement apparatus. And the costs get externalized onto those providers (and their users), while the rightsholders collect the benefits.

The engineers who fought SOPA warned about exactly this: DNS blocking breaks things, creates collateral damage, pushes enforcement into layers of the stack never designed for it — and doesn’t actually stop piracy, because the actual pirates just route around it while everyone else suffers. France apparently decided all of those concerns are, to quote the court, “simply irrelevant.” And now they’re moving on to IP blocking.

At some point, you run out of layers of the internet to break. But apparently we’re going to have to find out where that point is the hard way.

[syndicated profile] techdirt_feed

Posted by Daily Deal

Dive into Godot – a rising star in the game engine world – with the 2026 Complete Godot Stack Development Bundle. You’ll learn to create platformers, RPGs, strategy games, FPS games, and more as you master this free and open-source engine with easily expandable systems. Plus, you’ll also explore techniques for game design and game asset creation – giving you the ultimate toolkit to customize your projects. It’s on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

[syndicated profile] techdirt_feed

Posted by Tim Cushing

The government needs more funding than ever, which is kind of hilarious when you realize the Tea Party of the Obama era was the predecessor of this Big Government version of the GOP.

The DHS can’t even get itself a budget at the moment. Sure, it will get some money thrown to it sooner or later and the administration won’t let the lack of tax revenue offsets stop it from feeding billions more into its Bigotry Machine.

But that’s not all. Behold our all-but-officially-declared war in Iran, currently headed by the Department of Defense (sorry, “Department of War”). This little excursion is adding billions of dollars weekly to the national deficit. After all, as right-leaning libertarians like to point out, the government doesn’t actually “make” anything. The private sector builds the bombs and missiles. And unlike TSA agents, they expect to be paid.

You know who could help this country offset some of its insane expenditures? It’s the same people we’re spending billions to remove from the country:

Immigrants accounted for more US income and generated more revenue for the government because they were, on average, over 12 percentage points more likely to be employed than the US-born population. This means that even if immigrants earn lower hourly wages, they can still account for more total income per capita than the US-born population by working cumulatively more hours. This higher employment rate was driven by the fact that immigrants were, on average, 20 percentage points more likely to be of working age. Immigrants usually arrive in the US as young adults and often leave before retirement.

More succinctly, immigrants out-punch their weight class when it comes to erasing budget deficits:

Accounting for savings on interest payments on the national debt, immigrants saved $14.5 trillion in debt over this 30-year period.

[…]

Without the contributions of immigrants, public debt at all levels would already be above 200 percent of US GDP—nearly twice the 2023 level and a threshold some analysts believe would trigger a debt crisis.

But that help is apparently no longer welcome. The Trump administration has succeeded in eliminating the firewall between the IRS and ICE, allowing ICE agents to use this data to hunt down taxpayers who work harder and pay more taxes than the white, natural-born citizens that this administration pretends make America great.

That’s going to cause even more problems for an administration that is spending far more liberally than any “liberal” it blames its current budget problems on. Here’s how that looks on the ground as Tax Day has come and gone in the United States:

By the time Tax Day rolls around every April 15, accountant María José Solís usually has more to do. More clients. More paperwork. More phones ringing, more emails and WhatsApp messages pinging.

But this year, she said, more than 550 of her regular clients have disappeared. That’s about 15 percent of her customer base at Toro Taxes, the bilingual firm in Wheaton, Maryland, that Solís runs.

There’s your anecdote, albeit one that’s being repeated around the nation. Here’s the data:

The Yale Budget Lab estimates that the IRS stands to lose between $147 billion and $479 billion over the next decade as migration to the U.S. declines, deportations increase and immigrants of various statuses disengage from the formal economy for what some experts say may be an extended period.

That estimate will likely prove too low if the Trump administration continues to purge migrants at the rate it has since Trump returned to office. It will definitely be too low if another similarly bigoted GOP lawmaker succeeds him as president.

And it’s not just the losses up front. There’s money leaking out the back as well. It’s a double-dip, because migrants with ITINs (individual tax identification numbers) pay taxes for services they can’t actually access, like Social Security and Medicare. They’re actually subsidizing citizens who pay fewer taxes, work fewer hours, and commit more crimes than they do.

This nation continues to become poorer, not just in terms of financial viability, but in heart and spirit. Migrants made this nation great. Now, a bunch of ungrateful people who hate people who aren’t white are not only driving us deeper into debt, but they’re eliminating a source of income that never asked for anything more than a chance to survive.

(no subject)

Apr. 23rd, 2026 09:29
firethesound: (Default)
[personal profile] firethesound
New words this week : 9,949 words which is a little surprising. I thought it was going to be another slow week, and I guess it was not!

WIPs worked on this week : 2, with 1 new WIP

I thought this was going to be another week with a lower wordcount because I did not feel like I had much writing time, though looking back I did have one big day (2.4k) that I think helped boost my total quite a bit. I'm so blindingly exhausted that my memory is garbage right now.

The Old Guard

food truck au : 7,611 words which brings the total to 133,418 words and we're still going on the smut. I originally had it planned as a single chapter and figured it'd be on the longer side, but it would still be fine to post as one. We're at 22.5k and I think I've got another 1-2k to finish it off. Whew. ALSO. I think I finally have a title for it, which I have been worrying about because I usually have a title by this point.

au of food truck au : 2,511 words which brings the total to 2,511 words and it's inspired by something from food truck au that I really want to write and doesn't fit in that fic. So, I've just been throwing words at it in a separate doc, first to try to get it out of my head, and then because I think (and my beta agreed!) that it might be fun to post as a little "extra" for food truck au. The way it's written now is shaping up to be a whole new fic of its own, which I do not want to write, but I think that I am going to reshape it as a pwp so that'll keep the wordcount fairly constrained.
[syndicated profile] techdirt_feed

Posted by Karl Bode

There are some endless, curious tensions within the corrupt Trump administration when it comes to its effort to completely destroy the government’s ability to hold corporations accountable for dodgy, nefarious, or even illegal behavior. Its own lazy, circular logic and bad-faith legal interpretations are creating vast new legal minefields we’ll be untangling for decades.

The wireless industry is a prime example.

For decades, major wireless carriers AT&T, Verizon, and T-Mobile collected vast troves of sensitive user location and movement data, then sold access to any random nitwit with two nickels to rub together. The result was a parade of scandals wherein everybody from stalkers, law enforcement (or people pretending to be law enforcement), car companies, and governments (foreign and domestic) to right wing extremists happily abused the data in myriad, dangerous ways never made clear to the end user.

Though this behavior had been going on for years, generating untold millions, it only gained mainstream attention thanks to a 2018 New York Times story showcasing how police and the prison system routinely bought access to this data and then failed completely to secure it. In 2024, the Biden FCC finally proposed fining wireless carriers $196 million ($91 million for T-Mobile, $57 million for AT&T, $48 million for Verizon).

Those fines have been winding through the courts ever since, with wireless carriers (with varying degrees of success) insisting that the FCC lacks the authority to do, well, anything they don’t like. Like most corporations, wireless giants have been broadly helped in that endeavor by Supreme Court rulings dismantling regulatory authority across several different pillars of consumer protection law.

AT&T was also helped dramatically by a 5th Circuit ruling last year declaring that the FCC fines somehow violated wireless carriers’ Seventh Amendment right to a jury trial. This was one of several specious arguments telecom lawyers threw at a wall to see which one would satisfy the Trump-addled court system. The 5th Circuit was happy to oblige, vacating the FCC’s long-percolating fines of AT&T.

You’re supposed to ignore that AT&T has been at the vanguard of making jury trials impossible for customers through its use of fine print forcing users to pursue binding arbitration, a lopsided system that finds in favor of corporations a vast majority of the time. Or that AT&T spends millions of dollars annually successfully lobotomizing the entirety of telecom oversight, be it congressional, legal, or regulatory.

Regardless, these debates are now winding their way to the Supreme Court, where a majority of justices this week expressed some skepticism about the wireless carriers’ claims (that they have to be found guilty via a jury trial in order to be fined by the FCC).

The FCC is kind of defending the Biden era fines (Brendan Carr wants to retain some FCC authority to force corporations to bend the knee to authoritarianism). But here's the fun thing: even if the justices disagree with the wireless carriers (which can certainly change after a few late night chats with telecom lobbyists), the FCC's inclined to change the language of its forfeiture orders anyway:

“But even if AT&T and Verizon lose this case, they could get a victory of sorts because the FCC and justices seem to agree that FCC fine decisions are nonbinding and require a court decision to enforce them. A government lawyer told justices that the FCC may change the language of its forfeiture orders to make it clearer that fines don’t have to be paid until after a jury trial.

“It seems like you’ve won on the law going forward, one way or the other,” Justice Brett Kavanaugh told attorney Jeffrey Wall, who represents AT&T and Verizon. “Your reply brief begins, ‘the government’s in retreat.’ That’s absolutely correct.”

With the Supreme Court poking holes in regulatory autonomy across countless fronts (SEC v. Jarkesy, Loper Bright), there's no shortage of options for corporate lawyers looking to avoid regulatory accountability. Nearly any serious attempt by a regulator to hold corporations accountable for pretty much anything can now be easily bogged down in years of litigation, quite by design.

You’d think the broad, dire impact of that would be of more interest to journalists and policy folk.

This whole Ars Technica article by Jon Brodkin is worth a read, and is a good demonstration of (1) how the Trump administration's legal lackeys have to trip over themselves to pretend they're engaged in good faith, non-corporatist, non-corrupt interpretation of consumer protection law, (2) how all the weird holes created by Supreme Court rulings aimed at demolishing even basic corporate oversight have created a vast minefield that's a nightmare for everyone to navigate, and (3) how the press likes to pretend this is somehow normal behavior by a serious country and not a byproduct of abject corruption.

But in short, it's likely that AT&T, Verizon, and T-Mobile will never actually have to pay any fines related to their decade-plus decision to spy on users and monetize their sensitive movement data. That's not only an act of overt corruption (dressed up as serious, furrowed-brow legalese); the failure to hold wireless carriers accountable for privacy and security issues also poses a lasting cybersecurity threat.

It genuinely doesn't get enough attention that the Trump administration (and specifically the Trump-friendly Supreme and circuit courts) has delivered a killing blow to the federal government's already shaky ability to hold corporations accountable for anything. People and the press deny, ignore, downplay, or normalize it, but the consequences of these choices will range from massively problematic to fatal, and will reverberate for a generation.

[syndicated profile] bruce_schneier_feed

Posted by Bruce Schneier

404 Media reports (alternate site):

The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database….

The news shows how forensic extraction—when someone has physical access to a device and is able to run specialized software on it—can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.

“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media.

Thursday @ 7:57 pm

Apr. 23rd, 2026 19:57
alisx: A demure little moth person, with charcoal fuzz and teal accents. (Default)
[personal profile] alisx

Scenes from the Shelter:

1. Lady who organises the volunteers taking around what looked like an induction group. One of them broke off when they saw me: “Did you work with [cat’s name]?” Sure did; was there when you adopted her, in fact. She was an older cat with health issues, who’d been there a long time. Well, her new cat mum showed me a photo of her sitting on the back of a sofa in the sun, gazing peacefully out an apartment window. Perfect.

2. There’s been a cat in one of the wall cages who bit when it was brought in and has spent the last few weeks hiding under a towel and hissing every time anyone opens the door. Including this morning, according to the notes. Except, when I went to feed it this afternoon? One teeny hiss then ten minutes of purring and rubbing all over the bars and my hands. I don’t know what changed, sweetheart, but I’m so glad.

Leave a comment.

Spider-Man/Superman #1

Apr. 22nd, 2026 20:55
superboyprime: (Sun)
[personal profile] superboyprime posting in [community profile] scans_daily
"The world is drowning in hate and anger. Sides separated by an ever-widening canyon of digital bile. Soon both factions will tumble off the edge... Hands clutching their weaponized phones, finding no olive branch to save them because neither side knows what that means anymore." - Geoff Johns, Doomsday Clock

Read more... )