Behind the Paddle

E54: Protecting the Digital World: Breaking Down the Online Safety Act Part 2

Porcelain Victoria Episode 34

Send us a text

In this episode of Behind the Paddle Podcast, we dive deep into the Online Safety Act: what it is, why it matters, and how it's changing the way we interact online. From protecting young users and tackling harmful content to holding tech giants accountable, we unpack the major provisions, the controversies, and the real-world impact this landmark legislation could have on everyday internet users.

Support the show

Check out our socials!

Thank you so much for listening 💖

Speaker 1:

Hello and welcome to Behind the Paddle Podcast with me, Porcelain Victoria. This is part two of the Online Safety Act. We are going to talk about a good few things in this episode, and hopefully you will enjoy listening to me and hearing my opinions and such. So we're just gonna go right into it. First up: the tech industry and compliance challenges. To comply with the Online Safety Act, social media platforms and online service providers are investing heavily in content moderation tools and resources. This shift has transformative consequences for the tech industry, both in terms of its international operations and the broader regulatory landscape. Investment in content moderation technology is heavily focused on AI-based solutions. These technologies are designed to automatically detect and flag harmful content, such as hate speech, child sexual abuse material (CSAM), terrorist propaganda, and other forms of illegal or harmful content. Companies are integrating machine learning and natural language processing tools to identify text, images, and videos that might violate the Act's provisions. Now we are seeing a lot of AI, especially on X (Twitter) right now, to the point that the data center Elon Musk actually runs is damaging so much of the surrounding area. I'm sure I'll make a podcast episode on how harmful AI actually is. I mean, I don't really think it'll make that much of a difference, us talking about it. There are vegans out there, and if we compare it to something else, say vegan activism in the sense of "don't eat this animal because it's an animal", etc., then with AI it's just like "don't use AI because it's harmful to the planet". Same if you don't recycle and stuff, but well, that's kind of different again. A lot of people do not know about recycling sites and things, but that's another podcast episode.
But yeah, change has happened a wee bit, like more people are vegan now, but some people have actually gone back to eating meat, especially after COVID. So yeah, I don't know if strikes and talking about how bad AI is and how it will actually affect people will change anything, because I know a lot of people do use AI these days. And I just don't think we take very seriously what we're doing to the planet and how it is affecting people. So we'll see in multiple hundreds of years what the earth will be like, because sadly, I don't think anything will really change unless laws come in. It would be good if there were laws to regulate how you have AI and how you distribute it, and how many of the devices that you use for AI you can have and where you have them, and things like that. But it's good that we are all coming together and saying AI is bad and everything like that. Like, I'm not saying we should stop that, but in reality are people going to actually care? And it sucks to say, but this is the reality we're living in, where the world is not so great right now. Do we care what's gonna happen a hundred, two hundred years from now? It's like, big questions. Next point: AI limitations. While these AI-based tools offer scalability and efficiency, they are not without their limitations. AI has significant challenges in understanding context, which is often essential to determine whether content is truly harmful or whether it falls within the realms of free expression. For example, sarcasm, humor, political discourse, and nuanced debates about controversial topics like climate change or immigration are difficult for AI to assess accurately. As a result, these tools often produce false positives, flagging legitimate content that does not violate the Act's provisions, or false negatives, allowing harmful content to slip through. Next: contextual understanding and human oversight. AI moderation is still very much reliant on human intervention.
However, the sheer volume of content uploaded every day makes it difficult for platforms to scale this oversight effectively. Platforms are looking to strike a balance between automated systems and human moderators who must make the final judgment call. Unfortunately, human moderators can suffer from burnout, bias, or inconsistencies in judgment, raising concerns about inadequate content review. This means that while platforms may be improving their tools, context-based nuance remains a challenge that may result in either over-censorship or under-censorship of content. See, it comes down to who reviews it. Do they think left, do they think right, are they in the middle? Like, it's so messed up. It really is. There are additional concerns about the algorithmic bias inherent in AI-based moderation systems. Algorithms often reflect the biases of the data they are trained on, meaning that they could disproportionately flag content from marginalized groups such as activists, minority communities, or LGBTQ+ individuals. The risk is that these groups may be over-policed online, facing higher rates of content removal or account suspension. This raises ethical questions about the diversity and inclusivity of the systems used by platforms to enforce the law. And so now we're going to look at the costs of compliance. The financial and operational costs of complying with the Online Safety Act are a significant burden for many platforms, particularly smaller companies and startups in the tech industry. Smaller platforms may face financial strain; they often lack the resources to build and maintain the robust content moderation systems required by the Act. This includes hiring staff, developing AI-based moderation tools, and creating reporting mechanisms to comply with the transparency requirements of the Act.
Additionally, the costs of legal compliance, including the need for specialized teams of lawyers, consultants and compliance officers, can be overwhelming for smaller players in the market. Thankfully, I do not have to deal with that because I only use subscription sites. I'm overwhelmed enough with my three, four frickin' jobs. I do not need to add anything else to my plate where I need to hire a team of lawyers, consultants, and compliance officers. That is so overwhelming. It's ridiculous. Platforms are also required to maintain risk assessments and provide annual transparency reports about their content moderation efforts. This entails constant monitoring of their operations and ensuring that their practices align with the evolving requirements of the Act. For many smaller platforms, this represents an operational overhaul that could strain both financial and human resources. Like, that is so draining. You've got so much to do, especially if you're such a small company, and it all costs... oh no. There is so much the government tries to take away from small independent companies, whether it's tax, VAT, and now they are wanting, needing, demanding that you agree to this Online Safety Act, and you have to abide by it. For startups and emerging tech companies, the cost of compliance could stifle innovation. The fear of falling afoul of regulations, coupled with the cost of developing systems to meet compliance standards, could discourage new entrants into the market. As a result, the innovation landscape might shift, leading to a consolidation of power among larger, well-established platforms that are better equipped to bear the cost and operational burden of the Act, e.g. X, Instagram, Facebook. Given the costs and complexities associated with compliance, some companies may take drastic steps to reduce their exposure to UK regulation.
To avoid the stringent regulatory environment in the UK, some platforms may choose to relocate their operations to other countries with less aggressive online regulation, like OF and a few other companies. The US, for example, has more lenient content moderation laws, especially in comparison to the UK's more interventionist stance. By relocating their base of operations, these platforms could potentially sidestep the Act's requirements, or at least avoid the financial burden of compliance. Then there's exiting the UK market: in some cases companies might decide that the costs of compliance outweigh the benefits of operating in the UK. As a result, they might reduce or eliminate access to their services for UK-based users altogether. This could result in UK users losing access to popular platforms, such as social media networks, gaming platforms, and streaming services, and may disproportionately impact those who rely on these platforms for employment, education or social connections. Alternatively, some platforms may limit the features available to UK users in order to avoid the regulatory pressures imposed by the Act. For example, a messaging service could restrict certain end-to-end encryption features or limit public posting features in the UK. This could degrade the overall user experience and lead to dissatisfaction among the platform's user base. I mean, the UK in general likes to complain about whether it's sunny out or raining, so we'll find something to complain about. Since its passage, the Online Safety Act has been met with significant legal challenges from various stakeholders, including tech companies, digital rights organizations, and civil liberties advocates. These challenges focus on the balance between enforcing safety and protecting fundamental rights like free speech, privacy and data protection.
Tech companies, particularly social media platforms like Facebook, Twitter, and Google, have raised concerns about the Act's provisions that require them to remove harmful content and protect user safety. These companies argue that the Act imposes disproportionate burdens on them, especially regarding the need to moderate large volumes of user-generated content. I mean, there are so many of us on social media platforms. Let me just do a quick Google on how many people actually use social media. Oh yeah, that's just 5.24 billion people globally who use social media, which represents 63.9% of the world's population. And that figure is equivalent to 94.2% of all internet users. That is insane. The growth rate is 4.1% annually, with 206 million users joining since last year. A key concern from tech giants is the risk of over-blocking legitimate content, which could infringe on free speech rights. They argue that the vague definitions of harmful content could lead to arbitrary or inconsistent enforcement. Additionally, they raise concerns about the potential criminal liability for executives, as the Act holds company leaders personally accountable for non-compliance. This is viewed by many as an overreach and could deter innovation. Human rights organizations such as Privacy International and Open Rights Group have voiced concerns about the implications for free speech and privacy rights under the Online Safety Act. They argue that the law could create an environment where platforms, in a bid to avoid liability, begin censoring content that meets the Act's strict safety criteria but is nonetheless protected speech under international human rights standards. Privacy advocates also argue that the age verification requirements and other content filtering obligations could lead to increased surveillance of users, violating their right to privacy.
These concerns are particularly significant for LGBTQ individuals, sex workers, and other marginalized groups who may fear increased monitoring and data collection. And of course you've got concerns about data privacy. The Act requires platforms to monitor and remove harmful content, which may involve data processing on a massive, massive scale. This could conflict with data protection regulations, such as the General Data Protection Regulation (GDPR), as we all know it in the EU. Critics fear that the requirements to collect, store and process user data for the purposes of content moderation could lead to privacy violations, including the risk of user profiling and data breaches. Like, I feel things like this can get passed so easily because not that many people have knowledge of it or understand it, and there's not that much advertising about it. So it's always gonna be tricky trying to reverse something like this, even if it does fail. Because there have been multiple things in the past which have failed, including a lot of things with sex work, but it still carries on because some people are very delusional in thinking that something works when it actually doesn't. So I don't know if we will actually get this turned over, get rid of it at some point. I kind of highly doubt it, which extremely sucks, because we need to all learn so much more about this, I feel, and about what goes on in Parliament and what goes on around the world when it comes to dictators and people who are in power when things really, really change. And I think that's where I'm going with the podcasts recently: shit's happening, and nobody knows what's happening. People need to be way more aware of what's going on recently, because there is so much out there, but it's not getting mentioned, it's not getting talked about.
And this goes for every single podcast episode that we've done, where things just aren't talked about, and that's why we have the podcast, and it's what I love about the podcast as well. So, after my little ramble, on to the next point. Digital rights organizations may challenge the Online Safety Act in international courts, particularly under European and international human rights law. They argue that the Act could violate the freedom of expression protections enshrined in Article 19 of the Universal Declaration of Human Rights or the European Convention on Human Rights. These organizations contend that the Act's broad, undefined language around harmful content is at odds with more narrowly tailored definitions of speech limitations under international human rights laws. Again, it just comes up more and more: we're just getting silenced. And that's what it is. And then of course the tech companies that operate internationally might also challenge the Act's provisions, arguing that the UK is imposing rules that could have extraterritorial effects and violate the rights of users outside the UK. The Act's reach beyond UK borders may prompt tech companies to take the matter to international courts to argue that it contravenes broader global standards for online governance and digital rights. As the Online Safety Act is enforced and its real-world effects become clearer, there will likely be calls for amendments to address the law's vagueness, unintended consequences, and the need for stronger protections for free speech. These potential amendments could focus on several key areas. First, clarification of harmful content. One of the primary criticisms of the Online Safety Act is the lack of clarity in the definition of harmful content. This broad and often vague terminology leaves room for subjective interpretations, which can lead to over-blocking or over-censorship of legitimate speech.
Calls for amendments will likely demand more precise definitions of what constitutes harmful content, to avoid chilling effects on freedom of expression and to ensure that content removal processes are consistent and fair. For instance, content deemed, quote, "legal but harmful" could be flagged, raising concerns about what qualifies as harmful but not necessarily illegal. Critics argue that there should be a clearer framework distinguishing harmful content like misinformation or hate speech from content that should be protected under free speech principles. Free speech advocates will push, and are pushing, for stronger safeguards to ensure the right to free expression is protected within the online environment. Amendments may focus on ensuring that government bodies and platforms cannot make subjective decisions that excessively restrict speech, particularly political speech or content related to activism and journalism. There could also be proposals to create clearer appeal mechanisms for users whose content is removed under the Act. There is also a possibility that amendments could shield political speech from being flagged as harmful, particularly when it involves controversial topics. This could include adding specific protections for political discourse and social activism, ensuring that activists, journalists, and oppositional voices are not silenced or censored. Given the concerns about data privacy, future amendments could focus on data protection to ensure the monitoring required by the Act does not infringe on users' privacy rights. Strengthening anonymity and creating clear boundaries around data collection and surveillance could be key points of debate. Proposals for amendments could also seek to limit the scope of age verification systems and other forms of user monitoring, ensuring they do not inadvertently lead to the collection and retention of sensitive personal data. Amendments could also focus on transparency.
The effectiveness of the Act's provisions will be scrutinized, and future amendments may require platforms to be more transparent about their content moderation practices. This could include clearer reporting on the use of AI for moderation, the removal of specific content categories, and the accountability of tech companies in upholding the law. There may be demands for more public oversight, including independent audits of how platforms comply with the Act. So basically, the conclusion here is just that the people who control the online world control what you want to say, control what they want to give out to the public. If you say you're against Donald Trump and it gets flagged, and you then get banned or it gets removed, the social media networks have their say. As tech giants, human rights organizations, and digital freedom advocates continue to voice concerns, the law's ultimate effectiveness will depend on its ability to balance the competing demands of user safety, free speech, and privacy. The ongoing legal battles and potential amendments will likely determine whether the Act becomes a blueprint for other countries or a cautionary tale of overreach and misapplication in the pursuit of online safety. It is clear that its implications will reverberate across the digital landscape for years to come. The balance struck between online safety and digital rights is central to how the internet will evolve, not just in the UK, but also as a potential model for other nations around the world. We're going to talk a little bit about the online safety versus digital rights debate. One of the key challenges the Online Safety Act presents is finding a delicate balance between creating a safer online environment and protecting fundamental digital rights such as free speech, privacy, and the right to dissent.
While the law aims to reduce harm from content like child exploitation, terrorism and hate speech, it also raises concerns about censorship, privacy infringement, and the overreach of government control over what constitutes acceptable speech in the digital space. This balance will be crucial as the UK and other governments explore frameworks for regulating online content. How well this law is implemented, enforced and adapted over the years will determine whether it becomes a successful model for online governance or a restrictive barrier to freedom of expression. I mean, as we've spoken about, it's already not going that well, let's say. The Online Safety Act is undoubtedly a bold attempt to regulate digital space and improve user safety, but its impact will depend on how effectively it is enforced and whether it can strike the right balance between safety and freedom. As we've discussed thoroughly throughout today's episode, the law's future evolution will likely be shaped by ongoing debates in Parliament, courtrooms, and global forums as tech companies, human rights groups, and users all weigh in. Legal challenges, calls for amendments, and evolving platform practices will continue to play a significant role in shaping the direction of this policy. The Online Safety Act has helped in the faster removal of harmful content. There has been an uptick in speed and transparency in removing illegal content, particularly child sexual abuse material and terrorist content. However, as we've discussed, I do feel it needs to do more. We need to figure out a way to get more of the child sexual abuse material off the internet, and the terrorist content and the bad stuff. We need to keep free speech. Free speech is free speech, and we need to get rid of the misinformation on the internet. It sucks, the world we live in right now. And hopefully in years to come it will be better, hopefully. And they will make amendments to this bill.
This has been a little mini part two, because I just wanted to add that small companies will struggle, and there are advocates out there who are fighting to get amendments, and there is still so much about the Online Safety Act which does need to be changed. But yeah, this has been Behind the Paddle Podcast. I hope you've enjoyed this little mini part two of me just mumbling on. But yeah, I think for the next episode, I've got a few things that I would like to talk about, and I just have to pick one basically. But I think we're gonna talk about Carol Leigh, the mother of sex worker rights. I'm gonna talk about her life. She was an amazing activist who revolutionized the fight for sex workers' rights by coining the term sex work. She reframed the discussion around the industry, moving it away from criminalization and stigma toward labor rights and bodily autonomy. And that episode will explore her life, her activism, and her impact in extreme detail, diving into the historical context, ideological conflicts and policy changes that she influenced. So I think that's what's next. And I cannot wait to talk about her because she sounds like such an amazing woman. So yeah, I hope you've enjoyed this podcast episode of Behind the Paddle Podcast with me, Porcelain Victoria. And everybody have a lovely day. You can catch us on Spotify, Apple, Dark Fans, Minivids, and yeah, leave us a lovely review. If you want to know any topics, then give us a message. If you want to be on the show, then you can give us a message as well. Aside from that, bye.