FUTR.tv Podcast

Keeping Users Safe from The Dark Side of the Internet - Winning with WebPurify

FUTR.tv Season 2 Episode 127

There is a lot of bad behavior on the internet, and it is very difficult to police. The rise of generative AI takes this to a whole new level.

Hey everybody, this is Chris Brandt, here with Sandesh Patel. Welcome to another FUTR podcast.

Today we are talking with Josh Buxbaum, co-founder of WebPurify, a content moderation platform. Josh is going to tell us how they are trying to keep us all safe from dangerous people on the internet. If you are not policing the content on your site, you will have changed your thinking by the time we are done here.

Welcome, Josh.

FUTR.tv focuses on startups, innovation, culture and the business of emerging tech, with weekly podcasts talking with industry leaders and deep thinkers.

Chris Brandt:

There's a lot of bad behavior on the internet and it's very difficult to police, and the rise of generative AI takes this to a whole new level. Hey everybody, this is Chris Brandt here with another FUTR podcast. Today we're talking with Josh Buxbaum, co-founder of WebPurify, a content moderation platform. Josh is going to tell us how they're trying to keep us all safe from the dangerous people on the internet, and if you're not policing the content on your site, you're definitely going to change your thinking by the time we're done here. Welcome, Josh.

Josh Buxbaum:

Hey Chris. Thanks for having me.

Chris Brandt:

Well, thanks for being on. Like I said, when we talked earlier you opened my eyes to a lot of areas that are potentially more problematic than I had thought, and there are just a lot of bad actors out there. Can you speak a little to why it's so important to have content moderation on your sites and your assets?

Josh Buxbaum:

Just to set the stage, we've been in the content moderation space for 17-plus years, so we've really watched as this type of content has become more and more of an issue. When we first started, text was the big issue. Think AOL chat rooms, and not to date myself, but how quaint that was. What a lovely time. And we've seen this content just steamroll: we were dealing with text, and then images, and then videos.

Chris Brandt:

We're obviously in an era where there are tremendous challenges in policing content. But what motivates you? I know you have a much bigger and broader mission in mind that you're bringing to this. Could you talk a little bit about what drives you?

Josh Buxbaum:

Yeah, absolutely. And to be honest, I don't think anybody could do what we do unless there was a driving force other than having a successful business or monetary gain. It's a very challenging industry to be in. We look at some really awful stuff, we deal with some really terrible people, and we're consistently exposed to it, so you have to have something driving you. For me, it's keeping users safe online. I've got a son, a niece and a nephew, and the longer I do this, the more I realize how dangerous it is out there. Look, it's impossible for me to even make a dent, or I should say for us to make a dent. We have a very large human moderation team working 24/7, we've got a significant amount of artificial intelligence, and we're moderating millions and millions of images and video hours. But still, the amount of content out there to police and moderate is insurmountable. My passion lies in keeping users safe, doing our best to put a dent in the problem, and protecting these communities.

Chris Brandt:

Well, and I know that you've actually had some success at that. I don't want to say words that are going to make the YouTube algorithm freak out, but you're protecting children out there as well, and some of the stuff that you've done has actually led to a substantial number of arrests, correct?

Josh Buxbaum:

Correct. In 2022 alone, we were responsible for the arrest of over 500 bad actors. I don't know the wording that's appropriate here; I think 'predators' is the word. And absolutely, that's a huge win for us. It's one thing to take down an inappropriate bikini picture on a dating site; it's another thing entirely to catch a predator. And honestly, that's a great example: the verticals we operate in are endless. The need to moderate runs from catching really bad, illegal content, to moderating profiles on dating sites, to e-commerce platforms where users are submitting feedback or reviews. The amount of content out there, and the use cases, really are endless.

Chris Brandt:

And the thing is, this stuff spreads like crazy. You told me about this Times Square experiment that you guys did. Do you want to talk about that real quick?

Josh Buxbaum:

Being in moderation ourselves, we recognize the challenges of being a live moderator, and we can discuss later the mental health programs we have in place and the challenges moderators face every day. They truly are heroes of the internet. Sometimes you can get tunnel vision when you're doing your job. I've been a moderator myself; part of being in this business is that you have to know every aspect of the job, so I've sat at a desk and moderated hundreds of thousands of pieces of content over my career in the space. What we wanted to do was drive home that you're not just sitting there clicking buttons on a computer, and that's very hard to convey when an image shows up, you select a label, you go on to the next, and you do 10,000 at a time. We were working on a project where images were displaying in Times Square on one of the large screens, and we thought this was such a great opportunity to drive home the impact our moderators are having, because we're always trying to remind our team of how powerful their job is, and like I said, that can easily be forgotten. So we had someone filming Times Square. An image would populate in our tool in our office, a moderator would see it, act on it, and then see it show up on the big screen in Times Square. I can't think of a better way to drive home the impact of what we do. And Chris, the most interesting part: that was showing to, what, a few thousand people in Times Square? That is nothing compared to an image that goes to a social site and can be shared millions and millions of times within hours. So we're always doing our best to, one, drive home the impact of what we do, and two, celebrate the wins, like the arrests. Because what we do is super important, and it isn't just sitting there clicking buttons, even if it can certainly feel like that sometimes.

Chris Brandt:

Well, I think it's quite frankly one of the hardest things to get right on the internet, and without it, sites just become unusable.

Josh Buxbaum:

Yeah. And drawing that line, those community guidelines, is extremely challenging: the line between something that's truly offensive and something that's acceptable is sometimes fairly gray. The risk for brands is that they can alienate their users if they're too strict, and if they're not strict enough, they're going to alienate their users because they're not comfortable with the experience they're having. And that line is constantly evolving and moving with new challenges, new social topics or concerns, or positions that a brand may be taking. It's always evolving, and it's always challenging to find that line.

Chris Brandt:

We're at a point right now where there's a big paradigm shift in technology with generative AI, and I know that's a big potential market for you and a big potential problem for all of us. Can you speak to how generative AI is impacting your business?

Josh Buxbaum:

Gen AI is so new and it's already having a significant impact. It's fresh, if you will. The first thing is the ability to create content. As I said, there's a significant amount of content and it's always growing; people are always coming up with new and creative ways to submit offensive content, or even harmless content. All of a sudden, with generative AI, the ability to create content has been accelerated: I can create 30 pieces of digital art in five minutes. All I do is provide a prompt and it's there. So the amount of content is growing exponentially, and we're seeing that on our platforms. And then of course the challenge with generative AI is that folks can use it to intentionally create content that will bypass artificial intelligence, basically using AI to trick AI, to beat AI.

Chris Brandt:

Yeah.

Josh Buxbaum:

Exactly. That's disturbing. And then determining what is AI generated is a whole other challenge. Of course, the question of whether someone is interacting with a human or an AI is significant, particularly when you're looking for advice on real-life problems and trusting that another human cares when you're really talking to a computer. But the other risk with generative AI is that the folks who created it don't necessarily know what it's capable of. They have literally hired people called prompt engineers whose sole job is to feed questions in and see what the AI will send out. That's pretty concerning: building something and not even knowing what it's capable of.

Chris Brandt:

If you look at some of the different generative AI models, when you connect them to the internet they get really squirrely really fast. I know Microsoft was building one way back, and every time they put it online it turned racist, a horrible, racist, violent thing, within about 24 hours or less. With Bard, you only get so many prompts before things start getting squirrely, and with OpenAI integrated into Bing's chat, you can't do too many in a row or it starts going off the rails. So it's complicated.

Josh Buxbaum:

The other big challenge is that we have this thing called UGC, user generated content, which lets people create authentic content, like what we're doing right now: we're having an honest conversation where we're really sharing our opinions, and we're both human beings. That authenticity is jeopardized. In the old days, you'd look at a picture of a product because the advertiser showed a person in that cool t-shirt, whatever it may be. Now you look at the reviews, at real people who've purchased it, at their user generated content with them in the shirt: it fits too tight, it's darker than it was in the picture, whatever it may be. We trust user opinions; we trust user generated content. Now all of that is at risk, because if I can't tell what's user generated and what's AI generated, the whole benefit of user generated content, the authenticity piece, is at risk. So what I see as the biggest challenge is, like I said earlier, determining whether a person wrote this review or an AI wrote it, because if I can't trust the authenticity, it threatens UGC in general.

Chris Brandt:

Tell me a little bit about how WebPurify works. You have a lot of people on the back end, obviously, because this is so complicated that you really do need some sort of human moderation, human intervention at times. But you also have AI working here, and you've got businesses with different requirements. There's a lot that has to go into this. Tell me a little bit about how it all works.

Josh Buxbaum:

There are a lot of moving pieces, for sure. We consider what we do to be moderation as a service. Essentially, you can easily integrate with our platform and our tools through what's called an API: you relay content to us, it populates in our tool, we moderate it, we make a decision, and we send a response back to you. So we've created a way for live moderation teams to easily be looking at all of your content. And our tool has evolved a lot over the last 17 years to give our moderators what they need to moderate quickly but efficiently. Whether that's the AI score populating for them, so they know what the AI thinks a piece of content qualifies as, or mental health safeguards, like showing potentially upsetting content in grayscale so it's not as impactful. We use a combination of AI and humans to accomplish our goals.
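To make that request/response flow concrete, here is a minimal sketch in Python. The endpoint URL, parameter names, and response fields are hypothetical placeholders, not WebPurify's actual API; the real interface lives in their documentation.

```python
import requests

# Hypothetical moderation-as-a-service call: relay a piece of user-generated
# content, get back a decision. Endpoint and field names are illustrative only.
API_KEY = "your-api-key"
ENDPOINT = "https://api.example-moderation.com/v1/moderate"  # placeholder URL

def moderate_image(image_url: str) -> dict:
    """Submit an image URL for moderation and return the service's verdict."""
    resp = requests.post(
        ENDPOINT,
        json={"api_key": API_KEY, "image_url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    # e.g. {"decision": "rejected", "labels": ["nudity"], "ai_score": 0.98}
    return resp.json()

if __name__ == "__main__":
    verdict = moderate_image("https://example.com/uploads/photo123.jpg")
    if verdict["decision"] == "rejected":
        print("Hide this content:", verdict["labels"])
```

Note that the client site, not the moderation service, decides what to do with the verdict, which matches the "we're not the voice of the brand" point Josh makes below.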

Chris Brandt:

So you're obviously moderation as a service, providing an API that people can interact with from their site to send you content for moderation. Do you just tag things as needing to be removed? What are the workflow options? Can you do inline moderation, so content comes to you before it gets put onto the page, or does it get removed after the fact? How does that all work?

Josh Buxbaum:

Yeah, and that's really client dependent. What you're referring to is pre-moderation, ensuring the content gets checked before it even goes live. Some clients elect to do that; other clients let it populate and we look at it after. That flow is really up to the client.

Chris Brandt:

When it gets to you, the data gets tagged and then goes back, and then they have to build the systems to remove that content? Or do you have people who can actually go into those sites and remove the content themselves? How does that work?

Josh Buxbaum:

Typically, they'll take action based on the response we send them. Some clients do have us integrate with and use their tools, but typically we send a response and they act on it appropriately, mainly because we're not the voice of the brand. We simply let them know what we found, and then they decide how to inform the user: whether that user is banned or simply warned. But essentially, our approach has always been a combination of artificial intelligence and humans, because the AI can struggle with context. The AI will never replace the human piece, but it certainly helps us scale. A lot of times our AI services will review content first, and then, depending on the scores, it'll either be thrown out immediately or escalated to humans. How much of that goes to humans is really a client's tolerance for risk: relying solely on AI is super risky, but there may be a budget decision there. Typically, at scale, we're using both to accomplish our mission.
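That score-based routing can be pictured as a simple triage rule. A minimal sketch, with thresholds that are purely illustrative (real cutoffs would be tuned per client and per category):

```python
# Hypothetical AI-first triage: confident scores are auto-actioned,
# the ambiguous middle is escalated to a live moderator.
AUTO_REJECT = 0.95   # AI is nearly certain the content violates policy
AUTO_APPROVE = 0.05  # AI is nearly certain the content is clean

def triage(ai_score: float) -> str:
    """Route content based on the AI's estimated probability of a violation."""
    if ai_score >= AUTO_REJECT:
        return "reject"           # thrown out without human review
    if ai_score <= AUTO_APPROVE:
        return "approve"          # published without human review
    return "escalate_to_human"    # the gray zone goes to a person

for score in (0.99, 0.50, 0.01):
    print(score, "->", triage(score))
```

Widening or narrowing that middle band is effectively the risk and budget dial Josh describes: a client relying solely on AI has collapsed the band to zero.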

Chris Brandt:

Just out of curiosity, since you've got this kind of platform, could you even police brand guidelines? Like employees posting things, making sure they're posting appropriately to the brand's specifications?

Josh Buxbaum:

Yeah, and that's interesting. We're always finding new needs. We've always approached it as: hey, we're protecting brands from the users. But now, with this new gig economy, for example, there are a lot of folks working for shopping apps or driving apps or whatever it may be, and those folks, on their last day, may decide to say something to the last ride they had, or, while they're shopping in the supermarket, take an inappropriate picture and say, hey, is this something you wanted? So there's this world where we're not just worried about users submitting to sites; we're now protecting companies from their own employees, or from contractors, or whatever it may be. There are always new use cases surfacing. I wish I could say I was a genius when we began this and saw them all coming, but the internet just presents an endless amount of challenges. Like I said, the metaverse presents its own challenges, and then soon after, generative AI presents its own, and I'm always thinking: what's next? That's our specialty, really adapting quickly to these new technologies, and I think the first step in doing that is not pretending you know the answer. There's a lot of quick tech that comes out and says, hey, we can detect generative AI content, no problem. You can't. Good luck with that. You simply can't. So we're not ones to jump on a platform and say, we can do this. What we do say is: we've been doing this for 17-plus years, we can approach this with the tools we've learned, recognize what we can and can't do, and evolve the way we address it, which is how we've always tackled every problem since we began. And it's really a partnership with our clients to do that.

Chris Brandt:

It does seem like there's potential for a tremendous amount of customization. You mentioned that you've been at it for 17 years, and I know WebPurify back in the day was mostly filtering data coming into a company. But that business has changed drastically since then. Could you talk about some of the lessons you learned along the way, things that maybe weren't intuitive initially, but that turned out to be such a big part of this?

Josh Buxbaum:

We began strictly doing text moderation, which was really the most common type of user generated content then, and as I said, that evolved to new types of content. I think the thing we underestimated was how complicated each community's guidelines can get. It's fascinating how something you would think is as simple as "no nudity, violence, drugs, or hate" is really just not that simple. I call what we do with our clients over time "sharing a brain." We don't share our moderation teams across projects for this very reason: we have dedicated teams, 24/7, that work on one specific workflow and know it inside and out. They don't right out of the gate, there's a learning curve, but you have to know the community, and the work is highly detailed in terms of what you're moderating. We'll have new clients come to us and say, well, here are our rules, and it'll say something like "nothing dangerous." What's dangerous? That gives me a headache. Part of our job as consultants is to delve into their rules and help create something that's trainable and replicable, because if I were to train moderators and say "nothing dangerous," it means nothing. Is someone standing on a cliff dangerous? Is someone closing their eyes while driving? Me on a bicycle is probably dangerous, even wearing a helmet; I have a poor sense of balance, but you would never know that from the image. So we had to translate that into something like "imminent death" or "likely death." We literally need to interpret these guidelines, and they're constantly evolving.

I'll give you a perfect example of how quickly we need to react. During the Black Lives Matter movement, folks were leaving their screens blank; they were submitting strictly black screens as a show of support for the movement. Now, we happen to have a general rule for our clients: if there's a blank submission, we reject it as blank or broken. The reason is that images can populate later; we can't risk that the blank was just some technical issue and five minutes later something populates on the site. So we were rejecting these images, completely unintentionally. Obviously we weren't taking a position at all, but we had to change that rule very quickly when we got emails and complaints from clients saying, our users are complaining, they can't express themselves, they're being shut down. We have to be very nimble, because there's always some new challenge or nuance, so immediately that rule was out the window. And we're constantly adapting not only to things like that, but to entirely new threats. Sadly, there will be something that gets live streamed, and now it's going to be showing up on all the platforms we moderate.

So we have a team that literally responds. Essentially we use something called PagerDuty: they're paged, and we put together training immediately, so within five to ten minutes of the incident our moderators are ready to see it on the platforms they're monitoring. It's very dynamic and very active, what we do, and it is 24/7.
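PagerDuty's public Events API v2 gives a flavor of how a page like that can be raised programmatically. The endpoint below is PagerDuty's real one, but the routing key, summary text, and surrounding workflow are illustrative assumptions, not WebPurify's actual setup.

```python
import requests

def page_rapid_response_team(summary: str) -> None:
    """Trigger a PagerDuty incident so the on-call moderation team is alerted."""
    requests.post(
        "https://events.pagerduty.com/v2/enqueue",  # PagerDuty Events API v2
        json={
            "routing_key": "YOUR_INTEGRATION_KEY",  # placeholder
            "event_action": "trigger",
            "payload": {
                "summary": summary,
                "source": "moderation-incident-watch",  # illustrative name
                "severity": "critical",
            },
        },
        timeout=10,
    ).raise_for_status()

page_rapid_response_team("Live-streamed incident circulating; new training needed")
```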

Chris Brandt:

You mentioned that certain things start circulating, become a meme, and then spread around the internet, and the ability to identify those early would give you an advantage in cleaning the content up. Because I would imagine the sooner you deal with that content, the less it propagates and the less you have to deal with it.

Josh Buxbaum:

Right, exactly. You have to nip it in the bud, and yes, the more it spreads, the more evil it gets; what I mean by evil is the more it's out there, and you'll see it forever. Part of our responsibility in this industry is not to keep secrets. That's what's unique about the moderation space: we all work together. It isn't, hey, we figured out there's this really bad meme going around, so we're going to keep that to ourselves and just tell our clients. There's a community, and we all work for the greater good, so we're the first to blast out to our clients particularly, but also to other folks in the space: hey, there's this thing going around, we think it's pretty nefarious, and we wanted to make you aware of it. We also alert our clients if something surfaces on a particular platform: hey, we want to adapt, will you be okay with us rejecting this type of content? Because it will be showing up in the next five hours, or three hours, or whatever it may be.

Chris Brandt:

Well, I think that speaks to why this is so hard to do yourself. A company with 17 years of experience doing this is going to have a lot of insight into all the nuances of moderating content. Just from the things we've talked about, it seems like it would be really easy to miss stuff. I remember Facebook had a big problem because one person's nudity is another person's breastfeeding, and that gets into a whole lot of difficult gray areas that are really hard to address.

Josh Buxbaum:

That's the constant challenge of what we do. We even developed an experience where you can put yourself in the seat of a moderator: you sit down and you have to identify the presence of weapons, dangerous driving, and two or three other categories, and you moderate, and you fail miserably. Then you're given the real details, such as reckless driving means only one hand on the wheel, or eyes not on the road, and it only applies to vehicles, and all of a sudden you succeed, because you've delved into the minutiae of the criteria. That consultative piece is a very significant portion of what we do. It's not just "you tell us what to do and we'll keep an eye on it." We're constantly having check-in calls with our clients, identifying new problems, and talking about trends in the community. They share the pushback they're getting from their community, we recommend adjustments, and so we're always building this knowledge library that we can share with our clients, and we're always getting smarter. Recently we brought onto our team the former head of trust and safety operations at Twitter, Alexandra Popken, to bring in her perspective and knowledge. We're amassing this knowledge over 17 years and bringing in different perspectives so we can add that value, because it's one thing to be able to moderate; it's another thing to understand this industry, understand communities, understand the challenges, and really drill down into these guidelines in an effective way.

Chris Brandt:

One of the things we've heard a lot about is how difficult a job it is to be a live moderator. There's no getting around it: you're going to be exposed to some really difficult things. Could you talk about how you manage the mental health side of this, and how you take care of your employees so they can go home at the end of the day and have a normal life?

Josh Buxbaum:

Yeah, and it's a very valid question. I think, unfortunately, the moderation industry has been shown in some pretty negative lights, and that may just be because that's the more interesting story. I certainly don't mean to downplay how impactful the work can be, because I'm sure there are some really rough stories out there, and I myself am aware of how difficult moderating can be. But you very rarely hear about, like I was discussing earlier, the wins, and the pride that moderators take in their work. So we approach this a few different ways. We have a very advanced mental health program for all moderators, whether that's 24-hour counseling or mindfulness sessions, and we really build a family environment. The office environment is really important: we've got our pool table and our foosball table and our library and our chill-out area. Very dot-com, I know, except some of those dot-coms don't necessarily need all of those spaces, and we're a company that actually does. We need people to be able to relax. So we utilize all of that, really encourage breaks, and actually have mandatory breaks, because you have to take care of yourself. Our priority is to take care of our moderators first so we can efficiently take care of the communities we protect. Because we're 24/7, we are a family and we support each other; we celebrate birthdays and holidays in the office together, so there's a real sense of team building, and we've got sports teams that compete with each other, the video moderators versus the image moderators. So there's a significant amount of support, both mental-health-wise and just from the person sitting next to you. And because what we do is so dark sometimes, we encourage a sense of humor, and we encourage them to talk about what they've seen. But quite honestly, the moderators overall recognize this, and I think it was more difficult earlier on because moderation wasn't seen as a career; it used to be seen as a temporary job. You do your thing, you go home. Now there's a true career path. Our moderators aren't there just to sit and moderate. They recognize that they're going to learn community guidelines and processes, and most of our trainers, most of our quality control team, and most of our managers are moderators who graduated. They could leave us and go be the head of trust and safety at any organization after working with us for five or ten years. So it's a little different, because they recognize the impact they're having, and like I said, by celebrating the wins, they're empowered. It's not nearly as upsetting as one might think, but we certainly don't underestimate how impactful it is.

Chris Brandt:

I don't think you can overstate the impact those wins have on somebody, because it's very different from looking at all the worst things the internet can throw at you and feeling like there's nothing you can do against the barrage of badness, versus: hey, I'm actually saving people's lives, I'm really helping people, I'm protecting children. There's a lot of really amazing stuff in there that you can hang your hat on.

Josh Buxbaum:

Yeah. One angle we take, for example: we do a significant amount of moderation in the metaverse, and unfortunately, women in gaming tend to be targets, so our moderators in VR are primarily women. They're out there knowing they're going to be abused in some way, and the training is very interesting. They're actually quite happy when someone acts toward them a certain way, number one because they're trained for it, and number two because they can do something about it. It's almost like being an undercover police officer: ah, gotcha, you just messed with the wrong person. They're armed with that power because essentially they're representing the game, so they're not victims. They feel like, wow, I'm glad this happened to me. Guess what, buddy, you're about to get kicked off and reported, and all these actions will be taken against you. So you spin the narrative and the perspective, and if anything, when they go home on a day like that, they're glad they encountered those bad folks, versus a regular day in game where they didn't have the impact they were hoping to have.

Chris Brandt:

That's awesome too, because I'll tell you, man, gaming culture can be extraordinarily toxic, especially for women. And I've got to imagine some people who have been subjected to that would love to have a way to right that wrong.

Josh Buxbaum:

Absolutely, and that's exactly it: we're going into the game having that power, so it feels really good. And of course, within the metaverse you have that heightened sense of reality. As you said, games are traditionally toxic; now add this extra element of it feeling real and interactive, and you have new challenges there.

Chris Brandt:

Within a week, Meta had assault allegations in their game.

Josh Buxbaum:

It's a challenging environment, but it's also the most incredible technology I've ever experienced. It's such a leap. It's like everything else: it's amazing, it's brilliant, it's fascinating, it's engaging, but it comes with moderation challenges. We love the metaverse; we love being in it and keeping it safe, but we recognize how important it is to police these users, and so we're doing that. And I happen to be a bit of an addict myself. I've got my Oculus headset, and that's how I blow off steam. I love the game.

Chris Brandt:

My son is big into VR gaming. He's a Valve Index guy; he's got all the stuff on the walls, and he's slowly collecting every headset, I think. And this new Apple VR headset is melting his brain at the moment.

Josh Buxbaum:

Yeah. Well, that's going to be an expensive one, Chris. You may want to keep him away from that for a while.

Chris Brandt:

3,500 bucks. Yeah, I'm like, you're going to have to start saving up, my friend. So, what's next for WebPurify? Where do you go from here?

Josh Buxbaum:

We're constantly adapting to the new challenges. As we said earlier, gen AI is a place we're putting a lot of focus. And on the mental health piece, we're always evolving our tool to help keep our moderators safe. One thing we've been developing: when videos come in that are potentially offensive or upsetting, there's really no need for a moderator to have to watch that video with the audio. What you can do is storyboard the video, which essentially takes a screen grab every two seconds or so, and then the moderator can scan it quickly and see what's going on without the full experience of watching the video. We've talked with trauma therapists who have drawn a very clear line in the sand between looking at images and watching a video, and the impact is vastly different. It's significantly more impactful when you watch a video, because it seems real; with images, you can actually convince yourself that it's not real, that it's a movie, or screenshots from whatever it may be. So that storyboarding is a really important feature. It's in our tool, but we're evolving it a bit more, and again, working with the AI to determine which frames should show, and so on.
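Mechanically, storyboarding comes down to sampling one frame every couple of seconds. Here is a minimal sketch with OpenCV; the two-second interval comes from Josh's description, and the file paths and output naming are illustrative.

```python
import cv2  # pip install opencv-python

def storyboard(video_path: str, interval_sec: float = 2.0) -> list:
    """Grab one frame every interval_sec seconds so a moderator can scan
    a grid of stills instead of watching (and hearing) the full video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
    step = max(1, int(fps * interval_sec))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# Save the stills for review (file names are illustrative)
for i, frame in enumerate(storyboard("upload.mp4")):
    cv2.imwrite(f"storyboard_{i:03d}.jpg", frame)
```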

Chris Brandt:

I know you also mentioned that you show things in black and white, which also lessens the trauma, which I think is interesting as well.

Josh Buxbaum:

Absolutely. So there are blurring tools, and black and white definitely helps. Having sat in a moderator's seat and looked at both versions of images while we were testing it, it really takes the sting out of certain types of content. When you take the color out, it just seems a lot less real.
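Those safeguards are cheap to apply at display time. A minimal sketch with Pillow, purely illustrative of the idea:

```python
from PIL import Image, ImageFilter  # pip install Pillow

def soften_for_review(path: str) -> Image.Image:
    """Render a flagged image in grayscale, with a mild blur, so it reads
    as less real and less visceral to the reviewing moderator."""
    img = Image.open(path)
    gray = img.convert("L")  # drop the color information
    return gray.filter(ImageFilter.GaussianBlur(radius=2))

soften_for_review("flagged_upload.jpg").save("review_copy.jpg")
```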

Chris Brandt:

That sounds amazing. So if somebody wanted to try out WebPurify, or wanted to become a moderator, where should they go?

Josh Buxbaum:

They can certainly email us, and they can go to our website. If you're interested in using our services, you can sign up for a free trial and easily integrate. One of the benefits, Chris, of what we offer is turnkey services: we have a shared team that moderates for not-safe-for-work content, nudity, violence, drugs, and hate, 24 hours a day. It's just not specific to your brand; they're simply reviewing against that criteria alone. That's one tool for lower-budget clients, because we think it's important for everybody to have access to moderation. So you can go to our website and sign up for a free trial, and you can email support@webpurify.com with any questions, or sales@webpurify.com. We're here, and we're always looking for new challenges and ways to help keep communities safe, because it's certainly rough out there.

Chris Brandt:

Yes, it is. Well, hey, thanks for doing what you do. I value content moderation. I'll say, even with this YouTube channel, when we have female guests on, we get horrible comments right out of the gate, and it's mind-blowing to me that people spend their time on that kind of nonsense. So I appreciate your moderation efforts.

Josh Buxbaum:

Thanks. I shouldn't say it's so bad, but we did a survey of the content: we pulled data from all of our moderation stats last year, across all different platforms, e-commerce, dating, e-learning sites, and so on. And we concluded from that data that 1 in 27 user-submitted images was not safe for work, across all those platforms. It's a crazy stat, and if that doesn't drive home the need for moderation, I don't know what does. So yeah, we're necessary. But we're always trying to figure out a way to keep that balance, with users happy and platforms safe.

Chris Brandt:

And that 1-in-27 number is really scary considering we're uploading something like a billion photos a minute at this point.

Josh Buxbaum:

Yeah, it's a scary stat. It's crazy.

Chris Brandt:

Again, thanks so much for keeping us safe out there. We appreciate the work you do, and thanks so much for coming on. It was really great to hear your story and talk with you.

Josh Buxbaum:

Chris, thanks for having me. Really appreciate it.

Chris Brandt:

Thanks for watching. I'd love to hear from you in the comments. Give us a like, think about subscribing, and I will see you in the next one.