All right, hello, everybody. We're going to talk about AI ethics today. Very interesting topic. AI is really hot right now, right? We've got all these talks about AI doing different things, taking our jobs, and I really wanted to talk a little bit about ethics. So before we get into it: what brought everybody here? What are you hoping to get out of this session? I just want a warm seat, a warm room. Anybody have any expectations? I want to make sure I meet them. Anybody excited? What do you got?
>>:
Food for thought is what I'm looking for: stuff to think about. Not necessarily answers, but things to think about.

>>:

Okay, I've got a lot of stuff for you to think about.
>>:
Yeah, similar to that. Just trying to get an idea of the overall landscape of things that need to be considered when implementing this.

>>:

Okay, awesome. Very cool. Anybody else?

>>:

I'd say just thoughts on how things are going to look in the future.

>>:

Okay. Changes.
>>:
Quite honestly, [inaudible].

>>:

Okay, very cool. Yeah, we'll address some of that as well. All right. So to hop into this, I think it's important to understand what ethics are. Ethics examines the rational justification of moral judgments; it studies what is morally right or wrong, just or unjust. In a broader sense, ethics reflects on human beings and their interaction with nature and other humans, on freedom, on responsibility, and on justice. Why are ethics important? Ethics guides us to tell the truth, keep our promises, help someone in need. There is a framework of ethics underlying our lives on a daily basis, helping us make decisions that create positive impacts and steering us away from unjust outcomes. As kids, in elementary school, we're raised to learn right and wrong, hopefully. We know fighting is wrong. Stealing is wrong. We learn those things. But ethics is a highly sophisticated, complex subject that goes back through time to Plato and Socrates and all these great thinkers.
Thinking about when I was first exposed to ethics, it was around fifth or sixth grade. My mom was a huge Asimov fan. Any Asimov fans out there who've read I, Robot or Foundation? Okay, I've got a lot of folks here, so I'm sure a lot of you are familiar with the Laws of Robotics from the I, Robot series. This was my first "oh, wow" foray into ethics: wow, this is more complicated than right and wrong. You have a machine; what's going on here? The thing about these rules is that they're a plot device. They were written in 1942 to help Asimov tell these stories, and they get broken all the time if you read the series. He later added another rule down the road, the Zeroth Law. He needed a plot device to tell the stories. But what it does, I think, especially for young people, is start them thinking about machines and ethics and what goes into that. Judgment Day: AI is going to take over the world and humans are gone, right?
That's what we're hearing. We're hearing about this thing called artificial general intelligence, AGI, and there's a lot of money, resources, and time being spent on what we're doing to prevent robots from taking over the world. AGI does not exist right now, and this digital consciousness aiming to suppress humanity is just hype, because AI is hot and we live in a society of gloom and doom, and we want to have these conversations. So before we dive in, I want to talk a little bit about why I started to talk about ethics in AI. Why was it important to me? Why is this a subject I wanted to talk about? As web developers, if you've been a front-end developer for any time, this is a format I'm sure you've built: an icon, a title, and some text underneath. This one is for a legal help website that deals with family law, housing law, vehicles. It's just a navigation. A couple of years ago, we were starting to figure out how AI fit into our work processes, and for these icons, a lot of the time we use the Noun Project to source them.
The Noun Project has the copyright and licensing handled, but sometimes it's just a pain to search for what you want. So we started looking at using AI to generate some icons. We put in "divorce," and this is the icon that came back. And I am so embarrassed to say this: this icon got into the mockup.
>>:
We did our internal review, but the client had access to Figma and could see what we were doing right away, before we even had a chance to pull it out. This is a legal help organization that's out there to help people, and we had "divorce" represented as a man being pulled away from a child, with beer bottles. It was one of the most embarrassing conversations I've ever had in my life. So, back to AGI and AI ruling the world. We have this alignment problem, and the alignment problem is making sure that AI aligns with human values. It's the idea that an AI system's goals may not align with humans', a problem that would be heightened if superintelligent AIs are developed. We'll come back to this later, because I think there are some bigger fish to fry, some other conversations to have first. So I want to talk a little bit about the types of AI we'll be discussing and some of the ethics around them.
AI is this promiscuous term that's used for all kinds of different things nowadays. We have algorithms: algorithms help judge whether you get a job, what your credit score is. We have security systems that are scanning faces and scanning voices. And then we have text and image generators like DALL-E and ChatGPT. Before we even get to the AGI stuff, we need to talk about the ethics of these. Another thing as we go through: I used Midjourney for the images in this deck, and you'll see the prompts down at the bottom. These are the prompts I used to generate them. Partway through building the slide deck I figured out this red robot, and I love this cute robot. I'm a robot guy. I love robots, I love toys, I love everything. So you'll see this red robot theme throughout, and "cute red robot" is the prompt that gets him. So: we've seen this in the news.
A great example is why Amazon's automated hiring tool discriminated against women. A few years ago, Amazon had an internal tool they were using to evaluate resumes, scoring candidates from 1 to 5. It was quickly pulled, and it was pulled because the training data had more men in it: men were getting the jobs, so the model concluded men must be a better fit for the job. That was determined very quickly, and it was pulled after that. So the first thing about ethics here is transparency. I think it's important to talk about transparency: if you're using AI, I feel it's important to let people know. There's all kinds of news about folks selling AI-generated art on Etsy. Personally, I would be okay if I bought art on Etsy and it was AI generated; some people are very upset about it. But being transparent and telling people is the important part. We have laws in the EU that address transparency, but those laws haven't made it over to the US yet.
With transparency, things get pretty complicated. I think that if AI is making a decision about your life, you should have the right to an explanation. That's called a counterfactual explanation. Take the Amazon situation, or any company using AI to process job applications: if you didn't get the job, why didn't you get the job? I think companies need to be transparent about that, which gets really tough, because these huge black-box models are being created that the developers themselves often don't understand. And if I submit my job application and get rejected, what transparency do I really want? Do I want to see the code? Do I want to get in there and read it? Or do I just want to know that if I had spelled the name of the company correctly on the application, I would have gotten the job? Some of those instances are tough.
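To make counterfactual explanations concrete, here is a minimal sketch: a toy stand-in for a black-box screening model, plus a search for the smallest change to a rejected application that flips the decision. The model, feature names, and weights are all invented for illustration; real screening systems are far more complex.

```python
# Minimal counterfactual-explanation sketch. The "model" and all
# features here are hypothetical, purely to illustrate the idea.
from itertools import combinations

def screen(applicant: dict) -> bool:
    """Stand-in for a black-box hiring model: True means 'advance'."""
    score = (2 * applicant["years_experience"]
             + 5 * applicant["company_name_spelled_correctly"]
             + 3 * applicant["has_referral"])
    return score >= 10

def counterfactual(applicant: dict) -> dict | None:
    """Find the smallest set of binary-feature flips that reverses a rejection."""
    toggles = ["company_name_spelled_correctly", "has_referral"]
    for k in range(1, len(toggles) + 1):
        for changed in combinations(toggles, k):
            candidate = dict(applicant)
            for feature in changed:
                candidate[feature] = 1 - candidate[feature]
            if screen(candidate):
                return {f: candidate[f] for f in changed}
    return None  # no small change flips the decision

rejected = {"years_experience": 3,
            "company_name_spelled_correctly": 0,
            "has_referral": 0}
print(screen(rejected))          # False: rejected
print(counterfactual(rejected))  # {'company_name_spelled_correctly': 1}
```

The answer it finds, "fix the company-name spelling," is exactly the kind of explanation a rejected applicant could actually use, without ever seeing the code.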
And one thing we see happening is surrogate models being created to explain the model that was created: you have this big black-box model the developers can't explain, so they create another model to try to explain it. It gets super complicated, and often it goes badly. And do big companies even have the time to explain why you were rejected? Think about Google: there's no phone number to call Google, and good luck getting a response to an email with the type of support they offer. So, here's a fun one. I generated this yesterday, and I'm looking around the room right now. Here's the prompt: "two developers at Drupal camp looking at a computer." What do we notice about this? Beards. We've got a couple of beards in the room. Glasses? Glasses, yeah. Flannel. Does anybody have flannel? I see one flannel.
>>:
I'm being attacked.
>>:
Yeah. Oh, shit. These are two white dudes, right? I mean, this is spot-on bias right here. This came from Midjourney, so I hopped over to ChatGPT and DALL-E, and it generated a better image. But then I tried it this morning, and it was back to being 100% biased. Flannel, and they both look the same, right? I think the flannel and the beards come from the prompt saying "Drupal"; that's where I think it comes from, I don't know. So this is a topic: fair washing. It's a concept first described in a 2019 paper by a group of scientists. Fair washing is the idea that a company that designed an AI keeps the brain of the AI hidden, claims it's fair, but often knows it is in fact heavily biased to produce specific results. We see this in technology, and fair washing isn't just a technology problem either. Airport scanners: I'm sure folks who have flown recently have gone through the full-body scanner.
You go through that full-body scanner, you put your hands up, and it scans, looking for lumps, maybe something hidden on your body. And before you go into that scanner, the worker at the machine pushes one of two buttons: a blue button if you're a male or a pink button if you're a woman. This has a really hard effect on trans folks. A trans woman who has breasts and a penis could set off the detector; it could out her as trans; it could mean she gets groped at the end. Sorry, I get a little emotional with stuff like this. It's just a really hard process, and it's really not fair to folks. The other thing this brings up is that ordinary people can't challenge the system. Are you going to challenge it in a situation like that?
No. You just want to get on your flight and go. And if you're not a programmer and don't know AI, how are you going to challenge this? Ordinary people can't challenge these systems. Which also brings up the fact that technology has a way of seeing: technology sees what it's told to see. This connects to a concept called digital epidermalization, from the researcher Simone Browne, who describes how information is projected onto the body. Where this comes in is with facial recognition and facial scanning being optimized for white people. It works better on them; there are lots of studies on this. It's the machine telling you that you're strange. In the situation with the full-body scanner, the machine was created and programmed by non-trans people, so it's effectively saying the trans person is strange. People assume the machine is neutral, but the machines are not neutral. Epidermalization comes from Frantz Fanon, writing in the 1940s.
He was a Black man in France, and he found that people were projecting onto him, because of the color of his skin, the idea that he was dangerous. They were projecting meaning onto his skin. Part of being a racial minority is that a white person gets to tell you what your deal is; so by the color of his skin, he was marked as dangerous. And now we're seeing this on the digital side, with machines. So what does this have to do with AI? Facial recognition: we talked about the optimization for white people. And gender detection is interesting, because you can't always tell someone's gender from their face, and we're starting to see that in the systems being developed. Gender recognition may work well enough most of the time, but it encodes a particular way of seeing. And the same mistake made against a group of people over and over again is systemic racism.
Systemic racism involves the procedures and routines of organizational culture; it's also known as institutional discrimination. All right, so moving into some other stuff: what can we do to fix this? We can build better, right? We can build better AI; we can build things to be fairer. For the bias we saw in the two-developers image, we can train with more data, we can use bigger and more diverse data sets, and then we can police the results better (there's a small sketch of that first step after this passage). But another researcher, Os Keyes, states that gender recognition AI, no matter how inclusive we try to make it, will always be premised on the idea that someone can look at you and tell you what your deal is, and that is always going to be incorrect. So this is a huge issue right now, and we see it in the news a lot: data flattening and training data. Where's the data coming from? Where are we getting the data?
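On the "more diverse data" point: one small, concrete step is simply auditing the balance of a training set before using it. It's only a partial step, and it doesn't answer Keyes's deeper objection, but it makes the skew visible. A toy sketch, with invented records and attribute names:

```python
# Toy audit: how balanced is the training data we are about to use?
# The records and the attribute name are invented for illustration.
from collections import Counter

training_set = [
    {"image": "dev_001.png", "perceived_gender": "man"},
    {"image": "dev_002.png", "perceived_gender": "man"},
    {"image": "dev_003.png", "perceived_gender": "woman"},
    {"image": "dev_004.png", "perceived_gender": "man"},
]

counts = Counter(r["perceived_gender"] for r in training_set)
for group, n in counts.items():
    print(f"{group}: {n / len(training_set):.0%}")
# man: 75%, woman: 25%. A model trained on this will reproduce the
# skew, which is exactly what the two-developers image showed.
```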
The prompt for this image is "chain." I thought about what I was going to do for an image here, and this is one of my favorites: you've got this red robot doing an art heist, trying to steal this art. What is an image that relates to stealing? That's where the prompt came from, worked out in ChatGPT and then over into Midjourney.
>>:
So, with data and training: people feel that it's just data, that your data is out there. No consent is given, and it's hard to find proof that a model has been trained on your data. This is a tool called Have I Been Trained. It's a website from an organization called Spawning, and it covers images: you can check which images of yours are included. It searches a very large data set called LAION-5B, which Stable Diffusion and some other models are trained on. I put my name in and it pulled up a couple of images, some things that are super old. The nice thing about this is that it gives you the ability to opt out, and it's becoming a registry of "I don't want my work harvested as training data." Large players like Stability AI, who make Stable Diffusion, and Hugging Face, the repository for all these models, are starting to honor this opt-out data, which is super nice and super cool.
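If you'd rather check yourself than rely on a search tool, the LAION metadata is published as parquet files of URL and caption pairs that you can scan for your own domain or name. A rough sketch; the file name and the column names below are assumptions, so verify them against the actual release you download:

```python
# Sketch: scan one LAION-style metadata shard for your own images.
# Assumes a downloaded parquet shard with "URL" and "TEXT" columns
# (names vary by release; check the file you actually have).
import pandas as pd

shard = pd.read_parquet("laion_metadata_shard_00000.parquet",
                        columns=["URL", "TEXT"])

# Match either images hosted on your site or captions with your name.
mine = shard[
    shard["URL"].str.contains("example.com", regex=False, na=False)
    | shard["TEXT"].str.lower().str.contains("your name", regex=False, na=False)
]

print(f"{len(mine)} of {len(shard)} rows match")
print(mine.head())
```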
So, the workforce: Amazon Mechanical Turk. Mechanical Turk has been around for years; I remember using it in about 2005 for a project we worked on. Mechanical Turk sends micro-tasks off to a real person to complete and return. The idea back when it was created was: this task is too complex for AI to do, so we'll have a human do it, but we'll put an API in front of it so it feels like an automated process (there's a minimal sketch of what that looks like just after this passage). The folks doing this work are earning tiny micro-payments. This is bringing a new concept to the workforce called sub-employment. The jobs are distributed across the world, so it's difficult for the workers to unionize. The jobs aren't being taken away, but they are being de-skilled. There's more surveillance and less worker power. We see the global north gutting welfare and prison systems and starting to use those folks to train AI.
And in the global south, we see a lot of folks in slums and refugee camps doing training for AI. You're more free when you work for humans; in a lot of these situations you're working for an AI, your productivity is tracked, and you're getting automated messages back. It was interesting, as I was thinking about this last night: this is changing the workforce, but I had a nice conversation with a woman who trains AI as a side hustle. She does it while she's watching television, and she gets paid a decent amount. That actually changed how I was thinking about this a little, because yes, I feel this is a big problem, but I think there are some organizations doing it correctly and making the training work worthwhile for the people actually doing it.
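Here is the promised sketch of the "API in front of humans" idea, using boto3's Mechanical Turk client against the requester sandbox. The task HTML, reward, and timings are illustrative only; real use needs an AWS account and, as the talk argues, real thought about fair pay.

```python
# Sketch: human labor behind an API via Amazon Mechanical Turk (boto3).
# Posting to the sandbox endpoint; amounts and HTML are illustrative.
import boto3

mturk = boto3.client(
    "mturk",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><head>
      <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
    </head><body>
      <crowd-form>
        <p>Is there a dog in this image?</p>
        <crowd-input name="answer" required></crowd-input>
      </crowd-form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Image labeling",
    Description="Answer one question about an image",
    Reward="0.05",                    # the micro-payments the talk is worried about
    MaxAssignments=3,                 # three different workers answer
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

From the caller's side this is indistinguishable from any other asynchronous web service, which is exactly how the human on the other end disappears from view.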
This is another thing to start thinking about with ethics: the distribution of wealth, an estimated $500 billion in new household wealth by 2045. But where is that wealth actually going? There are some very interesting concepts around the distribution of wealth. One that I've started learning about, and don't know enough to really speak on, is Georgism. Sam Altman of OpenAI is a Georgist, and it has all these complicated ideas about how land is taxed, and corporations, and it's just wild, something to research a little more. But the distribution of wealth is changing, and Sam Altman has some very interesting projects, like paying people, in crypto, for scanning their irises and biometric data. Also, that company of his was raided by police in its first week of operation, so it's very interesting what's going on there. So what can we do to fix some of these things? Regulation: privacy laws, antitrust laws, and copyright. Then there's data-owning democracy, which is interesting: you get to decide what you want to do with your own data.
So with all the data you have on your phone, maybe you want to donate it to a company; maybe you want to sell it eventually. These are some of the regulations and proposals coming out. And then digital socialism: data owned by society. Both data-owning democracy and digital socialism recommend distributing property to increase citizens' economic power, because AI changes our concept of private property. This is interesting because it's where we start getting ideas like universal basic income, things that, honestly, I just don't think our society is ready for, which is why we're getting a lot of pushback on some of this. To sum it up: if we truly believe humans are equal, we need to do better. And here's a big AI problem we might not necessarily think about. Nvidia: we've seen their stock go through the roof, right? Those cards, the chip shortage during the pandemic.
All of this takes computing power; these models take a lot of compute to train. We have non-renewable minerals being harvested: the nickel and other materials in your phone's battery coming out of the ground, the working conditions of the miners, the danger inside those mines. But something I hadn't thought about is that each time you query an AI model, it comes with a cost to our planet. Training an AI model can take as much energy as 30 homes use over a year, or about 25 tons of CO2, roughly 25 times the emissions of driving your car around the planet. And huge models like the one behind ChatGPT can take 20 times more than that, which is just enormous.
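Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The constants below (household energy use, grid carbon intensity) are rough assumptions, not measurements, and the spread they produce is the real point: the footprint of the same training run swings by an order of magnitude depending on where it happens.

```python
# Back-of-the-envelope check: CO2 from "as much energy as 30 homes
# use in a year". All constants are rough assumptions.
HOME_KWH_PER_YEAR = 10_000              # order-of-magnitude US household
training_kwh = 30 * HOME_KWH_PER_YEAR   # the talk's energy figure

# Grid carbon intensity varies enormously by region (kg CO2 per kWh).
for grid, intensity in {"hydro-heavy": 0.05,
                        "US average": 0.4,
                        "coal-heavy": 0.8}.items():
    tons = training_kwh * intensity / 1000
    print(f"{grid:>12}: ~{tons:.0f} t CO2")
# ~15 t on a clean grid vs ~240 t on a dirty one, bracketing the
# talk's 25-ton figure: where you train matters as much as how long.
```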
So, some conclusions. Can we make AI better? I guess the question is: who is "we"? Are we equipped as a society to make ethical changes, knowing that these are not just technological changes? And if we get back to the alignment problem: we know that an AGI would take resources from humans and from our planet, and we are the ones who would train it. The power of a hostile AGI would still depend on its relationships to our existing political and social systems. When we talk about ethical AI, instead of talking about Skynet, we should think about working conditions, climate change, and how to make the economy serve humans rather than the other way around. I wanted to end purposely short and have a conversation around this. We had a little conversation at the beginning, but I think it's important, as developers and content folks embarking on this huge AI journey, to talk about it together. What struck you? What did you learn? And what do we need to do as developers to be better about the ethics we're working with? Any thoughts?
>>:
So there's a consideration around AI that I'm not sure we've gotten into so far, which is that there are companies enticed by the idea of AI as an alternative to human labor, right? And so that potentially moves us toward a society with an even higher concentration of wealth. It speaks a little to that socialism you covered. I think there are a lot of ethical implications in some of those ideas, in terms of how they get applied, who owns those tools, and so on.

>>:

Yeah, definitely. I agree with that 100%. I try to relate this whole AI movement to other movements we've seen in computing, and the closest one I can come up with is desktops versus laptops. There was a huge shift when everybody went from this brick they were tied to, to a machine and a cell phone that go home with you, and now you're bringing that work home with you constantly.
You're working constantly; you have something in your pocket constantly reminding you about messages and whatnot. And I think that changed society at large, with employers expecting employees to be on all the time. So with this concept of replacing humans with AI, I think we as technologists need to keep promoting that this is another tool in our belt, a tool to help us be more productive and better. I know Jack left; Jack's a creative guy, and just knowing how our creatives use AI, it's to help them do their job better and faster. I mean, can you imagine some of these images being used on a website as-is? This Monopoly guy is pretty cool, but they're not refined enough. They don't solve the problem. And when we talk about graphics and illustrations, it goes very deep, with a design rationale.
What is this solving? This solves exactly what I told it to: put a Monopoly guy here. But it's not deep, and it needs to be deep.

>>:

I mean, there's an example I heard about a couple of years ago where somebody in Europe had trained an AI model to do graphic design, and it was actually turning out things that customers were buying and paying for.

>>:

Yeah. So these specific examples? Probably not. But that's not to say; I mean, it's already happening, so it's probably going to happen more if people think they can make more money using it, right?

>>:

I want to say that I see the issue isn't really the technology. The problem is our values as people: we value money over people. And until we start valuing people, technology is going to be a monster, because people value wealth over other people. Until we change our values, start valuing people, and know that we have the resources to take care of everybody,
and no one has to suffer. Even those at the top can still be at the top; they just don't need 90% of all of our resources. So basically, the issue is that technology is not going to save us. We have to save ourselves.

>>:

Yeah, I agree with that 100%. Hand back here, I think. Yeah, over here. All right, what do you got?

>>:

I guess kind of piggybacking on that: I study a bit of AI, and pretty much all of it comes down to the information we give it; the better the information in, the better the information out. So one concern I've had, kind of as you were saying: I do see there's going to be a change, in the sense of more people going into training AI as a side hustle, like you described earlier, giving it information. But how much of that information is necessary? And in my mind, at what point is it going to plateau?
Because AI is supposed to continuously learn. So at a certain point, how much more information can we give it? And when we get to that point, what does that look like? What would we need to provide it then?

>>:

Yeah. And I think it comes back to societal values, right? The values of our society change over time. Personally, I think we were doing pretty well for a while, and now it's different, very different than it was ten years ago. I think AI is going to start reflecting that; it's going to go up and it's going to go down. And as human beings, we want to push this and make it as strong as it possibly can be: more data, better and more diverse data. Yeah. Well, that was...
>>:
My thought is: more data, but how do we get more data without it being as biased? You take the legal body of data in the US, and it's going to be skewed quite a bit. So how do we account for that, and how do we train it not to be skewed?

>>:

Right. Which brings up the question, too: if you Google "CEO" and look at the images that come back, you're going to get all white male CEOs. So should we be training AI to show what it really is, or what it should be? Should it reflect the actual percentage of women and the diversity that's out there, or something else? Personally, I think it should be very diverse, and all of the CEOs should be hung for their crimes equally. Who was going to say something?

>>:

The one thing we can do about the biases of AI is to have a more diverse group working on it, because you only get out what we put into it, and if the only ones putting anything into it are white males, that's what it's going to be skewed to.
Sure, we have to get more people involved, more different people, so they can spot those things. I don't know what the ethical implications really are with that, but is that something that has a future?

>>:

I think it does. I mean, on Hugging Face we see all of these models that people are training up themselves. Kurt, who gave a talk about documentation before me, talked about creating a model from the Gravity Works Slack and Notion, pulling all that information in and training on it. The model that's created would have our company values; it would be us. And think about it for proposal writing: you use that model when you're going to write a proposal, and it has your tone, it has your feel, it has all of that. But it comes back to being a society, and to values: my values at my company are not your values at your company, and you're still going to have this mix of values.
But I do think that's a step in the right direction, because at a very large scale it's tough to get the representation that you want. Cool, thanks.

>>:

Awesome. I guess we're not there yet, but what happens when Russia and China catch up? Their AI models are going to look very different from ours, right?

>>:

Sure, yeah. And I think that gets into a very deep ethics and culture conversation, where it's not just technologists who need to have these conversations; it's people in the humanities. You go to China and it's just a very different culture, and I would expect to see that reflected in the models they use.

>>:

I just wanted to make a better comparison with what I said before, or a better judgment. I talked about values, that we should value people more, but I think it's more about priorities. We have our values,
but we also have our priorities, and our priorities are what we actively move toward. Our priority is creating wealth for ourselves. Even though our values may be that we believe we're all equal and we're supposed to care for each other, our priorities are not in line with our values.

>>:

Yeah, we need to bring those into alignment, right? I was at DrupalCon in France this year, sitting in a train station, and there was a homeless gentleman. A really nicely dressed woman walked right up to him and said, "Hey, come on, let's go over here," walked over, bought him a meal, he sat down to eat, and she just went on with her day. In and out. And I thought to myself: I don't do that. When somebody's asking for money or something like that, I just walk right by. But yesterday I was walking past the front of a Walmart, and there was a woman with two children sitting there.
And if I give her $20, is she going to go buy drugs? No, there's something else going on there, and I don't need to know her full story. I don't need to know everything that's happening. Do I have an extra $20 in cash that I probably don't need right now? Yes. So she can use it. And that comes down to priority, for sure.

>>:

Yeah, related to that: an acquaintance of mine told me, a lot of people will say, of unhoused people who are asking for money, that if some of them are charlatans, that's a reason not to give. But what we really have to do is flip that thinking: if 1 in 10 of those people is someone who actually needs that help, why would you withhold it? You can either punish the charlatans or punish the people who actually need the help. Which choice do you want to make?
Yeah, very good point.

>>:

I'm curious if, in your research for this, you've come across any watchdog bodies or community groups that are tracking ethics and providing recommendations. You mentioned Spawning, and I checked out their website; they've got a lot of interesting stuff going on. Obviously it always takes a while with tech, and the regulation comes later than it probably should. But in addition to Spawning, are there any other watchdog groups or community groups tracking some of these ethical questions and providing recommendations?

>>:

Yeah. As far as formalized groups, Spawning is the one I'm aware of. And there are so many papers coming out with these concepts that it's a mile a minute: all these educational, academic things coming out. There's a woman out of Montreal.
Her name is Sasha; I can't remember her last name. She's a PhD, she has a great TED talk, and she works at Hugging Face on ethics. And in my opinion, Hugging Face is becoming a leader on ethics for a model repository; they have ethics principles around it, so I think we're going to see things come out of them for sure.

>>:

Awesome, thank you. So, on the one you mentioned about climate: I knew there was an impact, but I didn't realize how high that number was. Do you think a part of that could also be our fault? Right now we're in a consumer society, especially in America. As developers, we need newer computers, newer laptops, higher-processing equipment to keep up.
But at the same time, do you think it would make a bigger difference if we waited a few more years before stepping up to the next MacBook or the next laptop? Or do you think it's just moving at too rapid a rate at this point?

>>:

Yeah, that's a great question, and I think it'll spawn a developer religious war, right? Think about your 386 and your 486 from back in the day, with the ten-megabyte hard drive: those programmers were fighting over every little bit of memory. Nowadays, are we fighting over every little bit of memory? No, because we have the machines. So is there a movement to write more performant, more optimized code, and what impact would that have on the environment? I think it would have a huge impact. When you go to PHP conferences and low-level programming conferences, they're not talking about memory utilization anymore, or anything like that.
They're talking about the latest framework to get jobs done faster, faster, faster. So yes, there's bloat in that. And with the computer question, I totally agree. My computer is five years old, and my O key hasn't worked for four years. I've just taken the O out of my language; I don't use the letter O anymore, because the key always sticks or doesn't register. And I think one of the reasons I don't replace the machine is that it works fine otherwise. I can get around it and I don't need to. But I don't think everybody's like that. Yeah.
>>:
Yeah, thoughts on privacy. We all say we need more data to make this work better for us, but at the same time, the people who could benefit the most from it may be marginalized by the privacy issue. So how do we get their data to make it better and unbiased, while at the same time respecting privacy? Do you have thoughts?

>>:

I have no thoughts. Who's got thoughts? Privacy, right? It's super complicated; it's another one of those topics. I know as a parent that I feel it's not my decision to make about my children's privacy; it's theirs. If they want a photo posted on Facebook: "Hey, Abby, can I post this on Facebook?" Yes or no. She's 11 going on 12. Does she have the mindset to know the consequences of that? No, but we have those conversations and we let her choose. So, I've got nothing else. She's got privacy covered right here.

>>:

Well, I look at it through the lens of copyright, thinking about the commons, the public good, versus copyright and the control that we have over things.
And so that's where I think we have to start asking ourselves: do we donate more to the public good? Do we put more into the commons and allow what's in the commons to be used in those sorts of ways? Now, obviously, private data, stuff that is very personal, exists in a different realm. But if you're posting messages to social media, I'm of the mind that if you put something on the internet, you have to assume you'd be comfortable hearing it read back in a court someday, because all of it is going to be public. And so the question is: what are the expectations of contributing to the public good versus maintaining control over what you have? For instance, these images right here, these book covers: should they be under the control of a copyright owner, or should they be in the public good? The example I go to, and then I'll be quiet, is Star Wars. Whatever you think of it, it has had a profound and major impact on the public.
Everyone knows what a Jedi is. Everyone knows what a lightsaber is. There are references to Star Wars throughout our culture and our media. It is a public good, but it is also under copyright by Disney. So there's a conflict there: we've got something that is impacting our world, that everyone references, but it's under the control of a single mega-corp. How do we square that circle? Should Star Wars be in the public domain because of the impact it has had?

>>:

Yeah. And I think that gets really complicated with things like fair use and everything else within copyright. But a great example is Winnie the Pooh, right? Winnie the Pooh coming out of copyright, and the horror movie that was created after that. Do we now associate Winnie the Pooh with that horror movie, or with the cute little bear who gets his head stuck in the honey pot? I don't think the horror movie was detrimental to the Winnie the Pooh brand; it was just something else.
And especially with Disney, as we progress through the years, I think our government finally gets it. With Mickey Mouse, they went through how many changes of the law in Congress? I think the government finally gets that we can't keep doing this. So we're going to start seeing more things come out of copyright into the public domain, people are going to start using them, and people are going to realize: it's not that big of a deal. It's fine. And these specific examples: this was a talk I gave about Android development in 2009. It was a public talk, out on SlideShare, so yes, I would expect it to be out on the internet for sure. We'll see.

>>:

Something about privacy: I think the real issue with privacy is a lack of trust. That's really the issue. It isn't that your private data is out there;
it's going to be out there. The problem is: can you trust whoever is in charge of it, whoever has control of it? Because especially in the US, there are no safeguards, no regulation, no standing for the person whose information is out there to go back and try to protect it, or to take action if it's misused.

>>:

Yeah. And also just the lack of accountability for those who have it, who can use it for whatever they want: to make money off of your information, or off of you yourself, without your involvement.

>>:

Right. So I guess this raises the issue of deepfakes and pornography. I think in the upcoming years there are going to be a lot of conversations around this, and I think we're going to change. I mean, we have to change, right? Trust is gone in those situations, and it's not free speech.
So I have, I know we're limited on time and everything, but the one thing I was going to ask was: what do you think about AI and accessibility? I have a friend who's working on a Drupal component that adds Braille to a document, things like that. Do you see issues there, or what type of safeguards do you think would be necessary in that kind of space? I was trying to explain to him that, well, sign language is what I understand better: one person's version of sign language can be very different from someone else's. So how do you think AI would be able to adapt in those types of situations?

>>:

Yeah. I can't read Braille, so if that was the output, I would need some other verification method. You need to assume that you need to test it. And sign language, 100%.
Right. There are different dialects, different countries, there's ASL, there are different variants inside of that. So I think you need some sort of verification method. Is it a human doing the verification? Have you built a model to do the verification as well? Or do you pay a service to check the Braille and do spot checks, something like that? So again, there's this human aspect, which goes back to the AGI that doesn't exist. We think we're going to put something out into the public and it's just going to work, but it's still us that's reflected in this, and it's still us who needs to check it. It's just like writing a paragraph with ChatGPT: you can copy and paste it and put it out there, but you need to spot-check it, because of hallucination; it does all kinds of things. You've just got to read it. Anybody else?
>>:
One more question: eventually, are these engines going to be trained on their own output, as more and more of the data out there is actually AI-generated?

>>:

Sure. I think they are. And as they are, are they going to get better or are they going to get worse? Are those biases going to get even stronger, or weaker? I think, again, it comes back to us to put our spin on it and fix it.

>>:

The way I see it, it's just data on data. You don't know what's going to be wrong data, because the model doesn't know the difference, and some things can be right in one culture and wrong in another.

>>:

Yeah. Anybody else? All right. Well, thanks for bearing with me. I never thought I would say "penis" and "breasts" in a presentation, and when I did, I totally got sidetracked, got all uncomfortable, and it got really weird. So thanks for bearing with me and getting through it with me. Hopefully folks are taking away some good information and some good conversations to have
in the future. Thanks, everyone. Thank you.