Posted 2 years ago
Is this still ethical?
Hi everyone. Something has been bothering me for a few days now, and I'm not sure how to handle it. I have been interested in artificial intelligence for quite some time and experiment with it passionately. I've developed an online platform where I use AI to write blog posts and articles. These writings are original and not plagiarized; they are generated by combining the data the AI was trained on to produce unique pieces of text. I publish these articles on my blog or website, which receives a decent amount of traffic, and I'll soon be eligible for on-page advertisements. This means I could potentially earn money from content I didn't write myself. It's not straightforward to create such articles, though; it takes time and expertise to set the right instructions, parameters, and conditions for the AI software.

A concern is the potentially harmful use of AI technology in the future. I wonder if my use of AI contributes to the development of a "monster" that could cause great harm or even loss of life. A thought that frequently crosses my mind is how I can justify continuing to use AI. One rationalization I often tell myself is that the more I understand about artificial intelligence, the better equipped I'll be to counter its negative impacts if it ever becomes a threat to me or my environment. By engaging with AI, I get to know its dangers more intimately.

But then another thought occurs to me: if I were to compare this situation to a different period in our history, say World War II, what would the ethical implications be? Imagine someone who knew one of the major criminals of that era, understood the dangers he posed, but chose to benefit financially from his rise to power by supporting his political party or producing goods that helped him directly or indirectly. If that person then fled the country when things got dangerous, capitalizing on their intimate knowledge of the situation, would that be ethical?
Knowing the risks associated with that person and then fleeing right before things escalated? I draw a parallel with my current situation: I see potential future dangers. Is it ethically responsible for me to continue using this technology?

Something like this also happened to me in the past. I used to love making electronic house music, and I was quite good at it as well. But suddenly I got the thought: what if my house music makes people at a house party want to take drugs (as I have done myself in the past)? Ever since that moment I've had trouble creating music, even though I could definitely do it for an income. I just dropped it, with many, many (and I mean many) hours and a lot of money invested in it.

The same is happening right now with AI. I've poured a lot of hours into it, learning to program as well, and I'm on the brink of dumping this one too, because it could potentially be used for very, very bad purposes by the wrong people; I'm sure it's already being used in harmful ways. Making money off a technology that can harm others in the wrong hands makes me feel like I'm some kind of bad person. Even though my data should be 'safe' in their database, their model gets trained on the way people communicate with it, which speeds up the development of AI. So I'd be contributing to a smarter and more complex technology that, again, could and will be used for bad things in the wrong hands.

On the other side of things, don't we use a lot of technology that can also be used by bad people? Why don't I feel bad about that yet? Maybe because I'm not yet aware of it? I've been getting work requests (writing scripts for people, etc.) which I used to use AI for, and now I'm too scared and guilt-ridden to use it, so I cannot fulfill incoming requests from customers.

I just realized that I've also been paying for their services for almost a year now.
20 x 12 is 240 euros that has gone toward improving AI, and therefore toward the possibility, or rather the fact, that it someday will be (or already is being) used in malicious ways. I've read online that AI in the wrong hands can be used for dirty war tactics, biochemical weapons, and so on. I found a website that works on making AI safer, and you can donate to them. I feel I need to donate the equivalent of 240 euros to their cause just to even out the part where I paid for AI services. Would this be a compulsion?