Saturday, March 4, 2023

Not If, But When...

As humanity continues to barrel down this path of creating a robotic future, I can't help but wonder how soon calamity will come. I think back to the movie The Animatrix, in which autonomous robots worked and operated among us. Then one day a robot killed a human in defense of itself, which set off a political and social firestorm. This led to violence against robots, with the robots ultimately winning the war against humans. Although this scenario is more than likely a ways out, I can imagine how something like it might play out in the near future.

Autonomous robots are working alongside humans in warehouse distribution centers, with both the humans and the robots under harsh working conditions. A robot, powered by a ChatGPT-like engine, makes a decision that inadvertently causes the death of a worker. This sparks protests against the use of humanoid robots alongside people in warehousing. Large companies apply political pressure to try and squash the backlash because they want the "profit margins" gained by using autonomous humanoid robots to do the laborious warehousing work. Protests ensue, with robots in these distribution centers being destroyed by humans. Law enforcement, equipped with their own semi-autonomous robots, is then routinely deployed against these protests, and so it begins.

Ultimately, it won't be the robots themselves that want to conduct violence against humans, but the "owners" of those robots, who have a vested interest in their labor to drive "profits". It seems that no matter how much we raise the alarm and say that we want to prevent such things, we as humans are incentivized to pursue mankind's own destruction.
Tuesday, January 24, 2023
We're in Trouble - How the Use of AI Search May Hasten Our Societal Demise
I was just reading an article about Google and ChatGPT. If I understand the article correctly, in times past Google was hesitant to release AI-driven products for safety reasons. Now, due to the threat of products like ChatGPT and other AI creation tools, Google is softening its stance and preparing its tools for release (Ref link). Here's where the problems come in. Depending upon the social/economic class that you are a part of here in the US, you may see things differently. You see, as I've gained more experience and learned more throughout this life, I've realized that we (society as a whole) have put our trust and dependence into a bunch of half-baked products.
Let me give you an example. Just yesterday I was making my first reel post on Instagram and wanted to sync/edit some photos (three, to be exact) to go with a clip of music. I have been editing video since 2004, so I know my way around. Anyway, I go to edit these photos with the music on IG, and it guides me to use the Auto Sync function. So I'm like, cool, the app will automatically sync the music and photos up for me. Great! I hit okay and the process runs. Turns out I don't like the duration the program used for the first photo. Easy enough to fix, or so I thought. I tried in vain to intuitively use the trim controls to lengthen the first photo, but the trim/edit tools seemed archaic. I won't bore you with all the details, but let's just say the attempt caused me to yell out loud several times. I said to myself, "Clearly the person/people who created this have never used it." I know that this statement is (or should be) incorrect. Of course they used the app; you have to when developing software. What is probably true is that they've never used the app as an individual with video editing experience who's in a hurry and has to quickly post a reel.
What I'm alluding to here is perspective. I believe that many of the people who are hired by these companies as software developers do just that: develop software. They are not necessarily end users. They will oftentimes write code for features that are efficient for them, but not for the end users' actual use. The managers, who aren't end users either, approve these features because all they are really concerned with are deadlines and budgets. So if this happens over and over again on mundane, low-risk applications, what do you think is going on with the programmers developing chat AIs?
Here's another example about perspective. A long while back I wanted to create a fake profile (the results are in the image atop this article). I was aware of some online sites that offer AI-generated profile pictures. Me being an African American/Black male, I wanted an AI-generated picture to reflect me (or someone like me). The site would allow the user to specify all the characteristics and then create the photo. Yet the photos that came out for a dark-skinned, short-haired man all looked like he was from India (no disrespect to all my Indian colleagues). My profile avatar and I are African American/Black, and there's a difference. I contacted the site about this issue; they acknowledged it and mentioned that they are trying to increase their data sample size (update 1/23: nothing has changed).
Back to ChatGPT and AI products. If the people who are creating these software entities are doing a half-baked, rushed, unvetted job, then what do you think the end user will wind up doing/creating? I think this is going to be a case of "Who watches the watchmen?" Once these AI-driven products are released to the masses and become widespread, what's to stop unvetted information? Information doesn't have to be blatantly wrong to be harmful; it can be just "kinda" wrong and do massive amounts of damage. This is my point. Once we as a society are reliant upon AI-generated services that are true/correct enough to be believable, we are doomed. It will mostly affect those who are underserved and disenfranchised first. I would imagine that some of the readers of this article won't be able to relate, but eventually, once the lives of the people that the upper socioeconomic classes are built upon crumble, so will their livelihoods.