DGST 395: Week 9 Summary

I tried to approach the self-evaluation from an honest and thoughtful standpoint. When evaluating whether I had learned certain skills (digital fluency, digital citizenship, and digital praxis), I thought about the times I applied them. Did I struggle? And if I did, did I ask for help and clarification? I also asked myself whether I could teach each skill, since being able to teach someone something usually means I really understand it. I often help my peers at my table grasp things they do not understand, so I think I am doing well on that front.

I definitely agree with your observation that feminine AIs tend to perform traditionally “feminine” jobs like homemaking or caretaking, whereas masculine AIs tend to be used for “masculine” tasks like fighting. I can see where fiction gets its ideas about AI. To me, it is a little creepy that we don’t really understand how an AI makes its ‘decisions’ or arrives at its final solution; that already sounds like a science fiction movie plot. That being said, I think AIs are a lot less sinister than the media portrays them. They do not have the consciousness needed to act like the AIs in movies. Although AI is most likely not going to take over the world anytime soon, there are still serious consequences to faulty programming. As the article said, algorithms can be biased or just flat-out wrong, and those biases have serious real-life consequences if they are not noticed and fixed.

Gendered AI examples

Since AIs do not understand ethics, what an AI ‘chooses’ in the trolley problem depends on what it was programmed to choose. The problem is that the program still has to be written by a human, who also does not have the ‘answer.’ An example we talked about is self-driving cars: if there are pedestrians in front of you, but swerving will kill you, what is the ‘right’ thing for the car to do?
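To make that point concrete, here is a purely hypothetical sketch (the function name and the rules in it are invented for illustration, not taken from any real self-driving system). It shows that the car’s ‘ethical choice’ is really just whichever rule a human programmer wrote down ahead of time:

```python
# Hypothetical sketch: a self-driving car's "ethics" is just a rule a
# human programmer hard-coded in advance. The names and rules here are
# invented for illustration, not from any real system.

def choose_action(pedestrians_ahead: int, swerve_kills_passenger: bool) -> str:
    """Return 'continue', 'swerve', or 'brake' based on a hard-coded rule."""
    if pedestrians_ahead == 0:
        return "continue"
    if not swerve_kills_passenger:
        return "swerve"  # avoid pedestrians at no cost to the passenger
    # The dilemma case: someone is harmed either way. The car has no
    # ethics of its own; it simply follows whatever rule was chosen by
    # the programmer, who does not have the 'answer' either.
    return "brake"  # one possible choice; another programmer might swerve

print(choose_action(pedestrians_ahead=3, swerve_kills_passenger=True))
```

Whoever writes that final `return` line has made the ethical decision on the car’s behalf, which is exactly why the trolley problem does not go away just because a machine is driving.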

Photo by Patrick Robert Doyle on Unsplash

Here is a clip from The Good Place about the trolley problem. 
