From the course: AI Accountability: Build Responsible and Transparent Systems (2022)
Vulnerability to attacks
- [Instructor] There's a lot of people out there who really love K-pop, which among other things has given us the little finger heart gesture. But when you take maybe a billion K-pop fans acting as a bloc, interesting things can happen. One such phenomenon is the coordinated hashtag takeover, which I first learned about in 2020 when K-pop fans co-opted hashtags associated with obnoxious racist behavior and put them on millions of posts about K-pop instead. This effectively neutralized the original meaning, because the social media search engines now associated the hashtags with the new group, the K-pop fans. Among other things, this shows how it's possible to deliberately change the way an algorithm works, sometimes for a positive goal, which was the K-pop fans' intention, but other times in the service of goals that can be much more problematic or even catastrophic. All of this falls into the rubric of AI…
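To make that mechanism concrete, here is a minimal sketch, not from the course, of how a simple frequency-based association between a hashtag and a topic can be overwritten by a coordinated flood of posts. The hashtag, topic labels, and post counts below are hypothetical, and real platforms use far richer ranking signals than raw co-occurrence counts.

```python
# Toy illustration of a coordinated hashtag takeover shifting what a
# frequency-based system "thinks" a hashtag is about. All names and
# numbers here are made up for illustration.

from collections import Counter

def top_topic(posts, hashtag):
    """Return the topic most often co-occurring with the given hashtag."""
    counts = Counter(topic for tag, topic in posts if tag == hashtag)
    return counts.most_common(1)[0][0] if counts else None

# Originally, the hashtag co-occurs almost entirely with its intended topic.
posts = [("#example_tag", "original_topic")] * 1_000
print(top_topic(posts, "#example_tag"))   # -> "original_topic"

# A coordinated campaign floods the same hashtag with unrelated fan content.
posts += [("#example_tag", "kpop_fancam")] * 50_000
print(top_topic(posts, "#example_tag"))   # -> "kpop_fancam"
```

The point of the sketch is only that systems which learn associations from user behavior inherit whatever that behavior is, so a sufficiently coordinated group can steer the output, whether the goal is benign or harmful.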
Contents
- The challenge of classification errors (3m 39s)
- The causes of classification errors (6m 18s)
- Bias in AI (3m 50s)
- Supervised and unsupervised learning (8m 16s)
- Biased labeling of data (7m 15s)
- Construct validity (6m 14s)
- The absence of meaning (4m 54s)
- Vulnerability to attacks (4m 45s)