From the course: AI Accountability: Build Responsible and Transparent Systems
Attacking AI
- [Instructor] Artificial intelligence algorithms are vulnerable to attack in many different ways. For instance, it's possible to attack them through text recognition, through audio recognition, and through the recognition of visual images. I want to show you how these attacks work with a few live demonstrations from research papers and websites. The first example is text, from a paper called "HotFlip: White-Box Adversarial Examples for Text Classification." On the first page, the authors give a couple of examples, and I'm going to zoom in so you can see them better. What you have here are small snippets of news stories, and the AI is trying to categorize the topic of each one. What the researchers show is how easy it is to throw off the AI with a substitution of a single letter. In the first one, we have a story that says, "South Africa's historic Soweto Township marks its 100th birthday on Tuesday in a mood of optimism." The model is 57% confident that that's world news. And if you change…
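To make the idea concrete, here is a minimal sketch of a single-character attack on a text classifier. It is not the HotFlip algorithm itself (HotFlip uses gradients of the loss with respect to one-hot character inputs to pick the best flip); instead it brute-forces every one-letter substitution against a small, hypothetical scikit-learn character n-gram classifier until the predicted label changes. The training snippets, labels, and test sentence are all made up for illustration.

```python
# Illustrative sketch of a one-letter adversarial attack on a text classifier,
# in the spirit of the HotFlip example discussed above.
# NOTE: this is NOT the HotFlip method (which is gradient-based); it is a
# brute-force stand-in, and all data below is hypothetical.

import string

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: "world" news vs. "sports" news snippets.
texts = [
    "south africa township marks centenary celebration",
    "united nations summit opens in geneva",
    "election results announced in the capital",
    "striker scores twice as the home team wins the cup",
    "coach praises defense after narrow playoff victory",
    "record crowd watches the championship final",
]
labels = ["world", "world", "world", "sports", "sports", "sports"]

# Character n-gram features make the model sensitive to single-letter edits,
# which is exactly the kind of brittleness the HotFlip paper exploits.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)


def flip_one_char(sentence):
    """Try every single-character substitution and return the first
    modified sentence whose predicted label differs from the original."""
    original_label = model.predict([sentence])[0]
    for i, old_char in enumerate(sentence):
        for new_char in string.ascii_lowercase:
            if new_char == old_char:
                continue
            candidate = sentence[:i] + new_char + sentence[i + 1:]
            if model.predict([candidate])[0] != original_label:
                return candidate, model.predict([candidate])[0]
    return None


sentence = "township marks centenary celebration in a mood of optimism"
print("original prediction:", model.predict([sentence])[0])

result = flip_one_char(sentence)
if result is not None:
    adversarial, new_label = result
    print("adversarial input:  ", adversarial)
    print("new prediction:     ", new_label)
else:
    print("no single-character flip found for this toy model")
```

On a toy model like this, a single substituted letter is often enough to change the prediction, which mirrors the paper's point: the classifier keys on surface character patterns rather than the meaning of the story.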
Contents
- Technical challenges for generative AI (6m 3s)
- The challenge of classification errors (3m 39s)
- The causes of classification errors (6m 18s)
- Bias in AI (4m 3s)
- Genres of learning (8m 16s)
- Biased training data (7m 15s)
- Construct validity (6m 24s)
- The absence of meaning (5m 7s)
- Vulnerability to attacks (4m 44s)
- Attacking AI (7m 5s)