AI: Friend or Foe? The Hidden Truth About the Tech We Trust

Valerie Alexis Chang
6 min read · May 17, 2024

Remember that time ChatGPT saved you hours writing an email? (We’ve all been there!) Now imagine AI doing even more: creating your favourite music or even helping diagnose your health!

The future with AI is exciting, but could this amazing tech have a hidden dark side? 🥷

This article dives into the potential dangers of AI, from creepy deepfakes to fake news spreading like wildfire. Don’t worry though, we’ll also explore how tech is fighting back with some seriously cool solutions.

The Dark Side of AI: When Tech Turns Tricky

👹 Deepfakes: From Funny Filters to OMG This is Scary!

Imagine seeing a video of your boss giving you a fake performance review. That’s the power (and danger) of deepfakes. These AI-generated videos can be scarily realistic and can be used to spread misinformation or damage reputations.

Deepfakes started as funny celebrity parodies (remember Pope Francis rocking a puffer jacket?), but fast forward to 2024, and they’re a serious issue.

Share this if you think deepfakes are CREEPY! ➡️

🤖 AI vs. AI: The Machines Are Policing Themselves

Here’s the good news! Tech companies are developing ways to fight deepfakes.

Imagine AI that sniffs out deepfakes with superhuman accuracy. Companies like Sensity AI and Intel are leading the charge, and even the Singapore Government recently invested $20 million in tackling deepfakes and misinformation.

Let’s look at more tech battling deepfakes:

  • FakeCatcher by Intel: Launched in late 2022, it boasts a 96% accuracy rate in spotting fake videos by detecting the subtle skin-tone changes caused by blood flow (a toy sketch of this blood-flow idea follows this list).
  • Microsoft’s Content Provenance: Embeds digital watermarks into AI-generated images, acting like a fingerprint that identifies their origin and helps users flag deepfakes faster.
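
Curious how the blood-flow idea works? The sketch below is not Intel’s code, and the signal, weights, and numbers are invented purely for illustration. It just checks whether a face’s average skin colour over time contains a heartbeat-like periodic component, which real faces tend to have and many synthetic faces lack.

```python
import numpy as np

def pulse_band_fraction(green_means: np.ndarray, fps: float = 30.0) -> float:
    """Fraction of signal power in the typical heart-rate band (~0.7-3 Hz)."""
    signal = green_means - green_means.mean()          # drop the constant offset
    power = np.abs(np.fft.rfft(signal)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency of each bin
    band = (freqs >= 0.7) & (freqs <= 3.0)             # roughly 42-180 beats per minute
    return float(power[band].sum() / (power.sum() + 1e-9))

# Synthetic demo: a "real" face with a 1.2 Hz pulse component vs. pure noise.
t = np.arange(0, 10, 1 / 30.0)                         # 10 seconds at 30 fps
rng = np.random.default_rng(0)
real_face = np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 1, t.size)
fake_face = rng.normal(0, 1, t.size)

print("real-ish:", round(pulse_band_fraction(real_face), 2))  # noticeably higher
print("fake-ish:", round(pulse_band_fraction(fake_face), 2))
```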

👶 Fake News: Why We Need AI Babysitters (Seriously!)

Social media can be a breeding ground for fake news. ♨️♨️♨️

In January 2024, a crazy story about a super COVID strain whipped up by AI made the rounds on Instagram (Instagram post 1, Instagram post 2). It shows just how dangerous AI-generated misinformation can be.

It claimed scientists in China cooked up a super COVID strain deadlier than the original, specifically mentioning a 100% death rate! The story spread like wildfire, causing panic.

Here’s the truth: Researchers in China were studying a type of coronavirus, not even COVID-19. They tested it on special mice, and all the mice got sick. However, what happens to mice doesn’t necessarily translate to humans.

This incident highlights the danger of AI-generated misinformation. Malicious actors can use AI to create convincing but fake content, manipulating public opinion and causing unnecessary fear.

Have you ever fallen for fake news online? Let me know by commenting! ⬇️

👩‍🍼 The Solution: AI Babysitters for a Trusted Web

The good news (again!) is that AI can be used to fight fake news too. Imagine an “AI babysitter” that monitors content before it gets posted online, flagging manipulated images, hate speech, and biased content. Companies like Amazon are already using similar tech to weed out fake reviews.
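
As a very rough sketch of that “AI babysitter” idea, here is what a pre-publish screen could look like. This is not Amazon’s or any platform’s actual system: real moderation pipelines use trained classifiers for toxicity, manipulated media, and fact-check matching, and the phrases and rule below are invented purely for illustration.

```python
# Toy pre-publish screen: hold a post for human review if it trips simple checks.
SUSPECT_PHRASES = ("100% death rate", "miracle cure", "doctors hate this")

def screen_post(post: str) -> dict:
    text = post.lower()
    flags = [phrase for phrase in SUSPECT_PHRASES if phrase in text]
    return {
        "publish": not flags,   # auto-publish only if nothing matched
        "flags": flags,         # what a human moderator should review
    }

print(screen_post("BREAKING: new super COVID strain has a 100% death rate!"))
# {'publish': False, 'flags': ['100% death rate']}
```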


📔 Protecting Your Privacy: Are You an Open Book for AI?

We all share tons of data online. The question is, how is it being used? Here’s why some people are nervous:

  • Data Tracking: The Web is Watching
    Every click, like, and comment you make is being tracked. This data can be used by AI to build a detailed profile of you, which can feel creepy and intrusive.
  • Data Breaches: Your Info Could Be Exposed
    As AI gets hungrier for data, the risk of breaches increases. Hackers could steal your personal information, putting you at risk of identity theft or fraud.

A recent survey found that 70% of online shoppers are concerned about how AI might affect their privacy.

Source: Artificial Intelligence (AI) in eCommerce: U.S. Consumer Expectations, Concerns & Awareness

🛡️ Your Data, Your Shield: How to Stay Safe in the AI Age

There’s a solution: Privacy-Enhancing Technologies (PETs). Think of them as a digital shield that protects your data while still allowing AI to function.

Here are some companies making waves in PETs:

  • IBM: Secure data collaboration tools.
  • Microsoft: Confidential computing services.
  • Oasis Labs: PETs for healthcare that analyze medical data without compromising patient privacy.
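
To make “protects your data while still allowing AI to function” concrete, here is a minimal sketch of one widely used PET idea, differential privacy: add calibrated noise before releasing an aggregate statistic so no single person’s answer can be pinned down. This is a generic illustration, not how any of the products above actually work.

```python
import numpy as np

def dp_count(answers: list[bool], epsilon: float = 1.0) -> float:
    """Noisy count of True answers; smaller epsilon = stronger privacy, more noise."""
    true_count = sum(answers)
    # A count changes by at most 1 if one person is added or removed (sensitivity = 1),
    # so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g. "how many patients in this sample have condition X": the released number
# is useful in aggregate but hides any individual's answer.
answers = [True, False, True, True, False, True, False, True]
print(dp_count(answers, epsilon=0.5))
```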

🔍 Peeking Under the Hood: How XAI Makes AI Fairer

Imagine you ask an AI assistant for restaurants, but it only recommends fancy French places. Frustrating, right? This could be due to bias in the AI’s algorithm.

This is where Explainable AI (XAI) comes in. XAI aims to shed light on the decision-making processes of complex machine learning models, particularly those considered “black boxes.” These black-box models are powerful but often opaque, making it difficult to understand how they arrive at their outputs. XAI techniques help address this by revealing which inputs most influenced a given decision.

Let’s see how XAI is being applied in real-world scenarios:

  • Search Engines: Google’s XAI tooling lets developers see how search algorithms rank results so they can address bias. Imagine searching for “best laptops for designers” but only seeing budget-friendly options. XAI could reveal that the algorithm is prioritizing keywords like “student” or “affordable” over “professional” or “high-performance.”
  • Self-Driving Cars: Waymo uses XAI to build trust. Imagine being in a self-driving car that swerves to avoid a ball. XAI can explain how the car identified the ball as a potential hazard, factored in its speed and the surrounding traffic, and ultimately prioritized safety.
  • Job Hunting: LinkedIn’s XAI helps users understand job recommendations. Say you’re a qualified product manager with years of experience, but instead of seeing mid-level or senior product manager roles, your feed is flooded with entry-level positions. Through XAI, you might discover that the algorithm is weighting keywords associated with “no experience” or “recent graduate” more heavily than keywords signifying your experience, like “senior product manager” or “proven track record” (see the sketch below).
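
Here is a bare-bones sketch of what that kind of explanation can look like. It is not LinkedIn’s system; the keywords and weights are invented, and real tools (think SHAP or LIME) produce this flavour of per-feature breakdown for far richer models.

```python
# Toy keyword weights inside a job recommender (invented for illustration).
WEIGHTS = {"recent graduate": 0.9, "no experience": 0.7,
           "senior product manager": 0.2, "proven track record": 0.1}

def explain_score(job_keywords: set[str]) -> float:
    """Print per-keyword contributions to a job posting's match score."""
    contributions = {k: w for k, w in WEIGHTS.items() if k in job_keywords}
    total = sum(contributions.values())
    print(f"score = {total:.2f}")
    for keyword, weight in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  '{keyword}' contributed {weight:.2f}")
    return total

# Why does an entry-level posting outrank a senior one for an experienced PM?
explain_score({"recent graduate", "no experience"})               # score = 1.60
explain_score({"senior product manager", "proven track record"})  # score = 0.30
```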

Limitations of XAI

It’s important to acknowledge that XAI isn’t a magic bullet. Some blackbox models, particularly deep neural networks, are incredibly complex. XAI techniques might struggle to provide perfectly clear and comprehensive explanations for their decisions. However, ongoing advancements in XAI are steadily improving our ability to understand these powerful tools.

💭 Ending Thoughts: AI — Friend or Foe? It’s Up to Us.

Let’s be honest, AI is pretty darn cool. It’s taking over those repetitive tasks across the board, from scheduling meetings at work to streamlining traffic flow in cities. On a personal level? Imagine AI planning your dream vacation to Japan, taking into account your love for ramen and hidden temples.

As much as I love the convenience AI brings, there’s a dark side too. These algorithms can be biased, leading to unfair situations. Think getting passed over for your dream job because the AI favors keywords like “recent graduate” over your years of experience. Not cool, right?

Here’s the thing: the future of AI isn’t set in stone. It depends on what we do now. Developers, lawmakers, and even us regular folks — we all have a role to play. We need to talk openly about AI’s ethical implications, make sure everything’s done transparently, and build these systems with fairness in mind. Only then can we truly trust AI and unlock its incredible potential for good.

[BONUS] Want to learn more about the hidden dangers of AI? Check out this infographic I put together.


Valerie Alexis Chang

Hi! 👋 I’m a product geek at Rakuten Viki. I love breaking down complex techy stuff to make it easy for everyone to understand. 🚀