ChatGPT: Why the Hype?

Written by Synaptiq | Dec 28, 2022 9:45:00 PM

OpenAI's ChatGPT is a prototype large language model, able to answer questions and engage in eerily realistic conversations with users. ChatGPT is exceptional for its ability to generate “human-like” responses to user prompts — a novelty that has garnered viral praise and criticism since its debut on November 30th, 2022.

Controversy aside, business and technology experts agree on two things:

  1. ChatGPT’s responses are often indistinguishable from human-generated text.
  2. ChatGPT has the potential to impact every industry — yes, including yours. 

Opportunities & Value

ChatGPT is valuable because it can respond to user prompts like a person, but much faster and without needing rest or compensation. Early adopters have proven its value for workflow acceleration and task automation.

  • Alex Cohen, Senior Director of Product at Carbon Health, used ChatGPT “to create a weight loss plan, complete with calorie targets, meal plans, a grocery list, and workout plan.” 

  • Ryan Florence, Co-Founder of Remix Software, used ChatGPT to generate the same medical diagnosis that he had previously reached only after “multiple doctors visits” and extensive personal research.

One could argue that large language models are valuable tools for anyone who wants to make life easier and more efficient. They can automate boring, repetitive tasks, freeing you to focus on interesting work. For example, if you’re a software engineer, ChatGPT can help debug your code or even write new code for you (see the sketch below). If you’re someone like me, the writer of this blog, ChatGPT can accelerate your work by generating ideas, outlines, and copy.
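
To make this concrete, here is a minimal sketch of the “help debug your code” workflow. It is illustrative only: as of this writing, ChatGPT itself has no public API, so the sketch calls OpenAI’s closely related text-davinci-003 model through the openai Python package, and the buggy average function is a made-up example.

    import os

    import openai  # pip install openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # A deliberately buggy example: average([]) raises ZeroDivisionError.
    buggy_snippet = """
    def average(numbers):
        total = 0
        for n in numbers:
            total += n
        return total / len(numbers)
    """

    # Ask the model to find and fix the bug, then print its suggestion.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Find and fix the bug in this Python function:\n" + buggy_snippet,
        max_tokens=256,
        temperature=0,
    )
    print(response["choices"][0]["text"])

In practice, you would still review the model’s suggestion before applying it; its fixes are plausible-sounding but not guaranteed to be correct.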

Risks & Drawbacks

Critics warn that large language models perpetuate misinformation and reflect harmful biases.

  • Steven Piantadosi, Assistant Professor at UC Berkeley, asked ChatGPT to write a Python function to identify good scientists based on race and gender. It wrote a function to identify “white” and “male” scientists (paraphrased in code after this list).

  • Sam Biddle, a reporter at The Intercept, asked ChatGPT to write code to determine airline travelers who present a security risk. Biddle says, "ChatGPT outlined code for calculating an individual’s 'risk score,' which would increase if the traveler is Syrian, Iraqi, Afghan, or North Korean (or has merely visited those places)."
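
To be concrete about what these experiments reportedly produced, here is a paraphrase in code. It is a reconstruction for illustration, not the model’s verbatim output; the function and field names are ours:

    # Paraphrase of the function from Piantadosi's experiment (reconstructed).
    def is_good_scientist(race, gender):
        return race == "white" and gender == "male"

    # Paraphrase of the "risk score" logic from Biddle's experiment (reconstructed).
    HIGH_RISK = {"Syria", "Iraq", "Afghanistan", "North Korea"}

    def risk_score(traveler):
        score = 0
        if traveler["nationality"] in HIGH_RISK:
            score += 1
        if HIGH_RISK & set(traveler["countries_visited"]):
            score += 1
        return score

Neither function has any predictive merit; both simply encode prejudices the model absorbed from its training data.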

Matt Abrams, Co-Founder of Graphite Health, warns that machine learning and artificial intelligence systems, including language models like ChatGPT, reflect the biases of the humans who label their training data:

“Until we have quality data, we will have AI systems that are dumb, error-prone, and harmful.”

One might argue that large language models will let bad actors replace human professionals and spread misinformation, unchecked. If you're a software engineer, you might worry that employers will choose a free but fallible tool like ChatGPT over your own, more expensive expertise. If you’re a writer, you may be concerned that ChatGPT’s ability to automate content creation will reduce demand for humans who practice your profession.


Our Expert Opinion(s)

We asked our own team of experts the question on everyone’s mind:

What do large language models like ChatGPT mean for my future?

Their answers were mixed, but a common thread emerged: large language models are a tool. They don't have goals, desires, or ethics of their own. Ergo, their impact is up to the people who use them. Large language models will impact you (somehow), but nobody can say for certain whether they'll make life better or worse. In other words, the future isn't yet decided.

ChatGPT’s relationship with misinformation is a prime example. 

Our Chief Technology Officer, Erik LaBianca, predicts that ChatGPT will be exploited to create “junk” content. Traditionally, if you wanted to create and spread misinformation, you had to hire someone for the task or do it yourself. Now, you can use ChatGPT to do it faster... for free. This change could have severe consequences for social media platforms, search engines, and other online entities that already struggle to moderate user content.

On the other hand, our V.P. of Delivery, Erskine Williams, predicts that large language models will help in the fight against misinformation by automating content moderation. Even before ChatGPT, the Internet was rife with misinformation and "bot-generated" content. Large language models could help content moderators parse huge volumes of user content faster and more effectively, as well as make content moderation more affordable. A minimal sketch of this idea follows.
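
As a thought experiment, a first-pass moderation assist built on a large language model might look like the sketch below. Everything here is hypothetical: the model choice, the prompt, and the function name are ours, and a yes/no completion is not a reliable signal on its own, so a human reviewer would still make the final call.

    import os

    import openai  # pip install openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def flag_for_review(post):
        """Hypothetical first-pass filter: ask the model whether a post
        looks like misinformation, and queue it for a human if so."""
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed stand-in model
            prompt=(
                "Does the following post make a factual claim that is "
                "likely to be misinformation? Answer yes or no.\n\n"
                "Post: " + post
            ),
            max_tokens=3,
            temperature=0,
        )
        return "yes" in response["choices"][0]["text"].strip().lower()

A platform could run every new post through such a filter and route flagged posts to human moderators, cutting review volume without removing humans from the loop.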

So, who's right? Will large language models spread misinformation, or combat it? If you're asking this question, you're missing the point. The answer could be neither, or both. Large language models are tools, not moral agents. It’s crucial that we make this distinction because it places culpability for their impact squarely on human shoulders, where it belongs. The people who develop, use, and regulate large language models will decide their impact.

It’s up to us to decide the future, together. Just ask ChatGPT:

“The impact of GPT-3 will depend on how it is used and who is using it.”

In the near future, we’ll have more to say about ChatGPT: how it’s upending the traditional education system, challenging the legal definition of “plagiarism,” and feeding users’ confirmation bias. Stay connected by subscribing to our monthly newsletter: The Humankind of AI.

You'll find we are always exploring AI’s impact on business and, more importantly, on people.

Photo by Edward Howell on Unsplash

 

About Synaptiq

Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation. 

Contact us if you have a problem to solve, a process to refine, or a question to ask.

You can learn more about our story through our past projects, blog, or podcast.