⇲ Implement & Scale
DATA STRATEGY
A digital health startup trained a risk model atop a robust, precise, and scalable processing pipeline so providers could move faster and patients could move with confidence after spinal surgery.
Read the Case Study ⇢ 

 

    PREDICTIVE ANALYTICS
Thwart errors, relieve intake-form exhaustion, and build a more accurate data picture for patients in chronic pain? Those who prefer a natural yet comprehensive path to health and wellness said: sign me up.
    Read the Case Study ⇢ 

     

      MACHINE VISION
A dynamic machine vision solution that detects plaques in the carotid artery and gives care teams rapid answers saves lives through early disease detection and monitoring.
      Read the Case Study ⇢ 

       

        INTELLIGENT AUTOMATION
This global law firm needed to be fast and adaptive and to provide unrivaled client service under pressure. Intelligent automation did just that, and it made time for what matters most: meaningful human interactions.
        Read the Case Study ⇢ 

         


Mushrooms, Goats, and Machine Learning: what do they all have in common? You may never know unless you start exploring the fundamentals of machine learning with Dr. Tim Oates, Synaptiq's Chief Data Scientist. You can read his new book, visualize it in Python, tinker with inputs, and practice machine learning techniques for free.

          Start Chapter 1 Now ⇢ 

           

How Should My Company Prioritize AIQ™ Capabilities?

Start With Your AIQ Score

                6 min read

                Should You Trust GPT-4? We Asked a Human.


Large language models like ChatGPT and, more recently, GPT-4 have garnered viral attention. They perform a variety of tasks, like writing, coding, and summarizing information, much faster than human workers, without needing rest or (much) compensation. Early adopters praise their ability to accelerate and automate work, but skeptics warn that there's more to these models than meets the eye. Would you trust them to do your work?

                You already trust artificial intelligence.

We trust artificial intelligence to make choices for us. We trust AI to curate our social media feeds, our search results, and even our movements. Delegating these choices has become so easy, intuitive, and habitual that we seldom think to question the results. If you think you’re the exception, ask yourself:

                • Do you take the route [insert favorite navigation app here] doesn’t recommend?
                • How often do you search for the answers [insert most used search engine here] doesn’t give you?
                • How often do you see the posts [insert procrastination-supporting social media app here] doesn’t show you?

Ironically, whether we trust AI to make choices for us is itself a choice, but we seldom make it consciously, let alone democratically. Political theorist Langdon Winner warns that technological change involves two choices: (1) whether a technology will be developed, and (2) what, exactly, it will look like. That raises the question: who gets to make these choices? The author of this blog post wasn’t invited to vote in the AI referendum. Were you?

Research suggests the average person can’t reliably recognize AI, much less give it their informed consent. In 2017, the software company Pegasystems surveyed six thousand consumers across six countries: “Have you ever interacted with Artificial Intelligence technology?” Judging by the devices and services they reported using, 84 percent had actually interacted with AI. Only 34 percent responded, “Yes.”

                Ruh-roh.

                On the bright side, our trust in AI has yielded tangible benefits for people, animals, and the planet we call home. Nature conservationists have used AI to monitor endangered species, combat the illegal wildlife trade, and detect wildfires. Healthcare practitioners have used AI to anticipate public health emergencies (including COVID-19), accelerate the diagnostic process, and develop life-saving drugs. AI processes data faster than people; it makes decisions faster than we can. Sometimes, speed is a major factor in success, and faster choices are advantageous.

                How We Use AI for Good

Other times, faster choices are seductively convenient. They’re tempting like a microwave meal after a double shift or a second espresso on a slow morning. Convenience whispers in your ear: Why not skip the ‘blah’ parts of life?

                Why not, indeed.

                Trusting GPT-4 is a tradeoff: speed vs. accuracy

GPT-4 is the fourth and latest in a series of language models developed by OpenAI. Put simply, language models use a branch of AI called machine learning to predict the probability that any given sequence of tokens (basically, language units) is the appropriate response to a user query. GPT-4 is so exceptionally adept at identifying appropriate responses that its output can be mistaken for human writing.
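
To make the idea concrete, here is a minimal sketch of next-token prediction using a simple bigram model in Python. GPT-4 itself is a transformer trained on a vastly larger corpus, but the core idea, scoring candidate tokens by estimated probability, is the same; the toy corpus below is invented for illustration:

    from collections import Counter, defaultdict

    # Toy training corpus (invented for illustration).
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count how often each token follows each preceding token.
    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def next_token_probs(prev):
        """Estimate P(next token | previous token) from the counts."""
        counts = bigram_counts[prev]
        total = sum(counts.values())
        return {tok: n / total for tok, n in counts.items()}

    print(next_token_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(next_token_probs("sat"))  # {'on': 1.0}

A model like GPT-4 performs the same kind of scoring with billions of learned parameters and far more context than one preceding token, which is why its completions read as fluent rather than formulaic.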

Put to the test by OpenAI, GPT-4 scored above the 80th percentile on a battery of standardized exams, including the SAT, the GRE, the LSAT, and even the Uniform Bar Exam (with one exception: it placed in the 54th percentile on the GRE writing section). It can hold a conversation, interpret images, write text, and code in multiple languages. The New York Times reports, “It’s close to telling jokes that are almost funny.”

                These achievements are tempered by critical flaws. For one, GPT-4 hallucinates. 

“GPT-4 has the tendency to ‘hallucinate’, or produce content that is nonsensical or untruthful,” cautions OpenAI. Hallucination (a.k.a. “making stuff up”) was also a problem for GPT-4’s predecessors. For example, ChatGPT, which was released several months before GPT-4, produces factually incorrect responses to user queries about 20 percent of the time, according to the developers of the ChatGPT fact-checker, TruthChecker.
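
As context for a figure like “20 percent,” here is a hedged sketch of how a hallucination rate can be estimated: sample responses, have humans label each one factually correct or not, and compute the error proportion with a confidence interval. The labels below are invented stand-ins, not TruthChecker’s data or method:

    import math

    # Hypothetical fact-check labels for 50 sampled responses;
    # True marks a factually incorrect response (a hallucination).
    labels = [True] * 10 + [False] * 40

    n = len(labels)
    p_hat = sum(labels) / n  # observed hallucination rate: 0.20

    # 95% normal-approximation confidence interval for the true rate.
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"estimated rate: {p_hat:.2f} +/- {margin:.2f}")  # 0.20 +/- 0.11

Small samples leave wide error bars, which is one reason published hallucination rates are best read as estimates, not constants.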

                Chat-GPT: What’s the Hype?

                 

                GPT-4’s hallucinations are infrequent, but their delivery is eerily convincing. 

                Imagine you’re an office worker who hates writing emails. One day, you’re assigned an intern: “GPT-4.” Naturally, you put them to work writing your emails. You proofread the first ten emails. Finding no issues, you skim the next ten. Eventually, you get comfortable enough to send emails without checking them. Everything is great — until your boss calls you to her office. She’s livid about an email you sent (and GPT-4 wrote). You have no idea what it says.

                Alternatively, imagine you’re a software developer for a social media platform. Your intern, GPT-4, debugs your code. One day, you push to production, and the platform crashes — catastrophe! You’re fired when your supervisor finds what caused the crash: a faulty “correction” by GPT-4, which you accepted without a second thought. 
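
The moral generalizes to a simple guardrail: a machine-generated change should clear the same checks a human’s change would before it ships. Here is a minimal sketch of that idea in Python, assuming git for version control and pytest as a stand-in for your real test suite; neither detail matters, only the gate does:

    import subprocess

    def safe_to_merge(patch_file: str) -> bool:
        """Apply a proposed patch, run the tests, and roll back on failure."""
        # Apply the AI-suggested patch.
        subprocess.run(["git", "apply", patch_file], check=True)

        # Run the test suite; a nonzero exit code means failures.
        result = subprocess.run(["pytest", "-q"])

        if result.returncode != 0:
            # Tests failed: revert the patch instead of shipping it.
            subprocess.run(["git", "apply", "--reverse", patch_file], check=True)
            return False
        return True

The point is that an automated check, not the model’s confident tone, decides whether the suggestion lands.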

                So, can you trust GPT-4? It depends. On the one hand, trusting GPT-4 to make choices for you is convenient (especially when speed is a major factor in success). Early adopters have used GPT-4 and its predecessors to choose how they work out, what they eat, and the contents of their emails and college applications, for example.

On the other hand, trusting GPT-4 can be risky when accuracy (or something else) matters more than speed. Writing an email to your boss? Proofread it. Making choices with significant ramifications? Trust yourself.

                THIS BLOG WAS WRITTEN WITHOUT LARGE LANGUAGE MODEL ASSISTANCE. 


                Photo by Kamil Pietrzak on Unsplash


                 

                About Synaptiq

                Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation. 

                Contact us if you have a problem to solve, a process to refine, or a question to ask.

You can learn more about our story through our past projects, blog, or podcast.

                Additional Reading:

                We Helped a Startup Fight Wildfires with AI & Atmospheric Balloons

                Climate Change Fuels Record Wildfires

                The 2021 wildfire season scorched 3.6 million acres in the United States. [1]...

                Finding a Needle in a Haystack: How to Optimize Automated Document Retrieval

                At most companies, employees need a quick and easy way to search through a disorganized repository of documents to...

                Using Linear Regression to Understand the Relationship between Salary & Experience

                Understanding the factors influencing compensation is essential in the tech industry, where talent drives innovation....