Our AI Impact: for the health of people, for the health of the planet, and for the health of business.

FOR THE HEALTH OF PEOPLE: EQUITY
“The work [with Synaptiq] is unprecedented in its scale and potential impact,” Mortenson Center’s Managing Director Laura MacDonald said. “It ties together our center’s strengths in impact evaluation and sensor deployment to generate evidence that informs development tools, policy, and practice.”
Read the Case Study ⇢ 

 

    ⇲ Implement & Scale
    DATA STRATEGY
    A digital health startup trained a risk model that opened up a robust, precise, and scalable processing pipeline, so providers could move faster and patients could move with confidence after spinal surgery.
    Read the Case Study ⇢ 

     

      PREDICTIVE ANALYTICS
      Thwart errors, relieve intake-form exhaustion, and build a more accurate data picture for patients in chronic pain? Those who prefer the natural albeit comprehensive path to health and wellness said: sign me up.
      Read the Case Study ⇢ 

       

        MACHINE VISION
        A dynamic machine vision solution that detects plaques in the carotid artery and gives care teams rapid answers saves lives through early disease detection and monitoring.
        Read the Case Study ⇢ 

         

          INTELLIGENT AUTOMATION
          This global law firm needed to be fast, adaptive, and able to deliver unrivaled client service under pressure. Intelligent automation did just that, and it made time for what matters most: meaningful human interactions.
          Read the Case Study ⇢ 

           


            Mushrooms, Goats, and Machine Learning: What do they all have in common? You may never know unless you get started exploring the fundamentals of Machine Learning with Dr. Tim Oates, Synaptiq's Chief Data Scientist. You can read and visualize his new book in Python, tinker with inputs, and practice machine learning techniques for free. 

            Start Chapter 1 Now ⇢ 

             

              How Should My Company Prioritize AIQ™ Capabilities?

                Start With Your AIQ Score


                  Five Tips for Keeping Your Continuous Integration (CI) Costs Down


                   

                  Photo by Firdouss Ross on Unsplash

                  Assuming your CI runs on runners that you pay for by the minute, here are some thoughts and tips.

                  1. Cancel non-critical runs if a new run comes in.

                  Often, I'll write some code, push my change, and then start reading the PR and find things to fix... so I fix them and push a new change. Suddenly, there are two runs going for my PR. Why not just cancel the previous one automatically so you don't have to pay for the remainder of that run?

                  Here's how we do this in GitHub Actions:


                  concurrency:
                    # One concurrency group per workflow and PR (or ref); a new push
                    # cancels any in-progress run for the same group.
                    group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
                    cancel-in-progress: true

                  Tradeoff: If you do this for your critical tasks, such as e2e on merges to main, there's a risk that you might miss which change caused an action to start failing. However, this happens rarely, and you can always go back and manually re-run canceled jobs, so we've never found it to be an issue.

                  2. Use tags to hold back actions that are not needed yet or at all.

                  We use tags on our Pull Requests in GitHub, like ci-skip or e2e-skip, to indicate what CI should run on branches. We also hold back all tests on draft PRs so that we can get early reviews without paying for those runs. These tags are also handy for PRs that don't need any tests run. For example, you don't need to run your full suite of e2e tests to fix a spelling error in your README.md.

                  Here's how we do this in GitHub Actions:

                  
                  jobs:
                    tests:
                      # Skip the suite when the PR is still a draft or carries the "e2e Skip" label.
                      if: ${{ !github.event.pull_request.draft && !contains(github.event.pull_request.labels.*.name, 'e2e Skip') }}
                   

                  3. Use bigger runners to reduce run time.

                  It seems counterintuitive, but we've found that having various-sized runners has given us the flexibility to optimize our runs. We've seen runs that took 17 minutes on a small runner take only 5 minutes on bigger runners that can do more at once. Even if you're paying double per minute, you're still coming out ahead.
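
                  Here's a sketch of what that change looks like in GitHub Actions. The runner label and the test script below are placeholders; larger runners are configured per organization, so substitute the label and command you actually use:


                  jobs:
                    e2e:
                      # "ubuntu-latest-8-core" is an example label for a larger runner
                      # configured in your organization's settings; use your own label.
                      runs-on: ubuntu-latest-8-core
                      steps:
                        - uses: actions/checkout@v4
                        - run: ./run-e2e-tests.sh # placeholder for your e2e suite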

                  4. Balance human labor vs. computer labor.

                  Often, I'll look at our GitHub costs and shudder at what feels like a big bill, but let's not forget the opportunity cost. Machines cost fractions of a penny per minute, while humans can cost well over a dollar. It can pay dividends to invest in automating actions versus having humans do them. You'll see long-term gains quite easily if you calculate how much Human Actions cost versus GitHub Actions. Prioritize wisely.

                  5. Buy a Raspberry Pi (or put that old computer in the closet to use).

                  We have a lot of "meta" type actions that we run, like assigning pull requests, attaching Jira links, checking whether specific labels are on tickets (or not), metrics, etc. These are really handy, but when I saw that they were only taking a few seconds to run while we were being billed a full minute for each, I did some quick math: ~$25/month is what we were paying for them on just one of our projects. Enter the Pi that I had sitting in my closet. I installed the GitHub Actions Runner package, attached it to our runner group, and moved all our meta tasks to it. Now we run those for pennies in electricity... (makes a note to expense the power bill for a fraction of a penny a month).
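
                  As an illustrative sketch, routing one of those lightweight "meta" jobs to the self-hosted runner is just another runs-on change. The job name and labels below are examples: a self-hosted runner gets the self-hosted, OS, and architecture labels by default, plus any custom labels you add when registering it:


                  jobs:
                    attach-jira-link:
                      # Runs on our self-hosted runner (the Raspberry Pi in the closet);
                      # adjust the labels to match how your runner is registered.
                      runs-on: [self-hosted, linux, ARM64]
                      steps:
                        - run: echo "cheap meta task goes here" # placeholder step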

                  Takeaways

                  In conclusion, the cost of having automation do the heavy lifting can add up, but it doesn't have to break the bank if you monitor it. Some quick investments in cost reduction can pay dividends over time.

                  Of course, if you'd like help with your automation, our phone operators are standing by to take your call and send out a specialized team of optimizers.

                  By Peter Ramsing, Member of the Technical Staff at Synaptiq | LinkedIn | GitHub



                   

                  About Synaptiq

                  Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation. 

                  Contact us if you have a problem to solve, a process to refine, or a question to ask.

                  You can learn more about our story through our past projects, our blog, or our podcast.

                  Additional Reading:

                  How to Safely Get Started with Large Language Models


                  Future-Proof Your Supply Chain: AI Solutions for Extreme Weather

