By: Peter Ramsing | Jul 17, 2024
Assuming your CI runs on runners that you pay for by the minute, here are some thoughts and tips.
1. Cancel non-critical runs if a new run comes in.
Often, I'll write some code, push my change, and then start reading the PR and find things to fix... so I fix them and push a new change. Suddenly, there are two runs going for my PR. Why not just cancel the previous one automatically so you don't have to pay for the remainder of that run?
Here's how we do this in GitHub Actions:
# Cancel any in-progress run for the same workflow and PR (or branch) when a new one starts
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
Tradeoff: If you do this for your critical tasks, such as e2e on merges to main, there's a risk that you'll miss which commit caused an action to start failing. However, this happens rarely, and you can always go back and manually re-run canceled jobs, so we've never found it to be an issue.
2. Use labels to hold back actions that are not needed yet, or at all.
We use labels on our Pull Requests in GitHub, like ci-skip or e2e-skip, to indicate what CI should run on a branch. We also hold all tests on draft PRs so that we can get early reviews without paying for those runs. These labels are also handy for PRs that don't need any tests run at all. For example, you don't need to run your full suite of e2e tests to fix a spelling error in your README.md.
Here's how we do this in GitHub Actions:
jobs:
  tests:
    # Skip this job for draft PRs or when the 'e2e Skip' label is applied
    if: ${{ !github.event.pull_request.draft && !contains(github.event.pull_request.labels.*.name, 'e2e Skip') }}
3. Use bigger runners to reduce run time.
It seems counterintuitive, but having runners of various sizes has given us the flexibility to optimize our runs. We've seen runs that took 17 minutes on a small runner finish in 5 minutes on a bigger runner that can do more at once. Even if you're paying double per minute, 5 minutes at twice the rate still costs less than 17 minutes at the base rate, so you come out ahead.
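For reference, switching a job onto a bigger machine is just a change to its runs-on value. The label below is a placeholder, since larger GitHub-hosted runners use whatever name your organization assigned when setting them up:
jobs:
  e2e:
    # Placeholder label for a larger runner; substitute the name your
    # organization gave its bigger GitHub-hosted or self-hosted runner
    runs-on: ubuntu-latest-8-cores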
4. Balance human labor vs. computer labor.
Often, I'll look at our GitHub costs and shudder at what feels like a big bill, but let's not forget the opportunity cost. Machines cost fractions of a penny per minute, while humans can cost well over a dollar per minute. It can pay dividends to invest in automating actions rather than having humans do them. The long-term gains become obvious once you calculate how much Human Actions cost versus GitHub Actions. Prioritize wisely.
5. Buy a Raspberry Pi (or put that old computer in the closet to work).
We run a lot of "meta" type actions, like assigning pull requests, attaching Jira links, checking whether specific labels are on tickets (or not), collecting metrics, etc. These are really handy, but when I saw that they were only taking a few seconds to run while we were being billed for a full minute each, I did some quick math: we were paying ~$25/month for them on just one of our projects. Enter the Pi that I had sitting in my closet. I installed the GitHub Actions runner package, attached it to our runner group, and moved all our meta tasks to it. Now we run those for pennies in electricity... (makes a note to expense the power bill for a fraction of a penny a month).
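For reference, here's a minimal sketch of routing a job to a machine like that, assuming the Pi was registered as a self-hosted runner with a custom label (the meta-pi label here is hypothetical):
jobs:
  assign-reviewer:
    # 'self-hosted' plus a custom label steer this job to the Pi runner
    runs-on: [self-hosted, meta-pi]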
In conclusion, having automation do the heavy lifting can add up, but it doesn't have to break the bank if you monitor it. A few quick investments in cost reduction can pay dividends over time.
Of course, if you want us to help you with your automation, our phone operators are standing by to take your call to send out a specialized team of optimizers.
By Peter Ramsing, Member of the Technical Staff at Synaptiq | LinkedIn | GitHub
Photo by Firdouss Ross on Unsplash
Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation.
Contact us if you have a problem to solve, a process to refine, or a question to ask.
You can learn more about our story through our past projects, our blog, or our podcast.