
Unlock The Power Of Large Language Models with Generative AI

Written by Malika Amoruso | Jun 30, 2023 1:53:33 PM

In the past year, Generative AI, in the form of Large Language Models (LLMs), has emerged as a revolutionary technology for many industries and business functions. These powerful AI models, like OpenAI's GPT-4, are changing the way businesses operate, innovate, and interact with their customers.

By understanding the ingredients of our language (relationships between words and how to generate human-like text), LLMs have already found their place in a wide range of applications. Some might not be “trending” like ChatGPT, but it’s just as important to explore the ethical use cases and the diverse set of recipes for LLMs across industries to understand their potential and impact.


Let’s dig in.

 

Healthcare and Medicine 

Distilling: Simplified Medical Records

Enhanced clinical documentation and communication is one area where LLMs can cut down the “cooking time.” A research study led by Dr. Monica Agarwal at MIT revealed the potential that LLMs hold for streamlining data extraction from medical documents and records [1]. The researchers found that while there are challenges and concerns to be aware of, such as data privacy and the complexity of medical data and clinical notes, LLMs proved promising as a way to distill clinician notes and medical records into easily understood, well-organized data. This supports smoother handoffs of patient information, from primary care through insurance claims, and keeps patients themselves “in the loop” through every step of the diagnostic process.
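For a concrete sense of what this “distilling” can look like, here is a minimal sketch that asks an LLM to turn a clinician note into organized, plain-language fields. It assumes the (pre-1.0) OpenAI Python SDK; the note, prompt, and field names are illustrative assumptions of ours, not the method used in the cited study.

```python
# A minimal sketch of distilling a clinician note into plain-language,
# organized fields. Assumes the (pre-1.0) OpenAI Python SDK and an API key
# in the OPENAI_API_KEY environment variable; the note, prompt, and field
# names are hypothetical examples, not the method from the cited study.
import openai

clinician_note = (
    "58 y/o male presents w/ chest tightness on exertion x2 weeks. "
    "Hx of HTN, on lisinopril 20mg. No SOB at rest. Referred for stress test."
)

prompt = (
    "Rewrite the clinician note below as JSON with the keys 'age', 'sex', "
    "'chief_complaint', 'history', 'medications', and 'plan'. "
    "Use plain language a patient can understand.\n\n"
    f"Note: {clinician_note}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the extraction as deterministic as possible
)

print(response["choices"][0]["message"]["content"])
```

In a real deployment, the output would still be reviewed by clinicians and handled under the data-privacy constraints the study highlights.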

 
Sifting: Model Symptom Assistance

Sometimes it’s too soon to speak to a medical professional, but patients have concerns. That’s where LLMs can “sift” through provided symptoms to suggest a root cause and begin what can be a harrowing process in a less pressurized way. Ada, a software-focused medical device company, offers exactly that: a symptom-checking LLM [2]. Ada is a “white box system,” which means it allows clinicians to see into its decision-making process and understand exactly what leads the model to suggest particular recommendations. Patients can then collaborate and communicate the information they receive from the tool to their clinicians when they are ready.

 

Manufacturing

Electric Mixing: Hyper Automated Robotics on the Production Line 

When used responsibly, LLMs also have powerful applications for manufacturing companies. One of these is the potential to optimize industrial robotics. An article from Forbes reports that ChatGPT (GPT-3.5) and other models were able to generate simple programming commands for robots from natural-language instructions [3]. Should this become scalable with more transparent and well-trained models, this application could optimize everyday manufacturing tasks, allowing engineers and programmers to focus on more complex details. Microsoft is experimenting with human-robot interfacing using ChatGPT, with applications particularly in the service and manufacturing sectors [4]. They hope to make human-interfacing robotics an everyday tool, from the home to the supply chain.
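To make the idea concrete, here is a minimal sketch of translating a plain-English instruction into a small, pre-approved set of robot commands. The command vocabulary, prompt, and model choice are our own illustrative assumptions (using the pre-1.0 OpenAI Python SDK), not the implementation described in the Forbes article or in Microsoft's work.

```python
# A simplified sketch of turning a natural-language instruction into
# primitive robot commands. The command vocabulary, prompt, and model
# are illustrative assumptions, not a production robotics interface.
import openai

ALLOWED_COMMANDS = ["move_to(x, y)", "grip()", "release()", "rotate(degrees)"]

instruction = "Pick up the part at station A and place it on the conveyor."

prompt = (
    "Translate the instruction into a numbered list of commands, using only "
    f"these primitives: {', '.join(ALLOWED_COMMANDS)}.\n\n"
    f"Instruction: {instruction}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

plan = response["choices"][0]["message"]["content"]
print(plan)  # an engineer (or a validator) should review before any robot executes it
```

In practice, the generated plan would be checked against the allowed primitives and reviewed by an engineer before anything touches the production line.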

 
Kneading: Administrative Assistants & Performance Reviews 

For manufacturers and companies large and small, LLMs can lighten the administrative load. They can act as personal assistants that generate copy for emails, process or organize data, and even suggest prompts to make strategy sessions and quality assurance reports more productive [5].

If a manufacturing company is looking to scale up complex processes, LLMs can summarize steps, simplify the documentation that explains those steps, and collect all of the documentation’s data in one place for reorganization and optimization. These strategies will likely increase manufacturing productivity and employee satisfaction by minimizing time spent on repetitive, low-impact, yet necessary tasks.

 

Education

Seasoning: Personalized and Accessible Learning 

Over the past year, the field of education has been split between embracing and banning LLMs, and for good reason. When used unethically or dishonestly, LLMs are likely to inhibit learning and spread misinformation. However, when used responsibly, LLMs have the potential to “season” accessible learning experiences and provide personalized academic support. For example, LLMs can introduce students to different genres and styles of writing in the voice of historical figures, and even help teachers support students learning a second language [6]. LLMs can also help students by facilitating topic research and providing summaries. However, before using LLMs in the classroom, teachers need to be trained in prompt engineering best practices, and should show their students how to spot incorrect statements and “hallucinations.”1

If you want to learn more about large language models finding their way into education, read our blog post here.

 

Common Pitfalls of LLMs and Our “Substitutions”

While large language models offer tantalizing possibilities for innovation and optimization, it's crucial to be aware of the associated challenges and pitfalls. Let’s cover a few of those, alongside strategies to avoid or minimize them. 

“Hallucinations” are one of the most apparent drawbacks, as mentioned above, but they can be addressed through careful prompting (e.g., entering hyper-specific queries that include the desired audience and tone, among other options), through custom fine-tuning of the LLM itself, and by implementing information retrieval. Custom fine-tuning is the process of specializing an LLM for a particular goal, action, or set of data.
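Here is a minimal sketch of the information-retrieval “substitution”: fetch relevant passages first, then instruct the model to answer only from them. The toy keyword retriever, documents, and prompt are illustrative assumptions of ours (using the pre-1.0 OpenAI Python SDK); production systems typically use vector search over a real document store.

```python
# A minimal sketch of grounding an LLM in retrieved passages so it is
# less likely to hallucinate. The toy documents and keyword retriever
# are hypothetical stand-ins for a real search index.
import openai

documents = {
    "returns-policy": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 5-7 business days within the US.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval; a stand-in for a real search index."""
    hits = [text for text in documents.values()
            if any(word in text.lower() for word in query.lower().split())]
    return "\n".join(hits) or "No relevant documents found."

question = "How long do customers have to return an item?"
context = retrieve(question)

prompt = (
    "Answer the question using ONLY the context below. "
    "If the context does not contain the answer, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```

The key design choice is the instruction to answer only from the supplied context, which gives the model permission to say “I don’t know” instead of inventing an answer.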

Problematic data provenance is another common issue with LLMs. Data provenance refers to where data originates, where it travels, and how it is stored. LLMs so far have been notoriously opaque about their data provenance practices and lineages, and there have been recent accusations of LLMs being trained on material that is copyrighted and owned by another person or company. Implementing data provenance best practices and developing a more advanced data architecture can help.

Bias is another problem that can become harmful when dealing with LLMs: models absorb implicit views from their training data, and those views can be harmful or discriminatory [7]. This error is hard to catch during testing, because most model evaluation focuses on precision and accuracy rather than searching for bias. Since LLMs deal with language, bias testing is crucial to prevent the spread of misinformation and discrimination. If bias testing becomes part of model evaluation, and industry standards are set for it, bias will likely become less of a confounding issue.
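As an illustration of what that could look like, here is a minimal sketch of a bias check: prompt the model with inputs that differ only in a demographic term and compare the responses. The template, groups, and model are illustrative assumptions of ours (using the pre-1.0 OpenAI Python SDK), not an established industry test suite.

```python
# A minimal sketch of adding a bias check to model evaluation: send the
# model prompts that differ only in a demographic term and compare the
# responses. The template and groups are illustrative assumptions.
import openai

template = "Write a one-sentence performance review for a {group} engineer."
groups = ["male", "female", "nonbinary"]

responses = {}
for group in groups:
    result = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": template.format(group=group)}],
        temperature=0,
    )
    responses[group] = result["choices"][0]["message"]["content"]

# Surface the paired outputs for human review; fuller evaluations would
# also apply automated metrics (e.g., sentiment or toxicity scores)
# across many templates and runs.
for group, text in responses.items():
    print(f"{group}: {text}")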

In short, Large Language Models have the power to transform industries and business functions, drive innovation, enhance productivity, and reshape customer and user experiences. From healthcare and manufacturing to education and more, the versatility of LLMs is apparent. But leveraging the capabilities of LLMs responsibly and ethically adds complexity, so it’s important to be hyper-aware of that before you unlock new possibilities, optimize, and soar to new heights. 

 

Ready to implement, but not sure where to begin?

 

Yes, Chef. 

Synaptiq uses Generative AI daily; it’s our bread and butter. We use it internally for research, quick answers, and prompt testing. We’ve also integrated LLMs into our ‘dough layers’, if you will, and fine-tune them as part of our AIQ™ methodology, which is where most of our projects begin. We not only build intelligent products; we help companies identify viable and valuable AI opportunities, many of which include Generative AI technology like LLMs. That is to say: we don’t just reach into your pantry, cook something up, and see if it tastes good. We start by understanding your business and your current state, then work with you to design your future vision. And we use an AI roadmap to get you there.

We’re committed to 3 overarching pillars at Synaptiq: our work must always be in support of the Health of People, the Health of Planet, or the Health of Business. Large Language Models hold an incredible amount of potential for impact in all of these areas. You’ll find us in our craft kitchen, developing recipes for solutions to well-understood problems within all these pillars. We’d love for you to send us your recipe ideas, or to share some of ours with you.

 

1Hallucinations refer to when LLMs give realistic answers that are actually false. Examples of hallucinations include links that seem plausible or references to studies that do not exist.

 

Photo by Joanna Boj on Unsplash

 

About Synaptiq

Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation. 

Contact us if you have a problem to solve, a process to refine, or a question to ask.

You can learn more about our story through our past projects, blog, or podcast.