#custom metrics
Explore tagged Tumblr posts
jcmarchi · 5 months ago
Text
Tracking Large Language Models (LLMs) with MLflow: A Complete Guide
New Post has been published on https://thedigitalinsider.com/tracking-large-language-models-llm-with-mlflow-a-complete-guide/
As Large Language Models (LLMs) grow in complexity and scale, tracking their performance, experiments, and deployments becomes increasingly challenging. This is where MLflow comes in – providing a comprehensive platform for managing the entire lifecycle of machine learning models, including LLMs.
In this in-depth guide, we’ll explore how to leverage MLflow for tracking, evaluating, and deploying LLMs. We’ll cover everything from setting up your environment to advanced evaluation techniques, with plenty of code examples and best practices along the way.
Functionality of MLflow in Large Language Models (LLMs)
MLflow has become a pivotal tool in the machine learning and data science community, especially for managing the lifecycle of machine learning models. When it comes to Large Language Models (LLMs), MLflow offers a robust suite of tools that significantly streamline the process of developing, tracking, evaluating, and deploying these models. Here’s an overview of how MLflow functions within the LLM space and the benefits it provides to engineers and data scientists.
Tracking and Managing LLM Interactions
MLflow’s LLM tracking system is an enhancement of its existing tracking capabilities, tailored to the unique needs of LLMs. It allows for comprehensive tracking of model interactions, including the following key aspects:
Parameters: Logging key-value pairs that detail the input parameters for the LLM, such as model-specific parameters like top_k and temperature. This provides context and configuration for each run, ensuring that all aspects of the model’s configuration are captured.
Metrics: Quantitative measures that provide insights into the performance and accuracy of the LLM. These can be updated dynamically as the run progresses, offering real-time or post-process insights.
Predictions: Capturing the inputs sent to the LLM and the corresponding outputs, which are stored as artifacts in a structured format for easy retrieval and analysis.
Artifacts: Beyond predictions, MLflow can store various output files such as visualizations, serialized models, and structured data files, allowing for detailed documentation and analysis of the model’s performance.
This structured approach ensures that all interactions with the LLM are meticulously recorded, providing comprehensive lineage and quality tracking for text-generating models.
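To illustrate the dynamic metric updates mentioned above, here is a minimal sketch of logging parameters once and then updating a metric step by step as a run progresses (the run name, parameter values, and latencies are illustrative assumptions, not from the original article):

import mlflow

with mlflow.start_run(run_name="llm-generation-demo"):  # illustrative run name
    # Log the generation parameters once at the start of the run
    mlflow.log_params({"temperature": 0.7, "top_k": 50, "max_tokens": 256})

    # Update a metric step by step as batches of prompts are processed
    for step, latency_s in enumerate([1.2, 0.9, 1.1]):  # placeholder latencies
        mlflow.log_metric("batch_latency_seconds", latency_s, step=step)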
Evaluation of LLMs
Evaluating LLMs presents unique challenges due to their generative nature and the lack of a single ground truth. MLflow simplifies this with specialized evaluation tools designed for LLMs. Key features include:
Versatile Model Evaluation: Supports evaluating various types of LLMs, whether it’s an MLflow pyfunc model, a URI pointing to a registered MLflow model, or any Python callable representing your model.
Comprehensive Metrics: Offers a range of metrics tailored for LLM evaluation, including both SaaS model-dependent metrics (e.g., answer relevance) and function-based metrics (e.g., ROUGE, Flesch Kincaid).
Predefined Metric Collections: Depending on the use case, such as question-answering or text-summarization, MLflow provides predefined metrics to simplify the evaluation process.
Custom Metric Creation: Allows users to define and implement custom metrics to suit specific evaluation needs, enhancing the flexibility and depth of model evaluation.
Evaluation with Static Datasets: Enables evaluation of a static dataset without specifying a model, which is useful for quick assessments without rerunning model inference (a minimal sketch of this option follows this list).
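Here is a minimal, hedged sketch of that static-dataset option, assuming your prompts, ground truths, and previously generated outputs are already collected in a DataFrame (the column names and example strings are assumptions, not from the original article):

import mlflow
import pandas as pd

# Previously generated outputs, scored without re-running the model
eval_data = pd.DataFrame(
    {
        "inputs": ["What is MLflow?"],
        "ground_truth": ["MLflow is an open-source platform for the ML lifecycle."],
        "outputs": ["MLflow is an open-source platform for managing the ML lifecycle."],
    }
)

results = mlflow.evaluate(
    data=eval_data,
    model=None,                 # no model: score the stored outputs directly
    predictions="outputs",      # column holding the model outputs
    targets="ground_truth",
    model_type="question-answering",
)
print(results.metrics)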
Deployment and Integration
MLflow also supports seamless deployment and integration of LLMs:
MLflow Deployments Server: Acts as a unified interface for interacting with multiple LLM providers. It simplifies integrations, manages credentials securely, and offers a consistent API experience. This server supports a range of foundational models from popular SaaS vendors as well as self-hosted models.
Unified Endpoint: Facilitates easy switching between providers without code changes, minimizing downtime and enhancing flexibility.
Integrated Results View: Provides comprehensive evaluation results, which can be accessed directly in the code or through the MLflow UI for detailed analysis.
MLflow's comprehensive suite of tools and integrations makes it an invaluable asset for engineers and data scientists working with advanced NLP models.
Setting Up Your Environment
Before we dive into tracking LLMs with MLflow, let’s set up our development environment. We’ll need to install MLflow and several other key libraries:
pip install "mlflow>=2.8.1"
pip install openai
pip install chromadb==0.4.15
pip install langchain==0.0.348
pip install tiktoken
pip install 'mlflow[genai]'
pip install databricks-sdk --upgrade
After installation, it’s a good practice to restart your Python environment to ensure all libraries are properly loaded. In a Jupyter notebook, you can use:
import mlflow
import chromadb

print(f"MLflow version: {mlflow.__version__}")
print(f"ChromaDB version: {chromadb.__version__}")
This will confirm the versions of key libraries we’ll be using.
Understanding MLflow’s LLM Tracking Capabilities
MLflow’s LLM tracking system builds upon its existing tracking capabilities, adding features specifically designed for the unique aspects of LLMs. Let’s break down the key components:
Runs and Experiments
In MLflow, a “run” represents a single execution of your model code, while an “experiment” is a collection of related runs. For LLMs, a run might represent a single query or a batch of prompts processed by the model.
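As a quick illustration of that relationship, you can group several prompt runs under one experiment as in the following sketch (the experiment name, run names, and prompts are assumptions, not from the original article):

import mlflow

# An experiment groups related runs; a run captures one execution (e.g., one prompt)
mlflow.set_experiment("llm-prompt-experiments")  # illustrative experiment name

for i, prompt in enumerate(["Summarize MLflow.", "Explain experiment tracking."]):
    with mlflow.start_run(run_name=f"prompt-run-{i}"):
        mlflow.log_param("prompt", prompt)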
Key Tracking Components
Parameters: These are input configurations for your LLM, such as temperature, top_k, or max_tokens. You can log these using mlflow.log_param() or mlflow.log_params().
Metrics: Quantitative measures of your LLM’s performance, like accuracy, latency, or custom scores. Use mlflow.log_metric() or mlflow.log_metrics() to track these.
Predictions: For LLMs, it’s crucial to log both the input prompts and the model’s outputs. MLflow stores these as structured table artifacts using mlflow.log_table().
Artifacts: Any additional files or data related to your LLM run, such as model checkpoints, visualizations, or dataset samples. Use mlflow.log_artifact() to store these.
Let’s look at a basic example of logging an LLM run:
This example demonstrates logging parameters, metrics, and the input/output as a table artifact.
import mlflow
import openai

def query_llm(prompt, max_tokens=100):
    # Call the OpenAI completions API (openai<1.0 style, as used in the original article)
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=max_tokens,
    )
    return response.choices[0].text.strip()

with mlflow.start_run():
    prompt = "Explain the concept of machine learning in simple terms."

    # Log parameters
    mlflow.log_param("model", "text-davinci-002")
    mlflow.log_param("max_tokens", 100)

    # Query the LLM and log the result
    result = query_llm(prompt)
    mlflow.log_metric("response_length", len(result))

    # Log the prompt and response as a table artifact
    mlflow.log_table(
        data={"prompt": [prompt], "response": [result]},
        artifact_file="prompt_responses.json",
    )

    print(f"Response: {result}")
Deploying LLMs with MLflow
MLflow provides powerful capabilities for deploying LLMs, making it easier to serve your models in production environments. Let’s explore how to deploy an LLM using MLflow’s deployment features.
Creating an Endpoint
First, we’ll create an endpoint for our LLM using MLflow’s deployment client:
import mlflow
from mlflow.deployments import get_deploy_client

# Initialize the deployment client
client = get_deploy_client("databricks")

# Define the endpoint configuration
endpoint_name = "llm-endpoint"
endpoint_config = {
    "served_entities": [
        {
            "name": "gpt-model",
            "external_model": {
                "name": "gpt-3.5-turbo",
                "provider": "openai",
                "task": "llm/v1/completions",
                "openai_config": {
                    "openai_api_type": "azure",
                    "openai_api_key": "{{secrets/scope/openai_api_key}}",
                    "openai_api_base": "{{secrets/scope/openai_api_base}}",
                    "openai_deployment_name": "gpt-35-turbo",
                    "openai_api_version": "2023-05-15",
                },
            },
        }
    ],
}

# Create the endpoint
client.create_endpoint(name=endpoint_name, config=endpoint_config)
This code sets up an endpoint for a GPT-3.5-turbo model using Azure OpenAI. Note the use of Databricks secrets for secure API key management.
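If you still need to create those secrets, a minimal sketch using the databricks-sdk installed earlier might look like this (the scope and key names are assumptions chosen to match the config above; adapt them to your workspace):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # reads credentials from your Databricks config/environment

# Create a secret scope and store the OpenAI credentials referenced by the endpoint config
w.secrets.create_scope(scope="scope")
w.secrets.put_secret(scope="scope", key="openai_api_key", string_value="<your-azure-openai-key>")
w.secrets.put_secret(scope="scope", key="openai_api_base", string_value="https://<resource>.openai.azure.com/")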
Testing the Endpoint
Once the endpoint is created, we can test it:
response = client.predict(
    endpoint=endpoint_name,
    inputs={
        "prompt": "Explain the concept of neural networks briefly.",
        "max_tokens": 100,
    },
)
print(response)
This will send a prompt to our deployed model and return the generated response.
Evaluating LLMs with MLflow
Evaluation is crucial for understanding the performance and behavior of your LLMs. MLflow provides comprehensive tools for evaluating LLMs, including both built-in and custom metrics.
Preparing Your LLM for Evaluation
To evaluate your LLM with mlflow.evaluate(), your model needs to be in one of these forms:
An mlflow.pyfunc.PyFuncModel instance or a URI pointing to a logged MLflow model.
A Python function that takes string inputs and outputs a single string (a short sketch of this option follows the list).
An MLflow Deployments endpoint URI.
A static dataset, with model=None and the model outputs already included in the evaluation data.
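For the plain-Python-function option, a minimal sketch might look like the following (the echo-style function, column names, and example strings are illustrative assumptions, not from the original article):

import mlflow
import pandas as pd

def my_llm(inputs):
    # Stand-in for a real model call: return one answer string per input row
    return [f"Answer to: {question}" for question in inputs["question"]]

eval_data = pd.DataFrame(
    {
        "question": ["What is MLflow?"],
        "ground_truth": ["MLflow is an open-source platform for the ML lifecycle."],
    }
)

results = mlflow.evaluate(
    model=my_llm,
    data=eval_data,
    targets="ground_truth",
    model_type="question-answering",
)
print(results.metrics)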
Let’s look at an example using a logged MLflow model:
import mlflow
import openai
import pandas as pd

with mlflow.start_run():
    system_prompt = "Answer the following question concisely."
    logged_model_info = mlflow.openai.log_model(
        model="gpt-3.5-turbo",
        task=openai.chat.completions,
        artifact_path="model",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "{question}"},
        ],
    )

    # Prepare evaluation data
    eval_data = pd.DataFrame(
        {
            "question": ["What is machine learning?", "Explain neural networks."],
            "ground_truth": [
                "Machine learning is a subset of AI that enables systems to learn and improve from experience without explicit programming.",
                "Neural networks are computing systems inspired by biological neural networks, consisting of interconnected nodes that process and transmit information.",
            ],
        }
    )

    # Evaluate the model
    results = mlflow.evaluate(
        logged_model_info.model_uri,
        eval_data,
        targets="ground_truth",
        model_type="question-answering",
    )

    print(f"Evaluation metrics: {results.metrics}")
This example logs an OpenAI model, prepares evaluation data, and then evaluates the model using MLflow’s built-in metrics for question-answering tasks.
Custom Evaluation Metrics
MLflow allows you to define custom metrics for LLM evaluation. Here’s an example of creating a custom metric for evaluating the professionalism of responses:
from mlflow.metrics.genai import EvaluationExample, make_genai_metric

professionalism = make_genai_metric(
    name="professionalism",
    definition="Measure of formal and appropriate communication style.",
    grading_prompt=(
        "Score the professionalism of the answer on a scale of 0-4:\n"
        "0: Extremely casual or inappropriate\n"
        "1: Casual but respectful\n"
        "2: Moderately formal\n"
        "3: Professional and appropriate\n"
        "4: Highly formal and expertly crafted"
    ),
    examples=[
        EvaluationExample(
            input="What is MLflow?",
            output="MLflow is like your friendly neighborhood toolkit for managing ML projects. It's super cool!",
            score=1,
            justification="The response is casual and uses informal language.",
        ),
        EvaluationExample(
            input="What is MLflow?",
            output="MLflow is an open-source platform for the machine learning lifecycle, including experimentation, reproducibility, and deployment.",
            score=4,
            justification="The response is formal, concise, and professionally worded.",
        ),
    ],
    model="openai:/gpt-3.5-turbo-16k",
    parameters={"temperature": 0.0},
    aggregations=["mean", "variance"],
    greater_is_better=True,
)

# Use the custom metric in evaluation
results = mlflow.evaluate(
    logged_model_info.model_uri,
    eval_data,
    targets="ground_truth",
    model_type="question-answering",
    extra_metrics=[professionalism],
)

print(f"Professionalism score: {results.metrics['professionalism_mean']}")
This custom metric uses GPT-3.5-turbo to score the professionalism of responses, demonstrating how you can leverage LLMs themselves for evaluation.
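Beyond the aggregate score, you can inspect per-row outputs, scores, and justifications. As a hedged sketch, recent MLflow versions expose the row-level results as a table on the results object (the "eval_results_table" key is an assumption based on MLflow's default naming):

# Per-row outputs, scores, and justifications from the evaluation above
eval_table = results.tables["eval_results_table"]
print(eval_table.columns.tolist())  # includes per-row scores and justifications for each metric
print(eval_table.head())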
Advanced LLM Evaluation Techniques
As LLMs become more sophisticated, so do the techniques for evaluating them. Let’s explore some advanced evaluation methods using MLflow.
Retrieval-Augmented Generation (RAG) Evaluation
RAG systems combine the power of retrieval-based and generative models. Evaluating RAG systems requires assessing both the retrieval and generation components. Here’s how you can set up a RAG system and evaluate it using MLflow:
import mlflow
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Load and preprocess documents
loader = WebBaseLoader(["https://mlflow.org/docs/latest/index.html"])
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

# Create RAG chain
llm = OpenAI(temperature=0)
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)

# Evaluation function
def evaluate_rag(question):
    result = qa_chain({"query": question})
    return result["result"], [doc.page_content for doc in result["source_documents"]]

# Prepare evaluation data
eval_questions = [
    "What is MLflow?",
    "How does MLflow handle experiment tracking?",
    "What are the main components of MLflow?",
]

# Evaluate using MLflow
with mlflow.start_run():
    source_counts = []
    for i, question in enumerate(eval_questions):
        answer, sources = evaluate_rag(question)
        source_counts.append(len(sources))
        mlflow.log_param(f"question_{i}", question)
        mlflow.log_metric(f"num_sources_{i}", len(sources))
        mlflow.log_text(answer, f"answer_question_{i}.txt")
        for j, source in enumerate(sources):
            mlflow.log_text(source, f"source_question_{i}_{j}.txt")

    # Log custom metrics
    mlflow.log_metric("avg_sources_per_question", sum(source_counts) / len(source_counts))
This example sets up a RAG system using LangChain and Chroma, then evaluates it by logging questions, answers, retrieved sources, and custom metrics to MLflow.
The way you chunk your documents can significantly impact RAG performance. MLflow can help you evaluate different chunking strategies:
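A minimal sketch of such a comparison, assuming the same MLflow docs page from the RAG example and a simple keyword-hit check standing in for a full retrieval-quality metric (the parameter grid and query below are illustrative assumptions, not from the original article):

import mlflow
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

documents = WebBaseLoader(["https://mlflow.org/docs/latest/index.html"]).load()
embeddings = OpenAIEmbeddings()
query = "What are the main components of MLflow?"

splitters = {
    "character": CharacterTextSplitter,
    "recursive": RecursiveCharacterTextSplitter,
}

for splitter_name, splitter_cls in splitters.items():
    for chunk_size in [500, 1000, 2000]:
        for chunk_overlap in [0, 100]:
            with mlflow.start_run(run_name=f"{splitter_name}-{chunk_size}-{chunk_overlap}"):
                mlflow.log_params(
                    {
                        "splitter": splitter_name,
                        "chunk_size": chunk_size,
                        "chunk_overlap": chunk_overlap,
                    }
                )
                texts = splitter_cls(
                    chunk_size=chunk_size, chunk_overlap=chunk_overlap
                ).split_documents(documents)
                vectorstore = Chroma.from_documents(texts, embeddings)
                retrieved = vectorstore.similarity_search(query, k=3)

                # Crude proxy metric: how many retrieved chunks mention the query keyword
                hits = sum("component" in doc.page_content.lower() for doc in retrieved)
                mlflow.log_metric("num_chunks", len(texts))
                mlflow.log_metric("keyword_hits_at_3", hits)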
This script evaluates different combinations of chunk sizes, overlaps, and splitting methods, logging the results to MLflow for easy comparison.
MLflow provides various ways to visualize your LLM evaluation results. Here are some techniques:
You can create custom visualizations of your evaluation results using libraries like Matplotlib or Plotly, then log them as artifacts:
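Here is a minimal sketch of such a helper, assuming Matplotlib and a handful of finished runs in one experiment (the experiment lookup, output file name, and example call are assumptions, not from the original article):

import matplotlib.pyplot as plt
import mlflow

def plot_metric_across_runs(experiment_name, metric_name, output_file="metric_comparison.png"):
    # Collect the metric value from every run in the experiment
    experiment = mlflow.get_experiment_by_name(experiment_name)
    runs = mlflow.search_runs(experiment_ids=[experiment.experiment_id])
    runs = runs.dropna(subset=[f"metrics.{metric_name}"])

    # Build a line plot of the metric across runs
    plt.figure(figsize=(8, 4))
    plt.plot(runs["start_time"], runs[f"metrics.{metric_name}"], marker="o")
    plt.xlabel("Run start time")
    plt.ylabel(metric_name)
    plt.title(f"{metric_name} across runs")
    plt.tight_layout()
    plt.savefig(output_file)

    # Log the figure as an artifact of a new comparison run
    with mlflow.start_run(run_name="metric-comparison"):
        mlflow.log_artifact(output_file)

# Example usage (names assumed): plot_metric_across_runs("llm-prompt-experiments", "response_length")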
This function creates a line plot comparing a specific metric across multiple runs and logs it as an artifact.
0 notes
shopwitchvamp · 8 months ago
Note
ive bought a bunch of your joggers and skirts and i have figured out that i can fit my 36 pack metal tin of prismacolor watercolors into one of the pockets of the joggers and am using this newfound power to bring more of my art supplies to work 💜 ᓚᘏᗢ
Ohh, nice!! I love that, haha
25 notes · View notes
yesmissnyx · 1 year ago
Note
If i could only get 1 of your gumroads, which one should i get between;
"By my perfect cockslut femdom pegging joi"
"cum for me- ordering you to cum"
"cum like a girl"
Or a better way to ask is which one was your favorite to record =)
Ohhh...I think I have to choose Cum like a Girl because the idea of it gets *me* super turned on. I can't help it, I'm a slut for chastity and feminization. Making someone cum from just using a vibrator on their caged cock? That's it, that's the stuff.
But...I also really love recording POV/RP stuff like Be My Perfect Cockslut 😈 My Dommy Mommy Girlfriend Voice is really good in that one.
Cum for Me is fun, but it shines most as a bonus to any of my edging JOIs.
Hope that helps you decide 😘!
30 notes · View notes
yes-armageddon-it · 2 years ago
Text
58 notes · View notes
merrysithmas · 1 year ago
Text
my FAVORITE all time inane AU formula is "help something happened to santa and SOS somehow the cast of _____ has to deliver all the presents to the entire world/universe in one night"
and so now im picturing the SNW enterprise crew doing so w/kirk and Santa as some ubiquitous benevolent good tidings space entity
32 notes · View notes
insidiousclouds · 9 months ago
Text
Selling my art as a teenager really killed my passion for it. I don't want to draw anymore because it feels like work.
9 notes · View notes
cassiopeialake · 1 month ago
Text
no one wanted this ai podcast shit just give us top genres back 🙏
4 notes · View notes
thelasttemptationsoflee · 1 month ago
Text
Playing with her remote controlled vibrator and making her cum like a mindless whore on company time.
3 notes · View notes
istherewifiinhell · 1 year ago
Text
too loud too tired too much textures and smells 👍
4 notes · View notes
readmarkclippastecollage · 8 months ago
Text
For us Americans, that converts to an ass load of weed.
Items People Tried To Sneak Through Customs
A wooden door stuffed with cocaine
Frogs in a film canister
Cocaine disguised as candy
Cats filled with opium
Snake in a clay pot
A gecko in a false book
A metric ton of marijuana as a donkey
SOURCE & MORE IMAGES
198K notes · View notes
captain-daryn · 9 days ago
Text
I wanted to share a story, but my story telling skills are terrible so bear with me lol. I was at work tonight, and me and my coworkers were all hanging around waiting for the store to close in five minutes so we could go home. I heard a loud older man's voice call down to us from the other end of the aisle, "Can someone help?!"
Oh boy, here goes a grumpy boomer I thought to myself.
Turns out he was purchasing a new vacuum for his wife and the one he wanted to get was an online-only item. He was frustrated and thought he had wasted a drive down because we didn't have it in store.
I do some digging, nope, none of our other stores have it either. So I put it into our ordering system and it says it won't be here until the 15th at the soonest (two weeks from now). Great. He's not happy but he says he will live with it if he has to. It's like 5 minutes after we are closed now.
Before I confirm everything, he says he wanted to open a new credit card through us. No biggie, I get credit for doing so, and it only takes a few minutes.
We finish the sign up, it's now like 10 minutes after we closed. it's New Year's Day, I had like 5 hours of sleep at most in the last 24 hours at this point, I wanna go home, but I'm putting on my customer service act and smiling and being polite for him.
I go grab his credit card paperwork and I'm able to surprise him with a coupon for $100 off his purchase (the coupon comes with every new credit card opened, but the amount off varies based on the purchase amount). And I'll be honest I forgot that the coupon was a thing bc I was so dead tired, so it was a nice surprise for me too.
Not to mention he is also a veteran, and we offer a 10% discount at my store for vets, which automatically applied to his purchase when I put in his info. So, at the end of the day, he got a $600 vacuum for like $400 after tax. So he was happy.
As the paperwork printed, I realized the delivery was actually bumped up to the 4th! So it's actually going to get here in like 3 days instead of 14!
At the end of the interaction though, he asked to see a manager, and I thought oh no I must have done something wrong.
Call my manager, wait for a minute, no luck. I find my other manager and she came out to speak with him.
"I have a complaint id like to inform you about. She did a *fantastic* job tonight. Ran right over to help me...." and basically told the manager about our interaction.
Since we were now closed though, I offered to walk him up to the entrance to make sure he got out okay. He told me how even with a full time job with his degree back in the day, he still had to work a part time job at JC Penny's call center in order to afford caring for his 4 kids and wife. He said in the 6 years he worked there, only 10 people had ever taken the chance to call back and let a manager know they appreciated his help. So now he makes sure to always tell a manager when he has had a great experience anywhere, because he knows how much it means to the worker at the end of the day.
By the time he was out the door, it was about 25ish minutes after we closed, but I didn't mind because I knew I had done something good for an older gentleman, and he in return did something good for me.
Idk if there's a moral to this story, but it made me feel good that he wanted to pass that along. I think we take for granted the help people are able to give us day-to-day. A little gratitude really goes a long way, especially in jobs like retail or food. And I think some patience from everybody is appreciated.
So maybe take an extra minute, and thank the person who just helped you. You might turn their shitty long day into a pretty good evening.
1 note · View note
champstorymedia · 18 days ago
Text
Social Media Metrics That Matter: Measuring the Impact on Your Business
In today's digital landscape, social media has evolved beyond a platform for personal interaction; it has become a crucial component for businesses striving to grow and engage with their audience. Understanding the right social media metrics that matter is essential for measuring the impact on your business,…
0 notes
thebrandarchitect · 28 days ago
Text
1 note · View note
amplexadversary · 1 month ago
Text
I wonder if a really dedicated collection of book nerds could get those Elaine Duillo style cover illustrations a foothold in the publishing industry again. There are certainly enough artists who can achieve that level of intricacy that a really really popular Trend might be able to do it.
Perhaps any of those bookbinding hobbyists might want to try to go pro and pair up with an artist to refurbish something well enough to hook the really rich art snobs into buying unique, custom pieces for a fuckton of money.
#ignore Morg#It would need to be a book that's extremely popular but too new to really be getting special collector's editions#someone *really* fast might be able to pull it off with a copy of Wicked#I don't know the exact legal situation for selling refurbished books but I think at most you'd need a deal with a used bookseller to be saf#Donating some custom pieces to libraries might garner interest as well#I know that there's usually going to be a subset of hobbyists that at least want to try going professional#and I think this would be both really funny and really good for the economy if it worked and became a Thing#because there's nothing the corpos love more than a trend#and pulling any of them away from the race to the bottom is a very good thing#if nothing else putting artists in a more favorable position will get circulation up and that's the thing that's really good#because the same money is then benefiting many more people#Like. I am a biologist not an economist but I know enough about the subject to understand#that the people cooking the metaphorical pizza are doing a bad job.#It tastes wrong. And different methods are necessary to make a better one.#social issues#kind of#It's clear that social progress going forward is likely going to rely on convincing people who know fuckall about politics#with arguments about the economy. which would likely be best accomplished by pushing circulation HARD as a metric#and using the income of artists as a measure of economic health. Because the fuckalls are only going to listen to the mystical *economyyyyy#Like a fucking oracle or something#So pushing circulation as an easy-to-understand concept and doing it harder than the conservatives do the ''trickle down'' shtick#is probably the best move in general#Hell the argument even flows well with surface logic -#- do you just want a trickle getting through or do you want the whole system circulating? Make it a metaphor about meemaw's heart#I am fucking rambling in the tags but as bad as I am at actually talking to people I am pretty good at picking approaches through writing#So if anyone more persuasive than me wants to start working that angle I would be THRILLED
0 notes
strategichannah · 2 months ago
Text
How to Keep Your Audience Engaged Using Interactive Content
💥 Want to stand out? Learn how to engage your audience with interactive content! From polls to quizzes, make your content memorable. #InteractiveContent #AudienceEngagement
How to Keep Your Audience Engaged Using Interactive Content Written By: that Hannah Jones Time to Read: 5 minutes In a digital age flooded with content, it’s no longer enough to post static images or blog articles and hope for engagement. The modern consumer craves active involvement, and interactive content can give your brand that edge. According to the Content Marketing Institute,…
0 notes
lucyklay · 3 months ago
Text
0 notes