# LLM Plugins
Text
The Future of Information Gathering with Large Language Models (LLMs)
The Future of Information Gathering: Large Language Models and Their Role in Data Access

What’s On My Mind Today? I’ve been doing a lot of thinking about the future, especially about how we find and use information on the internet. I had some ideas about what might change and what it could mean for all of us. To help me flesh out these ideas, I worked with a super-smart 🙂 computer program…
#AI and Digital Transformation#Artificial Intelligence in Data#Data Access with AI#Digital Advertising and AI#Future of Information Gathering#Future of Web Browsing#Internet Privacy and AI#large language models#LLM Plugins#User Control Over Data
0 notes
Text
Master Willem was right, evolution without courage will be the end of our race.
#bloodborne quote but its so appropriate#like big healing church vibes for them tech leaders#who are all shiny eyed diving headfirst into the utopic future#nevermind the fact that the dangers are only rlly obscured by their realism#the dichotomy of this ai will change the world make the world perfect thats what we want#and they wanna get it there#but theyre not there#the chatbot is the thing they have to show for it#granted when u use plugins and let a bunch of llms do shit together#u see the actual inscrutable magic that they can make happen#and that magic is also so threatening#but were all blinded to it bc we can only rlly acknowledge the watered down simplified realistic view of the reality#which is cynical silly and ridiculous bc reality itself is one but so subjective bc we only have each persons interpretation#and its all so socially constructed#so the utopia becomes the dream of these tech leaders#and they wanna make it real#and its almost there#but the nightmare?#the dystopic future#they dont acknowledge#bc they dont dream or try to make it happen#and then theres just the basic reality#but its all together#just bc utopia is unlikely but possible#doesnt mean that having the tech for it#will lead to that#bc the system is the same#weve kept innovating non-stop and yeah so much commodification and quality of life#but not acc changing anything
92K notes
·
View notes
Text
Learn about Microsoft Security Copilot
Microsoft Security Copilot (Security Copilot) is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale, while remaining compliant with responsible AI principles. Introducing Microsoft Security Copilot: Learn how Microsoft Security Copilot works. Learn how Security Copilot combines an…
View On WordPress
#AI#assistive copilot#copilot#Defender#Develop whats next#Developer#development#generative AI#Getting Started#incident response#intelligence gathering#intune#investigate#kusto query language#Large language model#llm#Microsoft Entra#natural language#OpenAI#plugin#posture management#prompt#Security#security copilot#security professional#Sentinel#threat#Threat Intelligence#What is New ?
0 notes
Text
Interesting BSD-licensed WordPress AI plugin…
"SuperEZ AI SEO Wordpress Plugin A Wordpress plugin that utilizes the power of OpenAI GPT-3/GPT-4 API to generate SEO content for your blog or page posts. This Wordpress plugin serves as a personal AI assistant to help you with content ideas and creating content. It also allows you to add Gutenberg blocks to the editor after the assistant generates the content."
g023/SuperEZ-AI-SEO-Wordpress-Plugin: A Wordpress OpenAI API GPT-3/GPT-4 SEO and Content Generator for Pages and Posts (github.com)
#wordpress#wordpress plugins#ai#ai assisted#content creator#content creation#ai generation#wp#blog#ai writing#virtual assistant#llm#app developers#opensource
0 notes
Text
Obsidian And RTX AI PCs For Advanced Large Language Models
How to use Obsidian’s generative AI tools: two community-created plug-ins demonstrate how RTX AI PCs can support large language models for the next generation of app developers.
What Is Obsidian?
Obsidian is a note-taking and personal knowledge base application that works with Markdown files. It lets users create internal links between notes and visualize those relationships as a graph, and it is designed to help users structure and organize their ideas flexibly and non-linearly. The program is free for personal use; commercial licenses are available for purchase.
Obsidian Features
Obsidian is built on Electron, making it a cross-platform application that runs on Windows, Linux, and macOS as well as the mobile operating systems iOS and Android; there is no web-based version. On all platforms, users can extend Obsidian’s functionality by installing plugins and themes, integrating it with other tools or adding new capabilities.
Obsidian distinguishes between core plugins, which are provided and maintained by the Obsidian team, and community plugins, which are submitted by users and published as open-source software on GitHub. Examples of community plugins include a calendar widget and a Kanban-style task board. More than 200 community-made themes are also available.
Obsidian works with a folder of text documents: every new note creates a new text file, and all documents are searchable inside the app. Notes can link to one another internally, and Obsidian generates an interactive graph that illustrates the connections between them. Text formatting is done in Markdown, and Obsidian offers quick previews of the rendered content.
Generative AI Tools In Obsidian
As generative AI develops and spreads through industry, a community of AI enthusiasts is exploring ways to incorporate the technology into everyday productivity workflows.
Applications that support community plug-ins let users investigate how large language models (LLMs) might improve a range of activities. Users with RTX AI PCs can easily run local LLMs through local inference servers powered by the NVIDIA RTX-accelerated llama.cpp software library.
A previous article examined how users can improve their web browsing with Leo AI in the Brave browser. This one looks at Obsidian, a popular writing and note-taking tool built on the Markdown markup language that is helpful for managing complex, linked records across many projects. Several of the community-developed plug-ins that add functionality to the app allow users to connect Obsidian to a local inference server such as LM Studio or Ollama.
To enable LM Studio’s local server capabilities, select the “Developer” button on the left panel, load any downloaded model, enable the CORS toggle, and click “Start.” Make a note of the chat completion URL from the “Developer” log console (“http://localhost:1234/v1/chat/completions” by default), since the plug-ins will need it to connect.
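Once the server is running, any OpenAI-compatible client can talk to it, which is essentially what the plug-ins do under the hood. A minimal sketch using only the standard library (the endpoint is LM Studio’s default from above; the model name is an example and must match a model you actually have loaded):

```python
import json
import urllib.request

# LM Studio's default chat completion endpoint, as noted above.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt: str, model: str = "gemma-2-27b-instruct") -> str:
    """Send the prompt to the local server and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `ask_local_llm("Summarize retrieval-augmented generation")` only works while LM Studio’s server is started, and nothing leaves the machine.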
Next, launch Obsidian and open the “Settings” tab. Select “Community plug-ins,” then “Browse.” There are a number of LLM-related community plug-ins; Text Generator and Smart Connections are two popular choices.
Text Generator, for example, is useful for creating notes and summaries on a research subject within an Obsidian vault.
Smart Connections makes it easier to ask questions about the contents of an Obsidian vault, such as the answer to a trivia question saved years ago.
For Text Generator, open the settings, choose “Custom” under “Provider profile,” and enter the full URL in the “Endpoint” field. For Smart Connections, adjust the settings after turning on the plug-in: in the options panel on the right side of the screen, choose “Custom Local (OpenAI Format)” as the model platform, then enter the model name as it appears in LM Studio (for example, “gemma-2-27b-instruct”) and the URL into the corresponding fields.
Once the fields are completed, the plug-ins will work. For users curious about what’s happening on the local server side, the LM Studio interface also displays logged activity.
Transforming Workflows With Obsidian AI Plug-Ins
Consider a user who wants to plan a trip to the fictional Lunar City and come up with suggestions for things to do there. They would start a new note titled “What to Do in Lunar City.” Because Lunar City is not a real place, the query sent to the LLM needs a few extra instructions to guide the results. Clicking the Text Generator plug-in button prompts the model to create a list of things to do on the trip.
Through the Text Generator plug-in, Obsidian asks LM Studio for a response, and LM Studio runs the Gemma 2 27B model. With RTX GPU acceleration on the user’s machine, the model can produce the list quickly.
Or suppose that, years later, the user’s friend is visiting Lunar City and wants to know where to eat. The user may not remember the names of the restaurants they visited, but they can check the notes in their vault (Obsidian’s term for a collection of notes) to see whether they wrote anything down.
Instead of combing through all of the notes by hand, a user can ask questions about their vault of notes and other material using the Smart Connections plug-in. The plug-in retrieves relevant information from the user’s notes and answers the request through the same LM Studio server, using a technique known as retrieval-augmented generation (RAG).
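The idea behind retrieval-augmented generation is simple enough to sketch in a few lines: score the stored notes against the question, then hand only the best matches to the model as context. This toy version uses word overlap where Smart Connections uses embeddings, and the note contents are made up for illustration:

```python
import re

def words(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score(question: str, note: str) -> int:
    """Toy relevance score: how many words the question and note share."""
    return len(words(question) & words(note))

def retrieve(question: str, notes: list, k: int = 2) -> list:
    """Return the k notes most relevant to the question."""
    return sorted(notes, key=lambda n: score(question, n), reverse=True)[:k]

def build_prompt(question: str, notes: list) -> str:
    """Assemble a RAG prompt: retrieved context first, question last."""
    context = "\n".join(retrieve(question, notes))
    return f"Using only these notes:\n{context}\n\nAnswer this: {question}"

vault = [
    "Dinner at the Crater Cafe in Lunar City was excellent.",
    "Meeting notes: budget review moved to Friday.",
    "Lunar City tips: book the tram early.",
]
prompt = build_prompt("Where did we eat in Lunar City?", vault)
```

The LLM never sees the whole vault, only the retrieved snippets, which is what keeps the approach workable even for large note collections.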
These are lighthearted examples, but after experimenting with these features for a while, users can see real benefits and improvements in daily productivity. Obsidian plug-ins are just two examples of how community developers and AI enthusiasts are using AI to enhance their PC experiences.
Thousands of open-source models are available for developers to integrate into their Windows programs using NVIDIA GeForce RTX technology.
Read more on Govindhtech.com
#Obsidian#RTXAIPCs#LLM#LargeLanguageModel#AI#GenerativeAI#NVIDIARTX#LMStudio#RTXGPU#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
3 notes
·
View notes
Text
There's nowhere to go on the public internet where your posts won't be scraped for AI, sorry. Not discord not mastodon not nothing. No photoshop plugin or robots.txt will protect you from anything other than the politest of scrapers. Maybe if people wanted to bring back private BBSes we could do something with that but at this point, if you're putting something online, expect it to be used for LLM training datasets and/or targeted advertisements. 🥳
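For what it's worth, the opt-out in question is just a plain-text file at the site root, and honoring it is entirely voluntary. GPTBot and CCBot are real crawler user-agents; any scraper can simply ignore the file, which is the whole point above:

```text
# robots.txt — a request, not an access control
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```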
#does it suck? yes. have i selected 'don't AI me' on the tumblr toggle? yes. will it do anything? no#that said....it's really just a question of scale#most things have been inherently copyable since the invention of the printing press
7 notes
·
View notes
Text
One reason I haven't experimented with the LLMs as much is that the price of meeting the VRAM requirements rapidly jumps off the deep end.
For Stable Diffusion, the image generator, you can mostly get by with 12GB. It's enough to generate some decent size images and also use regional prompting and the controlnet plugin. About $300, probably.
The smallest Falcon LLM, 7b parameters, weighs in around 15GB, or a $450 16GB card.
A 24GB card will run like $1,500.
A 48GB A6000 will run like $4,000.
An 80GB A100 is like $15,000.
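The numbers above follow from a simple rule of thumb: at 16-bit precision each parameter takes two bytes, so a model's weights alone need roughly (billions of parameters) × 2 GB, plus headroom for activations and the KV cache. A back-of-the-envelope estimator (the 20% overhead figure is a rough assumption, not a measurement):

```python
def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 2.0,
                     overhead: float = 0.20) -> float:
    """Rough VRAM estimate: model weights at the given precision,
    plus a fudge factor for activations and KV cache."""
    weights_gb = params_billions * bytes_per_param
    return weights_gb * (1.0 + overhead)

# Falcon-7B at fp16: 14 GB of weights alone, close to the ~15 GB above.
print(round(estimate_vram_gb(7, overhead=0.0), 1))  # → 14.0
print(round(estimate_vram_gb(7), 1))                # → 16.8
```

Quantizing to 8-bit or 4-bit (setting `bytes_per_param` to 1.0 or 0.5) is how people squeeze these models onto cheaper cards.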
3 notes
·
View notes
Text
Introduction to the LangChain Framework
LangChain is an open-source framework designed to simplify and enhance the development of applications powered by large language models (LLMs). By combining prompt engineering, chaining processes, and integrations with external systems, LangChain enables developers to build applications with powerful reasoning and contextual capabilities. This tutorial introduces the core components of LangChain, highlights its strengths, and provides practical steps to build your first LangChain-powered application.
What is LangChain?
LangChain is a framework that lets you connect LLMs like OpenAI's GPT models with external tools, data sources, and complex workflows. It focuses on enabling three key capabilities:

- Chaining: Create sequences of operations or prompts for more complex interactions.
- Memory: Maintain contextual memory for multi-turn conversations or iterative tasks.
- Tool Integration: Connect LLMs with APIs, databases, or custom functions.

LangChain is modular, meaning you can use specific components as needed or combine them into a cohesive application.
Getting Started
Installation

First, install the LangChain package using pip:

```shell
pip install langchain
```

Additionally, you'll need to install an LLM provider (e.g., OpenAI or Hugging Face) and any tools you plan to integrate:

```shell
pip install openai
```
Core Concepts in LangChain
1. Chains

Chains are sequences of steps that process inputs and outputs through the LLM or other components. Examples include:
- Sequential chains: A linear series of tasks.
- Conditional chains: Tasks that branch based on conditions.

2. Memory

LangChain offers memory modules for maintaining context across multiple interactions. This is particularly useful for chatbots and conversational agents.

3. Tools and Plugins

LangChain supports integrations with APIs, databases, and custom Python functions, enabling LLMs to interact with external systems.

4. Agents

Agents dynamically decide which tool or chain to use based on the user’s input. They are ideal for multi-tool workflows or flexible decision-making.
Building Your First LangChain Application
In this section, we’ll build a LangChain app that integrates OpenAI’s GPT API, processes user queries, and retrieves data from an external source.

Step 1: Setup and Configuration

Before diving in, configure your OpenAI API key:

```python
import os
from langchain.llms import OpenAI

# Set API Key
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Initialize LLM
llm = OpenAI(model_name="text-davinci-003")
```

Step 2: Simple Chain

Create a simple chain that takes user input, processes it through the LLM, and returns a result.

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Define a prompt
template = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in simple terms."
)

# Create a chain
simple_chain = LLMChain(llm=llm, prompt=template)

# Run the chain
response = simple_chain.run("Quantum computing")
print(response)
```

Step 3: Adding Memory

To make the application context-aware, we add memory. LangChain supports several memory types, such as conversational memory and buffer memory.

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Add memory to the chain
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)

# Simulate a conversation
print(conversation.run("What is LangChain?"))
print(conversation.run("Can it remember what we talked about?"))
```

Step 4: Integrating Tools

LangChain can integrate with APIs or custom tools. Here’s an example of creating a tool for retrieving Wikipedia summaries.

```python
from langchain.tools import Tool

# Define a custom tool
def wikipedia_summary(query: str):
    import wikipedia
    return wikipedia.summary(query, sentences=2)

# Register the tool
wiki_tool = Tool(
    name="Wikipedia",
    func=wikipedia_summary,
    description="Retrieve summaries from Wikipedia."
)

# Test the tool
print(wiki_tool.run("LangChain"))
```

Step 5: Using Agents

Agents allow dynamic decision-making in workflows. Let’s create an agent that decides whether to fetch information or explain a topic.

```python
from langchain.agents import initialize_agent, AgentType

# Define tools (the Wikipedia tool from Step 4)
tools = [wiki_tool]

# Initialize agent
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Query the agent
response = agent.run("Tell me about LangChain using Wikipedia.")
print(response)
```

Advanced Topics

1. Connecting with Databases: LangChain can integrate with databases like PostgreSQL or MongoDB to fetch data dynamically during interactions.
2. Extending Functionality: Use LangChain to create custom logic, such as summarizing large documents, generating reports, or automating tasks.
3. Deployment: LangChain applications can be deployed as web apps using frameworks like Flask or FastAPI.

Use Cases

- Conversational Agents: Develop context-aware chatbots for customer support or virtual assistance.
- Knowledge Retrieval: Combine LLMs with external data sources for research and learning tools.
- Process Automation: Automate repetitive tasks by chaining workflows.

Conclusion

LangChain provides a robust and modular framework for building applications with large language models. Its focus on chaining, memory, and integrations makes it ideal for creating sophisticated, interactive applications. This tutorial covered the basics, but LangChain’s potential is vast. Explore the official LangChain documentation for deeper insights and advanced capabilities. Happy coding!
#AIFramework#AI-poweredapplications#automation#context-aware#dataintegration#dynamicapplications#LangChain#largelanguagemodels#LLMs#MachineLearning#ML#NaturalLanguageProcessing#NLP#workflowautomation
0 notes
Text
Diversions #7: Additional progress
Wow, so much has happened since #6. Where do I begin? Should I try to recap my travels to Portland or to Atlanta for WordCamp or FinCon? Should I summarize all of the progress we’ve made on the addition to our home? Should I write about my recent upgrade to an M4 MacBook Pro? Or, do I focus on the updates that have come to the ActivityPub plugin that has me rethinking how I use my personal website (yet again)?
How about all of it?!
I’ve posted a few photos from Portland, though I have many more photos that I’ve taken on film that I just haven’t developed yet. This weekend I went to do exactly that and hit a roadblock. It would help if I had fresh chemicals around!
I’ve also commented on what happened at WordCamp and wrote some key takeaways from FinCon. So I won’t rehash any of that.
Instead, I’ll focus on the reminder that traveling (in general) and going to events has a positive impact on your outlook, your network, and your business. Not all of the effects can be directly measured or show up on the bottom row of a spreadsheet, but you can feel them as you move forward. You may face a challenge and end up thinking of a company or person you met at an event that can help. In short: go to events.
The addition is moving along quickly now. With this sort of project, the beginning parts (planning, drawing, documenting, etc.) are the slow ones. Then, once the work begins, it is difficult to keep up with.
Working from home affords me the ability to jog outside once or twice a day to snap a photo or two of the progress. I’m very happy that I’m not the one doing the work this time.
The foundation and floor base
I know snags are inevitable. In fact, the forecast at the end of this week already looks like it will slow down progress some. But that is how these sorts of things go.
The new lighter, faster 14″ M4 Pro MacBook Pro is really a delight to use. I hit a snag or two when upgrading the software that was Intel-only to their Apple Silicon counterparts (especially Adobe) but other than just a few hiccups — the upgrade/migration process is so easy.
My previous MacBook Pro was no slouch but this one is a screamer. Building projects, running local LLMs, or just having literally every single app open doesn’t seem to faze it. Using it as a laptop, something I was never able to do with my previous computer, is also really fun. I can actually get a lot of work done and the battery life is great!
My last bit of equipment upgrading is going to be a Thunderbolt dock and at least one external SSD. I need my port situation to be a bit cleaner and I’m hoping a dock will free me up to just have a single wire running into the laptop. And, as of today, I’m not backing up using Time Machine so I cannot delay getting a new SSD.
The social media landscape is a mess! Part of me loves the messiness. It is an interesting time and lots of people are experimenting with many things. New platforms, new ideas, new ways of sharing. Gone are the days of only one or two social networks thriving.
But another part of me is frustrated. I don’t really know what I want to do with my personal blog as it relates to the social web. I have the ActivityPub plugin installed, and it works well to distribute my posts onto the fediverse as a first class citizen. But it isn’t (and likely never will be) perfect.
If you’re at all interested in your website joining the fediverse, I recommend giving the plugin a try and learning how it works. This isn’t cross-posting. It makes your website “an ActivityPub endpoint” so that the account is your website (and/or its authors). It is very cool and interesting.
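Concretely, being “an ActivityPub endpoint” means the site serves machine-readable documents such as an actor record, which is what other fediverse servers fetch when someone follows you. A minimal sketch of such a record (the domain, username, and URLs are placeholders, and the real plugin emits more fields than this):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.com/author/jane",
  "type": "Person",
  "preferredUsername": "jane",
  "name": "Jane's Blog",
  "inbox": "https://example.com/inbox",
  "outbox": "https://example.com/outbox"
}
```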
The reason the ActivityPub plugin has me rethinking my personal blog use is that I can begin to see a future where all of my favorites, boosts, quote posts, etc. live here on my website – rather than my Mastodon account. But for that I’d want to restructure my site’s design a bit (and likely my RSS feed). Of course, if I explore this at all I’ll write about it.
Regardless of how this phase of the social web shakes out I know I’ll be publishing here on my blog.
An edition of Diversions wouldn’t be complete without some links:
Christoph Rauscher’s newsletter – I like how Christoph publishes his newsletter. Lovely.
Live at Delia’s Third Happening – Site Nonsite live album recording.
Vault – By Commercial Type.
Rex Brasher Field Notes – Lovely little film about Brasher’s work as well.
Rachel Binx’s website – Go take a look.
Apple Pay Plates – This is me buying Crocs or fanny packs.
0 notes
Text
Calling LLMs from client-side JavaScript, converting PDFs to HTML + weeknotes
See on Scoop.it - Education 2.0 & 3.0
I’ve been having a bunch of fun taking advantage of CORS-enabled LLM APIs to build client-side JavaScript applications that access LLMs directly. I also spun up a new Datasette plugin …
0 notes
Text
I’m going to approach this as though when tumblr user tanadrin says that they haven’t seen anti-AI rhetoric that doesn’t trade in moral panic, they’re telling the truth and, more importantly, would be interested in seeing some. My hope is that you will read this as a reasonable reply, but I’ll be honest upfront that I can’t pretend that this isn’t also personal for me as someone whose career is threatened by generative AI. Personally, I’m not afraid that any LLM will ever surpass my ability to write, but what does scare me is that it doesn’t actually matter. I’m sure I will be automated out whether my artificial replacement can write better than me or not.
This post is kind of long so if watching is more your thing, check out Zoe Bee’s and Philosophy Tube’s video essays; I thought these were both really good at breaking down the problems as well as describing the actual technology.
Also, for clarity, I’m using “AI” and “genAI” as shorthand, but what I’m specifically referring to is Large Language Models (like ChatGpt) or image generation tools (like MidJourney or Dall-E). The term “AI” is used for a lot of extremely useful things that don’t deserve to be included in this.
Also, to get this out of the way, a lot of people point out that genAI is an environmental problem but honestly even if it were completely eco-friendly I’d have serious issues with it.
A major concern that I have with genAI, as I’ve already touched on, is that it is being sold as a way to replace people in creative industries, and it is being purchased on that promise. Last year SAG and the WGA both went on strike because (among other reasons) studios wanted to replace them with AI and this year the Animation Guild is doing the same. News is full of fake images and stories getting sold as the real thing, and when the news is real it’s plagiarised. A journalist at 404 Media did an experiment where he created a website to post AI-powered news stories only to find that all it did was rip off his colleagues. LLMs can’t think of anything new, they just recycle what a human has already done.
As for image generation, there are all the same problems with plagiarism and putting human artists out of work, as well as the overwhelming amount of revenge porn people are creating, not just violating the privacy of random people, but stealing the labour of sex workers to do it.
At this point you might be thinking that these aren’t examples of the technology, but how people use it. That’s a fair rebuttal, every time there’s a new technology there are going to be reports of how people are using it for sex or crimes so let’s not throw the baby out with the bathwater. Cameras shouldn’t be taken off phones just because people use them to take upskirt shots of unwilling participants, after all, people use phone cameras to document police brutality, and to take upskirt shots of people who have consented to them.
But what are LLMs for? As far as I can tell the best use-case is correcting your grammar, which tools like Grammarly already pretty much have covered, so there is no need for a billion-dollar industry to do the same thing. I have yet to see a killer use case for image generation, and I would be interested to hear one if you have it. I know that digital artists have plugins at their disposal to tidy up or add effects/filters to images they’ve created, but again, that’s something that already exists and has been used for very good reason by artists working in the field, not something that creates images out of nothing.
Now let’s look at the technology itself and ask some important questions. Why haven’t they programmed the racism out of GPT-3? The answer is complicated and sort of boils down to the fact that programmers often don’t realise that racism needs to be programmed out of any technology. Meredith Broussard touches on this in her interview for the Black TikTok Strike of 2021 episode of the podcast Sixteenth Minute, and in her book More Than A Glitch, but to be fair I haven’t read that.
Here's another question I have: shouldn’t someone have been responsible for making sure that multiple image generators, including Google’s, did not have child pornography in their training data? Yes, I am aware that people engaging in moral panics often lean on protect-the-children arguments, and there are many nuanced discussions to be had about how to prevent children from being abused and protect those who have been, but I do think it’s worth pointing out that these technologies have been rolled out before the question of “will people generate CSAM with it?” was fully ironed out. Especially considering that AI images are overwhelming the capacity for investigators to stop instances of actual child abuse.
Again, you might say that’s a problem with how it’s being used and not what it is, but I really have to stress that it is able to do this. This is being put out for everyday people to use and there just aren’t enough safeguards that people can’t get around them. If something is going to have this kind of widespread adoption, it really should not be capable of this.
I’ll sum up by saying that I know the kind of moral panic arguments you’re talking about, the whole “oh, it’s evil because it’s not human” isn’t super convincing, but a lot of the pro-AI arguments have about as much backing. There are arguments like “it will get cheaper” but Goldman Sachs released a report earlier this year saying that, basically, there is no reason to believe that. If you only read one of the links in this post, I recommend that one. There are also arguments like “it is inevitable, just use it now” (which is genuinely how some AI tools are marketed), but like, is it? It doesn’t have to be. Are you my mum trying to convince me to stop complaining about a family trip I don’t want to go on or are you a company trying to sell me a technology that is spying on me and making it weirdly hard to find the opt-out button?
My hot take is that AI bears all of the hallmarks of an economic bubble but that anti-AI bears all of the hallmarks of a moral panic. I contain multitudes.
9K notes
·
View notes
Text
The Mistral AI New Model Large-Instruct-2411 On Vertex AI
Introducing the Mistral AI New Model Large-Instruct-2411 on Vertex AI from Mistral AI
Mistral AI’s models were made available on Vertex AI in July: Codestral for code generation jobs, Mistral Large 2 for high-complexity tasks, and the lightweight Mistral Nemo for reasoning tasks like creative writing. Google Cloud is announcing that a new Mistral AI model is now available in the Vertex AI Model Garden: Mistral-Large-Instruct-2411 is now publicly accessible.
Large-Instruct-2411 is a sophisticated dense large language model (LLM) with 123B parameters that extends its predecessor with improved long context, function calling, and system prompt handling. It has strong reasoning, knowledge, and coding capabilities. The model is well suited to use cases such as long-context applications that need strict instruction adherence, code generation and retrieval-augmented generation (RAG), and sophisticated agentic workflows with exact instruction following and JSON outputs.
The new Mistral AI Large-Instruct-2411 model is available for deployment on Vertex AI via its Model-as-a-Service (MaaS) or self-service offering right now.
With the new Mistral AI models on Vertex AI, what are your options?
Building with Mistral’s models on Vertex AI, you can:
Choose the model that best suits your use case: A variety of Mistral AI models are available, from efficient models for low-latency requirements to powerful models for intricate tasks like agentic workflows. Vertex AI simplifies evaluating and choosing the best model.
Experiment with confidence: Vertex AI offers fully managed Model-as-a-Service for Mistral AI models. You can explore Mistral AI models through straightforward API calls and thorough side-by-side evaluations in its user-friendly environment.
Manage models without extra overhead: With pay-as-you-go pricing flexibility and fully managed infrastructure built for AI workloads, you can streamline the large-scale deployment of the new Mistral AI models.
Tune the models to your requirements: In the coming weeks, you will be able to fine-tune Mistral AI’s models with your own data and domain expertise to produce custom solutions.
Create intelligent agents: Build and orchestrate agents powered by Mistral AI models using Vertex AI’s extensive toolkit, which includes LangChain on Vertex AI. To integrate Mistral AI models into your production-ready AI experiences, use Genkit’s Vertex AI plugin.
Build with enterprise-grade compliance and security: Make use of Google Cloud’s integrated privacy, security, and compliance features. Enterprise controls, such as the new organization policy for Vertex AI Model Garden, provide access controls to ensure that only approved models are accessible.
Start using Google Cloud’s Mistral AI models
These additions demonstrate Google Cloud’s commitment to open, adaptable AI ecosystems that help you create the solutions that best meet your needs. Its partnership with Mistral AI reflects this open strategy in a cohesive, enterprise-ready setting. Many of the first-party, open-source, and third-party models offered on Vertex AI, including the newly released Mistral AI models, are available as a fully managed Model-as-a-Service (MaaS), giving you enterprise-grade security on fully managed infrastructure and the convenience of a single bill.
Mistral Large (24.11)
Mistral Large (24.11) is the latest iteration of the Mistral Large model, with improved reasoning and function-calling capabilities.
Mistral Large is a sophisticated Large Language Model (LLM) with state-of-the-art knowledge, reasoning, and coding abilities.
Multilingual by design: supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi.
Multimodal capability: Mistral Large (24.11) excels at visual understanding while maintaining state-of-the-art performance on text tasks.
Strong at coding: trained on more than 80 programming languages, including Java, Python, C, C++, JavaScript, and Bash, as well as more specialized languages such as Swift and Fortran.
Agent-centric: best-in-class agentic capabilities, including native function calling and JSON output.
Advanced reasoning: state-of-the-art mathematical and reasoning capabilities.
Context length: Mistral Large supports a context window of up to 128K tokens.
Use cases
Agents: enabled by strict instruction following, JSON output mode, and robust guardrails
Text: synthetic text generation, comprehension, and transformation
RAG: key information is preserved across long context windows (up to 128K tokens)
Coding: code generation, completion, review, and comments, with support for all popular programming languages
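The 128K-token window in the RAG use case above can be budgeted with simple arithmetic. The sketch below checks how many retrieved passages fit, using a crude 4-characters-per-token heuristic rather than a real tokenizer; the reserved-answer size is an arbitrary illustrative choice.

```python
# Rough sketch: fitting retrieved passages into a 128K-token context window
# for RAG. The 4-chars-per-token ratio is a heuristic, not a tokenizer;
# use the model's actual tokenizer for production counts.

CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # crude heuristic

def passages_that_fit(passages, reserved_for_answer=4_000):
    """Greedily keep passages until the character budget is exhausted."""
    budget = (CONTEXT_TOKENS - reserved_for_answer) * CHARS_PER_TOKEN
    kept, used = [], 0
    for p in passages:
        if used + len(p) > budget:
            break
        kept.append(p)
        used += len(p)
    return kept

docs = ["x" * 200_000, "y" * 200_000, "z" * 200_000]
print(len(passages_that_fit(docs)))  # → 2 (third passage exceeds the budget)
```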
Read more on govindhtech.com
#MistralAI#ModelLarge#VertexAI#MistralLarge2#Codestral#retrievalaugmentedgeneration#RAG#VertexAIModelGarden#LargeLanguageModel#LLM#technology#technews#news#govindhtech
0 notes
Text
New Samsung Galaxy Z Series Products Integrate the Doubao Large Model - Journal Today Online https://www.merchant-business.com/new-samsung-galaxy-z-series-products-integrate-the-doubao-large-model/?feed_id=135754&_unique_id=669955dba45ff

On July 17th, Samsung Electronics launched the new generation Galaxy Z series products for the Chinese market. During the event, Samsung announced a partnership with Volcano Engine to enhance the intelligent assistant and AI visual features of the Galaxy Z Fold6 and Galaxy Z Flip6 smartphones by integrating Doubao large models, improving the phones’ intelligent application experience. Samsung had previously announced deep cooperation with Google Gemini at overseas launch events, while in China it chose partners such as Volcano Engine for large models.

In addition to previously disclosed AI functions such as circle-to-search, real-time translation, and voice transcription, at the China launch event Samsung showcased capabilities that Galaxy AI, built on the Doubao large model, brings to the two new foldable phones: when users search for travel-related keywords through the Bixby voice assistant, Galaxy AI searches and combines high-quality content sources to deliver the latest online information to users as short-video content cards. For example, when users travel in a city, the Bixby assistant can draw on the extensive content sources of the Doubao large model’s content plugins to surface attractions, food, hotels, and more, helping users refine their travel plans and check in at every scenic spot.

In addition, with the Doubao large model’s single-image AI portrait technology, Samsung users need only upload a single photo to convert it into new images in styles such as business, 3D cartoon, or cyberpunk, allowing them to change their avatar style at any time to suit personal preferences.

According to Zhao Wenjie, Vice President of the Volcano Engine ecosystem, the Doubao large model has long served many businesses within ByteDance and provides services to enterprise customers through Volcano Engine. Zhao said: “Volcano Engine is constantly exploring generative AI technology in three respects: better model performance, continuous reduction of model costs, and making business scenarios easier to implement, so that more people can use it and more innovative scenarios are inspired.”

Xu Yuanmo, Vice President of User Experience Strategy for Samsung Electronics Greater China, said that in the global market Samsung is collaborating with internationally renowned companies to build its smart mobile product ecosystem, and that in the Chinese market it is likewise working with top domestic companies in various fields to refine its products. “In the field of AI, we collaborate deeply with the best domestic partners to fully tap into and integrate Samsung’s hardware and system advantages, jointly committed to creating globally leading AI smartphones for consumers,” said Xu.

SEE ALSO: Baidu AI Cloud Collaborates with Samsung China: Galaxy AI Integrates ERNIE LLM
0 notes
Text
New paper: AI agents that matter
New Post has been published on https://thedigitalinsider.com/new-paper-ai-agents-that-matter/
Some of the most exciting applications of large language models involve taking real-world action, such as booking flight tickets or finding and fixing software bugs. AI systems that carry out such tasks are called agents. They use LLMs in combination with other software to use tools such as web search and code terminals.
The North Star of this field is to build assistants like Siri or Alexa and get them to actually work — handle complex tasks, accurately interpret users’ requests, and perform reliably. But this is far from a reality, and even the research direction is fairly new. To stimulate the development of agents and measure their effectiveness, researchers have created benchmark datasets. But as we’ve said before, LLM evaluation is a minefield, and it turns out that agent evaluation has a bunch of additional pitfalls that affect today’s benchmarks and evaluation practices. This state of affairs encourages the development of agents that do well on benchmarks without being useful in practice.
We have released a new paper that identifies the challenges in evaluating agents and proposes ways to address them. Read the paper here. The authors are Sayash Kapoor, Benedikt Ströbl, Zachary S. Siegel, Nitya Nadgir, and Arvind Narayanan, all at Princeton University.
In this post, we offer thoughts on the definition of AI agents, why we are cautiously optimistic about the future of AI agent research, whether AI agents are more hype or substance, and give a brief overview of the paper.
The term agent has been used by AI researchers without a formal definition. This has led to its being hijacked as a marketing term, and has generated a bit of pushback against its use. But the term isn’t meaningless. Many researchers have tried to formalize the community’s intuitive understanding of what constitutes an agent in the context of language-model-based systems [1, 2, 3, 4, 5]. Rather than a binary, it can be seen as a spectrum, sometimes denoted by the term ‘agentic’.
The five recent definitions of AI agents cited above are all distinct but with strong similarities to each other. Rather than propose a new definition, we identified three clusters of properties that cause an AI system to be considered more agentic according to existing definitions:
Environment and goals. The more complex the environment, the more AI systems operating in that environment are agentic. Complex environments are those that have a range of tasks and domains, multiple stakeholders, a long time horizon to take action, and unexpected changes. Further, systems that pursue complex goals without being instructed on how to pursue the goal are more agentic.
User interface and supervision. AI systems that can be instructed in natural language and act autonomously on the user’s behalf are more agentic. In particular, systems that require less user supervision are more agentic. For example, chatbots cannot take real-world action, but adding plugins to chatbots (such as Zapier for ChatGPT) allows them to take some actions on behalf of users.
System design. Systems that use tools (like web search or code terminal) or planning (like reflecting on previous outputs or decomposing goals into subgoals) are more agentic. Systems whose control flow is driven by an LLM, rather than LLMs being invoked by a static program, are more agentic.
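The "control flow driven by an LLM" property in the last cluster above can be sketched as a loop that repeatedly asks a model which tool to call next until it produces an answer. The model below is a hard-coded stub and the search tool returns a canned result; a real agent would call an LLM API and live tools, but the structure of the loop is the point.

```python
# Minimal sketch of an LLM-driven control flow: the loop's next step is
# decided by the model's output, not by a static program. The "model" here
# is a deterministic stub standing in for a real LLM call.

def stub_llm(history):
    # Pretend model: request a search first, then answer from the result.
    if not any(step.startswith("search:") for step in history):
        return "TOOL search capital of France"
    return "ANSWER Paris"

def search_tool(query):
    return "Paris is the capital of France."  # canned tool result

def run_agent(task, max_steps=5):
    history = [f"task: {task}"]
    for _ in range(max_steps):
        decision = stub_llm(history)
        if decision.startswith("ANSWER"):
            return decision.removeprefix("ANSWER ").strip()
        _, _, query = decision.partition("TOOL search ")
        history.append("search: " + search_tool(query))
    return None  # give up after max_steps to bound runaway loops

print(run_agent("What is the capital of France?"))  # → Paris
```

The `max_steps` cap is one of the supervision knobs the spectrum above refers to: the less often a human checks the loop, the more agentic the system.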
While some agents such as ChatGPT’s code interpreter / data analysis mode have been useful, more ambitious agent-based products so far have failed. The two main product launches based on AI agents have been the Rabbit R1 and Humane AI pin. These devices promised to eliminate or reduce phone dependence, but turned out to be too slow and unreliable. Devin, an “AI software engineer”, was announced with great hype 4 months ago, but has been panned in a video review and remains in waitlist-only mode. It is clear that if AI agents are to be useful in real-world products, they have a long way to go.
So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile.
One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). So research on improving reliability might have many new applications even if the underlying language models don’t improve. And if scaling runs out, agents are the most natural direction for further progress in AI.
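The reliability arithmetic above is worth making concrete: with independent per-call failures, end-to-end success decays exponentially in the number of calls.

```python
# If each LLM call succeeds independently with probability 0.98 (i.e. fails
# 2% of the time), the whole task succeeds only if every call succeeds.

def task_success_rate(per_call_success: float, n_calls: int) -> float:
    return per_call_success ** n_calls

for n in (10, 30, 50):
    print(n, round(task_success_rate(0.98, n), 3))
# 10 calls → 0.817, 30 calls → 0.545, 50 calls → 0.364
```

At fifty calls, a 2% per-call error rate already means the agent fails on most runs, which is why small reliability gains per call compound into large product-level differences.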
Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
What changes must the AI community implement to help stimulate the development of AI agents that are useful in the real world, and not just on benchmarks? This is the paper’s central question. We make five recommendations:
1. Implement cost-controlled evaluations. The language models underlying most AI agents are stochastic. This means simply calling the underlying model multiple times can increase accuracy. We show that such simple tricks can outperform complex agent architectures on the HumanEval benchmark, while costing much less. We argue that all agent evaluation must control for cost. (We originally published this finding here. In the two months since we published this post, Pareto curves and joint optimization of cost and accuracy have become increasingly common in agent evaluations.)
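The "simply calling the underlying model multiple times" trick above follows from basic probability: if a single attempt solves a problem with probability p, then at least one of k independent attempts succeeds with probability 1 - (1 - p)^k, the quantity behind pass@k-style evaluation. (The p and k values below are illustrative, not results from the paper.)

```python
# Why repeated sampling boosts accuracy: with independent attempts, the
# chance that at least one of k tries succeeds grows quickly with k.

def at_least_one_success(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

for k in (1, 5, 10):
    print(k, round(at_least_one_success(0.3, k), 3))
# k=1 → 0.3, k=5 → 0.832, k=10 → 0.972
```

This is also why cost control matters: the retry baseline trades k times the inference cost for its accuracy gain, so comparing it to a complex agent is only fair on a cost-adjusted basis.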
2. Jointly optimize accuracy and cost. Visualizing evaluation results as a Pareto curve of accuracy and inference cost opens up a new space of agent design: jointly optimizing the two metrics. We show how we can lower cost while maintaining accuracy on HotPotQA by implementing a modification to the DSPy framework.
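The accuracy/cost Pareto framing above has a simple operational form: keep an agent design only if no other design is at least as cheap and at least as accurate, with at least one strict improvement. The sketch below computes that frontier; the design names and numbers are made up for illustration.

```python
# Sketch of a cost/accuracy Pareto frontier: a design is dropped if some
# other design dominates it (no more expensive, no less accurate, and
# strictly better on at least one axis). Data points are illustrative.

def pareto_frontier(designs):
    # designs: list of (name, cost_dollars, accuracy)
    frontier = []
    for name, cost, acc in designs:
        dominated = any(
            (c <= cost and a >= acc) and (c < cost or a > acc)
            for _, c, a in designs
        )
        if not dominated:
            frontier.append(name)
    return frontier

designs = [
    ("simple-baseline", 0.10, 0.70),
    ("complex-agent", 1.50, 0.72),
    ("tuned-baseline", 0.30, 0.80),
]
print(pareto_frontier(designs))  # → ['simple-baseline', 'tuned-baseline']
```

Here the hypothetical complex agent is dominated: it costs fifteen times the baseline for less accuracy than the tuned variant, so it would not survive a cost-controlled evaluation.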
3. Distinguish model and downstream benchmarking. Through a case study of NovelQA, we show how benchmarks meant for model evaluation can be misleading when used for downstream evaluation. We argue that downstream evaluation should account for dollar costs, rather than proxies for cost such as the number of model parameters.
4. Prevent shortcuts in agent benchmarks. We show that many types of overfitting to agent benchmarks are possible. We identify 4 levels of generality of agents and argue that different types of hold-out samples are needed based on the desired level of generality. Without proper hold-outs, agent developers can take shortcuts, even unintentionally. We illustrate this with a case study of the WebArena benchmark.
5. Improve the standardization and reproducibility of agent benchmarks. We found pervasive shortcomings in the reproducibility of WebArena and HumanEval evaluations. These errors inflate accuracy estimates and lead to overoptimism about agent capabilities.
AI agent benchmarking is new and best practices haven’t yet been established, making it hard to distinguish genuine advances from hype. We think agents are sufficiently different from models that benchmarking practices need to be rethought. In our paper, we take the first steps toward a principled approach to agent benchmarking. We hope these steps will raise the rigor of AI agent evaluation and provide a firm foundation for progress.
A different strand of our research concerns the reproducibility crisis in ML-based research in scientific fields such as medicine or social science. At some level, our current paper is similar. In ML-based science, our outlook is that things will get worse before they get better. But in AI agents research, we are cautiously optimistic that practices will change quickly. One reason is that there is a stronger culture of sharing code and data alongside published papers, so errors are easier to spot. (This culture shift came about due to concerted efforts in the last five years.) Another reason is that overoptimistic research quickly gets a reality check when products based on misleading evaluations end up flopping. This is going to be an interesting space to watch over the next few years, both in terms of research and product releases.
#agent#agents#ai#ai agent#AI AGENTS#AI Pin#AI systems#alexa#Analysis#applications#approach#benchmark#benchmarking#benchmarks#binary#bugs#Case Study#challenge#change#chatbots#chatGPT#clusters#code#Community#data#data analysis#datasets#Design#developers#development
0 notes