# Langflow
Explore tagged Tumblr posts
pulipuli · 12 days ago
Link
Read the full post on the web ⇨ TALK: How Can I Enable AI to Answer Questions Based on My Cloud Storage? https://blog.pulipuli.info/2025/01/talk-how-can-i-enable-ai-to-answer-questions-based-on-my-cloud-storage.html

Nextcloud's AI apps cannot handle Chinese, so I integrated Langflow with Nextcloud myself, letting a large language model answer questions based on the contents of my cloud storage. This post outlines the general approach.

----

# Nextcloud's llm2 app

(Image source: https://www.youtube.com/watch?v=6_BPOZzvzZQ&t=138s)

https://docs.nextcloud.com/server/latest/admin_manual/ai/app_assistant.html#installation

Nextcloud started wiring LLMs (large language models) into its platform several years ago. With an LLM's help, we can translate, write, and do more inside Nextcloud. Nextcloud's AI apps can use the OpenAI GPT-3.5 API or a locally hosted Llama 3.1 model. The feature set is rich: machine translation, speech-to-text (via stt_whisper2), text generation, summarization, title generation, topic-term extraction, context-aware writing (context write), rewriting, text-to-image (using text2image_stablediffusion2), context chat (where "context" means the files inside Nextcloud), and a context agent.

However, standing up a Nextcloud Assistant with these AI features is not easy. At first glance it seems to require the Nextcloud AIO edition together with AppAPI and similar components. The number of moving parts makes it fairly complicated, which is frustrating.

https://github.com/nextcloud/context_chat_backend

While researching this, I discovered that many of Nextcloud's AI components are themselves built on LangChain. In that case, why not just handle it with LangChain myself?

----

# Low-Code LangChain: Langflow

https://www.langflow.org/

Work requirements recently led me to study LangChain applications. Based on my development experience with Dify, I also wanted a low-code approach, hoping it would make the LLM workflow easier for future maintainers to understand.

----

Continue reading ⇨ TALK: How Can I Enable AI to Answer Questions Based on My Cloud Storage? https://blog.pulipuli.info/2025/01/talk-how-can-i-enable-ai-to-answer-questions-based-on-my-cloud-storage.html
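The flow the post describes — pull documents out of cloud storage, retrieve the relevant pieces, and hand them to an LLM — can be sketched in plain Python. This is a minimal illustration, not the author's actual Langflow flow: the folder path is assumed to be a locally synced Nextcloud directory, and simple keyword overlap stands in for the vector retrieval a real flow would use.

```python
# Minimal retrieval-augmented prompt builder over a synced Nextcloud folder.
# Assumptions: files are plain-text; retrieval is naive keyword overlap.
from pathlib import Path

def chunk_text(text: str, size: int = 500) -> list[str]:
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(chunk: str, question: str) -> int:
    """Count how many question words appear in the chunk (case-insensitive)."""
    words = set(question.lower().split())
    return sum(1 for w in words if w in chunk.lower())

def retrieve(chunks: list[str], question: str, k: int = 3) -> list[str]:
    """Return the k chunks with the highest keyword overlap."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(question: str, folder: str) -> str:
    """Gather text files from the synced folder and build a grounded prompt."""
    chunks: list[str] = []
    for path in Path(folder).rglob("*.txt"):
        chunks.extend(chunk_text(path.read_text(encoding="utf-8")))
    context = "\n---\n".join(retrieve(chunks, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The prompt returned by `build_prompt` would then go to whatever chat model the Langflow flow is wired to.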
codemonkhq · 5 months ago
Text
Discover how to build a RAG-based Blog Writer API using LangChain and Langflow. This guide covers creating a powerful content generation tool that combines retrieval and language models, enhancing accuracy and efficiency in generating high-quality blog posts.
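The retrieve-then-generate shape of such a blog-writer API can be sketched in a few lines. This is an illustrative stand-in, not the guide's actual code: `generate_post`, the in-memory `store`, and the keyword-match retrieval are all assumptions, where a real build would use a Langflow flow or a LangChain retriever.

```python
# Hedged sketch of a RAG blog writer: retrieve reference snippets, build a
# grounded prompt, optionally hand it to an LLM callable.
def build_blog_prompt(topic: str, snippets: list[str]) -> str:
    """Assemble a prompt that grounds the draft in retrieved references."""
    refs = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Write a blog post about: {topic}\n"
        f"Ground every claim in these retrieved references:\n{refs}"
    )

def generate_post(topic: str, store: dict[str, str], llm=None) -> str:
    # Retrieval stand-in: pick snippets whose key appears in the topic.
    snippets = [v for k, v in store.items() if k in topic.lower()]
    prompt = build_blog_prompt(topic, snippets)
    if llm is None:        # no model wired up: return the prompt for inspection
        return prompt
    return llm(prompt)     # e.g. a chat model's invoke-style callable
```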
suatatan · 2 months ago
Text
kazifatagar · 3 months ago
Text
DataStax Enhances GitHub Copilot Extension to Streamline GenAI App Development
DataStax has expanded its GitHub Copilot extension to integrate with its AI Platform-as-a-Service (AI PaaS) solution, aiming to streamline the development of generative AI applications for developers. The enhanced Astra DB extension allows developers to manage databases (vector and serverless) and create Langflow AI flows directly from GitHub Copilot in VS Code using natural language commands.…
3acesnews · 4 months ago
Photo
AssemblyAI Partners with Langflow to Enhance Generative AI Capabilities
b2bcybersecurity · 4 months ago
Text
Security in AI Solutions and Large Language Models
An exposure-management company has announced the release of AI Aware, a sophisticated detection capability that can quickly determine whether artificial-intelligence solutions are in use and whether AI-related vulnerabilities and security weaknesses are present. Tenable AI Aware delivers insight into security gaps in AI applications, libraries, and plugins, so that organizations can reliably identify and remediate AI risks without disrupting business operations.

The rapid development and adoption of AI technologies over the past two years has created major cybersecurity and compliance risks that organizations must address without established best practices to fall back on. As a result, cybersecurity teams face significant AI-related challenges, for example in detecting and remediating vulnerabilities, containing data leaks, and preventing unauthorized AI use.

Vulnerabilities in AI

According to recent Tenable research, more than a third of security teams find AI applications in their environment that may not have been deployed through formal processes. Over a 75-day period between late June and early September, Tenable found more than 9 million instances of AI applications on more than 1 million hosts. The cybersecurity risk of unrestricted AI use is compounded by the growing number of AI vulnerabilities. Tenable Research has found and disclosed several vulnerabilities in AI solutions, including in Microsoft Copilot, Flowise, and Langflow. With AI Aware, Tenable transforms proactive security for AI solutions.

Tenable AI Aware uses agents, passive network monitoring, dynamic application security testing, and distributed scanning engines to detect approved and unapproved AI software, AI libraries, and AI browser plugins, and to uncover the associated vulnerabilities, thereby reducing the risks of exploitation, data leakage, and unauthorized resource consumption. The combined breadth of these assessment methods enables the most comprehensive detection of AI in the modern ecosystem.

Capabilities for AI solutions

"In the rush to keep pace with the transformation ushered in by AI, organizations around the world have raced ahead at full speed, potentially overlooking countless cybersecurity, privacy, and compliance risks," said Shai Morag, Chief Product Officer at Tenable. "More than with any other new technology we have seen, there are numerous risk factors to consider, especially with rushed development and deployment. Tenable AI Aware enables organizations to deploy AI with confidence and ensures that their security measures keep pace with the rapid evolution of AI technologies."

In addition to detecting AI software and vulnerabilities, further key AI Aware capabilities are available in Tenable Vulnerability Management, Tenable Security Center, and Tenable One:

- Dashboard views provide an overview of the AI software most frequently detected in the ecosystem, the key assets with AI-related vulnerabilities, and the communication ports most commonly used by AI technologies.
- Shadow software development detection uncovers the unexpected presence of AI development building blocks in the environment, enabling organizations to align their initiatives with corporate best practices.
- Filtering findings for AI detections lets teams focus on AI-related findings when reviewing vulnerability assessment results. Combined with the powerful Tenable Vulnerability Prioritization Rating (VPR), teams can effectively assess and prioritize vulnerabilities introduced by AI programs and libraries.
- An asset-centric AI inventory provides a complete inventory of AI-related programs, libraries, and browser plugins when reviewing an asset's detailed profile.

Read the full article
criadorderiquezas · 6 months ago
Video
Langflow 1.0 – Big News in the New Version! And a Competition with Prizes!
ai-news · 10 months ago
Link
A Quick Way to Prototype RAG Applications Based on LangChain. Continue reading on Towards Data Science » #AI #ML #Automation
hackernewsrobot · 10 months ago
Text
DataStax has acquired Langflow to accelerate generative AI development
https://www.datastax.com/blog/datastax-acquires-langflow-to-accelerate-generative-ai-app-development
revotalk · 10 months ago
Link
DataStax made a name for itself by commercializing the open source Apache Cassandra NoSQL database, but these days, the company's focus is squarely on using its database chops to build a "one-stop GenAI stack." One of the first building blocks for this was to bring vector search capabilities to its hosted Astra DB service last […]
craigbrownphd · 10 months ago
Text
DataStax acquires Langflow creator Logspace to aid gen AI app development
https://www.infoworld.com/article/3715000/datastax-acquires-langflow-creator-logspace-to-aid-gen-ai-app-development.html?utm_source=dlvr.it&utm_medium=tumblr#tk.rss_machinelearning
aioome2 · 1 year ago
Text
Rivet AI: Free Installation Guide for Creating Advanced AI Agents Superior to Langflow and Flowise
Introducing Rivet: The IDE for Creating Complex AI Agents

About Rivet
Rivet is a new way of creating AI agents that focuses on building more complex agents with additional toolkits and plugins. Unlike other AI development tools such as abacus.ai, Chidori, and SuperAGI, Rivet offers a fully open-source platform. With its visual programming environment and node-based system, Rivet simplifies the AI creation process, making it accessible to people with different levels of programming knowledge.

Rivet's Core Strengths
Rivet's core strength lies in its ability to facilitate the design and connection of nodes to craft AI agents. This makes it valuable for both seasoned AI developers and beginners. Rivet shares a visual programming paradigm with Flowise and Langflow, making it visually similar. The drag-and-drop interface lets users deploy different types of agents by combining components.

Unique Features of Rivet
Rivet stands out from other AI development tools thanks to its remote debuggability, ease of embedding in a host application, and TypeScript compatibility. These features enhance its functionality and provide a more customizable way to create AI agents.

Installation
To install Rivet, you can use the one-click installer or build from source. The one-click installer is recommended for ease of use. If you build from source instead, make sure you have the prerequisites: Rust, Node 20+, Yarn, and Git. Follow the provided instructions to install Rivet on your desktop.

Why Choose Rivet?
Rivet offers a unique and invaluable feature set. Its visual programming environment lets users visualize and build AI agents through a user-friendly interface. Remote debugging lets users observe the execution of prompt chains in real time, making bug identification and resolution more efficient. Rivet also promotes collaboration within teams by representing graphs as YAML files and supporting version control.

Creating AI Agents with Rivet
To create an AI agent with Rivet, start by clicking "New Graph" and create folders to organize your graphs. Use the provided nodes to build your agent. For example, the "AI Chat" node uses the GPT-3.5 Turbo model. You can customize the node's title, description, and prompt to suit your needs. Rivet also offers various configurations and outputs, including data storage, export options, and plugins.

Getting Started
Once you have installed Rivet, set your OpenAI API key in the settings tab. You can also select different themes and set the executor to browser or node. Rivet lets you run and test your agent locally using a remote debugger.

An Example: Creating a Chatbot
Let's create a chatbot that responds as a chemist when asked to assess the stability of different compounds. You can define the context and prompt in a JSON file and extract them in your agent, then use the available nodes and models to generate responses in the desired format. Rivet offers extensive documentation and tutorials for a deeper look at its features.

Conclusion
Rivet is a powerful IDE for creating complex AI agents. Its distinctive features, such as its visual programming environment, remote debugging, and collaboration support, set it apart from other AI development tools. With Rivet, you can unleash your creativity and build innovative AI applications. Check out the links in the description to explore Rivet further. Thank you for watching, and have a great day!

Thank you for taking the time to read this article! If you enjoyed it and would like to stay updated with similar content, we invite you to follow our blog. You can receive notifications directly in your inbox by subscribing to our email list. Alternatively, you can join our Facebook fan page, where we share more articles, news, and updates. Finally, for those who prefer video content, we have a YouTube channel you can subscribe to as well. Thank you again for your support; we look forward to keeping you engaged and informed!

Frequently Asked Questions

1. What is Rivet and how is it different from other AI development tools?
Rivet is an IDE for creating complex AI agents, with a focus on additional toolkits and plugins. It is fully open source and has a visual programming environment. Unlike apps such as abacus.ai and Chidori, Rivet is designed for creating more complex AI agents.

2. What are the core strengths of Rivet?
Rivet's core strengths lie in its ability to facilitate the design and connection of nodes to craft AI agents. It simplifies the AI creation process and is accessible to people with different levels of programming knowledge. Rivet also offers remote debuggability, ease of embedding in a host application, and TypeScript compatibility.

3. How can I install Rivet on my desktop?
There are two ways to install Rivet: using the one-click installer or building from source. The recommended method is the one-click installer, which is available for different operating systems. Detailed installation instructions can be found in the article.

4. What are the main features of Rivet?
Rivet offers a visual programming environment, remote debugging capabilities, and collaboration features. It lets you visualize and build AI agents in its UI, debug prompt chains in real time, and collaborate with your team by representing graphs as YAML files.

5. Can I create complex AI agents with Rivet?
Yes, Rivet is designed for creating complex AI agents. It offers a range of toolkits and plugins for customizing and designing agents, providing a versatile platform for seasoned AI developers as well as newcomers to AI.
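As a rough code-level equivalent of the chemist chatbot described above: the context and prompt template live in a JSON document, and we assemble the messages a chat node would send to the model. Function names here are illustrative assumptions; Rivet itself does this wiring visually, and the model call is left out.

```python
# Load an agent's context/prompt from JSON and build chat-completion messages,
# mirroring what a visual "AI Chat" node would send to the model.
import json

def load_agent_config(raw_json: str) -> dict:
    """Parse the JSON document holding the agent's context and prompt template."""
    return json.loads(raw_json)

def build_messages(config: dict, user_input: str) -> list[dict]:
    """Turn the config plus a user question into chat-completion messages."""
    return [
        {"role": "system", "content": config["context"]},
        {"role": "user", "content": config["prompt"].format(question=user_input)},
    ]

raw = '{"context": "You are a chemist.", "prompt": "Assess stability: {question}"}'
messages = build_messages(load_agent_config(raw), "NaCl in water")
```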
Read the full article
codemonkhq · 5 months ago
Text
Learn to build a Twitter sentiment analysis tool using Langflow and Llama 2, with no coding required. This guide shows you how to classify tweets in real-time through a simple API, helping businesses track brand sentiment and customer feedback effectively.
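Even a no-code flow like this gets invoked from code in the end. A hedged sketch of that calling side in Python: the `/api/v1/run/<flow_id>` path and the payload and response shapes are assumptions about Langflow's HTTP API, and `parse_label` is an illustrative helper, not part of the guide.

```python
# Classify a tweet by POSTing it to a (assumed) Langflow run endpoint, then
# mapping the model's free-text reply onto a fixed sentiment label.
import json
from urllib import request

def parse_label(model_text: str) -> str:
    """Map the model's free-text reply onto a fixed sentiment label."""
    text = model_text.lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "unknown"

def classify_tweet(base_url: str, flow_id: str, tweet: str) -> str:
    payload = json.dumps({"input_value": tweet}).encode()
    req = request.Request(
        f"{base_url}/api/v1/run/{flow_id}",   # assumed endpoint; check your flow
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape; inspect your flow's actual output structure.
    return parse_label(json.dumps(body))
```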
cumulations · 2 years ago
Text
Thoughts on Tools, AI and the Realities of Using LangChain
The use of tools, both in the realms of science and philosophy, is often considered a defining characteristic that separates humans (and a select few non-human species) from other life forms.
From a scientific perspective, tool use signifies advanced cognitive abilities, including problem-solving and planning, which are generally associated with higher forms of life such as primates, birds, and cetaceans. These species demonstrate an understanding of cause and effect relationships, a prerequisite for tool use.
For philosophers, tool use is seen as an embodiment of our capacity to manipulate our environment and shape our destiny, a testament to our unique consciousness and self-awareness. It is a manifestation of our ability to conceptualize, innovate, and transcend physical and biological limitations. It thereby distinguishes us from other species.
The discovery of tool use in some non-human species challenges the notion of human exceptionalism, prompting a reevaluation of our understanding of intelligence and consciousness in the animal kingdom. The game gets wilder when the same communities must consider the implications of tools, their development, and use by AIs.
I’m not sure how I missed Toolformer: Language Models can Teach Themselves to Use Tools in February/March, but it’s as good as this kind of research gets. It’s not a ‘how to build your LLM in a weekend’, but rather a serious work that demonstrates the advantages of tools in the next wave of offerings. It is fascinating to think about the adoption of tools as a determinant of LLM advancement. It’s also apparent to me that a good deal of the tool-building being taken on (and likely over-hyped) by the LangChain community has been using this paper as a ‘northstar’.
And, after spending hours in my own (mostly unsuccessful) attempts to use tools like LangFlow and Flowise, all built on the ‘foundational’ tool LangChain, I had to wonder whether my abilities and skillsets left me in the category of ‘beings who aspire to use tools, but can’t quite pull it off.’
I ran into this post, The Problem With LangChain, which I’ll admit is pretty harsh in its treatment of LangChain’s authors and the ecosystem that has quickly formed around it.
LangChain was by far the most popular tool of choice for RAG, so I figured it was the perfect time to learn it. I spent some time reading LangChain’s rather comprehensive documentation to get a better understanding of how to best utilize it: after a week of research, I got nowhere.
Eventually I had an existential crisis: am I a worthless machine learning engineer for not being able to figure LangChain out when very many other ML engineers can? We went back to a lower-level ReAct flow, which immediately outperformed my LangChain implementation in conversation quality and accuracy.
In all, I wasted a month learning and testing LangChain, with the big takeaway that popular AI apps may not necessarily be worth the hype. …
Max Woolf, the author, goes on in detail that resonates only with those of us who’ve gone through the process of trying to design and prototype intricate LLM-based apps using LangChain and related tech. The examples are instructive, and I’ve now gone through three of the five stages of grief (denial, anger, bargaining, depression and acceptance) and am writing this post having reached the ‘depression’ stage. The good news is that I’ve learned a lot, and am feeling better about generating my own collections of code blocks that utilize old school python or javascript without the simplifications and time savings promised by some of the super-tools. I’m hoping soon to pull even with cephalopods on the “tool users” leaderboard.
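The "lower-level ReAct flow" the quoted post falls back on — the old-school-Python route this post also lands on — can be sketched in a few lines without any framework. The loop shape and the `Action:`/`Final:` line protocol here are illustrative conventions, with the LLM stubbed out.

```python
# Bare ReAct-style loop: the model alternates "Action:" tool calls with a
# final answer, and we feed each tool result back as an "Observation:".
def react_loop(llm, tools: dict, question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # model emits "Action: <tool>: <arg>" or "Final: <answer>"
        transcript += step + "\n"
        if step.startswith("Final:"):
            return step[len("Final:"):].strip()
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].strip().partition(": ")
            result = tools[name](arg)           # run the named tool on its argument
            transcript += f"Observation: {result}\n"
    return "no answer"

# Stubbed model: first asks for a calculation, then answers with the observation.
def fake_llm(transcript: str) -> str:
    if "Observation:" in transcript:
        return "Final: " + transcript.rsplit("Observation: ", 1)[1].split("\n")[0]
    return "Action: calc: 2+2"

# eval() is fine for a toy calculator tool in a sketch; never do this with real input.
answer = react_loop(fake_llm, {"calc": lambda s: str(eval(s))}, "What is 2+2?")
```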
kazifatagar · 8 months ago
Text
DataStax to Launch Massive New AI Platform Updates at RAG++ Event in San Francisco
DataStax is set to unveil significant updates to its AI platform at the RAG++ event in San Francisco, partnering with LangChain, Microsoft, NVIDIA, and others. Key updates include the release of Langflow 1.0, a visual framework for RAG applications now hosted in the DataStax Cloud, and a new partnership with Unstructured.io for efficient data ingestion and preparation for AI use. Read More…
View On WordPress