#face api development
talenlee · 2 months ago
Text
Life With Generative Tools
In 2023, back when my posts were still being shared to Twitter because the API wasn’t paid-only, I wrote an article about the potential ramifications of generative art media going forward. My immediate concern was that the tools weren’t going to go away, but also that the potential harm to artists was as much about general economic precarity as about people using fanart to make their D&D characters. I further added to this with a consideration of how I wanted to avoid using generative art in my game development because I didn’t want to deal with what people would say about it. That is, a social pressure about the art is what keeps me from using it, not a personal philosophical disposition. I’m an artist who already works with collage and constraints, and this feels like a handy way to have something I can play with.
Well, it’s been a year and change and a sort of AI Art Apocalypse has happened, and if you’re not aware of it, it’s because you’re someone who avoids all of the pools that have been so thoroughly pissed in that they are now just piss. If you’re at all related to any part of the internet where people share a bunch of images – which is to say a lot of social media – then you’re already dealing with the place crawling with generative images. Whether it’s a fanart booru, or big sites like facebook and twitter, or god help you deviantart, there is a pretty clear sign that anywhere that opened the door to generative art became a space overwhelmingly for generative art.
I teach about this subject now and I have had some time with it in a situation away from the internet, and I’d like to give you some insights into what this stuff is for, what it does, why you shouldn’t use it, and ways it can be useful.
Content Warning: I’m going to be talking about these tools as tools that exist and leaving the philosophical/ethical arguments about ‘art theft’ and their genesis aside. I’m not including any examples. No shrimp jesus jumpscare.
You might notice I’m saying ‘generative art’ and not ‘AI art.’ Part of this is because I don’t want to buy into the idea that these tools are ‘artificial intelligence.’ Ironically, ‘AI art’ now has less of an implication of being ‘Artificial Intelligence’ and is much more of an implication of ‘it’s ugly shiny art of shrimp jesus with badly spelled signs.’
I want to focus for this conversation on generative graphical tools, and I want to do that because I don’t have much experience with the other types. The textual generators offer me something I don’t really need? I already make a ton of words of dubious quality. Those are actually the things that concern me, because their natural aesthetic is authoritative and comprehensive, and that’s why it’s a problem that they’re being used to present any old nonsense that may just be straight up wrong. I don’t use those tools and I avoid the platforms that use them, so I’m not familiar with them.
Things Generative Art Is Good For
I already use art I don’t own, a lot, for playing. Every day for the past three years I’ve shared a custom Magic: The Gathering playing card, a game I don’t own the rights to, using a card face I don’t own the rights to, and artwork from an artist on Artstation whose artwork I did not pay for or even ask for. This is generally seen as a totally reasonable and acceptable form of playful, transformative media generation and I at no point pretend I have any rights to the material. If I take a picture of someone famous and put a speech bubble over their mouth saying ‘I drink farts,’ if I, as tumblr says, play with jpgs like dolls, that is by no means being done with rights and permission.
Which means we’re already aware that there’s a way of playing with images that violates copyright but is generally okay to do.
The metric I use for this is: if the thing you’re using generative art for doesn’t matter, then it doesn’t matter. If you’re not going to try and claim money, if you’re not going to put it on a marketplace, if you aren’t going to try and claim ownership and profit off generative material, I think you’re probably fine. I mean probably; if you’re using it to, say, generate revenge porn of a classmate, that’s an asshole move, but the thing is that’s a bad thing regardless of the tool you’re using. If you’re using it to bulk flood a space, like how Deviantart is full of accounts with tens of thousands of pictures made in a week, then that’s an asshole move because, again, it’s an asshole move regardless of the tool.
If you’re a roleplayer and you want a picture of your Dragonborn dude with glasses and a mohawk? That’s fine, you’re using it to give your imagination a pump, you’re using it to help your friends visualise what matters to you about your stuff. That’s fine! It’s not like you’re not making artistic choices when you do this, cycling through choices and seeing the one that works best for you. That’s not an action deprived of artistic choice!
There are also some things that are being labelled as ‘AI’ which seem to be more like something else to me. Particularly, there are software packages that resize images now, often calling it ‘AI upscaling’; they may be using some variety of these Midjourney-style models to work, but they serve a purpose similar to sequences of resizes and selective blurs. There are also tools that can do things like remove people from the background of images, which is… good? It should be good and easy to get people out of pictures they didn’t consent to be in.
Things Generative Art Is Bad For
Did you know you don’t own copyright on generated art? This is pretty well established. If you generated the image, it’s not yours, because you didn’t make it. It was made by an algorithm, and algorithms aren’t people. This isn’t a complicated issue; it just means that, straight up, any art you make at work that’s meant to be used for work shouldn’t be used, because people can just take it. Logo design, branding, all that stuff is just immediately open for bootlegging or worse, impersonation.
Now you might think that’s a bit of a strange thing to bring up, but remember, I’m dealing with students a lot. Students who want to position themselves as future prompt engineers or social media managers need to understand full well that whatever they make with these tools is not something that will have an enduring useful application. Maybe you can use it for a meme you post on an account, but it’s not something you can build branding off, because you don’t own it. Everyone owns it.
From that we get a secondary problem: if you don’t own it, its only use is what people say or think when they look at it, and the thing is, people are already sick and tired of the aesthetics of generated art. You’re going to get people who don’t care glossing over it, and people who do care hating it. If you use generative art as a way of presenting your business or foregrounding your ‘vibes’, people are going to think that your work is, primarily, ‘more AI art’ and not think about what it’s trying to communicate. When the internet is already full of Slop, if you use these tools to represent your work, you are going to be turning your own work and media presence into slop.
What’s more, you need to be good at seeing mistakes if you’re using these tools. If you put some art out there that’s got an extra thumb or someone’s not holding a sword right, people will notice. That means you need to start developing the toolset above for fine-tuning and redrawing sections of artwork. Now, that’s not a bad thing! That’s a skill you can develop! But it means that the primary draw of these tools is going to be something that you then have to do your own original work over the top of.
The biggest reason though I recommend students not treat this work like it’s a simple tool for universal application is that it devalues you as a worker. If you’re trying to get hired for a job at a company and you can show them a bunch of generative art you’ve made to convince them that you’re available, all you are really telling them is that you can be replaced by a small script that someone else can make. Your prompts are not unique enough, your use of the tool not refined enough that you can’t just be replaced by anyone else who gets paid less. You are trying to sell yourself as a product to employers, and generative art replaces what you bring with what everyone brings.
They make you lazy! People include typos in the generative media because they’re not even looking at them or caring about what they say! And that brings me to the next point that there are just things these tools don’t do a good job doing, and that’s stuff I want to address next in…
Things That Are Interesting
Because the tools of generative art create a very impressive-seeming artistic output, they are doing it in a way that people want to accept. They want to accept them, and that means accepting the problems, or finding a way to be okay with those problems. People wind up not caring that much about the typos and weird fingers and so on, because the reasoning goes: you know, it gets me a lot of what I want, but it doesn’t get me everything, and I don’t know how to get the everything.
If you generate an image and want to move something in it a little bit, your best way to do that is to edit the image directly. Telling the software to do that, again, but change this bit, this much, is in fact really hard because it doesn’t know what those parts are. It doesn’t have an idea of where they are, it’s all running on an alien understanding of nightmare horror imagery.
What that means is that people start to negotiate with themselves about what they want, getting to ‘good enough’ and learning how to negotiate with the software. My experiments with these tools led to me making a spreadsheet so I could isolate the terms I use that cause problems, and sometimes those results are very, very funny. In this, the tool teaches you how to use it (which most tools do), but the teaching results in a use that is wildly inappropriate to what the tool promises it’s for.
One of my earliest experiments was to take four passages from One Stone that described a character and just put that text straight into Midjourney to see what it generated based on that plain text description. Turns out? Nothing like what I wanted. But when I treated it like, say, I was searching for a set of tags on a booru system like danbooru or safebooru… then it was pretty good at that. Which is what brings me to the next stage of things, which is like…
These things were trained on porn sites right?
Like, you can take some very specific tags from some of the larger boorus and type them into these prompt sites and get a very reasonable representation of what it is you asked for, even if that term is a part of an idiolect, a term that’s specific to that one person in one space that’s become a repeated form of tag. Just type in an artist name and see if it can replicate their style and then check to see what kind of art that artist makes a lot of. This is why you can get a thing that can give you police batons and mirrored sunglasses just fine but if you ask for ‘police uniform’ you get some truly Tom of Finland kind of bulging stuff.
Conclusion
Nobody who dislikes generative art is wrong. I think there are definitely uses of it that are flat out bad, and I think it’s totally okay and even good to say so. Make fun of people who are using it, mock the shrimp jesuses, make it very clear you’re aware of what’s going on and why. There’s nothing wrong with that.
I do think that these tools are useful as toys, and I think that examining the art that they produce, and the art that the community around them is exalting and venerating, tells us stuff. Of course, what they tell us is that there are a lot of people out there who really want porn, and there are just as many people who want the legitimisation of impressive-seeming images so much that they don’t care about what those images are doing or what they’re for.
Now part of this defensiveness is also the risk of me being bitten. If I buy stock art that isn’t correctly disclosed as being generative art, then I might make and sell something using generative art, and now I look like an asshole for not being properly good at detecting and hating ‘AI art.’ And when I’ve, say, made a game using generative art that then is integrated into things like worldbuilding and the card faces, it gets a lot harder to tear it out at the roots and render myself properly morally clean. I’m sure a bunch of the stock art I used before 2020 was made algorithmically, just pumped-out slop that was reprocessing other formulas or technical objects to fill up a free stock art site like Freepik.
Which is full of generative art now.
You won’t hurt yourself by understanding these things, and people who are using them for fun or to learn or explore are by no means doing something morally ill. There is every good reason, though, to keep these things separated from anything that involves presenting yourself seriously, or using them to make money. If nothing else, people will look at you and go ‘oh, you’re one of those shrimp jesus assholes.’
Check it out on PRESS.exe to see it with images and links!
156 notes · View notes
mariacallous · 8 days ago
Text
It’s April, and the US is experiencing a self-inflicted trade war and a constitutional crisis over immigration. It’s a lot. It’s even enough to make you forget about Elon Musk’s so-called Department of Government Efficiency for a while. You shouldn’t.
To state the obvious: DOGE is still out there, chipping away at the foundations of government infrastructure. Slightly less obvious, maybe, is that the DOGE project has recently entered a new phase. The culling of federal workers and contracts will continue, where there’s anything left to cull. But from here on out, it’s all about the data.
Few if any entities in the world have as much access to as much sensitive data as the United States. From the start, DOGE has wanted as much of it as it could grab, and through a series of resignations, firings, and court cases, has mostly gotten its way.
In many cases it’s still unclear what exactly DOGE engineers have done or intend to do with that data. Despite Elon Musk’s protestations to the contrary, DOGE is as opaque as Vantablack. But recent reporting from WIRED and elsewhere begins to fill in the picture: For DOGE, data is a tool. It’s also a weapon.
Start with the Internal Revenue Service, where DOGE associates put the agency’s best and brightest career engineers in a room with Palantir folks for a few days last week. Their mission, as WIRED previously reported, was to build a “mega API” that would make it easier to view previously compartmentalized data from across the IRS in one place.
In isolation that may not sound so alarming. But in theory, an API for all IRS data would make it possible for any agency—or any outside party with the right permissions, for that matter—to access the most personal, and valuable, data the US government holds about its citizens. The blurriness of DOGE’s mission begins to gain focus. Even more, since we know that the IRS is already sharing its data in unprecedented ways: A deal the agency recently signed with the Department of Homeland Security provides sensitive information about undocumented immigrants.
It’s black-mirror corporate synergy, putting taxpayer data in the service of President Donald Trump’s deportation crusade.
It also extends beyond the IRS. The Washington Post reported this week that DOGE representatives across government agencies—from the Department of Housing and Urban Development to the Social Security Administration—are putting data that is normally cordoned off in service of identifying undocumented immigrants. At the Department of Labor, as WIRED reported Friday, DOGE has gained access to sensitive data about immigrants and farm workers.
And that’s just the data that stays within the government itself. This week NPR reported that a whistleblower at the National Labor Relations Board claims that staffers observed spikes in data leaving the agency after DOGE got access to its systems, with destinations unknown. The whistleblower further claims that DOGE agents appeared to take steps to “cover their tracks,” switching off or evading the monitoring tools that keep tabs on who’s doing what inside computer systems. (An NLRB spokesperson denied to NPR that DOGE had access to the agency’s systems.)
What could that data be used for? Anything. Everything. A company facing a union complaint at the NLRB could, as NPR notes, get access to “damaging testimony, union leadership, legal strategies and internal data on competitors.” There’s no confirmation that it’s been used for those things���but more to the point, there’s also currently no way to know either way.
That’s true also of DOGE’s data aims more broadly. Right now, the target is immigration. But it has hooks into so many systems, access to so much data, interests so varied both within and without government, there are very few limits to how or where it might next be deployed.
The spotlight shines a little less brightly on Elon Musk these days, as more urgent calamities take the stage. But DOGE continues to work in the wings. It has tapped into the most valuable data in the world. The real work starts when it puts that to use.
41 notes · View notes
probablyasocialecologist · 2 years ago
Text
The monetization creep has been evident for a while. Reddit has added a subscription “Reddit Premium”; offered “community rewards” as a paid super-vote; embraced an NFT marketplace; changed the site’s design for one with more recommended content; and started nudging users toward the official mobile app. The site has also been adding more restrictions to uploading and viewing “not safe for work” (NSFW) content. All this, while community requests for improvements to moderation tools and accessibility features have gone unaddressed on mobile, driving many users to third-party applications. Perhaps the worst development was announced on April 18th, when Reddit announced changes to its Data API that would be starting on July 1st, including new “premium access” pricing for users of the API. While this wouldn’t affect projects on the free tier, such as moderator bots or tools used by researchers, the new pricing seems to be an existential threat to third-party applications for the site. It also bears a striking resemblance to a similar bad decision Twitter made this year under Elon Musk.
[...]
Details about Reddit’s API-specific costs were not shared, but it is worth noting that an API request is commonly no more burdensome to a server than an HTML request, i.e. visiting or scraping a web page. Having an API just makes it easier for developers to maintain their automated requests. It is true that most third-party apps tend to not show Reddit’s advertisements, and AI developers may make heavy use of the API for training data, but these applications could still (with more effort) access the same information over HTML. The heart of this fight is for what Reddit’s CEO calls their “valuable corpus of data,” i.e. the user-made content on the company’s servers, and for who gets to live off this digital commons. While Reddit provides essential infrastructural support, these community developers and moderators make the site worth visiting, and any worthwhile content is the fruit of their volunteer labor. It’s this labor and worker solidarity which gives users unique leverage over the platform, in contrast to past backlash to other platforms.
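As a concrete illustration of that point, here is a minimal Python sketch fetching the same subreddit listing two ways: once through Reddit’s long-standing JSON listing endpoints and once by scraping the HTML page. The subreddit, the CSS selector, and the User-Agent string are illustrative assumptions, and Reddit’s current rate limits or authentication requirements may apply to either approach.

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "api-vs-html-demo/0.1"}

# 1. The API way: Reddit has long served listings as JSON when you append
#    .json to a listing URL. The data arrives already structured.
api_resp = requests.get(
    "https://www.reddit.com/r/programming/hot.json?limit=5",
    headers=HEADERS,
    timeout=10,
)
api_resp.raise_for_status()
for child in api_resp.json()["data"]["children"]:
    print(child["data"]["title"])

# 2. The scraping way: fetch the ordinary HTML page and dig the same titles
#    out of the markup. The server does comparable work either way; the
#    difference is how brittle the client code is.
html_resp = requests.get(
    "https://old.reddit.com/r/programming/",
    headers=HEADERS,
    timeout=10,
)
html_resp.raise_for_status()
soup = BeautifulSoup(html_resp.text, "html.parser")
# The "a.title" selector is a guess at old Reddit's markup; it is exactly
# the kind of detail that breaks whenever the page is redesigned.
for link in soup.select("a.title")[:5]:
    print(link.get_text())
```

Either route puts roughly the same load on Reddit; the API path is simply easier to keep working, which is why the pricing change lands on developers rather than on server costs.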
179 notes · View notes
meret118 · 8 days ago
Text
How Americans Are Surveilled During Protests
https://www.wired.com/story/uncanny-valley-podcast-how-americans-are-surveilled-during-protests/
Internet Sleuths Slam Trump for Photoshopping MS-13 Tat on Deported Dad’s Hand
https://www.thedailybeast.com/internet-sleuths-slam-trump-for-photoshopping-ms-13-tat-on-deported-dads-hand/
DOGE Is Just Getting Warmed Up
DOGE has tapped into some of the most sensitive and valuable data in the world. Now it’s starting to put it to work.
But in theory, an API for all IRS data would make it possible for any agency—or any outside party with the right permissions, for that matter—to access the most personal, and valuable, data the US government holds about its citizens. The blurriness of DOGE’s mission begins to gain focus.
Even more, since we know that the IRS is already sharing its data in unprecedented ways: A deal the agency recently signed with the Department of Homeland Security provides sensitive information about undocumented immigrants.
. . .
The Washington Post reported this week that DOGE representatives across government agencies—from the Department of Housing and Urban Development to the Social Security Administration—are putting data that is normally cordoned off in service of identifying undocumented immigrants. At the Department of Labor, as WIRED reported Friday, DOGE has gained access to sensitive data about immigrants and farm workers. And that’s just the data that stays within the government itself.
This week NPR reported that a whistleblower at the National Labor Relations Board claims that staffers observed spikes in data leaving the agency after DOGE got access to its systems, with destinations unknown. The whistleblower further claims that DOGE agents appeared to take steps to “cover their tracks,” switching off or evading the monitoring tools that keep tabs on who’s doing what inside computer systems. (An NLRB spokesperson denied to NPR that DOGE had access to the agency’s systems.)
What could that data be used for? Anything. Everything. A company facing a union complaint at the NLRB could, as NPR notes, get access to “damaging testimony, union leadership, legal strategies and internal data on competitors.” There’s no confirmation that it’s been used for those things—but more to the point, there’s also currently no way to know either way.
www.wired.com/story/doge-is-just-getting-warmed-up-data-immigration/
Trump Is Still Trying to Undermine Elections
Now that Trump has installed election deniers throughout his Administration, he has been busy dismantling the guardrails protecting voting and voters.
. . .
As Marc Elias, an elections lawyer who litigates on behalf of Democrats, told me, “When Donald Trump says that he does not believe there should be voting machines, you should believe him. When he says there should only be voting on Election Day, you should believe him.”
. . .
“In claiming to fire a commissioner of the Federal Election Commission, the president violates the law, the separation of powers, and generations of Supreme Court precedent.” He added that the F.E.C.’s commissioners “are confirmed by Congress to serve the vital role of protecting the democratic rights of American voters. As the only agency that regulates the president, Congress intentionally did not grant the president the power to fire FEC commissioners.”
Less than two weeks later, Trump issued an executive order that states, “No employee of the executive branch acting in their official capacity may advance an interpretation of the law as the position of the United States that contravenes the President or the Attorney General’s opinion on a matter of law.”
In plain language, this mandate cancels the independence of independent agencies and, in the context of the F.E.C., gives the President the ability to make and adjudicate campaign rules to his advantage. The Democratic National Committee, along with the Democratic Congressional and the Democratic Senatorial Campaign Committees, is now suing Trump and Bondi’s office, on the ground that the order violates federal law, but for now it stands.
More at the link.
https://www.newyorker.com/news/the-lede/trump-is-still-trying-to-undermine-elections
The Battle For American Thought
If Trump can control what ideas are allowed to be discussed, he can reshape American life as we know it.
https://www.huffpost.com/entry/trump-control-american-thought_n_6802a7e9e4b0afffe5e780bf
6 notes · View notes
randomwordsgenerate · 19 days ago
Text
🚀 The Ultimate Guide to Random Generators: Tools, Tricks & Creative Uses 🎲
From sparking creativity to solving coding problems, random generators are the unsung heroes of the digital world. Whether you need words, names, colors, or even RPG characters, there’s a tool for it.
5 Types of Random Generators You Need
1️⃣ Random Word Generators (For Writers & Brainstorming)
Use Case: Beat writer’s block, create poetry, or name characters.
Example Tools:
Random Words Generate (Simple & fast)
WordGenerator.net (Advanced filters)
2️⃣ Name Generators (For Gamers & Authors)
Need a fantasy villain name or startup business idea? These tools help:
Fantasy Name Generators
Behind the Name
3️⃣ Color Palette Generators (For Designers)
Cool Tools:
Coolors.co (Instant palettes)
Adobe Color (Advanced schemes)
4️⃣ Random Number Generators (For Devs & Gamers)
Best Picks:
Random.org (True randomness)
Calculator.net’s RNG
5️⃣ Meme & Fun Generators (For Social Media)
Try:
ImgFlip Meme Generator
This Person Does Not Exist (AI faces!)
💡 Pro Tips for Using Random Generators
✔ Writers: Mix 3 random words into a story plot. (Example: "A robot, banana, and black hole walk into a bar…")
✔ Developers: Use random data for testing APIs (see the sketch below).
✔ Designers: Generate color combos for logos.
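To make the developer tip concrete, here is a small sketch of generating random test records for an API using only Python's standard library. The record fields and the staging URL mentioned in the comments are placeholders, not part of any real service.

```python
import random
import string
import uuid

def random_user() -> dict:
    """Build one throwaway user record for exercising an API endpoint."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "id": str(uuid.uuid4()),
        "username": name,
        "email": f"{name}@example.com",          # placeholder domain
        "age": random.randint(18, 90),
        "signup_source": random.choice(["web", "ios", "android"]),
    }

random.seed(42)  # seeding makes the "random" test data reproducible
test_users = [random_user() for _ in range(10)]

for user in test_users:
    print(user)
    # In a real test you might POST each record to a staging endpoint,
    # e.g. requests.post("https://staging.example.com/users", json=user);
    # that URL is purely illustrative.
```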
🔗 Why Bookmark These Tools?
100% Free – No hidden payments.
No Login Needed – Instant results.
Creative Fuel – Endless ideas in seconds.
📌 Try Now: RandomWordsGenerate.com
3 notes · View notes
govindhtech · 20 days ago
Text
Pegasus 1.2: High-Performance Video Language Model
Pegasus 1.2 revolutionises long-form video AI with high accuracy and low latency. Scalable video querying is supported by this commercial tool.
TwelveLabs and Amazon Web Services (AWS) announced that Amazon Bedrock will soon provide Marengo and Pegasus, TwelveLabs’ cutting-edge multimodal foundation models. Amazon Bedrock, a managed service, lets developers access top AI models from leading organisations via a single API. With seamless access to TwelveLabs’ comprehensive video comprehension capabilities, developers and companies can revolutionise how they search for, assess, and derive insights from video content, backed by AWS’s security, privacy, and performance. AWS is the first cloud provider to offer TwelveLabs’ models.
Introducing Pegasus 1.2
Unlike many academic contexts, real-world video applications face two challenges:
Real-world videos might be seconds or hours long.
Proper temporal understanding is needed.
TwelveLabs is announcing Pegasus 1.2, a substantial industry-grade video language model upgrade, to meet these commercial demands. Pegasus 1.2 interprets long videos at a state-of-the-art level. With low latency, low cost, and best-in-class accuracy, the model can handle hour-long videos. Its embedded storage caches indexed videos, making it faster and cheaper to query the same video repeatedly.
Pegasus 1.2 is a cutting-edge technology that delivers corporate value through its intelligent, focused system architecture and excels in production-grade video processing pipelines.
Superior video language model for extended videos
Businesses need to handle long videos, yet processing time and time-to-value are important concerns. As input videos grow longer, a standard video processing/inference system cannot handle the orders of magnitude more frames involved, making it unsuitable for general adoption and commercial use. A commercial system must also answer input prompts and queries accurately across longer time spans.
Latency
To evaluate Pegasus 1.2’s speed, TwelveLabs compares time-to-first-token (TTFT) for 3–60-minute videos against the frontier model APIs GPT-4o and Gemini 1.5 Pro. Pegasus 1.2 shows consistent time-to-first-token latency for videos up to 15 minutes and responds faster than those APIs on lengthier material, thanks to its video-focused model design and optimised inference engine.
Performance
Pegasus 1.2 is compared to frontier model APIs using VideoMME-Long, a subset of Video-MME that contains videos longer than 30 minutes. Pegasus 1.2 outperforms all the flagship APIs, displaying state-of-the-art performance.
Pricing
Pegasus 1.2 provides best-in-class commercial video processing at low cost. TwelveLabs focuses on long videos and accurate temporal information rather than trying to do everything. Thanks to that focused approach, its highly optimised system performs well at a competitive price.
Better still, the system can generate many video-to-text outputs without much additional cost. Pegasus 1.2 produces rich video embeddings from indexed videos and saves them in its database for future API queries, allowing clients to build on the same footage continually at little cost. Google Gemini 1.5 Pro’s cache cost is $4.50 per hour of storage for 1 million tokens, which is around the token count for an hour of video. TwelveLabs’ integrated storage, by contrast, costs $0.09 per video hour per month, around 36,000x less. This benefits customers with large video archives that need to be understood cheaply.
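As a rough sketch of the workflow that pricing model implies (index a video once, then run many cheap queries against the stored embeddings), the snippet below uses plain HTTP in Python. The base URL, endpoint paths, field names, and auth header are all placeholders chosen for illustration, not the actual TwelveLabs or Amazon Bedrock API.

```python
import requests

BASE_URL = "https://api.video-platform.example/v1"  # placeholder host, not the real API
HEADERS = {"x-api-key": "YOUR_API_KEY"}             # placeholder auth scheme

def index_video(video_url: str) -> str:
    """One-time, comparatively expensive step: the service ingests the video
    and stores its embeddings. Field names here are illustrative only."""
    resp = requests.post(f"{BASE_URL}/videos", headers=HEADERS,
                         json={"url": video_url}, timeout=120)
    resp.raise_for_status()
    return resp.json()["video_id"]

def query_video(video_id: str, prompt: str) -> str:
    """Repeatable, cheap step: each query reuses the cached embeddings rather
    than reprocessing the footage from scratch."""
    resp = requests.post(f"{BASE_URL}/videos/{video_id}/generate", headers=HEADERS,
                         json={"prompt": prompt}, timeout=120)
    resp.raise_for_status()
    return resp.json()["text"]

video_id = index_video("https://example.com/lecture.mp4")   # pay the ingest cost once
for prompt in ["Summarise the video in three sentences.",
               "List the key moments with rough timestamps."]:
    print(query_video(video_id, prompt))                    # each call hits the stored embeddings
```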
Model Overview & Limitations
Architecture
Pegasus 1.2’s encoder-decoder architecture for video understanding includes a video encoder, a tokeniser, and a large language model. Though efficient, its design allows for full textual and visual data analysis.
These pieces form a cohesive system that can understand long-term contextual information and fine-grained specifics. Its architecture illustrates that small models can interpret video well when careful design decisions are made and fundamental multimodal processing difficulties are solved creatively.
Restrictions
Safety and bias
Pegasus 1.2 contains safety protections, but like any AI model, it might produce objectionable or hazardous material without enough oversight and control. Video foundation model safety and ethics are still being studied. TwelveLabs will provide a complete assessment and ethics report after more testing and input.
Hallucinations
Occasionally, Pegasus 1.2 may produce incorrect findings. Despite advances since Pegasus 1.1 to reduce hallucinations, users should be aware of this constraint, especially for precise and factual tasks.
2 notes · View notes
dijacrypt · 2 months ago
Text
Revolutionizing DeFi Development: How STON.fi API & SDK Simplify Token Swaps
The decentralized finance (DeFi) landscape is evolving rapidly, and developers are constantly seeking efficient ways to integrate token swap functionalities into their platforms. However, building seamless and optimized swap mechanisms from scratch can be complex, time-consuming, and risky.
This is where STON.fi API & SDK come into play. They provide developers with a ready-to-use, optimized solution that simplifies the process of enabling fast, secure, and cost-effective swaps.
In this article, we’ll take an in-depth look at why developers need efficient swap solutions, how the STON.fi API & SDK work, and how they can be integrated into various DeFi applications.
Why Developers Need a Robust Swap Integration
One of the core functions of any DeFi application is token swapping—the ability to exchange one cryptocurrency for another instantly and at the best possible rate.
But integrating swaps manually is not a straightforward task. Developers face several challenges:
Complex Smart Contract Logic – Handling liquidity pools, slippage, and price calculations requires expertise and rigorous testing.
Security Vulnerabilities – Improperly coded swaps can expose user funds to attacks.
Performance Issues – Slow execution or high gas fees can frustrate users and hurt adoption.
A poorly integrated swap feature can turn users away from a DeFi application, affecting engagement and liquidity. That’s why an efficient, battle-tested API and SDK can make a significant difference.
STON.fi API & SDK: What Makes Them a Game-Changer?
STON.fi has built an optimized API and SDK designed to handle the complexities of token swaps while giving developers an easy-to-use toolkit. Here’s why they stand out:
1. Seamless Swap Execution
Instead of manually routing transactions through liquidity pools, the STON.fi API automates the process, ensuring users always get the best swap rates.
2. Developer-Friendly SDK
For those who prefer working with structured development tools, the STON.fi SDK comes with pre-built functions that remove the need for extensive custom coding. Whether you’re integrating swaps into a mobile wallet, trading platform, or decentralized app, the SDK simplifies the process.
3. High-Speed Performance & Low Costs
STON.fi’s infrastructure is optimized for fast transaction execution, reducing delays and minimizing slippage. Users benefit from lower costs, while developers get a plug-and-play solution that ensures a smooth experience.
4. Secure & Scalable
Security is a major concern in DeFi, and STON.fi’s API is built with strong security measures, protecting transactions from vulnerabilities and ensuring reliability even under heavy traffic.
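To give a sense of the shape such an integration takes, here is a deliberately rough Python sketch of asking a DEX for a swap quote before handing the transaction to the user’s wallet. The endpoint path, parameter names, and token addresses are assumptions made for illustration; the real request and response schema should be taken from the STON.fi API and SDK documentation.

```python
import requests

API_BASE = "https://api.ston.fi"  # assumed host; verify against the official docs

def get_swap_quote(offer_token: str, ask_token: str, amount_units: str) -> dict:
    """Ask the DEX how much of ask_token a given amount of offer_token should
    fetch. The path and parameter names below are illustrative placeholders."""
    resp = requests.get(
        f"{API_BASE}/v1/swap/simulate",       # hypothetical endpoint path
        params={
            "offer_address": offer_token,      # hypothetical parameter names
            "ask_address": ask_token,
            "units": amount_units,             # amounts in minimal token units
            "slippage_tolerance": "0.01",
        },
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()

quote = get_swap_quote(
    offer_token="<TON token address placeholder>",
    ask_token="<USDT token address placeholder>",
    amount_units="1000000000",
)
print(quote)
# In a real wallet or dApp, the next step would be building the swap
# transaction (the SDK provides helpers for this) and passing it to the
# user's wallet to sign; user keys should never be handled server-side.
```

Requesting a quote first and only then constructing the transaction keeps slippage handling explicit, which is exactly the kind of detail the SDK’s pre-built functions are meant to take care of for you.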
Practical Use Cases for Developers
1. Building Decentralized Exchanges (DEXs)
STON.fi API enables developers to integrate swap functionalities directly into their DEX platforms without having to build custom liquidity management solutions.
2. Enhancing Web3 Wallets
Crypto wallets can integrate STON.fi’s swap functionality, allowing users to exchange tokens without leaving the wallet interface.
3. Automating Trading Strategies
The API can be used to build automated trading bots that execute swaps based on real-time market conditions, improving efficiency for traders.
4. Scaling DeFi Platforms
For DeFi applications handling high transaction volumes, STON.fi API ensures fast and cost-effective execution, improving user retention.
Why Developers Should Consider STON.fi API & SDK
For developers aiming to create efficient, user-friendly, and scalable DeFi applications, STON.fi offers a robust solution that eliminates the complexities of manual integrations.
Saves Development Time – Reduces the need for custom swap coding.
Improves Security – Pre-tested smart contracts minimize vulnerabilities.
Enhances User Experience – Faster swaps create a smoother, more reliable platform.
Optimizes Performance – Low latency and cost-efficient execution ensure better outcomes.
Whether you’re working on a new DeFi project or improving an existing platform, STON.fi’s API & SDK provide a solid foundation to enhance functionality and scalability.
By leveraging STON.fi’s tools, developers can focus on building innovative features, rather than getting caught up in the technical challenges of token swaps.
3 notes · View notes
this-week-in-rust · 6 months ago
Text
This Week in Rust 572
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
October project goals update
Next Steps on the Rust Trademark Policy
This Development-cycle in Cargo: 1.83
Re-organising the compiler team and recognising our team members
This Month in Our Test Infra: October 2024
Call for proposals: Rust 2025h1 project goals
Foundation
Q3 2024 Recap from Rebecca Rumbul
Rust Foundation Member Announcement: CodeDay, OpenSource Science(OS-Sci), & PROMOTIC
Newsletters
The Embedded Rustacean Issue #31
Project/Tooling Updates
Announcing Intentrace, an alternative strace for everyone
Ractor Quickstart
Announcing Sycamore v0.9.0
CXX-Qt 0.7 Release
An 'Educational' Platformer for Kids to Learn Math and Reading—and Bevy for the Devs
[ZH][EN] Select HTML Components in Declarative Rust
Observations/Thoughts
Safety in an unsafe world
MinPin: yet another pin proposal
Reached the recursion limit... at build time?
Building Trustworthy Software: The Power of Testing in Rust
Async Rust is not safe with io_uring
Macros, Safety, and SOA
how big is your future?
A comparison of Rust’s borrow checker to the one in C#
Streaming Audio APIs in Rust pt. 3: Audio Decoding
[audio] InfinyOn with Deb Roy Chowdhury
Rust Walkthroughs
Difference Between iter() and into_iter() in Rust
Rust's Sneaky Deadlock With if let Blocks
Why I love Rust for tokenising and parsing
"German string" optimizations in Spellbook
Rust's Most Subtle Syntax
Parsing arguments in Rust with no dependencies
Simple way to make i18n support in Rust with with examples and tests
How to shallow clone a Cow
Beginner Rust ESP32 development - Snake
[video] Rust Collections & Iterators Demystified 🪄
Research
Charon: An Analysis Framework for Rust
Crux, a Precise Verifier for Rust and Other Languages
Miscellaneous
Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk
[audio] Let's talk about Rust with John Arundel
[audio] Exploring Rust for Embedded Systems with Philip Markgraf
Crate of the Week
This week's crate is wtransport, an implementation of the WebTransport specification, a successor to WebSockets with many additional features.
Thanks to Josh Triplett for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:
RFCs
No calls for testing were issued this week.
Rust
No calls for testing were issued this week.
Rustup
No calls for testing were issued this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
Updates from the Rust Project
473 pull requests were merged in the last week
account for late-bound depth when capturing all opaque lifetimes
add --print host-tuple to print host target tuple
add f16 and f128 to invalid_nan_comparison
add lp64e RISC-V ABI
also treat impl definition parent as transparent regarding modules
cleanup attributes around unchecked shifts and unchecked negation in const
cleanup op lookup in HIR typeck
collect item bounds for RPITITs from trait where clauses just like associated types
do not enforce ~const constness effects in typeck if rustc_do_not_const_check
don't lint irrefutable_let_patterns on leading patterns if else if let-chains
double-check conditional constness in MIR
ensure that resume arg outlives region bound for coroutines
find the generic container rather than simply looking up for the assoc with const arg
fix compiler panic with a large number of threads
fix suggestion for diagnostic error E0027
fix validation when lowering ? trait bounds
implement suggestion for never type fallback lints
improve missing_abi lint
improve duplicate derive Copy/Clone diagnostics
llvm: match new LLVM 128-bit integer alignment on sparc
make codegen help output more consistent
make sure type_param_predicates resolves correctly for RPITIT
pass RUSTC_HOST_FLAGS at once without the for loop
port most of --print=target-cpus to Rust
register ~const preds for Deref adjustments in HIR typeck
reject generic self types
remap impl-trait lifetimes on HIR instead of AST lowering
remove "" case from RISC-V llvm_abiname match statement
remove do_not_const_check from Iterator methods
remove region from adjustments
remove support for -Zprofile (gcov-style coverage instrumentation)
replace manual time convertions with std ones, comptime time format parsing
suggest creating unary tuples when types don't match a trait
support clobber_abi and vector registers (clobber-only) in PowerPC inline assembly
try to point out when edition 2024 lifetime capture rules cause borrowck issues
typingMode: merge intercrate, reveal, and defining_opaque_types
miri: change futex_wait errno from Scalar to IoError
stabilize const_arguments_as_str
stabilize if_let_rescope
mark str::is_char_boundary and str::split_at* unstably const
remove const-support for align_offset and is_aligned
unstably add ptr::byte_sub_ptr
implement From<&mut {slice}> for Box/Rc/Arc<{slice}>
rc/Arc: don't leak the allocation if drop panics
add LowerExp and UpperExp implementations to NonZero
use Hacker's Delight impl in i64::midpoint instead of wide i128 impl
xous: sync: remove rustc_const_stable attribute on Condvar and Mutex new()
add const_panic macro to make it easier to fall back to non-formatting panic in const
cargo: downgrade version-exists error to warning on dry-run
cargo: add more metadata to rustc_fingerprint
cargo: add transactional semantics to rustfix
cargo: add unstable -Zroot-dir flag to configure the path from which rustc should be invoked
cargo: allow build scripts to report error messages through cargo::error
cargo: change config paths to only check CARGO_HOME for cargo-script
cargo: download targeted transitive deps of with artifact deps' target platform
cargo fix: track version in fingerprint dep-info files
cargo: remove requirement for --target when invoking Cargo with -Zbuild-std
rustdoc: Fix --show-coverage when JSON output format is used
rustdoc: Unify variant struct fields margins with struct fields
rustdoc: make doctest span tweak a 2024 edition change
rustdoc: skip stability inheritance for some item kinds
mdbook: improve theme support when JS is disabled
mdbook: load the sidebar toc from a shared JS file or iframe
clippy: infinite_loops: fix incorrect suggestions on async functions/closures
clippy: needless_continue: check labels consistency before warning
clippy: no_mangle attribute requires unsafe in Rust 2024
clippy: add new trivial_map_over_range lint
clippy: cleanup code suggestion for into_iter_without_iter
clippy: do not use gen as a variable name
clippy: don't lint unnamed consts and nested items within functions in missing_docs_in_private_items
clippy: extend large_include_file lint to also work on attributes
clippy: fix allow_attributes when expanded from some macros
clippy: improve display of clippy lints page when JS is disabled
clippy: new lint map_all_any_identity
clippy: new lint needless_as_bytes
clippy: new lint source_item_ordering
clippy: return iterator must not capture lifetimes in Rust 2024
clippy: use match ergonomics compatible with editions 2021 and 2024
rust-analyzer: allow interpreting consts and statics with interpret function command
rust-analyzer: avoid interior mutability in TyLoweringContext
rust-analyzer: do not render meta info when hovering usages
rust-analyzer: add assist to generate a type alias for a function
rust-analyzer: render extern blocks in file_structure
rust-analyzer: show static values on hover
rust-analyzer: auto-complete import for aliased function and module
rust-analyzer: fix the server not honoring diagnostic refresh support
rust-analyzer: only parse safe as contextual kw in extern blocks
rust-analyzer: parse patterns with leading pipe properly in all places
rust-analyzer: support new #[rustc_intrinsic] attribute and fallback bodies
Rust Compiler Performance Triage
A week dominated by one large improvement and one large regression, where luckily the improvement had a larger impact. The regression seems to have been caused by a newly introduced lint that might have performance issues. The improvement was in building rustc with protected visibility, which reduces the number of dynamic relocations needed, leading to some nice performance gains. Across a large swath of the perf suite, the compiler is on average 1% faster after this week compared to last week.
Triage done by @rylev. Revision range: c8a8c820..27e38f8f
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.8% | [0.1%, 2.0%] | 80 |
| Regressions ❌ (secondary) | 1.9% | [0.2%, 3.4%] | 45 |
| Improvements ✅ (primary) | -1.9% | [-31.6%, -0.1%] | 148 |
| Improvements ✅ (secondary) | -5.1% | [-27.8%, -0.1%] | 180 |
| All ❌✅ (primary) | -1.0% | [-31.6%, 2.0%] | 228 |
1 Regression, 1 Improvement, 5 Mixed; 3 of them in rollups
46 artifact comparisons made in total
Full report here
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
[RFC] Default field values
RFC: Give users control over feature unification
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
RFCs
[disposition: merge] Add support for use Trait::func
Tracking Issues & PRs
Rust
[disposition: merge] Stabilize Arm64EC inline assembly
[disposition: merge] Stabilize s390x inline assembly
[disposition: merge] rustdoc-search: simplify rules for generics and type params
[disposition: merge] Fix ICE when passing DefId-creating args to legacy_const_generics.
[disposition: merge] Tracking Issue for const_option_ext
[disposition: merge] Tracking Issue for const_unicode_case_lookup
[disposition: merge] Reject raw lifetime followed by ', like regular lifetimes do
[disposition: merge] Enforce that raw lifetimes must be valid raw identifiers
[disposition: merge] Stabilize WebAssembly multivalue, reference-types, and tail-call target features
Cargo
No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
No Language Team Proposals entered Final Comment Period this week.
Language Reference
No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs
[new] Implement The Update Framework for Project Signing
[new] [RFC] Static Function Argument Unpacking
[new] [RFC] Explicit ABI in extern
[new] Add homogeneous_try_blocks RFC
Upcoming Events
Rusty Events between 2024-11-06 - 2024-12-04 🦀
Virtual
2024-11-06 | Virtual (Indianapolis, IN, US) | Indy Rust
Indy.rs - with Social Distancing
2024-11-07 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin
Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup
2024-11-08 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative
Rust Coding / Game Dev Fridays Open Mob Session!
2024-11-12 | Virtual (Dallas, TX, US) | Dallas Rust
Second Tuesday
2024-11-14 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2024-11-14 | Virtual and In-Person (Lehi, UT, US) | Utah Rust
Green Thumb: Building a Bluetooth-Enabled Plant Waterer with Rust and Microbit
2024-11-14 | Virtual and In-Person (Seattle, WA, US) | Seattle Rust User Group
November Meetup
2024-11-15 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative
Rust Coding / Game Dev Fridays Open Mob Session!
2024-11-19 | Virtual (Los Angeles, CA, US) | DevTalk LA
Discussion - Topic: Rust for UI
2024-11-19 | Virtual (Washington, DC, US) | Rust DC
Mid-month Rustful
2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust
Embedded Rust Workshop
2024-11-21 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin
Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup
2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Trustworthy IoT with Rust--and passwords!
2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development
Bevy Meetup #7
2024-11-25 | Bratislava, SK | Bratislava Rust Meetup Group
ONLINE Talk, sponsored by Sonalake - Bratislava Rust Meetup
2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust
Last Tuesday
2024-11-28 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2024-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
Buffalo Rust User Group
Asia
2024-11-28 | Bangalore/Bengaluru, IN | Rust Bangalore
RustTechX Summit 2024 BOSCH
2024-11-30 | Tokyo, JP | Rust Tokyo
Rust.Tokyo 2024
Europe
2024-11-06 | Oxford, UK | Oxford Rust Meetup Group
Oxford Rust and C++ social
2024-11-06 | Paris, FR | Paris Rustaceans
Rust Meetup in Paris
2024-11-09 - 2024-11-11 | Florence, IT | Rust Lab
Rust Lab 2024: The International Conference on Rust in Florence
2024-11-12 | Zurich, CH | Rust Zurich
Encrypted/distributed filesystems, wasm-bindgen
2024-11-13 | Reading, UK | Reading Rust Workshop
Reading Rust Meetup
2024-11-14 | Stockholm, SE | Stockholm Rust
Rust Meetup @UXStream
2024-11-19 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig
Daten sichern mit ZFS (und Rust)
2024-11-21 | Edinburgh, UK | Rust and Friends
Rust and Friends (pub)
2024-11-21 | Oslo, NO | Rust Oslo
Rust Hack'n'Learn at Kampen Bistro
2024-11-23 | Basel, CH | Rust Basel
Rust + HTMX - Workshop #3
2024-11-27 | Dortmund, DE | Rust Dortmund
Rust Dortmund
2024-11-28 | Aarhus, DK | Rust Aarhus
Talk Night at Lind Capital
2024-11-28 | Augsburg, DE | Rust Meetup Augsburg
Augsburg Rust Meetup #10
2024-11-28 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin
Rust and Tell - Title
North America
2024-11-07 | Chicago, IL, US | Chicago Rust Meetup
Chicago Rust Meetup
2024-11-07 | Montréal, QC, CA | Rust Montréal
November Monthly Social
2024-11-07 | St. Louis, MO, US | STL Rust
Game development with Rust and the Bevy engine
2024-11-12 | Ann Arbor, MI, US | Detroit Rust
Rust Community Meetup - Ann Arbor
2024-11-14 | Mountain View, CA, US | Hacker Dojo
Rust Meetup at Hacker Dojo
2024-11-15 | Mexico City, DF, MX | Rust MX
Multi threading y Async en Rust parte 2 - Smart Pointes y Closures
2024-11-15 | Somerville, MA, US | Boston Rust Meetup
Ball Square Rust Lunch, Nov 15
2024-11-19 | San Francisco, CA, US | San Francisco Rust Study Group
Rust Hacking in Person
2024-11-23 | Boston, MA, US | Boston Rust Meetup
Boston Common Rust Lunch, Nov 23
2024-11-25 | Ferndale, MI, US | Detroit Rust
Rust Community Meetup - Ferndale
2024-11-27 | Austin, TX, US | Rust ATX
Rust Lunch - Fareground
Oceania
2024-11-12 | Christchurch, NZ | Christchurch Rust Meetup Group
Christchurch Rust Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
Any sufficiently complicated C project contains an adhoc, informally specified, bug ridden, slow implementation of half of cargo.
– Folkert de Vries at RustNL 2024 (youtube recording)
Thanks to Collin Richards for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.
Email list hosting is sponsored by The Rust Foundation
Discuss on r/rust
3 notes · View notes
obanicrypto · 4 months ago
Text
Unlocking New Possibilities with the STON.fi API & SDK Demo App
Have you ever faced a moment in your development journey where you wanted to integrate a powerful feature but didn’t know where to start? That’s a common feeling, especially in the blockchain space where innovation moves faster than most of us can keep up with. I’ve been there, too, and that’s why I’m excited to share something that will make your work simpler and more impactful—the STON.fi API & SDK Demo App.
This isn’t just another tool in the blockchain world; it’s a resource designed to give you clarity and confidence when building on the TON ecosystem.
What Is the STON.fi Demo App All About
Let’s start with a quick analogy. Think of building a blockchain app like constructing a house. You need the right tools and a clear blueprint. The STON.fi demo app is like a pre-built room that shows you exactly how everything fits together. You can study it, replicate it, and adapt it to your project.
This app showcases how to seamlessly integrate STON.fi’s swap function using its API and SDK. It’s a working example that’s not just theoretical—it’s real, functional, and ready to inspire your next project.
Why Does This Matter
Let’s make this relatable. Imagine you’re tasked with creating a crypto wallet that allows users to swap tokens. Without guidance, you’d spend hours (or even days) trying to figure out the right implementation. The STON.fi demo app eliminates that guesswork.
Here’s why it’s a big deal:
1. It’s a Hands-On Guide
You don’t have to learn by trial and error. The demo gives you a live example of how everything works, so you’re not starting from scratch.
2. Saves Time and Energy
Time is money, especially in tech. Instead of spending countless hours debugging, you can focus on customizing and enhancing your app.
3. Showcases the Full Potential of STON.fi
The demo isn’t limited to swaps—it’s a showcase of how versatile and powerful the STON.fi SDK can be.
Real-Life Applications
Here’s where it gets exciting. Whether you’re a solo developer or part of a team, this demo app can simplify your work and spark new ideas.
Let’s say you’re building a decentralized exchange (DEX). Token swaps are a core feature, but implementing them can feel overwhelming. The STON.fi demo app gives you a starting point that’s already proven to work.
Or maybe you’re creating a DeFi lending platform. Adding a token swap feature can enhance your offering, making it more attractive to users. The demo app provides the tools you need to make that happen without reinventing the wheel.
Breaking Down the Benefits
Think of this as a tool that bridges the gap between “I want to build this” and “I just built this.”
1. Clarity in Implementation: The app gives you a clear example of how to integrate STON.fi’s features. It’s like having a mentor guide you through the process.
2. Reduced Complexity: Blockchain development can feel like trying to solve a Rubik’s cube blindfolded. This app removes unnecessary complexity, allowing you to focus on what matters.
3. Endless Inspiration: Beyond swaps, this demo can inspire you to explore other possibilities within the TON ecosystem.
Why I Believe This Matters
I remember when I first started exploring blockchain development. Every step felt like climbing a mountain, and sometimes I wasn’t even sure if I was on the right trail. Resources like the STON.fi demo app would have made a world of difference back then.
This tool isn’t just for seasoned developers—it’s for anyone looking to make their mark in the blockchain space. It’s accessible, practical, and built to help you succeed.
Try the SDK
Final Thoughts
The STON.fi API & SDK Demo App isn’t just a resource; it’s a catalyst for creativity and innovation. Whether you’re building your first app or looking to level up an existing project, this demo provides the clarity and direction you need.
Take the time to explore it, experiment with it, and let it inspire your next big idea. In a space as dynamic as blockchain, having the right tools can make all the difference.
This is your opportunity to simplify your process, save time, and unlock the full potential of the TON ecosystem. Don’t just take my word for it—try it out and see what’s possible.
3 notes · View notes
obavee · 4 months ago
Text
Exploring the Power of STON.fi’s API & SDK Demo App: A Developer’s Game-Changer
If you're a developer diving into the world of blockchain, especially within the TON ecosystem, you know how overwhelming it can feel to get everything just right. From understanding decentralized finance (DeFi) to building seamless integration, the process is filled with complex layers. But what if there was an easier way to get started with integrating powerful features like token swaps into your app?
STON.fi has launched something that could change the way you work: the STON.fi API & SDK Demo App. This isn’t just a demo—it’s a practical, real-world example that brings the power of the TON blockchain right into your hands.
Why Should You Care?
Think of this demo app as your personal guide in a foreign city. You know your destination, but the roads are unfamiliar. This app shows you exactly how to integrate key features (like swaps) into your decentralized apps (dApps), step by step. Rather than spending hours or days figuring it out, you get a hands-on example of what works, which can save you a lot of time.
Here’s What Makes This Demo App Special
1. It’s Easy to Understand
Instead of vague documentation or unclear tutorials, this demo app shows you a live, working example. It’s like watching a professional chef cook a meal and then being handed the recipe. You see exactly how everything comes together.
2. A Quick Head Start
Let’s face it—time is valuable. As a developer, you don’t want to reinvent the wheel. This demo gives you a clear starting point, making the development process smoother and faster.
3. Shows You the Full Potential
The app isn’t just about showing you how to add one simple feature; it gives you insight into what’s possible with STON.fi’s tools, opening up the door to more advanced ideas and future projects.
Let’s Break It Down
Imagine you’re designing an app where users can trade tokens easily. You’ve got the app’s main features in mind, but you need to figure out how to incorporate a simple token swap function. The STON.fi demo app is your perfect starting point. It gives you a pre-built, working example of how to integrate token swaps, saving you the headache of figuring out how to do it from scratch.
It’s like buying a DIY kit for a project—you get the materials (API & SDK), plus a guide that shows you how to put it all together. No guesswork.
Real Use Cases
1. Building a Wallet with Token Swaps
If you're building a wallet app and want to offer token swaps to your users, this demo app shows you exactly how to integrate that feature. Instead of piecing it together through trial and error, you get a clear roadmap that saves you time.
2. Creating a DeFi Lending Platform
Maybe you’re working on a DeFi lending platform where users can exchange assets. While your main goal is lending, a swap feature makes the process smoother for your users. This demo app walks you through how to add that feature seamlessly; a rough sketch of that flow follows below.
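As a hedged illustration of that lending use case, here is a small TypeScript sketch of a "swap first, then repay" flow. Both interfaces are hypothetical stand-ins, not the real STON.fi SDK or any particular lending contract; the demo app is the place to see the actual calls.

```typescript
// Hypothetical interfaces: a swap step provided by the DEX integration,
// and a repayment call on the lending platform itself.
interface SwapStep {
  // Swaps `amount` of `from` into `to` on behalf of `owner`; returns the amount received.
  swap(from: string, to: string, amount: bigint, owner: string): Promise<bigint>;
}

interface LendingPlatform {
  repay(loanId: string, token: string, amount: bigint, owner: string): Promise<void>;
}

// Let a user repay a USDT-denominated loan while holding only TON:
// swap first, then repay with the proceeds.
async function repayWithSwap(
  dex: SwapStep,
  lending: LendingPlatform,
  loanId: string,
  owner: string,
  tonAmount: bigint,
): Promise<void> {
  const usdtReceived = await dex.swap("TON", "USDT", tonAmount, owner);
  await lending.repay(loanId, "USDT", usdtReceived, owner);
}
```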
Why This is a Big Deal
Here’s the thing: the blockchain space is about simplifying processes and opening up new opportunities. Developers should be able to focus on innovation, not get stuck on the technical details. This demo app cuts through the complexity and provides a working example, so you can get to building quickly.
In traditional finance, things move slowly because they’re burdened with outdated systems and red tape. Blockchain, on the other hand, thrives on speed, innovation, and flexibility. Tools like the STON.fi demo app are key in accelerating this innovation by making the technical side easier for developers.
I’ve been in the blockchain space long enough to see how frustrating and time-consuming integrating new features can be. It’s like being handed a complex puzzle without a guide—sure, you might get there eventually, but it’s a lot of wasted effort.
That’s why tools like this demo app are so powerful. They give you clarity. They give you a head start. And most importantly, they save you time. I’ve seen developers waste days just trying to figure out how to add simple functions. With this tool, you can skip that frustration and jump straight into what matters: building great apps.
If you're a developer in the TON ecosystem, the STON.fi API & SDK Demo App is a must-have resource. It’s more than just a demo; it’s a practical tool that helps you integrate powerful features into your projects faster and more efficiently.
Try the API and DEMO
Innovation in blockchain shouldn’t be complicated. The right tools can make all the difference in helping you bring your ideas to life. So dive in, explore the demo, and see how STON.fi can enhance your development experience.
Great things happen when developers have the tools they need. This is one of those tools.
canmom · 1 year ago
Text
VR observations, 10 months in
I've been a game dev for 10 months now. It's pretty great, I'm enjoying it a lot, I get to spend my days doing crazy shader shit and animations and voxels and visual effects. Hopefully the game that will come out of all this will be one people enjoy, and in any case I'm learning so much that will eventually come back to the personal ~artistic~ side of things. I can't talk about that game just yet though (but soon it will be announced, I'm pretty sure). So this is a post about other games.
Mind you, I don't actually play very many VR games, or games in general these days, because I'm too busy developing the dang things. But sometimes I do! And I think it's interesting to talk about them.
These aren't really reviews as such. You could project all sorts of ulterior motives if they were. Like my livelihood does sorta depend on people buying VR headsets and then games on them. This is more just like things I observe.
Headsets
The biggest problem with VR at the moment is that wearing a headset for too long kinda sucks. The weight of the headset is all effectively held on a lever arm and it presses on your face. However, this is heavily dependent on the strap you use to hold it to your head. A better balanced and cushioned strap can hold the headset still with less pressure and better balance the forces.
The strap that comes with the Quest 3 is absolute dogshit. So a big part of the reason I wouldn't play VR games for fun is because after wearing the headset for 30-60 minutes in the daily meeting, the absolute last thing I'd want to do is wear it any longer. Recently I got a new strap (a ~£25 Devaso one, the low end of straps), and it's markedly improved. It would probably be even better if I got one of the high end Bobo straps. So please take it from me: if you wanna get into VR, get a decent strap.
I hear the Apple Vision Pro is a lot more comfortable to wear for long periods, though I won't have a chance to try it until later this month.
During the time I've been working at Holonautic, Meta released their Quest 3, and more recently Apple released their hyper-expensive Vision Pro to much fanfare.
The Quest 3 is a decent headset and probably the one I'd recommend if you're getting into VR and can afford a new console. It's not a massive improvement over the Quest 2 - the main thing that's better is the 'passthrough' (aka 'augmented reality', the mode where the 3D objects are composited into video of what's in front of you), which is now in full colour, and feels a lot less intrusive than the blown out greyscale that the Quest 2 did. But it still has some trouble with properly taking into account depth when combining the feeds from multiple cameras, so you get weird space warping effects when something in the foreground moves over something in the background.
The Vision Pro is by all accounts the bee's knees, though it costs $3500 and has already sold out, so good luck getting one. It brings a new interaction mode based on eye tracking, where you look at a thing with your eyes to select it like with a mouse pointer, and hold your hands in your lap and pinch to interact. Its passthrough is apparently miles ahead, it's got a laptop tier chip, etc etc. I'm not gonna talk about that though, if you want to read product reviews there are a million places you can do it.
Instead I wanna talk about rendering, since I think this is something that only gets discussed among devs, and maybe people outside might be interested.
Right now there is only one game engine that builds to the Vision Pro, which is Unity. However, Apple have their own graphics API, and the PolySpatial API used for the mixed reality mode is pretty heavily locked down in terms of what you can do.
So what Unity does is essentially run a transpilation step to map its own constructs into PolySpatial ones. For example, say you make a shader in Shader Graph (you have to use shader graph, it won't take HLSL shaders in general) - Unity will generate a vision pro compatible shader (in MaterialX format) from that. Vertex and fragment shaders mostly work, particle systems mostly don't, you don't get any postprocessing shaders, anything that involves a compute shader is right out (which means no VFX graph), Entities Graphics doesn't work. I don't think you get much control over stuff like batching. It's pretty limited compared to what we're used to on other platforms.
I said fragment shaders mostly work. It's true that most Shader Graph nodes work the same. However, if you're doing custom lighting calculations in a Unity shader, a standard way to do things is to use the 'main light' property provided by Unity. On the Vision Pro, you don't get a main light.
The Vision Pro actually uses an image-based lighting model, which uses the actual room around you to provide lighting information. This is great because objects in VR look like they actually belong in the space you're in, but it would of course be a huge security issue if all programs could get realtime video of your room, and I imagine the maths involved is pretty complex. So the only light information you get is a shader graph node which does a PBR lighting calculation based on provided parameters (albedo, normal, roughness, metallicity etc.). You can then instruct it to do whatever you want with the output of that inside the shader.
The upshot of this is that we have to make different versions of all our shaders for the Vision Pro version of the game.
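For a rough feel of what that kind of PBR node computes from its inputs (albedo, normal, roughness, metallic), here's a toy single-light version written out in TypeScript, since a shader graph can't really be pasted into a post. It's deliberately simplified: the real visionOS node samples the room's image-based environment rather than a single light, and the maths below is a generic Lambert-plus-Blinn-Phong approximation, not Apple's or Unity's actual implementation.

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.sqrt(dot(a, a)));

// Toy single-light PBR-ish shading: Lambert diffuse plus a Blinn-Phong
// specular lobe whose sharpness falls off with roughness, with metallic
// shifting energy from diffuse into albedo-tinted specular.
function shade(
  albedo: Vec3, normal: Vec3, roughness: number, metallic: number,
  lightDir: Vec3, lightColor: Vec3, viewDir: Vec3,
): Vec3 {
  const n = normalize(normal);
  const l = normalize(lightDir);
  const v = normalize(viewDir);
  const h = normalize(add(l, v)); // half vector

  const ndotl = Math.max(dot(n, l), 0);
  // Rougher surfaces get a broader, dimmer highlight.
  const shininess = Math.max(2 / (roughness * roughness + 1e-4) - 2, 0);
  const specStrength = Math.pow(Math.max(dot(n, h), 0), shininess);

  const diffuse = scale(albedo, ndotl * (1 - metallic)); // metals have no diffuse
  const specTint = add(scale([1, 1, 1], 1 - metallic), scale(albedo, metallic));
  const specular = scale(specTint, specStrength * ndotl);

  return [
    (diffuse[0] + specular[0]) * lightColor[0],
    (diffuse[1] + specular[1]) * lightColor[1],
    (diffuse[2] + specular[2]) * lightColor[2],
  ];
}
```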
Once the game is announced we'll probably have a lot to write about developing interactions for the vision pro vs the quest, so I'll save that for now. It's pretty fascinating though.
Anyway, right now I've still yet to wear a Vision Pro. Apple straight up aren't handing out devkits, we only have two in the company still, so mostly I'm hearing about things second hand.
Shores of Loci
A few genres of VR game have emerged by now. Shooting and climbing are two pretty well-solved problems, so a lot of games involve that. But another one is 3D puzzles. This is something that would be incredibly difficult on a flat screen, where manipulating 3D objects is quite difficult, but becomes quite natural and straightforward in VR.
I've heard about one such game that uses 3D scans of real locations, but Shores of Loci is all about very environment artist authored levels, lots of grand sweeping vistas and planets hanging in the sky and so on. Basically you go through a series of locations and assemble teetering ramshackle buildings and chunks of landscape, which then grow really big and settle into the water. You can pull the pieces towards you with your hand, and then when you rotate them into roughly the right position and orientation relative to another piece, they snap together.
It's diverting, if kinda annoying when you just can't find the place the piece should go - especially if the answer turns out to be that there's an intermediate piece that floated off somewhere. The environments are well-designed and appealing, it's cool to see the little guys appearing to inhabit them. That said it does kinda just... repeat that concept a bunch. The narrative is... there's a big stone giant who appears and gives you pieces sometimes. That's it basically.
Still, it's interesting to see the different environment concepts. Transitions have this very cool distorted sky/black hole effect.
However, the real thing that got me with this game, the thing that I'm writing about now, was the water. They got planar reflections working. On the Quest! This is something of a white whale for me. Doing anything that involves reading from a render texture is so expensive that it's usually a no-go, and yet here it's working great - planar reflections complete with natural looking distortion from ripples. There's enough meshes that I assume there must be a reasonably high number of draw calls, and yet... it's definitely realtime planar reflections, reflections move with objects, it all seems to work.
There's a plugin called Mirrors and Reflections for VR that provides an implementation, but so far my experience has been that the effect is too expensive (in terms of rendertime) to keep 72fps in a more complex scene. I kind of suspect the devs are using this plugin, but I'm really curious how they optimised the draw calls down hard enough to work with it, since there tends to be quite a bit going on...
Moss
This game's just straight up incredibly cute.
Third person VR games, where you interact with a character moving across a diorama-like level, are a tiny minority of VR games at the moment. I think it's a shame because the concept is fantastic.
Moss is a puzzle-platformer with light combat in a Redwall/Mouse Guard-like setting. The best part of Moss is 1000% interacting with your tiny little mousegirl, who is really gorgeously animated - her ears twitch, her tail swings back and forth, she tumbles, clambers, and generally moves in a very convincing and lifelike way.
Arguably this is the kind of game that doesn't need to be made in VR - we already have strong implementations of 'platformer' for flatscreen. What I think the VR brings in this case is this wonderful sense of interacting with a tiny 3D world like a diorama. In some ways it's sorta purposefully awkward - if Quill walks behind something, you get a glowing outline, but you might need to crane your neck to see her - but having the level laid out in this way as a 3D structure you can play with is really endearing.
Mechanically, you move Quill around with the analogue stick, and make her jump with the buttons, standard stuff. Various level elements can be pushed or pulled by grabbing them with the controllers, and you can also drag enemies around to make them stand on buttons, so solving a level is a combination of moving pieces of the level and then making Quill jump as appropriate.
The fact that you're instantiated in the level, separate from Quill, also adds an interesting wrinkle in terms of 'identification with player character'. In most third person games, you tend to feel that the player character is you to some degree. In Moss, it feels much more like Quill is someone I've been made responsible for, and I feel guilty whenever I accidentally make her fall off a cliff or something.
A lot is clearly designed around fostering that protective vibe - to heal Quill, you have to reach out and hold her with your hand, causing her to glow briefly. When you complete some levels, she will stop to give you a high five or celebrate with you. Even though the player is really just here as 'puzzle solver' and 'powerful macguffin', it puts some work in to make you feel personally connected to Quill.
Since the camera is not locked to the character, the controls are instead relative to the stage, i.e. you point the stick in the direction on the 2D plane you want Quill to move. This can make certain bits of platforming, like moving along a narrow ledge or tightrope, kinda fiddly. In general it's pretty manageable though.
The combat system is straightforward but solid enough. Quill has a three button string, and it can be cancelled into a dash using the jump button, and directed with the analogue stick. Enemies telegraph their attacks pretty clearly, so it's rarely difficult, but there's enough there to be engaging.
The game is built in Unreal, unlike most Quest games (almost all are made in Unity). It actually doesn't feel so very different though - likely because the lighting calculations that are cheap enough to run in Unity are the same ones that are cheap enough to run in Unreal. It benefits a lot from baked lighting. Some things are obvious jank - anything behind where the player is assumed to be sitting tends not to be modelled or textured - but the environments are in general very lively and I really like some of the interactions: you can slash through the grass and floating platforms rock as you jump onto them.
The story is sadly pretty standard high fantasy royalist chosen one stuff, nothing exciting really going on there. Though there are some very cute elements - the elf queen has a large frog which gives you challenges to unlock certain powers, and you can pet the frog, and even give it a high five. Basically all the small scale stuff is done really well, I just wish they'd put some more thought into what it's about. The Redwall/Mouse Guard style has a ton of potential - what sort of society would these sapient forest animals have? They just wanted a fairytale vibe though evidently.
Cutscene delivery is a weak point. You pull back into a cathedral-like space where you're paging through a large book, which is kinda cool, and listening to narration while looking at illustrations. In general I think these cutscenes would have worked better if you just stayed in the diorama world and watched the characters have animated interactions. Maybe it's a cost-saving measure. I guess having you turn the pages of the book is also a way to give you something to do, since sitting around watching NPCs talk is notoriously not fun in VR.
There are some very nice touches in the environment design though! In one area you walk across a bunch of human sized suits of armour and swords that are now rusting - nobody comments, but it definitely suggests that humans did exist in this world at some point. The actual puzzle levels tend to make less sense, they're very clearly designed as puzzles first and 'spaces people would live in' not at all, but they do tend to look pretty, and there's a clear sense of progression through different architectural areas - so far fairly standard forest, swamp, stone ruins etc. but I'll be curious to see if it goes anywhere weird with it later.
Weak story aside, I'm really impressed with Moss. Glad to see someone else giving third person VR a real shot. I'm looking forward to playing the rest of it.
...that's kinda all I played in a while huh. For example, I still haven't given Asgard's Wrath II, the swordfighting game produced internally at Meta that you get free on the Quest 3, a shot. Or Boneworks. I still haven't finished Half Life Alyx, even! Partly that's because the Quest 3 did not get on well with my long USB A to C cable - for some reason it only seems to work properly on a high quality C to C cable - and that restricts me from playing PCVR games that require too much movement. Still though...
Anyway, the game I've been working on these past 10 months should be ready to announce pretty soon. So I'm very excited for that.
vanillaxoshi · 1 year ago
Note
So...... I got an idea.
Kaizo POV on Cahaya and Fang's developing friendship.
(before galaxy season)
.
.
.
.
.
.
.
(Before Fang goes to Earth.)
"You want me to send Fang to Earth by himself?"
Kaizo can't believe what Admiral Maskmana has just said.
"But all of the targets are just children. I can retrieve the power watches in no time!"
He points to the file photos on the hologram screen, trying to persuade his admiral that this is not a good idea.
"That's the point, Kaizo."
"Because they are kids. Kids who are all the same age as Fang, it would be easier for him to interact with them."
"But it still-"
Kaizo still wants him to reconsider, but Maskmana cuts him off.
"Kaizo."
"Are you underestimating Fang's ability ?"
"......"
"No sir , I believe his can accomplish the mission ."
"It settles then. Besides-"
"I have other mission for you .I think it's better if Fang doesn't get involve in this mission ."
".......Yes sir."
(Hours later in Fang's room)
"Fang ,take these files . You need it on the next mission and-"
Kaizo put a stack of books in front of Fang .
" I want you to master all these language and cultures before I send you for the next mission ."
"Wow, that's a lot."
Fang flip throughs the pages as he examine the books.
"So , when will our mission starts captain ?"
"...... Not ours , It's yours"
"Hmm?"
"This will be your first solo mission ."
"Do you think you can do it ?"
"wha- Yes ! Abang-. I mean- . Captain!"
"I'll send you off once you're ready. Remember the targets face first, that's many of them. "
"Ok?"
" Lets see -"
" Gopal , Yaya , Ying , Petir , Angin, Tanah, Aip?- wait why are the faces look the same ?, Air ?, Daun? And Cahaya???- there's seven of them ?!? -"
Kaizo leaves the room as Fang continue list out the targets info .
There's no point to stop Fang now, soon or later he will need to stand on his feet own .
....... Now he have to get prepare for the next mission.
.
.
.
.
.
.
.
It's been weeks since Fang went to stay on Earth, and Kaizo has been receiving messages from him that he hasn't had time to read.
At first, they are all about the mission's progress.
"I have successfully enrolled in the same school as the targets." "The power sphera that gave them their powers, Ocobot, faints for unknown reasons whenever it sees me, which makes me look suspicious to them." "Today the targets Petir, Angin and Tanah are still on alert around me, while the others who don't have powers remain unknown. The elder siblings seem to be telling them to avoid me on purpose."
But it starts to change a bit.
"Today I met the target Cahaya and talked to him. He's very smart compared to the others; he gets hyped up and becomes very talkative when he's sharing knowledge." "The elder siblings don't seem pleased to find out that the target Cahaya has become closer to me." "I have solved the problem with the power sphera; the targets have shown more trust in me." "I have started walking to school with Cahaya every morning."
He has changed the way he addresses them.
"Api has gained his power, but he's unable to sleep peacefully because of the stress. So I tried to help, but it didn't end......well." "It seems like Api had a fight with Tanah, which is the reason Api is stressed. But they are at ease now, as both of them apologized to each other." "Today Air activated his power, but......for some reason he needs to lose weight before he goes to fight Boboibot."
"Cahaya is very curious about Boboibot and tried to dismantle it after Boboibot was defeated, but Tanah and Petir stopped him because it was dangerous."
It seems like Fang has befriended all of them, but what makes Kaizo more curious is "Cahaya".
According to the files he read, he's the youngest among the siblings, and his power hasn't been discovered yet.
But for some reason Fang is attached to him. He's the one who always gets mentioned in Fang's messages, even when it doesn't relate to the mission.
Interesting......
.
.
.
.
.
.
.
Ejo Jo has escaped from the hospital, saying he will get his revenge on Earth against the targets.
Which means it gives him a chance to see how attached Fang is to the children.
Kaizo has informed Fang that he will capture him on Earth and retrieve the power watches from the targets, since he hasn't made any progress.
.
.
.
.
.
.
.
Interesting.
Panic crosses Fang's face as I order him to fight them.
Fang tries to look away as "Cahaya" wants to confront him.
Fang's moves falter when "Cahaya" yells at him through the force field barrier while he's fighting his siblings.
Fang disobeys my order and goes against me.
"Cahaya" looks horrified when he witnesses me strangling Fang, as he tries to beg me to stop, and......
Are Api and Air seriously just destroying each other's attacks and having a fight in the middle of the battle?
Are they for real?
Petir is surprisingly stronger than I thought, but not strong enough.
What surprises me the most is that Fang snatches everyone's watches and tells me to stop the fight, showing his loyalty to me.
Fang definitely looks regretful after saying that he is not their friend, especially when "Cahaya" starts to sob and asks him whether he was lying to him the whole time about being his friend.
Fang admits that he has lied to them, but that enjoying being friends with him was definitely not a lie, and he apologizes that Cahaya had to find out he's an alien this way, and that he's sorry he has to leave now.
......Kaizo definitely notices Fang trying to hold back his tears after they arrive back at the ship.
.
.
.
.
.
.
.
"This is me Adudu the captain of zero zero super evil ! "
An alien have the guts to attack his space ship but to surprise Kaizo more is -
"Release Fang now ! Give him back to me ! "
Cahaya appear on the screen .
"Haya ? "
Fang is definitely shock to see Cahaya too.
" Fang don't you dare leave after saying that you're sorry !" " I still mad at you for what you done but -" " I still want you to stay !"
Kaizo can see Fang's eyes light up as he heard what Cahaya just said.
"Lahap, Fang go to the control room ."
Kaizo cut off the connection .
" We have intruders to deal with."
Cahaya really have some guts don't he ?
Knowing he's powerless still have the guts to chase us to the space ?
That's impressive.
.
.
.
.
.
.
.
" I'll stay on earth Captain ."
" That's my choices ."
"......Very well Fang ."
Kaizo have approve them have the right to keep the watch . Also tried to recruit them but failed .
" Before I send you guys back to earth-"
"I will need a word with Fang first."
"Privately."
He can feel the sharp glance comes from Cahaya .
" Yes , Captain."
"Wait Fang-"
"Don't worry Haya . He will not harm me now ."
"......Fine. Please be safe ."
(Kaizo and Fang are now in the control room.)
"What do you need to talk about, Cap-"
"Pang, this is no longer a conversation between a captain and a soldier, but a conversation between a big brother and a little brother. So ease up a little."
"Ok? Abang, what do you want to talk about?"
"You're really attached to the youngest sibling, Cahaya. Aren't you?"
Fang's face flushes at Kaizo's words.
"Well..... Yeah, I admit that. So, is there a problem?"
"So, what's he like?"
"Pardon?"
"What's he like when he's with you?"
"Oh-, that?- um......"
"He's really talkative around me. He's always curious about everything. The stars, space, the planets, the- huh, I can't even count! He's someone who seeks knowledge whenever he gets the chance! And-"
Kaizo looks at Fang, who was so tense just now, now relaxed and happy as he keeps talking about Cahaya.
Kaizo comes to a conclusion. Fang may have befriended all of them, but Cahaya holds a special place in his heart, and that conclusion-
Also applies to Cahaya.
" Hey Fang ?"
" Yes ? "
"Cherish him, It's hard to found someone like him nowadays."
"Even without you saying , I will and aways will !"
"And one last thing......"
"Do you think I didn't notice you sneakily hiding behind the door Cahaya ?"
Kaizo opens the door as Cahaya fell on the ground .
"!?!?"
" I understand that you're worry of Fang but -"
"I believe it wasn't half bad to heard what Fang think about you ."
"I'll leave you guys here now."
Kaizo septs out the control room as he leave them some privacy.
They still have a long way to go .
Kaizo and his ability to leave his brother alone on Earth, explained
And Kaizo slowly seeing Fang get attached to them
Fang having his words change, his thoughts change... it's sweet
Kaizo sees Fang is enjoying himself with them (despite mentally and physically abusing him through the fight), I guess there are good intentions?? Yeah
bbsprint · 1 year ago
Text
PMT01: Scaffold nano & Trowel pico
Scaffold nano & Trowel pico v1.1.0 have been released tonight.
Updates add myStages to Scaffold, and QoL improvements to the powerup editing experience in Trowel pico.
Now that it's out, I'd like to also take this moment to talk about the development and release of both.
One year minus one day ago, I announced on Twitter that I would work on the level editor for the demo version of BRICKBREAKER SPRINT (nano), so fans could breathe more life into the game even in its limited state, and to showcase how much it can do with its limited toolset at the current time.
The reasoning for a completely separate technology version was simple: WebGL. WebGL in Unity is so limited for even the most basic things (I had to install a package to support cross-app copy/paste before nano+'s release!) that I said "hey, might as well give the multiplatform users something to be able to edit with; it doesn't need to be just Windows, which is what the currently unpolished-ish Trowel desktop is!"
But then it hit me that the same limitations would likely make it hell for me to support levels downloaded just as files (internally stored with the "bxtp" extension btw :) ), so then... I came to a crossroads:
1. just say FKIT and not do it, make ppl wait for BB Lite
2. still say FKIT and kickstart the foundation of the online service that I had planned since the first design iteration of the game
guess what I took?
I don't regret it.
Currently, Scaffold's login system is attached to itch.io. In the future, this will use my own account system (called Luna, still in development)
Now then... developing Scaffold's frontend was a challenge in itself, because I'm a masochist. I chose yet another technology to make the frontend in, this time Svelte, with SvelteKit as the backend, saying "okay, no more fear of server-side rendering now that I can afford a VPS for this"
but... Svelte has been an absolute joy to work with. Its learning curve is even smoother than React's (my first frontend framework, which powers cometSpectrum!), and I got everything rolling very quickly.
At first, I was going to use my regular website design language, but it's kinda jank, so I said NO and started from scratch with a simpler façade. For the style I wanted to pursue... it was perfect. Some people have complimented the graphic aspect of the site, and I'm happy I could deliver exactly what I wanted. (and yes the icons being in opposite directions compared to bbsprint's UI is entirely on purpose)
BTW, the site is made in such a way that a BB theme could be used as the site's theme and every color will change except for PNG icons, i love it (this functionality is used for people using their OS's Light theme)
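For what it's worth, here's one common way that kind of live re-theming gets wired up: push the theme's colours into CSS custom properties and let everything styled with var(--color-*) follow along. This is a generic TypeScript sketch, not necessarily how Scaffold actually does it (Svelte has its own idioms for this), and the property names are made up.

```typescript
// A theme is just a named set of colours, mirroring how a BB theme could
// double as the site's theme.
interface SiteTheme {
  background: string;
  surface: string;
  text: string;
  accent: string;
}

// Write each colour into a CSS custom property on <html>; every element
// styled with var(--color-*) updates instantly. PNG icons are baked
// images, so they're the one thing this can't recolour.
function applyTheme(theme: SiteTheme): void {
  const root = document.documentElement;
  for (const [name, value] of Object.entries(theme)) {
    root.style.setProperty(`--color-${name}`, value);
  }
}

// e.g. switch to a light palette when the OS asks for it
const prefersLight = window.matchMedia("(prefers-color-scheme: light)").matches;
if (prefersLight) {
  applyTheme({ background: "#f2f2f2", surface: "#ffffff", text: "#1a1a1a", accent: "#d4386c" });
}
```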
This is actually the first made-by-me website project that has "public" facing write actions to a database and stores actual files. It works... very well, and I am very happy about it. It's like a combination of everything I've learnt up to this point, down to API design and interop between programs.
I did say this was the foundation for Scaffold, and I plan for this game to keep Scaffold as its prime way of getting stages, even when getting to storefronts like Steam or GOG. Kind of like an osu! situation.
You're at this part of the message... wanna see how Scaffold's logo used to look?
[images: earlier versions of the Scaffold logo]
govindhtech · 6 months ago
Text
How To Use Llama 3.1 405B FP16 LLM On Google Kubernetes Engine
How to set up and use large open models for multi-host generative AI on GKE
Access to open models is more important than ever for developers as generative AI grows rapidly due to developments in LLMs (Large Language Models). Open models are pre-trained foundational LLMs that are accessible to the general population. Data scientists, machine learning engineers, and application developers already have easy access to open models through platforms like Hugging Face, Kaggle, and Google Cloud’s Vertex AI.
How to use Llama 3.1 405B
Google is announcing today the ability to deploy and run open models like the Llama 3.1 405B FP16 LLM on GKE (Google Kubernetes Engine), as some of these models demand robust infrastructure and deployment capabilities. With 405 billion parameters, Llama 3.1, released by Meta, shows notable gains in general knowledge, reasoning skills, and coding ability. To store and compute 405 billion parameters at FP16 (16-bit floating point) precision, the model needs more than 750 GB of GPU memory for inference. The GKE approach discussed in this article eases the difficulty of deploying and serving such large models.
Customer Experience
As a Google Cloud customer, you can find the Llama 3.1 LLM by selecting the Llama 3.1 model tile in Vertex AI Model Garden.
Once the deploy button has been clicked, you can choose the Llama 3.1 405B FP16 model and select GKE. (Image credit: Google Cloud)
The automatically generated Kubernetes yaml and comprehensive deployment and serving instructions for Llama 3.1 405B FP16 are available on this page.
Deployment and serving across multiple hosts
The Llama 3.1 405B FP16 LLM poses significant deployment and serving challenges and demands over 750 GB of GPU memory. The total memory needed is influenced by a number of factors, including the memory used by the model weights, support for longer sequence lengths, and KV (key-value) cache storage. A3 virtual machines, currently the most powerful GPU option on Google Cloud, are each made up of eight Nvidia H100 GPUs with 80 GB of HBM (high-bandwidth memory) apiece. The only practical way to serve LLMs such as the FP16 Llama 3.1 405B model is to deploy and serve them across several hosts. To deploy on GKE, Google employs LeaderWorkerSet (LWS) with Ray and vLLM.
LeaderWorkerSet
A deployment API called LeaderWorkerSet (LWS) was created especially to meet the workload demands of multi-host inference. It makes it easier to shard and run the model across numerous devices on numerous nodes. Built as a Kubernetes deployment API, LWS is compatible with both GPUs and TPUs and is independent of accelerators and the cloud. LWS uses the upstream StatefulSet API as its core building block.
A collection of pods is controlled as a single unit under the LWS architecture. Every pod in this group is given a distinct index between 0 and n-1, with the pod with number 0 being identified as the group leader. Every pod that is part of the group is created simultaneously and has the same lifecycle. At the group level, LWS makes rollout and rolling upgrades easier. For rolling updates, scaling, and mapping to a certain topology for placement, each group is treated as a single unit.
Each group’s upgrade procedure is carried out as a single, cohesive entity, guaranteeing that every pod in the group receives an update at the same time. While topology-aware placement is optional, it is acceptable for all pods in the same group to co-locate in the same topology. With optional all-or-nothing restart support, the group is also handled as a single entity when addressing failures. When enabled, if one pod in the group fails or if one container within any of the pods is restarted, all of the pods in the group will be recreated.
In the LWS framework, a group including a single leader and a group of workers is referred to as a replica. Two templates are supported by LWS: one for the workers and one for the leader. By offering a scale endpoint for HPA, LWS makes it possible to dynamically scale the number of replicas.
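To give a rough idea of the shape of such a resource, here is a sketch of an LWS object written as a plain TypeScript object (Kubernetes accepts JSON manifests, so this could be serialized and applied). The field names follow my reading of the LWS API and the container details are purely illustrative; treat this as a sketch to check against the LWS documentation, not a ready-to-use manifest.

```typescript
// Rough shape of an LWS manifest as a plain object; kubectl also accepts
// JSON, so the serialized form could be piped to `kubectl apply -f -`.
// Field names are from memory of the LWS API; verify against the LWS docs.
const leaderWorkerSet = {
  apiVersion: "leaderworkerset.x-k8s.io/v1",
  kind: "LeaderWorkerSet",
  metadata: { name: "vllm-llama3-405b" },
  spec: {
    replicas: 1, // one serving replica = one leader plus its workers
    leaderWorkerTemplate: {
      size: 2, // pods per group: leader node + 1 worker node (2 x A3, 8 GPUs each)
      leaderTemplate: {
        // Pod spec for the leader: starts the Ray head and the vLLM server.
        spec: {
          containers: [{
            name: "vllm-leader",
            image: "vllm/vllm-openai:latest", // illustrative image
            resources: { limits: { "nvidia.com/gpu": "8" } },
          }],
        },
      },
      workerTemplate: {
        // Pod spec for each worker: joins the Ray cluster started by the leader.
        spec: {
          containers: [{
            name: "vllm-worker",
            image: "vllm/vllm-openai:latest",
            resources: { limits: { "nvidia.com/gpu": "8" } },
          }],
        },
      },
    },
  },
};

console.log(JSON.stringify(leaderWorkerSet, null, 2));
```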
Deploying across multiple hosts with vLLM and LWS
vLLM is a well-known open source model server that uses pipeline and tensor parallelism to provide multi-node, multi-GPU inference. vLLM supports distributed tensor parallelism using Megatron-LM's tensor parallel algorithm, and it manages the distributed runtime for pipeline parallelism with Ray for multi-node inference.
Tensor parallelism splits the model horizontally across several GPUs; in this setup, the tensor parallel size equals the number of GPUs at each node. It is crucial to remember that this method requires fast network connectivity between the GPUs.
Pipeline parallelism, by contrast, splits the model vertically, layer by layer, and does not require constant high-bandwidth connections between GPUs. The pipeline parallel size usually corresponds to the number of nodes used for multi-host serving.
To support the complete Llama 3.1 405B FP16 model, these parallelism techniques must be combined. To meet the model's 750 GB memory requirement, two A3 nodes with eight H100 GPUs each provide a combined memory capacity of 1280 GB. Along with supporting long context lengths, this setup supplies the buffer memory required for the key-value (KV) cache. For this LWS deployment, the tensor parallel size is set to eight and the pipeline parallel size is set to two.
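As a back-of-the-envelope check of those numbers, here is the arithmetic written out in TypeScript, using only the figures quoted above (this is not vLLM's exact memory accounting):

```typescript
// Back-of-the-envelope check of the multi-host setup described above.
const nodes = 2;          // A3 VMs
const gpusPerNode = 8;    // H100 GPUs per A3 VM
const hbmPerGpuGb = 80;   // GB of HBM per H100

const tensorParallelSize = gpusPerNode;  // shard each layer across GPUs within a node
const pipelineParallelSize = nodes;      // split layers vertically across nodes

const totalGpus = tensorParallelSize * pipelineParallelSize; // 16
const totalHbmGb = totalGpus * hbmPerGpuGb;                  // 1280 GB

const weightsGb = 750;                      // FP16 weights for 405B parameters
const headroomGb = totalHbmGb - weightsGb;  // ~530 GB left for KV cache and buffers

console.log({ totalGpus, totalHbmGb, headroomGb });
```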
In brief
We discussed in this blog how LWS provides the features needed for multi-host serving. This method maximizes price-to-performance ratios and can also be used with smaller or lower-precision models, such as the FP8 version of Llama 3.1 405B, on more affordable accelerators. Check out its GitHub to learn more and contribute directly to LWS, which is open source and has a vibrant community.
You can visit Vertex AI Model Garden to deploy and serve open models via managed Vertex AI backends or DIY (do-it-yourself) GKE clusters, as Google Cloud helps customers adopt generative AI workloads. Multi-host deployment and serving is one example of how it aims to provide a seamless customer experience.
Read more on Govindhtech.com
digicode1 · 7 months ago
Text
Cloud Agnostic: Achieving Flexibility and Independence in Cloud Management
As businesses increasingly migrate to the cloud, they face a critical decision: which cloud provider to choose? While AWS, Microsoft Azure, and Google Cloud offer powerful platforms, the concept of "cloud agnostic" is gaining traction. Cloud agnosticism refers to a strategy where businesses avoid vendor lock-in by designing applications and infrastructure that work across multiple cloud providers. This approach provides flexibility, independence, and resilience, allowing organizations to adapt to changing needs and avoid reliance on a single provider.
What Does It Mean to Be Cloud Agnostic?
Being cloud agnostic means creating and managing systems, applications, and services that can run on any cloud platform. Instead of committing to a single cloud provider, businesses design their architecture to function seamlessly across multiple platforms. This flexibility is achieved by using open standards, containerization technologies like Docker, and orchestration tools such as Kubernetes.
Key features of a cloud agnostic approach include:
Interoperability: Applications must be able to operate across different cloud environments.
Portability: The ability to migrate workloads between different providers without significant reconfiguration.
Standardization: Using common frameworks, APIs, and languages that work universally across platforms.
Benefits of Cloud Agnostic Strategies
Avoiding Vendor Lock-In: The primary benefit of being cloud agnostic is avoiding vendor lock-in. Once a business builds its entire infrastructure around a single cloud provider, it can be challenging to switch or expand to other platforms. This could lead to increased costs and limited innovation. With a cloud agnostic strategy, businesses can choose the best services from multiple providers, optimizing both performance and costs.
Cost Optimization: Cloud agnosticism allows companies to choose the most cost-effective solutions across providers. As cloud pricing models are complex and vary by region and usage, a cloud agnostic system enables businesses to leverage competitive pricing and minimize expenses by shifting workloads to different providers when necessary.
Greater Resilience and Uptime: By operating across multiple cloud platforms, organizations reduce the risk of downtime. If one provider experiences an outage, the business can shift workloads to another platform, ensuring continuous service availability. This redundancy builds resilience, ensuring high availability in critical systems.
Flexibility and Scalability: A cloud agnostic approach gives companies the freedom to adjust resources based on current business needs. This means scaling applications horizontally or vertically across different providers without being restricted by the limits or offerings of a single cloud vendor.
Global Reach: Different cloud providers have varying levels of presence across geographic regions. With a cloud agnostic approach, businesses can leverage the strengths of various providers in different areas, ensuring better latency, performance, and compliance with local regulations.
Challenges of Cloud Agnosticism
Despite the advantages, adopting a cloud agnostic approach comes with its own set of challenges:
Increased Complexity: Managing and orchestrating services across multiple cloud providers is more complex than relying on a single vendor. Businesses need robust management tools, monitoring systems, and teams with expertise in multiple cloud environments to ensure smooth operations.
Higher Initial Costs: The upfront costs of designing a cloud agnostic architecture can be higher than those of a single-provider system. Developing portable applications and investing in technologies like Kubernetes or Terraform requires significant time and resources.
Limited Use of Provider-Specific Services: Cloud providers often offer unique, advanced services—such as machine learning tools, proprietary databases, and analytics platforms—that may not be easily portable to other clouds. Being cloud agnostic could mean missing out on some of these specialized services, which may limit innovation in certain areas.
Tools and Technologies for Cloud Agnostic Strategies
Several tools and technologies make cloud agnosticism more accessible for businesses:
Containerization: Docker and similar containerization tools allow businesses to encapsulate applications in lightweight, portable containers that run consistently across various environments.
Orchestration: Kubernetes is a leading tool for orchestrating containers across multiple cloud platforms. It ensures scalability, load balancing, and failover capabilities, regardless of the underlying cloud infrastructure.
Infrastructure as Code (IaC): Tools like Terraform and Ansible enable businesses to define cloud infrastructure using code. This makes it easier to manage, replicate, and migrate infrastructure across different providers.
APIs and Abstraction Layers: Using APIs and abstraction layers helps standardize interactions between applications and different cloud platforms, enabling smooth interoperability; a minimal sketch of this pattern follows below.
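As a minimal sketch of that abstraction-layer idea, the TypeScript below defines a provider-neutral storage interface with one illustrative adapter. The names are hypothetical; real deployments would add adapters wrapping each cloud provider's SDK behind the same methods.

```typescript
// A minimal storage abstraction: application code depends only on this
// interface, and each cloud gets its own adapter behind it.
interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | null>;
}

// In-memory adapter for tests and local development; adapters for AWS,
// Azure, or GCP would wrap their respective SDKs behind the same two methods.
class InMemoryStore implements ObjectStore {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }

  async get(key: string): Promise<Uint8Array | null> {
    return this.objects.get(key) ?? null;
  }
}

// Application code stays provider-neutral: swapping clouds means swapping
// the adapter passed in, not rewriting business logic.
async function saveReport(store: ObjectStore, reportId: string, body: string): Promise<void> {
  await store.put(`reports/${reportId}.json`, new TextEncoder().encode(body));
}
```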
When Should You Consider a Cloud Agnostic Approach?
A cloud agnostic approach is not always necessary for every business. Here are a few scenarios where adopting cloud agnosticism makes sense:
Businesses operating in regulated industries that need to maintain compliance across multiple regions.
Companies require high availability and fault tolerance across different cloud platforms for mission-critical applications.
Organizations with global operations that need to optimize performance and cost across multiple cloud regions.
Businesses aim to avoid long-term vendor lock-in and maintain flexibility for future growth and scaling needs.
Conclusion
Adopting a cloud agnostic strategy offers businesses unparalleled flexibility, independence, and resilience in cloud management. While the approach comes with challenges such as increased complexity and higher upfront costs, the long-term benefits of avoiding vendor lock-in, optimizing costs, and enhancing scalability are significant. By leveraging the right tools and technologies, businesses can achieve a truly cloud-agnostic architecture that supports innovation and growth in a competitive landscape.
Embrace the cloud agnostic approach to future-proof your business operations and stay ahead in the ever-evolving digital world.
javafullstackdev · 10 months ago
Text
FullStackJava: Mastering Both Ends of the Stack
Java isn't just for backend anymore! As a full stack Java developer, you'll wield powerful tools on both sides:
Frontend:
JavaServer Faces (JSF)
Thymeleaf
Vaadin
Backend:
Spring Boot
Hibernate ORM
RESTful APIs
Database:
JDBC
JPA
Build & Deploy:
Maven/Gradle
Docker
Jenkins
Embrace the versatility. Java full stack = limitless possibilities.