#face api development
Link
Learn how to build a face recognition app using artificial intelligence and computer vision technologies. Discover the key steps involved in app development and explore the latest tools and techniques for creating a facial recognition app.
#face recognition app#app development#artificial intelligence#facial recognition technology#Facial Recognitions APIS#facial recoginition apps
0 notes
Text
The monetization creep has been evident for a while. Reddit has added a subscription “Reddit Premium”; offered “community rewards” as a paid super-vote; embraced an NFT marketplace; changed the site's design for one with more recommended content; and started nudging users toward the official mobile app. The site has also been adding more restrictions to uploading and viewing “not safe for work” (NSFW) content. All this while community requests for improvements to moderation tools and accessibility features have gone unaddressed on mobile, driving many users to third-party applications. Perhaps the worst development came on April 18th, when Reddit announced that changes to its Data API would take effect on July 1st, including new “premium access” pricing for users of the API. While this wouldn't affect projects on the free tier, such as moderator bots or tools used by researchers, the new pricing seems to be an existential threat to third-party applications for the site. It also bears a striking resemblance to a similar bad decision Twitter made this year under Elon Musk.
[...]
Details about Reddit's API-specific costs were not shared, but it is worth noting that an API request is commonly no more burdensome to a server than an HTML request, i.e. visiting or scraping a web page. Having an API just makes it easier for developers to maintain their automated requests. It is true that most third-party apps tend not to show Reddit's advertisements, and AI developers may make heavy use of the API for training data, but these applications could still (with more effort) access the same information over HTML. The heart of this fight is over what Reddit's CEO calls their “valuable corpus of data,” i.e. the user-made content on the company's servers, and over who gets to live off this digital commons. While Reddit provides essential infrastructural support, these community developers and moderators make the site worth visiting, and any worthwhile content is the fruit of their volunteer labor. It's this labor and worker solidarity which gives users unique leverage over the platform, in contrast to past backlash against other platforms.
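To make the API-versus-HTML point concrete, here is a minimal sketch (assuming Python with the `requests` package and Reddit's public listing endpoints, neither of which is named in the excerpt above): the same subreddit listing can be fetched as an ordinary HTML page or as its JSON rendering, and both hit Reddit's servers in roughly the same way - the structured form is just easier for a program to parse.

```python
# Minimal illustration: fetch the same listing as HTML and as JSON.
# Assumes the third-party `requests` package and Reddit's public endpoints;
# note this is the public JSON view, not the authenticated Data API whose
# pricing is changing.
import requests

headers = {"User-Agent": "api-vs-html-demo/0.1"}  # Reddit rejects blank user agents

# The ordinary HTML page, as a browser (or an HTML scraper) would request it.
html = requests.get("https://www.reddit.com/r/programming/",
                    headers=headers, timeout=10)

# The JSON rendering of the same listing - structured, easy to parse,
# but no heavier for the server than the HTML version.
data = requests.get("https://www.reddit.com/r/programming.json",
                    headers=headers, timeout=10)

print(html.status_code, len(html.text), "bytes of HTML")
print(data.status_code, len(data.json()["data"]["children"]), "posts from JSON")
```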
179 notes
·
View notes
Text
The FreeCodeCamp Study Challenge!
I literally just completed this challenge and I thought why not share the challenge on here for other people to take part in if they wanted to!
FreeCodeCamp is an open-source platform that offers various coding courses and certifications for web developers. The goal of this challenge is to choose one of the available courses on the FreeCodeCamp platform, complete the course, and earn the certificate at the end.
The challenge is self-paced, so the duration is entirely up to you. The challenge is there to motivate people to get into coding and/or continue their coding studies! Especially people in the Codeblr community!
FreeCodeCamp [LINK] offers the following courses:
(NEW) Responsive Web Design Certification (I've done this one)
JavaScript Algorithms and Data Structures Certification (I am going to do this one next)
Front End Libraries Certification
Data Visualization Certification
APIs and Microservices Certification
Quality Assurance Certification
Scientific Computing with Python Certification
Data Analysis with Python Certification
Information Security Certification
Machine Learning with Python Certification
Each course is broken down into multiple sections, and completing all the sections in a course will earn you a certification for that course.
To start the FreeCodeCamp Challenge, follow the steps below:
Choose a course on the FreeCodeCamp platform that you would like to complete.
Complete the course and earn the certificate.
Post about your progress every day that you study using the #freecodecampchallenge hashtag. You can post about what you have done towards the challenge, what you have learned, and any challenges you faced and how you overcame them.
The FreeCodeCamp Challenge is an excellent opportunity to improve your coding skills and earn a valuable certification!!!! Even add that to your resume/CV! I completed this challenge and you can see me posting about it - LINK.
Remember to post about your progress using the #freecodecampchallenge hashtag to track your progress and connect with other participants AND you don't have to study straight days, meaning you can take days off whenever you feel like it!
Good luck!
#freecodecampchallenge#freecodecamp#studyblr challenge#study challenge#codeblr#learn to code#progblr#studyblr#cs studyblr#cs academia#computer science#online learning#coding#programming#compsci#studying#webdev#frontend development#html css#html#css#comp sci#100 days of code#coding study#coding bootcamp
255 notes
·
View notes
Text
This Week in Rust 572
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
October project goals update
Next Steps on the Rust Trademark Policy
This Development-cycle in Cargo: 1.83
Re-organising the compiler team and recognising our team members
This Month in Our Test Infra: October 2024
Call for proposals: Rust 2025h1 project goals
Foundation
Q3 2024 Recap from Rebecca Rumbul
Rust Foundation Member Announcement: CodeDay, OpenSource Science(OS-Sci), & PROMOTIC
Newsletters
The Embedded Rustacean Issue #31
Project/Tooling Updates
Announcing Intentrace, an alternative strace for everyone
Ractor Quickstart
Announcing Sycamore v0.9.0
CXX-Qt 0.7 Release
An 'Educational' Platformer for Kids to Learn Math and Reading—and Bevy for the Devs
[ZH][EN] Select HTML Components in Declarative Rust
Observations/Thoughts
Safety in an unsafe world
MinPin: yet another pin proposal
Reached the recursion limit... at build time?
Building Trustworthy Software: The Power of Testing in Rust
Async Rust is not safe with io_uring
Macros, Safety, and SOA
how big is your future?
A comparison of Rust’s borrow checker to the one in C#
Streaming Audio APIs in Rust pt. 3: Audio Decoding
[audio] InfinyOn with Deb Roy Chowdhury
Rust Walkthroughs
Difference Between iter() and into_iter() in Rust
Rust's Sneaky Deadlock With if let Blocks
Why I love Rust for tokenising and parsing
"German string" optimizations in Spellbook
Rust's Most Subtle Syntax
Parsing arguments in Rust with no dependencies
Simple way to make i18n support in Rust with examples and tests
How to shallow clone a Cow
Beginner Rust ESP32 development - Snake
[video] Rust Collections & Iterators Demystified 🪄
Research
Charon: An Analysis Framework for Rust
Crux, a Precise Verifier for Rust and Other Languages
Miscellaneous
Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk
[audio] Let's talk about Rust with John Arundel
[audio] Exploring Rust for Embedded Systems with Philip Markgraf
Crate of the Week
This week's crate is wtransport, an implementation of the WebTransport specification, a successor to WebSockets with many additional features.
Thanks to Josh Triplett for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:
RFCs
No calls for testing were issued this week.
Rust
No calls for testing were issued this week.
Rustup
No calls for testing were issued this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
Updates from the Rust Project
473 pull requests were merged in the last week
account for late-bound depth when capturing all opaque lifetimes
add --print host-tuple to print host target tuple
add f16 and f128 to invalid_nan_comparison
add lp64e RISC-V ABI
also treat impl definition parent as transparent regarding modules
cleanup attributes around unchecked shifts and unchecked negation in const
cleanup op lookup in HIR typeck
collect item bounds for RPITITs from trait where clauses just like associated types
do not enforce ~const constness effects in typeck if rustc_do_not_const_check
don't lint irrefutable_let_patterns on leading patterns if else if let-chains
double-check conditional constness in MIR
ensure that resume arg outlives region bound for coroutines
find the generic container rather than simply looking up for the assoc with const arg
fix compiler panic with a large number of threads
fix suggestion for diagnostic error E0027
fix validation when lowering ? trait bounds
implement suggestion for never type fallback lints
improve missing_abi lint
improve duplicate derive Copy/Clone diagnostics
llvm: match new LLVM 128-bit integer alignment on sparc
make codegen help output more consistent
make sure type_param_predicates resolves correctly for RPITIT
pass RUSTC_HOST_FLAGS at once without the for loop
port most of --print=target-cpus to Rust
register ~const preds for Deref adjustments in HIR typeck
reject generic self types
remap impl-trait lifetimes on HIR instead of AST lowering
remove "" case from RISC-V llvm_abiname match statement
remove do_not_const_check from Iterator methods
remove region from adjustments
remove support for -Zprofile (gcov-style coverage instrumentation)
replace manual time convertions with std ones, comptime time format parsing
suggest creating unary tuples when types don't match a trait
support clobber_abi and vector registers (clobber-only) in PowerPC inline assembly
try to point out when edition 2024 lifetime capture rules cause borrowck issues
typingMode: merge intercrate, reveal, and defining_opaque_types
miri: change futex_wait errno from Scalar to IoError
stabilize const_arguments_as_str
stabilize if_let_rescope
mark str::is_char_boundary and str::split_at* unstably const
remove const-support for align_offset and is_aligned
unstably add ptr::byte_sub_ptr
implement From<&mut {slice}> for Box/Rc/Arc<{slice}>
rc/Arc: don't leak the allocation if drop panics
add LowerExp and UpperExp implementations to NonZero
use Hacker's Delight impl in i64::midpoint instead of wide i128 impl
xous: sync: remove rustc_const_stable attribute on Condvar and Mutex new()
add const_panic macro to make it easier to fall back to non-formatting panic in const
cargo: downgrade version-exists error to warning on dry-run
cargo: add more metadata to rustc_fingerprint
cargo: add transactional semantics to rustfix
cargo: add unstable -Zroot-dir flag to configure the path from which rustc should be invoked
cargo: allow build scripts to report error messages through cargo::error
cargo: change config paths to only check CARGO_HOME for cargo-script
cargo: download targeted transitive deps of with artifact deps' target platform
cargo fix: track version in fingerprint dep-info files
cargo: remove requirement for --target when invoking Cargo with -Zbuild-std
rustdoc: Fix --show-coverage when JSON output format is used
rustdoc: Unify variant struct fields margins with struct fields
rustdoc: make doctest span tweak a 2024 edition change
rustdoc: skip stability inheritance for some item kinds
mdbook: improve theme support when JS is disabled
mdbook: load the sidebar toc from a shared JS file or iframe
clippy: infinite_loops: fix incorrect suggestions on async functions/closures
clippy: needless_continue: check labels consistency before warning
clippy: no_mangle attribute requires unsafe in Rust 2024
clippy: add new trivial_map_over_range lint
clippy: cleanup code suggestion for into_iter_without_iter
clippy: do not use gen as a variable name
clippy: don't lint unnamed consts and nested items within functions in missing_docs_in_private_items
clippy: extend large_include_file lint to also work on attributes
clippy: fix allow_attributes when expanded from some macros
clippy: improve display of clippy lints page when JS is disabled
clippy: new lint map_all_any_identity
clippy: new lint needless_as_bytes
clippy: new lint source_item_ordering
clippy: return iterator must not capture lifetimes in Rust 2024
clippy: use match ergonomics compatible with editions 2021 and 2024
rust-analyzer: allow interpreting consts and statics with interpret function command
rust-analyzer: avoid interior mutability in TyLoweringContext
rust-analyzer: do not render meta info when hovering usages
rust-analyzer: add assist to generate a type alias for a function
rust-analyzer: render extern blocks in file_structure
rust-analyzer: show static values on hover
rust-analyzer: auto-complete import for aliased function and module
rust-analyzer: fix the server not honoring diagnostic refresh support
rust-analyzer: only parse safe as contextual kw in extern blocks
rust-analyzer: parse patterns with leading pipe properly in all places
rust-analyzer: support new #[rustc_intrinsic] attribute and fallback bodies
Rust Compiler Performance Triage
A week dominated by one large improvement and one large regression, where luckily the improvement had a larger impact. The regression seems to have been caused by a newly introduced lint that might have performance issues. The improvement was in building rustc with protected visibility, which reduces the number of dynamic relocations needed, leading to some nice performance gains. Across a large swath of the perf suite, the compiler is on average 1% faster this week compared to last week.
Triage done by @rylev. Revision range: c8a8c820..27e38f8f
Summary:
(instructions:u)              mean     range              count
Regressions ❌ (primary)       0.8%     [0.1%, 2.0%]       80
Regressions ❌ (secondary)     1.9%     [0.2%, 3.4%]       45
Improvements ✅ (primary)     -1.9%     [-31.6%, -0.1%]    148
Improvements ✅ (secondary)   -5.1%     [-27.8%, -0.1%]    180
All ❌✅ (primary)             -1.0%     [-31.6%, 2.0%]     228
1 Regression, 1 Improvement, 5 Mixed; 3 of them in rollups. 46 artifact comparisons made in total.
Full report here
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
[RFC] Default field values
RFC: Give users control over feature unification
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
RFCs
[disposition: merge] Add support for use Trait::func
Tracking Issues & PRs
Rust
[disposition: merge] Stabilize Arm64EC inline assembly
[disposition: merge] Stabilize s390x inline assembly
[disposition: merge] rustdoc-search: simplify rules for generics and type params
[disposition: merge] Fix ICE when passing DefId-creating args to legacy_const_generics.
[disposition: merge] Tracking Issue for const_option_ext
[disposition: merge] Tracking Issue for const_unicode_case_lookup
[disposition: merge] Reject raw lifetime followed by ', like regular lifetimes do
[disposition: merge] Enforce that raw lifetimes must be valid raw identifiers
[disposition: merge] Stabilize WebAssembly multivalue, reference-types, and tail-call target features
Cargo
No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
No Language Team Proposals entered Final Comment Period this week.
Language Reference
No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs
[new] Implement The Update Framework for Project Signing
[new] [RFC] Static Function Argument Unpacking
[new] [RFC] Explicit ABI in extern
[new] Add homogeneous_try_blocks RFC
Upcoming Events
Rusty Events between 2024-11-06 - 2024-12-04 🦀
Virtual
2024-11-06 | Virtual (Indianapolis, IN, US) | Indy Rust
Indy.rs - with Social Distancing
2024-11-07 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin
Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup
2024-11-08 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative
Rust Coding / Game Dev Fridays Open Mob Session!
2024-11-12 | Virtual (Dallas, TX, US) | Dallas Rust
Second Tuesday
2024-11-14 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2024-11-14 | Virtual and In-Person (Lehi, UT, US) | Utah Rust
Green Thumb: Building a Bluetooth-Enabled Plant Waterer with Rust and Microbit
2024-11-14 | Virtual and In-Person (Seattle, WA, US) | Seattle Rust User Group
November Meetup
2024-11-15 | Virtual (Jersey City, NJ, US) | Jersey City Classy and Curious Coders Club Cooperative
Rust Coding / Game Dev Fridays Open Mob Session!
2024-11-19 | Virtual (Los Angeles, CA, US) | DevTalk LA
Discussion - Topic: Rust for UI
2024-11-19 | Virtual (Washington, DC, US) | Rust DC
Mid-month Rustful
2024-11-20 | Virtual and In-Person (Vancouver, BC, CA) | Vancouver Rust
Embedded Rust Workshop
2024-11-21 | Virtual (Berlin, DE) | OpenTechSchool Berlin + Rust Berlin
Rust Hack and Learn | Mirror: Rust Hack n Learn Meetup
2024-11-21 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Trustworthy IoT with Rust--and passwords!
2024-11-21 | Virtual (Rotterdam, NL) | Bevy Game Development
Bevy Meetup #7
2024-11-25 | Bratislava, SK | Bratislava Rust Meetup Group
ONLINE Talk, sponsored by Sonalake - Bratislava Rust Meetup
2024-11-26 | Virtual (Dallas, TX, US) | Dallas Rust
Last Tuesday
2024-11-28 | Virtual (Charlottesville, NC, US) | Charlottesville Rust Meetup
Crafting Interpreters in Rust Collaboratively
2024-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
Buffalo Rust User Group
Asia
2024-11-28 | Bangalore/Bengaluru, IN | Rust Bangalore
RustTechX Summit 2024 BOSCH
2024-11-30 | Tokyo, JP | Rust Tokyo
Rust.Tokyo 2024
Europe
2024-11-06 | Oxford, UK | Oxford Rust Meetup Group
Oxford Rust and C++ social
2024-11-06 | Paris, FR | Paris Rustaceans
Rust Meetup in Paris
2024-11-09 - 2024-11-11 | Florence, IT | Rust Lab
Rust Lab 2024: The International Conference on Rust in Florence
2024-11-12 | Zurich, CH | Rust Zurich
Encrypted/distributed filesystems, wasm-bindgen
2024-11-13 | Reading, UK | Reading Rust Workshop
Reading Rust Meetup
2024-11-14 | Stockholm, SE | Stockholm Rust
Rust Meetup @UXStream
2024-11-19 | Leipzig, DE | Rust - Modern Systems Programming in Leipzig
Daten sichern mit ZFS (und Rust)
2024-11-21 | Edinburgh, UK | Rust and Friends
Rust and Friends (pub)
2024-11-21 | Oslo, NO | Rust Oslo
Rust Hack'n'Learn at Kampen Bistro
2024-11-23 | Basel, CH | Rust Basel
Rust + HTMX - Workshop #3
2024-11-27 | Dortmund, DE | Rust Dortmund
Rust Dortmund
2024-11-28 | Aarhus, DK | Rust Aarhus
Talk Night at Lind Capital
2024-11-28 | Augsburg, DE | Rust Meetup Augsburg
Augsburg Rust Meetup #10
2024-11-28 | Berlin, DE | OpenTechSchool Berlin + Rust Berlin
Rust and Tell - Title
North America
2024-11-07 | Chicago, IL, US | Chicago Rust Meetup
Chicago Rust Meetup
2024-11-07 | Montréal, QC, CA | Rust Montréal
November Monthly Social
2024-11-07 | St. Louis, MO, US | STL Rust
Game development with Rust and the Bevy engine
2024-11-12 | Ann Arbor, MI, US | Detroit Rust
Rust Community Meetup - Ann Arbor
2024-11-14 | Mountain View, CA, US | Hacker Dojo
Rust Meetup at Hacker Dojo
2024-11-15 | Mexico City, DF, MX | Rust MX
Multi threading y Async en Rust parte 2 - Smart Pointes y Closures
2024-11-15 | Somerville, MA, US | Boston Rust Meetup
Ball Square Rust Lunch, Nov 15
2024-11-19 | San Francisco, CA, US | San Francisco Rust Study Group
Rust Hacking in Person
2024-11-23 | Boston, MA, US | Boston Rust Meetup
Boston Common Rust Lunch, Nov 23
2024-11-25 | Ferndale, MI, US | Detroit Rust
Rust Community Meetup - Ferndale
2024-11-27 | Austin, TX, US | Rust ATX
Rust Lunch - Fareground
Oceania
2024-11-12 | Christchurch, NZ | Christchurch Rust Meetup Group
Christchurch Rust Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
Any sufficiently complicated C project contains an adhoc, informally specified, bug ridden, slow implementation of half of cargo.
– Folkert de Vries at RustNL 2024 (youtube recording)
Thanks to Collin Richards for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.
Email list hosting is sponsored by The Rust Foundation
Discuss on r/rust
3 notes
·
View notes
Text
How To Use Llama 3.1 405B FP16 LLM On Google Kubernetes
How to set up and use large open models for multi-host generative AI on GKE
Access to open models is more important than ever for developers as generative AI grows rapidly due to developments in LLMs (Large Language Models). Open models are pre-trained foundational LLMs that are accessible to the general population. Data scientists, machine learning engineers, and application developers already have easy access to open models through platforms like Hugging Face, Kaggle, and Google Cloud’s Vertex AI.
How to use Llama 3.1 405B
Google is announcing today the ability to install and run open models like the Llama 3.1 405B FP16 LLM on GKE (Google Kubernetes Engine), as some of these models demand robust infrastructure and deployment capabilities. With 405 billion parameters, Llama 3.1, published by Meta, shows notable gains in general knowledge, reasoning skills, and coding ability. To store and compute 405 billion parameters at FP (floating point) 16 precision, the model needs more than 750GB of GPU RAM for inference. The difficulty of deploying and serving such big models is lessened by the GKE method discussed in this article.
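As a rough back-of-the-envelope check of that figure (a sketch only; the exact total also depends on the KV cache and runtime buffers, which the article discusses below):

```python
# Approximate GPU memory needed just for the Llama 3.1 405B weights at FP16.
params = 405e9          # 405 billion parameters
bytes_per_param = 2     # FP16 = 16 bits = 2 bytes per parameter

weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e9:.0f} GB (decimal)")     # ~810 GB
print(f"{weight_bytes / 2**30:.0f} GiB (binary)")   # ~754 GiB, i.e. "more than 750GB"

# Serving needs additional memory on top of the weights for the KV (Key-Value)
# cache, which grows with sequence length and batch size, plus runtime buffers.
```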
Customer Experience
As a Google Cloud customer, you can find the Llama 3.1 LLM by selecting the Llama 3.1 model tile in Vertex AI Model Garden.
Once the deploy button has been clicked, you can choose the Llama 3.1 405B FP16 model and select GKE. (Image credit: Google Cloud)
The automatically generated Kubernetes yaml and comprehensive deployment and serving instructions for Llama 3.1 405B FP16 are available on this page.
Multi-host deployment and serving
The Llama 3.1 405B FP16 LLM poses significant deployment and serving challenges and demands over 750 GB of GPU memory. Total memory needs are influenced by several factors, including the memory used by the model weights, support for longer sequence lengths, and KV (Key-Value) cache storage. The A3 virtual machines, currently the most powerful GPU option available on the Google Cloud platform, each provide eight Nvidia H100 GPUs with 80 GB of HBM (High-Bandwidth Memory) apiece. The only practical way to serve LLMs such as the FP16 Llama 3.1 405B model is to deploy and serve them across several hosts. To deploy on GKE, Google employs LeaderWorkerSet with Ray and vLLM.
LeaderWorkerSet
LeaderWorkerSet (LWS) is a deployment API created specifically to meet the workload demands of multi-host inference. It makes it easier to shard and run a model across numerous devices on numerous nodes. Built as a Kubernetes deployment API, LWS is compatible with both GPUs and TPUs and is accelerator- and cloud-agnostic. As shown here, LWS uses the upstream StatefulSet API as its core building block.
Under the LWS architecture, a collection of pods is controlled as a single unit. Every pod in the group is given a distinct index between 0 and n-1, with the pod at index 0 identified as the group leader. Every pod in the group is created simultaneously and shares the same lifecycle. At the group level, LWS makes rollouts and rolling upgrades easier. For rolling updates, scaling, and mapping to a certain topology for placement, each group is treated as a single unit.
Each group's upgrade procedure is carried out as a single, cohesive operation, guaranteeing that every pod in the group receives the update at the same time. Topology-aware placement is optional; when used, all pods in the same group are co-located in the same topology. The group is also handled as a single entity when addressing failures, with optional all-or-nothing restart support. When enabled, if one pod in the group fails or if one container within any of the pods is restarted, all of the pods in the group will be recreated.
In the LWS framework, a single leader together with its group of workers is referred to as a replica. LWS supports two templates: one for the leader and one for the workers. By offering a scale endpoint for HPA, LWS makes it possible to dynamically scale the number of replicas.
Deploying across multiple hosts with vLLM and LWS
vLLM is a well-known open-source model server that uses pipeline and tensor parallelism to provide multi-node, multi-GPU inference. It implements distributed tensor parallelism using Megatron-LM's tensor-parallel technique, and uses Ray to manage the distributed runtime for pipeline parallelism during multi-node inferencing.
Tensor parallelism divides the model horizontally across several GPUs, so the tensor parallel size equals the number of GPUs on each node. It is crucial to remember that this method requires fast network connectivity between the GPUs.
Pipeline parallelism, by contrast, divides the model vertically, layer by layer, and does not require constant connectivity between GPUs. The pipeline parallel size usually equals the number of nodes used for multi-host serving.
To serve the full Llama 3.1 405B FP16 model, several parallelism techniques must be combined. To meet the model's 750 GB memory requirement, two A3 nodes with eight H100 GPUs each provide a combined memory capacity of 1280 GB. This setup supplies the buffer memory required for the key-value (KV) cache and supports long context lengths. For this LWS deployment, the pipeline parallel size is set to two and the tensor parallel size to eight.
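A small sketch of the capacity math and the parallelism split described above; the vLLM parameter names mentioned in the comments are assumptions based on vLLM's commonly documented options, not taken from this article:

```python
# Capacity check for serving Llama 3.1 405B FP16 on two GKE A3 nodes.
nodes = 2
gpus_per_node = 8        # H100 GPUs per A3 VM
hbm_per_gpu_gb = 80      # GB of HBM per H100

total_gpu_memory_gb = nodes * gpus_per_node * hbm_per_gpu_gb   # 1280 GB
weights_gb = 405e9 * 2 / 1e9                                   # ~810 GB of FP16 weights

print(f"total GPU memory: {total_gpu_memory_gb} GB")
print(f"left over for KV cache / long contexts: {total_gpu_memory_gb - weights_gb:.0f} GB")

# Parallelism split from the article:
#   tensor parallel size   = 8  -> one shard per GPU within a node
#   pipeline parallel size = 2  -> one pipeline stage per node
# In vLLM these would typically be passed as tensor_parallel_size=8 and
# pipeline_parallel_size=2 (parameter names assumed, not stated in the article).
```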
In brief
In this blog we discussed how LWS provides the features needed for multi-host serving. This method maximizes price-to-performance ratios and can also be used with models that have smaller memory footprints, such as Llama 3.1 405B FP8, on more affordable devices. Check out its GitHub to learn more and contribute directly to LWS, which is open source and has a vibrant community.
You can visit Vertex AI Model Garden to deploy and serve open models via managed Vertex AI backends or GKE DIY (Do It Yourself) clusters, as Google Cloud Platform helps clients adopt generative AI workloads. Multi-host deployment and serving is one example of how it aims to provide a seamless customer experience.
Read more on Govindhtech.com
#Llama3.1#Llama#LLM#GoogleKubernetes#GKE#405BFP16LLM#AI#GPU#vLLM#LWS#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
2 notes
·
View notes
Text
VR observations, 10 months in
I've been a game dev for 10 months now. It's pretty great, I'm enjoying it a lot, I get to spend my days doing crazy shader shit and animations and voxels and visual effects. Hopefully the game that will come out of all this will be one people enjoy, and in any case I'm learning so much that will eventually come back to the personal ~artistic~ side of things. I can't talk about that game just yet though (but soon it will be announced, I'm pretty sure). So this is a post about other games.
Mind you, I don't actually play very many VR games, or games in general these days, because I'm too busy developing the dang things. But sometimes I do! And I think it's interesting to talk about them.
These aren't really reviews as such. You could project all sorts of ulterior motives onto them if they were. Like, my livelihood does sorta depend on people buying VR headsets and then games on them. This is more just like things I observe.
Headsets
The biggest problem with VR at the moment is wearing a headset for too long kinda sucks. The weight of the headset is all effectively held on a lever arm and it presses on your face. However, this is heavily dependent on the strap you use to hold it to your head. A better balanced and cushioned strap can hold the headset still with less pressure and better balance the forces.
The strap that comes with the Quest 3 is absolute dogshit. So a big part of the reason I wouldn't play VR games for fun is because after wearing the headset for 30-60 minutes in the daily meeting, the absolute last thing I'd want to do is wear it any longer. Recently I got a new strap (a ~£25 Devaso one, the low end of straps), and it's markedly improved. It would probably be even better if I got one of the high end Bobo straps. So please take it from me: if you wanna get into VR, get a decent strap.
I hear the Apple Vision Pro is a lot more comfortable to wear for long periods, though I won't have a chance to try it until later this month.
During the time I've been working at Holonautic, Meta released their Quest 3, and more recently Apple released their hyper expensive Vision Pro for much fanfare.
The Quest 3 is a decent headset and probably the one I'd recommend if you're getting into VR and can afford a new console. It's not a massive improvement over the Quest 2 - the main thing that's better is the 'passthrough' (aka 'augmented reality', the mode where the 3D objects are composited into video of what's in front of you), which is now in full colour, and feels a lot less intrusive than the blown out greyscale that the Quest 2 did. But it still has some trouble with properly taking into account depth when combining the feeds from multiple cameras, so you get weird space warping effects when something in the foreground moves over something in the background.
The Vision Pro is by all accounts the bees knees, though it costs $3500 and already sold out, so good luck getting one. It brings a new interaction mode based on eye tracking, where you look at a thing with your eyes to select it like with a mouse pointer, and hold your hands in your lap and pinch to interact. Its passthrough is apparently miles ahead, it's got a laptop tier chip, etc etc. I'm not gonna talk about that though, if you want to read product reviews there are a million places you can do it.
Instead I wanna talk about rendering, since I think this is something that only gets discussed among devs, and maybe people outside might be interested.
Right now there is only one game engine that builds to the Vision Pro, which is Unity. However, Apple have their own graphics API, and the PolySpatial API used for the mixed reality mode is pretty heavily locked down in terms of what you can do.
So what Unity does is essentially run a transpilation step to map its own constructs into PolySpatial ones. For example, say you make a shader in Shader Graph (you have to use shader graph, it won't take HLSL shaders in general) - Unity will generate a vision pro compatible shader (in MaterialX format) from that. Vertex and fragment shaders mostly work, particle systems mostly don't, you don't get any postprocessing shaders, anything that involves a compute shader is right out (which means no VFX graph), Entities Graphics doesn't work. I don't think you get much control over stuff like batching. It's pretty limited compared to what we're used to on other platforms.
I said fragment shaders mostly work. It's true that most Shader Graph nodes work the same. However, if you're doing custom lighting calculations in a Unity shader, a standard way to do things is to use the 'main light' property provided by Unity. On the Vision Pro, you don't get a main light.
The Vision Pro actually uses an image-based lighting model, which uses the actual room around you to provide lighting information. This is great because objects in VR look like they actually belong in the space you're in, but it would of course be a huge security issue if all programs could get realtime video of your room, and I imagine the maths involved is pretty complex. So the only light information you get is a shader graph node which does a PBR lighting calculation based on provided parameters (albedo, normal, roughness, metallicity etc.). You can then instruct it to do whatever you want with the output of that inside the shader.
The upshot of this is that we have to make different versions of all our shaders for the Vision Pro version of the game.
Once the game is announced we'll probably have a lot to write about developing interactions for the vision pro vs the quest, so I'll save that for now. It's pretty fascinating though.
Anyway, right now I've still yet to wear a Vision Pro. Apple straight up aren't handing out devkits, we only have two in the company still, so mostly I'm hearing about things second hand.
Shores of Loci
A few genres of VR game have emerged by now. Shooting and climbing are two pretty well-solved problems, so a lot of games involve that. But another one is 3D puzzles. This is something that would be incredibly difficult on a flat screen, where manipulating 3D objects is quite difficult, but becomes quite natural and straightforward in VR.
I've heard about one such game that uses 3D scans of real locations, but Shores of Loci is all about very environment artist authored levels, lots of grand sweeping vistas and planets hanging in the sky and so on. Basically you go through a series of locations and assemble teetering ramshackle buildings and chunks of landscape, which then grow really big and settle into the water. You can pull the pieces towards you with your hand, and then when you rotate them into roughly the right position and orientation relative to another piece, they snap together.
It's diverting, if kinda annoying when you just can't find the place the piece should go - especially if the answer turns out to be that there's an intermediate piece that floated off somewhere. The environments are well-designed and appealing, it's cool to see the little guys appearing to inhabit them. That said it does kinda just... repeat that concept a bunch. The narrative is... there's a big stone giant who appears and gives you pieces sometimes. That's it basically.
Still, it's interesting to see the different environment concepts. Transitions have this very cool distorted sky/black hole effect.
However, the real thing that got me with this game, the thing that I'm writing about now, was the water. They got planar reflections working. On the Quest! This is something of a white whale for me. Doing anything that involves reading from a render texture is so expensive that it's usually a no-go, and yet here it's working great - planar reflections complete with natural looking distortion from ripples. There's enough meshes that I assume there must be a reasonably high number of draw calls, and yet... it's definitely realtime planar reflections, reflections move with objects, it all seems to work.
There's a plugin called Mirrors and Reflections for VR that provides an implementation, but so far my experience has been that the effect is too expensive (in terms of rendertime) to keep 72fps in a more complex scene. I kind of suspect the devs are using this plugin, but I'm really curious how they optimised the draw calls down hard enough to work with it, since there tends to be quite a bit going on...
Moss
This game's just straight up incredibly cute.
youtube
Third person VR games, where you interact with a character moving across a diorama-like level, are a tiny minority of VR games at the moment. I think it's a shame because the concept is fantastic.
Moss is a puzzle-platformer with light combat in a Redwall/Mouse Guard-like setting. The best part of Moss is 1000% interacting with your tiny little mousegirl, who is really gorgeously animated - her ears twitch, her tail swings back and forth, she tumbles, clambers, and generally moves in a very convincing and lifelike way.
Arguably this is the kind of game that doesn't need to be made in VR - we already have strong implementations of 'platformer' for flatscreen. What I think the VR brings in this case is this wonderful sense of interacting with a tiny 3D world like a diorama. In some ways it's sorta purposefully awkward - if Quill walks behind something, you get a glowing outline, but you might need to crane your neck to see her - but having the level laid out in this way as a 3D structure you can play with is really endearing.
Mechanically, you move Quill around with the analogue stick, and make her jump with the buttons, standard stuff. Various level elements can be pushed or pulled by grabbing them with the controllers, and you can also drag enemies around to make them stand on buttons, so solving a level is a combination of moving pieces of the level and then making Quill jump as appropriate.
The fact that you're instantiated in the level, separate from Quill, also adds an interesting wrinkle in terms of 'identification with player character'. In most third person games, you tend to feel that the player character is you to some degree. In Moss, it feels much more like Quill is someone I've been made responsible for, and I feel guilty whenever I accidentally make her fall off a cliff or something.
A lot is clearly designed around fostering that protective vibe - to heal Quill, you have to reach out and hold her with your hand, causing her to glow briefly. When you complete some levels, she will stop to give you a high five or celebrate with you. Even though the player is really just here as 'puzzle solver' and 'powerful macguffin', it puts some work in to make you feel personally connected to Quill.
Since the camera is not locked to the character, the controls are instead relative to the stage, i.e. you point the stick in the direction on the 2D plane you want Moss to move. This can make certain bits of platforming, like moving along a narrow ledge or tightrope, kinda fiddly. In general it's pretty manageable though.
The combat system is straightforward but solid enough. Quill has a three button string, and it can be cancelled into a dash using the jump button, and directed with the analogue stick. Enemies telegraph their attacks pretty clearly, so it's rarely difficult, but there's enough there to be engaging.
The game is built in Unreal, unlike most Quest games (almost all are made in Unity). It actually doesn't feel so very different though - likely because the lighting calculations that are cheap enough to run in Unity are the same ones that are cheap enough to run in Unreal. It benefits a lot from baked lighting. Some things are obvious jank - anything behind where the player is assumed to be sitting tends not to be modelled or textured - but the environments are in general very lively and I really like some of the interactions: you can slash through the grass and floating platforms rock as you jump onto them.
The story is sadly pretty standard high fantasy royalist chosen one stuff, nothing exciting really going on there. Though there are some very cute elements - the elf queen has a large frog which gives you challenges to unlock certain powers, and you can pet the frog, and even give it a high five. Basically all the small scale stuff is done really well, I just wish they'd put some more thought into what it's about. The Redwall/Mouse Guard style has a ton of potential - what sort of society would these sapient forest animals have? They just wanted a fairytale vibe though evidently.
Cutscene delivery is a weak point. You pull back into a cathedral-like space where you're paging through a large book, which is kinda cool, and listening to narration while looking at illustrations. In general I think these cutscenes would have worked better if you just stayed in the diorama world and watched the characters have animated interactions. Maybe it's a cost-saving measure. I guess having you turn the pages of the book is also a way to give you something to do, since sitting around watching NPCs talk is notoriously not fun in VR.
There are some very nice touches in the environment design though! In one area you walk across a bunch of human sized suits of armour and swords that are now rusting - nobody comments, but it definitely suggests that humans did exist in this world at some point. The actual puzzle levels tend to make less sense, they're very clearly designed as puzzles first and 'spaces people would live in' not at all, but they do tend to look pretty, and there's a clear sense of progression through different architectural areas - so far fairly standard forest, swamp, stone ruins etc. but I'll be curious to see if it goes anywhere weird with it later.
Weak story aside, I'm really impressed with Moss. Glad to see someone else giving third person VR a real shot. I'm looking forward to playing the rest of it.
...that's kinda all I played in a while huh. For example, I still haven't given Asgard's Wrath II, the swordfighting game produced internally at Meta that you get free on the Quest 3, a shot. Or Boneworks. I still haven't finished Half Life Alyx, even! Partly that's because the Quest 3 did not get on well with my long USB A to C cable - for some reason it only seems to work properly on a high quality C to C cable - and that restricts me from playing PCVR games that require too much movement. Still though...
Anyway, the game I've been working on these past 10 months should be ready to announce pretty soon. So I'm very excited for that.
9 notes
·
View notes
Note
So...... I got an idea.
Kaizo's POV on Cahaya and Fang's developing friendship.
(before galaxy season)
.
.
.
.
.
.
.
(Before Fang goes to Earth.)
"You want me to send Fang to Earth by himself?"
Kaizo couldn't believe what Admiral Maskmana had just said.
"But all of the targets are mere children. I can retrieve the power watches in no time!"
He points to the file photos on the hologram's screen, trying to persuade his admiral that this is not a good idea.
"That's the point, Kaizo."
"Because they are kids. Kids who are all the same age as Fang. It would be easier for him to interact with them."
"But, it still-"
Kaizo still wants him to reconsider, but Maskmana cuts him off.
"Kaizo."
"Are you underestimating Fang's ability?"
"......"
"No sir, I believe he can accomplish the mission."
"It's settled then. Besides-"
"I have another mission for you. I think it's better if Fang doesn't get involved in this mission."
".......Yes sir."
(Hours later, in Fang's room)
"Fang, take these files. You'll need them for the next mission, and-"
Kaizo puts a stack of books in front of Fang.
"I want you to master all these languages and cultures before I send you on the next mission."
"Wow, that's a lot."
Fang flips through the pages as he examines the books.
"So, when does our mission start, Captain?"
"......Not ours. It's yours."
"Hmm?"
"This will be your first solo mission."
"Do you think you can do it?"
"Wha- Yes! Abang-, I mean-, Captain!"
"I'll send you off once you're ready. Memorize the targets' faces first; there are many of them."
"Ok?"
"Let's see-"
"Gopal, Yaya, Ying, Petir, Angin, Tanah, Api?- wait, why do the faces look the same?, Air?, Daun? And Cahaya???- there are seven of them?!?-"
Kaizo leaves the room as Fang continues listing out the targets' info.
There's no point in stopping Fang now; sooner or later he will need to stand on his own feet.
.......Now he has to prepare for the next mission.
.
.
.
.
.
.
.
It's been weeks since Fang went to stay on Earth, and Kaizo has received messages from him that he hasn't had time to read.
At first, they were all about the mission's progress.
"I have successfully enrolled in the same school as the targets." "The power sphera that gave them their powers - Ocobot - faints for some unknown reason whenever it sees me, which makes me look suspicious to them." "Today the targets Petir, Angin and Tanah are still wary of me, while the others who don't have powers remain unknown. The elder siblings seem to be telling them to avoid me on purpose."
But they start to change a bit.
"Today I met the target Cahaya and talked to him. He's very smart compared to the others; he hypes up and becomes very talkative when he's sharing knowledge." "The elder siblings don't seem pleased to find that the target Cahaya has become closer to me." "I have solved the problem with the power sphera; the targets have shown more trust in me." "I have started walking to school with Cahaya every morning."
He has changed the way he addresses them.
"Api has gained his power, but he's unable to sleep peacefully because of the stress. So I tried to help, but it didn't end ......well." "It seems like Api had a fight with Tanah, which is the reason Api is stressed. But they are at ease now, as both of them apologized to each other." "Today Air activated his power, but ......for some reason he needs to lose weight before he goes to fight Boboibot."
"Cahaya is very curious about Boboibot and tried to dismantle it after Boboibot was defeated, but Tanah and Petir stopped him because it was dangerous."
It seems like Fang has befriended all of them, but what makes Kaizo more curious is "Cahaya".
According to the files he read, he's the youngest among the siblings and his power hasn't been discovered yet.
But for some reason Fang is attached to him; he's the one who always gets mentioned in his messages, even when it doesn't relate to the mission.
Interesting......
.
.
.
.
.
.
.
Ejo Jo has escaped from the hospital, saying he will get his revenge on Earth against the targets.
Which means it gives him a chance to see how attached Fang is to the children.
Kaizo has informed Fang that he will capture him on Earth and retrieve the power watches from the targets, since he hasn't made any progress.
.
.
.
.
.
.
.
Interesting.
Fang's face panics as I order him to fight them.
Fang tries to look away as "Cahaya" wants to confront him.
Fang's moves get affected when "Cahaya" yells at him through the force field barrier while he's fighting his siblings.
Fang disobeys my order and goes against me.
"Cahaya" looks very horrified when he witnesses me strangling Fang, as he tries to beg me to stop, and......
Are Api and Air seriously just destroying each other's attacks and having a fight in the middle of the battle?
Are they for real?
Petir is surprisingly stronger than I thought, but not strong enough.
What surprises me the most is that Fang snatches everyone's watches and tells me to stop the fight as he shows his loyalty to me.
Fang definitely looks very regretful after saying that he is not their friend, especially when "Cahaya" starts to sob and asks him whether he had been lying to him the whole time he was being his friend.
Fang admits that he has lied to them, but that he definitely enjoyed being friends with him - that is definitely not a lie - and apologizes that he had to find out he was an alien this way, and that he's sorry he has to leave now.
......Kaizo definitely notices Fang trying to hold back his tears after they arrive at the ship.
.
.
.
.
.
.
.
"This is me Adudu the captain of zero zero super evil ! "
An alien have the guts to attack his space ship but to surprise Kaizo more is -
"Release Fang now ! Give him back to me ! "
Cahaya appear on the screen .
"Haya ? "
Fang is definitely shock to see Cahaya too.
" Fang don't you dare leave after saying that you're sorry !" " I still mad at you for what you done but -" " I still want you to stay !"
Kaizo can see Fang's eyes light up as he heard what Cahaya just said.
"Lahap, Fang go to the control room ."
Kaizo cut off the connection .
" We have intruders to deal with."
Cahaya really have some guts don't he ?
Knowing he's powerless still have the guts to chase us to the space ?
That's impressive.
.
.
.
.
.
.
.
" I'll stay on earth Captain ."
" That's my choices ."
"......Very well Fang ."
Kaizo have approve them have the right to keep the watch . Also tried to recruit them but failed .
" Before I send you guys back to earth-"
"I will need a word with Fang first."
"Privately."
He can feel the sharp glance comes from Cahaya .
" Yes , Captain."
"Wait Fang-"
"Don't worry Haya . He will not harm me now ."
"......Fine. Please be safe ."
(Kaizo and Fang are now in the control room.)
"What do you need to talk about, Cap-"
"Pang, this is no longer a conversation between a captain and a soldier, but a conversation between a big brother and a little brother. So ease up a little."
"Ok? Abang, what do you want to talk about?"
"You're really attached to the youngest sibling, Cahaya. Aren't you?"
Fang's face flushes at Kaizo's words.
"Well..... yeah, I admit that, so is there a problem?"
"So, what's he like?"
"Pardon?"
"What's he like when he's with you?"
"Oh-, that?- um......"
"He's really talkative around me. He's always curious about everything. The stars, space, the planets, the- huh, I couldn't even count! He's someone who seeks knowledge whenever he gets a chance! And-"
Kaizo looks at Fang, who was so tense just now, now relaxed and happy as he keeps talking about Cahaya.
Kaizo comes to a conclusion: Fang may have befriended all of them, but Cahaya holds a special place in his heart - and that conclusion -
Also applies to Cahaya too.
"Hey, Fang?"
"Yes?"
"Cherish him. It's hard to find someone like him nowadays."
"Even without you saying it, I will, and always will!"
"And one last thing......"
"Do you think I didn't notice you sneakily hiding behind the door, Cahaya?"
Kaizo opens the door and Cahaya falls to the ground.
"!?!?"
"I understand that you're worried about Fang, but-"
"I believe it wasn't half bad to hear what Fang thinks about you."
"I'll leave you guys here now."
Kaizo steps out of the control room, leaving them some privacy.
They still have a long way to go.
Kaizo and his ability to leave his brother alone on Earth, explained
And Kaizo slowly seeing Fang get attached to them
Fang having his words change, his thoughts change - it's sweet
Kaizo sees Fang is enjoying himself with them (despite mentally and physically abusing him through the fight), I guess there are good intentions?? Yeah
13 notes
·
View notes
Text
PMT01: Scaffold nano & Trowel pico
Scaffold nano & Trowel pico v1.1.0 have been released tonight.
Updates add myStages to Scaffold, and QoL improvements to the powerup-editing experience in Trowel pico.
Now that it's out, I'd like to also take this moment to talk about the development and release of both.
One year minus one day ago, I announced on Twitter that I would work on the level editor for the demo version of BRICKBREAKER SPRINT (nano), so fans could breathe more life into the game even in its limited state, and also to showcase how much it can do with its limited toolset at the current time.
The reasoning for a completely separate technology version was simple: WebGL. WebGL in Unity is so limited for even the most basic things (I had to install a package to support cross-app copy/paste before nano+'s release!) and I said "hey, might as well give the multiplatform users something to be able to edit with; it doesn't need to be just Windows, which is what the currently unpolished-ish Trowel desktop is!"
But then it hit me that the same limitations would likely make it hell for me to support levels only as downloaded files (internally stored with the "bxtp" extension btw :) ), so then... I came to a crossroads:
just say FKIT and not do it, make ppl wait for BB Lite
still say FKIT and kickstart the foundation of the online service that I had planned since the first design iteration of the game
guess what I took?
I don't regret it.
Currently, Scaffold's login system is attached to itch.io. In the future, this will use my own account system (called Luna, still in development)
Now then... developing Scaffold's frontend was a challenge in itself, because I'm a masochist. I chose yet another technology to make the frontend in, this time Svelte, with SvelteKit as the backend, saying "okay, no more fear of ServerSideRendering now that I can afford a VPS for this"
but... Svelte has been an absolute joy to work with. Its learning curve is even smoother than React's (my first frontend framework, which powers cometSpectrum!), and I got everything rolling very quickly
At first, I was going to use my regular website design language, but it's kinda jank, so I said NO and started from scratch with a simpler façade. For the style I wanted to pursue... it was perfect. Some people have complimented the graphic aspect of the site, and I'm happy I could deliver exactly what I wanted. (and yes the icons being in opposite directions compared to bbsprint's UI is entirely on purpose)
BTW, the site is made in such a way that a BB theme could be used as the site's theme and every color will change except for PNG icons. I love it (this functionality is used for people using their OS's Light theme)
This is actually the first made-by-me website project that has "public" facing write actions to a database and stores actual files. It works... very well, and I am very happy about it. It's like a combination of everything I've learnt up to this point, down to API design and interop between programs.
I did say this was the foundation for Scaffold, and I plan for this game to keep Scaffold as its prime way of getting stages, even when getting to storefronts like Steam or GOG. Kind of like an osu! situation.
You're at this part of the message... wanna see how Scaffold's logo used to look?
5 notes
·
View notes
Text
youtube
Stumbled on this - so for anyone out of the loop, part of Reddit blowing up last year was because it was making use of its API prohibitively expensive for the average person, killing off a lot of (superior) third-party apps used to both browse and moderate the platform on mobile.
I don't know if it was stated explicitly at the time, but for me the writing was on the wall - this was purely to fence off Reddit's data from being trawled by web scraping bots - exactly the same thing Elon Musk did when he took over Twitter so he could wall off that data for his own AI development.
So it comes as absolutely zero surprise to me that with Reddit's IPO filing, AI and LLM (Large Language Models) are mentioned SEVERAL times. This is all to tempt a public buyer.
What they do acknowledge though, which is why this video is titled 'Reddit's Trojan Horse', is the fact that while initially this might work and be worth a lot - as the use of AI grows, so will the likelihood that AI-generated content is being passed off as 'human generated' on the platform - essentially nulling the value of having a user-generated dataset, if not actively MAKING IT WORSE.
As stated in the video - it's widely known that feeding AI content into an AI causes 'model collapse', or complete degeneration into gibberish and 'hallucinations'. This goes for both LLMs and Image Generation AI.
Now, given current estimates that 90% of the internet's content will be AI-generated by 2026, most of the internet is going to turn into a potential minefield for web-scraping content to shove into a training dataset, because now you have to really start paying attention to what your bot is sucking up - because let's face it, no one is really going to look at what is in that dataset because it's simply too huge (unless you're one of those poor people in Kenya being paid jack shit to basically weed out the most disgusting and likely traumatizing content from a massive dataset).
What I know about current web-scraping is that OpenAI at least has built its bot to recognize AI-generated image content and exclude it from the scrape. An early version of image protection on the side of Artists was something like this - it basically injected a little bit of data to make the bot think it was AI generated and leave it alone. Now of course we have Nightshade and Glaze, which actively work against training the model and 'poison' the dataset, making Model Collapse worse.
So right now, the best way to protect your images (and I mean all images you post online publicly, not just art) from being scraped is to Glaze/Nightshade them, because these bots will likely be programmed to avoid them - and if not, good news! You poisoned the dataset.
What I was kind of stumped on is Language Models. While feeding AI LLM's their own data also causes Model Collapse, it's harder to understand why. With an image it makes sense - it's all 1's and 0's to a machine, and there is some underlying pattern within that data which gets further reinforced and contributes to the Model Collapse. But with text?
You can't really Nightshade/Glaze text.
Or can you?
Much like with images, there is clearly something about the way a LLM chooses words and letters that has a similar pattern that when reinforced contributes to this Model Collapse. It may read perfectly fine to us, but in a way that text is poisoned for the AI. There's talk of trying to figure out a way to 'watermark' generated text, but probably won't figure that one out any time soon given they're not really sure how it's happening in the first place. But AI has turned into a global arms race of development, they need data and they need it yesterday.
For those who want to disrupt LLMs, I have a proposal - get your AI to reword your shit. Just a bit. Just enough that it's got this pattern injected.
These companies have basically opened Pandora's Box to the internet before even knowing this would be a problem - they were too focused on getting money (surprise! It's capitalism again). And well, Karma's about to be a massive bitch to them for rushing it out the door and stealing a metric fucktonne of data without permission.
If they want good data? They will have to come to the people who hold the good data, in its untarnished, pure form.
I don't know how accurate this language poisoning method could be, I'm just spitballing hypotheticals here based on the stuff I know and current commentary in AI tech spaces. Either way, the tables are gonna turn soon.
So hang in there. Don't let corpos convince you that you don't have control here - you soon will have a lot of control. Trap the absolute fuck out of everything you post online, let it become a literal minefield for them.
Let them get desperate. And if they want good data? Well they're just going to have to pay for it like they should have done in the first place.
Fuck corpos. Poison the machine. Give them nothing for free.
#kerytalk#anti ai#honestly the fact that language models can't identify it's own text should have hit me a LOT sooner#long post#Sorry I am enjoying the fuck out of this and the direction it's going in - like for once Karma might ACTUALLY WORK#especially enjoying it since yeah AI image generation dropping killed my creative motivation big time and I'm still struggling with it#these fuckers need to pay#fuck corpos#tech dystopia#my commentary#is probably a more accurate tag I'll need to change to#Youtube
6 notes
·
View notes
Text
Cloud Agnostic: Achieving Flexibility and Independence in Cloud Management
As businesses increasingly migrate to the cloud, they face a critical decision: which cloud provider to choose? While AWS, Microsoft Azure, and Google Cloud offer powerful platforms, the concept of "cloud agnostic" is gaining traction. Cloud agnosticism refers to a strategy where businesses avoid vendor lock-in by designing applications and infrastructure that work across multiple cloud providers. This approach provides flexibility, independence, and resilience, allowing organizations to adapt to changing needs and avoid reliance on a single provider.
What Does It Mean to Be Cloud Agnostic?
Being cloud agnostic means creating and managing systems, applications, and services that can run on any cloud platform. Instead of committing to a single cloud provider, businesses design their architecture to function seamlessly across multiple platforms. This flexibility is achieved by using open standards, containerization technologies like Docker, and orchestration tools such as Kubernetes.
Key features of a cloud agnostic approach include:
Interoperability: Applications must be able to operate across different cloud environments.
Portability: The ability to migrate workloads between different providers without significant reconfiguration.
Standardization: Using common frameworks, APIs, and languages that work universally across platforms.
Benefits of Cloud Agnostic Strategies
Avoiding Vendor Lock-In: The primary benefit of being cloud agnostic is avoiding vendor lock-in. Once a business builds its entire infrastructure around a single cloud provider, it can be challenging to switch or expand to other platforms. This could lead to increased costs and limited innovation. With a cloud agnostic strategy, businesses can choose the best services from multiple providers, optimizing both performance and costs.
Cost Optimization: Cloud agnosticism allows companies to choose the most cost-effective solutions across providers. As cloud pricing models are complex and vary by region and usage, a cloud agnostic system enables businesses to leverage competitive pricing and minimize expenses by shifting workloads to different providers when necessary.
Greater Resilience and Uptime: By operating across multiple cloud platforms, organizations reduce the risk of downtime. If one provider experiences an outage, the business can shift workloads to another platform, ensuring continuous service availability. This redundancy builds resilience, ensuring high availability in critical systems.
Flexibility and Scalability: A cloud agnostic approach gives companies the freedom to adjust resources based on current business needs. This means scaling applications horizontally or vertically across different providers without being restricted by the limits or offerings of a single cloud vendor.
Global Reach: Different cloud providers have varying levels of presence across geographic regions. With a cloud agnostic approach, businesses can leverage the strengths of various providers in different areas, ensuring better latency, performance, and compliance with local regulations.
Challenges of Cloud Agnosticism
Despite the advantages, adopting a cloud agnostic approach comes with its own set of challenges:
Increased Complexity: Managing and orchestrating services across multiple cloud providers is more complex than relying on a single vendor. Businesses need robust management tools, monitoring systems, and teams with expertise in multiple cloud environments to ensure smooth operations.
Higher Initial Costs: The upfront costs of designing a cloud agnostic architecture can be higher than those of a single-provider system. Developing portable applications and investing in technologies like Kubernetes or Terraform requires significant time and resources.
Limited Use of Provider-Specific Services: Cloud providers often offer unique, advanced services—such as machine learning tools, proprietary databases, and analytics platforms—that may not be easily portable to other clouds. Being cloud agnostic could mean missing out on some of these specialized services, which may limit innovation in certain areas.
Tools and Technologies for Cloud Agnostic Strategies
Several tools and technologies make cloud agnosticism more accessible for businesses:
Containerization: Docker and similar containerization tools allow businesses to encapsulate applications in lightweight, portable containers that run consistently across various environments.
Orchestration: Kubernetes is a leading tool for orchestrating containers across multiple cloud platforms. It ensures scalability, load balancing, and failover capabilities, regardless of the underlying cloud infrastructure.
Infrastructure as Code (IaC): Tools like Terraform and Ansible enable businesses to define cloud infrastructure using code. This makes it easier to manage, replicate, and migrate infrastructure across different providers.
APIs and Abstraction Layers: Using APIs and abstraction layers helps standardize interactions between applications and different cloud platforms, enabling smooth interoperability.
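To make the abstraction-layer idea concrete, here is a minimal TypeScript sketch. The `ObjectStore` interface, the adapter class, and the endpoint are hypothetical names invented for illustration: application code depends only on the interface, so swapping cloud providers means swapping the adapter, not rewriting business logic.

```typescript
// A provider-neutral interface the application codes against.
interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | null>;
}

// Hypothetical adapter for one provider (e.g. any S3-compatible object storage API).
class S3CompatibleStore implements ObjectStore {
  constructor(private endpoint: string, private bucket: string) {}

  async put(key: string, data: Uint8Array): Promise<void> {
    await fetch(`${this.endpoint}/${this.bucket}/${key}`, { method: "PUT", body: data });
  }

  async get(key: string): Promise<Uint8Array | null> {
    const res = await fetch(`${this.endpoint}/${this.bucket}/${key}`);
    return res.ok ? new Uint8Array(await res.arrayBuffer()) : null;
  }
}

// Business logic sees only the interface, never the provider.
async function archiveReport(store: ObjectStore, id: string, body: string): Promise<void> {
  await store.put(`reports/${id}.txt`, new TextEncoder().encode(body));
}
```

A second adapter for another provider would implement the same interface, which is the essence of staying portable across clouds.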
When Should You Consider a Cloud Agnostic Approach?
A cloud agnostic approach is not always necessary for every business. Here are a few scenarios where adopting cloud agnosticism makes sense:
Businesses operating in regulated industries that need to maintain compliance across multiple regions.
Companies that require high availability and fault tolerance across different cloud platforms for mission-critical applications.
Organizations with global operations that need to optimize performance and cost across multiple cloud regions.
Businesses that aim to avoid long-term vendor lock-in and maintain flexibility for future growth and scaling needs.
Conclusion
Adopting a cloud agnostic strategy offers businesses unparalleled flexibility, independence, and resilience in cloud management. While the approach comes with challenges such as increased complexity and higher upfront costs, the long-term benefits of avoiding vendor lock-in, optimizing costs, and enhancing scalability are significant. By leveraging the right tools and technologies, businesses can achieve a truly cloud-agnostic architecture that supports innovation and growth in a competitive landscape.
Embrace the cloud agnostic approach to future-proof your business operations and stay ahead in the ever-evolving digital world.
2 notes
·
View notes
Text
FullStackJava: Mastering Both Ends of the Stack
Java isn't just for backend anymore! As a full stack Java developer, you'll wield powerful tools on both sides:
Frontend:
JavaServer Faces (JSF)
Thymeleaf
Vaadin
Backend:
Spring Boot
Hibernate ORM
RESTful APIs
Database:
JDBC
JPA
Build & Deploy:
Maven/Gradle
Docker
Jenkins
Embrace the versatility. Java full stack = limitless possibilities.
3 notes
·
View notes
Text
Legal Issues Facing Discord Music Bots: What You Need to Know
In the world of Discord, music bots have become a popular way to enhance the user experience by allowing server members to listen to music together in real-time. These bots can play songs from various streaming platforms like YouTube, Spotify, and SoundCloud, making them a central feature for many communities. However, the use of these music bots has not been without controversy, especially concerning legal issues surrounding copyright and intellectual property rights. In recent years, several high-profile music bots have been shut down due to legal pressures, raising important questions for both users and developers. This article delves into the legal challenges that Discord music bots face, the implications for users, and what the future may hold.
Background on Legal Challenges: Discord music bots like Groovy and Rythm were once among the most popular bots on the platform, boasting millions of active users. These bots allowed users to stream music from YouTube and other platforms directly into their Discord servers. However, their popularity also caught the attention of major record labels and streaming platforms, which led to a series of legal actions that culminated in the shutdown of these bots.
The Rise of Music Bots:
Music bots first gained traction as a fun and easy way to share music in group settings. Their ability to pull audio from platforms like YouTube made them a go-to choice for community servers.
Bots like Groovy and Rythm became ubiquitous, often installed on thousands of servers, offering features such as playlist creation, queue management, and high-quality streaming.
Legal Notices and Shutdowns:
In 2021, YouTube issued cease-and-desist letters to both Groovy and Rythm, citing violations of their terms of service. The main issue was that these bots were pulling audio from YouTube videos without proper licensing or permission.
Groovy was the first to shut down, followed by Rythm shortly after. These shutdowns sent shockwaves through the Discord community, as many servers relied on these bots for their music needs.
The legal notices highlighted the importance of adhering to copyright laws, even in seemingly informal settings like Discord servers.
Understanding Copyright Law:
Copyright law protects creators' rights over their original works, including music. When music is played publicly or shared, it generally requires a license from the copyright holder or a performing rights organization (PRO).
Platforms like YouTube and Spotify have agreements with PROs that allow them to stream music legally. However, when a bot extracts and plays this music on another platform (like Discord), it can violate these agreements if proper licensing is not obtained.
Current Legal Status of Music Bots: The shutdown of Groovy and Rythm was a wake-up call for both developers and users of Discord music bots. Since then, the legal landscape has become more complex, with new bots emerging that attempt to navigate these challenges while staying within legal boundaries.
Emergence of Legal-Compliant Bots:
After the shutdown of major bots, developers began to explore ways to create music bots that could operate legally. This has led to the emergence of bots like Hydra and Chip, which use APIs provided by streaming platforms to play music in a way that complies with copyright laws.
Some bots have adopted a freemium model, where basic features are free, but advanced features (like high-quality streaming or playlist management) require a paid subscription. The revenue from these subscriptions helps cover licensing fees.
Platform-Specific Bots:
Some music bots are now designed to work specifically with platforms that offer API access and licensing agreements. For example, bots that pull music from Spotify do so through Spotify's official API, which ensures that all music played is properly licensed.
These platform-specific bots often come with restrictions, such as requiring users to link their personal accounts or limiting the number of tracks that can be played from certain artists or albums.
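As a rough illustration of the "official API" route, the sketch below queries Spotify's public Web API search endpoint. It assumes a valid OAuth access token obtained through Spotify's documented authorization flows; token acquisition, playback, and fuller error handling are omitted.

```typescript
// Minimal sketch: look up a track through Spotify's official Web API
// instead of scraping audio from another platform.
async function searchTrack(accessToken: string, query: string) {
  const url = new URL("https://api.spotify.com/v1/search");
  url.searchParams.set("q", query);
  url.searchParams.set("type", "track");
  url.searchParams.set("limit", "1");

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);

  const data = await res.json();
  return data.tracks?.items?.[0] ?? null; // first matching track, or null
}
```

Because every request carries the user's own token and goes through the platform's documented endpoints, usage stays within the provider's terms rather than around them.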
Risks for Server Admins:
While newer bots strive to operate within legal limits, server admins must still be cautious. Using bots that do not comply with copyright laws can expose the server owner to legal risks, including potential fines or the shutdown of their server.
To mitigate these risks, server admins should only use bots that explicitly state their compliance with copyright laws and avoid using bots that scrape audio from platforms without permission.
What This Means for Users: For the average Discord user, the legal issues surrounding music bots can be confusing. However, understanding these challenges is crucial for making informed decisions about which bots to use and how to use them responsibly.
Choosing the Right Bot:
Users should look for music bots that operate transparently and within legal boundaries. Bots that use official APIs from platforms like Spotify or YouTube are generally safer to use.
Avoid bots that offer suspiciously unlimited features for free, as these are more likely to be operating without proper licensing, putting both the bot and the server at risk.
Understanding Your Rights and Responsibilities:
When using a music bot, users are indirectly involved in the public performance of music. While this might seem trivial, it falls under the purview of copyright law.
Users should be aware that even if a bot is free to use, it doesn’t necessarily mean it’s legal. Always check the bot’s terms of service and any disclaimers provided by the developers.
Potential Consequences:
Using illegal music bots can result in the bot being shut down, leading to the loss of playlists, queues, and other features. In some cases, Discord itself may take action against servers that consistently violate copyright laws.
It’s also possible that continued use of illegal bots could lead to broader legal action against server owners, especially if the server has a large following or generates revenue.
Future Legal Considerations: The legal landscape for Discord music bots is likely to continue evolving as both developers and copyright holders navigate the challenges of digital music distribution. Here are some potential future developments:
Stricter Enforcement by Platforms:
As streaming platforms like YouTube and Spotify continue to crack down on unauthorized use of their content, we may see even stricter enforcement measures. This could include more frequent shutdowns of non-compliant bots or legal actions against developers.
Platforms may also develop more robust tools for detecting and blocking unauthorized bots, making it harder for illegal bots to operate undetected.
Development of Licensing Solutions:
There is potential for the development of licensing solutions specifically for music bots. This could involve partnerships between Discord and major streaming platforms to offer legal, licensed music bot options.
Developers could also explore ways to integrate more direct licensing options, allowing server owners to pay for the rights to stream music legally within their communities.
AI and Music Bots:
As AI technology continues to advance, we may see the development of music bots that can create or curate music in real-time, reducing reliance on copyrighted material. AI-generated music could offer a legal alternative, though this raises its own set of legal and ethical questions.
AI could also be used to monitor and manage the use of copyrighted music, ensuring that bots remain compliant with licensing agreements in real-time.
Conclusion: The legal issues facing Discord music bots highlight the complexities of digital music distribution and the importance of respecting copyright laws. While bots like Groovy and Rythm brought joy to millions of users, their shutdowns underscored the need for compliance with legal frameworks. As new, legally compliant bots emerge, users and server admins must be vigilant in choosing and using these tools responsibly. By staying informed and adhering to legal guidelines, Discord communities can continue to enjoy the benefits of music bots without the risk of legal repercussions.
2 notes
·
View notes
Note
your blog is cool and youre the only person on tumblr i really follow on tech stuff. is the transition to manifest v3 really worth all the hubbub?
first of all thank you! and second of all, lol. to be honest I haven't had a major eye on the v2 > v3 transition but you may be unsurprised to learn that I'm not taking hard sides here and am both slightly suspicious of the changeover and also less than convinced by some of the "this is the worst shit ever" blowback. not to suck google's dick either (unless any recruiters are reading this...?) (jk my least favorite person on r/technicalwriting works at google so unless you can guarantee that I will not come into contact with that man, it's gonna be a hard no) but to some extent I think this is one of those things where google, as the de facto governing entity for how internet browsers are designed[1] is, for better or worse, in the seat to steer the ship right now and inevitably has to make design choices that will shape the future of (how people will access) the web.[2]
[1] insert comment about firefox here but considering firefox is almost singlehandedly bankrolled by google it works out the same in the end. hence my perpetual dislike of the way-oversimplified "maverick underdog mozilla singlehandedly holding the line against google" narrative... go tell me where the money is coming from!!!
[2] also I know the W3C is the actual governing entity for internet protocol design and has influenced browser design on a more abstract level but that's still a degree of separation away and tbh I'm not super familiar with W3C drama. although I can only assume there's drama lol.
and google being google has both real and imagined interests in shaping the web by virtue of their other business ventures (e.g. but not exclusively e.g., advertising) and so I think some amount of blowback is gonna be inevitable when they propose Big Fundamental Changes. which, like, I'm the last person who's gonna say "no we should definitely drop our defenses and approach this without an ounce of skepticism" lol so I think the knee-jerk Uh Oh impulse is totally fair and maybe even warranted. but after the initial jerk I also think it's worth hearing shit out and, you know, on the face of it I can see why the changes outlined in the v3 manifest bring positive changes to the table. security and performance and shit. but security and performance are relatively boring selling points, and when google has earned a poor public reputation thanks to the other shit they've pulled I think it's understandable that even well-meaning changes will be met with general suspicion.
buuuut I still get irritated by the verging-on-clickbait headlines where literally every change about v3 is framed as "google is finally killing ad blockers" and then you read the article and ad blockers aren't mentioned directly a single time. like it'll literally just be about v3 lol. arguably I'm just being naive/willfully ignorant because of course it's all really about ad blockers since google is an advertising business and the other benefits are a smokescreen and blah blah blah but I do kinda feel like that borders on conspiratorial thinking, especially since ad blockers will work in v3, albeit differently, and google is actively working with/taking feedback from extension developers (including ad block extension developers). a lot of it genuinely just seems like "major version change will require significant technical work to implement, more at 11".
who knows though, I could eat my words :shrug:
kind of related but I was always kind of surprised by the amount of pushback against the web integrity API thing because I read the proposal behind it and it seemed pretty well-intentioned to me. granted there were some fair/serious concerns that even the proposal pointed out and a lot of unanswered implementation details (and tbf it was a proposal/WIP) but I got Why they were proposing it, invalid traffic being the bogeyman it is. and like I am not a cryptography guy in the slightest but as I understand it the WEI was basically just an SSL certificate in reverse?
a lot of it makes me think about the web3 article from a few years back where a guy talked about designing an NFT that looked cool on various storefronts but looked like a poop emoji in your actual wallet after you bought it, which, in the process of trying to google it to link here, led me to this substack post where someone summarized it as "NFTs are centralized and no one cares." which is pretty much exactly what I was getting at (and why I thought about it in this context) with how even ostensibly open protocols can devolve into walled gardens built around those protocols with bonus features tacked on, if the protocols themselves don't offer those features out the gates (and enough people want them). idk. food for thought I guess. I really am just rambling here though so let me humble myself by reminding us all that I have a B.A. in english and love to speculate lolol. not an expert!
8 notes
·
View notes
Text
Scrape Telecommunications Data - Web scraping for Telecom Businesses
Web scraping services for telecommunications companies are enabling the development of new services for subscribers. High-quality web data opens up new ways to predict consumer trends, monitor competitors, automate compliance, and build new services for end-users and B2B customers. We scrape telecommunications company data in countries like the USA, UK, UAE, India, and Germany.
Get Personalized Solution
Data extraction from websites for telecommunication companies allows new service development for clients.
Quality web data opens many new doors to track competition, predict consumer trends, automate compliance, and design new services for end customers and B2B clients.
How quickly the world is moving ahead of us
The telecom industry is facing huge changes in its operations. Profit margins and ARPUs have been dropping constantly since the smartphone era began. Further, the quantity of data in this industry has been doubling roughly every three years, according to various sources.
Great tool for data extraction. I found Real Data API to be the best and most user-friendly web scraping tool I could find for my needs.
Martin P
New Zealand
Offering value-driven data to top telecom companies
How web automation and data scraping are reforming the Telecommunication industry
Social Media Tracking
Price monitoring
Product tracking
Product development
Web Automation for Telecommunication
Social Media Tracking
Collect insights on your brand and your competing telecom brands from various social media platforms like Reddit, LinkedIn, Twitter, and Instagram to check the brand reputation. Gauge the growth potential, and work on marketing strategies accordingly. Automate follower tracking, image saving, comment, and mention scraping.
Get a personalized Telecommunication web scraper for your business need
Hire the best experts to develop web scraping API projects for your data requirements.
Scrape the data exactly when you want it using the customized scheduler.
Schedule the tracking of targeted websites; we will manage their maintenance and support.
Get well-structured, high-quality data in preferred formats like CSV, XML, JSON, or HTML, and use it further without processing.
To reduce the risk of manual errors, use automatic data upload with the help of readymade APIs and integrations.
Get Personalized Solution
Scrape web data for your Telecommunication requirements from any website with Real Data API
Request a data sample
Why are Telecommunication companies choosing Real Data API?
Flexibility
Real Data API can provide anything without limits when it comes to data scraping and web automation. We follow a "nothing is impossible" mindset.
Reliability
The Real Data API team will streamline your solution and ensure it keeps running without any bugs. We also ensure you get reliable data to make correct decisions.
Scalability
As you keep growing, we can keep adjusting your solution to scale up the data extraction. As per your needs, we can extract millions of pages to get data in TBs.
The market is increasingly data-driven. Real Data API helps you get the correct data for your telecom business.
Know More: https://www.realdataapi.com/scrape-telecommunications-data.php
Contact : https://www.realdataapi.com/contact.php
#ScrapeTelecommunicationsData#ExtractTelecommunicationsData#TelecommunicationsDataCollection#scrapingTelecomData#webscrapingapi#datascraping#dataanalytics#dataharvest#datacollection#dataextraction#RealDataAPI#usa#uk#uae#germany#australia#canada
2 notes
·
View notes
Text
Elevating Your Full-Stack Developer Expertise: Exploring Emerging Skills and Technologies
Introduction: In the dynamic landscape of web development, staying at the forefront requires continuous learning and adaptation. Full-stack developers play a pivotal role in crafting modern web applications, balancing frontend finesse with backend robustness. This guide delves into the evolving skills and technologies that can propel full-stack developers to new heights of expertise and innovation.
Pioneering Progress: Key Skills for Full-Stack Developers
1. Innovating with Microservices Architecture:
Microservices have redefined application development, offering scalability and flexibility in the face of complexity. Mastery of tools like Kubernetes and Docker empowers developers to architect, deploy, and manage microservices efficiently. By breaking down monolithic applications into modular components, developers can iterate rapidly and respond to changing requirements with agility.
2. Embracing Serverless Computing:
The advent of serverless architecture has revolutionized infrastructure management, freeing developers from the burdens of server maintenance. Platforms such as AWS Lambda and Azure Functions enable developers to focus solely on code development, driving efficiency and cost-effectiveness. Embrace serverless computing to build scalable, event-driven applications that adapt seamlessly to fluctuating workloads.
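As a small, hedged illustration (not tied to any particular project), a serverless function can be as little as a single exported handler; the platform provisions and scales the runtime. The sketch below assumes an AWS Lambda behind API Gateway, with a simplified event shape.

```typescript
// Minimal AWS Lambda handler sketch (Node.js runtime, TypeScript).
// The event shape assumes the function sits behind API Gateway.
export const handler = async (event: { queryStringParameters?: { name?: string } }) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```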
3. Crafting Progressive Web App (PWA) Experiences:
Progressive Web Apps (PWAs) herald a new era of web development, delivering native app-like experiences within the browser. Harness the power of technologies like Service Workers and Web App Manifests to create PWAs that are fast, reliable, and engaging. With features like offline functionality and push notifications, PWAs blur the lines between web and mobile, captivating users and enhancing engagement.
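A bare-bones sketch of the Service Worker pattern: register the worker from page code, then intercept fetches with a cache-first strategy. The worker path, cache name, and asset list are placeholders.

```typescript
// In page code: register the service worker (path is a placeholder).
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}

// In sw.js: pre-cache a few assets and serve them cache-first, falling back to the network.
const CACHE = "app-shell-v1";

self.addEventListener("install", (event: any) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(["/", "/app.css", "/app.js"]))
  );
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```

The cache-first fallback is what lets the app keep working offline; push notifications layer on top via the Push and Notifications APIs.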
4. Harnessing GraphQL for Flexible Data Management:
GraphQL has emerged as a versatile alternative to RESTful APIs, offering a unified interface for data fetching and manipulation. Dive into GraphQL's intuitive query language and schema-driven approach to simplify data interactions and optimize performance. With GraphQL, developers can fetch precisely the data they need, minimizing overhead and maximizing efficiency.
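For flavor, a GraphQL request is just a POST carrying a query string. The endpoint and schema below (a `project` type with nested `tasks`) are made up for illustration, but they show how the client asks for exactly the fields it needs and nothing more.

```typescript
// Hypothetical endpoint and schema, for illustration only.
const query = `
  query ProjectSummary($id: ID!) {
    project(id: $id) {
      name
      tasks(status: OPEN) {
        title
        assignee { name }
      }
    }
  }
`;

async function fetchProjectSummary(id: string) {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data.project; // only the requested fields come back
}
```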
5. Unlocking Potential with Jamstack Development:
Jamstack architecture empowers developers to build fast, secure, and scalable web applications using modern tools and practices. Explore frameworks like Gatsby and Next.js to leverage pre-rendering, serverless functions, and CDN caching. By decoupling frontend presentation from backend logic, Jamstack enables developers to deliver blazing-fast experiences that delight users and drive engagement.
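As one simplified example of pre-rendering in Next.js, a page can export `getStaticProps` so its data is fetched at build time and served as static HTML from a CDN; the API URL and file path here are placeholders.

```typescript
// pages/posts.tsx — simplified Next.js static-generation sketch.
// The fetch URL is a placeholder; a real project would point at a CMS or API.
type Post = { id: string; title: string };

export async function getStaticProps() {
  const res = await fetch("https://example.com/api/posts");
  const posts: Post[] = await res.json();
  // Re-generate the page in the background at most once per minute.
  return { props: { posts }, revalidate: 60 };
}

export default function Posts({ posts }: { posts: Post[] }) {
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```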
6. Integrating Headless CMS for Content Flexibility:
Headless CMS platforms offer developers unprecedented control over content management, enabling seamless integration with frontend frameworks. Explore platforms like Contentful and Strapi to decouple content creation from presentation, facilitating dynamic and personalized experiences across channels. With headless CMS, developers can iterate quickly and deliver content-driven applications with ease.
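A rough sketch of the headless-CMS pattern using Contentful's JavaScript SDK; the space ID, token, and content type are placeholders. The frontend pulls structured content over the delivery API and is free to render it however it likes.

```typescript
import { createClient } from "contentful";

// Placeholder credentials and content type, for illustration only.
const client = createClient({
  space: "your-space-id",
  accessToken: "your-delivery-api-token",
});

// Fetch published articles and hand plain data objects to the frontend.
async function loadArticles() {
  const entries = await client.getEntries({ content_type: "article", limit: 10 });
  return entries.items.map((item) => item.fields); // title, body, etc. defined in the CMS
}
```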
7. Optimizing Single Page Applications (SPAs) for Performance:
Single Page Applications (SPAs) provide immersive user experiences but require careful optimization to ensure performance and responsiveness. Implement techniques like lazy loading and server-side rendering to minimize load times and enhance interactivity. By optimizing resource delivery and prioritizing critical content, developers can create SPAs that deliver a seamless and engaging user experience.
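Lazy loading in a SPA often comes down to dynamic `import()`: the heavy module is only downloaded when the user actually needs it. A framework-agnostic sketch, with placeholder module path and element IDs:

```typescript
// Load a heavy charting module only when the user opens the analytics view.
// "./analytics-chart" is a placeholder path for illustration.
async function showAnalytics(container: HTMLElement) {
  const { renderChart } = await import("./analytics-chart"); // fetched on demand
  renderChart(container);
}

document.getElementById("analytics-tab")?.addEventListener("click", () => {
  const panel = document.getElementById("analytics-panel");
  if (panel) showAnalytics(panel);
});
```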
8. Infusing Intelligence with Machine Learning and AI:
Machine learning and artificial intelligence open new frontiers for full-stack developers, enabling intelligent features and personalized experiences. Dive into libraries like TensorFlow.js and ONNX Runtime Web to build recommendation systems, predictive analytics, and natural language processing capabilities. By harnessing the power of machine learning, developers can create smarter, more adaptive applications that anticipate user needs and preferences.
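As a minimal taste of TensorFlow.js (exact option names may vary across versions), the snippet below defines and trains a tiny regression model entirely in the browser or Node:

```typescript
import * as tf from "@tensorflow/tfjs";

// Tiny regression sketch: learn y ≈ 2x from a handful of points.
async function trainToyModel() {
  const xs = tf.tensor2d([[1], [2], [3], [4]]);
  const ys = tf.tensor2d([[2], [4], [6], [8]]);

  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

  await model.fit(xs, ys, { epochs: 100 });
  (model.predict(tf.tensor2d([[5]])) as tf.Tensor).print(); // should be close to 10
}
```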
9. Safeguarding Applications with Cybersecurity Best Practices:
As cyber threats continue to evolve, cybersecurity remains a critical concern for developers and organizations alike. Stay informed about common vulnerabilities and adhere to best practices for securing applications and user data. By implementing robust security measures and proactive monitoring, developers can protect against potential threats and safeguard the integrity of their applications.
10. Streamlining Development with CI/CD Pipelines:
Continuous Integration and Deployment (CI/CD) pipelines are essential for accelerating development workflows and ensuring code quality and reliability. Explore tools like Jenkins, CircleCI, and GitLab CI/CD to automate testing, integration, and deployment processes. By embracing CI/CD best practices, developers can deliver updates and features with confidence, driving innovation and agility in their development cycles.
#full stack developer#education#information#full stack web development#front end development#web development#frameworks#technology#backend#full stack developer course
2 notes
·
View notes
Text
How Salesforce Developers Shape the Future of Project Management Success?
The ever-changing field of project management has made technology advancements crucial to achieving desired results. With the help of knowledgeable developers and consultants, Salesforce is a platform that can truly transform businesses, even in the face of an extensive number of competing offerings.
A Salesforce consultant will have a huge influence on how project managers succeed in the future. They will use Salesforce's features to improve teamwork, accelerate efficiency, and streamline procedures.
In this blog, we'll reveal the critical role that Salesforce developers play in influencing the success of project management. We'll explore their experience streamlining processes, streamlining work, and customizing solutions to drive productivity and cooperation in the fast-paced project environments of today.
Customized Solutions Crafting
Explore the ways in which developers modify modules, improve user experience, and guarantee scalability to ensure future-proofing of Salesforce systems.
Adapting Salesforce Modules:
The modules in Salesforce's suite are easily navigated by developers, who can readily customize features to fit project workflows. Whether creating complex workflows, configuring custom objects, or connecting third-party apps, developers make use of Salesforce's adaptability to create solutions that align with project goals.
User Experience Enhancement:
Beyond raw functionality, developers shape how users experience the platform: by refining layouts, components, and navigation within Salesforce, they make it easier for project teams to find information and complete tasks, which drives adoption and keeps project data consistent.
Scalability and Future-Proofing:
Future-focused, scalable, and flexible solutions are designed by developers. They future-proof project management systems by foreseeing possible expansion and changing needs, providing the groundwork for long-term success and adaptability.
Seamless Collaboration Integration
Examine how seamless collaboration integration may strengthen teamwork, bridge systems, and enable data-driven decision-making.
System Integration:
By utilizing middleware and APIs, developers can plan the smooth connection of Salesforce with other vital programs and systems. Integration facilitates data flow and guarantees a cohesive environment through connections with project management software, communication tools, and enterprise resource planning (ERP) systems.
Collaborative Workspace:
Within Salesforce, developers create collaborative workspaces that enable teams to share insights, interact in real time, and centralize communication. Transparent communication and knowledge sharing are facilitated by features like Chatter, Communities, and interfaces with Slack and other collaborative applications.
Data-Driven Decision Synthesis:
Developers facilitate the extraction of meaningful insights from heterogeneous data sources for project stakeholders by providing integrated analytics and reporting functionalities. Through the synthesis of data in Salesforce, ranging from project status to customer feedback, stakeholders can efficiently minimize risks, make well-informed decisions, and drive strategic objectives.
Automation for Enhanced Efficiency
Investigating workflow automation, AI-powered insights, and mobile optimization for enhanced efficiency.
Workflow Automation:
Developers use Salesforce's automation features, such as Flow and Process Builder, to standardize procedures and automate time-consuming tasks. They manage workflows that reduce human error, speed up task completion, and increase overall efficiency by specifying triggers, actions, and approval processes.
AI-Powered Insights:
By using artificial intelligence (AI) tools such as Salesforce Einstein, developers are able to introduce intelligence into project management procedures. AI-driven insights enable project teams to make data-driven decisions quickly, from sentiment analysis that measures stakeholder satisfaction to predictive analytics that predicts project timeframes.
Mobile Optimization:
Salesforce is optimized for mobile devices by developers who understand how important mobility is in today's dynamic work environment. They ensure that project stakeholders can access vital information and complete activities while on the go by utilizing native app development and responsive design, which promotes responsiveness and productivity.
Conclusion
In conclusion, Salesforce developers are the engine of innovation: in conjunction with Salesforce consulting expertise, they use the platform's potential to rethink the project management sector entirely. Through customization, automation, and integration, they help enterprises achieve unparalleled levels of efficiency, collaboration, and success. As project management continues to evolve, the combined experience of consultants and Salesforce developers will be essential in steering it toward even greater quality and success.
FAQs About Salesforce Developers and Project Management
How do Salesforce developers contribute to project management success?
Salesforce developers streamline project workflows, automate tasks, and customize solutions, enhancing efficiency and collaboration for project teams.
What skills do Salesforce developers bring to project management?
Salesforce developers possess expertise in coding, data management, and platform customization, enabling them to tailor solutions that align with project goals and requirements.
Why is Salesforce considered crucial for future project management?
Salesforce's robust platform offers scalable solutions, real-time insights, and seamless integration capabilities, empowering project managers to drive innovation and achieve project success efficiently.
#remote work#technology#hire salesforce developer#hire salesforce consultant#project manager#tech jobs#Future of businesses
3 notes
·
View notes