#yuble
contentbustaz · 1 year ago
Text
AOK PLUS - Yuble Onlinekurse
[Vimeo video: AOK PLUS - Yuble-Onlinekurse]
Short image clips were produced for AOK Sachsen for the Yuble platform. Yuble offers offline but, above all, online health courses.
copyright AOK Sachsen
production. LUMALENSCAPE GMBH
camera. DANIEL REMLER
director. MARTIN GRAU
0 notes
the-geek-librarian · 7 months ago
Text
I was thinking about the Yubel and Zane fight again-
Fucking,,,, Do not ask what was going through my head I do not have an answer for you
5 notes · View notes
ihearasound · 10 months ago
Text
Trying to figure out whether Yubl is more nu/machi or kan/baru coded. Why not both, because their arc is about making decisions!! And responsibility!! And I need to make a decision and yoob needs to learn to take responsibility without being weird about it. She and I are in this together
1 note · View note
jtsteiny · 2 years ago
Photo
Big Pages Eighty - ABAW Publishing - jt-steiny.com https://www.instagram.com/p/Cm997j-yUbl/?igshid=NGJjMDIxMWI=
0 notes
noplannomention · 2 years ago
Photo
Fifteen years have gone by while I resolved every year to get through the hellish year-end and greet a bright new year. Should I call it a relief that nothing has changed in all fifteen years? https://www.instagram.com/p/ClkTyG-yuBl/?igshid=NGJjMDIxMWI=
0 notes
hackernewsrobot · 8 years ago
Text
Ops in the serverless world: monitoring, logging, config management
https://medium.com/@theburningmonk/yubls-road-to-serverless-part-3-ops-6c82139bb7ee Comments
2 notes · View notes
yublingdiyjewelry-blog · 7 years ago
Text
We have to close YuBling for a couple of months for a redesign.
Because we haven’t made any revenue for a long time, we had to re-think and re-design our website. We made YuBling for you and offered it for free, but free doesn’t pay the costs to keep it going. So we’ve decided to re-design it as a DIY jewelry multi-vendor marketplace focusing on jewelry under $50. There will be some very small charges, such as commissions on sales and possibly some other costs, but it will be worth it. We were the first marketplace to focus only on jewelry and make it free for everyone.
Please check back on www.YuBling.com after about two months. We hope to have a completely new and user-friendly design that we’re sure you will love.
Thanks for being part of the YuBling family.
Your friends,
The YuBling Team
0 notes
techvedi · 8 years ago
Link
Just before Yubl’s untimely demise we did an interesting piece of work to redesign the system for sending targeted push notifications to our users to improve retention. The old system relied on MixPanel for both selecting users as well as sending out the push notifications. Read more
0 notes
mikegchambers · 8 years ago
Text
3 Pro Tips for Developers using AWS Lambda with Kinesis Streams
TL;DR: Lessons learned from our pitfalls include considering partial failures, using dead letter queues, and avoiding hot streams.
AWS Lambda and Kinesis sitting on a tree
Yubl was a social networking app with a timeline feature similar to Twitter. Our development team leveraged a serverless architecture in which Lambda and Kinesis became prominent features of the design.
As part of the design, we tried to keep in mind the characteristics that define a system that processes Kinesis events — which for me must have at least these 3 qualities:
The system should be real-time — as in “within a few seconds”
The system should retry failed events — but retries should not violate the realtime constraint on the system
The system should be able to retrieve events that could not be processed — so someone can investigate root cause or provide manual intervention
Yubl had around 170 Lambda functions running in production — gluing everything together
Whilst our experience using Lambda with Kinesis was great in general, there were a couple of lessons that we had to learn along the way. Here are 3 useful tips to help you avoid some of the pitfalls we fell into and accelerate your own adoption of Lambda and Kinesis.
ProTip #1: Consider Partial Failures
AWS Lambda polls your stream and invokes your Lambda function. Therefore, if a Lambda function fails, AWS Lambda attempts to process the erring batch of records until the time the data expires …
Because of the way Lambda functions are retried, if you allow your function to err on partial failures, then the default behavior is to retry the entire batch until success or the data expires from the stream.
To decide if this default behavior is right for you, you have to answer certain questions:
Can events be processed more than once?
What if those partial failures are persistent? Perhaps due to a bug in the business logic that is not handling certain edge cases gracefully?
Is it more important to process every event until it succeeds than to keep the overall system real-time?
In the case of Yubl, we found it was more important to keep the system flowing than to halt processing for any failed events, even if for a minute.
For instance, when a user created a new post, we would distribute it to all of the user's followers by processing the yubl-posted event. The 2 basic choices we're presented with are:
allow errors to bubble up and fail the invocation — we give every event every opportunity to be processed; but if some events fail persistently then no one will receive new posts in their feed and the system appears unavailable
catch and swallow partial failures — failed events are discarded, some users will miss some posts but the system appears to be running normally to users; even affected users might not realize that they had missed some posts
Of course, it doesn’t have to be a binary choice. There’s plenty of room to add smarter handling for partial failures which we will discuss shortly.
When you create a new post in the Yubl app, your content is distributed to your followers’ feeds
Yubl’s architecture for distributing a user’s posts to his followers’ feeds
We encapsulated these 2 choices as part of our tooling so that we get the benefit of reusability and developers can make an explicit choice for every Kinesis processor they create.
Depending on the problem you’re solving, you would apply different choices. The important thing is to always consider how partial failures would affect your system as a whole.
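To make that concrete, here is a minimal sketch of what such a wrapper could look like in Python — the names (`make_kinesis_handler`, `swallow_partial_failures`, `distribute_post`) are invented for illustration and are not Yubl's actual tooling:

```python
import base64
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def make_kinesis_handler(process_record, swallow_partial_failures):
    """Wrap a per-record processor with an explicit partial-failure policy.

    process_record           -- callable that handles one decoded event
    swallow_partial_failures -- True: log and skip failed records;
                                False: re-raise so Lambda retries the whole batch
    """
    def handler(event, context):
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            try:
                process_record(payload)
            except Exception:
                if swallow_partial_failures:
                    logger.exception(
                        "Skipping failed record %s",
                        record["kinesis"]["sequenceNumber"],
                    )
                else:
                    raise  # fail the invocation; Lambda retries the batch
    return handler


def distribute_post(post_event):
    ...  # write the new post into each follower's feed


# For feed distribution we chose to keep the stream flowing
handler = make_kinesis_handler(distribute_post, swallow_partial_failures=True)
```

Making the policy a required argument keeps the trade-off visible: swallowing failures keeps feeds flowing, while re-raising gives every event every chance to be processed at the cost of blocking the shard.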
ProTip #2: Use Dead Letter Queues (DLQ)
AWS announced support for Dead Letter Queues (DLQ) at the end of 2016. While Lambda support for DLQ extends to asynchronous invocations such as SNS and S3, it does not support poll-based invocations such as Kinesis and DynamoDB streams. Until AWS updates the DLQ features, there’s nothing stopping you from applying the concepts to Kinesis streams yourself.
First, let’s roll back the clock to a time when we didn’t have Lambda. Back then, we’d use long running applications to poll Kinesis streams ourselves. Heck, I even wrote my own producer and consumer libraries because when AWS rolled out Kinesis they totally ignored anyone not running on the JVM!
Lambda has taken over a lot of the responsibilities — polling, tracking where you are in the stream, error handling, etc. — but as we have discussed above it doesn’t remove you from the need to think for yourself. Prior to using Lambda, my long running application to poll Kinesis would:
poll Kinesis for events
process the events by passing them to a delegate function (your code)
retry failed events 2 additional times
after the 2 retries are exhausted, save them into an SQS queue
record the last sequence number of the batch so that we don’t lose the current progress if the host VM dies or the application crashes
another long-running application would poll the SQS queue for events that couldn't be processed in real time
process the failed events by passing them to the same delegate function as above (your code)
after the max no. of retries, the events are passed off to a DLQ
this triggers CloudWatch alarms and someone can manually retrieve the event from the DLQ to investigate
A Lambda function that processes Kinesis events should also:
retry failed events X times depending on processing time
send failed events to a DLQ after exhausting X retries
Since SNS already comes with DLQ support, you can simplify your setup by sending the failed events to an SNS topic instead. Lambda would then process them a further 3 times before passing them off to the designated DLQ.
Tip: Keep the functions that process Kinesis and SNS in the same service so they can share the same processing logic
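Putting the retry-then-DLQ idea together, a minimal sketch (with a hypothetical `FAILED_EVENTS_TOPIC_ARN` environment variable and placeholder `process` function, not Yubl's actual code) might look like this:

```python
import base64
import json
import os

import boto3

sns = boto3.client("sns")
# Hypothetical topic for failed events; SNS's own retry/DLQ support takes over from here
FAILED_EVENTS_TOPIC_ARN = os.environ["FAILED_EVENTS_TOPIC_ARN"]
MAX_ATTEMPTS = 3


def process(event):
    ...  # business logic for a single event


def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                process(payload)
                break
            except Exception:
                if attempt == MAX_ATTEMPTS:
                    # hand the event off instead of blocking the shard
                    sns.publish(
                        TopicArn=FAILED_EVENTS_TOPIC_ARN,
                        Message=json.dumps(payload),
                    )
```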
ProTip #3: Avoid “Hot” Streams
We found that when a Kinesis stream has 5 or more Lambda functions subscribed to it, we would start to see lots of ReadProvisionedThroughputExceeded errors in CloudWatch. Fortunately these errors are silent to us, as they happen to, and are handled by, the Lambda service polling the stream.
However, we occasionally see spikes in the GetRecords.IteratorAge metric, which tells us that a Lambda function will sometimes lag behind. This did not happen frequently enough to present a problem but the spikes were unpredictable and did not correlate to spikes in traffic or number of incoming Kinesis events.
Increasing the number of shards in the stream made matters worse, and the number of ReadProvisionedThroughputExceeded errors increased proportionally.
According to the Kinesis documentation … each shard can support up to 5 transactions per second for reads, up to a maximum total data reads of 2 MB per second.
And the Lambda documentation … If your stream has 100 active shards, there will be 100 Lambda functions running concurrently. Then, each Lambda function processes events on a shard in the order that they arrive.
One would assume that each of the aforementioned Lambda functions would be polling its shard independently. Since the problem is having too many Lambda functions poll the same shard, it makes sense that adding new shards will only escalate the problem further.
All problems in computer science can be solved by another level of indirection. — David Wheeler
After speaking to the AWS support team about this, the only advice we received was to apply the fan out pattern — by adding another layer of Lambda functions that would distribute the Kinesis events to the others.
Applying the “fan out” pattern with Lambda functions and Kinesis
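A bare-bones sketch of such a fan-out function follows — the `SUBSCRIBER_FUNCTIONS` environment variable and downstream function names are assumptions for illustration, not how Yubl actually wired it up:

```python
import json
import os

import boto3

lambda_client = boto3.client("lambda")
# Hypothetical comma-separated list of downstream processors,
# e.g. "distribute-posts,update-search-index,send-push-notifications"
SUBSCRIBERS = os.environ["SUBSCRIBER_FUNCTIONS"].split(",")


def handler(event, context):
    # Forward the whole Kinesis batch to each subscriber asynchronously,
    # so only this one function ever polls the shard
    for function_name in SUBSCRIBERS:
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",
            Payload=json.dumps(event),
        )
```

Only this one function subscribes to the stream, so each shard sees a single poller no matter how many downstream processors exist.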
Whilst this is simple to implement, it has some downsides:
it vastly complicates the logic for handling partial failures (see above)
all Lambda functions now process events at the rate of the slowest function, potentially damaging the realtime-ness of the system
We also considered and discounted several other alternatives, including
have one stream per subscriber — this has a significant cost implication, and more importantly it means publishers would need to publish the same event to multiple Kinesis streams in a “transaction”, with no easy way to roll back on partial failures since you can’t unpublish an event in Kinesis
roll the logic of multiple subscribers into one — this erodes our service boundaries, as different subsystems are bundled together to artificially reduce the no. of subscribers
In the end, we didn’t find a truly satisfying solution and decided to reconsider whether Kinesis was the right choice for our Lambda functions on a case-by-case basis.
For subsystems that do not have to be real-time, use S3 as the source instead. All our Kinesis events are persisted to S3 via Kinesis Firehose. The resulting S3 files can then be processed by these subsystems using Lambda functions that stream events to Google BigQuery for BI.
For work that is task-based (i.e. order is not important), use SNS/SQS as the source instead. SNS is natively supported by Lambda, and we implemented a proof-of-concept architecture for processing SQS events with recursive Lambda functions, with elastic scaling (a rough sketch of this recursive pattern appears below). Now that SNS has DLQ support, it would definitely be the preferred option, provided that its degree of parallelism would not flood and overwhelm downstream systems such as databases, etc.
For everything else, continue to use Kinesis and apply the fan out pattern as an absolute last resort.
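For the SQS route, a rough sketch of that recursive polling pattern (the queue URL, `process` body, and stop condition are simplified placeholders; at the time Lambda had no native SQS trigger, which is why the function re-invokes itself) could look like this:

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")
QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical task queue


def process(body):
    ...  # task-based work where ordering does not matter


def handler(event, context):
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=5,
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        process(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if messages:
        # queue may not be empty yet: recurse by invoking ourselves asynchronously
        lambda_client.invoke(
            FunctionName=context.function_name,
            InvocationType="Event",
            Payload=json.dumps(event),
        )
```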
Wrapping up…
So there you have it, 3 pro tips from a group of developers who have had the pleasure of working extensively with Lambda and Kinesis.
I hope you find this post useful. If you have any interesting observations or learnings from your own experience working with Lambda and Kinesis, please share them in the comments section below.
3 Pro Tips for Developers using AWS Lambda with Kinesis Streams was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.
from A Cloud Guru - Medium http://ift.tt/2oncUZK
0 notes
devinsba · 8 years ago
Link
I just read "Yubl’s road to Serverless — Part 3, Ops" on @Medium and I think you should too! http://ift.tt/2nUs8Vo
0 notes
the-geek-librarian · 11 months ago
Text
They are the trio I didn't know I needed till yesterday; they are on their way to commit arson. Undine's one and only concern in all of this is that Lolo is going to fall over and she doesn't have any bandages on her at the moment.
It's just a Thursday for Jaden and Jesse tbh, Yubel is considering teaching Lolo how to throw a punch and Jaden is encouraging her.
Jaden: Wanna go set the Obelisk Blue campus on fire : D
Jesse: Sure! Why not?
Lolo: What-
Jaden: Rich assholes with egos bigger than the entire school
Jesse nodding:
Lolo: I'm in.
Jaden, Jesse and Lolopechka's entire dynamic is literally just:
Jesse: Looks like we gotta kill this guy Lolopechk.
Lolo: Damn...
Jaden in the background, loading a shotgun: Yeah... Damn
Yubel is cheering them on and Undine is too, but she is too prideful to admit it
5 notes · View notes
askteammoonlight · 10 years ago
Photo
A drawing I did a while ago.
Yuble FTW! (sorry about the image.... shaky hands)
12 notes · View notes