"I don't know the answer for this question" is perfectly valid answer if you don't know stuff about a certain topics or don't care enough to research the information for whatever reason.
It is okay not knowing everything.
But please. Pretty please. Don't use AI when you are composing an asnwer for your asks. Don't use AI at all!
Not just as an artist, but as a person I find it insulting and disrespectful.
#anti-AI#this blog does not support AI#i would appreciate it if moots didn't do this to me#also don't use this post to spread hate.#there are so many resources on the topic people already ignore because destroying the creative industry and our environment is too abstract#people need to understand why it's harmful on a personal level too#for me this translated as: I don't care enough about this person to even make an effort.#Admitting you don't know things is perfectly valid and I appreciate this kind of sincerity more#so please don't ever do this again. not just to me but to anyone.#it makes me not want to ever initiate interaction with anyone on this blasted website
I don't think I have a ton of followers, but for those who are here, below the cut is a very long think piece on jcink skinning. It will probably offend a lot of people, but it's my honest take; I'm willing to hear other people's constructively expressed criticisms, and I'm very open to revising my opinion. I was simply surprised by the standard practices I found in the jcink rpc when I started writing this skin and was researching whether I wanted to try selling it. For anyone wondering about the future, the TLDR is as follows: I will probably make another skin and sell it as a multisale and see how that goes, though probably not for a long time. I will probably never take custom commissions. And I will probably charge more than people would like for my skins, but I think I have good reasons for that. More below the cut on the actual thoughts.
There were a few obstacles that became immediately obvious when beginning to work on a jcink skin for the first time.
First off, jcink is imo fundamentally bugged.
It has a slow initial load time even on the default skin: always over a second, and usually over two in my testing. The industry standard is 0.5 seconds, and PHP (the backend language of jcink's templating system, and I assume the databases), though old, is perfectly capable of meeting that threshold. I personally find this an infuriating fact about jcink, especially since I can do nothing about it.
There are two distinct skins on any site by default, neither of which is responsive. Responsiveness basically refers to whether a website looks good regardless of screen size; it's a universal concern at any modern tech company. I understand that when jcink was first written, a lot of what's possible today wasn't available. However, we are long past the point (in my opinion) where it would have been simple for John to reskin the default view of jcink to be responsive and provide that as an alternative to the og defaults. If nothing else, this would provide a better standard for what a 'functional' skin might look like, and maybe even a baseline for people to build off of. Instead we have tables, which are generally unusable in modern web design.
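To make 'responsive' concrete: modern skins usually lean on media queries and flexible layouts instead of fixed-width tables. A minimal, hypothetical sketch (class names and the breakpoint are invented, not from any real skin):

```css
/* Hypothetical sketch: a content + sidebar layout that collapses on small screens */
.board {
  display: grid;
  grid-template-columns: 1fr 300px; /* main column plus fixed sidebar */
  gap: 1rem;
}

/* Below 768px (a common breakpoint), stack the columns instead of squishing them */
@media (max-width: 768px) {
  .board {
    grid-template-columns: 1fr;
  }
}
```

The point is that the layout adapts itself; a table with hardcoded widths simply can't do this.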
Jcink is also not terribly accessible by default. I found text labels missing in several places (a problem for screen reader users), no keyboard accessibility on a few default functions (a problem for people who can't use a mouse), and of course, nothing in the docs reminding skinners to preserve what accessibility features the default skin does have.
Then we have the added complications of the jcink rpc community: the skinners and coders in general within the space. I can break my thoughts on this down into three main categories: Price, Product, and People. I have a lot of thoughts about these topics, so bear with me.
Price
Look me in the eyes. Not one single skinner I could find was charging a market rate for front end web development. The most expensive person I could find wasn't even asking for 25% of the money a front end website developer would make for a skin professionally, and I found them because people were putting them on blast for charging as much as they did. The jcink rpc has been getting bargain basement prices on code for EVER, and seems to have NO concept of the time and complexity of writing actually good code.
I have mixed feelings on this.
I have not seen a professional grade skin in the wild, not once, not even my own. The css on my skin is sloppy. There are areas I got a serious case of the fuckits on and wrote some ugly code. There's at least one info page on my site that looks weird af on mobile and I have no intention of fixing it. I say this with love, compassion, and appreciation for everyone who codes for this community. Not one single jcink skin I have ever interacted with would constitute a professional level of work in my field. No one whose work I have interacted with (again, including my own!!) should be charging a fully professional rate.
This is a hobby. We do this for fun. We shouldn't be in an arms race for the prettiest skins, but we are. People like nice interfaces, and it will affect their decision to consider or nope out of a site. So we're in a situation where, for a 'not boring' skin, custom skinning has become much more of a norm. In order to have a successful site, admins generally need to invest in a decent skin. With a custom skin, you can easily get to $250 in cost, and I've seen it go quite high from there. That's a lot of money for a site which may not survive. That's a big fuckin' deal. Some people are serial site starters, so if one fails, that's fine, the skin can be reused; but I personally had not adminned a site since 2016 until a month ago. If I had sunk $250 into a skin, plus $80 for jcink premium, for a site that never took off, suddenly my 'free' hobby becomes quite dear. I think it's only right that there be a certain degree of friendliness in the community when it comes to pricing because of this, even for truly professional coders (again, of which I've seen zero). Skinners are part of the ecosystem and deserve to be properly valued. Admins shouldn't have to pay out the ass just to have a better chance of their site taking off.
There's a great deal of risk involved in most skinning transactions I know of. First off, unlike the real world, we often don't know each other's irl identities. That makes the situation ripe for scams on both ends of the transaction: skinners not getting paid for delivered work, and customers not getting the product they asked for, if they get it at all. This rightfully affects how both parties feel pricing should adjust for risk: from a coder's perspective, prices should be higher; from an admin's, lower. This has been more or less solved for multisales with payhip etc, but custom skins are still fraught, and in a world where plenty of players won't consider a site if they've seen the skin too much, that's still a significant chunk of the activity at play.
So do I think skinners should be paid fully professional rates? No. From what I have seen and learned, absolutely not. It didn't occur to me to track the time spent on my skin until quite late in the process, but if I had to guess I'd put it at about 40 hours worth of work. I'll talk more about this in the product section if that seems like a lot of time to you. I'll throw out a very loose figure and say that $50/hr is about right as a figure for what a professional developer would make doing this kind of work, after tax etc. If you're being responsible about the IRS, it would bump significantly higher. If you multiply that by the 40 hours I spent on my skin, we're getting into multiple thousands of dollars for a custom skin if people were charging professional rates. Now it's very possible that if I made skinning My Thing, that I'd build up a library of components I could pull on to make skins much more quickly. I know for a fact that many skinners do. But even if we say I halve my time on my second skin (optimistic but v possible), we're at $1000 if I was charging the prices I charge my employer to keep me on board. That is CLEARLY unsustainable for a hobby centric community where money never gets involved. So what should pricing look like? I really think that depends on the product.
Product
So. I think there's some room for honest reflection in jcink skinning communities about what is being sold. To contextualize this, I have to lay out the basics of what my general mindset is around what makes a website good for its users.
The obvious one, and the one that I think gets the most attention in jcink skinning is aesthetics. How a site looks, whether it's pretty, etc. I think this is important, I care about things looking good, but out of these five concerns, this one is frankly last on my list of what's important.
UX Design/Functionality. No, this is not the same as aesthetics. Is a site easy to understand, use, and navigate? Does it make it seamless for a user to get where they want? Does it provide the contextual information they expect from the page they're visiting? These questions are fundamental for me, and I'd rather have a well-designed user experience than a well-designed aesthetic experience on a site.
Accessibility. Is a site readable. IS A SITE READABLE. IS A SITE READABLE. Is there enough contrast on a page that a colorblind user could read it? Can you navigate it with a keyboard? How about a screen reader? Is the text large enough for standard screen sizes? Does it stay large enough across devices?
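Contrast, at least, has a concrete number you can check against: WCAG defines a contrast ratio, with 4.5:1 as the usual bar for normal-size text. A rough sketch of that math (not tied to any particular skin; colors are plain RGB triplets):

```javascript
// WCAG 2.x contrast ratio between two sRGB colors given as [r, g, b] in 0-255.
// Rule of thumb: 4.5:1 for normal text, 3:1 for large text.
function relativeLuminance([r, g, b]) {
  const channel = (c) => {
    const s = c / 255;
    // Linearize the sRGB channel value
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  // Brighter luminance goes on top so the ratio is always >= 1
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([255, 255, 255], [0, 0, 0]).toFixed(1)); // 21.0 (black on white, the maximum)
```

Browser devtools will compute this for you, but it's worth knowing it's a checkable number and not a vibe.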
Responsiveness. If someone on a phone visits my site, will they have a good experience in every view? Will they have the functionality they need? What about a tablet? What about someone with a small desktop screen, or a huge one? If I have hovers somewhere on my site, is it still navigable on mobile, or is it now an unintuitive situation? Phones take way longer to load than most computers, do all the assets (gifs, images, multiple fonts, etc) on my site make it frustrating for a mobile user to visit my pages?
Performance. When I said jcink is bugged earlier, performance was one of my complaints. The base page load time of jcink is shitty. However, what's worse is when a skin takes a baseline of 2 seconds per page and bumps it up to 5, 8, or (the worst I've seen) 13 seconds. I have not seen a single jcink skin (aside from my own) which adds less than a second of load time to jcink's default performance. Again, in an industry where the basic standard is under half a second, jcink skins do not perform to a professional level.
If a website fails along these metrics, it cannot be considered professional for general public consumption. The problem is, everything except aesthetics requires a considerable baseline of knowledge and practice to do well. These are problems that many Fortune 500 companies have not figured out (that's because their execs are dumb dinosaurs, but still). So when it comes to the question of 'how much should a skin cost', I think a skinner is obligated to consider their product. Is it aesthetically pleasing? Is it functional? Is it accessible? Is it responsive? Is it performant? For most skins, the answer to at least three of these questions is NO, and I think that pricing should reflect that. In my opinion, most skinners do well with aesthetics, some skinners do well with functionality, and I have yet to see any truly accessible, responsive, or performant skins in the wild.

When it came to writing mine, I think I needed about 5 hours to get familiar with how jcink worked, and then if I only wanted my site to look good on a desktop monitor, I probably could have spent 10-15 hours writing my skin as a noob having to look up every php variable John uses in jcink's terrible docs. That is not what I did. My skin is fully responsive, it is fully accessible, it is to my personal taste aesthetically, and it is very performant (on average 0.25-0.35 added seconds to my load time) despite having piles of functional scripts (all of which I wrote myself) on several pages. I also wrote several things that make my life and my members' lives easier. I have a member directory and face claim that require no work on my end past sorting an accepted character into the right member group. I have an autotracker built into member profiles so people can keep track of their threads. I have a button which allows members with lower-end computers to turn off most graphics on the site so their machines don't sound like airplanes taking off.
I have a light mode/dark mode switch that guests and members can both use. Personally, I would not feel like a freak charging $1000 for a custom skin of this caliber. It's half what I would earn normally (using the $50/hr figure from earlier), and it lives up to all of my standards for what professional code ought to do for its users, not just one or two of them. The css is not my best work, but I can say without undue arrogance that it's far and away easier to touch without breaking things than any other skin I've looked at, and the actual interface that people see and interact with is great. Again, the aesthetics are simple and to my taste, but it makes sense and has lots of quality-of-life bits and bobs in it. I'm proud of it. I understand if that entire paragraph read wildly, but I don't say any of it lightly or with the intention to belittle anyone. I'm trying to contextualize how I think about how price relates to what is actually being delivered. But even though I genuinely feel $1000 would be an excellent deal for the work I have done on my skin, I could not possibly stomach charging that much money to a single person for this skin, which (in addition to the fact that I'm using it on my own site) is why it is not going on sale.
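For anyone curious, the logic behind a light/dark switch like that can be tiny. This is a hypothetical sketch, not the code from my skin; the `storage` object stands in for the browser's localStorage so the logic runs anywhere:

```javascript
// Hypothetical sketch of the logic behind a light/dark mode toggle.
// `storage` is any object with a `theme` property (localStorage in a browser).
function createThemeToggle(storage, defaultTheme = "light") {
  return {
    current() {
      return storage.theme ?? defaultTheme;
    },
    toggle() {
      storage.theme = this.current() === "light" ? "dark" : "light";
      return storage.theme;
    },
  };
}

// In a real skin you'd pass localStorage and, on every change, set
// document.documentElement.dataset.theme so CSS can restyle via [data-theme="dark"].
const toggle = createThemeToggle({});
console.log(toggle.toggle()); // "dark"
console.log(toggle.toggle()); // "light"
```

The rest is CSS custom properties keyed off that attribute, which is why the feature costs so little once a skin's colors are organized.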
To be honest, I have NO idea how much time a typical skinner spends on a custom skin. My approach is different from most people who do this for the community, and I also do adjacent work professionally and have for many years. I suspect that if asked to achieve all 5 of the above criteria, I would be able to do that significantly faster than most skinners. That said, I have no library of components I can turn to, and I hate a lot of the standard choices in building skins and would rather write my own. Isotope, cfs, etc. can burn; they're bad code. There are tradeoffs between how I'd do things and how others would, which have significant impacts on time spent, but also on the outcome. Generally, however, when it comes to jcink pricing, and how much people should charge or spend on a skin, I think the above 5 things should be the primary metrics. I made my own because I couldn't find a single skin that was truly accessible or responsive, and because I know how the sausage is made, I simply couldn't stomach it. I can't really tell you how much you SHOULD pay for a skin that only does Aesthetics and Function, or only Function and Accessibility, or any other combination. It's really not my place. I have thoughts on what I would charge, trying to be fair to myself and others, if I were to make a multisale. But that's for a different post.
People
And then there's people. I touched on this earlier with some talk about scamming. I have heard horror stories on both ends of this interaction, and I think, simply speaking, it has created a lot of distrust around something which is a core decision for a lot of boards: which skinner to work with, what standards to set, how much things should cost, how to arrange payment. Payment structure has to take care of both you and the commissioner, but it also has to take into account that lots of people are gonna drive you up a wall, go ten rounds on their requirements, and then expect finished work two weeks after they finally got back to you about a core feature. With all due respect, clients who know NOTHING about tech and still have a lot of opinions on exactly how things should be done are my personal nightmare. I have not figured out any good way to account for this. I think custom commissions can be great for both the coder and the customer, but it's a total crapshoot as far as I can tell, with no solution that I know of. I think there is also greater-than-average honesty and flexibility required from both parties in a non-professional setting (like jcink coding is) where money is still being exchanged. I think skinners have to be honest about their capabilities, their timeline, and how they want to work with customers. I think customers have to be honest about their expectations, their priorities, and their consistency (are they going to change their mind frequently). I think both people need to be willing to find compromise. Since NO ONE is producing professional products and NO ONE is paying professional prices, there needs to be an understanding that sometimes things need to adjust. But with custom skinning, it's often a lot of money for people. $250+ is a significant chunk of change. It feels wild to pay that much and not get exactly what you want.
However, exactly what you want may be outside the skillset of a hobbyist, or it may simply be difficult or tedious or finicky, even for a professional. You want me to do custom svg clipping all over a skin? I'd rather die than do that during my free time. Furthermore, no skinner is being paid to do EVERYTHING that a professional site might demand. Being unhappy that you didn't get every concern addressed is not reasonable at the rates being paid right now. There's ground to give on both sides. Flexibility is key, and it gets hard when there's money on the line.
Okay but so what?
Bish i don't know!!! Skinning is difficult, niche stuff, especially if you're actually meeting any kind of professional standard. It's really easy to do badly. Paying for products where there is literally no professional available is always complicated. Idk what to tell you man. I'm just saying that I don't think anyone is getting paid a rate they deserve relative to the time they put in (probably), and simultaneously people don't get a product that lives up to the rest of the web right now, because there's no real industry-level professionalism available. And what's worst is that it's nobody's fault!! It's a tough spot to be in as a community! As for me, I'll probably do multisales in the future, and I'll probably charge significantly more than others because I'll be delivering significantly more. I am simply incapable of coding something that isn't responsive and accessible and performant if I'm going to charge money for it. It simply shan't happen, which means my time and skill will be reflected in the work and ergo the price. As for commissions, other people's taste irritates me too much to willingly get into typical freelancing in any capacity (affectionately, i will never be doing certain aesthetics unless people are willing to pay me my full rate, which no one should lol). I will almost certainly never do fully custom commission work. It is simultaneously not worth my time to do it for the rates offered by the jcink rpc, and not worth a commissioner's money to pay for my skills when people are well used to skins that don't rise to professional standards anyway. Since I've finished my skin I've started posting some of the scripts I whipped up on caution. You can scroll back in this blog and find some of them. I'll probably continue to do that with anything I think another skinner could use. I will probably also start posting tutorials for discrete components so people have some examples of what responsive coding looks like.
It's a tough nut to crack if you've never seen it up close before! If I'm really going to be a good member of the community I should probably start posting those here too more regularly. I basically just really want to help people out with their coding and contribute to the overall health of jcink skinning without undervaluing myself or gatekeeping good code. I will probably post my thoughts on how I'll be structuring the pricing for upcoming work in the next week or so. Expect updates after the holidays on upcoming skin ideas. Most of them have to do with implementing fun design stuff I never get to do at work- parallax effects, color manipulations, funky shapes, abusing css filter rules. I'd love to hear what kinds of things people would be excited to see first!
How You Can Drive Sales with Facebook Messenger Chatbots
Artificial Intelligence (AI), natural language processing (NLP) and Machine Learning (ML) all sound like visions of the future. Or at least something reserved for the elite programmers and smartest engineers at companies like Google, Amazon and SpaceX.
The reality is that chatbots are bringing these innovations to everyone. While these technologies are still young, they have tremendous potential to help you grow your business today – and they can be implemented with no prior knowledge of programming at all.
There are a lot of products nowadays that make life easy for e-commerce entrepreneurs. You’ve got companies like Shopify that make it easy to set up your store, and companies like Oberlo that make it easy to add products to your store.
Chatbots are another tool you can add to the mix to make it easier to get sales.
Chatbots have the potential to do to e-commerce what e-commerce has done to traditional retail.
Think about that.
The Future of Shopping
We may not be too far from a time when customers no longer shop at superstores like Amazon, but instead buy through personal, real conversations held in private.
Let’s take shopping for sneakers as an example. What’s the current buying experience like?
You go to a website you like – maybe you heard about it from a friend or found it through an intriguing Facebook promotion. Once you’re there, you find the shoe section, you filter by style, size and color, and then you’re presented with a list of options.
You compare a bunch of them and then you add a few to your cart so you can come back later. Maybe you will, maybe you won't, although the research says you probably won't.
Now maybe you’ll get hit with a retargeted ad or see an advertisement on the way to work the following week and return to your abandoned cart to follow through with the purchase.
But what if you received a friendly reminder right on your phone? What if this reminder was personal, human even? What if this message came through the same channel that you use to catch up with your friends and family throughout your hectic life of e-mails and meetings, not to mention those telemarketers ignoring the fact that you’re on the Do Not Call list?
Do you think that might make it more compelling? Do you think your subconscious might connect better with that feeling of familiarity and individualism?
Now imagine that two weeks after your order arrives you get a friendly check-in from the customer service team:
“Hey Dave, how are you diggin’ your new kicks?”
You know they’re ready to help you with any issues, so you shoot back:
“They’re awesome! Thanks so much!”
And then you get an immediate response:
“Cool! As always, let me know if you need anything! If you’d like to leave a review about the Jordans you just bought, here’s the link: sneakerstore.com/review”
Now six months down the road (the usual time when you’d be looking for another pair of shoes, based on your order history), that chat comes back to life with a friendly check-in:
“Hey Dave, how’s it going? I just wanted to let you know about our big summer sale coming up!”
The chatbot becomes your personal open door to dialoguing with the company.
They’ll send you tracking updates, process returns, and keep you up to date with the latest promotions, all customized to your taste.
If done properly, Facebook Messenger chatbots provide enormous potential (a connection to the more than a billion people on Messenger) to drive sales and enhance the customer relationship. But these bots are new and foreign to most of us, and they aren't superhuman (we'll debate whether a computer can be superhuman later, okay?).
But if you aren’t careful, using a chatbot may backfire. You could annoy or confuse a customer, provide bad information or make it difficult to complete a purchase. A frustrated customer not only won’t buy, but they’re likely to suggest that others steer clear of your brand as well.
Back in 2015, messaging apps surpassed social apps in the number of monthly active users – and Facebook Messenger has 1.2 billion monthly users on its own. People around the world use it to easily keep in touch with friends and family – even those who tend to avoid that “social media stuff.”

Let’s look at a few ways that e-commerce businesses are using chatbots to grow their business as well as some important tips for building the most effective bots you can.
How to Use Facebook Messenger Bots
The best thing about chatbots is that they give you an automated, cost-effective way to communicate with your customers in a way that is more direct and personal than ever before.
It’s something you can easily add to your Facebook marketing strategy.
Here are some great ways to leverage the power of Facebook Messenger to connect with your customers:
1) Customer Service
Both social media and chat are becoming increasingly important for e-commerce customer service. One study found that you’ll lose as many as 15% of your customers if you ignore social media requests and that revenue per customer can grow 20-40% for businesses that do respond. Here’s what another study found:
44% of customers believe live chat is the most important feature a site can offer during a purchase.
There are a few ways to use your chatbot to help with customer service. For starters, you can use it to field FAQs and provide simple answers. You can also use it to collect some basic information like e-mail, order number and description of the problem before passing it along to one of your support reps to take over.
It’s important to remember that this technology isn’t state of the art yet. You can’t expect your bot to handle any complicated or unique requests, so it’s essential that whenever you use bots you should have human support on standby to help answer any such questions.
2) Order Confirmation and Updates
You can also offer your customers an opportunity to use Messenger to handle order and shipping confirmations. Send out tracking numbers, order updates and solicit feedback all from a single message thread.
One way to really boost sales is to make it easy to reorder via Messenger. If the bot has a customer’s order history, you can offer options to browse and reorder straight from their phone or browser.
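As a sketch of what that might look like under the hood (the data shape and field names here are assumptions, not any particular platform's API), reordering reduces to turning recent order history into a short list of quick-reply buttons:

```javascript
// Hypothetical sketch: turn a customer's order history into reorder quick replies.
function reorderOptions(orderHistory, limit = 3) {
  // Most recent orders first; one quick-reply option per distinct product.
  const seen = new Set();
  const options = [];
  for (const order of [...orderHistory].sort((a, b) => b.date - a.date)) {
    if (!seen.has(order.product) && options.length < limit) {
      seen.add(order.product);
      options.push({ title: `Reorder ${order.product}`, payload: `REORDER:${order.product}` });
    }
  }
  return options;
}

const history = [
  { product: "Air Jordans", date: 20240601 },
  { product: "Running socks", date: 20240115 },
  { product: "Air Jordans", date: 20231001 },
];
console.log(reorderOptions(history).map((o) => o.title));
// [ 'Reorder Air Jordans', 'Reorder Running socks' ]
```

The `payload` string is what the bot would receive back when the customer taps a button, so the reorder flow never depends on free-text input.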
3) Upsell and Cross-Sell
While improving the buying experience will certainly increase sales in the long run, you can also promote new products and offers directly to your customers via Messenger.
You already know their buying habits and their demographic information. You’ve established a line of communication. Now you can use this platform to remind customers of products they might want to check out or new promotions you have coming up which might interest them.
There is one caveat here.
Do not be too pushy or annoy customers. Unless they opt in for it, you shouldn’t be blasting them with the latest promotions every weekend – just the promos and offers that are catered specifically to their interests and purchase history.
These messages can feel very invasive to a customer, since they go directly to personal phones and computers in a way that mimics the conversational experience of chatting with a friend. The occasional, useful promotion will be welcomed by most, but salesy spam will not.
Your bot serves at the pleasure of the customer. If you annoy them, not only will the communication line be severed, but it'll leave a bad taste in the customer's mouth, too.
4) Facilitate Sales
You can also try to facilitate your sales directly through chat.
However, keep in mind that these bots are still pretty basic. If you sell products like apparel with multiple options (size, color, style, etc.), it might be easier to direct your customers to browse your site in order to provide a better experience.
If not, you’ll definitely want to have a human on deck to help pitch in, as you can see from the example above. But if you sell a few straightforward products or wish to facilitate re-orders and add-on sales, this does provide an exciting opportunity.
Tips For Building Effective Chatbots
It’s important to keep in mind that no matter how cool and exciting these new bots are, they are still in the early stages and customers might not be used to interacting with them. Here are a few tips to help make the experience a good one.
1) Use Simple, Clear Language and Instructions
You can have some fun with greetings and witty jokes, but when it comes to the core functionality of your chatbot, make sure it’s easy to understand. The user should get a simple answer or solution and know what they need to do to move forward. Don’t leave them guessing as to what they need to say.
2) Use Guided Responses
You may have heard that NLP technology is helping computers understand complex sentences and formulate intelligent responses. While that’s true, your basic chatbot isn’t going to be equipped with the fanciest technology on the market.
You can solve this by using simple, clear language and prompting the user with options to respond. This maintains the flow and dynamic of the conversation without forcing customers to worry about formatting their answer properly.
Leaving questions open ended is likely to create a frustrating experience for shoppers. It forces them to guess what to say and increases the chance that they won’t have a coherent experience, thereby increasing the odds that they will leave before accomplishing their goal.
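One common way to implement guided responses is a tiny state machine: each step has a prompt and a fixed set of choices, and unknown input just re-prompts with those choices instead of trying to parse free text. A hypothetical sketch (the flow and wording are invented):

```javascript
// Hypothetical guided-response flow: each state lists its prompt and allowed choices.
const flow = {
  start: {
    prompt: "Hi! What can I help with?",
    choices: { "Track order": "track", "Returns": "returns" },
  },
  track: { prompt: "What's your order number?", choices: {} },
  returns: { prompt: "No problem. Which item are you returning?", choices: {} },
};

function step(state, userChoice) {
  const next = flow[state].choices[userChoice];
  if (!next) {
    // Unknown input: stay put and re-offer the fixed options instead of guessing intent.
    return { state, prompt: flow[state].prompt, options: Object.keys(flow[state].choices) };
  }
  return { state: next, prompt: flow[next].prompt, options: Object.keys(flow[next].choices) };
}

console.log(step("start", "Track order").prompt); // "What's your order number?"
console.log(step("start", "idk lol").options);    // [ 'Track order', 'Returns' ]
```

Bot builders like the platforms mentioned later essentially let you draw this state machine visually, which is why guided flows are so much more reliable than open-ended questions.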
3) Don’t Be Pushy
I’ll say this again – your chatbots serve at the pleasure of your customer.
Before you can start chatting with anyone, they must approve and initiate contact. Users can also easily block your bot if it’s annoying them. Facebook does this to protect the platform from becoming the next generation of spam and maintain its status as the most popular chat app on earth.
It’s probably best to start by using your app to help improve your customers’ experience first by making it easier to manage returns, orders and customer service. Then slowly introduce promotions or selling opportunities. Monitor your customers’ reaction closely to make sure you’re not annoying anyone.
4) Have a Plan
Know who you’re building this bot for and what problem you’re trying to solve. Think about your customers right now and solicit their thoughts.
What part of the buying experience has the most friction? Where are they dropping off? What steps are costing you the most money or causing the most headaches?
Build a chatbot to solve those problems. One at a time.
5) Offer a Way to Speak with a Real Person
While it’s fine to get excited about this new technology, don’t expect to have robots fully servicing your customers like a Star Wars droid just yet (you’ve seen 2001: A Space Odyssey, right?).
Most chatbots are still programmed to respond to simple commands. And the last thing you want to do is leave your customer frustrated with an unresolved issue. Instead of making things easier and quicker for them, you’ll only infuriate them before they call you angrily, decide not to buy and/or leave a scathing review on all your social media.
This tip is especially important for Customer Service bots. Customers should always know that they can reach out to a real person for help. It’s a good idea to let them know how (like typing a certain command or calling your support line) in the beginning.
If you’re suggesting solutions to a problem, you may also want to offer a preset option like “Contact Support” in case their questions remain unanswered.
If you have the size and budget for it, it may even pay to have someone supervise the conversations in real time and jump in whenever things look messy.
Even though you need staff on standby, you’ll still benefit greatly from having bots handle the common and easy problems that would normally be repetitive and a waste of time. They can also do the legwork by collecting all the necessary information from your customer while she waits for your agent to connect.
6) Optimize and Update Your Bot
As with any new technology or marketing strategy, you’ll benefit from continued testing and improvement. Watch how your customers respond to new ideas and see which steps trip them up.
You won’t build the perfect system on your first try, but if you pay attention and aim to improve it each quarter, you’ll get more and more out of it. Watch technology trends and as this technology improves, you’ll be able to grow and do more with it. You’ll be lightyears ahead of the competition who wait until the next big press buzz to start.
How to Set Up Facebook Messenger Chatbots
There are many services out there that will help you set up Facebook Messenger Chatbots. In the beginning, you’ll probably want to use a platform like Chatfuel, OnSequel or Botsify. For a simple do-it-yourself guide to creating a chatbot, Social Media Examiner has a great post.

Start by building a bot with a single goal like handling order confirmations or navigating your FAQ. Watch how your customers use it and refine it based on their feedback. Then slowly start adding features to help more customers and start actually driving new sales.
As a bonus, the early users will be excited to grow with your brand as they see their online shopping companion evolve based on their specific feedback.
While you could code these bots from scratch or hire a developer to build them for you (the same companies mentioned above offer custom development services), it’s really not necessary and probably a waste of money until you have a better idea of exactly how your customers are responding to it.
Conclusion
While we’re at least a few years (and it really might be just a few years) from building our own android assistant like C3PO, chatbots are already starting to help businesses mimic human conversations to enhance their relationships with customers.
That’s why Amazon and Apple both announced that they are focusing on machine learning technology in 2017 and why Shopify includes Messenger as an official sales channel.
Facebook Messenger chatbots are easy enough to set up on your own, so why wait to try it out?
Source: https://www.singlegrain.com/social-media-news/how-e-commerce-companies-can-drive-sales-with-facebook-messenger-chatbots/
Lazy Loading JavaScript Modules With ConditionerJS
Linking JavaScript functionality to the DOM can be a repetitive and tedious task. You add a class to an element, find all the elements on the page, and attach the matching JavaScript functionality to each element. Conditioner is here to not only take this work off your hands but supercharge it as well!
In this article, we’ll look at the JavaScript initialization logic that is often used to link UI components to a webpage. Step-by-step we’ll improve this logic, and finally, we’ll make a 1 Kilobyte jump to replacing it with Conditioner. Then we’ll explore some practical examples and code snippets and see how Conditioner can help make our websites more flexible and user-oriented.
Conditioner And Progressive Enhancement Sitting In A Tree
Before we proceed, I need to get one thing across:
Conditioner is not a framework for building web apps.
Instead, it’s aimed at websites. The distinction between websites and web apps is useful for the continuation of this story. Let me explain how I view the overall difference between the two.
Websites are mostly created from a content viewpoint; they are there to present content to the user. The HTML is written to semantically describe the content. CSS is added to nicely present the content across multiple viewports. The third and final act is to carefully layer JavaScript on top to add that extra zing to the user experience. Think of a date picker, navigation, scroll animations, or carousels (pardon my French).
Examples of content-oriented websites are for instance: Wikipedia, Smashing Magazine, your local municipality website, newspapers, and webshops. Web apps are often found in the utility area, think of web-based email clients and online maps. While also presenting content, the focus of web apps is often more on interacting with content than presenting content. There’s a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.
As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act:
Enhancing the presentation layer with JavaScript functionality to offer an improved user experience.
The Troublesome Third Act
The third act is about enhancing the user experience with that zingy JavaScript layer.
Judging from experience and what I’ve seen online, JavaScript functionality is often added to websites like this:
A class is added to an HTML element.
The querySelectorAll method is used to get all elements assigned the class.
A for-loop traverses the NodeList returned in step 2.
A JavaScript function is called for each item in the list.
Let’s quickly put this workflow in code by adding autocomplete functionality to an input field. We’ll create a file called autocomplete.js and add it to the page using a <script> tag.
function createAutocomplete(element) {
  // our autocomplete logic
  // ...
}

<input type="text" class="autocomplete"/>
<script src="autocomplete.js"></script>
<script>
var inputs = document.querySelectorAll('.autocomplete');
for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>
Go to demo →
That’s our starting point.
Suppose we’re now told to add another feature to the page, say a date picker; its initialization will most likely follow the same pattern. Now we’ve got two for-loops. Add another feature and you’ve got three, and so on and so on. Not the best.
While this works and keeps you off the street, it creates a host of problems. We’ll have to add a loop to our initialization script for each functionality we add. For each loop we add, the initialization script gets linked ever tighter to the document structure of our website. Often the initialization script will be loaded on each page. Meaning all the querySelectorAll calls for all the different functionalities will be run on each and every page whether functionality is defined on the page or not.
For me, this setup never felt quite right. It always started out “okay,” but then it would slowly grow to a long list of repetitive for-loops. Depending on the project it might contain some conditional logic here and there to determine if something loads on a certain viewport or not.
if (window.innerWidth < 480) {
  // only initialize this module on small viewports
  // ...
}
Eventually, my initialization script would always grow out of control and turn into a giant pile of spaghetti code that I would not wish on anyone.
Something needed to be done.
Soul Searching
I am a huge proponent of carefully separating the three web dev layers HTML, CSS, and JavaScript. HTML shouldn’t have a rigid relationship with JavaScript, so no use of inline onclick attributes. The same goes for CSS, so no inline style attributes. Adding classes to HTML elements and then later searching for them in my beloved for-loops followed that philosophy nicely.
That stack of spaghetti loops, though. I wanted to get rid of them so badly.
I remember stumbling upon an article about using data attributes instead of classes, and how those could be used to link up JavaScript functionality (I’m not sure it was this article, but it seems to be from the right timeframe). I didn’t like it, misunderstood it, and my initial thought was that this was just covering up for onclick, this mixed HTML and JavaScript, no way I was going to be lured to the dark side, I don’t want anything to do with it. Close tab.
Some weeks later I would return to this and found that linking JavaScript functionality using data attributes was still in line with having separate layers for HTML and JavaScript. As it turned out, the author of the article handed me a solution to my ever-growing initialization problem.
We’ll quickly update our script to use data attributes instead of classes.
<input type="text" data-module="autocomplete">
<script src="autocomplete.js"></script>
<script>
var inputs = document.querySelectorAll('[data-module=autocomplete]');
for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>
Go to demo →
Done!
But hang on, this is nearly the same setup; we’ve only replaced .autocomplete with [data-module=autocomplete]. How’s that any better? It’s not, you’re right. If we add an additional functionality to the page, we still have to duplicate our for-loop — blast! Don’t be sad though as this is the stepping stone to our killer for-loop.
Watch what happens when we make a couple of adjustments.
<input type="text" data-module="createAutocomplete">
<script src="autocomplete.js"></script>
<script>
var elements = document.querySelectorAll('[data-module]');
for (var i = 0; i < elements.length; i++) {
  var name = elements[i].getAttribute('data-module');
  var factory = window[name];
  factory(elements[i]);
}
</script>
Go to demo →
Now we can load any functionality with a single for-loop.
Find all elements on the page with a data-module attribute;
Loop over the node list;
Get the name of the module from the data-module attribute;
Store a reference to the JavaScript function in factory;
Call the factory JavaScript function and pass the element.
Since we’ve now made the name of the module dynamic, we no longer have to add any additional initialization loops to our script. This is all we need to link any JavaScript functionality to an HTML element.
This basic setup has some other advantages as well:
The init script no longer needs to know what it loads; it just needs to be very good at this one little trick.
There’s now a convention for linking functionality to the DOM; this makes it very easy to tell which parts of the HTML will be enhanced with JavaScript.
The init script does not search for modules that are not there, i.e. no wasted DOM searches.
The init script is done. No more adjustments are needed. When we add functionality to the page, it will automatically be found and will simply work.
Wonderful!
So What About This Thing Called Conditioner?
We finally have our single loop, our one loop to rule all other loops, our king of loops, our hyper-loop. Ehm. Okay. We’ll just have to conclude that ours is a loop of high quality, flexible enough to be re-used in each project (there’s not really anything project-specific about it). That does not immediately make it library-worthy; it’s still quite a basic loop. However, we’ll find that our loop requires some additional trickery to really cover all our use-cases.
Let’s explore.
With the one loop, we are now loading our functionality automatically.
We assign a data-module attribute to an element.
We add a <script> tag to the page referencing our functionality.
The loop matches the right functionality to each element.
Boom!
Let’s take a look at what we need to add to our loop to make it a bit more flexible and re-usable. Because as it is now, while amazing, we’re going to run into trouble.
It would be handy if we moved the global functions to isolated modules. This prevents pollution of the global scope, makes our modules more portable to other projects, and means we no longer have to add our <script> tags manually. Fewer things to add to the page, fewer things to maintain.
When using our portable modules across multiple projects (and/or pages) we’ll probably encounter a situation where we need to pass configuration options to a module. Think API keys, labels, animation speeds. That’s a bit difficult at the moment as we can’t access the for-loop.
With the ever-growing diversity of devices out there we will eventually encounter a situation where we only want to load a module in a certain context. For instance, a menu that needs to be collapsed on small viewports. We don’t want to add if-statements to our loop. It’s beautiful as it is, we will not add if statements to our for-loop. Never.
That’s where Conditioner can help out. It encompasses all above functionality. On top of that, it exposes a plugin API so we can configure and expand Conditioner to exactly fit our project setup.
Let’s make that 1 Kilobyte jump and replace our initialization loop with Conditioner.
Switching To Conditioner
We can get the Conditioner library from the GitHub repository, npm or from unpkg. For the rest of the article, we’ll assume the Conditioner script file has been added to the page.
The fastest way is to add the unpkg version.
<script src="https://unpkg.com/conditioner-core/conditioner-core.js"></script>
With Conditioner added to the page, let’s take a moment of silence and say farewell to our killer for-loop.
Conditioner’s default behavior is exactly the same as our now departed for-loop. It’ll search for elements with the data-module attribute and link them to globally scoped JavaScript functions.
We can start this process by calling the conditioner hydrate method.
<input type="text" data-module="createAutocomplete"/>
<script src="autocomplete.js"></script>
<script>
conditioner.hydrate(document.documentElement);
</script>
Go to demo →
Note that we pass the documentElement to the hydrate method. This tells Conditioner to search the subtree of the <html> element for elements with the data-module attribute.
It basically does this:
document.documentElement.querySelectorAll('[data-module]');
Okay, great! We’re set to take it to the next level. Let’s try to replace our globally scoped JavaScript functions with modules. Modules are reusable pieces of JavaScript that expose certain functionality for use in your scripts.
Moving From Global Functions To Modules
In this article, our modules will follow the new ES Module standard, but the examples will also work with modules based on the Universal Module Definition or UMD.
Step one is turning the createAutocomplete function into a module. Let’s create a file called autocomplete.js. We’ll add a single function to this file and make it the default export.
export default function(element) {
  // autocomplete logic
  // ...
}
It’s the same as our original function, only prepended with export default.
For the other code snippets, we’ll switch from our classic function to arrow functions.
export default element => {
  // autocomplete logic
  // ...
}
We can now import our autocomplete.js module and use the exported function like this:
import('./autocomplete.js').then(module => {
  // the autocomplete function is located in module.default
});
Note that this only works in browsers that support Dynamic import(). At the time of this writing that would be Chrome 63 and Safari 11.
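For browsers outside that group, a common way to detect support is to feature-test the syntax itself; a sketch, where the fallback branch is an assumption (for instance, a classic script loader):

```javascript
// Feature-test dynamic import(). Engines without support throw a
// SyntaxError while *parsing* the function body, which the try/catch catches.
let supportsDynamicImport = false;
try {
  new Function('return import("data:text/javascript,")');
  supportsDynamicImport = true;
} catch (err) {
  // import() unsupported: fall back to a classic script loader here
}
```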
Okay, so we now know how to create and import modules, our next step is to tell Conditioner to do the same.
We update the data-module attribute to ./autocomplete.js so it matches our module file name and relative path.
Remember: The import() method requires a path relative to the current module. If we don’t prepend the autocomplete.js filename with ./ the browser won’t be able to find the module.
Conditioner is still busy searching for functions on the global scope. Let’s tell it to dynamically load ES Modules instead. We can do this by overriding the moduleImport action.
We also need to tell it where to find the constructor function (module.default) on the imported module. We can point Conditioner in the right direction by overriding the moduleGetConstructor action.
<input type="text" data-module="./autocomplete.js"/>
<script>
conditioner.addPlugin({
  // fetch module with dynamic import
  moduleImport: (name) => import(name),

  // get the module constructor
  moduleGetConstructor: (module) => module.default
});
conditioner.hydrate(document.documentElement);
</script>
Go to demo →
Done!
Conditioner will now automatically lazy load ./autocomplete.js, and once received, it will call the module.default function and pass the element as a parameter.
Defining our autocomplete as ./autocomplete.js is very verbose. It’s difficult to read, and when adding multiple modules to the page, it quickly becomes tedious to write and error-prone.
This can be remedied by overriding the moduleSetName action. Conditioner views the data-module value as an alias and will only use the value returned by moduleSetName as the actual module name. Let’s automatically add the .js extension and relative path prefix to make our lives a bit easier.
<input type="text" data-module="autocomplete"/>
conditioner.addPlugin({
  // converts module aliases to paths
  moduleSetName: (name) => `./${ name }.js`
});
Go to demo →
Now we can set data-module to autocomplete instead of ./autocomplete.js, much better.
That’s it! We’re done! We’ve setup Conditioner to load ES Modules. Adding modules to a page is now as easy as creating a module file and adding a data-module attribute.
The plugin architecture makes Conditioner super flexible. Because of this flexibility, it can be modified for use with a wide range of module loaders and bundlers. There’s bootstrap projects available for Webpack, Browserify and RequireJS.
Please note that Conditioner does not handle module bundling. You’ll have to configure your bundler to find the right balance between serving a bundled file containing all modules or a separate file for each module. I usually cherry pick tiny modules and core UI modules (like navigation) and serve them in a bundled file while conditionally loading all scripts further down the page.
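With webpack, for example, each dynamic import() Conditioner performs can automatically become its own chunk. An illustrative config fragment; the entry path and filenames are assumptions about your project layout, not part of Conditioner:

```javascript
// webpack.config.js (illustrative fragment, not a complete config)
module.exports = {
  entry: './src/index.js',       // core bundle: init code plus critical UI modules
  output: {
    filename: 'bundle.js',
    chunkFilename: '[name].js'   // each dynamically imported module gets its own file
  }
};
```

A /* webpackChunkName */ magic comment inside the import() call can then give each chunk a readable name.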
Alright, module loading — check! It’s now time to figure out how to pass configuration options to our modules. We can’t access our loop; also we don’t really want to, so we need to figure out how to pass parameters to the constructor functions of our modules.
Passing Configuration Options To Our Modules
I might have bent the truth a little bit. Conditioner has no out-of-the-box solution for passing options to modules. There I said it. To keep Conditioner as tiny as possible I decided to strip it and make it available through the plugin API. We’ll explore some other options of passing variables to modules and then use the plugin API to set up an automatic solution.
The easiest and at the same time most banal way to create options that our modules can access is to define options on the global window scope.
window.autocompleteSource = './api/query';
export default (element) => {
  console.log(window.autocompleteSource); // will log './api/query'

  // autocomplete logic
  // ...
}
Don’t do this.
It’s better to simply add additional data attributes.
<input type="text" data-module="autocomplete" data-source="./api/query"/>
These attributes can then be accessed inside our module by accessing the element dataset which returns a DOMStringMap of all data attributes.
export default (element) => {
  console.log(element.dataset.source); // will log './api/query'

  // autocomplete logic
  // ...
}
This could result in a bit of repetition as we’ll be accessing element.dataset in each module. If repetition is not your thing, read on, we’ll fix it right away.
We can automate this by extracting the dataset and injecting it as an options parameter when mounting the module. Let’s override the moduleSetConstructorArguments action.
conditioner.addPlugin({
  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    element.dataset
  ])
});
The moduleSetConstructorArguments action returns an array of parameters which will automatically be passed to the module constructor.
export default (element, options) => {
  console.log(options.source); // will log './api/query'

  // autocomplete logic
  // ...
}
We’ve only eliminated the dataset call, i.e. seven characters. Not the biggest improvement, but we’ve opened the door to take this a bit further.
Suppose we have multiple autocomplete modules on the page, and each and every single one of them requires the same API key. It would be handy if that API key was supplied automagically instead of having to add it as a data attribute on each element.
We can improve our developer lives by adding a page level configuration object.
const pageOptions = {
  // the module alias
  autocomplete: {
    key: 'abc123' // api key
  }
};

conditioner.addPlugin({
  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    // merge the default page options with the options set on the element itself
    Object.assign(
      {},
      pageOptions[element.dataset.module],
      element.dataset
    )
  ])
});
Go to demo →
As our pageOptions variable has been defined with const it’ll be block-scoped, which means it won’t pollute the global scope. Nice.
Using Object.assign we merge an empty object with both the pageOptions for this module and the dataset DOMStringMap found on the element. This will result in an options object containing both the source property and the key property. Should one of the autocomplete elements on the page have a data-key attribute, it will override the pageOptions default key for that element.
const ourOptions = Object.assign(
  {},
  { key: 'abc123' },
  { source: './api/query' }
);

console.log(ourOptions);
// output: { key: 'abc123', source: './api/query' }
That’s some top-notch developer convenience right there.
By having added this tiny plugin, we can automatically pass options to our modules. This makes our modules more flexible and therefore re-usable over multiple projects. We can still choose to opt-out and use dataset or globally scope our configuration variables (no, don’t), whatever fits best.
Our next challenge is the conditional loading of modules. It’s actually the reason why Conditioner is named Conditioner. Welcome to the inner circle!
Conditionally Loading Modules Based On User Context
Back in 2005, desktop computers were all the rage, everyone had one, and everyone browsed the web with it. Screen resolutions ranged from big to bigger. And while users could scale down their browser windows, we looked the other way and basked in the glory of our beautiful fixed-width sites.
I’ve rendered an artist impression of the 2005 viewport:
The 2005 viewport in its full glory, 1024 pixels wide, and 768 pixels high. Wonderful.
Today, a little over ten years later, there’s more people browsing the web on mobile than on desktop, resulting in lots of different viewports.
I’ve applied this knowledge to our artist impression below.
More viewports than you can shake a stick at.
Holy smokes! That’s a lot of viewports.
Today, someone might visit your site on a small mobile device connected to a crazy fast WiFi hotspot, while another user might access your site using a desktop computer on a slow tethered connection. Yes, I switched up the connection speeds — reality is unpredictable.
And to think we were worried about users resizing their browser window. Hah!
Note that those million viewports are not set in stone. A user might load a website in portrait orientation and then rotate the device (or resize the browser window), all without reloading the page. Our websites should be able to handle this and load or unload functionality accordingly.
Someone on a tiny device should not receive the same JavaScript package as someone on a desktop device. That seems hardly fair; it’ll most likely result in a sub-optimal user experience on both the tiny mobile device and the good ol’ desktop device.
With Conditioner in place, let’s configure it as a gatekeeper and have it load modules based on the current user context. The user context contains information about the environment in which the user is interacting with your functionality. Some examples of environment variables influencing context are viewport size, time of day, location, and battery level. The user can also supply you with context hints, for instance, a preference for reduced motion. How a user behaves on your platform will also tell you something about the context she might be in, is this a recurring visit, how long is the current user session?
The better we’re able to measure these environment variables the better we can enhance our interface to be appropriate for the context the user is in.
We’ll need an attribute to describe our module’s context requirements so Conditioner can determine the right moment for the module to load and to unload. We’ll call this attribute data-context. It’s pretty straightforward.
Let’s leave our lovely autocomplete module behind and shift focus to a new module. Our new section-toggle module will be used to hide the main navigation behind a toggle button on small viewports.
Since it should be possible for our section-toggle to be unloaded, the default function returns another function. Conditioner will call this function when it unloads the module.
export default (element) => {
  // sectionToggle logic
  // ...

  return () => {
    // sectionToggle unload logic
    // ...
  }
}
We don’t need the toggle behavior on big viewports as those have plenty of space for our menu (it’s a tiny menu). We only want to collapse our menu on viewports narrower than 30em (this translates to 480px).
Let’s setup the HTML.
<nav>
  <h1 data-module="sectionToggle"
      data-context="@media (max-width:30em)">
    Navigation
  </h1>
  <ul>
    <li><a href="/home">home</a></li>
    <li><a href="/about">about</a></li>
    <li><a href="/contact">contact</a></li>
  </ul>
</nav>
Go to demo →
The data-context attribute will trigger Conditioner to automatically load a context monitor observing the media query (max-width:30em). When the user context matches this media query, it will load the module; when it does not, or no longer does, it will unload the module.
Monitoring happens based on events. This means that after the page has loaded, should the user resize the viewport or rotate the device, the user context is re-evaluated and the module is loaded or unloaded based on the new observations.
You can view monitoring as feature detection. Where feature detection is about an on/off situation, the browser either supports WebGL, or it doesn’t. Context monitoring is a continuous process, the initial state is observed at page load, but monitoring continues after. While the user is navigating the page, the context is monitored, and observations can influence page state in real-time.
This nonstop monitoring is important as it allows us to adapt to context changes immediately (without page reload) and optimizes our JavaScript layer to fit each new user context like a glove.
The media query monitor is the only monitor that is available by default. Adding your own custom monitors is possible using the plugin API. Let’s add a visible monitor which we’ll use to determine if an element is visible to the user (scrolled into view). To do this, we’ll use the brand new IntersectionObserver API.
conditioner.addPlugin({
  // the monitor hook expects a configuration object
  monitor: {
    // the name of our monitor, without the '@'
    name: 'visible',

    // the create method will return our monitor API
    create: (context, element) => ({
      // current match state
      matches: false,

      // called by conditioner to start listening for changes
      addListener (change) {
        new IntersectionObserver(entries => {
          // update the matches state
          this.matches = entries.pop().isIntersecting == context;

          // inform Conditioner of the state change
          change();
        }).observe(element);
      }
    })
  }
});
We now have a visible monitor at our disposal.
Let’s use this monitor to only load images when they are scrolled in to view.
Our base image HTML will be a link to the image. When JavaScript fails to load the links will still work, and the contents of the link will describe the image. This is progressive enhancement at work.
<a href="cat-nom.jpg" data-module="lazyImage" data-context="@visible"> A red cat eating a yellow bird </a>
Go to demo →
The lazyImage module will extract the link text, create an image element, and use the link text as the image’s alt text.
export default (element) => {

  // store original link text
  const text = element.textContent;

  // replace element text with image
  const image = new Image();
  image.src = element.href;
  image.setAttribute('alt', text);
  element.replaceChild(image, element.firstChild);

  return () => {
    // restore original element state
    element.innerHTML = text;
  }
}
When the anchor is scrolled into view, the link text is replaced with an img tag.
Because we’ve returned an unload function the image will be removed when the element scrolls out of view. This is most likely not what we desire.
We can remedy this behavior by adding the was operator. It will tell Conditioner to retain the first matched state.
<a href="cat-nom.jpg" data-module="lazyImage" data-context="was @visible"> A red cat eating a yellow bird </a>
There are three other operators at our disposal.
The not operator lets us invert a monitor result. Instead of writing @visible false we can write not @visible which makes for a more natural and relaxed reading experience.
Last but not least, we can use the or and and operators to string monitors together and form complex context requirements. Using and combined with or we can do lazy image loading on small viewports and load all images at once on big viewports.
<a href="cat-nom.jpg"
   data-module="lazyImage"
   data-context="was @visible and @media (max-width:30em) or @media (min-width:30em)">
  A red cat eating a yellow bird
</a>
We’ve looked at the @media monitor and have added our custom @visible monitor. There are lots of other contexts to measure and custom monitors to build:
Tap into the Geolocation API and monitor the location of the user @location (near: 51.4, 5.4) to maybe load different scripts when a user is near a certain location.
Imagine a @time monitor, which would make it possible to enhance a page dynamically based on the time of day @time (after 20:00).
Use the Device Light API to determine the light level @lightlevel (max-lumen: 50) at the location of the user. Which, combined with the time, could be used to perfectly tune page colors.
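As a sketch, the hypothetical @time monitor from the list above could follow the same plugin shape as the @visible monitor. The "after HH:MM" query syntax and the once-a-minute polling are assumptions for illustration, not part of Conditioner’s API:

```javascript
// Hypothetical @time monitor, mirroring the shape of the @visible monitor.
// Parses queries like "after 20:00" into minutes since midnight.
const parseTimeQuery = (context) => {
  const [, hours, minutes] = context.match(/after (\d{1,2}):(\d{2})/);
  return Number(hours) * 60 + Number(minutes);
};

const timeMonitor = {
  // used as @time in data-context attributes
  name: 'time',
  create: (context) => ({
    matches: false,
    addListener(change) {
      const threshold = parseTimeQuery(context);
      const evaluate = () => {
        const now = new Date();
        this.matches = now.getHours() * 60 + now.getMinutes() >= threshold;
        change();
      };
      evaluate();
      // re-check every minute so the context can flip without a page reload
      setInterval(evaluate, 60 * 1000);
    }
  })
};

// registered like the @visible monitor: conditioner.addPlugin({ monitor: timeMonitor })
```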
By moving context monitoring outside of our modules, our modules have become even more portable. If we need to add collapsible sections to one of our pages, it’s now easy to re-use our section toggle module, because it’s not aware of the context in which it’s used. It just wants to be in charge of toggling something.
And this is what Conditioner makes possible, it extracts all distractions from the module and allows you to write a module focused on a single task.
Using Conditioner In JavaScript
Conditioner exposes a total of three methods. We’ve already encountered the hydrate and addPlugin methods. Let’s now have a look at the monitor method.
The monitor method lets us manually monitor a context and receive context updates.
const monitor = conditioner.monitor('@media (min-width:30em)');
monitor.onchange = (matches) => {
  // called when a change to the context was observed
};
monitor.start();
This method makes it possible to do context monitoring from JavaScript without requiring the DOM starting point. This makes it easier to combine Conditioner with frameworks like React, Angular or Vue to help with context monitoring.
As a quick example, I’ve built a React <ContextRouter> component that uses Conditioner to monitor user context queries and switch between views. It’s heavily inspired by React Router so might look familiar.
<ContextRouter>
  <Context query="@media (min-width:30em)"
           component={ FancyInfoGraphic }/>
  <Context>
    {/* fallback to use on smaller viewports */}
    <table/>
  </Context>
</ContextRouter>
I hope someone out there is itching to convert this to Angular. As a cat and React person I just can’t get myself to do it.
Conclusion
Replacing our initialization script with the killer for-loop created a single entity in charge of loading modules. That change automatically brought with it a set of requirements. We used Conditioner to fulfill these requirements and then wrote custom plugins to extend Conditioner where it didn’t fit our needs.
Not having access to our single for-loop steered us towards writing more re-usable and flexible modules. By switching to dynamic imports, we could lazy load these modules, and later load them conditionally by combining lazy loading with context monitoring.
With conditional loading, we can quickly determine when to send which module over the connection, and by building advanced context monitors and queries, we can target more specific contexts for enhancement.
By combining all these tiny changes, we can speed up page load time and more closely match our functionality to each different context. This will result in improved user experience and as a bonus improve our developer experience as well.
(rb, ra, hj, il)
from Articles on Smashing Magazine — For Web Designers And Developers https://www.smashingmagazine.com/2018/03/lazy-loading-with-conditioner-js/