merakiui · 1 year ago
Note
would u still happen to have ur twst dick size tier list (thats so awkward to actually type omg) 😭😭 I've been thinking about it and I know you've done it before but it was a few months back iirc </3
also ... on the tier list note, have u done a "chest or ass (or a third thing)" chart on the twst guys yet 👀
Here it is!!! :D and a chest or ass list!!! Omg that's very fun. I should preface this by saying that, while it isn't noted on the list, Jade also likes a secret third thing (footjobs), but you did not hear that from me........ 🤫
In any case, here is my very humble opinion:
thequirkdetective · 4 years ago
Text
Investigation 13: Elasticity – Danjuro Tobita
It’s almost the Christmas holidays as I write this, as well as being exactly halfway through December. Hopefully during the holidays we can catch back up to schedule, so this should be the last upload that isn’t on time. Let’s not dwell on it. Shall we?
Danjuro Tobita, better known as Gentle Criminal, has the quirk of elasticity. This allows him to make anything he touches elastic (in the sense of property, rather than material). I only learned when researching this quirk that his name is in fact a pun of the quirk in Japanese, where the standard Japanese kanji for ‘elasticity’ also translates as ‘gentleman’. This has nothing to do with the investigation, I just thought it was a cool fact.
The first problem we run into is that elasticity isn’t a set property at the molecular level. Different materials are elastic for different reasons, mostly due to the different ways the atoms and molecules within the substance are bonded together. For example, rubber is elastic because of its complex interlinked polymers that align when stretched. Most other substances deform elastically due to uniform deformation of their atomic structure. Which brings us to our second problem…
Nothing is perfectly elastic. Rubber and other elastomers are ‘elastic’ up to a point (around 10x their original length), but other materials can only deform slightly before they become plastic. Plasticity, in contrast to elasticity, is the property by which a substance retains its new shape when deformed, while elastic materials return to their original shape.
These two factors together make the effects of Gentle’s quirk highly irregular. To explain them, we first have to look at elasticity in more depth. So-called ‘perfect elasticity’ is described by Hooke’s Law, which states that the force required to stretch or compress a material is directly proportional to the distance it is stretched or compressed. This means that to stretch a spring twice as far, one must apply twice the force, and so forth. This is (practically) true for all materials up to a point. After that point, known as the yield point, the material stops being perfectly elastic, and begins deforming plastically. There are other stages between this and the material breaking, but these are unnecessary to discuss since no material that has been affected by the quirk has ever reached its yield point.
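Hooke's Law is simple enough to sketch directly. The spring constant below is an assumed value for illustration only:

```python
def spring_force(k: float, x: float) -> float:
    """Hooke's law: restoring force magnitude F = k * x for extension x,
    valid only within the elastic limit (i.e. below the yield point)."""
    return k * x

k = 200.0                    # N/m -- assumed spring constant, purely illustrative
f1 = spring_force(k, 0.1)    # force to stretch 10 cm
f2 = spring_force(k, 0.2)    # force to stretch 20 cm: double the distance, double the force
```

The linearity is the whole point: past the yield point this proportionality (and the function above) stops being valid.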
Let’s begin, as always, with the largest use of the quirk, here being the material that underwent the largest force and still remained elastic. The answer would of course be the air trampolines that redirected Izuku’s pellet of air[1], but the physics behind this interaction is possibly the most gratuitous and bloody murder of sense in the anime. I usually shy away from criticising the anime on its science, primarily because it’s a work of fiction about superheroes, but also because its purpose is a source of entertainment, and I bring the burden of applying science to it upon myself. In this instance however, I am allowing myself a small fracture in my usual composure to discuss why this scene is absolutely nonsensical.
Firstly, Izuku cannot create a bullet of air. To flick his finger is to create a pressure wave that spreads out from the point of creation at the speed of sound. No faster, no slower. If the finger moves at a supersonic speed, the resultant pressure wave would create a sonic boom, but still travel at only the speed of sound, still as a dissipating wave. Due to the properties of waves, their intensity decreases with the square of the distance from the source (and their amplitude inversely with distance). Thus Deku’s blast wave would not need aiming, and would also be barely a light breeze at the distance at which it is used.
Additionally, and most grievously, Gentle cannot create trampolines of air in the air. This follows from the simple (yet often misquoted) fact of Newton’s Third Law. The classic, profound, smart-guy quip version is “every action has an equal and opposite reaction”, but this is most likely only because the full statement is far more bloated. The law is in fact “when body A exerts a force on body B, body B exerts a force on body A that is of equal magnitude, opposite direction, identical type, and in the same line, as the force of body A on body B”. Quite a mouthful, but there’s a lot of subtle and important detail missing from the first version, I’m sure you’ll agree. The problem here is that the created trampoline must exert a large force on Gentle to cause such acceleration (see Newton’s Second Law of Motion), and thus Gentle exerts an equal force on the trampoline, which accelerates it in proportion to the ratio of their respective masses. Since air is a lot less dense than Gentle, and his quirk does not appear to add mass to a system, the trampoline would be flung backwards considerably fast before Gentle could gain any significant acceleration. It would be like trying to push yourself backwards by punching a balloon. Sure, the balloon is elastic, but it does not have enough mass to exert the required force. The only way this could work is if the trampolines were connected to the earth (thus increasing the mass of the system), but the only way this could occur is via more elasticated air. This does not happen because a) it is not seen – the elasticated air becomes slightly opaque (possibly a stylistic effect to show the action of the quirk) and there are no opaque structures visible, just a single floating disc, and b) these structures would themselves be elasticated, and thus the system would be too flexible to exert such force over such distance.
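A rough momentum-conservation estimate makes the balloon comparison concrete. Every figure below is an assumption for scale (disc size, Gentle's mass, desired rebound speed), not a canon value:

```python
# Momentum is conserved: m_person * v_person = m_air * v_air, so a light
# "air trampoline" must recoil absurdly fast to push a person at all.
rho_air = 1.2                          # kg/m^3, air density at sea level
disc_volume = 3.14 * 1.0**2 * 0.1      # assumed disc: 1 m radius, 10 cm thick (~0.31 m^3)
m_air = rho_air * disc_volume          # ~0.38 kg of air in the "trampoline"
m_person = 80.0                        # kg, assumed mass for Gentle

v_person = 5.0                         # m/s, modest rebound speed we want for Gentle
v_air = m_person * v_person / m_air    # recoil speed the disc itself would need
# v_air comes out around 1000 m/s -- roughly Mach 3, which is the problem
```

Unless the quirk anchors the disc to something massive, the disc simply flies away instead of launching Gentle.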
Right, after that little rant, let’s get back to the matter at hand. During the fight between Gentle and Deku there is a scene within a construction site that gives a lot of valuable information. This comes in the form of gratuitous quirk use, as well as an explicit statement of the quirk’s features: it cannot be turned off at will, and instead fades over time. This is odd when compared to almost all other quirks (if you need any examples, every other quirk investigated save one can be both activated and deactivated at will) and so it is likely it ties into the mechanism of the quirk’s action.
The scene contains two key uses of elasticity. Firstly, multiple steel girders are made elastic, and secondly a crane arm is made elastic. The former is useful because it is used by Gentle for movement, so the force on it and thus a lower limit for the yield point can be garnered. The second is useful because it showcases the flexibility of elasticated materials by how much the crane arm bends.
The steel beam bends about 1m each side of its equilibrium, which seems to be relatively unaffected by the quirk. The beam seems around 10m long, but thankfully the beams look like Universal Beams, which have standardised measurements including each type’s flange thickness, root radius, and most importantly, mass by metre and elastic modulus in each axis. Unfortunately there are almost 100 types, each of subtly different dimensions and properties. After downloading a spreadsheet and sifting through all types, I can confidently say the distinction does not matter, as the differences are all within the margin of error that arises upon attempts to measure the on-screen girder.
Let’s start with some maths. There’s no escaping it, and this time it’s back with a vengeance. Assuming the girder bends to approximate an arc (a section of a circle’s circumference), we can use some geometry to figure out the lengths of the original and stretched girders, and thus how much longer the latter is than the former. The unstretched girder we already know is around 10m long, and the centre bends ~1.5m from equilibrium. Since the ends are fixed, we know the chord subtending the arc is 10m long, and the distance bent (1.5m) is the distance between the arc and the centre of the chord. I won’t bore you with the details, but it turns out that the steel only increases in length by about 60cm, or roughly 6% of its original length.
There isn’t much clear data on how elastic metals are (illustrated by the fact that a cursory search of “how far do metals stretch” gets 10 results in before some very different and nsfw questions come up instead, no points for guessing what they are) but there is an incredibly useful dataset courtesy of Engineering Toolbox, containing the ultimate tensile strength, yield strength and Young’s Modulus of almost every material you can think of. I’m not sure which engineer would need to compare the elasticity of compact and spongy bone, but I’m sure some day I’ll be glad the entry is there. For now we’ll look at the structural steel values, and thankfully all three are available. Let’s take a moment to discuss what they mean.
Young’s Modulus is the ratio of stress to strain, and has a fixed value for each material. Stress is the force per unit of cross-sectional area applied to the material, and strain is the extension due to that stress divided by the original length. Yield strength is the minimum stress required to deform the material plastically, and ultimate tensile strength is the stress required to snap the material. Structural steel has a Young’s Modulus of around 200 GPa, so a stress of 200 GN per square metre of cross-sectional area would (nominally) double the beam’s length. Sadly, these simple calculations are only applicable when the force and extension lie on the same line. In our case the deformation is bending rather than pure tension, so it cannot be described this simply. Instead, we must apply the terrifyingly named Euler-Bernoulli Beam Theory. It contains some fittingly terrifying equations, including variable functions based on beam material, and second derivatives against two separate nested variables. However, in our scenario the beam is supported at both ends (known as a simply supported beam) and we’ll assume it is uniform in density, elasticity, etc. Therefore we get an equation that looks like this: σmax = ymax F L / (4 I), where σmax is the maximum stress at a given point, ymax is the distance from the point to the neutral axis, F is the force applied to the centre of the beam, L is the length of the beam, and I is the ‘area moment of inertia of the cross-section’. I have almost no idea what that last one means, but thankfully I managed to find an equation for it given the dimensions of a symmetrical I-shaped cross-section. There are two pieces of bad news. 1, it looks like this: Iy = (a³h / 12) + (b³ / 12)(H − h), and 2, we now need to play a game of universal-beam ‘Guess Who’ to gain the correct dimensions.
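The simply-supported-beam formula is easy to sketch numerically. The section dimensions and load below are assumptions for illustration (a roughly UB-sized section, Gentle's weight as the central load), not the post's spreadsheet values, and the second-moment formula used is the standard strong-axis one (full rectangle minus the two side cutouts):

```python
def max_bending_stress(F: float, L: float, y_max: float, I: float) -> float:
    """Max bending stress in a simply supported beam with central point load F.
    The maximum bending moment is M = F*L/4 at midspan; sigma = M * y / I."""
    return (F * L / 4) * y_max / I

def i_beam_second_moment(b: float, h: float, tw: float, tf: float) -> float:
    """Second moment of area of a symmetric I-section about its strong axis.
    b = flange width, h = total depth, tw = web thickness, tf = flange thickness."""
    return (b * h**3) / 12 - ((b - tw) * (h - 2 * tf)**3) / 12

I = i_beam_second_moment(b=0.15, h=0.30, tw=0.008, tf=0.012)  # assumed dimensions, m^4
F = 80.0 * 9.81                       # assumed: an ~80 kg person's weight as the load, N
sigma = max_bending_stress(F, L=10.0, y_max=0.15, I=I)        # stress in Pa (~a few MPa)
```

Even with these generous assumptions the result is a few megapascals, orders of magnitude below structural steel's ~250 MPa yield strength, which is consistent with the post's point that the on-screen bending cannot be explained by the load alone.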
The beam in the anime seems to be less than 500mm in depth, so that removes 47 possible types. Less than 500mm in width sadly doesn’t remove any more. However, we do know the beam is roughly larger than 150mm, since it is larger than Deku’s hand span, which removes another 23. Averaging the rest gives us some dimensions we can use as an approximation of the beam. Thankfully, there exists a table of standard UK I-beam dimensions and their respective area moments of inertia of the cross-section. Comparing our values to the closest standard gives a value of 7440. Plugging this into the max stress equation, we find the maximum stress on the beam to be 0.79N per square metre. A strangely low number that says to me something must be wrong. The problem is we don’t know the value of F, and since I just used Gentle’s weight, the formula treats the beam as incredibly flexible, since it bent so much under so little load. This is a problem only solved by using a formula involving the Young’s Modulus E of the beam rather than F. Such a formula is even more complex than those already seen, and is at such a level that I cannot understand how to apply it to the above scenario. Indeed, this post is already much over its due posting date at time of writing, and we have not talked at all about the quirk’s mechanism. Beam theory being as complicated as it is, and having now spent a good few days failing to apply it, I believe it is best we approach the problem from a different angle.
It’s safe to say the metal becomes not just more elastic, but more flexible, when the quirk takes effect. It takes a very large force to bend metal to the extent shown, and that metal would snap or at least bend plastically before that point is reached (sadly I cannot say which would occur). Therefore something about the molecular structure of the metal must change.
As previously discussed, metals and polymers bend differently at the molecular level, and this is because their very structures are different. Metal atoms bond by delocalising their outer electrons, creating positively charged ions attracted to a sea of negatively charged delocalised electrons. This is why metals shine – the electron sea is incredibly smooth, subatomically so. Polymers bond via covalent bonds and inter-molecular bonds, creating discrete polymers that weakly attract each other. Gentle’s quirk must somehow make both these structures, and others, elastic in the same fashion.
The first answer is to weaken the inter-molecular forces within the structures, allowing polymers/molecules/any base elements to more easily move past each other within the material. Sadly, this just makes the material more ductile – ductility being the ease with which the material can be elongated via tensile force. To make something more elastic, the forces holding the molecules together must be made, for want of a better word, springier. Essentially, they must be able to act over a longer distance in order to pull the material back into shape after deformation. To do this simply would be to make the bonds stronger, but this would also make the material less flexible and denser. Instead, the force must somehow be spread across some distance profile, maintaining its magnitude at the standard separation of the molecules, but falling off more slowly as distance increases. The way to do this while retaining the other features of the material is essentially fictional, and would even break thermodynamics (again) by being able to increase the Helmholtz Free Energy within a closed system. Since we’re now changing the mechanism by which one of the four fundamental forces of the universe functions, we can suppose the quirk changes the force in such a way as to create perfectly elastic materials, since they already seem to have ridiculously high yield points.
Supposing this is the case, the question immediately arises – so what? The answer is that perfectly elastic materials have immense uses within many scientific circles. If a material returns to exactly the same state after deformation as it was in before, then it has the same energy. This means any object that hits it rebounds with the same kinetic energy as it started with, a phenomenon known fittingly as a perfectly elastic collision. Every other collision loses energy as heat, save for collisions that stretch the term for physics reasons, such as two orbiting objects. In our case purely elastic collisions have as many uses as elastic materials do, and possibly more. To have any material possible suddenly, even though temporarily, gain perfect elasticity will have material scientists drooling, and although I do not have the intelligence to think of any novel applications of such, asking one of them would I’m sure give you myriad answers.
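The perfectly elastic collision mentioned above can be checked numerically: conserving both momentum and kinetic energy fixes the outgoing velocities in one dimension. A minimal sketch with illustrative numbers:

```python
def elastic_collision_1d(m1: float, v1: float, m2: float, v2: float):
    """Final velocities after a 1-D perfectly elastic collision.
    Derived from conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# A 0.5 kg ball at 10 m/s hitting a (practically) immovable elasticated wall at rest:
v1f, v2f = elastic_collision_1d(0.5, 10.0, 1e9, 0.0)
# the ball rebounds at almost exactly -10 m/s, keeping essentially all its kinetic energy
```

Any real collision would lose some of that energy as heat; a perfectly elastic material is exactly the case where the loss term vanishes.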
Another fun application is heat-proofing. A material becomes liquid when the inter-molecular forces are partially overcome by kinetic energy, and becomes a gas when the forces are broken completely. Since these forces are now unlimited in distance, the objects would never be able to become gaseous, and would have very high cohesion (surface tension) when liquid. I’m again not sure of the applications of this, but it is cool nonetheless.
To conclude, Gentle Criminal’s quirk affects any material he touches, and changes the effect of the electrostatic forces within it, making them act across any distance, with a slight reduction in magnitude with distance. This works by having the force pull the molecules together from any distance, until they become close enough to be repelled by the electrostatic repulsion of the atoms. Any force applied may overcome the electrostatics for a distance, but will never cause yielding.
[1] Season 4 episode 85: School Festival Start!!
I hope you enjoyed this investigation! It’s almost Christmas as I post this, and as I’m sure you’re aware this post should have been released on the 1st. I’m also sure you’re aware this has become a trend, and I’m sure you know the reasons behind it. It is therefore with a heavy heart that I announce we will be taking a hiatus for an undefined length of time. We have decided it is better to write a few posts as backup and prepare them for posting, rather than desperately writing posts weeks after they’re due and apologising. We don’t have an idea of when we will be back, but we will be. In the meantime, go have a Merry Christmas/Happy Holidays, and a happy new year. We’ll see you some time in 2021.
bimengusllp · 4 years ago
Text
Managing Building Life-Cycles with Facility Management
Facilities Management
Facilities Management is all about collating and managing meticulous construction data. Integrating Information Technology with Facility Management (FM) gives facility managers a powerful combination for creating infrastructure that is energy-efficient and productive.
Put simply, facility management can be described as a process that integrates multiple variables: building stakeholders, processes, and automation. Individuals in the construction business need to learn this trade to add value to their organization. When coupled with integrated workplace management systems, Facility Management Services help build a repository of building data which can be used by multiple stakeholders at various levels of the project buildup.
Integration of Facility Management into BIM – A Demand & Opportunity
Spending less time behind the desk and being flexible is one of the most important aspects of consolidating BIM with Facility Management. Facility managers working on the fly prove to be more productive than their counterparts working behind a desk. This opportunity can be leveraged for lower operational costs, better decision making, better documentation, and flexibility. In terms of costs, the initial construction cost is only a fraction of the total: the greatest expenses of a building's life-cycle occur in the operational phase. These can be reduced to a great extent through Facilities Management Services. It is estimated that the total life-cycle cost can be much higher than the initial investment cost, and the inability to manage operational costs can lead to overspending and budget overruns, with hidden costs on top.
Leverage absolute documentation clarity with Facility Management Services  
Identifying useful information and eliminating the rest is one of the major benefits of Facility Management. Renovation and maintenance of existing projects can cause a substantial cost rise through activities like repairs, renovations, and other maintenance issues. A reliable database can serve as a backbone for facility managers and other stakeholders to consult in real time, with critical data and models included, rather than loads of paperwork that creates unnecessary hassle at various stages of the project life-cycle. A BIM repository serves as a high-value asset that increases the value of infrastructure and property.
Stay informed with updated information & obliterate clashes
With state-of-the-art software and tools, facility managers and other project stakeholders can regularly update information in BIM repositories for others to view and reference. Constant updates can eliminate clashes in later stages of the project life-cycle, which significantly improves efficiency by avoiding errors, disparities, and duplicated work. Perpetual improvements and updates in software optimize information management and system efficiency. Information in any form can help managers plan and configure space efficiently by using three-dimensional drawings for construction, phasing, and more.
Garner significant benefits with BAS and BLC
BIM in Facility Management also helps facility managers explore greater opportunities to enhance energy efficiency through Building Automation Systems, especially for complex projects that rely on integrating various automation systems. It also leaves a significant mark on the Building Life Cycle, where sustainability and financial management can be handled through alternate sources or via improvements and upgrades. Owners are able to make fully informed decisions regarding all the trades through optimized Building Life Cycle Management.
Last Words  
BIM applications and reporting have become extremely sophisticated, riding a technology up-curve that maximizes BIM's benefits. Professionals and Facility Management assets will reap big returns in the future. Learn more about the Use of 3D BIM Models and 3D Laser Scanners in Facilities Management.
Contact us: 703-994-4242
Visit us:
https://www.bimengus.com/facility-management-company
csrgood · 4 years ago
Text
4 High-Tech Tools Johnson & Johnson Is Using to Get Products to You During the Pandemic
When the COVID-19 pandemic first hit, everyday staples like toilet paper and paper towels were suddenly seemingly nowhere to be found. Over-the-counter products, like fever reducers and cough medicine, were in high demand—along with hand sanitizer, disinfectant sprays and cleaning wipes.
While consumers were getting a crash course in supply and demand, the experts at Johnson & Johnson remained confident in their ability to manage or circumvent disruptions so that hospitals, pharmacies and people around the world could continue to get much needed medications, medical devices and other healthcare products.
"One of the things I've learned after working on the supply chain for many years is that we need to expect the unexpected," says Lada Kecman, Vice President of Supply Chain Systems and Solutions, Johnson & Johnson. "The COVID-19 pandemic has reminded us how supply chain market patterns can be affected by outside events—but the latest technologies allow us to be more agile in response to those events."
We take you on a behind-the-scenes look at some of the novel—and high-tech—ways Johnson & Johnson's supply chain organization tackled challenges posed by the pandemic to continue to serve patients, consumers, healthcare providers and customers worldwide.
1. The pandemic scenario: Responding to rapid surges in product demand
Early on in the pandemic, Johnson & Johnson saw demand from consumers and wholesale customers alike for Tylenol® literally double. The company's supply chain took all possible measures to maximize product availability in the face of this surge, running its plants 24/7 and making trade-offs in other areas, such as reducing the production of more complex formulations, in order to focus on producing the highest volumes of the medicines people most needed in the moment.
"Sometimes, the reason for the higher demand might be logical, but if it was simply due to stockpiling, we wouldn't automatically fill the order," says Kevin Whitehead, Head of Digital Strategy, Supply Chain, Janssen Pharmaceutical Companies of Johnson & Johnson. Instead, the company would reassure the purchaser or country that enough product would be made available to them in future to meet their needs.
Another way the company works to circumvent unnecessary stockpiling: leveraging data science and utilizing complex algorithms to monitor typical order patterns and flag major deviations. Johnson & Johnson has technology that automatically monitors hundreds of thousands of orders placed by big customers, such as medical centers and governments. So when an algorithm detects an unusual pattern, it alerts supply chain professionals to investigate it.
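A hedged sketch of what such deviation-flagging might look like. This is purely illustrative (a simple z-score over historical order volumes), not Johnson & Johnson's actual algorithm, and all numbers are fabricated:

```python
from statistics import mean, stdev

def flag_unusual_order(history: list, new_order: float, threshold: float = 3.0) -> bool:
    """Return True if new_order is more than `threshold` standard deviations
    above the customer's historical mean -- a candidate for human review."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_order != mu
    return (new_order - mu) / sigma > threshold

weekly_orders = [100, 110, 95, 105, 102, 98, 107]   # fabricated example volumes
flag_unusual_order(weekly_orders, 210)   # roughly double the usual volume: flagged
flag_unusual_order(weekly_orders, 108)   # within normal variation: not flagged
```

A production system would use far richer features (seasonality, customer segment, regional demand signals), but the principle is the same: the algorithm only surfaces anomalies, and humans investigate them.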
"We want global supply to keep matching the global demand," Whitehead says. "We're confident that we can keep medication flowing as long as everyone only orders what they need."
2. The pandemic scenario: Planning ahead to be ready for a moment just like this
When COVID-19 first broke out in Italy, supply chain leaders knew they would have to deal with variable staffing levels in the company's manufacturing plants due to community exposure to the virus or government requirements to quarantine at home. How great an impact would that have on a given factory's output? What about all the factories in the country?
To troubleshoot this in advance, Johnson & Johnson turned to highly automated scenario risk simulation technology, which uses real, live data about staffing levels and typical production rates to make predictions about different worst-case scenarios that could occur. This way, the company could plan ahead instead of being forced to react in an emergency.
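One common way to build such a scenario simulation is Monte Carlo sampling. The sketch below is an assumed toy model (linear output in available staff, with an arbitrary 30% minimum-staffing floor), not the company's actual tool:

```python
import random

def simulate_output(base_rate_units: float, absence_mean: float, runs: int = 10_000) -> float:
    """Estimate average daily output when each day's staff-absence fraction is
    drawn at random. Assumes output scales linearly with available staff down
    to a 30% capacity floor, below which the line cannot run at all."""
    total = 0.0
    for _ in range(runs):
        absent = min(1.0, max(0.0, random.gauss(absence_mean, 0.1)))
        capacity = 1.0 - absent
        total += base_rate_units * capacity if capacity >= 0.3 else 0.0
    return total / runs

random.seed(42)
expected = simulate_output(1000, absence_mean=0.2)   # expected units/day at ~20% absence
```

Sweeping `absence_mean` over worst-case values answers exactly the planners' questions: where the thresholds are, and which contingencies are worth preparing for.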
"We wanted to know what the thresholds were and what contingencies we had to plan for," Whitehead says. Could a factory function sufficiently at 30% or 40% reduced capacity? What if the products couldn't be transported for a week, two weeks or even a month?
Mathematical models enabled supply chain leaders to understand what they could withstand—and what would require some rejiggering, perhaps by shifting production to a different location, staggering shifts or changing shipping methods.
Risk simulation technology has also allowed the company to keep better track of the need for raw materials, so there was no reason to over-order or under-order—both of which could have costly consequences.
3. The pandemic scenario: Ensuring products arrive on time—as promised
As one of the world's biggest manufacturers of pharmaceuticals, medical devices and consumer healthcare items, Johnson & Johnson makes products that must often travel thousands of miles before reaching their ultimate home.
But a lot can happen between the time a package leaves a manufacturing plant and its arrival at the final destination.
Enter track-and-trace sensors that travel with a shipment, allowing for "end-to-end visibility"—meaning it can be continually monitored from the time it departs its initial location to the time it reaches its ultimate location. The sensor utilizes GPS technology, so it's easy to keep tabs on where a shipment is at any given moment, explains Whitehead.
It also measures temperature, so recipients know whether the product has experienced fluctuations that might interfere with its quality. "And it can check for shock—like if it's fallen off the back of a lorry or has been involved in an accident—we can measure that," Whitehead adds.
Knowing where your product is at any point in the supply chain is especially crucial during a pandemic, he says, adding that a recent critical shipment was on its way from the Netherlands to the United States when it missed its connection at an airport in Germany.
"Previously, we wouldn't have even known that had happened," Whitehead says. But thanks to the end-to-end visibility provided by track-and-trace sensors, the company was able to follow the shipment in real time and get it onto the next plane without further delay.
In some instances, track-and-trace technology is also being paired with intelligent automation, which is essentially an alarm or trigger that uses data to automatically set next steps in motion. So if a plane took off without an important package on it, for example, intelligent automation might instantly generate an email to the person responsible for monitoring that shipment.
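The event-driven trigger described above can be sketched in a few lines. This is an illustration of the pattern, not Johnson & Johnson's actual system; the event fields and message text are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ShipmentEvent:
    """A single scan event for a tracked shipment (fields are illustrative)."""
    shipment_id: str
    planned_flight: str
    boarded: bool

def on_scan(event: ShipmentEvent, notify) -> None:
    """Call `notify` (e.g. an email sender or ticket creator) whenever a
    package fails to board its planned flight."""
    if not event.boarded:
        notify(f"Shipment {event.shipment_id} missed {event.planned_flight}; rebooking needed")

alerts = []
on_scan(ShipmentEvent("JNJ-001", "FRA->IAD leg", boarded=False), alerts.append)
# alerts now holds one rebooking message for the shipment owner to act on
```

The value is in the wiring, not the logic: the sensor data arrives automatically, the rule fires automatically, and a human only gets involved once there is something to do.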
4. The pandemic scenario: Managing manufacturing virtually
Before COVID-19, it wasn't unusual for a scientist or engineer to hop on a plane or train to check in on manufacturing equipment. Due to travel restrictions and contagion concerns, however, such in-person fine-tuning is not always an option.
This is where smart glass technology, such as Google Glass, comes in, which allows someone at a remote location to virtually see the exact same thing that a worker standing in front of a machine is seeing.
Employees often refer to it as "you see what I see," Kecman says, because it means someone need not be physically present to see or access information that was previously only available to someone in the same room. When paired with "privileged remote access," you can go a step further: An expert in a remote location is given elevated security clearance and can actually tap into the equipment, pull data from it and change the settings.
The benefits are clear. At many Johnson & Johnson locations, the combination of smart glass technology and privileged remote access has enabled manufacturing to continue without interruption, despite being in the middle of a pandemic, Kecman notes.
What's more, the company has been able to harness the technology to keep production flowing for its medical devices and consumer products, as well as to open new research and development centers and run them 24/7—allowing crucial projects to expand in record time.
Of course, relying on technology of this scale also requires an equally sophisticated security protocol. "Cybersecurity is extremely important for obvious reasons," she says. "We work very closely with the Johnson & Johnson information security risk management group" to ensure all data is kept secure.
Whitehead agrees and credits old-fashioned collaboration with the company's ability to successfully implement advanced technologies so quickly. "All our supply chain teams have a role to play," he says. "Everybody works together."
Originally published on jnj.com
source: https://www.csrwire.com/press_releases/45818-4-High-Tech-Tools-Johnson-Johnson-Is-Using-to-Get-Products-to-You-During-the-Pandemic?tracking_source=rss
void36-blog · 5 years ago
Text
SEO Company
Search Engine Marketing
In the enterprise space, one significant trend we're seeing recently is data import across the big players. Much of SEO involves working with the data Google gives you and then filling in all of the gaps. Google Search Console (formerly Webmaster Tools) only gives you a 90-day window of data, so enterprise vendors such as Conductor and Screaming Frog are continuously adding and importing data sources from other crawling databases (like DeepCrawl's). They're combining that with Google Search Console data for more accurate, continuous Search Engine Results Page (SERP) tracking and position monitoring on specific keywords. SEMrush and Searchmetrics (in its enterprise Suite packages) provide this level of enterprise SERP monitoring too, which can give your business a higher-level view of how you're doing against competitors.
In 2013, the Tenth Circuit Court of Appeals held in Lens.com, Inc. v. 1-800 Contacts, Inc. that online contact lens seller Lens.com did not commit trademark infringement when it purchased search ads using competitor 1-800 Contacts' federally registered 1800 CONTACTS trademark as a keyword. In August 2016, the Federal Trade Commission filed an administrative complaint against 1-800 Contacts alleging, among other things, that its trademark enforcement practices in the search engine marketing space have unreasonably restrained competition in violation of the FTC Act. 1-800 Contacts has denied all wrongdoing and is scheduled to appear before an FTC administrative law judge in April 2017. [30]
Seo
Structured data is code that you can add to your site's pages to describe your content to search engines, so they can better understand what's on your pages. Search engines can use this understanding to display your content in useful (and eye-catching!) ways in search results. That, in turn, can help you attract just the right kind of customers for your business.
Tumblr media
Pros: The most granular and extensive website crawling tool we tested. On-page SEO recommendations. Responsive modern interface. Google Analytics and Google Search Console integration. Backlink monitoring. AMP metrics. Desktop/mobile/tablet breakdowns.
SEO Marketing Tools
Another component of SEM is social media marketing (SMM). SMM is a type of marketing that involves using social networks to convince consumers that one company's products and/or services are worthwhile. [23] Some of the newest theoretical developments include search engine marketing management (SEMM). SEMM relates to activities including SEO but focuses on return on investment (ROI) management instead of relevant traffic building (as is the case with mainstream SEO). SEMM also integrates organic SEO, attempting to achieve top placement without using paid means, as well as pay-per-click SEO. For example, some of the focus is placed on web page layout design and how content and information is presented to the website visitor. SEO and SEM are two pillars of one marketing job, and they run side by side to produce far better results than focusing on only one pillar.
Search Engine Optimization Google Advertisements
If you've improved the crawling and indexing of your site using Google Search Console or other services, you're probably curious about the traffic coming to your site. Web analytics programs like Google Analytics are a valuable source of insight for this, and you can use them to analyze and improve that traffic.
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters only needed to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. [5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
Internet Search Engine Optimisation Google Ads
If you're thinking about hiring an SEO, the earlier the better. A good time to hire is when you're considering a site redesign, or planning to launch a new site. That way, you and your SEO can ensure that your site is designed to be search engine-friendly from the ground up. However, a good SEO can also help improve an existing site.
Tumblr media
In 2007, Google declared war on paid links that pass PageRank. [30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO providers from using nofollow for PageRank sculpting. [31] As a result of this change, the use of nofollow led to evaporation of PageRank. To avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash and JavaScript. [32]
If you want to improve your rankings in search results for keywords related to your business, paying attention to and improving these SEO factors is a great place to start. However, keep in mind that SEO isn't a set-it-and-forget-it strategy. In other words, it takes some time to see results, but they're well worth the wait!
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. [10] [dubious] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines. [11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings. [12]
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie Search Engine" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
Between visits by the spider, the cached version of a page (some or all of the content needed to render it) stored in the search engine's working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can simply act as a web proxy instead. In this case the page may differ from the search terms indexed. [25] The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form of linkrot.
0 notes
fnewstoday · 5 years ago
Text
Want To Know How To Affect Credit Card Terms To Pay Less? Read This!
The huge popularity of credit cards among Americans is due to the opportunities they give each holder to solve various financial issues, as well as the convenience of their use. At the same time, however, credit cards carry some of the highest interest rates and, due to their peculiarities, become the cause of large debts. Not surprisingly, many users would like to know whether they can affect credit card terms to pay less.
Credit cards on the whole provide a convenient way to pay for your daily needs, as well as to cover unexpected expenses. Why then are credit cards such a powerful debt generator in the US? The fact is that not all of their holders understand the subtleties of their use, and they begin to spend more than their position allows. Inflation, as well as the rising cost of living, do their part in increasing the debt of ordinary Americans, since many millions of citizens do not have enough savings, or have none at all. So it turns out that expenses increase while savings decrease. If we add to this cocktail a lack of understanding of how credit card terms work and how much exactly a credit card costs, then we get the debt situation we have now.
That is why it is important to know and understand the terms of your credit card, which are described in the agreement with the issuer. To understand them and use them to your own benefit, we suggest you get acquainted with some secrets that will help you save. Of course, different credit cards differ from each other, but many of the principles of their work remain the same. Knowing these principles, you will understand how to affect credit card terms to pay less and avoid unnecessary spending. This will help you curb your credit card debt and gain control over it, which will mean less stress in your life and allow you to sleep well at night.
The cost of using a credit card or a financial fee
A finance charge is the total cost of using a loan, which includes the interest rate and all commissions, fees, and penalties you pay on top of the amount you borrow. Almost all of these costs can be optimized; below we will see how.
The main creator of growing debt is the annual interest rate
Different credit cards have their pros and cons. If you have bad credit, you can find a lender who will be ready to approve a credit card for you and even provide a small credit line, but at the same time you should expect the interest rate on it to be at or near its maximum. The good news is that you can always qualify for other credit card options; perhaps one of them will have a lower APR with the rewards and bonuses that interest you. Do not grasp at the first available option just because the lender is ready to extend you credit. Learn about the different variations of conditions and only then apply for the credit card with the conditions that best suit you.
Your APR may be negotiable
This is one of the most effective ways to affect credit card terms to pay less. Few credit card holders know that they can try to negotiate a reduction in their interest rate; most perceive the APR as something unshakeable. It is not: at a certain point it can become very difficult to repay a debt at the same high rates, so try to get better terms when it becomes harder to pay. The creditor will probably be more interested in reducing their profit a little while giving you the opportunity to continue paying your debt than in no longer receiving payments from you at all.
Tumblr media
Even fixed interest rates may vary
Credit cards rarely use fixed interest rates; issuers most often set variable rates. But if you do find a fixed-rate option, you should not relax. Not only are such interest rates likely to be quite high, the issuer will still be able to change them within a year, and rarely downward. What you should know in this case is that the issuer is obliged to notify you of such a change at least 15 days in advance.
Interest rates most often increase due to payments made at the wrong time
An overdue payment is any payment made after the due date, or one whose amount is less than the minimum required payment. In this case, you may be charged a late fee and penalty interest, and your interest rate may simply be raised to a penalty maximum that can reach very great heights.
Exceptionally minimal payments drag you to endless debt
If you pay only the minimum required amount each month, your time in debt stretches out more and more. This is because each month new interest is added to the unpaid balance. You will practically stand in one place, paying only interest to the issuer and not moving a step closer to repaying your debt.
The interest rate limit is not set in all states
If the law does not limit what level interest rates can reach, then theoretically they can climb to almost any transcendental heights. This can instantly swell your debt to enormous proportions. Check with your credit card issuer whether there are any interest rate limits.
Credit line
The credit line is determined by the lender based on your data and defines the maximum amount you can use on credit. The lender examines your credit report and credit history, your income and your expenses before setting a credit limit.
Late payments reduce your credit limit
Every late payment is recorded in your credit report and badly affects both your credit score and your credit history. In addition, each of your lenders will most likely want to reduce your credit line, as well as raise your APR, to secure their money. This should not be allowed; otherwise the troubles begin to grow like a snowball that becomes ever harder to cope with.
Grace period
To attract new customers, many credit card issuers offer a grace period for using credit: a length of time, usually from 20 to 60 days, during which no interest is charged on the credit used if you return the debt within the prescribed period.
Grace periods disappear
A grace period is a great opportunity not to overpay interest on borrowed money, but if you want to know how to pay less on credit, you should understand that missing even one payment during this period can cost you the grace period, in the best case for several months. Most lenders remove the grace period permanently.
Annual fee
An annual card service charge is collected by many credit card issuers. This fee is charged whether or not you use the credit card, so you should always check for its presence in the conditions of a particular card. Some credit card issuers charge up to $500 in annual fees.
Use the knowledge gained
When you know how to affect credit card terms to pay less, you protect yourself and your family from the financial upheaval that always comes with large debts you cannot cope with. When choosing a credit card, go into all the details so you never lose control of your debts. You will be able to make informed decisions when settling on a particular credit card. Understanding the purpose you need a card for, you can calculate how profitable it will be for you to hold it. Check out our financial blog for the information you need, and ask your questions below.
0 notes
matthewjosephtaylor · 6 years ago
Text
Thoughts on Rustlang
My raw first impressions as I read the Rust Programming Language Book https://doc.rust-lang.org/book/
I reserve...no I demand the right to change my mind later about everything below (In fact, I'll be disappointed if I don't).
These are my unfiltered musings meant for historical and entertainment purposes only (Nothing more amusing in life, than seeing how 'wrong' one's past self was) :)
Cargo
Seems to be the law of the universe that every language has its own build and dependency management system
So far seems fairly clean/simple and straight-forward
Unfortunate that creating new project also includes SCM (let one thing be responsible for one thing not 'all the things')
since I like git it is handy, but feels a bit too opinionated and bloaty
semver
will be glad when semver dies as an idea of how to link dependencies
would have preferred identifier based on content (like a hash of the source) with option to use monotonic identifier like date for convenience
lock file should absolutely be based on content-hash not a label like semver (disappointed)
Rustlang Basics
variable shadowing (dangerous?)
dangerously useful feature
could see this being labeled as 'considered harmful'
immediate impression is that the danger > usefulness but time will tell, maybe the usefulness outweighs
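A quick snippet showing both faces of shadowing, the convenient and the 'considered harmful' (values are purely illustrative):

```rust
fn main() {
    let x = 5;          // immutable binding
    let x = x + 1;      // shadows the previous x: a brand-new binding
    let x = x * 2;      // shadows again
    assert_eq!(x, 12);

    // The dangerous part: shadowing can silently change a variable's type.
    let spaces = "   ";        // &str
    let spaces = spaces.len(); // same name, now a usize
    assert_eq!(spaces, 3);
    println!("x = {}, spaces = {}", x, spaces);
}
```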
clear integer types (awesome)
u32 is a perfect name compared to something like 'int' or 'long', etc. in other languages, where the all-important size of the integer is obscure
usize is a bit of an odd/ugly name for architecture specific size. Something like iarch would prolly have been better IMHO
WTF different behavior on debug/release mode for overflow?
Broken design
Is this being fixed/worked on?!?!?!?
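For the record, the standard library does offer explicit overflow-handling methods that behave identically in debug and release builds, which sidesteps the inconsistency complained about above; a small sketch:

```rust
fn main() {
    let x: u8 = 255;

    // Explicit operators make the overflow policy independent of build mode:
    assert_eq!(x.wrapping_add(1), 0);            // two's-complement wrap
    assert_eq!(x.checked_add(1), None);          // None instead of a panic
    assert_eq!(x.overflowing_add(1), (0, true)); // value plus overflow flag
    assert_eq!(x.saturating_add(1), 255);        // clamp at the maximum

    // `x + 1` would panic in a debug build and wrap in a release build,
    // which is exactly the debug/release difference noted above.
    println!("all overflow variants behaved as expected");
}
```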
why the space in specifying type like 'foo: bool'?
seems that 'bool:foo' would have been a cleaner more easily cut/pastable and easier to follow syntax design (Rust syntax seems backwards and overly expansive)
built-in tuple type (cool)
possibly 'dangerously useful'...but for the moment all I see is the sexy and want to dance with this potential danger ;)
destructuring
return multiple auto destructuring?
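A tiny sketch of the tuple-return plus auto-destructuring combo (the function and values here are my own invention):

```rust
// Returning multiple values via a tuple, destructured at the call site.
// Assumes a non-empty slice for simplicity.
fn min_max(values: &[i32]) -> (i32, i32) {
    let mut min = values[0];
    let mut max = values[0];
    for &v in values {
        if v < min { min = v; }
        if v > max { max = v; }
    }
    (min, max) // implicit return of the tuple
}

fn main() {
    let (lo, hi) = min_max(&[3, 1, 4, 1, 5]); // auto-destructuring
    assert_eq!((lo, hi), (1, 5));
    println!("min = {}, max = {}", lo, hi);
}
```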
arrays don't auto-expand/shrink
fair trade it seems to me (for the most part data-structures should be immutable anyway).
let a: [i32; 5] is really ugly syntax
would prefer let i32:a[5] as ideal, or let a: i32[5] to remain consistent with variable type specification; a poor syntax design choice
feels like the poor syntax of variable typing lead to the train-wreck of array-typing
let a = [3; 5]; OMFG this is now getting ridiculous
Bad design leads to worse design leads to truly evil design
first 'thing' before the semicolon is either a type or a value...wonder how many arrays have been created with the value 32 when the type i32 was intended?
zero is the only interesting value one would want to 'autofill' an array with, and would be the expected default. This entire syntax 'feature' should go away.
At this moment going to call the variable typing syntax 'broken'. Too late to change. This will be one of the 'broken' parts of Rust that everyone has to deal with via best-practices and lint-checking. Unfortunate but here we are.
Important lesson to learn: Syntax matters for a language design. If the syntax is bad there is really no way to recover until the next language.
Nice that runtime array out-of-bounds checking is done (honestly how could it be otherwise in the modern era?)
Would be better if compiler/checker was a bit smarter and able to compile-time check OOB errors when it has the info to do so
possibly there is already a linter/checker that does this? Is there more than one?
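Putting the bracket syntaxes complained about above side by side, plus the out-of-bounds behavior (values are illustrative only):

```rust
fn main() {
    // [type; length] in the annotation, [value; length] in the initializer —
    // the two uses of the same brackets the notes above object to.
    let a: [i32; 5] = [1, 2, 3, 4, 5];
    let zeros = [0u8; 4]; // four zeros
    let threes = [3; 5];  // five copies of the VALUE 3, not five i3s
    assert_eq!(a[4], 5);
    assert_eq!(zeros, [0, 0, 0, 0]);
    assert_eq!(threes.len(), 5);

    // `a[10]` with a literal index IS rejected at compile time
    // (the unconditional_panic lint is deny-by-default); a runtime index
    // panics instead, and .get() gives an Option rather than panicking:
    let idx = 10;
    assert!(a.get(idx).is_none());
}
```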
clear distinction made between expressions and statements
difference as defined is that expressions return values and statements don't.
{} is an expression that appears to evaluate to the implicit returned value of the last expression inside of it
does that understanding hold up?
is there a return keyword? or is this how returning works?
WTF expressions are defined syntactically by missing semicolons?
Expressions do not include ending semicolons. If you add a semicolon to the end of an expression, you turn it into a statement, which will then not return a value
Immediate impression is that this is the most dodgy, error-prone way of 'returning' I've ever come across. Maybe it makes sense in context, and there are other safeguards that make this not as crazy as it appears on first blush...but I'm going to demand some explanation...
return is a keyword that docs say work as expected but also the return value of the function is synonymous with the value of the final expression in the block of the body of a function.
using -> to specify return type seems a bit cute for the sake of cuteness and 'being different' (extra syntax for no real purpose)
Not getting the purpose of turning an expression into a statement by adding a semicolon. At least not yet.
I'm presently thinking that the 'missing semicolon' is the way the language designers chose to be able to differentiate an expression that is meant to be evaluated and assigned to a variable within a function body, from an expression that is meant to be returned. In other words the 'missing semicolon' is effectively just a 'cute trick' to make the language look a bit different by not having a return before the last expression.
First impression is that this 'odd' syntax decision perhaps won't lead to errors, since the function itself should have a return type defined in the method signature. So if one accidentally puts in the semicolon like one would do for every other fucking usage of this expression (preloading my anger here in anticipation of being tricked by this constantly) it won't cause a runtime error, because the compiler should catch what I suspect is a very likely and common mistake.
They should just have mandated a return statement at the end of a function that returns a value to make things clear. Fear that the missing syntax character as replacement for a keyword is just 'being different' for no good reason that will lead to errors, where clarity of syntax would have been a touch more verbose but consistent.
I do appreciate that they wanted to have a language where the last expression evaluated to the return, and wanted to avoid the pitfall of language-users accidentally returning a thing they didn't mean to. Feel that is a common mistake in languages with 'implicit return' and the language-designers were wise to differentiate 'expressions meant for return' with 'expressions meant for assignment' but that 'missing semicolon' is an un-intuitive/un-obvious way of doing that. If they wanted to be cute I would have suggested they use -> as their return keyword and 'skipped the syntax garbage' in the method signature (like all other c-like languages have managed to avoid doing by having the return type be the thing to the left of the function-name)
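A minimal demo of the semicolon rule and both return styles (function names are made up); note the compiler safeguard hoped for above does exist:

```rust
// The missing-semicolon return and the explicit `return` keyword, side by side.
fn double_implicit(x: i32) -> i32 {
    x * 2          // no semicolon: this expression's value is the return value
}

fn double_explicit(x: i32) -> i32 {
    return x * 2;  // also legal; idiomatic mainly for early returns
}

fn main() {
    // A block is an expression too; its value is its final expression.
    let y = {
        let a = 3;
        a + 1      // no semicolon
    };
    assert_eq!(y, 4);
    assert_eq!(double_implicit(21), double_explicit(21));

    // Adding a semicolon turns `x * 2;` into a statement whose value is `()`,
    // which the compiler rejects against the declared `-> i32`.
}
```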
First time I've run across 'arms' and 'arm' to mean something like 'execution path'. Short and simple, I think I kind of like it. A thing that needed a name. Was this innovative of Rust or already existing name I was unaware of?
No parens around conditions for if statements...hmmm....
quick check says they are allowed but give compile warning as unnecessary.
might be OK with leaving off (maybe)
Absolutely NOT OK with the compiler warning (parens in conditionals usually help clarify the intent of code)
eh, only seems to warn if the parens are at the top level so will back off a bit but still disagree with compiler warnings for extra-verbose syntax on general principle
quick code check on returning from conditional leads to the conclusion that one can't return from inside a condition
initial impression is that I like this behavior quite a bit.
the error message of 'unexpected token' is crap (going to be a common thing people are going to attempt to do, would be better to have compiler print more helpful message).
book gives hint to a 'match' keyword while explaining else if (excited/hopeful...please Knuth let match be what I want it to be)
Like that if is an expression. Goodbye elvis.
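A small sketch of if-as-expression, which is what makes an Elvis/ternary operator unnecessary (values invented):

```rust
fn main() {
    let n = 7;
    // `if` is an expression, so both branches must yield the same type.
    let parity = if n % 2 == 0 { "even" } else { "odd" };
    assert_eq!(parity, "odd");

    // The same form covers what a ternary would do elsewhere:
    let clamped = if n > 5 { 5 } else { n };
    assert_eq!(clamped, 5);
    println!("{} is {}", n, parity);
}
```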
Randomly in playing around I've noticed that string concatenation with + isn't a thing in Rust. Hopefully there is a replacement syntax that allows string concatenation.
feel that sprintf style strings are error-prone
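As it turns out, `+` does exist for `String + &str` (it consumes the left-hand String), with `format!` as the general-purpose tool — a quick check:

```rust
fn main() {
    let hello = String::from("Hello");
    // `+` works for String + &str; note it moves the left-hand String.
    let greeting = hello + ", " + "world";
    assert_eq!(greeting, "Hello, world");

    // `format!` avoids the move and is usually clearer for multiple pieces:
    let name = "Rust";
    let msg = format!("{} from {}", greeting, name);
    assert_eq!(msg, "Hello, world from Rust");
}
```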
Like for in but don't see an equiv to for(i=0;i<10;i++). That is a really handy for-loop construct.
Range might fill 80% need for 'real' for-loop but isn't a 100% replacement
Possible that the need for this style goes away with other constructs/usage paradigms in Rust
I note that as I've gotten more into functional programming this 'old' style of for-loop is generally nicely replaced with a Range.
If all else fails hopefully creating some form of 'iterable' is easy, and perhaps that is the 'right and true' answer in all cases anyway :)
perhaps 'old school' for-loop construct is just 'dangerous by design', that seems likely given the mutated counter.
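A sketch of the Range equivalents, with `step_by` covering the non-unit-step case a bare Range can't express:

```rust
fn main() {
    let mut sum = 0;
    // The Range equivalent of for (i = 0; i < 10; i++):
    for i in 0..10 {
        sum += i;
    }
    assert_eq!(sum, 45);

    // Non-unit steps use an iterator adapter rather than a mutated counter:
    let evens: Vec<i32> = (0..10).step_by(2).collect();
    assert_eq!(evens, vec![0, 2, 4, 6, 8]);
}
```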
Rustlang Ownership
The string example where s1 = s2 and then s1 is referenced later which creates a compile error
WTF
Not how I expected that to behave at all, which I suppose is the point of this chapter.
I suppose intuitively I was assuming some sort of fancy 'reference counting' or that s1 and s2 would share a reference to the same underlying 'object'.
Assignment instead seems to imply 'transfer of ownership'.
First blush the fact that s1 is still 'in scope' seems to imply some sort of language syntax design problem since s1 clearly should no longer be in scope, by how the runtime expects things to behave, but is in scope as a practical matter, and how the language-user expects things to behave.
'We changed the rules of scope pray we do not change them any further' seems like an evil trick to play on the language-user.
the compile error of 'no Copy trait' is better than blowing up at runtime, and I appreciate the safety, but the real problem is that I as a language-user would not expect s1 to somehow 'internally' go out of scope
Better it seems to me that variable 'renaming' (variable to variable assignment (no expression)) be made illegal.
This also seems to imply that variables are sort of 'one time use' sort of things. Quite a different way of looking at the world. Possible that c-like syntax is just not a good fit since language-user's expect scope to behave in a way that it clearly doesn't.
Thinking that the runtime designers just made a bad choice in how to interpret the semantics of the language (hinted by the fact that docs imply that x = y will copy data from y to x instead of just let x and y share pointer to same location in memory). Seems like they want variables to mean something like 'location in memory' where really the language-user doesn't care unless the location is mutable which should be the rare case. Thinking the runtime designers hadn't quite grokked the power of immutable data structures.
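A compact illustration of the move, the explicit `clone` escape hatch, and the opt-in reference counting guessed at above (`Rc` is the stdlib type closest to that guess):

```rust
use std::rc::Rc;

fn main() {
    let s1 = String::from("hello");
    let s2 = s1;              // MOVE: s1's heap buffer now belongs to s2
    // println!("{}", s1);    // compile error: value borrowed after move

    // If sharing-by-copy is what's wanted, it must be explicit:
    let s3 = s2.clone();      // deep copy of the heap data
    assert_eq!(s2, s3);

    // Cheap shared ownership also exists, opt-in, via Rc:
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a);    // bumps a reference count, no deep copy
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(*a, *b);
}
```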
Copy but no Share?
Seems that Copy 'trait' is the escape hatch to distinguish stack-level from heap-level allocation.
First impression is that all this 'Copying' is wasteful and unneeded (hoping that the Copying is a lie and that 'under the hood' immutables really do share)
All-in-all feels like at the very least the distinction between a Copy-able variable and a Drop-able variable should be visible via language syntax.
e.g. let x = 5 (Copy) vs let y = String::from("foo") (Drop)
How do I know as a language-user what is Copy and what is Drop without reading all lines of code that go into the creation of the types???
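One answer to the 'how do I know what is Copy' question: ask the compiler with a trait bound. A sketch (the probe function is my own device, not a stdlib item):

```rust
// A compile-time probe: this only accepts Copy types, so the compiler
// itself answers "is this type Copy?" without reading the type's source.
fn assert_copy<T: Copy>(_: T) {}

fn main() {
    let x = 5;
    let y = x;                 // Copy: both bindings stay usable
    assert_eq!(x + y, 10);

    assert_copy(5i32);         // compiles: i32 is Copy
    assert_copy('a');          // compiles: char is Copy
    // assert_copy(String::from("foo")); // would NOT compile: String is Drop

    let s = String::from("foo");
    let t = s;                 // move, not copy
    assert_eq!(t, "foo");
}
```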
References
Well duh...why isn't this the default behavior? Pass by reference isn't a new/unexpected thing in the universe. Feels like the default being pass-by-copy is a bit old-school.
Getting feeling this is the tension between systems and application programming languages. Systems expects mutable and so wants to copy stuff all over the place to prevent the hell mutability brings, Application should in the modern era expect immutable and so expects pass-by-reference to be safe.
Thinking there is a lot of pain here that could be solved by just assuming immutable, and then treating mutable as the rare-its-ok-to-have-wierd-rules-wear-special-gloves situation
Feels like syntax around references are sort of unnecessary if one assumes pass-by-reference. Languages-users never should need the actual pointer to the memory location which was why c-like & exists. From the language-user's perspective I can't think of a legit reason why there would need to be a reason to distinguish between 'the value' and 'the reference to the value'. Just assume that the variable one has access to is a reference and move on with life, let the runtime deal with dereferencing. If it is faster for there to be a copy (simple integer value where the pointer takes just as much room as the value) let the runtime deal with that decision transparently (I don't give a !@#% what the assembly code looks like as a high-level language user, just make it do the right thing so that my assumptions of behavior are respected).
all-in-all this feels like too much ceremony for no valid reason. Just remove & as a thing. No reason for that thing to exist in a language without pointer arithmetic (oh please let there be no pointer arithmetic in Rust...).
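A minimal borrow example, including the one-mutable-borrow-at-a-time 'special gloves' case (function name is mine):

```rust
// Borrowing: the function reads the String without taking ownership.
fn len_of(s: &String) -> usize {
    s.len() // auto-dereference; no explicit * needed for method calls
}

fn main() {
    let s = String::from("hello");
    let n = len_of(&s);     // lend s out...
    assert_eq!(n, 5);
    assert_eq!(s, "hello"); // ...and it's still usable afterwards

    // Mutable borrows are the restricted case: exactly one at a time.
    let mut t = String::from("hi");
    let r = &mut t;
    r.push_str(" there");
    assert_eq!(t, "hi there");
}
```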
Slices
Neat idea and useful concept
First impression is that this should just be the default mechanism for dealing with list-like data-structures. (why have a separate thing when one thing can mean both and is more powerful?)
Feels like best-practice would just to be use slices and forget the other non-slice variants exist.
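A sketch of that take-a-slice-everywhere practice (the function is my own example):

```rust
// Taking &[i32] instead of &Vec<i32> or &[i32; N] is the "just use slices"
// practice: it accepts arrays, Vecs, and sub-ranges of either.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let arr = [1, 2, 3, 4, 5];
    let v = vec![10, 20, 30];
    assert_eq!(sum(&arr), 15);      // whole array as a slice
    assert_eq!(sum(&v), 60);        // Vec coerces to a slice
    assert_eq!(sum(&arr[1..4]), 9); // a view: 2 + 3 + 4, no copying
}
```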
Rustlang Data Structures
Structs
Goodbye sexy tuples. I'll probably still flirt with you, but you will likely be a quick-n-dirty when I'm too lazy, not for 'production' use.
interesting constructor syntax. All-in-all I think I like it on first impression.
like the 'field init shorthand syntax' alternate
love the 'other instance shorthand syntax' alternate
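Both shorthands in one hypothetical struct (field names are invented):

```rust
#[derive(Debug)]
struct User {
    name: String,
    active: bool,
    sign_ins: u32,
}

// Field init shorthand: `name` instead of `name: name`.
fn new_user(name: String) -> User {
    User { name, active: true, sign_ins: 0 }
}

fn main() {
    let u1 = new_user(String::from("alice"));

    // Struct update syntax: remaining fields come from u1.
    let u2 = User { name: String::from("bob"), ..u1 };
    assert_eq!(u2.name, "bob");
    assert!(u2.active);
    assert_eq!(u2.sign_ins, 0);

    // Since `name` was given explicitly, only the Copy fields (active,
    // sign_ins) came from u1, so u1 was not moved and is still usable:
    assert_eq!(u1.name, "alice");
}
```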
Tuple Structs
Dangerously useful.
Language would probably be better served without this construct, to 'gently direct' users into explicit-naming vs ordering-as-naming.
Sexy but no substance
Hopefully destructuring of Structs works as one would expect with the definition order being the value-order
Oddity just noticed playing around: String literal can't be used in place of String type? String::from("foo") != "foo". What is the type of a string literal if not String?
First introduction of annotations
@ would have been more natural than # since it is more commonly used (different just to be different again?)
Methods
'self' instead of 'this' (I can live with 'self', but hopefully there is a reasoned argument why the non-c-family name was used)
Seems like the first argument should be assumed to be &self and drop it from method signature (remove the tedium)
Feels like it wasn't done this way just to give the option of pass-by-copy, which is just sort of a bad-idea anyway (shouldn't be default, OK if hard/weird/impossible syntax for that option) in most cases.
I've been willfully ignoring the ugly that is &mut as a syntax 'keyword/symbol'. Feels like that wants to be something more like &! (lesson: symbols and keywords don't really want to be mixed together)
Enums
Do what they say on the tin (work as expected).
Like that each enum can have specific values
Like the 'helper' methods
Love the allowing of comma on last value (consistent syntax ftw)
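A small sketch of an enum with a payload plus a 'helper' method (the Coin example follows the Book's, with invented values):

```rust
#[derive(Debug, PartialEq)]
enum Coin {
    Penny,
    Nickel,
    Quarter(String), // variants can carry data, e.g. a state name
}

impl Coin {
    // A helper method on the enum itself.
    fn value_in_cents(&self) -> u32 {
        match self {
            Coin::Penny => 1,
            Coin::Nickel => 5,
            Coin::Quarter(_) => 25, // trailing comma allowed, as noted above
        }
    }
}

fn main() {
    let c = Coin::Quarter(String::from("Alaska"));
    assert_eq!(c.value_in_cents(), 25);
    assert_eq!(Coin::Penny.value_in_cents(), 1);
}
```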
Match
Meh on the 'optional' curly brackets.
agree it is 'noisy' syntax
that 'noise' also adds clarity
since all cases must be accounted for I think I'm more willing to trust that the additional clarity likely isn't needed
Like the 'value binding'
Up in the air on the syntax. Feels like argument-order is a bad thing to rely on.
Good that Monads like Option are 'pattern matchable'. Will be interesting to see how deep this goes(is Option just a simple enum, or something more powerful happening?).
like _ as the 'placeholder' variable name.
like () as the 'unit value' name.
not sure how to get the value of the 'wildcard' match?
like 'if let' I think
docs are kind of crap on how to use this....testing.....
Syntax seems backwards with the assignment being on the right (wtf?)
Absolutely hate the = now that I see that it means something closer to 'associate with' for if let
let foo = if let Coin::Penny = coin {... reads let foo contain the value of the result of matching Penny on the coin variable containing a Coin enum
Like the idea hate the syntax
better would be: let foo = coin if Coin::Penny {...
Starting to feel that for some reason the Rust developers are intentionally attempting to create confusing syntax on purpose (just not sure why yet)
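The value binding and the if let shorthand in one runnable sketch (Coin is re-declared here so the snippet stands alone):

```rust
enum Coin {
    Penny,
    Quarter(String),
}

fn main() {
    let coin = Coin::Quarter(String::from("Texas"));

    // Value binding in a match arm:
    let cents = match &coin {
        Coin::Penny => 1,
        Coin::Quarter(state) => {
            println!("quarter from {}", state); // `state` binds the payload
            25
        }
    };
    assert_eq!(cents, 25);

    // `if let` as the single-pattern shorthand (the syntax objected to above):
    let is_quarter = if let Coin::Quarter(_) = coin { true } else { false };
    assert!(is_quarter);
}
```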
Packages, Crates, Modules
initial impression after reading the summary: Lots of new keywords. Hopefully they are used for good purposes...but I'm reserving judgement based on some of the poor naming/syntax choices I've seen so far.
src/bin is an ugly way to to distinguish multiple 'binaries' (think they mean executables).
would prefer something like src/mains and src/lib
feel 'binary' is a bad name for a 'command line executable' and 'executable' would have been better
Modules
Like the nesting
'crate' feels a bit odd for naming choice of 'root' module. Feel name like 'root' would likely serve better
Uneasy about the 'placeholder' functions inside of module definitions.
Burdensome typing
Serves a useful purpose if modules are 'closed'
Serves a useful purpose if modules are more 'interface' in nature
first impression pathing: Like the idea and love the attention to this important detail
liking 'crate' to mean 'root module' less and less as I see the name used in the pathing....
relative pathing seems like a 'dangerously useful' idea.
Feeling that relative pathing is less useful if IDE takes on burden of refactoring...liking relative pathing less...
Like the fact there is a simple private/public (finer grained is generally less useful IMHO)
Like private as default
Like child sees parent, and children are hidden from parent
Feeling more and more, as I see the documented reasons why relative paths are used, that it is always in the context of moving code around
Don't like the idea of reducing the 'strength' of module paths if it is only useful for refactoring purposes
Feels a bit like the language-designers added an unneeded/dangerous syntax element strictly to aid less-powerful code-editors
Like as as a renaming syntax
Like module file names being defined by module names
Like module hierarchies being defined by directory structure
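A single-file sketch of the module pathing discussed above (the `front_of_house` names are modeled on the book's restaurant example, not anything from these notes):

```rust
// Nested modules, private by default; `pub` opens them up.
mod front_of_house {
    pub mod hosting {
        pub fn add_to_waitlist() -> &'static str {
            "added"
        }
    }
}

// `use ... as ...` renaming, as noted above.
use crate::front_of_house::hosting as host;

fn main() {
    // Absolute path -- `crate` is the root module:
    crate::front_of_house::hosting::add_to_waitlist();
    // Relative path:
    front_of_house::hosting::add_to_waitlist();
    // Via the renamed import:
    println!("{}", host::add_to_waitlist());
}
```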
Rustlang Collections
Happy that have, Disappointed in how the Book presents
Based on the documentation, it looks like it would have been better to start from concepts like sequence
RE: vec![1,2,3] not sure I like a macro for Vector construction
Perhaps just a getting used to thing but construction feels like a thing I would expect Type-constructor to be able to do.
I want the bang to mean that the macro is creating a mutable thing (hopefully that is the case)
Playing around it appears that vec[1,2,3] isn't legal
would hope to construct an immutable vector
perhaps a non-sensical thing to want?
Hopefully there is an immutable sequence that takes care of what I want that to be
further evidence of poor choice on references.
Elements are typed as &i32 instead of i32
Get that that needs to be a reference, disagree that there should be a 'language-syntax' level distinction (should be a detail compiler takes care of)
unfortunate that 'panic' is an expected situation user should expect to get into
Feel using &v[100] is 'considered dangerous' and suspect shouldn't be part of language
Prefer v[100] to be an alternate syntax for v.get(100) (and then maybe just drop the more verbose form)
Once again poor language-design choices have lead to normal usage patterns being taxed with the 'uglier' form of syntax
Like the 'trick' of using enum-values for creating a 'closed-universe of types'
That is a really good trick the more I think about it.
As I think on this some more: that 'trick' I feel was worth the price of admission to learning the language and is an 'aha'.
Going to take that with me. That is a deeply good idea.
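The 'closed universe of types' trick praised above, sketched with the book's spreadsheet-cell flavor of example (the `Cell` name is a placeholder):

```rust
// One Vec can hold several concrete types safely, because the
// universe of possibilities is closed by the enum definition.
#[derive(Debug)]
enum Cell {
    Int(i64),
    Float(f64),
    Text(String),
}

fn main() {
    let row = vec![
        Cell::Int(3),
        Cell::Text(String::from("blue")),
        Cell::Float(10.12),
    ];
    for cell in &row {
        println!("{:?}", cell);
    }
}
```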
Strings
The fact that "initial contents".to_string(); is a reasonable thing to type I think speaks for itself regarding poor choices in the language.
First google on Rust String concatenation leads to an involved discussion on various ways of doing what should be a simple thing like "hello" + "world"
Going to add 'String concatenation' to my language-evaluation litmus tests
If a language can't handle the basics with grace then that is a red-flag because it is going to kill adoption-rates which is death for any language
A String is a wrapper over a Vec<u8> WTF?
How in the @#%@#% is that a reasonable way to look at Strings in the modern era?
String wants to be something like Vec<CodePoint>
Understand that in 1970 'C' could expect a String to be comprised of 8-bit 'characters' but that is totally unacceptable today
Perhaps the entire built-in 'String' is just DOA?
Do people actually use this garbage in practice?
Feel there must certainly be one or (unfortunately) more libraries that people must use to handle strings in Rust if this is the built-in
&hello[0] is illegal to 'avoid confusion' but in the same breath &hello[0..1] is legal
I don't know what to say....that is just horrible
Like the attempt at guarding users from evil and just plain wrong abstraction, but the guarding is incomplete (and just proves the point that String is just broken in its current state)
I could almost accept this if this language 'evolved' from an era when ASCII was a thing. But this language started in 2010. There is just no excuse.
Thinking that current String type should be named something like C_String_WRAPPER and only be used in cases where one is interfacing with an external c-like library (and intentionally named ugly to signal the pain and torture and danger found within)
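For the record, a sketch of the concatenation situation complained about above; `+` does work but has the ownership quirk, and `format!` is the usual escape hatch:

```rust
fn main() {
    let hello = String::from("hello");
    let world = "world";

    // `+` consumes (moves) the left-hand String and wants &str
    // on the right -- `hello` is unusable after this line:
    let a = hello + " " + world;

    // format! doesn't take ownership of anything:
    let b = format!("{} {}", "hello", world);

    assert_eq!(a, b);
    println!("{}", a);
}
```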
HashMaps
Hash maps also have less support from the standard library; there’s no built-in macro to construct them, for example
appears that Rust wants to treat Type construction as an 'outside' 'macro' responsibility
Don't think I agree with this philosophy
Feels like creators of Types should also be expected to be responsible for providing type-constructors
Cool with having type-construction 'open' (allowing more) but feel that there should be a basic set of constructors that cover most cases as part of the 'contract' of creating a Type.
HashMap<_, _>
Not such a fan of using _ to mean 'any' here. (feel it is fine for variables NOT for types)
It's OK and there is a consistency argument (a weak one IMHO (any != ignore))
like the more 'in your face' 'you are probably doing it wrong, are you sure?' ?
No 'literal' constructors for either Map or Sequence that I've run across. Hopefully they exist...but starting to worry...
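As a partial answer to the worry above: there is no literal/macro form, but `collect` over pairs is the closest thing to a map literal. A sketch:

```rust
use std::collections::HashMap;

fn main() {
    // The verbose insert-by-insert construction:
    let mut scores: HashMap<String, i32> = HashMap::new();
    scores.insert(String::from("Blue"), 10);
    scores.insert(String::from("Yellow"), 50);

    // get returns Option<&V> rather than panicking:
    if let Some(v) = scores.get("Blue") {
        println!("Blue -> {}", v);
    }

    // collect over pairs -- the nearest thing to a literal constructor:
    let from_pairs: HashMap<_, _> = vec![("a", 1), ("b", 2)].into_iter().collect();
    println!("{:?}", from_pairs.get("a"));
}
```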
Rustlang Errors
Distinction made between 'Recoverable' and 'Unrecoverable' errors
Wants to use Result (enum? Monad hopefully (please let enums be Monadic...getting worried they aren't)) for 'Recoverable'
Wants to use so-called panic! macro for unrecoverable.
From what I've seen so far panic is way too common and 'expected' (I've noted that array accesses can result in a 'panic')
language-users IMHO should NEVER encounter a panic in a situation that they have control over (like array accessing)
such accesses that might not have results or have errorful results should be handled in a Monadic way (return an Option or Result)
'panic' IMHO wants to be reserved for truly world-ending and unexpected situations like 'out of memory' or language-runtime-bug encountered.
We are already starting off badly since I'm not a fan of the implementation of the philosophy in some of the core library stuff I've already encountered.
Possible I can still live with this as long as it is possible to intelligently and easily avoid usage of the bad/evil language 'features'
Not a fan of backtrace not being the default
Don't like that backtraces don't appear to have column-numbers (is that an option?)
Doesn't appear to be but also appears an area of active development so willing to 'wait and see' on improvement
Feel that the 'backtrace' is a bit unreadable
Disappointing that not all lines have at least line numbers
Feel that the LHS row-numbers are not useful/helpful and just add noise
Examples in the book of where/why to use panic
Get the feeling either the book-writer or language-designers never programmed anything large/complex
Possible this is just bad-writing and 'simple' examples are being used, but feel this is going to lead to bad-practices and should be noted that this is NOT the way to run a ship (just doing here for purpose of explaining this feature).
Since there is no 'warning, this is bad style' I expect that the book-writer or possibly worse the language-designers, intend users to use panic! rather freely.
Feel strongly that since the core-library uses panic freely, this reflects poor choices by the language-designers.
Feeling like panic! is possibly going to be an Achilles heel of the language depending on how poorly the language-community reacts to this bad 'usage suggestion'.
Going to have to be very careful that any libraries are conservative with panic
perhaps might even be worth a linter-check to verify that there are no panics used in library or dependencies.
Might even be so bad that panic would have to be 'caught' (if that is possible)
Don't feel that 'catching' is a good idea generally (in cases where the 'panic' is legit like OOM they shouldn't be caught)
Might be forced into that bad situation due to poor guidance by language-designers on how to use panic
Nervous that Enums and/or Results are NOT Monads
Have not seen even a hint of a map() function or that there is 'another way' / style to handling
Possibly just being uber-conservative and not wanting to be off-putting to the non-functional crowd, but I'm starting to lose faith/patience
Feeling that expect(...) is quite the poor choice of naming since it is the opposite.
else_panic! would have been a better name
OMFG they created a special syntax structure of ? to avoid 'map' ?!?!?!??
Why?
Are they intentionally being dense or is there something I'm missing?
Does ? mean map or is it just for this one special case with Result?
The ? Operator Can Only Be Used in Functions That Return Result well I guess that answers that.
At the very least they've created a special 'hard mode' way of dealing with Results...and I fear there isn't an 'easy mode'.
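To make the comparison above concrete, here is a sketch of both spellings; `Result` does in fact have a `map`, so the 'easy mode' exists alongside `?`:

```rust
use std::num::ParseIntError;

// `?` early-returns the Err variant to the caller:
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n = input.trim().parse::<i32>()?; // an Err propagates out here
    Ok(n * 2)
}

// The map-based spelling of the same thing:
fn double_map(input: &str) -> Result<i32, ParseIntError> {
    input.trim().parse::<i32>().map(|n| n * 2)
}

fn main() {
    println!("{:?}", double("21"));   // Ok(42)
    println!("{:?}", double("oops")); // Err(...)
    assert_eq!(double("21"), double_map("21"));
}
```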
Chapter 13 promises 'Functional Language Features'
Feeling it is ominous that this is the 13th chapter (what horrors does it contain?)
Error Chapter contains first hints of something called a 'Trait Object' that looks like Box<dyn Error> that is supposed to 'allow for values of different types'.
In my present state of mind I don't know how much more crazy I can take before I give up.
I'm sort of 'thrilled with dread' at how bad things are going to get, and feel this new horror is looming around the corner now.
Still hopeful there is light at the end of the tunnel, and that there will be 'salvageable' parts of the language that somehow manage to save it.
At the very least it promises not to be boring which is good enough to continue....how bad can it get? :)
Books idea of 'when to call panic'
Suggests that 'unwrap' is OK to use if the 'user knows better than the compiler'
Feel this is a bad practice.
User is almost never smarter than the compiler
Code evolves over time so what might once have been sound reasoning might no longer be sound.
Rustlang Generics/Traits
Generics appear to work as expected
Due more I feel to simple lack of imagination, or perhaps this was designed later when saner heads were in the mix.
'Monomorphization' my what a fancy word for compile-time rewriting of code to be more concrete.
Funny how some 'magic' isn't so magical under the hood.
Interesting idea to have 'partially defined' Generics, I think I like it.
Traits == Interface
Book notes there are 'some differences' to an Interface...what are they?
Liking the 'Partial based on Type' interface implementation
Novel? First I've seen. Wonder how it feels in practice?
'Closed traits' (can't implement a 'trait' outside of your 'crate')
Appears to be targeting the 'Ruby' problem.
Agree with no 'overwriting' of external type trait impls (what killed Ruby)
Want to disagree with 'adding' traits/impls to other types.
Need to think about that some more, feel the pendulum swung a bit too far.
Feels like adding could lead to some nice behavior naming-wise and scoping-wise
pub fn notify(item: impl Summary) {
ouch that 'impl' is an ugly way to specify an interface
Why is it needed, why isn't item: Summary sufficient?
Even if it is needed, that 'impl' wants to mean implementation not interface/trait
pub fn notify<T: Summary>(item: T) {
Good to know there is an even uglier form (so I suppose one should be thankful for the slightly prettier ugly form?)
Better: pub fn notify(item : Summary) {
See no purpose currently for all the extra-special-syntax care to distinguish interfaces and types
Feels like the phrase 'coding to the interface' would be a foreign idea to the language-designer (ignorance/spite or is there a solid reason?)
The + syntax for 'combining' interfaces feels a bit lazy at first glance but 'works'
Better would be to give a proper name to a combination of interfaces
Now I see the reason for the ugly: the where clause
Feeling this fits into 'create problem so I can show how clever I am at solving the problem I just created' idiom Rust seems to be enamoured with
Better to not have the problem in the first place
pub fn notify(t : DisplayClone, u: CloneDebug) is way easier to read, type, understand
pub fn notify(t: Display + Clone, u: Clone + Debug) also works for the lazy
Just like the problem with the unneeded Reference/Value distinction, the manufactured problem of Type/Interface distinction does not benefit language-users, and in fact is a detriment.
First 'crate' and now 'impl', I'm feeling that the language-designers were big fans of the smurfs and/or brainfuck
Perhaps should feel grateful that 'crate' and 'impl' are separate (would make about as much sense to combined them)
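The three equivalent spellings compared above, side by side; `Summary` and `Article` are made-up names for illustration:

```rust
trait Summary {
    fn summarize(&self) -> String;
}

struct Article {
    title: String,
}

impl Summary for Article {
    fn summarize(&self) -> String {
        format!("Article: {}", self.title)
    }
}

// 1. The `impl Trait` argument form:
fn notify(item: impl Summary) -> String {
    format!("Breaking news! {}", item.summarize())
}

// 2. The explicit generic bound ("even uglier") form:
fn notify_generic<T: Summary>(item: T) -> String {
    format!("Breaking news! {}", item.summarize())
}

// 3. The `where` clause form:
fn notify_where<T>(item: T) -> String
where
    T: Summary,
{
    format!("Breaking news! {}", item.summarize())
}

fn main() {
    println!("{}", notify(Article { title: String::from("a") }));
    println!("{}", notify_generic(Article { title: String::from("b") }));
    println!("{}", notify_where(Article { title: String::from("c") }));
}
```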
Rustlang Lifetimes
WTF?
Why is this needed?
Surely the compiler has enough information to determine lifetimes of all references or safe defaults?
Why not just use the function scope as the lifetime (as it seems to be able to do in most cases?)?
when a function has references to or from code outside that function, it becomes almost impossible for Rust to figure out the lifetimes of the parameters or return values on its own. The lifetimes might be different each time the function is called. This is why we need to annotate the lifetimes manually. is the explanation we are given.
Why not just default to 'a for the lifetime of all arguments?
aka if fn longest<'a>(x: &'a str, y: &'a str) -> &'a str { is a 'reasonable' solution why not just have the compiler write it?
Feeling like the answer is because we might be able to release some resource just a tad sooner if we give user assembly-level access to the internals of reference counter/borrower.
The real answer I fear is that the language-runtime developers sort of 'gave up' when the counter/borrower logic got too much, and left burden of their failings on the language-user, so that code could be written for some benchmark to 'prove' that Rust was just as fast/efficient as C.
IF this is a 'good' feature and not a 'BS' feature as I suspect here is a better way:
the function signature syntax itself is crap (as I've come to expect now)
Better would be fn longest(x: &[a]str, y: &[b]str) -> &[a]str {
no reason to 'define' the names ahead of usage
IMHO using square brackets makes it feel a bit more like the lifetime name is a particular 'part' of the reference (which it is, a 'time' part instead of a 'value' part)
weird apostrophe syntax is just confusing and weird, that compounds the confusion and weirdness
if one is going to do something weird at least hit the language-user with one weird thing at a time not two at once
the type syntax is crap
're-using' <> to mean either 'lifetime name' OR 'type name' has that now-familiar 'smurfy' smell to it.
better would be struct ImportantExcerpt[a] { which would allow struct ImportantExcerpt[a]<a> {
name collision between generic-type-name and lifetime-name avoided since lifetimes would only be referenced inside []
to be clear feeling rather strongly now that this is a BS language 'non-feature' but if you're going to do it, at least make the syntax not as horrible
Feel Lifetimes are now the 3rd manufactured pain-point (No reason to have except to solve its own pain)
Lol! After writing a lot of Rust code, the Rust team found that Rust programmers were entering the same lifetime annotations over and over in particular situations. These situations were predictable and followed a few deterministic patterns. The developers programmed these patterns into the compiler’s code so the borrow checker could infer the lifetimes in these situations and wouldn’t need explicit annotations.
The explanation of 'lifetime elision' admits that it isn't needed, and is due to failings of compiler to determine lifetimes (can at least commend them on honesty)
Better would be to use safe default lifetimes (I feel there is always going to be a safe option that is reasonable (function scope if nothing else))
If someone needs extra performant code then let them break out the ugly syntax in those (no doubt super rare) instances where that is needed.
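For reference, the book's `longest` example that the notes above are reacting to, in full:

```rust
// Both inputs share lifetime 'a; the returned reference is only
// valid as long as the shorter-lived of the two inputs.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let s1 = String::from("long string is long");
    let result;
    {
        let s2 = String::from("xyz");
        result = longest(s1.as_str(), s2.as_str());
        println!("longest is {}", result);
    } // using `result` past this brace would not compile: s2 is gone
}
```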
Automated Tests
Well done to include automated testing as a 'built-in' for the language
Doesn't seem to be a clear distinction between 'test' source and 'production' source
Feel a simple /tests directory would have solved
Basic functionality, no real depth at all
no built-in ability to 'compose' assertion conditions
no concept of 'test resources'
Rust seems to want unit tests to live next to code
Think I can be OK with this as long as it is possible to break test code out into separate files with easily distinguishable names
Ah...there is a tests directory but is for integration testing (no access to non-public code)
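A sketch of the tests-next-to-code layout described above (a `#[cfg(test)]` module in the same file, run by `cargo test`):

```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    // `cargo test` compiles and runs this; it is stripped from
    // normal builds by the cfg attribute.
    #[test]
    fn it_adds_two() {
        assert_eq!(add_two(2), 4);
    }
}

fn main() {
    println!("{}", add_two(2));
}
```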
Intermission chapter building a grep command
Tedious to wade through but feel it is important to get a feel for style
Better if they weren't trying to teach programming AND Rust at the same time
little chance that the audience is new to programming
Note that they use clone as a work-around for the poor reference handling
If it is too broken to use in THE example of how to write code, then what was the point?
Shockingly, at least the book author thinks a lambda is a closure
IMHO closure is something that 'closes over' a scope (doesn't necessarily have to be a lambda but usually is)
One forms a closure and it is possible that lambdas will create a closure if they reference a scoped variable
However, if the lambda doesn't reference any scoped variables then it isn't a 'closure'
I'm frightened that the language wants to refer to lambdas as closures....please...no....
calling a lambda a closure in this way is like calling all refrigerators 'Kenmores'
Admit to skimming here. Life is short.
Feeling that Rust is at this point language with many built-in flaws and might not be worth the effort
Will continue as the meaty subjects like concurrency and functional programming are ahead and there is still a slim chance the language isn't a total loss
It took the book author 40 paragraphs and 695 words to say: eprintln! prints to stderr. FML, my patience is wearing thin....
Rustlang Functional Features
Closures
Annoying as mentioned earlier to refer to all lambdas as closures
Feeling that this ignorant way of speaking (not using 'ignorant' in a mean way, feeling language-designer just didn't know the distinction), speaks to the ignorance of the language-designers at this point and I think hints to all the troubles this language has.
Feel the |var| {} syntax is OK but that the | is a bit of a clunky character.
Prefer var -> {} style
Telling that they mention Ruby as a inspiration choice for this syntax
Iterators
OMFG 'Iterator' has a:
map
collect
filter
Book section does a crap job of explaining or even just listing the Functional features
flatmap? : Yes https://doc.rust-lang.org/std/iter/struct.FlatMap.html
Full Iterator docs: https://doc.rust-lang.org/std/iter/index.html
Option does have a map: https://doc.rust-lang.org/std/option/enum.Option.html#method.map
Result does have a map: https://doc.rust-lang.org/rust-by-example/error/result/result_map.html
Feeling will have to look elsewhere for reasonable explanation, but happy that Rust does indeed appear to have first-class functional language features.
Faith somewhat restored
Feeling at this moment that perhaps this language is possibly salvageable if one can ignore the bad parts.
Feeling relieved (Up until I saw map on Iterator the book/language had beaten me down almost to the point of quitting)
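The faith-restoring features listed above, in one sketch (note `flatmap` is spelled `flat_map`):

```rust
fn main() {
    let v = vec![1, 2, 3, 4, 5];

    // Closure syntax |args| body, as discussed above:
    let doubled: Vec<i32> = v.iter().map(|x| x * 2).collect();

    let evens: Vec<i32> = v.iter().filter(|x| *x % 2 == 0).cloned().collect();

    // flat_map: each element expands to several, then flattened:
    let pairs: Vec<i32> = v.iter().flat_map(|x| vec![*x, *x]).take(4).collect();

    println!("{:?} {:?} {:?}", doubled, evens, pairs);
}
```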
Rustlang Documentation
Love that documentation comments can be run as tests
Feel there is a strong synergy between tests and documentation in any language
Glad Rust recognizes the synergy
Feel there is room for more here, but will give credit for a useful innovation
Meh on the ///
Would prefer an open/close style syntax at least as an option
Crate.io
Unfortunate and odd that a github account is the only auth mechanism (would expect to see multiple oauths)
Like the balance that 'yank' provides for publishing and 'hiding' bad-publications from view.
Feel that Semantic versioning is a mistake but an understandable one
Simple monotonic like date or 1-up integer would be better
Like the use of a publisher-secret but needs to be a signing key (maybe it is?)
Workspaces
Meh on the name, that probably wants to be something like 'Project'
Not a fan of the relative paths, those want to be unique names for sub-modules
cargo install
Not a fan of 'global' 'hidden' binary installation
Better would be to have some form of built-in packager/package manager instead of going the 'hacky' way
cargo-something named binaries as 'cargo commands'
Like the naming convention and the extensibility this offers
Not a fan of 'anything in $PATH'
Needs to be a ~/.cargo/extensions and/or project-name/.cargo/extensions
Rustlang Smart Pointers
Just reading the name I already feel like I'm going to be sick...but let's power through....
Box puts thing on the heap
Why not just have the runtime figure out what to box/unbox?
Oh joy (not) there is a * dereferencing syntax.
At least it isn't something weird, and behaves in a c-like fashion
Going to call this the 2nd great universal mistake after 'null'
Possibly even worse, prolly Trillion-dollar instead of Billion-dollar mistake.
Deref trait
Why aren't all values 'dereferencable' ?
Hints that this isn't a real memory-pointer kind of reference/dereference like in C
Rather purposefully built-in as a control mechanism for what is on/off heap but want to make it look a bit like C
All the pain and none of the real purpose?
Why not just have a 'HeapOnly' annotation if one wants to fetishize controlling where values are stored?
Why is there a different Deref(Mut) trait for immutable / mutable?
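A sketch of Box and the `*` dereference discussed above, including the recursive-type case which is the legitimate reason Box exists:

```rust
// Without the Box, List would contain itself directly and have
// infinite size -- this is the canonical use case.
#[derive(Debug)]
enum List {
    Cons(i32, Box<List>),
    Nil,
}

fn main() {
    let b = Box::new(5); // the 5 lives on the heap
    // `*` goes through the Deref trait back to the boxed value:
    assert_eq!(*b + 1, 6);

    let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("{:?}", list);
}
```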
Drop trait
Dangerously Interesting that the language gives access to when things go out of scope
Neat for logging but feels unsafe
Whole point of Rust IMHO is that it deals with resource de-allocation for the language-user
Having a destructor kind of puts a lie to the safety if language-users can fiddle with the de-allocation process too much.
Sometimes non-language resources need to be cleaned up as well, so happy to have a mechanism for indicating when this happens
Feel strongly that this should NEVER be used for language-resources (memory)
Rc trait (Reference Counter)
First impression: Dangerous to the point of probably not being recommended for use
Expect the language compiler/runtime to deal with ALL reference counting
Feels like the sort of thing that exists purely for language-users to get into trouble with it
Good debug for Rust-language compiler writers but not seeing point for language-users
Feels like this wants to be an internal-language construct that is NOT visible to normal language-users
Except possibly for debug-purposes where read-only parts like counts are exposed
RefCell
Hard to imagine a legit scenario where the compiler wasn't able to correctly reference-count
Possible this is due to lack of imagination on my part....but at the moment I don't see it.
Reference Cycles
Interesting to note that Rust does not perform any detection of reference cycles
Disappointing but that is a hard problem...something to think about
Feel that having first-level language abstractions for references only makes the cycle problem worse
Easier for language-user to have 'hidden' cycles
Rustlang Concurrency
Threads
no green threads as built-in (hints to a separate library)
move keyword to move variables
why not just move on reference?
Message Passing
Channels behave as expected
Suspect it would be better if threads just created a channel by default (as the result of the spawn)
Nice that Rust protects against accessing mutable variables after sending
Better if only immutable were allowed to prevent the problem
what madman would send a mutable thing and expect it to work between threads?
again feels like a 'manufactured problem' that serves no purpose
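A sketch of the channel behavior noted above: `send` moves the value, which is how Rust protects against touching it afterwards.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let val = String::from("hi");
        tx.send(val).unwrap();
        // println!("{}", val); // would not compile: val was moved by send
    });

    let received = rx.recv().unwrap();
    println!("Got: {}", received);
}
```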
Shared Memory
Dangerous but potentially useful
Mutexes work as expected
Atomic References via Arc work as expected
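The Arc + Mutex combination mentioned above, sketched with the book's shared-counter example:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        // Each thread gets its own atomically-counted handle:
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```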
Sync and Send 'marker traits'
'Build Your Own Concurrency' seems like a bad idea
I trust language-designers barely to get concurrency right, not sure if trusting a library here is a good thing
Rustlang OO
If a language must have inheritance to be an object-oriented language, then Rust is not one
Happy to see this 'non feature'
Book recommends traits (interfaces) as means for one value to have multiple 'types'
Good practice, happy it recommends
Dynamic dispatch for traits?
Why?
The compiler doesn’t know all the types that might be used with the code that is using trait objects, so it doesn’t know which method implemented on which type to call
This doesn't seem correct
'Object Safety' of 'Trait Objects'
There are no generic type parameters
WTF, why?
once you’ve used a trait object, Rust no longer knows the concrete type that’s implementing that trait
Yes it should be able to
Feels like a huge miss on the compiler, is this going to be fixed?
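In partial defense of the dynamic dispatch questioned above, a sketch showing why the compiler genuinely cannot pick the method statically here (the `Draw` names are placeholders in the spirit of the book's GUI example):

```rust
trait Draw {
    fn draw(&self) -> String;
}

struct Button;
struct Checkbox;

impl Draw for Button {
    fn draw(&self) -> String {
        String::from("button")
    }
}
impl Draw for Checkbox {
    fn draw(&self) -> String {
        String::from("checkbox")
    }
}

fn main() {
    // The elements have different concrete types, erased behind
    // Box<dyn Draw>, so each draw() is resolved at runtime via vtable.
    let widgets: Vec<Box<dyn Draw>> = vec![Box::new(Button), Box::new(Checkbox)];
    for w in &widgets {
        println!("{}", w.draw());
    }
}
```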
Interesting to note that the book spends 4351 words and 263 paragraphs explaining how one might want to implement the OO 'state pattern'
(compared to 2422/173 on how to use map/collect, and it doesn't even mention flatmap)
Feeling the book is not using my time/attention well
Rustlang Pattern Matching
Good that matching is exhaustive
let is defined as let PATTERN = EXPRESSION;
Can't quite put my finger on it but I'm seeing let used in ways that feel inconsistent/weird
if let Some(x) = some_option_value {
Why isn't this if let x: Option::Some = some_option_value; ?
Function arguments are pattern-matching in nature
fn print_coordinates(&(x, y): &(i32, i32)) {
The above wants to be:
fn print_coordinates(i32:x, i32:y) {
Another good example of how poor choices (references and bad type specification syntax) make overly syntax-garbage-filled/unreadable code.
'irrefutable' and 'refutable' to distinguish between full and partial pattern matching
Seems like reasonable language, is this a Rustism or is this widely used terminology?
Good that destructuring is a thing in Rust
Match Guards
Some(x) if x < 5 => println!("less than five: {}", x),
Really neat idea. Similar to concept in TypeScript
Would love to see this as a 'first class' definition of a type (similar to TypeScript)
| as part of match-guard syntax to mean 'OR'
no corresponding AND ?
Can see it getting messy, possibly less is more here
@ binding operator
Like the idea
Seems a bit too verbose
Message::Hello { id: id_variable @ 3...7 } => {
better Message::Hello::id { 3...7 } => {
and then reference the 'bound' variable to id
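The guard, `|`, and `@` features above in one sketch (note the book's `3...7` range syntax is spelled `3..=7` in current Rust):

```rust
fn main() {
    let msg = 4;
    let label = match msg {
        // | means OR between patterns:
        1 | 2 => "one or two",
        // a guard adds an arbitrary boolean condition:
        x if x < 0 => "negative",
        // @ binds the matched value to a name while testing the range:
        n @ 3..=7 => {
            println!("bound {}", n);
            "in range"
        }
        _ => "something else",
    };
    println!("{}", label);
}
```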
Rustlang Advanced Features
Trait 'placeholder' types
Why?
From reading the explanation it appears to be a hack to get round some sort of 'Generic within a Generic' combinatorial implementation issue
How often is that a thing in practice?
Is that even bad to specify the implementation multiple times?
Supertrait
Lol inheritance is bad, let's have composition via interfaces...but let's let the interfaces inherit?!?!?!
Really dumb stupid idea. You know it is wrong and do it anyway. Bad Rust, no cookie.
Newtype Pattern
Delegation to create a new wrapper type
Type aliasing
type keyword
Possibly some interesting things to be done here...will have to play around to see.
! empty type
never return
Macros
Why the need to define before use?
Get that it would make the compiler harder to write but doesn't seem like it would be that much harder
OMFG entirely different syntax
macro_rules!
Syntax appears to be very specific and limited.
Feels like more of an afterthought of 'how can we handle varargs'...oh I have an idea...kinda thing
Procedural Macros
Ah this was what I was expecting...
Rust doesn’t have reflection capabilities
Disappointing but good to know
TLDR appears one has full control over the AST...nice!
quote! macro...that could be interesting...can use outside in 'normal' code?
WOOHOO! it works. Can use quote in 'normal' code...{maniacal laughter}
#name nice, built-in templating
Why does the limited/weird macro_rules! form exist? (just a stop-gap/hangover?)
stringify! works as expected. Nice!
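A sketch of the macro_rules! varargs pattern guessed at above: a hand-rolled mini `vec!`, plus `stringify!`:

```rust
// $( $x:expr ),* matches zero or more comma-separated expressions,
// and the $( ... )* in the body repeats once per match.
macro_rules! my_vec {
    ( $( $x:expr ),* ) => {
        {
            let mut v = Vec::new();
            $( v.push($x); )*
            v
        }
    };
}

fn main() {
    let v = my_vec![1, 2, 3];
    println!("{:?}", v);
    // stringify! emits the tokens as text, not the evaluated result:
    println!("{}", stringify!(1 + 2));
}
```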
Random Thoughts (Notes I made as reading/discovering that didn't fit in chapter context)
The fact that String literals like "foo" are references is annoying
consequence of poor reference/value defaults
just forget String exists and use &str
Better if language just removed idea of reference (like 'null' the idea of a value-reference is just dangerous and bad idea that needs to die)
In those rare instances where one needs to go down to 'assembly code level' have a special syntax for that, don't burden 99.99% of language-users
Hate &str as the type name of 'String reference'. Should be &String (or they should have named the type str)
Don't like from as value Type constructor/builder. Prefer shorter of
Don't like the documentation style where mut is used in examples. Feel it encourages bad practices.
Starting to despise the word 'crate' to mean 'module' and 'library' and 'executable' where one has to construct the meaning of 'crate' via context
Feeling 'crate' is a 'smurfism'
Stunningly short-sighted to have dependencies all route through a special name like https://crates.io
What happens to all the Rust code of the world when crates.io turns evil?
Better would be to have packages identified by secure public identifier (pub-key signed) and let user's choose where to get the binary representations
How does one control artifact dependencies in Rust 'for real'? (because the default appears to be broken by design)
Love the fact that line numbers AND column numbers are specified on compile errors
Starting to feel like the reasons for the bad-choices regarding reference 'exposure' is mostly in place so designers can 'show off' the fact that the compiler will detect when the language-user does the wrong thing. Sort of like putting down stumbling-blocks intentionally just so they can 'be the hero' and show off saving the user.
Better to not have the intentional stumbling-blocks IMHO
WTF? Book suggests 'asking' the compiler for the return-type of a function by 'guessing' and then looking at the compile-error
Good that the compiler does this
Bad that the book suggests this as a reasonable practice to follow
Is there no language-server for Rust?
Surely any reasonable IDE would be able to suggest type
Why isn't this in the book?
Summary and Thoughts
There are a lot of things I don't like about Rust
Unexpected how many things I didn't like
So many things wrong that I suspect Rust will be a stepping-stone language
Big things Rust got wrong:
References / Pointers
I get that Rust wants to be a systems programming language but there are better and worse ways of handling the 'dirty' details.
Feel that in the rare cases where References/Pointers are needed they should be abstracted from the language-user as types not syntax
Feel that the compiler should make decisions on how the stack/heap is controlled with the user hinting (via ugly syntax) for those rare instances where language-users want more control
Lifetimes / Ownership
Happy that Rust internally does the accounting for memory/resources (The big draw for the language)
Sad that they are so proud of this feature that they burden the language-user with it
Feel strongly this should be a language-runtime concept that is hidden from language-user
In general, Rust syntax is too verbose and too unnecessarily ugly for the sake of being different
Types want to be defined in a c-like 'on the left side' ('C' won, deal with it)
In general, following the lead of C/C++/C#/Java/Javascript aka mainstream is preferred unless there is an obvious improvement to be made
Lots of NIH renaming that doesn't serve any purpose other than to be different
Trait = Interface
Crate = Module
Workspace = Project
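For context, a minimal sketch (mine, not from the book) of the lifetime annotations these notes object to; the `'a` is exactly the accounting the language-user is asked to spell out:

```rust
// Illustrative sketch of explicit lifetime accounting. `'a` ties the
// returned reference to the inputs; the compiler demands the annotation
// even in a case this simple.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let a = String::from("a long string");
    let b = String::from("short");
    // Compiles only because the compiler can verify both borrows live
    // long enough; drop the `'a`s and the function no longer compiles.
    assert_eq!(longest(&a, &b), "a long string");
}
```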
The Good Parts
No nulls (Awesome!)
Pattern Matching well implemented
exhaustive
destructuring
Automatic Memory/Resource accounting (sort of)
Hard to call this a 'full win' due to the ugly syntax/semantics
The next language that uses Rust as inspiration will have this without all this ugliness
'Rust without the Lifetime garbage' will be the slogan
The 'half-baked' nature of the language-user-interface is what will compel the next language into existence
Obviously people want this or Rust wouldn't exist, shame that it was implemented so imperfectly
Entirely possible that I'm being too harsh and that while the book makes a big deal of Lifetimes, in practice they are never used.
Still bad/ugly they exist as a thing, but the degree to how often language-users are confronted with this mess, is the degree to which Rust will prove to be useful
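A minimal sketch (mine, not from the book) of the two 'good parts' called out above: `Option` instead of null, plus exhaustive pattern matching with destructuring:

```rust
// No nulls: absence is an explicit Option, which callers must handle.
fn first_even(values: &[i32]) -> Option<i32> {
    values.iter().copied().find(|v| v % 2 == 0)
}

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Exhaustive + destructuring: omit a variant and this fails to compile.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 6]), Some(4));
    assert_eq!(first_even(&[1, 3]), None);
    assert_eq!(area(&Shape::Rect { w: 2.0, h: 3.0 }), 6.0);
}
```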
Overall
I still want to like Rust and am not discouraged enough to give up on it entirely
But I am discouraged enough to look for alternatives before I spend any significant time with it
I note that Swift is also LLVM based which should give it similarly wide reach but is geared more for Apple platforms which makes it less attractive (narrows the language-community)
Feel that most of the heavy-lifting of the language is handled by LLVM, and it might be worthwhile to spend time creating the better language vs trying to ignore the bad/ugly parts of Rust
nycvideomarketing · 6 years ago
One question we get at least a few times a week is: “How much does a marketing video cost?” Our standard response is “it depends”.  Until a quality corporate video production company has some information about the scope of the project, it is hard to throw numbers out.  Since many businesses are just getting started with B2B promotional video, they are surprised at how much quality video production costs.  Occasionally people will ask their other burning question… “Why does this video cost so much?”
Since many companies are just starting to get familiar with using B2B promotional video, and what goes into the corporate video production process, we thought it would be a good idea to give a comprehensive outline from the perspective of video production companies.
Corporate video production strategy
Before video production companies even think about shooting video, we need to know your goals and objectives.
For example:
What is your messaging?
Where do you intend on distributing the video?
How does this video fit into the rest of your digital marketing and communications efforts?
How long will it take to shoot the visuals for your video?
This initial stage is the most critical and most overlooked part of B2B promotional video production.
As a top New York video production company, the research and development time that we put into this starting point with each of our clients makes us stand out against other corporate video production companies. In order for our approach to work, we need to take an inside look at what your marketing and communications teams are working on to properly integrate your video into that work-flow. An advertising video will have a different distribution work-flow than a corporate overview video for example – see One Video is not a Video Marketing Strategy.
In essence, we become part of your marketing team to deliver the best possible B2B promotional video project to hit your goals and objectives. We don’t just go to your company with cameras and start filming anything and everything. We go in with a firm understanding of the message you want to convey and what images we will need to capture.
Since this is one of the most overlooked parts of quality video production, many firms don’t think that these professional services apply to the question, “How much does a marketing video cost?” The pre-production strategy and R&D time adds up…and like with any other industry, time equates to money in the marketing video production world. For more information, check out “What is the average cost of a video?”
Allocating the right professional crew
MultiVision Digital – Corporate Video Production for Commercial Observer
Once video production companies have a firm understanding of the goals and messaging, it’s time to start planning the technical execution and managing logistics.
But, you’re thinking, don’t video production companies just come with a videographer? Not at all.
Top New York video production companies have a wide array of technical skills and equipment that we can bring to your project that allow us to meet many different budget needs.  And based on your budget we can get you the right crew for the right project that is technically skilled with cinematography, but also has a solid understanding of your goals.
Because the strategy and messaging is locked down, the video crew stays focused on their part of the project.  The pre-production stage provides a road map for what a corporate videographer will capture and in what style it’s captured along with what people say when they are in front of the camera.
Planning logistics
Planning logistics depends on how many locations we have to go to, where the locations are, and how much content we need to capture. Sometimes, to produce the video you need, we might have to go to 3 or 4 different locations with a huge amount of gear to execute the production process. This might take 2 or 3 days to shoot and video production companies will charge for the crew and equipment for each day.
Therefore, a one location set up for individual sit down interviews with a basic lighting setup is going to be a lot cheaper than a multiple day, multiple location shoot.
While you will only see video production companies at your location, there’s a lot of prep work like equipment management, transportation, set up and more. This means extra time behind the scenes that goes into the question, “How much does a marketing video cost?”
As a top New York video production company, we know that making the most out of your investment is one of your top priorities.  That’s why we spend the most time on the strategy and asking you questions.  If we can get a 3 day shoot down to 2 days by being more productive and efficient, we have just saved you money.
Equipment and technical execution
As the shoot approaches, there is the matter of what type of equipment you will need, and what your budget allows for.  As stated earlier, top New York video production companies have an inventory of equipment – cameras, lights, flags, diffusers, gimbals, and other little items that make your video look great.
We won’t bore you with all of the technical details of camera and lighting gear here, but here’s a very brief rundown:
Does the shoot call for a 6K, 4K or 1080p video camera?  6K and 4K cameras give you better video (sharper, more pixels, wider post-production capabilities) but will cost more.
What kind of and how many lenses do you need to get the footage to stand out?  Lenses can cost more than cameras and are another major, often overlooked part of how much a video is going to cost.  But more variety and better lenses make for a higher quality video.
How simple or complex will your lighting setups be?  Lighting is an art and bad lighting can be detrimental to the objective of your video. More and higher quality lights / flags / diffusion often means a better-looking interview and a better looking subject but, again, will cost more.
What kind of audio will you need, and will you need a separate person to monitor it?  While a corporate videographer can do both, do you really want them to?  It is hard for one person to do two critical jobs; monitoring the camera AND monitoring audio increases the probability of something going wrong.  Having a separate audio person decreases the risk but adds another resource to the scope of the project.
MultiVision Digital – Corporate Video Production for Haydon Corporation
Gear becomes very complicated, but it is an important technical consideration for how your final video product will turn out.  Which, like it or not, translates directly to your sales and marketing communications goals and how it influences viewers.
Finally, once all of this is figured out, we can schedule the shoot.  Managing time is a key quality of video production companies.  We need to manage our crew and your team to make sure everyone is on time, things go smoothly and we capture everything we need to produce the video you are expecting.
Post-production – Editing
Post-Production – Editing in Premiere Pro
Congratulations!  Your shoot is done!  All the footage is now being synced and backed up to another hard drive for safety…multiple copies reduce risk. It’s time to start the most time-consuming part of producing corporate video content….editing…and more editing….and just when you’re really tired of editing…. how about you edit some more!
The post-production process is simultaneously the most fun and most difficult part of making a video. And although we have our post-production roadmap, it takes a long time to go through all the content and craft a B2B promotional video to meet your objectives.
Post-production is where we deal with the majority of issues that we couldn’t prevent during the production process. Did the person being interviewed speak eloquently and without many mistakes or did they have a really difficult time conveying their message succinctly by using filler words like “ums” and “ahs” or other mannerisms. If there are a bunch of these then we have to cut around them. This takes a lot of time and nuanced editing skills.
We also have to go through the B-roll (various footage of your facility, people working, visually appealing content, etc.) to see what matches your messaging the best. We cut all of these elements together with your interview or voiceover messaging.  The more complicated your video is and how much information we need to convey equates to more time spent editing and thus more of an investment on your end.
Post-production – Resolutions and Motion Graphics
If we shot in 4K (4 times as sharp as 1080p) for your project (which we prefer to do nowadays), then that also means the editing process requires more computer power and hard drive data storage. This is absolutely worth it, but does add a little to the cost for both production and post-production. If we shot in 6K (9 times as sharp as 1080p), then of course even more cost is added. We don’t recommend shooting 6K unless it’s for a big budget commercial or advertisement shoot as it’s unnecessary.
1080p, 4K, and 6K Resolution Comparison Chart
To further complicate the editing process, it is a regular occurrence that filming conditions in offices, warehouses, and similar locations are not ideal. Perhaps there are machines or A/C units or people making too much noise in the background and thus audio needs to be worked on. Or maybe someone at your company wasn’t told that we were filming on a specific day and walks into a shot or does something distracting.
There are so many different variables that we as a corporate video production and marketing company cannot account for (and often neither can our clients!). It’s just a reality of the production process. And this is ok! It just means we need to take the extra time to fix things we have no control over.
Finally, when considering “How much does a marketing video cost?”, you will have to decide if you want or need motion graphics, animation, color correction and other technical things done for your video.
Motion graphics animation often means hiring more than just your average video editor. We need someone skilled in many different types of software to execute these more complex post-production tasks. Like everything else in post-production, it is extremely tedious and time consuming, but lots of fun too! If you need complicated motion graphics like animated titles, logos, lower thirds, or visual effects…*drum roll please*….It will cost more!
Making sure your money is well spent!
Now you have a preview, or final version, of your video after some back and forth comments. No matter how much scout work and preparation we do, we always expect some changes, as video production, at the end of the day, is a creative process. This is ok! It’s to be expected.
At this stage, the biggest priority for top video production companies is making the client happy with the video. So, as a top New York corporate video production company, we are ok with sending different versions and minor changes in the edit that you desire at no extra cost. Obviously, this has to be done within the initial pre-production scope so we don’t produce an entirely different video. However, we are there every step of the way to get picky alongside you and make sure you are happy with the final product!
Conclusion
We love what we do and wouldn’t have it any other way. And we want all of our current and future clients to have an understanding of why we charge what we charge. So, perhaps you were initially expecting a video to cost $6,000 and we quoted $9,000. Or you thought $12,000 but we quoted $18,000 and so on and so forth. While this post is more of a high-level overview of “How much does a marketing video cost?”, we’re always happy with explaining more of the technical details of any stage of the production process with you personally.
Producing corporate video production content for businesses or branded commercial content is always fun! We just want people to understand that it is a lot of time and effort…we can’t just pull out our smartphones, hit record, and call it a day!
The post What actually goes into making a corporate video production project? appeared first on MultiVision Digital.
siliconwebx · 6 years ago
Creating a Reusable Pagination Component in Vue
The idea behind most web applications is to fetch data from a database and present it to the user in the best possible way. When we deal with data, there are cases when the best possible way of presentation means creating a list.
Depending on the amount of data and its content, we may decide to show all content at once (very rarely), or show only a specific part of a bigger data set (more likely). The main reason behind showing only part of the existing data is that we want to keep our applications as performant as possible and avoid loading or showing unnecessary data.
If we decide to show our data in "chunks" then we need a way to navigate through that collection. The two most common ways of navigating through a set of data are:
The first is pagination, a technique that splits the set of data into a specific number of pages, saving users from being overwhelmed by the amount of data on one page and allowing them to view one set of results at a time. Take this very blog you're reading, for example. The homepage lists the latest 10 posts. Viewing the next set of latest posts requires clicking a button.
The second common technique is infinite scrolling, something you're likely familiar with if you've ever scrolled through a timeline on either Facebook or Twitter.
The Apple News app also uses infinite scroll to browse a list of articles.
We're going to take a deeper look at the first type in this post. Pagination is something we encounter on a near-daily basis, yet making it is not exactly trivial. It's a great use case for a component, so that's exactly what we're going to do. We will go through the process of creating a component that is in charge of displaying that list and triggering the action that fetches additional articles when we click on a specific page to be displayed. In other words, we’re making a pagination component in Vue.js.
Let's go through the steps together.
Step 1: Create the ArticlesList component in Vue
Let’s start by creating a component that will show a list of articles (but without pagination just yet). We’ll call it ArticlesList. In the component template, we’ll iterate through the set of articles and pass a single article item to each ArticleItem component.
// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
  </div>
</template>
In the script section of the component, we set initial data:
articles: This is an empty array filled with data fetched from the API on mounted hook.
currentPage: This is used to manipulate the pagination.
pageCount : This is the total number of pages, calculated on mounted hook based on the API response.
visibleItemsPerPageCount: This is how many articles we want to see on a single page.
At this stage, we fetch only the first page of the article list. This will give us a list of two articles:
// ArticlesList.vue
import ArticleItem from "./ArticleItem"
import axios from "axios"

export default {
  name: "ArticlesList",
  static: {
    visibleItemsPerPageCount: 2
  },
  data() {
    return {
      articles: [],
      currentPage: 1,
      pageCount: 0
    }
  },
  components: {
    ArticleItem,
  },
  async mounted() {
    try {
      const { data } = await axios.get(
        `?country=us&page=1&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
      this.pageCount = Math.ceil(
        data.totalResults / this.$options.static.visibleItemsPerPageCount
      )
    } catch (error) {
      throw error
    }
  }
}
Step 2: Create pageChangeHandle method
Now we need to create a method that will load the next page, the previous page or a selected page.
In the pageChangeHandle method, before loading new articles, we change the currentPage value depending on a property passed to the method and fetch the data respective to a specific page from the API. Upon receiving new data, we replace the existing articles array with the fresh data containing a new page of articles.
// ArticlesList.vue
...
export default {
  ...
  methods: {
    async pageChangeHandle(value) {
      switch (value) {
        case 'next':
          this.currentPage += 1
          break
        case 'previous':
          this.currentPage -= 1
          break
        default:
          this.currentPage = value
      }
      const { data } = await axios.get(
        `?country=us&page=${this.currentPage}&pageSize=${
          this.$options.static.visibleItemsPerPageCount
        }&category=business&apiKey=065703927c66462286554ada16a686a1`
      )
      this.articles = data.articles
    }
  }
}
Step 3: Create a component to fire page changes
We have the pageChangeHandle method, but we do not fire it anywhere. We need to create a component that will be responsible for that.
This component should do the following things:
Allow the user to go to the next/previous page.
Allow the user to go to a specific page within a range from currently selected page.
Change the range of page numbers based on the current page.
If we were to sketch that out, it would look something like this:
Let’s proceed!
Requirement 1: Allow the user to go to the next or previous page
Our BasePagination will contain two buttons responsible for going to the next and previous page.
// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton
      :disabled="isPreviousButtonDisabled"
      @click.native="previousPage"
    >
      ←
    </BaseButton>
    <BaseButton
      :disabled="isNextButtonDisabled"
      @click.native="nextPage"
    >
      →
    </BaseButton>
  </div>
</template>
The component will accept currentPage and pageCount properties from the parent component and emit proper actions back to the parent when the next or previous button is clicked. It will also be responsible for disabling buttons when we are on the first or last page to prevent moving out of the existing collection.
// BasePagination.vue
import BaseButton from "./BaseButton.vue";

export default {
  components: { BaseButton },
  props: {
    currentPage: {
      type: Number,
      required: true
    },
    pageCount: {
      type: Number,
      required: true
    }
  },
  computed: {
    isPreviousButtonDisabled() {
      return this.currentPage === 1
    },
    isNextButtonDisabled() {
      return this.currentPage === this.pageCount
    }
  },
  methods: {
    nextPage() {
      this.$emit('nextPage')
    },
    previousPage() {
      this.$emit('previousPage')
    }
  }
}
We will render that component just below our ArticleItems in ArticlesList component.
// ArticlesList.vue
<template>
  <div>
    <ArticleItem
      v-for="article in articles"
      :key="article.publishedAt"
      :article="article"
    />
    <BasePagination
      :current-page="currentPage"
      :page-count="pageCount"
      class="articles-list__pagination"
      @nextPage="pageChangeHandle('next')"
      @previousPage="pageChangeHandle('previous')"
    />
  </div>
</template>
That was the easy part. Now we need to create a list of page numbers, each allowing us to select a specific page. The number of pages should be customizable and we also need to make sure not to show any pages that may lead us beyond the collection range.
Requirement 2: Allow the user to go to a specific page within a range
Let's start by creating a component that will be used as a single page number. I called it BasePaginationTrigger. It will do two things: show the page number passed from the BasePagination component and emit an event when the user clicks on a specific number.
// BasePaginationTrigger.vue
<template>
  <span class="base-pagination-trigger" @click="onClick">
    {{ pageNumber }}
  </span>
</template>

<script>
export default {
  props: {
    pageNumber: {
      type: Number,
      required: true
    }
  },
  methods: {
    onClick() {
      this.$emit("loadPage", this.pageNumber)
    }
  }
}
</script>
This component will then be rendered in the BasePagination component between the next and previous buttons.
// BasePagination.vue
<template>
  <div class="base-pagination">
    <BaseButton />
    ...
    <BasePaginationTrigger
      class="base-pagination__description"
      :pageNumber="currentPage"
      @loadPage="onLoadPage"
    />
    ...
    <BaseButton />
  </div>
</template>
In the script section, we need to add one more method (onLoadPage) that will be fired when the loadPage event is emitted from the trigger component. This method will receive a page number that was clicked and emit the event up to the ArticlesList component.
// BasePagination.vue
export default {
  ...
  methods: {
    ...
    onLoadPage(value) {
      this.$emit("loadPage", value)
    }
  }
}
Then, in the ArticlesList, we will listen for that event and trigger the pageChangeHandle method that will fetch the data for our new page.
// ArticlesList.vue
<template>
  ...
  <BasePagination
    :current-page="currentPage"
    :page-count="pageCount"
    class="articles-list__pagination"
    @nextPage="pageChangeHandle('next')"
    @previousPage="pageChangeHandle('previous')"
    @loadPage="pageChangeHandle"
  />
  ...
</template>
Requirement 3: Change the range of page numbers based on the current page
OK, now we have a single trigger that shows us the current page and allows us to fetch the same page again. Pretty useless, don't you think? Let's make some use of that newly created trigger component. We need a list of pages that will allow us to jump from one page to another without needing to go through the pages in between.
We also need to make sure to display the pages in a nice manner. We always want to display the first page (on the far left) and the last page (on the far right) on the pagination list and then the remaining pages between them.
We have three possible scenarios:
The selected page number is smaller than half of the list width (e.g. 1 - 2 - 3 - 4 - 18)
The selected page number is bigger than half of the list width counting from the end of the list (e.g. 1 - 15 - 16 - 17 - 18)
All other cases (e.g. 1 - 4 - 5 - 6 - 18)
To handle these cases, we will create a computed property that will return an array of numbers that should be shown between the next and previous buttons. To make the component more reusable we will accept a property visiblePagesCount that will specify how many pages should be visible in the pagination component.
Before going through the cases one by one, we create a few variables:
visiblePagesThreshold: Tells us how many pages from the center (the selected page) should be shown
paginationTriggersArray: An array that will be filled with page numbers
visiblePagesCount: Determines the required length of that array
// BasePagination.vue
export default {
  props: {
    visiblePagesCount: {
      type: Number,
      default: 5
    }
  },
  ...
  computed: {
    ...
    paginationTriggers() {
      const currentPage = this.currentPage
      const pageCount = this.pageCount
      const visiblePagesCount = this.visiblePagesCount
      const visiblePagesThreshold = (visiblePagesCount - 1) / 2
      const pagintationTriggersArray = Array(this.visiblePagesCount - 1).fill(0)
    }
    ...
  }
  ...
}
Now let's go through each scenario.
Scenario 1: The selected page number is smaller than half of the list width
We set the first element to always be equal to 1. Then we iterate through the list, adding an index to each element. At the end, we add the last value and set it to be equal to the last page number — we want to be able to go straight to the last page if we need to.
if (currentPage <= visiblePagesThreshold + 1) {
  pagintationTriggersArray[0] = 1
  const pagintationTriggers = pagintationTriggersArray.map(
    (paginationTrigger, index) => {
      return pagintationTriggersArray[0] + index
    }
  )
  pagintationTriggers.push(pageCount)
  return pagintationTriggers
}
Scenario 2: The selected page number is bigger than half of the list width counting from the end of the list
Similar to the previous scenario, we start with the last page and iterate through the list, this time subtracting the index from each element. Then we reverse the array to get the proper order and push 1 into the first place in our array.
if (currentPage >= pageCount - visiblePagesThreshold + 1) {
  const pagintationTriggers = pagintationTriggersArray.map(
    (paginationTrigger, index) => {
      return pageCount - index
    }
  )
  pagintationTriggers.reverse().unshift(1)
  return pagintationTriggers
}
Scenario 3: All other cases
We know what number should be in the center of our list: the current page. We also know how long the list should be. This allows us to get the first number in our array. Then we populate the list by adding an index to each element. At the end, we push 1 into the first place in our array and replace the last number with our last page number.
pagintationTriggersArray[0] = currentPage - visiblePagesThreshold + 1
const pagintationTriggers = pagintationTriggersArray.map(
  (paginationTrigger, index) => {
    return pagintationTriggersArray[0] + index
  }
)
pagintationTriggers.unshift(1);
pagintationTriggers[pagintationTriggers.length - 1] = pageCount
return pagintationTriggers
That covers all of our scenarios! We only have one more step to go.
Step 5: Render the list of numbers in BasePagination component
Now that we know exactly which numbers we want to show in our pagination, we need to render a trigger component for each one of them.
We do that using a v-for directive. Let's also add a conditional class that will handle selecting our current page.
// BasePagination.vue
<template>
  ...
  <BasePaginationTrigger
    v-for="paginationTrigger in paginationTriggers"
    :class="{
      'base-pagination__description--current':
        paginationTrigger === currentPage
    }"
    :key="paginationTrigger"
    :pageNumber="paginationTrigger"
    class="base-pagination__description"
    @loadPage="onLoadPage"
  />
  ...
</template>
And we are done! We just built a nice and reusable pagination component in Vue.
When to avoid this pattern
Although this component is pretty sweet, it’s not a silver bullet for all use cases involving pagination.
For example, it’s probably a good idea to avoid this pattern for content that streams constantly and has a relatively flat structure, where each item is at the same level of hierarchy and has a similar chance of being interesting to the user. In other words, something less like an article with multiple pages and something more like main navigation.
Another example would be browsing news rather than looking for a specific news article. We do not need to know where exactly the news is and how much we scrolled to get to a specific article.
That’s a wrap!
Hopefully this is a pattern you will be able to find useful in a project, whether it’s for a simple blog, a complex e-commerce site, or something in between. Pagination can be a pain, but having a modular pattern that not only can be re-used, but considers a slew of scenarios, can make it much easier to handle.
The post Creating a Reusable Pagination Component in Vue appeared first on CSS-Tricks.
theresawelchy · 6 years ago
Practical Recursive Feature Selection
With the Summer 2018 Release, Data Transformations were added to BigML. SQL-style queries, feature engineering with the Flatline editor, and options to merge and join datasets are great, but sometimes these are not enough. If we have hundreds of different columns in our dataset, it can be hard to handle them.
We are able to apply transformations but to which columns? Which features are useful for our target prediction and which ones are only increasing resource usage and model complexity?
Later, with the Fall 2018 Release, Principal Component Analysis (PCA) was added to the platform in order to help with dimensionality reduction. PCA helps with the challenge of extracting the discriminative information in the data while removing those fields that only add noise and make it difficult for the algorithm to achieve the expected performance. However, PCA transformation yields new variables which are linear combinations of the original fields, and this can be a problem if we want to obtain interpretable models.
Feature Selection Algorithms will help you to deal with wide datasets. There are 4 main reasons to obtain the most useful fields in a dataset and discard the others:
Memory and CPU: Useless features consume unnecessary memory and CPU.
Model performance: Although a good model will be able to detect which are the important features in a dataset, sometimes, this noise generated by useless fields confuses our model, and we obtain better performance when we remove them.
Cost: Obtaining data is not free. If some columns are not useful, don’t waste your time and money trying to collect them.
Interpretability: Reducing the number of features will make our model simpler and easier to understand.
In this series of four blog posts, we will describe three different techniques that can help us in this task: Recursive Feature Elimination (RFE), Boruta algorithm, and Best-First Feature Selection. These three scripts have been created using WhizzML, BigML’s domain-specific language. In the fourth and final post, we will summarize the techniques and provide guidelines for which are better suited depending on your use case.
Some of you may already know about the Best-First and Boruta scripts since we have offered them in the WhizzML Scripts Gallery. We will provide some details about the improvements we made to those and the new script, RFE.
Introduction to Recursive Feature Elimination (RFE)
In this post, we are focusing on Recursive Feature Elimination. You can find it in the BigML Script Gallery. If you want to know more about this script, visit its info page.
This is a completely new script in WhizzML. RFE starts using all the fields and, iteratively, creates ensembles removing the least important field at each iteration. The process is repeated until the number of fields (set by the user in advance) is reached. One interesting feature of this script is that it can return the evaluation for each possible number of features. This is very helpful to find the ideal number of features we should use.  
The script input parameters are:
dataset-id: input dataset
n: final number of features expected
objective-id: objective field (target)
test-ds-id: test dataset to be used in the evaluations (no evaluations take place if empty)
evaluation-metric: metric to be maximized in evaluations (default if empty). Possible classification metrics: accuracy, average_f_measure, average_phi (default), average_precision, and average_recall. Possible regression metrics: mean_absolute_error, mean_squared_error, r_squared (default).
Our dataset: System failures in trucks dataset
This dataset, originally from the UCI Machine Learning Repository, contains readings from multiple sensors inside trucks. The dataset consists of trucks with failures, and the objective field determines whether or not the failure comes from the Air Pressure System (APS). This dataset is useful to us for two reasons:
It contains 171 different fields, which is a sufficiently large number for feature selection.
Field names have been anonymized for proprietary reasons so we can’t apply domain knowledge to remove useless features.
As it is a very big dataset, we will use a sample of it with 15,000 rows.
Feature Selection with Recursive Feature Elimination
We will start by applying Recursive Feature Elimination with the following inputs. We are using n=1 because we want to obtain the performance of every possible subset of features, from 171 down to 1. If we set a higher n, e.g. 50, the script would stop when it reached that number, so we wouldn't know how smaller subsets of features perform.
[Screenshot: RFE script inputs]
After 30 minutes, we obtain an output-features object that contains all the possible subsets of features and their performance. We can use it to create the graph below, from which we can deduce that the optimal number of features is around 25: from 25 features on, the performance is stable.
Now that we know that we should obtain around 25 features, let’s run the script again to find out which are the optimal 25. This time, as we don’t need to perform evaluations, we won’t pass the test dataset to the script execution.
The script needs 20 minutes to finish the execution. The 25 most important fields that RFE returns are:
"bs_000", "cn_004", "cs_002","cn_000", "dn_000", "ay_008", "ba_005", "ee_005", "bj_000", "az_000", "al_000", "am_0", "ay_003", "ci_000", "ba_007",  "aq_000", "ag_002", "ee_007", "ck_000", "bc_000", "ay_005", "ba_002", "ee_000", "cm_000", "ai_000"
From the script execution, we can obtain a filtered dataset with these 25 fields. The ensemble associated with this filtered dataset has a phi coefficient of 0.815. The phi coefficient of the ensemble that uses the original dataset was only a bit higher, 0.824. That sounds good!
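For reference, the phi coefficient reported above is the Matthews correlation computed from an evaluation's confusion matrix. A minimal sketch for the binary case (our own helper, not part of the script):

```python
import math

def phi_coefficient(tp, fp, fn, tn):
    """Matthews/phi correlation from a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A value of +1 means perfect prediction, 0 is chance level, and -1 is total disagreement, which puts the comparison in perspective: dropping 146 fields cost less than 0.01 of phi.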
As we have seen, Recursive Feature Elimination is a simple but powerful feature selection script with only a few parameters, serving as a very useful way to get an idea of which features are actually contributing to our model. In the next post, we will see how we can achieve similar results using Boruta. Stay tuned!
sjphotosphere · 7 years ago
Photo
(How To Learn About Mutual Funds)

This post will be a bit of a "back to basics" post. I've written about mutual funds in the past, but it has been a long time and I've never done a post like this one. If you want to see some of the older stuff on mutual funds, check these out:

Mutual Fund Expenses
Why Vanguard?
Avoid Actively Managed Mutual Funds
Survival Bias – Another Great Reason to Invest In Index Funds
Mutual Funds Versus ETFs

But today, I'm not going to give you a fish. I'm going to teach you how to fish. Mutual funds make up the majority of my investment portfolio and I think that should be the case for most investors out there. There are other ways to invest successfully, but they will require significantly more time and effort than building a mutual fund-based portfolio.

Upsides and Downsides

Mutual funds have a number of sweet benefits you can't get by buying individual stocks, bonds, and properties. These include:

Diversification – Buy thousands of securities in 10 seconds
Pooled Costs – Share the costs of the fund with thousands of others
Daily Liquidity – Buy or sell the entire investment any day the market is open
Professional Management – Don't know what you're doing? No problem. Hire someone for cheap who does
Automatic Reinvestment – While stocks often have DRIP programs, try doing that with a municipal bond or a duplex

Mutual funds have a few downsides as well, and in full disclosure they ought to be mentioned:

Diversification – It works both ways; your (or the manager's) best ideas get diluted
No Capital Loss Pass Through – While capital losses in the fund can be used to reduce the capital gains passed through, tax losses that occur on individual securities in the fund won't find their way on to your tax return. You can be assured you'll get a capital gains distribution most years though, whether the fund makes money or not.
Management Fees – While they can be very low, they often are not
Loads and 12b-1 Fees – While you don't have to buy a fund with these fees, lots of investors do
Manager Risk – The reason you hire a professional manager is because you recognize you're an idiot. But what if the manager is too?

What Mutual Funds Should You Buy?

(Instead of paying mutual fund loads, save your money and go heli-skiing)

This is a "back to basics" post, so let's make this real basic. If the name of your mutual fund does not have one or more of the following words in it:

Vanguard
DFA
TSP
Index
Spartan

you probably shouldn't buy it. That doesn't mean that any fund with one of these words in it is a good fund, nor that every fund without one of these words is a bad fund, but it's a pretty darn good first screen.

What Are Acceptable Fees?

I've written before about mutual fund fees. There are a number of fees associated with mutual funds. Most of them you don't have to pay.

Expense Ratio: Don't pay one over 1% and try to keep it under 0.2%.
Load: Don't pay one at all. This is supposed to compensate your "advisor" for his advice. In reality, it's a commission for a commissioned salesman. Since the best funds don't charge loads, why would you pay extra to get a crummier fund? You wouldn't, unless you don't know. Now you know, and knowing is half the battle. And if you need advice, go to someone who sells advice (i.e. a fee-only, not fee-based, planner/investment manager), not products.
12b-1 fee: Just like a load, this is an unnecessary fee. Since the best mutual funds don't have one, if the fund you're looking at has one, then you know it's a crummy mutual fund. It doesn't even matter what the theory behind 12b-1 fees was/is (the theory is BS anyway).
Buy/Sell Fees: Some funds, including some of those at Vanguard, have buy and sell fees. It might be structured so you get hit with a sell fee only if you don't hold on to the fund for a period of time like 6 months or 5 years.
Try to avoid these as much as possible. If you are really, really interested in the fund/asset class and are committed to it for a long time and the fee is low, then maybe it's okay to pay.

What About ETFs?

Exchange Traded Funds are just mutual funds that you can buy and sell during the day instead of at 4:00 pm. They're not necessarily good or bad, just slightly different. They are certainly a little more complicated to use, so have a good reason (such as lower overall expenses) to use an ETF over a mutual fund.

Actively Managed Versus Index Funds

I love it when people call the frequently seen argument about active management a "debate." It's not a debate, and if it ever was, it was over a decade or two ago. An actively managed mutual fund has a manager who tries to buy the good securities and avoid the bad ones. A passively managed mutual fund has a manager (mostly a computer) who just buys all the securities and keeps costs as low as possible. It turns out that it is very hard for a mutual fund manager to add enough value to overcome the costs of active management over the long run, especially in a taxable account. In fact, it is so hard that an individual investor even bothering to choose an active manager is probably making a mistake. The data, which I don't have room to recount here, is pretty overwhelming. So at least until you know something, stick with passive (index) mutual funds. Chances are that once you do know something you won't change your strategy, and you'll be glad you started with it. And you'll probably send me a nice thank-you email in a few years and I like those.

Which Mutual Funds Should I Invest In?

Okay, you got the message and you're looking for an appropriately risky mix of low-cost, passively-managed, broadly diversified index mutual funds mostly from Vanguard. But then you go to the Vanguard site and it's overwhelming. I mean "there are eight money market funds, and I don't even know what money market funds are." There are 37 bond funds.
And dozens and dozens of index funds. Too many choices lead to analysis paralysis. I was in a restaurant recently and I was handed a menu. There were three options on it. That was awesome. I think all menus should be like that. The happiness literature tells us that we like to have choices and feel in control, but that the fewer choices we have, the happier we'll be. So let me try to simplify things a bit. We're going to work our way down the entire page listing the Vanguard mutual funds by asset class. (Remember "asset class" is the type of investment the mutual fund invests in.) By the way, this is one of the most important pages on the internet for a Do-It-Yourself investor. If you don't have an investment advisor, you should know it like the back of your hand.

Okay, let's walk through this. First, what's a money market fund? Well, it's basically a bank account. There are some subtle differences, but not enough that you really need to spend a lot of time on them. Basically money market funds make very short term loans to companies and federal, state, and local governments. In return, they are paid interest. After paying their expenses, whatever interest is left over is paid to you. They are very safe investments in that you are unlikely to lose money in them. But don't expect to make much. In fact, for the last 5-8 years, you've made less than the rate of inflation in money market funds.

As you can see, there are two types of money market funds. There are "taxable" ones and "tax-exempt" ones. The tax-exempt ones are like municipal bonds. You're loaning money to state and local governments. To incentivize you to do so, you get a federal tax break and maybe a state and local tax break on the interest. So as you might expect, the interest on these is generally lower than on a taxable fund, but if you're in a high tax bracket, you may come out ahead after tax even with that lower interest payment.
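The taxable-versus-tax-exempt comparison is one line of arithmetic. A quick sketch (the yields and tax bracket below are invented for illustration, not quotes from any fund):

```python
def after_tax_yield(taxable_yield, marginal_rate):
    """What a taxable money market or bond yield leaves you after tax."""
    return taxable_yield * (1 - marginal_rate)

# A hypothetical 2.0% taxable yield in a 40% bracket nets 1.2%, so a 1.5%
# tax-exempt fund wins despite its lower headline yield.
net = after_tax_yield(0.020, 0.40)
```

Run the same comparison with your own bracket before choosing between the two fund types.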
As we move left to right here, we see the name of the fund, the ticker symbol (ignore this), the expense ratio (never ignore these, but if you're on the Vanguard site, they're all pretty low), and then we come to the price of the shares. In a money market fund, the price is always $1.00. The next two columns give you the change in the share price yesterday, both in dollar terms and percentage terms. Since the price of a money market fund never changes, those should also always be zero. The next column is important. This shows you the yield on the mutual fund. Remember that yield is not return for most mutual funds, but for a money market fund (and a bank account) they are essentially interchangeable. Finally, we come to the "return" figures. Remember that it is not wise to choose a mutual fund primarily based on past returns, but it is a good idea to have some idea of what you can expect from this mutual fund in a given economic environment. The first column is the year to date return (interesting, but not very useful) and then Vanguard publishes the 1 year, 5 year, 10 year, and "since inception" return. As you can see, the last decade has not been kind to money market funds, but the "since inception" numbers and dates tell you that things were not always like this.

Okay, let's move on to bonds. Remember a bond is a loan to someone, but it's a longer loan than the ones that go in a money market fund. Because of this, bond funds can't keep the share price at $1.00. As Jack Bogle has said, you can have stable principal or you can have stable yield, but you can't have both. With a bond, you get a stable yield and a variable principal (unless held to term). With a money market fund, you get a stable principal, but a variable yield. However, when you throw a bunch of bonds into a bond fund, the yield is only kind of stable, especially with economic fluctuations. Let's go down the left hand column first.
Luckily, at Vanguard the names of the funds actually tell you what they're invested in. At other mutual fund companies, you might actually have to read the prospectus to get that information. No wonder everyone is pulling their assets from other mutual fund companies and sending them to Vanguard. At any rate, the first fund is GNMA. Ginnie Mae is a semi-government agency that does mortgages. So the bonds in this fund are loans to home owners. You're buying mortgages. Where does the money go when you pay your mortgage? It doesn't go to the bank. They sold your mortgage to someone like this fund two weeks after you got it. So when people pay on the mortgages you own through this mutual fund, you make money. When they don't pay, well, you don't make money.

The next fund is "Inflation Protected Securities." That means Treasury Inflation Protected Securities, or TIPS. These are bonds whose value is indexed to inflation. This is one of my favorite funds and one I've owned for years.

The next fund is Intermediate Term Bond Index Admiral Shares. That means it invests in all types of bonds that are of an intermediate duration and uses an index fund strategy. It buys both corporate bonds (loans to Ford and Apple) and government bonds (treasuries). This fund doesn't hold GNMA bonds. The "admiral" means you have to put at least $10K into it. If you don't have $10K, you have to buy the "investor" shares, which have a slightly higher expense ratio and usually a $3K minimum. I also like this fund and use it in my parents' portfolio. The next fund is just like it, except no corporates. The fifth fund down doesn't have the word "index" in it. It is actively managed and invests only in treasury bonds. Luckily, even the actively managed bond funds at Vanguard act like index funds, so there isn't a bad fund on this list.
Moving left to right, we see various expense ratios, prices that aren't stable (but really don't move much; I mean, you can handle swings of 0.27% per day, which are actually pretty big for a bond fund), and higher yields and returns than you see from money market funds. Be aware the TIPS fund yield is a "real" (i.e. after-inflation) yield. If it were a nominal yield, you would be better off putting money in your mattress than investing in that. As you scroll down the page you will also notice there are corporate bond funds (guess what they invest in) and tax-exempt bond funds (just like the tax-exempt money market funds). In the interest of time, we'll skip through all that and get to the Balanced Fund section.

What is a balanced fund? It invests in both stocks and bonds at varying ratios depending on the strategy. Why might you want to use one? Mostly to keep things simple. You only have to own one fund and you get to own all kinds of assets all over the world without any hassle. I use them (okay, one of them) for things like my kids' Roth IRAs. Vanguard has a number of different types of balanced funds. The Target Retirement funds are supposed to be chosen by your retirement date. The further you are out from retirement, the more aggressive the fund is (i.e. more stocks, fewer bonds). Then the fund gradually becomes less aggressive as the years go by. The Target Risk funds also contain a reasonable mix of stocks and bonds, but they don't become less aggressive as time goes by. They just stay the same. Then there are more traditional balanced funds, including both index funds and some of Vanguard's most successful actively managed funds. Finally, there is the managed payout fund, which tries to keep a constant "pay-out" despite widely fluctuating asset values. That's kind of fun to watch to see if Vanguard can do it, but I wouldn't actually invest in it. Now let's move on to a more exciting part of the portfolio: the stocks!
Remember when you own a stock you own a tiny piece of a real, live company with real, live customers. When they make money, you make money. When they lose money, you lose money. In the short run, there is also an impressive speculative component, but in the long run, you're just buying a piece of a (hopefully) profitable enterprise.

First we see US Large Cap Stock (or Equity) Mutual Funds. Dave Ramsey (and other people who were investing in the 90s) calls these Growth and Income funds. You'll notice Vanguard has a couple dozen of these. Which one should you invest in? This one. That was easy, wasn't it? It is also a good example of why you shouldn't choose a fund based on performance since inception. As you scan down that list, you'll see funds with very different inception dates, and the date has more to do with the return since inception than anything the mutual fund actually does or has control over. But moving left to right, you'll see slight changes in the asset class column. Some funds invest in Large Cap Growth stocks, some invest in Large Cap Value stocks, and most invest in Large Cap Blend (growth and value) stocks. You can also see the difference in expenses between an index fund and an actively managed fund. Even at Vanguard, you can see an 8-fold difference in expense ratio. That is not easy for a manager to overcome. Stock funds have yield too, although instead of coming from a bond coupon, it comes from stock dividends, and thus isn't nearly as stable. You'll also notice that returns, particularly for the last few years, are dramatically higher for stock funds than balanced, bond, and money market funds.

As you scroll down, you'll come to Mid Cap stock funds (Dave Ramsey calls these "Growth" funds) and Small Cap stock funds ("Aggressive Growth"). Then you move into international funds. Remember this section includes both stock funds and bond funds, but again, the names are descriptive. Developed Markets include mostly Europe, Australia, and Japan.
Emerging markets are places like Brazil, Russia, India, China, most of the Pacific Rim, and most of Central and South America. The best thing for most investors to do is scroll to the bottom of this section and look at the two “Total International” funds. The first buys all the bonds in the world outside of the US and the other buys all the stocks in the world outside the US. I’ve been using the Total International Stock Index fund for more than a decade in my portfolio. The next section down is for “Global” stock funds. There is an important bit of terminology here. When it comes to investing, “International” means outside the US and “Global” means the entire world including the US. These funds are all a bit small and a bit expensive. I’ve never invested in any of them. The Total World Stock Index has potential, but still hasn’t caught on much after almost a decade. You can buy its components cheaper separately. Finally, we get to the bottom. If you want to invest at Vanguard, but still want some excitement in your life, this is your place. Why buy a diversified portfolio of stocks when you can get a concentrated one? If you learned something in the previous 3000 words of this post, you have no business buying any of these funds. That said, I’ve owned all of them at one point or another and they’re a lot of fun. I mean, look at Energy- 33% last year alone! And Precious Metals- 76% last year! Whoohooo! (And I owned both of them last year- bragging rights for cocktail parties.) Guess what? They go down just as fast. In fact, precious metals still has a markedly negative return over the last 10 years. Health Care is one of Vanguard’s long-term successes in active management. But they had a pretty rough year last year, underperforming the overall market by 10%. Lots of people hold a little slice of REITs in their portfolio in hopes that they will act differently from other stocks due to the slightly different structure. 
I lost 78% of my money in that fund in the 2008 bear market. Use extreme caution with any of these four funds, even if they do have the word Vanguard in their name. You should not have a large portion of your portfolio in any of them.

Prospectus

As you can see, this page alone gives you a lot of information about a mutual fund once you know how to read it. You can get even more information from the Prospectus and Annual Report, which I also recommend you at least skim. In fact, let's look at one now. Just click on a mutual fund link. Let's do the REIT Index Admiral Fund for convenience. It'll take you here: This is the fund page. It gives you even more information about the fund including what it invests in, what the fees are, what the past performance is, etc. If you want even more information, click on "View Prospectus and Reports." Then read the prospectus. There's a short version (8 pages) and a long version (53 pages). The short version is probably good enough. I would concentrate on these sections: Tons of interesting information on the 2nd page. First, you learn what it invests in. Unsurprisingly, it invests the entire fund in Real Estate Investment Trusts. You also learn the strategy: it tries to track the performance of an index. In other words, it just buys all the publicly traded REITs.

The fee section is also interesting. Well, maybe not for Vanguard funds, but when you compare it to another fund. You see there are no loads, purchase fees, sales fees, redemption fees, account services fees (I know, it says $20 but that gets waived if you opt for electronic communications or if you have more than $10K in the fund), or 12b-1 fees. The expense ratio is a low, low 0.12%. Just for fun, let's look at a similar page from the prospectus of another mutual fund. How about the Alger Capital Appreciation Fund Class A. Its page looks like this: They have an investment objective too.
But it’s so friggin’ vague you have no idea what they’re doing. And check out those fees. Wow! Let’s start with the 5.25% load. Yup, that’s money right out of your pocket. Give your commissioned salesman $1000 to invest, and he takes $52.50, puts it in his pocket and invests $947.50. That’s going to take a little while to recover from. Oh wait, there’s more. Not only do you get to pay a “front-load” but you also get to pay a 1% back-load. I love the little extra kicker there- if the share value goes down, you pay a back load off what it used to be, not what it actually is at the time of sale. The ER is 0.79%, or approximately 16 times as high as a Vanguard Index Fund. But wait, there’s more. You can also pay a 12b-1 fee of 0.25-1%. And “other expenses” of 0.19%, whatever the heck those are. All in, you’re looking at 1.23% for the front-loaded shares. But wait, there’s more. Look at all those asterisks and fine print at the bottom! I’m not saying this fund sucks and you should avoid it….actually, that is what I’m saying. Given those high fees, you won’t be surprised to learn its recent performance was kind of crummy too. Last year, while the US stock market generated returns of 12.94%, this fund LOST MONEY. A LOT OF MONEY. -4.94%. That sucks and it certainly doesn’t sound like “capital appreciation” to me. Why are people still investing with those chumps? Because they’ve never read a blog post like this one. Okay, let’s go back to the Vanguard prospectus. This section is pretty important. It talks about the risks you’re running in this fund. Let’s just say it is a risky fund, but you should read and understand all of these before buying the fund. There is a reason the government requires them to tell you this. This is also a really useful page. It may give you some idea of what to expect in the fund. You’ll notice it has had some huge losses, such as in 2007 and 2008. -37.05% doesn’t sound too bad, right? But wait. 
Didn't I say I lost 78% of my money in this fund in that bear market? Yes I did. Bear in mind that performance data reported for calendar years will downplay what you will feel as an investor. You feel the peak to trough drop (and trough to peak rise), not the calendar year drop. Notice how few years there are with returns of 5-10%, which is what you expect the long-term return to be. Most years are big losses or big gains. That tells you it's a risky fund. Check out the tax data too. These are also mandatory disclosures. Notice the difference between the 10 year pre-tax return of 7.44% and the post-tax return (assuming maximum tax brackets) of 5.40%. That's a fairly tax-inefficient fund to lose 27% of its return to taxes. Compare those numbers to a more tax-efficient fund, like the Vanguard Total Stock Market Index Fund which lost just 19% to taxes, or a really tax-efficient fund like the Vanguard Intermediate Term Tax Exempt Bond Fund which only lost 3% to taxes.

Morningstar

Vanguard is pretty good at putting lots of very useful information on their website and in their prospectuses and reports. Probably because they don't have much to be ashamed of. But if you're looking up a mutual fund somewhere else, you may find it a little tougher to get the information you seek. Or you might just want more detail. In those cases, you can go to Morningstar, which provides all kinds of mutual fund information. There is some information behind a paywall, but everything you really want is in front of it. Let's take a look at that Vanguard REIT fund there. Most of the good stuff is under the "Performance," "Portfolio," and "Expense" tabs and is summarized at the bottom of the front page. This tells you what it is invested in (100% stocks; remember REITs are a type of stock) and mostly small to medium slightly growthy stocks.
At the bottom, you can see all of the money is invested in the real estate sector (no surprise there) and that their top holdings are all big real estate companies, some of which you might have even heard of. It is a fairly concentrated fund, with over 20% invested in just the top five holdings. The comparative performance data is also pretty useful. Look at the long-term % rank in category near the bottom. Over 5-10 years, this fund has outperformed 80-83% of the other funds in its category. That's pretty typical for an index fund. Despite whooping up on 4 out of 5 funds for a decade, Morningstar only gives it 3 out of 5 stars. That's another good point: when you go to Morningstar, you're looking for 3-4 star funds. 1-2 star funds tend to stay 1-2 star funds, but 5 star ratings do not predict future performance. Steady eddies are what you want. This post is way too long already, but I hope it has been educational. You can learn a lot about mutual funds without ever reading an investment book if you just know where to look on the internet and what you're looking for. What do you think? How did you learn about mutual funds? What do you think a beginning investor needs to know? Do you invest in mutual funds? Why or why not? Comment below!
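One way to make the fee comparisons above concrete is to compound them. This rough sketch assumes a hypothetical 7% gross annual return and ignores taxes; it is an illustration, not a forecast:

```python
def final_value(principal, years, gross_return, expense_ratio, front_load=0.0):
    """Approximate end value after a front load and an annual expense drag."""
    invested = principal * (1 - front_load)   # the load comes off the top
    return invested * (1 + gross_return - expense_ratio) ** years

# A 0.12% ER no-load index fund vs. a 5.25% front-load fund with 1.23% all-in expenses
cheap = final_value(10_000, 30, 0.07, 0.0012)
costly = final_value(10_000, 30, 0.07, 0.0123, front_load=0.0525)
```

On these assumptions, the low-cost fund finishes more than $20,000 ahead on a $10,000 investment, before the active manager has added any value at all.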
brajeshupadhyay · 4 years ago
Quote
The coronavirus has had a shattering effect on the country, and we are now in the deepest recession the UK has ever known. More than nine million people have been furloughed and 730,000 jobs have been lost. But Money Mail is here with a special edition to help you revive your finances and make sure you and your family are in the best possible position as the nation fights its way out of the economic crisis.

Take shelter: Our financial survival guide will help you put every penny to good use as we emerge from lockdown and battle the recession

It comes after our redundancy survival guide earlier this month. And last week we explained how you can set up your own business and take the first steps on a new career path. The virus has cost some households dearly, with many suffering huge income losses. Meanwhile, others have been able to save for the first time in years, as commuting costs vanished, along with the temptation to spend on dining out and holidays. Whatever your situation, today Money Mail will help you put every penny to good use as we emerge from lockdown and battle the recession. We have spoken to Britain's top money experts, and over the next eight pages we will explain how you can rebuild your finances, fill your war chest and plan for the future, whatever it may hold.

Take stock and set a household budget

Before taking any action, you need to grab your calculator, bank statements and bills and draw up a household budget. Below you'll find a print-out-and-keep budget planner to help you measure your income and outgoings. Fill this in to identify any savings that might be made. You could also make your own budget calculator on a computer spreadsheet. This vital tool can help you slash unnecessary spending and plot a savings strategy. Then use our guides to savings and investments here and here to make your money work harder. John Ellmore, director of the finance comparison site KnowYourMoney.co.uk, says: 'It is important not to panic.
Rather, people must take stock.' You will need to review your budget at regular intervals to keep on top of your spending. And don't forget to look at statements so you can prepare for big annual spends, such as Christmas and holidays.

Here's your print-out-and-keep budget planner to help you measure your income and outgoings. Fill this in to identify any savings that might be made.

Becky O'Connor, personal finance specialist at insurer Royal London, recommends checking your bank balance daily to keep a close eye on outgoings. She also suggests arranging for major transactions and loan repayments to come out immediately after payday — so you know what you will have left for the rest of the month. You could go further and work out a daily budget, too. But Ms O'Connor warns that no month is typical and you should always be prepared for unexpected expenditure. Sarah Coles, personal finance expert at investment firm Hargreaves Lansdown, says it's worth having a 'plan B', adding: 'If your circumstances change for the worse, you need to be able to slash your costs quickly, so plan this in advance. 'This doesn't have to be a budget that you can live with for ever: it is designed to get you through the worst of circumstances.'

Seek out any chance to save

Once you have a comprehensive list of all your outgoings, you may well be shocked at how much you are spending each month. Go through your list with a fine-tooth comb to find any unnecessary spending. When you have a total for this, you may again be surprised at how quickly it all adds up. Perhaps you pay regularly for a service you find you don't use very often, such as Amazon Prime or Netflix? Laura Suter, personal finance analyst at investment broker AJ Bell, says: 'Work out whether you're getting value for money and still using the service — if you are not, cancel it.' But Ms Coles warns not to sacrifice all the things that make you happy — because if you are miserable, you may be more likely to bust your budget.
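If a spreadsheet feels like overkill, the planner's arithmetic fits in a few lines of code. The categories and figures below are made up for illustration:

```python
def monthly_balance(income, outgoings):
    """Surplus (positive) or shortfall (negative) for the month."""
    return income - sum(outgoings.values())

# Hypothetical monthly figures in pounds
outgoings = {"mortgage": 950, "energy": 120, "food": 400,
             "subscriptions": 35, "transport": 150}
surplus = monthly_balance(2400, outgoings)  # 745 left to save or trim further
```

Rerun it whenever a bill changes, and a negative result tells you immediately which month needs the fine-tooth comb.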
The internet can also help you save. Sell unwanted clutter or buy cheaply on sites such as eBay, Shpock and Facebook, or use local groups on the social media site to borrow expensive items such as lawn mowers and power tools. Try planning meals in advance to avoid spending on takeaways, and use a food-sharing app such as Olio, where neighbours exchange food they no longer need for free.

How I lopped £1,000 a year off my bills

Stay-at-home dad Richard Jackson saves his family close to £1,000 every year with his switching strategy. Richard, 39, sets reminders on his mobile phone when his energy, mobile, television and broadband deals are due to expire. He then uses price comparison websites to seek out the cheapest offers. Richard, who lives in Devon with his wife and two boys, says the family has managed to save more than £500 a month in lockdown after spending on meals out, petrol and outgoings plummeted. And he has managed to bolster the family finances even further by switching energy deals from EDF to Shell Energy – saving £13 a month. He has also pocketed £100 offered to banking customers who switch their current account to Halifax. Richard says that anyone looking to save like him should make notes of when their household bill deals are due to expire. He says: 'If you don't save yourself the money, the companies will just take it off you. You need to be organised. It doesn't take ten minutes to do it. 'They entice you with an introductory deal and hope you don't leave, but I will leave if the deal isn't good enough.' The Jacksons, who also received a refund for their cancelled cruise holiday, are saving the money to help them through a major building project at the family home.

Keep up good money habits

Ask yourself what you can truly afford to save every month and commit to doing so.
Set up a direct debit so the money is moved automatically on payday, then you cannot spend it. But do not save more than you can afford — if the sum is too high, you may panic and cancel the deposits. Ms O’Connor also recommends setting a savings target – for example, £2,000 by the end of the year.

Lockdown forced many of us into a savings habit, as we were simply unable to spend. It made us realise just how much was being lashed out on luxuries. But how can you make these good habits stick? A Hargreaves Lansdown survey during lockdown found that about one person in three would go out less, buy fewer clothes and avoid impulse purchases in future to save money. Meanwhile, one in five said they were likely to save on commuting costs, with employers now happier to have staff working from home.

Ms Coles suggests simply thinking twice about ‘habitual’ spending – so consider whether you really need something before plucking it off the shelf in a shop. You might also try giving yourself 24 hours before buying anything new, to avoid impulse purchases. If you think it would help, you could make it harder for yourself to spend rashly by removing any card details saved online and by unsubscribing from emails sent by stores which tempt you to spend.

Before you take any action, you need to grab your calculator, bank statements and bills, and draw up a household budget.

Go hunting for better deals

The largest monthly outgoing for most households is the mortgage – and this is perhaps where the biggest saving can be found. If you are on your lender’s standard variable rate, it is likely you can get a better deal. Rates hit record lows in June, before the average rates for both two and five-year fixed-rate deals fell even further to 1.99 per cent and 2.25 per cent respectively last month. So now is the ideal time to lock in to a cheap deal. Ms Suter says: ‘Once you’ve tackled that big outgoing, look at bills that have crept up.
Whether it’s switching to a cheaper energy deal, realising your Sky package has shot up in price, or cutting the cost of your car insurance, there’s lots you can do just by going on a comparison website and hunting for a new deal.’ You can sign up to an energy supplier auto-switching service such as Switchd or Flipper. They will move you to the cheapest deal automatically, taking the hassle out of switching and removing the need to worry that you are not on a good rate. Ms O’Connor adds: ‘Deals and rates change all the time, but no one is going to tell you. It’s up to you to be aware of what is available.’

The comparison site Uswitch estimates that fixed energy plans used by 1.5 million households will end this summer — and that those homeowners could save an average £149 a year if they sought out a better one. Energy prices are now relatively low, making this the perfect time to switch tariffs. Sarah Broomfield, energy expert at Uswitch, says: ‘Now is a crucial time to bag yourself a new deal, before you get dumped on your supplier’s standard variable tariff.’

About a third of broadband customers are thought to be paying more than they should because their contracts have run out. And finding a decent internet deal is more important now than ever, as so many people are working from home. The soaring cost of new mobile phones also presents an opportunity to save: instead of upgrading to the latest model, keep your current phone and move to a Sim-only deal when your contract ends. Uswitch says someone who bought an iPhone two years ago on an average 24-month contract could save £600 by switching to a Sim-only deal with the same amount of data and minute allowances.

Get organised to deal with debt

Charity StepChange predicted that 4.6 million people would together rack up £6 billion in debts during the crisis, while millions will have taken payment holidays from mortgages and loan repayments. Ms Coles says: ‘For some people, cuts in income or losing work has meant a horrible battle to make ends meet, forcing them to max-out on debt. If you’re in this position, it’s time to take stock and assess how to stop things getting worse.’

Debts with the highest interest-rate charges should be cleared first. If you cannot pay off your debt any time soon, you could at least shift it to a cheaper rate – for instance, with a 0 per cent credit card or a personal loan with a low rate. Ms Suter says: ‘This means you can use more of your capital to pay down the actual debt each month, rather than just paying off the interest.’

If you were forced to ask your bank or building society for a three-month mortgage payment holiday, it is worth remembering that you will now owe more interest. Taking such a break will increase debt by more than £1,000 on a typical mortgage. Also, bear in mind that overdrafts are no longer a cheap and convenient way to borrow. After orders from the regulator to make fees fairer and simpler, most major banks have brought in charges of up to 50 per cent. For vulnerable borrowers who use their overdrafts without permission, the charges will probably mean they pay less. But those who dip in and out of an agreed overdraft could find it is a very expensive way to borrow.

If you need to take out a loan, local credit unions may offer the fairest rates around. These mutual organisations act in the best interests of their members. Find your nearest at abcul.coop/home. Mr Ellmore, from KnowYourMoney.co.uk, says it is also important to maintain a strong credit score, so bills should be paid on time if possible.
This will put you in the best position if you need to apply for a loan. Yet Ms Coles says it is now worth drawing a line under taking on more debt. ‘There is no point in working hard to pay it all back if you are still prepared to run up debts,’ she says. ‘If this is going to be a life-long change, you need to commit to steering clear of needless debt for good.’ Fresh figures, from the banking body UK Finance, yesterday showed credit card debt had fallen by 12.6 per cent so far this year.

The Covid-19 pandemic has sparked the deepest recession the UK has ever known. More than 9 million people have been furloughed and job losses have hit 730,000.

Start building a safety net

Once you have cleared any debts and boosted your savings, you can focus on building a safety net in case you lose your job or face unexpected costs. One adult in eight in Britain has no savings at all, and 45 per cent of the country has less than £2,000 put away. Experts say it is a good idea to build up an emergency fund to last between three and six months, to cover essential outgoings such as mortgage or rent payments, household bills and food.

This pot of money should be left in an easy-access savings account, even though these might not offer the most generous interest rates. After the Bank of England slashed the base rate to a record low of 0.1 per cent, all High Street banks cut their savings rates to a pittance at 0.01 per cent. But you can still earn at least 1 per cent more by finding a top-paying account. Ms Suter says: ‘The Bank of England cutting rates and the high demand for savings accounts means you’re not going to get loads of interest. But far too much money is sitting in current accounts earning nothing, or old savings accounts that pay a pittance.’

If you have lost your job, you may also have lost important benefits, such as pension payments and insurance. Money Mail’s guide to retirement in a recession can be found here.
But if you can afford it, consider taking out insurance in case you can no longer work. Unemployment and redundancy policies have been pulled from the market because of the pandemic, but you can still buy income protection and critical illness cover. Income protection will pay a tax-free wage if you cannot work due to health problems. Critical illness cover, meanwhile, will pay out a lump sum to cover mortgage payments and provide financial security if you have a serious health condition diagnosed. The price of the policy will depend on individual circumstances, including your age, health, occupation and smoking history, but the average premium is about £25 to £30 a month. Protection insurance specialist Kevin Carr says: ‘If you, or anybody else, relies on your salary to pay the bills, [these policies are] worth thinking about.’

Don’t hesitate to ask for help

There is no shame in asking for help or claiming benefits if you find yourself struggling. Ms O’Connor says: ‘Managing finances is hard at the best of times – but at the worst of times, it can feel impossible without falling into debt. No one wants anyone to go hungry or have their home repossessed. There is always help if you know where to look.’ She adds: ‘As a rule, if you lose your job or a big chunk of income, there is probably some form of help out there for you, so don’t suffer alone. When it comes to money problems, prevention is always better than cure.’

Families should also make sure they are claiming any benefits to which they are entitled. Check your eligibility for tax-free childcare, which pays 20p in every £1 spent. Child benefit of £21.05 a week for your first child and £13.95 per subsequent child is available. If you earn more than £50,000, you pay a charge on some of the benefit. If you lose your job, you may be entitled to Universal Credit. It depends on whether you are still earning or have savings. Check at: entitledto.co.uk/benefits-calculator.
Help is also available from the Money Advice Service, Citizens Advice and debt charities such as Turn2Us and StepChange.

Could you swallow the 5:2 money diet?

Here, personal finance expert Laura Whateley, author of Money: A User’s Guide, gives us her top ten finance tips:

1 Financial advisers recommend you have enough emergency cash to cover at least three months of essential bills. I’d argue that the pandemic has upended traditional personal finance rules. You may have discovered that three months is not enough, or that your finances are more resilient than expected. How much you need in savings is personal. Write down your essential outgoings and the minimum on which you could survive if you were made redundant or forced to take a pay cut.

2 Try budgeting using the Japanese art of Kakeibo, where you split your outgoings into ‘unavoidable’ and ‘non-essential’. Set yourself weekly spend and save targets, then detail each day, in a notebook, where your money goes. Committing it to paper, rather than just using your phone, helps to make it stick.

3 However, technology can help you assess all your spending and saving in one place. Consolidate your accounts using an app such as Money Dashboard or Yolt, so you can see on one screen your true financial situation. Combine it with a partner’s accounts if you dare.

4 Contactless tapping, online shopping and buy-now, pay-later plans, all of which proliferated during lockdown, have made it easier than ever for us to lose track of where our money goes. Cash is psychologically more difficult to part with. Use it when you can.

5 Know what your debts are costing you. If you had £1,000 on a credit card with an interest rate of 20 per cent and you cleared only minimum payments, it would take you more than 18 years to pay off.
6 If you struggle to budget, try setting some rules — perhaps a 5:2 money diet, where you have two no-spend days a week, or a 24-hour cooling-off period for anything you’ve put in your online shopping basket.

7 Apps can make saving fun. Chip and Plum both analyse your spending and move money automatically. The app IFTTT pairs with online bank Monzo to enable you to take part in challenges where, for example, it moves £1 on a Monday into a pot, £2 on a Tuesday, all the way through to £7 on a Sunday. After a year, you’ll have nearly £1,500.

8 Make the most of all benefits and tax reliefs available, including marriage tax allowance, worth £250 a year; tax-free childcare, worth £2,000 a year; and, if you’re working from home, you can claim up to £6 a week for equipment or expenses.

9 Don’t ignore the longer term. It’s tempting to pause payments into a pension during uncertainty, but you’ll be turning down ‘free’ money in the form of tax relief and a top-up from an employer, as well as overlooking the most powerful money tool available: compounding returns.

10 Understand your own weaknesses and the principles of behavioural economics. We are loss-averse, which means we are more upset about losing money than we are happy about gaining it. The endowment effect means we overvalue things we already own when we could get a better deal elsewhere. And anchoring is where we base how much we are willing to pay on a high figure quoted, not on what we think something is worth. Budgeting is as much about our emotions as it is about spreadsheets.

The post How to shock proof your finances: Protect your family from recession appeared first on Shri Times News.
0 notes
douglassmiith · 5 years ago
Text
Better Reducers With Immer
About The Author
Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a … More about Chidi …
In this article, we’re going to learn how to use Immer to write reducers. When working with React, we maintain a lot of state. To make updates to our state, we need to write a lot of reducers. Manually writing reducers results in bloated code where we have to touch almost every part of our state. This is tedious and error-prone. In this article, we’re going to see how Immer brings more simplicity to the process of writing state reducers.
As a React developer, you should be already familiar with the principle that state should not be mutated directly. You might be wondering what that means (most of us had that confusion when we started out).
This tutorial will do justice to that: you will understand what immutable state is and the need for it. You’ll also learn how to use Immer to work with immutable state and the benefits of using it. You can find the code in this article in this Github repo.
Immutability In JavaScript And Why It Matters
Immer.js is a tiny JavaScript library written by Michel Weststrate whose stated mission is to allow you “to work with immutable state in a more convenient way.”
But before diving into Immer, let’s quickly have a refresher about immutability in JavaScript and why it matters in a React application.
The latest ECMAScript (aka JavaScript) standard defines nine built-in data types. Of these nine types, there are six that are referred to as primitive values/types. These six primitives are undefined, number, string, boolean, bigint, and symbol. A simple check with JavaScript’s typeof operator will reveal the types of these data types.
console.log(typeof 5) // number
console.log(typeof 'name') // string
A primitive is a value that is not an object and has no methods. Most important to our present discussion is the fact that a primitive’s value cannot be changed once it is created. Thus, primitives are said to be immutable.
The remaining three types are null, object, and function. We can also check their types using the typeof operator.
console.log(typeof null) // object
console.log(typeof [0, 1]) // object
console.log(typeof {name: 'name'}) // object
const f = () => ({})
console.log(typeof f) // function
These types are mutable. This means that their values can be changed at any time after they are created.
You might be wondering why I have the array [0, 1] up there. Well, in JavaScriptland, an array is simply a special type of object. In case you’re also wondering about null and how it is different from undefined: undefined simply means that we haven’t set a value for a variable, while null is a special case for objects. If you know something should be an object but the object is not there, you simply return null.
To illustrate with a simple example, try running the code below in your browser console.
console.log('aeiou'.match(/[x]/gi)) // null
console.log('xyzabc'.match(/[x]/gi)) // [ 'x' ]
String.prototype.match should return an array, which is an object type. When it can’t find such an object, it returns null. Returning undefined wouldn’t make sense here either.
Enough with that. Let’s return to discussing immutability.
According to the MDN docs:
“All types except objects define immutable values (that is, values which can’t be changed).”
This statement includes functions because they are a special type of JavaScript object. See function definition here.
Let’s take a quick look at what mutable and immutable data types mean in practice. Try running the below code in your browser console.
let a = 5;
let b = a
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 5
b = 7
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 7
Our results show that even though b is “derived” from a, changing the value of b doesn’t affect the value of a. This arises from the fact that when the JavaScript engine executes the statement b = a, it creates a new, separate memory location, puts 5 in there, and points b at that location.
What about objects? Consider the below code.
let c = { name: 'some name'}
let d = c;
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`) // c: {"name":"some name"}; d: {"name":"some name"}
d.name = 'new name'
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`) // c: {"name":"new name"}; d: {"name":"new name"}
We can see that changing the name property via variable d also changes it in c. This arises from the fact that when the JavaScript engine executes the statement c = { name: 'some name' }, it creates a space in memory, puts the object inside, and points c at it. Then, when it executes the statement d = c, the engine just points d to the same location; it doesn’t create a new memory location. Thus any changes to the items in d are implicitly operations on the items in c. Without much effort, we can see why this is trouble in the making.
Imagine you were developing a React application and somewhere you want to show the user’s name as some name by reading from variable c. But somewhere else you had introduced a bug in your code by manipulating the object d. This would result in the user’s name appearing as new name. If c and d were primitives we wouldn’t have that problem. But primitives are too simple for the kinds of state a typical React application has to maintain.
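One way to avoid this aliasing bug without any library is to copy the object before changing it, for example with the spread syntax. Here is a small sketch reusing the c and d variables from above:

```javascript
let c = { name: 'some name' }

// Spread creates a brand-new object in a new memory location,
// so d no longer points at the same object as c.
let d = { ...c }
d.name = 'new name'

console.log(c.name) // 'some name' (c is unaffected)
console.log(d.name) // 'new name'
```

Note that spread only copies one level deep: nested objects would still be shared between c and d. Keeping track of that bookkeeping by hand for deeply nested state is exactly the tedium Immer removes.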
This is about the major reasons why it is important to maintain an immutable state in your application. I encourage you to check out a few other considerations by reading this short section from the Immutable.js README: the case for immutability.
Having understood why we need immutability in a React application, let’s now take a look at how Immer tackles the problem with its produce function.
Immer’s produce Function
Immer’s core API is very small, and the main function you’ll be working with is the produce function. produce simply takes an initial state and a callback that defines how the state should be mutated. The callback itself receives a draft copy of the state (identical in shape, but still a copy) to which it makes all the intended updates. Finally, it produces a new, immutable state with all the changes applied.
The general pattern for this sort of state update is:
// produce signature
produce(state, callback) => nextState
Let’s see how this works in practice.
import produce from 'immer'

const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redUX', installed: true },
  ],
}

// to add a new package
const newPackage = { name: 'immer', installed: false }

const nextState = produce(initState, draft => {
  draft.packages.push(newPackage)
})
In the above code, we simply pass the starting state and a callback that specifies how we want the mutations to happen. It’s as simple as that. We don’t need to touch any other part of the state. It leaves initState untouched and structurally shares those parts of the state that we didn’t touch between the starting and the new states. One such part in our state is the pets array. The produced nextState is an immutable state tree that has the changes we’ve made as well as the parts we didn’t modify.
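You can check the structural sharing yourself by comparing references. The sketch below performs the equivalent update by hand with spread syntax, so it runs without Immer installed; produce gives you the same reference behaviour automatically:

```javascript
const initState = {
  pets: ['dog', 'cat'],
  packages: [{ name: 'react', installed: true }],
}

// The kind of immutable update produce generates for us:
const nextState = {
  ...initState,
  packages: [...initState.packages, { name: 'immer', installed: false }],
}

console.log(nextState === initState)                   // false: a new state tree
console.log(nextState.pets === initState.pets)         // true: untouched parts are shared
console.log(nextState.packages === initState.packages) // false: the changed branch is new
```

Only the branch of the tree we touched (packages) gets a new identity; everything else keeps the same references, which is what makes React's shallow-equality checks cheap.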
Armed with this simple, but useful knowledge, let’s take a look at how produce can help us simplify our React reducers.
Writing Reducers With Immer
Suppose we have the state object defined below
const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redUX', installed: true },
  ],
};
And we want to add a new package object and, in a subsequent step, set its installed key to true:
const newPackage = { name: 'immer', installed: false };
If we were to do this the usual way with JavaScript’s object and array spread syntax, our state reducer might look like below.
const updateReducer = (state = initState, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      return {
        ...state,
        packages: [...state.packages, action.package],
      };
    case 'UPDATE_INSTALLED':
      return {
        ...state,
        packages: state.packages.map(pack =>
          pack.name === action.name
            ? { ...pack, installed: action.installed }
            : pack
        ),
      };
    default:
      return state;
  }
};
We can see that this is unnecessarily verbose and prone to mistakes for this relatively simple state object. We also have to touch every part of the state, which is unnecessary. Let’s see how we can simplify this with Immer.
const updateReducerWithProduce = (state = initState, action) =>
  produce(state, draft => {
    switch (action.type) {
      case 'ADD_PACKAGE':
        draft.packages.push(action.package);
        break;
      case 'UPDATE_INSTALLED': {
        // `package` is a reserved word in strict mode, so name it pkg
        const pkg = draft.packages.find(p => p.name === action.name);
        if (pkg) pkg.installed = action.installed;
        break;
      }
      default:
        break;
    }
  });
And with a few lines of code, we have greatly simplified our reducer. Also, if we fall into the default case, Immer just returns the draft state without us needing to do anything. Notice how there is less boilerplate code and the elimination of state spreading. With Immer, we only concern ourselves with the part of the state that we want to update. If we can’t find such an item, as in the `UPDATE_INSTALLED` action, we simply move on without touching anything else. The `produce` function also lends itself to currying. Passing a callback as the first argument to `produce` is intended to be used for currying. The signature of the curried `produce` is
// curried produce signature
produce(callback) => (state) => nextState
Let’s see how we can update our earlier state with a curried produce. Our curried produce would look like this:
const curriedProduce = produce((draft, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      draft.packages.push(action.package);
      break;
    case 'SET_INSTALLED': {
      // `package` is a reserved word in strict mode, so name it pkg
      const pkg = draft.packages.find(p => p.name === action.name);
      if (pkg) pkg.installed = action.installed;
      break;
    }
    default:
      break;
  }
});
The curried produce accepts a function as its first argument and returns an updater that now only requires a state from which to produce the next state. The first argument of the callback is the draft state (derived from the state you pass when calling the returned updater), followed by any number of additional arguments you wish to pass to the function.
All we need to do now to use this function is to pass in the state from which we want to produce the next state and the action object like so.
// add a new package to the starting state
const nextState = curriedProduce(initState, {
  type: 'ADD_PACKAGE',
  package: newPackage,
});

// update an item in the recently produced state
const nextState2 = curriedProduce(nextState, {
  type: 'SET_INSTALLED',
  name: 'immer',
  installed: true,
});
Note that in a React application when using the useReducer hook, we don’t need to pass the state explicitly as I’ve done above because it takes care of that.
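To make the curried shape concrete outside of React, here is a hand-rolled stand-in. This is not Immer itself: it deep-copies the whole state with JSON methods, so there is no structural sharing, no freezing, and it only handles JSON-safe data, but it has the same (recipe) => (state, action) => nextState shape:

```javascript
// NOT Immer: a naive illustration of the curried updater shape.
const curriedUpdate = recipe => (state, action) => {
  const draft = JSON.parse(JSON.stringify(state)) // crude stand-in for Immer's draft
  recipe(draft, action)
  return draft
}

const updater = curriedUpdate((draft, action) => {
  if (action.type === 'ADD_PACKAGE') draft.packages.push(action.package)
})

const state0 = { packages: [] }
const state1 = updater(state0, {
  type: 'ADD_PACKAGE',
  package: { name: 'immer', installed: false },
})

console.log(state1.packages.length) // 1
console.log(state0.packages.length) // 0 (the original state is untouched)
```

Immer achieves the same caller-facing behaviour far more efficiently, using proxies instead of a wholesale deep copy.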
You might be wondering: would Immer be getting hooks, like everything in React these days? Well, there is good news. Immer has two hooks for working with state: the useImmer and the useImmerReducer hooks. Let’s see how they work.
Using The useImmer And useImmerReducer Hooks
The best description of the useImmer hook comes from the use-immer README itself.
useImmer(initialState) is very similar to useState. The function returns a tuple, the first value of the tuple is the current state, the second is the updater function, which accepts an immer producer function, in which the draft can be mutated freely, until the producer ends and the changes will be made immutable and become the next state.
To make use of these hooks, you have to install them separately, in addition to the main Immer library.
yarn add immer use-immer
In code terms, the useImmer hook looks like below
import React from "react";
import { useImmer } from "use-immer";

const initState = {}
const [data, updateData] = useImmer(initState)
And it’s as simple as that. You could say it’s React’s useState, but on steroids. Using the update function is very simple: it receives the draft state and you can modify it as much as you want, like below.
// make changes to data
updateData(draft => {
  // modify the draft as much as you want.
})
The creator of Immer has provided a codesandbox example which you can play around with to see how it works.
useImmerReducer is similarly simple to use if you’ve used React’s useReducer hook. It has a similar signature. Let’s see what that looks like in code terms.
import React from "react";
import { useImmerReducer } from "use-immer";

const initState = {}
const reducer = (draft, action) => {
  switch (action.type) {
    default:
      break;
  }
}

const [data, dataDispatch] = useImmerReducer(reducer, initState);
We can see that the reducer receives a draft state which we can modify as much as we want. There’s also a codesandbox example here for you to experiment with.
And that is how simple it is to use Immer hooks. But in case you’re still wondering why you should use Immer in your project, here’s a summary of some of the most important reasons I’ve found for using Immer.
Why You Should Use Immer
If you’ve written state management logic for any length of time you’ll quickly appreciate the simplicity Immer offers. But that is not the only benefit Immer offers.
When you use Immer, you end up writing less boilerplate code as we have seen with relatively simple reducers. This also makes deep updates relatively easy.
With libraries such as Immutable.js, you have to learn a new API to reap the benefits of immutability. But with Immer you achieve the same thing with normal JavaScript Objects, Arrays, Sets, and Maps. There’s nothing new to learn.
Immer also provides structural sharing by default. This simply means that when you make changes to a state object, Immer automatically shares the unchanged parts of the state between the new state and the previous state.
With Immer, you also get automatic object freezing which means that you cannot make changes to the produced state. For instance, when I started using Immer, I tried to apply the sort method on an array of objects returned by Immer’s produce function. It threw an error telling me I can’t make any changes to the array. I had to apply the array slice method before applying sort. Once again, the produced nextState is an immutable state tree.
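The same error is easy to reproduce with a plain frozen array, which is effectively what a produced state contains (a sketch using Object.freeze directly; Immer applies the freezing for you):

```javascript
const frozen = Object.freeze([3, 1, 2])

// sort() mutates the array in place, so it throws on a frozen array.
let threw = false
try {
  frozen.sort()
} catch (e) {
  threw = true
}
console.log(threw) // true

// Copy first with slice(), then sort the copy.
const sorted = frozen.slice().sort()
console.log(sorted) // [ 1, 2, 3 ]
console.log(frozen) // [ 3, 1, 2 ] (the frozen original is untouched)
```

The freeze is a feature, not a nuisance: it turns accidental mutations of your state into loud errors instead of silent bugs.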
Immer is also strongly typed and very small at just 3KB when gzipped.
Conclusion
When it comes to managing state updates, using Immer is a no-brainer for me. It’s a very lightweight library that lets you keep using all the things you’ve learned about JavaScript without trying to learn something entirely new. I encourage you to install it in your project and start using it right away. You can use it in existing projects and incrementally update your reducers.
I’d also encourage you to read the Immer introductory blog post by Michel Weststrate. The part I find especially interesting is the “How does Immer work?” section, which explains how Immer leverages language features such as proxies and concepts such as copy-on-write.
I’d also encourage you to take a look at this blog post: Immutability in JavaScript: A Contrarian View, where the author, Steven de Salas, presents his thoughts about the merits of pursuing immutability.
I hope that with the things you’ve learned in this post you can start using Immer right away.
Related Resources
use-immer, GitHub
Immer, GitHub
function, MDN web docs, Mozilla
proxy, MDN web docs, Mozilla
Object (computer science), Wikipedia
“Immutability in JS,” Orji Chidi Matthew, GitHub
“ECMAScript Data Types and Values,” Ecma International
Immutable collections for JavaScript, Immutable.js , GitHub
“The case for Immutability,” Immutable.js , GitHub
(ks, ra, il)
0 notes
laurelkrugerr · 5 years ago
Text
Better Reducers With Immer
About The Author
Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a … More about Chidi …
In this article, we’re going to learn how to use Immer to write reducers. When working with React, we maintain a lot of state. To make updates to our state, we need to write a lot of reducers. Manually writing reducers results in bloated code where we have to touch almost every part of our state. This is tedious and error-prone. In this article, we’re going to see how Immer brings more simplicity to the process of writing state reducers.
As a React developer, you should be already familiar with the principle that state should not be mutated directly. You might be wondering what that means (most of us had that confusion when we started out).
This tutorial will do justice to that: you will understand what immutable state is and the need for it. You’ll also learn how to use Immer to work with immutable state and the benefits of using it. You can find the code in this article in this Github repo.
Immutability In JavaScript And Why It Matters
Immer.js is a tiny JavaScript library was written by Michel Weststrate whose stated mission is to allow you “to work with immutable state in a more convenient way.”
But before diving into Immer, let’s quickly have a refresher about immutability in JavaScript and why it matters in a React application.
The latest ECMAScript (aka JavaScript) standard defines nine built-in data types. Of these nine types, there are six that are referred to as primitive values/types. These six primitives are undefined, number, string, boolean, bigint, and symbol. A simple check with JavaScript’s typeof operator will reveal the types of these data types.
console.log(typeof 5) // number console.log(typeof 'name') // string console.log(typeof (1
A primitive is a value that is not an object and has no methods. Most important to our present discussion is the fact that a primitive’s value cannot be changed once it is created. Thus, primitives are said to be immutable.
The remaining three types are null, object, and function. We can also check their types using the typeof operator.
console.log(typeof null) // object console.log(typeof [0, 1]) // object console.log(typeof {name: 'name'}) // object const f = () => ({}) console.log(typeof f) // function
These types are mutable. This means that their values can be changed at any time after they are created.
You might be wondering why I have the array [0, 1] up there. Well, in JavaScript-land, an array is simply a special type of object. In case you’re also wondering about null and how it differs from undefined: undefined simply means that we haven’t set a value for a variable, while null is a special case for objects. If you know something should be an object but the object is not there, you simply return null.
To illustrate with a simple example, try running the code below in your browser console.
console.log('aeiou'.match(/[x]/gi)) // null
console.log('xyzabc'.match(/[x]/gi)) // [ 'x' ]
String.prototype.match should return an array, which is an object type. When it can’t find such an object, it returns null. Returning undefined wouldn’t make sense here either.
Enough with that. Let’s return to discussing immutability.
According to the MDN docs:
“All types except objects define immutable values (that is, values which can’t be changed).”
This statement includes functions because they are a special type of JavaScript object. See function definition here.
Let’s take a quick look at what mutable and immutable data types mean in practice. Try running the below code in your browser console.
let a = 5;
let b = a
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 5
b = 7
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 7
Our results show that even though b is “derived” from a, changing the value of b doesn’t affect the value of a. This arises from the fact that when the JavaScript engine executes the statement b = a, it creates a new, separate memory location, puts 5 in there, and points b at that location.
What about objects? Consider the below code.
let c = { name: 'some name'}
let d = c;
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`)
// c: {"name":"some name"}; d: {"name":"some name"}
d.name = 'new name'
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`)
// c: {"name":"new name"}; d: {"name":"new name"}
We can see that changing the name property via variable d also changes it in c. This arises from the fact that when the JavaScript engine executes the statement c = { name: 'some name' }, it creates a space in memory, puts the object inside, and points c at it. Then, when it executes the statement d = c, the engine just points d to the same location; it doesn’t create a new memory location. Thus any changes to the items in d are implicitly an operation on the items in c. Without much effort, we can see why this is trouble in the making.
Imagine you were developing a React application and somewhere you want to show the user’s name as some name by reading from variable c. But somewhere else you had introduced a bug in your code by manipulating the object d. This would result in the user’s name appearing as new name. If c and d were primitives we wouldn’t have that problem. But primitives are too simple for the kinds of state a typical React application has to maintain.
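One manual way out of this trap is to copy instead of alias. The sketch below (plain JavaScript, illustrative only) uses object spread; note that spread is a shallow copy, so nested references are still shared:

```javascript
const c = { name: 'some name', tags: ['a'] };

// Spread creates a new top-level object instead of a second pointer.
const d = { ...c, name: 'new name' };

console.log(c.name); // 'some name' (c is no longer affected)
console.log(d.name); // 'new name'

// But the copy is shallow: nested references are still shared.
console.log(c.tags === d.tags); // true
```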
These are among the major reasons why it is important to maintain an immutable state in your application. I encourage you to check out a few other considerations by reading this short section from the Immutable.js README: the case for immutability.
Having understood why we need immutability in a React application, let’s now take a look at how Immer tackles the problem with its produce function.
Immer’s produce Function
Immer’s core API is very small, and the main function you’ll be working with is the produce function. produce simply takes an initial state and a callback that defines how the state should be mutated. The callback receives a draft copy of the state (identical in content, but still a separate copy) to which it makes all the intended updates. Finally, it produces a new, immutable state with all the changes applied.
The general pattern for this sort of state update is:
// produce signature
produce(state, callback) => nextState
Let’s see how this works in practice.
import produce from 'immer'

const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redUX', installed: true },
  ],
}

// to add a new package
const newPackage = { name: 'immer', installed: false }

const nextState = produce(initState, draft => {
  draft.packages.push(newPackage)
})
In the above code, we simply pass the starting state and a callback that specifies how we want the mutations to happen. It’s as simple as that. We don’t need to touch any other part of the state. It leaves initState untouched and structurally shares those parts of the state that we didn’t touch between the starting and the new states. One such part in our state is the pets array. The produced nextState is an immutable state tree that has the changes we’ve made as well as the parts we didn’t modify.
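To illustrate the contract, and only the contract, here is a toy stand-in for produce. This is not how Immer is implemented (Immer uses proxies and copy-on-write rather than a full clone), but it shows the same input/output behavior:

```javascript
// Toy stand-in for Immer's produce. NOT the real implementation:
// Immer uses Proxies and copy-on-write; this sketch just deep-clones.
const toyProduce = (state, recipe) => {
  const draft = structuredClone(state); // deep copy (Node 17+/modern browsers)
  recipe(draft);                        // the recipe mutates the draft freely
  return Object.freeze(draft);          // the result is (shallowly) frozen
};

const initState = { pets: ['dog', 'cat'], packages: [] };
const nextState = toyProduce(initState, draft => {
  draft.packages.push({ name: 'immer', installed: false });
});

console.log(initState.packages.length); // 0 (the original is untouched)
console.log(nextState.packages.length); // 1
```

Unlike the real produce, this toy version clones even the untouched parts, so it offers no structural sharing.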
Armed with this simple, but useful knowledge, let’s take a look at how produce can help us simplify our React reducers.
Writing Reducers With Immer
Suppose we have the state object defined below
const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redUX', installed: true },
  ],
};
And we wanted to add a new package object and, in a subsequent step, set its installed key to true:
const newPackage = { name: 'immer', installed: false };
If we were to do this the usual way with JavaScript’s object and array spread syntax, our state reducer might look like the one below.
const updateReducer = (state = initState, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      return {
        ...state,
        packages: [...state.packages, action.package],
      };
    case 'UPDATE_INSTALLED':
      return {
        ...state,
        packages: state.packages.map(pack =>
          pack.name === action.name
            ? { ...pack, installed: action.installed }
            : pack
        ),
      };
    default:
      return state;
  }
};
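To see the spread-based reducer behave immutably, here is a trimmed, runnable sketch (plain JavaScript; the action shape is assumed, as above):

```javascript
const initState = {
  pets: ['dog', 'cat'],
  packages: [{ name: 'react', installed: true }],
};

// Trimmed version of the spread-based reducer above.
const updateReducer = (state = initState, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      return { ...state, packages: [...state.packages, action.package] };
    default:
      return state;
  }
};

const next = updateReducer(initState, {
  type: 'ADD_PACKAGE',
  package: { name: 'immer', installed: false },
});

console.log(next.packages.length);         // 2
console.log(initState.packages.length);    // 1 (the original state is untouched)
console.log(next.pets === initState.pets); // true (unchanged parts are shared)
```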
We can see that this is verbose and prone to mistakes even for this relatively simple state object. We also have to touch every part of the state, which shouldn’t be necessary. Let’s see how we can simplify this with Immer.
const updateReducerWithProduce = (state = initState, action) =>
  produce(state, draft => {
    switch (action.type) {
      case 'ADD_PACKAGE':
        draft.packages.push(action.package);
        break;
      case 'UPDATE_INSTALLED': {
        // `package` is a reserved word in strict mode, so use `pkg` instead
        const pkg = draft.packages.find(p => p.name === action.name);
        if (pkg) pkg.installed = action.installed;
        break;
      }
      default:
        break;
    }
  });
And with a few lines of code, we have greatly simplified our reducer. Also, if we fall into the default case, Immer just returns the draft state without us needing to do anything. Notice the reduced boilerplate and the elimination of state spreading. With Immer, we only concern ourselves with the part of the state that we want to update. If we can’t find such an item, as in the `UPDATE_INSTALLED` action, we simply move on without touching anything else. The `produce` function also lends itself to currying: passing a callback as the first argument to `produce` is intended for currying. The signature of the curried `produce` is
// curried produce signature
produce(callback) => (state) => nextState
Let’s see how we can update our earlier state with a curried produce. Our curried produce would look like this:
const curriedProduce = produce((draft, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      draft.packages.push(action.package);
      break;
    case 'SET_INSTALLED': {
      // `package` is a reserved word in strict mode, so use `pkg` instead
      const pkg = draft.packages.find(p => p.name === action.name);
      if (pkg) pkg.installed = action.installed;
      break;
    }
    default:
      break;
  }
});
The curried produce accepts a function as its first argument and returns a function that now only requires the state from which to produce the next state. The first argument of the callback is the draft state (which will be derived from the state passed when calling the curried produce); then follow any number of additional arguments we wish to pass to the function.
All we need to do now to use this function is to pass in the state from which we want to produce the next state and the action object like so.
// add a new package to the starting state
const nextState = curriedProduce(initState, {
  type: 'ADD_PACKAGE',
  package: newPackage,
});

// update an item in the recently produced state
const nextState2 = curriedProduce(nextState, {
  type: 'SET_INSTALLED',
  name: 'immer',
  installed: true,
});
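The same recipe-first, state-later pattern can be sketched without Immer. This is only an analogy: it uses a naive JSON deep copy instead of Immer's proxies, and the helper name is invented for illustration:

```javascript
// Plain-JS analogy of curried produce: fix the recipe first, pass state later.
const makeUpdater = recipe => (state, ...args) => {
  const draft = JSON.parse(JSON.stringify(state)); // naive deep copy, sketch only
  recipe(draft, ...args);
  return draft;
};

const addPet = makeUpdater((draft, pet) => draft.pets.push(pet));
const state = { pets: ['dog'] };
const next = addPet(state, 'fish');

console.log(next.pets);  // [ 'dog', 'fish' ]
console.log(state.pets); // [ 'dog' ] (the original is untouched)
```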
Note that in a React application when using the useReducer hook, we don’t need to pass the state explicitly as I’ve done above because it takes care of that.
You might be wondering: would Immer be getting hooks, like everything else in React these days? Well, good news: Immer provides two hooks for working with state, the useImmer and useImmerReducer hooks. Let’s see how they work.
Using The useImmer And useImmerReducer Hooks
The best description of the useImmer hook comes from the use-immer README itself.
useImmer(initialState) is very similar to useState. The function returns a tuple, the first value of the tuple is the current state, the second is the updater function, which accepts an immer producer function, in which the draft can be mutated freely, until the producer ends and the changes will be made immutable and become the next state.
To make use of these hooks, you have to install them separately, in addition to the main Immer library.
yarn add immer use-immer
In code terms, the useImmer hook looks like below
import React from "react";
import { useImmer } from "use-immer";

const initState = {}
const [ data, updateData ] = useImmer(initState)
And it’s as simple as that. You could say it’s React’s useState, but on steroids. Using the update function is very simple: it receives the draft state, and you can modify it as much as you want, like below.
// make changes to data
updateData(draft => {
  // modify the draft as much as you want.
})
The creator of Immer has provided a codesandbox example which you can play around with to see how it works.
useImmerReducer is similarly simple to use if you’ve used React’s useReducer hook. It has a similar signature. Let’s see what that looks like in code terms.
import React from "react";
import { useImmerReducer } from "use-immer";

const initState = {}

const reducer = (draft, action) => {
  switch(action.type) {
    default:
      break;
  }
}

const [data, dataDispatch] = useImmerReducer(reducer, initState);
We can see that the reducer receives a draft state which we can modify as much as we want. There’s also a codesandbox example here for you to experiment with.
And that is how simple it is to use Immer hooks. But in case you’re still wondering why you should use Immer in your project, here’s a summary of some of the most important reasons I’ve found for using Immer.
Why You Should Use Immer
If you’ve written state management logic for any length of time you’ll quickly appreciate the simplicity Immer offers. But that is not the only benefit Immer offers.
When you use Immer, you end up writing less boilerplate code as we have seen with relatively simple reducers. This also makes deep updates relatively easy.
With libraries such as Immutable.js, you have to learn a new API to reap the benefits of immutability. But with Immer you achieve the same thing with normal JavaScript Objects, Arrays, Sets, and Maps. There’s nothing new to learn.
Immer also provides structural sharing by default. This simply means that when you make changes to a state object, Immer automatically shares the unchanged parts of the state between the new state and the previous state.
With Immer, you also get automatic object freezing which means that you cannot make changes to the produced state. For instance, when I started using Immer, I tried to apply the sort method on an array of objects returned by Immer’s produce function. It threw an error telling me I can’t make any changes to the array. I had to apply the array slice method before applying sort. Once again, the produced nextState is an immutable state tree.
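You can see the same freezing effect with plain Object.freeze. This sketch is illustrative only (Immer freezes automatically; here we freeze by hand):

```javascript
// Freezing makes in-place mutation fail: Array.prototype.sort
// throws a TypeError when it tries to write into a frozen array.
const state = Object.freeze([{ n: 2 }, { n: 1 }]);

let threw = false;
try {
  state.sort((a, b) => a.n - b.n); // in-place sort on the frozen array
} catch (e) {
  threw = true; // TypeError: cannot write to a frozen array
}
console.log(threw); // true

// Copy first, then sort: the frozen original stays intact.
const sorted = state.slice().sort((a, b) => a.n - b.n);
console.log(sorted.map(x => x.n)); // [ 1, 2 ]
console.log(state.map(x => x.n));  // [ 2, 1 ]
```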
Immer is also strongly typed and very small at just 3KB when gzipped.
Conclusion
When it comes to managing state updates, using Immer is a no-brainer for me. It’s a very lightweight library that lets you keep using all the things you’ve learned about JavaScript without trying to learn something entirely new. I encourage you to install it in your project and start using it right away. You can adopt it in existing projects and incrementally update your reducers.
I’d also encourage you to read the Immer introductory blog post by Michel Weststrate. The part I find especially interesting is the “How does Immer work?” section, which explains how Immer leverages language features such as proxies and concepts such as copy-on-write.
I’d also encourage you to take a look at this blog post: Immutability in JavaScript: A Contrarian View, where the author, Steven de Salas, presents his thoughts about the merits of pursuing immutability.
I hope that with the things you’ve learned in this post you can start using Immer right away.
Related Resources
use-immer, GitHub
Immer, GitHub
function, MDN web docs, Mozilla
proxy, MDN web docs, Mozilla
Object (computer science), Wikipedia
“Immutability in JS,” Orji Chidi Matthew, GitHub
“ECMAScript Data Types and Values,” Ecma International
Immutable collections for JavaScript, Immutable.js, GitHub
“The case for Immutability,” Immutable.js, GitHub
source http://www.scpie.org/better-reducers-with-immer/ source https://scpie1.blogspot.com/2020/06/better-reducers-with-immer.html
Better Reducers With Immer
About The Author
Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a … More about Chidi …
In this article, we’re going to learn how to use Immer to write reducers. When working with React, we maintain a lot of state. To make updates to our state, we need to write a lot of reducers. Manually writing reducers results in bloated code where we have to touch almost every part of our state. This is tedious and error-prone. In this article, we’re going to see how Immer brings more simplicity to the process of writing state reducers.
As a React developer, you should be already familiar with the principle that state should not be mutated directly. You might be wondering what that means (most of us had that confusion when we started out).
This tutorial will do justice to that: you will understand what immutable state is and the need for it. You’ll also learn how to use Immer to work with immutable state and the benefits of using it. You can find the code in this article in this Github repo.
Immutability In JavaScript And Why It Matters
Immer.js is a tiny JavaScript library was written by Michel Weststrate whose stated mission is to allow you “to work with immutable state in a more convenient way.”
But before diving into Immer, let’s quickly have a refresher about immutability in JavaScript and why it matters in a React application.
The latest ECMAScript (aka JavaScript) standard defines nine built-in data types. Of these nine types, there are six that are referred to as primitive values/types. These six primitives are undefined, number, string, boolean, bigint, and symbol. A simple check with JavaScript’s typeof operator will reveal the types of these data types.
console.log(typeof 5) // number console.log(typeof 'name') // string console.log(typeof (1
A primitive is a value that is not an object and has no methods. Most important to our present discussion is the fact that a primitive’s value cannot be changed once it is created. Thus, primitives are said to be immutable.
The remaining three types are null, object, and function. We can also check their types using the typeof operator.
console.log(typeof null) // object console.log(typeof [0, 1]) // object console.log(typeof {name: 'name'}) // object const f = () => ({}) console.log(typeof f) // function
These types are mutable. This means that their values can be changed at any time after they are created.
You might be wondering why I have the array [0, 1] up there. Well, in JavaScriptland, an array is simply a special type of object. In case you’re also wondering about null and how it is different from undefined. undefined simply means that we haven’t set a value for a variable while null is a special case for objects. If you know something should be an object but the object is not there, you simply return null.
To illustrate with a simple example, try running the code below in your browser console.
console.log('aeiou'.match(/[x]/gi)) // null console.log('xyzabc'.match(/[x]/gi)) // [ 'x' ]
String.prototype.match should return an array, which is an object type. When it can’t find such an object, it returns null. Returning undefined wouldn’t make sense here either.
Enough with that. Let’s return to discussing immutability.
According to the MDN docs:
“All types except objects define immutable values (that is, values which can’t be changed).”
This statement includes functions because they are a special type of JavaScript object. See function definition here.
Let’s take a quick look at what mutable and immutable data types mean in practice. Try running the below code in your browser console.
let a = 5; let b = a console.log(`a: ${a}; b: ${b}`) // a: 5; b: 5 b = 7 console.log(`a: ${a}; b: ${b}`) // a: 5; b: 7
Our results show that even though b is “derived” from a, changing the value of b doesn’t affect the value of a. This arises from the fact that when the JavaScript engine executes the statement b = a, it creates a new, separate memory location, puts 5 in there, and points b at that location.
What about objects? Consider the below code.
let c = { name: 'some name'} let d = c; console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`) // {"name":"some name"}; d: {"name":"some name"} d.name = 'new name' console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`) // {"name":"new name"}; d: {"name":"new name"}
We can see that changing the name property via variable d also changes it in c. This arises from the fact that when the JavaScript engine executes the statement, c = { name: 'some name' }, the JavaScript engine creates a space in memory, puts the object inside, and points c at it. Then, when it executes the statement d = c, the JavaScript engine just points d to the same location. It doesn’t create a new memory location. Thus any changes to the items in d is implicitly an operation on the items in c. Without much effort, we can see why this is trouble in the making.
Imagine you were developing a React application and somewhere you want to show the user’s name as some name by reading from variable c. But somewhere else you had introduced a bug in your code by manipulating the object d. This would result in the user’s name appearing as new name. If c and d were primitives we wouldn’t have that problem. But primitives are too simple for the kinds of state a typical React application has to maintain.
This is about the major reasons why it is important to maintain an immutable state in your application. I encourage you to check out a few other considerations by reading this short section from the Immutable.js README: the case for immutability.
Having understood why we need immutability in a React application, let’s now take a look at how Immer tackles the problem with its produce function.
Immer’s produce Function
Immer’s core API is very small, and the main function you’ll be working with is the produce function. produce simply takes an initial state and a callback that defines how the state should be mutated. The callback itself receives a draft (identical, but still a copy) copy of the state to which it makes all the intended update. Finally, it produces a new, immutable state with all the changes applied.
The general pattern for this sort of state update is:
// produce signature produce(state, callback) => nextState
Let’s see how this works in practice.
import produce from 'immer' const initState = { pets: ['dog', 'cat'], packages: [ { name: 'react', installed: true }, { name: 'redUX', installed: true }, ], } // to add a new package const newPackage = { name: 'immer', installed: false } const nextState = produce(initState, draft => { draft.packages.push(newPackage) })
In the above code, we simply pass the starting state and a callback that specifies how we want the mutations to happen. It’s as simple as that. We don’t need to touch any other part of the state. It leaves initState untouched and structurally shares those parts of the state that we didn’t touch between the starting and the new states. One such part in our state is the pets array. The produced nextState is an immutable state tree that has the changes we’ve made as well as the parts we didn’t modify.
Armed with this simple, but useful knowledge, let’s take a look at how produce can help us simplify our React reducers.
Writing Reducers With Immer
Suppose we have the state object defined below
const initState = { pets: ['dog', 'cat'], packages: [ { name: 'react', installed: true }, { name: 'redUX', installed: true }, ], };
And we wanted to add a new object, and on a subsequent step, set its installed key to true
const newPackage = { name: 'immer', installed: false };
If we were to do this the usual way with JavaScripts object and array spread syntax, our state reducer might look like below.
const updateReducer = (state = initState, action) => { switch (action.type) { case 'ADD_PACKAGE': return { ...state, packages: [...state.packages, action.package], }; case 'UPDATE_INSTALLED': return { ...state, packages: state.packages.map(pack => pack.name === action.name ? { ...pack, installed: action.installed } : pack ), }; default: return state; } };
We can see that this is unnecessarily verbose and prone to mistakes for this relatively simple state object. We also have to touch every part of the state, which is unnecessary. Let’s see how we can simplify this with Immer.
const updateReducerWithProduce = (state = initState, action) => produce(state, draft => { switch (action.type) { case 'ADD_PACKAGE': draft.packages.push(action.package); break; case 'UPDATE_INSTALLED': { const package = draft.packages.filter(p => p.name === action.name)[0]; if (package) package.installed = action.installed; break; } default: break; } });
And with a few lines of code, we have greatly simplified our reducer. Also, if we fall into the default case, Immer just returns the draft state without us needing to do anything. Notice how there is less boilerplate code and the elimination of state spreading. With Immer, we only concern ourselves with the part of the state that we want to update. If we can’t find such an item, as in the `UPDATE_INSTALLED` action, we simply move on without touching anything else. The `produce` function also lends itself to currying. Passing a callback as the first argument to `produce` is intended to be used for currying. The signature of the curried `produce` is
//curried produce signature produce(callback) => (state) => nextState
Let’s see how we can update our earlier state with a curried produce. Our curried produce would look like this:
const curriedProduce = produce((draft, action) => { switch (action.type) { case 'ADD_PACKAGE': draft.packages.push(action.package); break; case 'SET_INSTALLED': { const package = draft.packages.filter(p => p.name === action.name)[0]; if (package) package.installed = action.installed; break; } default: break; } });
The curried produce function accepts a function as its first argument and returns a curried produce that only now requires a state from which to produce the next state. The first argument of the function is the draft state (which will be derived from the state to be passed when calling this curried produce). Then follows every number of arguments we wish to pass to the function.
All we need to do now to use this function is to pass in the state from which we want to produce the next state and the action object like so.
// add a new package to the starting state const nextState = curriedProduce(initState, { type: 'ADD_PACKAGE', package: newPackage, }); // update an item in the recently produced state const nextState2 = curriedProduce(nextState, { type: 'SET_INSTALLED', name: 'immer', installed: true, });
Note that in a React application when using the useReducer hook, we don’t need to pass the state explicitly as I’ve done above because it takes care of that.
You might be wondering, would Immer be getting a hook, like everything in React these days? Well, you’re in company with good news. Immer has two hooks for working with state: the useImmer and the useImmerReducer hooks. Let’s see how they work.
Using The useImmer And useImmerReducer Hooks
The best description of the useImmer hook comes from the use-immer README itself.
useImmer(initialState) is very similar to useState. The function returns a tuple, the first value of the tuple is the current state, the second is the updater function, which accepts an immer producer function, in which the draft can be mutated freely, until the producer ends and the changes will be made immutable and become the next state.
To make use of these hooks, you have to install them separately, in addition to the main Immer libarary.
yarn add immer use-immer
In code terms, the useImmer hook looks like below
import React from "react"; import { useImmer } from "use-immer"; const initState = {} const [ data, updateData ] = useImmer(initState)
And it’s as simple as that. You could say it’s React’s useState but with a bit of steroid. To use the update function is very simple. It receives the draft state and you can modify it as much as you want like below.
// make changes to data updateData(draft => { // modify the draft as much as you want. })
The creator of Immer has provided a codesandbox example which you can play around with to see how it works.
useImmerReducer is similarly simple to use if you’ve used React’s useReducer hook. It has a similar signature. Let’s see what that looks like in code terms.
import React from "react"; import { useImmerReducer } from "use-immer"; const initState = {} const reducer = (draft, action) => { switch(action.type) { default: break; } } const [data, dataDispatch] = useImmerReducer(reducer, initState);
We can see that the reducer receives a draft state which we can modify as much as we want. There’s also a codesandbox example here for you to experiment with.
And that is how simple it is to use Immer hooks. But in case you’re still wondering why you should use Immer in your project, here’s a summary of some of the most important reasons I’ve found for using Immer.
Why You Should Use Immer
If you’ve written state management logic for any length of time you’ll quickly appreciate the simplicity Immer offers. But that is not the only benefit Immer offers.
When you use Immer, you end up writing less boilerplate code as we have seen with relatively simple reducers. This also makes deep updates relatively easy.
With libraries such as Immutable.js, you have to learn a new API to reap the benefits of immutability. But with Immer you achieve the same thing with normal JavaScript Objects, Arrays, Sets, and Maps. There’s nothing new to learn.
Immer also provides structural sharing by default. This simply means that when you make changes to a state object, Immer automatically shares the unchanged parts of the state between the new state and the previous state.
With Immer, you also get automatic object freezing which means that you cannot make changes to the produced state. For instance, when I started using Immer, I tried to apply the sort method on an array of objects returned by Immer’s produce function. It threw an error telling me I can’t make any changes to the array. I had to apply the array slice method before applying sort. Once again, the produced nextState is an immutable state tree.
Immer is also strongly typed and very small at just 3KB when gzipped.
Conclusion
When it comes to managing state updates, using Immer is a no-brainer for me. It’s a very lightweight library that lets you keep using all the things you’ve learned about JavaScript without trying to learn something entirely new. I encourage you to install it in your project and start using it right away. You can add use it in existing projects and incrementally update your reducers.
I’d also encourage you to read the Immer introductory blog post by Michael Weststrate. The part I find especially interesting is the “How does Immer work?” section which explains how Immer leverages language features such as proxies and concepts such as copy-on-write.
I’d also encourage you to take a look at this blog post: Immutability in JavaScript: A Contratian View where the author, Steven de Salas, presents his thoughts about the merits of pursuing immutability.
I hope that with the things you’ve learned in this post you can start using Immer right away.
Related Resources
use-immer, GitHub
Immer, GitHub
function, MDN web docs, Mozilla
proxy, MDN web docs, Mozilla
Object (computer science), Wikipedia
“Immutability in JS,” Orji Chidi Matthew, GitHub
“ECMAScript Data Types and Values,” Ecma International
Immutable collections for JavaScript, Immutable.js , GitHub
“The case for Immutability,” Immutable.js , GitHub
(ks, ra, il)
Website Design & SEO Delray Beach by DBL07.co
Delray Beach SEO
source http://www.scpie.org/better-reducers-with-immer/ source https://scpie.tumblr.com/post/621096973700907008
0 notes
scpie · 5 years ago
Text
Better Reducers With Immer
About The Author
Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a … More about Chidi …
In this article, we’re going to learn how to use Immer to write reducers. When working with React, we maintain a lot of state. To make updates to our state, we need to write a lot of reducers. Manually writing reducers results in bloated code where we have to touch almost every part of our state. This is tedious and error-prone. In this article, we’re going to see how Immer brings more simplicity to the process of writing state reducers.
As a React developer, you should be already familiar with the principle that state should not be mutated directly. You might be wondering what that means (most of us had that confusion when we started out).
This tutorial will do justice to that: you will understand what immutable state is and the need for it. You’ll also learn how to use Immer to work with immutable state and the benefits of using it. You can find the code in this article in this Github repo.
Immutability In JavaScript And Why It Matters
Immer.js is a tiny JavaScript library was written by Michel Weststrate whose stated mission is to allow you “to work with immutable state in a more convenient way.”
But before diving into Immer, let’s quickly have a refresher about immutability in JavaScript and why it matters in a React application.
The latest ECMAScript (aka JavaScript) standard defines nine built-in data types. Of these nine types, there are six that are referred to as primitive values/types. These six primitives are undefined, number, string, boolean, bigint, and symbol. A simple check with JavaScript’s typeof operator will reveal the types of these data types.
console.log(typeof 5) // number console.log(typeof 'name') // string console.log(typeof (1
A primitive is a value that is not an object and has no methods. Most important to our present discussion is the fact that a primitive’s value cannot be changed once it is created. Thus, primitives are said to be immutable.
The remaining three types are null, object, and function. We can also check their types using the typeof operator.
console.log(typeof null) // object console.log(typeof [0, 1]) // object console.log(typeof {name: 'name'}) // object const f = () => ({}) console.log(typeof f) // function
These types are mutable. This means that their values can be changed at any time after they are created.
You might be wondering why I have the array [0, 1] up there. Well, in JavaScript land, an array is simply a special type of object. In case you’re also wondering how null differs from undefined: undefined simply means that a variable has not been assigned a value, while null is a special value for objects. If you know something should be an object but the object is not there, you return null.
To illustrate with a simple example, try running the code below in your browser console.
console.log('aeiou'.match(/[x]/gi)) // null
console.log('xyzabc'.match(/[x]/gi)) // [ 'x' ]
String.prototype.match should return an array, which is an object type. When it can’t find such an object, it returns null. Returning undefined wouldn’t make sense here.
Enough with that. Let’s return to discussing immutability.
According to the MDN docs:
“All types except objects define immutable values (that is, values which can’t be changed).”
This statement includes functions because they are a special type of JavaScript object. See function definition here.
Let’s take a quick look at what mutable and immutable data types mean in practice. Try running the below code in your browser console.
let a = 5
let b = a
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 5
b = 7
console.log(`a: ${a}; b: ${b}`) // a: 5; b: 7
Our results show that even though b is “derived” from a, changing the value of b doesn’t affect the value of a. This arises from the fact that when the JavaScript engine executes the statement b = a, it creates a new, separate memory location, puts 5 in there, and points b at that location.
What about objects? Consider the below code.
let c = { name: 'some name' }
let d = c
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`)
// c: {"name":"some name"}; d: {"name":"some name"}
d.name = 'new name'
console.log(`c: ${JSON.stringify(c)}; d: ${JSON.stringify(d)}`)
// c: {"name":"new name"}; d: {"name":"new name"}
We can see that changing the name property via variable d also changes it in c. This arises from the fact that when the JavaScript engine executes the statement c = { name: 'some name' }, it creates a space in memory, puts the object inside, and points c at it. Then, when it executes the statement d = c, it simply points d at the same location; it doesn’t create a new memory location. Thus any changes to the items in d are implicitly operations on the items in c. Without much effort, we can see why this is trouble in the making.
Imagine you were developing a React application and somewhere you want to show the user’s name as some name by reading from variable c. But somewhere else you had introduced a bug in your code by manipulating the object d. This would result in the user’s name appearing as new name. If c and d were primitives we wouldn’t have that problem. But primitives are too simple for the kinds of state a typical React application has to maintain.
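One common way out — sticking to plain JavaScript, no library yet — is to copy the object instead of aliasing it, for example with spread syntax, so the two variables point at different memory locations:

```javascript
const c = { name: 'some name' }

// spread creates a new object with the same top-level properties
const d = { ...c }

d.name = 'new name'

console.log(c.name) // some name — unaffected this time
console.log(d.name) // new name
```

The catch is that spread only makes a shallow copy: nested objects are still shared by reference, which is exactly why hand-written immutable updates get verbose as state grows deeper.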
These are among the major reasons why it is important to maintain an immutable state in your application. I encourage you to check out a few other considerations by reading this short section from the Immutable.js README: the case for immutability.
Having understood why we need immutability in a React application, let’s now take a look at how Immer tackles the problem with its produce function.
Immer’s produce Function
Immer’s core API is very small, and the main function you’ll be working with is the produce function. produce simply takes an initial state and a callback that defines how the state should be mutated. The callback receives a draft copy of the state — identical in content, but safely separate — to which it applies all the intended updates. Finally, it produces a new, immutable state with all the changes applied.
The general pattern for this sort of state update is:
// produce signature
produce(state, callback) => nextState
Let’s see how this works in practice.
import produce from 'immer'

const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redux', installed: true },
  ],
}

// to add a new package
const newPackage = { name: 'immer', installed: false }

const nextState = produce(initState, draft => {
  draft.packages.push(newPackage)
})
In the above code, we simply pass the starting state and a callback that specifies how we want the mutations to happen. It’s as simple as that. We don’t need to touch any other part of the state. It leaves initState untouched and structurally shares those parts of the state that we didn’t touch between the starting and the new states. One such part in our state is the pets array. The produced nextState is an immutable state tree that has the changes we’ve made as well as the parts we didn’t modify.
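You can observe the same structural-sharing idea with nothing but spread syntax — a hand-rolled immutable update copies only the path it touches and reuses everything else by reference. This is a plain-JavaScript sketch of what produce does for you automatically:

```javascript
const initState = {
  pets: ['dog', 'cat'],
  packages: [{ name: 'react', installed: true }],
}
const newPackage = { name: 'immer', installed: false }

// copy only the path we touch; everything else is reused by reference
const nextState = {
  ...initState,
  packages: [...initState.packages, newPackage],
}

console.log(nextState === initState)           // false: a new state tree
console.log(nextState.pets === initState.pets) // true: the untouched part is shared
console.log(nextState.packages.length)         // 2
```

The difference is that Immer does this bookkeeping for you: you mutate the draft, and produce works out which paths were touched and which can be shared.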
Armed with this simple, but useful knowledge, let’s take a look at how produce can help us simplify our React reducers.
Writing Reducers With Immer
Suppose we have the state object defined below:
const initState = {
  pets: ['dog', 'cat'],
  packages: [
    { name: 'react', installed: true },
    { name: 'redux', installed: true },
  ],
};
Now suppose we want to add a new package and, in a subsequent step, set its installed key to true:
const newPackage = { name: 'immer', installed: false };
If we were to do this the usual way with JavaScript’s object and array spread syntax, our state reducer might look like this:
const updateReducer = (state = initState, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      return {
        ...state,
        packages: [...state.packages, action.package],
      };
    case 'UPDATE_INSTALLED':
      return {
        ...state,
        packages: state.packages.map(pack =>
          pack.name === action.name
            ? { ...pack, installed: action.installed }
            : pack
        ),
      };
    default:
      return state;
  }
};
We can see that this is unnecessarily verbose and prone to mistakes for this relatively simple state object. We also have to touch every part of the state, which is unnecessary. Let’s see how we can simplify this with Immer.
const updateReducerWithProduce = (state = initState, action) =>
  produce(state, draft => {
    switch (action.type) {
      case 'ADD_PACKAGE':
        draft.packages.push(action.package);
        break;
      case 'UPDATE_INSTALLED': {
        // `package` is a reserved word, so we call it `pkg`
        const pkg = draft.packages.find(p => p.name === action.name);
        if (pkg) pkg.installed = action.installed;
        break;
      }
      default:
        break;
    }
  });
And with a few lines of code, we have greatly simplified our reducer. If we fall into the default case, Immer just returns the draft state without us needing to do anything. Notice how much less boilerplate there is, and that state spreading has been eliminated entirely. With Immer, we only concern ourselves with the part of the state that we want to update. If we can’t find the item in question, as in the `UPDATE_INSTALLED` action, we simply move on without touching anything else.

The `produce` function also lends itself to currying: passing a callback as the first argument to `produce` is intended for exactly that. The signature of the curried `produce` is:
// curried produce signature
produce(callback) => (state) => nextState
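If currying is new to you, here is a tiny plain-JavaScript illustration of the same shape — a function that takes the “recipe” first and the data later. This is a toy, not Immer’s implementation:

```javascript
// a toy curried "producer": takes the recipe first, the state later
const makeProducer = recipe => state => recipe(state)

const addPet = makeProducer(state => ({
  ...state,
  pets: [...state.pets, 'parrot'],
}))

const state = { pets: ['dog', 'cat'] }
const next = addPet(state)

console.log(next.pets)  // [ 'dog', 'cat', 'parrot' ]
console.log(state.pets) // [ 'dog', 'cat' ] — original untouched
```

Immer’s curried produce works the same way, except the recipe receives a mutable draft and the copying is handled for you.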
Let’s see how we can update our earlier state with a curried produce. Our curried produce would look like this:
const curriedProduce = produce((draft, action) => {
  switch (action.type) {
    case 'ADD_PACKAGE':
      draft.packages.push(action.package);
      break;
    case 'SET_INSTALLED': {
      // `package` is a reserved word, so we call it `pkg`
      const pkg = draft.packages.find(p => p.name === action.name);
      if (pkg) pkg.installed = action.installed;
      break;
    }
    default:
      break;
  }
});
The curried produce accepts a function as its first argument and returns a “pre-loaded” producer that now only requires a state from which to produce the next state. The first argument of the callback is the draft state (derived from the state you pass when calling the curried producer), followed by any number of additional arguments you wish to pass along.
All we need to do now to use this function is to pass in the state from which we want to produce the next state and the action object like so.
// add a new package to the starting state
const nextState = curriedProduce(initState, {
  type: 'ADD_PACKAGE',
  package: newPackage,
});

// update an item in the recently produced state
const nextState2 = curriedProduce(nextState, {
  type: 'SET_INSTALLED',
  name: 'immer',
  installed: true,
});
Note that in a React application when using the useReducer hook, we don’t need to pass the state explicitly as I’ve done above because it takes care of that.
You might be wondering: would Immer be getting hooks, like everything in React these days? Well, you’re in luck. Immer has two hooks for working with state: the useImmer and the useImmerReducer hooks. Let’s see how they work.
Using The useImmer And useImmerReducer Hooks
The best description of the useImmer hook comes from the use-immer README itself.
useImmer(initialState) is very similar to useState. The function returns a tuple, the first value of the tuple is the current state, the second is the updater function, which accepts an immer producer function, in which the draft can be mutated freely, until the producer ends and the changes will be made immutable and become the next state.
To make use of these hooks, you have to install them separately, in addition to the main Immer library.
yarn add immer use-immer
In code terms, the useImmer hook looks like this:
import React from "react";
import { useImmer } from "use-immer";

const initState = {}
const [data, updateData] = useImmer(initState)
And it’s as simple as that. You could say it’s React’s useState, but on steroids. Using the update function is very simple: it receives the draft state, and you can modify it as much as you want, like below.
// make changes to data
updateData(draft => {
  // modify the draft as much as you want.
})
The creator of Immer has provided a codesandbox example which you can play around with to see how it works.
useImmerReducer is similarly simple to use if you’ve used React’s useReducer hook. It has a similar signature. Let’s see what that looks like in code terms.
import React from "react";
import { useImmerReducer } from "use-immer";

const initState = {}

const reducer = (draft, action) => {
  switch (action.type) {
    default:
      break;
  }
}

const [data, dataDispatch] = useImmerReducer(reducer, initState);
We can see that the reducer receives a draft state which we can modify as much as we want. There’s also a codesandbox example here for you to experiment with.
And that is how simple it is to use Immer hooks. But in case you’re still wondering why you should use Immer in your project, here’s a summary of some of the most important reasons I’ve found for using Immer.
Why You Should Use Immer
If you’ve written state management logic for any length of time you’ll quickly appreciate the simplicity Immer offers. But that is not the only benefit Immer offers.
When you use Immer, you end up writing less boilerplate code as we have seen with relatively simple reducers. This also makes deep updates relatively easy.
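To appreciate the “deep updates” point, here is what flipping a single nested flag looks like by hand with spread syntax — every level along the path needs its own copy (plain JavaScript, no Immer; the shape of the state object is made up for illustration):

```javascript
const state = {
  user: {
    profile: { name: 'Ada', settings: { darkMode: false } },
  },
}

// every level along the path must be copied explicitly
const next = {
  ...state,
  user: {
    ...state.user,
    profile: {
      ...state.user.profile,
      settings: { ...state.user.profile.settings, darkMode: true },
    },
  },
}

console.log(next.user.profile.settings.darkMode)  // true
console.log(state.user.profile.settings.darkMode) // false
```

With Immer, the equivalent update inside produce is a single line: draft.user.profile.settings.darkMode = true.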
With libraries such as Immutable.js, you have to learn a new API to reap the benefits of immutability. But with Immer you achieve the same thing with normal JavaScript Objects, Arrays, Sets, and Maps. There’s nothing new to learn.
Immer also provides structural sharing by default. This simply means that when you make changes to a state object, Immer automatically shares the unchanged parts of the state between the new state and the previous state.
With Immer, you also get automatic object freezing which means that you cannot make changes to the produced state. For instance, when I started using Immer, I tried to apply the sort method on an array of objects returned by Immer’s produce function. It threw an error telling me I can’t make any changes to the array. I had to apply the array slice method before applying sort. Once again, the produced nextState is an immutable state tree.
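You can reproduce that behavior with plain Object.freeze, which is comparable to what Immer applies to produced state: mutating methods like sort throw on a frozen array, while slice first hands you a fresh, mutable copy:

```javascript
const frozen = Object.freeze([3, 1, 2])

let failed = false
try {
  frozen.sort() // sort writes into the array in place, which a frozen array forbids
} catch (e) {
  failed = true // TypeError: cannot modify a frozen array
}

// slice() returns a fresh, unfrozen copy that we can sort safely
const sorted = frozen.slice().sort()

console.log(failed) // true
console.log(sorted) // [ 1, 2, 3 ]
console.log(frozen) // [ 3, 1, 2 ] — still intact
```

This is a feature, not a nuisance: the throw surfaces accidental mutations immediately instead of letting them silently corrupt state elsewhere.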
Immer is also strongly typed and very small at just 3KB when gzipped.
Conclusion
When it comes to managing state updates, using Immer is a no-brainer for me. It’s a very lightweight library that lets you keep using all the things you’ve learned about JavaScript without trying to learn something entirely new. I encourage you to install it in your project and start using it right away. You can use it in existing projects and incrementally update your reducers.
I’d also encourage you to read the Immer introductory blog post by Michel Weststrate. The part I find especially interesting is the “How does Immer work?” section, which explains how Immer leverages language features such as proxies and concepts such as copy-on-write.
I’d also encourage you to take a look at this blog post: “Immutability in JavaScript: A Contrarian View,” where the author, Steven de Salas, presents his thoughts about the merits of pursuing immutability.
I hope that with the things you’ve learned in this post you can start using Immer right away.
Related Resources
use-immer, GitHub
Immer, GitHub
function, MDN web docs, Mozilla
proxy, MDN web docs, Mozilla
Object (computer science), Wikipedia
“Immutability in JS,” Orji Chidi Matthew, GitHub
“ECMAScript Data Types and Values,” Ecma International
Immutable collections for JavaScript, Immutable.js , GitHub
“The case for Immutability,” Immutable.js , GitHub
source http://www.scpie.org/better-reducers-with-immer/