#SEMrush Group Buy
groupbuyseotools123 · 4 days ago
Discover cost-effective SEO solutions with SEMrush Group Buy at the Group Buy SEO Tools Store. Access premium features like keyword research, site audits, and competitor analysis without the high subscription costs. This is perfect for businesses aiming to rank higher and grow smarter. For more information, visit our website!
janetjacksonseo · 1 year ago
🔥🔍 SEO Group Buy Tools India: Get Access to Premium SEO Tools at Affordable Prices! 🇮🇳💰
Are you a digital marketer or an SEO enthusiast looking to level up your game without breaking the bank? Look no further! In this thread, I'll introduce you to SEO Group Buy Tools India, a game-changing platform that offers access to premium SEO tools at unbelievable prices. 🚀💪
With SEO Group Buy Tools India, you can get your hands on powerful tools like Ahrefs, SEMrush, Moz, and many more, without burning a hole in your pocket. 🤑💼
Why pay full price for individual subscriptions when you can access all these tools for a fraction of the cost? 🤷‍♀️💡
Here's how it works:
1️⃣ Visit the SEO Group Buy Tools India website and choose the subscription plan that suits your needs. They offer flexible plans with different tool combinations to cater to various requirements. 🌐🛠️
2️⃣ Once you've subscribed, you'll receive login credentials to access the SEO tools dashboard. It's like having your own personal arsenal of SEO weapons at your fingertips! ⚔️💻
3️⃣ Enjoy unlimited access to premium SEO tools and take your website optimization, keyword research, backlink analysis, and competitor analysis to the next level. 📈🔍
4️⃣ Save big on your SEO expenses and invest those savings into other crucial aspects of your business. SEO Group Buy Tools India makes it possible for small businesses and startups to compete with larger players in the market. 💪🌟
5️⃣ Rest assured, as SEO Group Buy Tools India ensures the utmost privacy and security of your data. All accounts are individual and separate, so you don't have to worry about any unauthorized access. 🔒🔐
Don't let budget constraints hold you back from achieving your SEO goals. With SEO Group Buy Tools India, you can access top-notch SEO tools without breaking the bank. Start optimizing your website like a pro today! 💯🔥
Visit their website now and unlock the power of premium SEO tools at affordable prices. 🌐💼
👉 [Website URL] 👈
#SEO #Tools #GroupBuy #India #Affordable #DigitalMarketing #Optimization #Ranking
300+ SEO Tools on Sale from a Trusted Site
Visit my website 👉: Group Buy Seo Tools
evasalinasrest · 6 years ago
The Fractured Web
Anyone can argue about the intent behind a particular action & the outcome derived from it. But when the outcome is known, flows from a source of power, & does not change, at some point the intent can be inferred from the outcome.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don’t spend those resources (or, rather, spend them on lobbying instead of on improving the ecosystem) then no change is desired. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric. Products which optimize for it become worse. People who optimize for it become less happy. It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
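The runaway loop in that tweet can be sketched with a toy simulation. Everything here is a hypothetical illustration, not real data: a small minority of "highly engageable" users clicks far more often than everyone else, the feed ranks items purely by accumulated clicks, and the clicks feed straight back into the ranking.

```python
import random

random.seed(0)

P_HEAVY = 0.10  # assume 10% of impressions come from highly engageable users

def click_prob(heavy, item):
    # Invented preferences: heavy users strongly favor outrage bait.
    if heavy:
        return 0.9 if item == "outrage" else 0.2
    return 0.1 if item == "outrage" else 0.15

items = {"outrage": 1.0, "neutral": 1.0}  # engagement scores start equal

for _ in range(1000):
    # Mostly exploit the top-ranked item, with 10% random exploration.
    if random.random() < 0.1:
        item = random.choice(list(items))
    else:
        item = max(items, key=items.get)
    heavy = random.random() < P_HEAVY
    if random.random() < click_prob(heavy, item):
        items[item] += 1  # clicks feed straight back into the ranking

print(items)
# Outrage bait ends up dominating distribution even though the heavy
# clickers are only ~10% of the audience.
```

The mechanism, not the numbers, is the point: once the most engageable users' clicks dominate the ranking signal, the feed converges on what they engage with, and they "drive the product bus."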
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops becoming a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn’t look much different than eHow, in spite of having a much higher cost structure.
According to SEMrush’s traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That’s wasteful, but necessary: “In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine.”
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third-party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view – IMHO – is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They’ve made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
“Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. … Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper.”
Google claims they have no idea how happy content publishers are with the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It’s good to be the “revenue regulator” (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
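The worsening ratio is easy to see in a toy computation. The quarterly figures below are purely hypothetical placeholders, not actual Alphabet numbers; they just illustrate how network revenue can grow in absolute terms while its share of the pie shrinks every quarter.

```python
# Hypothetical quarterly ad revenue in $B -- illustrative only, not real data.
# "O&O" = ads on Google's owned & operated sites;
# "network" = ads served across third-party publisher sites.
quarters = [
    ("Q1", 24.0, 5.0),
    ("Q2", 26.0, 5.1),
    ("Q3", 28.5, 5.2),
    ("Q4", 31.0, 5.3),
]

# The publisher network's share of total ad revenue, per quarter.
shares = [network / (oo + network) for _, oo, network in quarters]

for (name, _, _), share in zip(quarters, shares):
    print(f"{name}: network share of ad revenue = {share:.1%}")
# Network revenue rises every quarter, yet its share falls every quarter --
# the "revenue regulator" pattern the chart in the tweet shows.
```

This is why headline statements like "we funnel $X billion to publishers" can be true while publishers' relative position still deteriorates.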
The aggregate numbers for news publishers are worse than shown above, as Google is ramping up ads in video games quite hard. They’ve partnered with Unity & promptly took away the ability to block ads from appearing in video games via the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!).
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user’s account: “We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing.”
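The segmentation idea in that quote can be sketched in a few lines. This is a hedged illustration of the concept, not AdMob's actual model: the scoring function, feature names, weights, and 0.5 threshold are all invented stand-ins for a trained likelihood-to-spend predictor.

```python
# Sketch of "smart segmentation": score each player's likelihood to spend
# on in-app purchases, then show ads only to predicted non-spenders.

def predicted_spend_probability(player):
    # Stand-in for a trained model: a hand-weighted score over two
    # invented behavioral features (all weights hypothetical).
    score = (0.4 * player["sessions_per_week"] / 20
             + 0.6 * player["store_visits"] / 10)
    return min(score, 1.0)

def should_show_ads(player, threshold=0.5):
    # Predicted spenders stay ad-free; everyone else is monetized with ads.
    return predicted_spend_probability(player) < threshold

whale = {"sessions_per_week": 20, "store_visits": 10}
casual = {"sessions_per_week": 4, "store_visits": 0}

print(should_show_ads(whale))   # False: likely spender, keep ad-free
print(should_show_ads(casual))  # True: unlikely spender, show ads
```

The design choice worth noticing is that the ad load is itself personalized by surveillance-derived predictions: two players in the same game see different monetization entirely based on their behavioral profile.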
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I’d recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States. Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual “seas” upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! … you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. … if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. … US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote “national champion” company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. … Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shoppee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, combine that with algorithms that promote extremist views, & the result is clearly increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life, it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it’s not their problem to help people in trouble, not their job to see the world from someone else’s perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation … The new rule for empathy seems to be: reserve it, not for your “enemies,” but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That’s practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
“Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. … They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. … [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say ‘thank you.’ You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero.” – Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven’t even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won’t get caught downstream from platform blowback, as he spends $20 million a year on his security.
The web is a mirror, with engagement-based algorithms reinforcing our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may “learn” vaccines don’t work. Others may learn the vaccines their own children took did not work, as they failed to protect them from the measles & Medieval diseases spread by people who absorbed antivax content on Facebook & Google.
Passion drives engagement, which drives algorithmic distribution: “There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content.”
As the costs of “free” become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up having embedded regulatory compliance costs which will end up exceeding any sort of prospective revenue they could hope to generate.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like Google did in mainland China with search after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed “for my protection,” you would be correct. Talk about Big Brother…— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they’ll probably end up losing those markets: “One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services.”
Publishers do not have “their mojo back” because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they’ve earned so much blowback that publishers will opt out of future deals, eventually leading more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: “This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives.”
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can’t publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what’s incentivized today on the platform is at odds with the goal of healthy dialoguehttps://t.co/31FYslbePA— Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google’s best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: “Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help.”
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. … Why are large influential companies constraining how new and small publishers operate? … The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. … We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: “Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. … more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all.”
Google is so for the little guy that for their local news experiments they’ve partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn’t Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU’s regulations (and prior link tax attempts) are in addition to the three multi billion Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex’s share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay “welcome” AdSense ads on sites as you browse the web – the very type of ad Google was allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google’s influence is being questioned as it rolls out ad blocking as a feature built into Google’s dominant Chrome web browser. https://t.co/LQmvJu9MYB— Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries “Similar results are delivered through both Google Home and Google Home Hub without the sponsored links.” https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M— Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft’s newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services both to promote Chrome with pop-overs AND to break competing web browsers:
“All of this is stuff you’re allowed to do to compete, of course. But we were still a search partner, so we’d say ‘hey what gives?’ And every time, they’d say, ‘oops. That was accidental. We’ll fix it in the next push in 2 weeks.’ Over and over. Oops. Another accident. We’ll fix it soon. We want the same things. We’re on the same team. There were dozens of oopses. Hundreds maybe?” – former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H— DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit.— DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube’s Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube’s faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR— Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: “if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers.”
Apple is also going to spend over a half-billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. … Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. … While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
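The economics in that quote are easy to work out. A quick back-of-envelope sketch, using only the figures the quote itself gives (a $4.99 basic subscription "split evenly between the streamers and Twitch"); the subscriber counts are hypothetical:

```python
# Revenue-split arithmetic for a Twitch basic subscription, per the quote.
SUB_PRICE = 4.99      # monthly basic subscription price
STREAMER_SHARE = 0.5  # "split evenly" between streamer and Twitch

def monthly_streamer_income(subscribers):
    # Gross subscription income to the streamer, before tips/donations.
    return subscribers * SUB_PRICE * STREAMER_SHARE

for subs in (100, 1_000, 10_000):
    print(f"{subs:>6} subs -> ${monthly_streamer_income(subs):,.2f}/month")
```

So a host with a thousand basic subscribers grosses roughly $2,500 a month from subscriptions alone, which helps explain why recurring paying audiences matter more to Twitch hosts than ad impressions do to YouTube creators.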
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: “With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web.”
Google will also launch their own game studio making exclusive games for their platform.
When consoles don’t use discs or cartridges so they can sell a subscription access to their software library it is hard to be a game retailer! GameStop’s stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone with operate Disney+, ESPN+ as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we’re going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple’s Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won’t kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. … they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won’t be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: “Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take.”
As the big media companies & big tech companies race to create subscription products they’ll spend many billions on exclusives. And they will be training consumers that there’s nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: publishing & media
from SEO Book http://www.seobook.com/fractured
via IFTTT from Tumblr http://localseoguru.tumblr.com/post/184254147558/the-fractured-web via IFTTT
from Local SEO Guru https://localseogurublog.wordpress.com/2019/04/17/the-fractured-web/ via IFTTT
from WordPress https://evasalinasrest.wordpress.com/2019/04/17/the-fractured-web/ via IFTTT
0 notes
alanajacksontx · 6 years ago
Text
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome derived from it. But when the outcome is known, flows from a source of power, & doesn’t change, at some point the intent is inferred.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don’t spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric. Products which optimize for it become worse. People who optimize for it become less happy. It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus. — Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops becoming a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn’t look much different than eHow, in spite of having a much higher cost structure.
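The stock-vs-flow dynamic can be sketched with a toy decay model (the half-life figure below is illustrative, not measured data):

```python
def daily_visits(initial_visits: float, age_days: float, half_life_days: float = 2.0) -> float:
    """Flow model: a news article's traffic halves every `half_life_days`."""
    return initial_visits * 0.5 ** (age_days / half_life_days)

# An article that opens at 10,000 visits/day is down ~99% inside two weeks,
# which is why an abandoned archive of old news looks algorithmically inert.
first_week = [daily_visits(10_000, d) for d in range(7)]
```

An evergreen eHow-style page, by contrast, is closer to pure stock: flat traffic at a far lower production cost.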
According to SEMrush’s traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That’s wasteful, but necessary: “In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine.”
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They’ve made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
“Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. … Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper.”
Google claims they have no idea how content publishers fare with the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It’s good to be the “revenue regulator” (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc — Jason Kint (@jason_kint) February 5, 2019
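The ratio Kint charts is easy to compute from Alphabet's quarterly filings; a sketch with purely hypothetical figures to show the shape of the trend:

```python
def network_share(owned_and_operated: float, network: float) -> float:
    """Fraction of total ad revenue earned on partner (network) sites."""
    return network / (owned_and_operated + network)

# Hypothetical quarterly (O&O, network) revenues in $B — not Alphabet's real numbers
quarters = [(10.0, 4.0), (20.0, 5.0), (30.0, 5.5)]
shares = [network_share(o, n) for o, n in quarters]
# Even when network revenue grows in absolute terms, the publisher share shrinks.
```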
The aggregate numbers for news publishers are worse than shown above as Google is ramping up ads in video games quite hard. They’ve partnered with Unity & promptly took away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user’s account: “We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing.”
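A minimal sketch of that segmentation rule (the threshold and spend-probability inputs are hypothetical; AdMob's actual model is proprietary machine learning):

```python
def should_show_ads(predicted_spend_probability: float, threshold: float = 0.1) -> bool:
    """Serve ads only to players the model predicts won't make in-app purchases."""
    return predicted_spend_probability < threshold

# A predicted spender keeps an ad-free experience; a predicted non-spender sees ads.
players = {"likely_spender": 0.8, "likely_nonspender": 0.03}
ad_audience = [name for name, p in players.items() if should_show_ads(p)]
```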
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I’d recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects. — John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States. Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual “seas” upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! … you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. … if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. … US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote “national champion” company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. … Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shopee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, combine that with algorithms that promote extremist views, & the result is clearly increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it’s not their problem to help people in trouble, not their job to see the world from someone else’s perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation … The new rule for empathy seems to be: reserve it, not for your “enemies,” but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That’s practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
“Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. … They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. … [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say ‘thank you.’ You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero.” - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven’t even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy. — Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won’t get caught downstream from platform blowback as he spends $20 million a year on his security.
The web is a mirror. Engagement-based algorithms reinforcing our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po — Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may “learn” vaccines don’t work. Others may learn the vaccines their own children took did not work, as they failed to protect them from the antivax content spread by Facebook & Google, absorbed by people spreading measles & Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: “There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content.”
As the costs of “free” become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up having embedded regulatory compliance costs which will end up exceeding any sort of prospective revenue they could hope to generate.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like Google did in mainland China with search after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed “for my protection,” you would be correct. Talk about Big Brother… — Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they’ll probably end up losing those markets: “One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services.”
Publishers do not have “their mojo back” because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they’ve earned so much blowback which will in turn lead publishers to opting out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: “This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives.”
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can’t publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what’s incentivized today on the platform is at odds with the goal of healthy dialogue https://t.co/31FYslbePA — Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google’s best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: “Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help.”
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. … Why are large influential companies constraining how new and small publishers operate? … The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. … We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: “Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. … more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all.”
Google is so for the little guy that for their local news experiments they’ve partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn’t Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU’s regulations (and prior link tax attempts) are in addition to the three multi-billion-Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex’s share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice layover welcome AdSense ads on sites as you browse the web - the very type of ad they were allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google’s influence is being questioned as it rolls out ad blocking as a feature built into Google’s dominant Chrome web browser. https://t.co/LQmvJu9MYB — Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries “Similar results are delivered through both Google Home and Google Home Hub without the sponsored links.” https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M — Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft’s newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services to both promote Chrome with pop-overs AND break its own services in competing web browsers:
“All of this is stuff you’re allowed to do to compete, of course. But we were still a search partner, so we’d say 'hey what gives?’ And every time, they’d say, 'oops. That was accidental. We’ll fix it in the next push in 2 weeks.’ Over and over. Oops. Another accident. We’ll fix it soon. We want the same things. We’re on the same team. There were dozens of oopses. Hundreds maybe?” - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H — DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit. — DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube’s Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube’s faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR — Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall, a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
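The "getting the numbers to back out" problem is plain unit economics; a sketch with hypothetical figures:

```python
def campaign_backs_out(cac: float, monthly_revenue_per_user: float, retention_months: float) -> bool:
    """An install campaign only pays for itself if lifetime value exceeds acquisition cost."""
    lifetime_value = monthly_revenue_per_user * retention_months
    return lifetime_value > cac

# Hypothetical: at a $6 cost per install, a $0.50/month app needs over a year of
# retention — the pressure that pushes apps toward data resale & aggressive upsells.
```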
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: “if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers.”
Apple is also going to spend over a half-billion Dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. … Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. … While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
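The even revenue split described above is easy to work out. A minimal sketch of the arithmetic, assuming the basic $4.99 tier and a hypothetical 1,000-subscriber channel:

```python
# Back-of-the-envelope math for the Twitch split described above:
# a $4.99/month basic subscription divided evenly between streamer & platform.
def monthly_sub_revenue(subscribers, price=4.99, streamer_share=0.5):
    """Return (gross, streamer's cut) for one month of basic subscriptions."""
    gross = subscribers * price
    return gross, gross * streamer_share

gross, streamer_cut = monthly_sub_revenue(1000)
print(f"gross ${gross:,.2f} -> streamer keeps ${streamer_cut:,.2f}")
```

At even a modest subscriber count, the recurring half-share dwarfs what the same audience would generate in display ad revenue, which is the economic pull the paragraph describes.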
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: “With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web.”
Google will also launch their own game studio making exclusive games for their platform.
When consoles don’t use discs or cartridges so they can sell a subscription access to their software library it is hard to be a game retailer! GameStop’s stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+ and ESPN+ as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we’re going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple’s Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won’t kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. … they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won’t be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: “Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take.”
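The impact of a competitive bid on store commissions is straightforward to quantify. A minimal sketch, assuming a hypothetical $1M in gross sales and using Epic's widely reported 12% take against the long-standing 30% store fee:

```python
# How much of gross sales a developer keeps under different store commissions.
def developer_net(gross_sales, commission_rate):
    """Developer revenue after the store keeps its commission."""
    return round(gross_sales * (1 - commission_rate), 2)

standard = developer_net(1_000_000, 0.30)    # the long-standing 30% store fee
epic_style = developer_net(1_000_000, 0.12)  # Epic's widely reported 12% take
print(standard, epic_style)  # 700000.0 880000.0
```

The 18-point spread on every dollar of gross sales is exactly the leverage that lets a lower-fee store pull top sellers away, and why matching it would hit the incumbents' pretax earnings.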
As the big media companies & big tech companies race to create subscription products they’ll spend many billions on exclusives. And they will be training consumers that there’s nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: publishing & media
from IM Tips And Tricks http://www.seobook.com/fractured from Rising Phoenix SEO https://risingphxseo.tumblr.com/post/184253783845
0 notes
kellykperez · 6 years ago
Text
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome that is derived by it. But when the outcome is known, at some point the intent is inferred if the outcome is derived from a source of power & the outcome doesn't change.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don't spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric.Products which optimize for it become worse. People who optimize for it become less happy.It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops being a habit & quickly loses relevance. Algorithmically an abandoned archive of old news articles doesn't look much different than eHow, in spite of having a much higher cost structure.
According to SEMrush's traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That's wasteful, but necessary: "In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine."
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They've made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
"Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. ... Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper."
Google claims they have no idea how happy content publishers are with the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It's good to be the "revenue regulator" (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
The aggregate numbers for news publishers are worse than shown above as Google is ramping up ads in video games quite hard. They've partnered with Unity & promptly took away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user's account: "We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing."
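The quoted AdMob behavior reduces to a simple gating rule: serve ads only to players the model predicts will not spend. A minimal sketch, where the spend probabilities and the 0.5 threshold are assumptions for illustration, not AdMob's actual internals:

```python
# Smart-segmentation-style gating: ads go only to predicted non-spenders.
def should_show_ads(predicted_spend_prob, threshold=0.5):
    """Show ads only when the model predicts the player won't buy IAPs."""
    return predicted_spend_prob < threshold

# Hypothetical per-player spend probabilities from some ML model.
players = {"likely_spender": 0.9, "casual_player": 0.1}
decisions = {name: should_show_ads(p) for name, p in players.items()}
print(decisions)  # {'likely_spender': False, 'casual_player': True}
```

The point of the rule is revenue maximization per account: each player is monetized through whichever channel (purchases or ads) surveillance-driven prediction says will pay more.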
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I'd recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States. Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual "seas" upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! ... you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. ... if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. ... US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote "national champion" company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. ... Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shoppee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the Great Recession, then combine that with algorithms that promote extremist views, & the result is clearly increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life, it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it's not their problem to help people in trouble, not their job to see the world from someone else's perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation ... The new rule for empathy seems to be: reserve it, not for your "enemies," but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That's practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
"Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. ... They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. ... [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say 'thank you.' You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero." - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven't even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won't get caught downstream from platform blowback as he spends $20 million a year on his security.
The web is a mirror. Engagement-based algorithms reinforce our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may "learn" vaccines don't work. Others may learn the vaccines their own children took did not work, as it failed to protect them from the antivax content spread by Facebook & Google, absorbed by people spreading measles & Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: "There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content."
As the costs of "free" become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up with embedded regulatory compliance costs exceeding any prospective revenue they could hope to generate there.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
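The mismatch the S-1 reveals is visible in a single metric, average revenue per user. A minimal sketch with illustrative figures (not Pinterest's actual numbers): most revenue domestic, most users abroad.

```python
# ARPU: how much revenue a platform earns per monthly active user.
def arpu(revenue, monthly_active_users):
    """Average revenue per user for a period: revenue divided by MAUs."""
    return round(revenue / monthly_active_users, 2)

# Illustrative annual figures only.
us_arpu = arpu(200_000_000, 80_000_000)    # 2.5
intl_arpu = arpu(10_000_000, 170_000_000)  # 0.06
print(us_arpu, intl_arpu)
```

When international ARPU is a rounding error next to domestic ARPU, user growth abroad adds compliance exposure much faster than it adds revenue, which is the bind described above.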
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like Google did in mainland China with search after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed "for my protection," you would be correct. Talk about Big Brother...— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they'll probably end up losing those markets: "One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services."
Publishers do not have "their mojo back" because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they've earned so much blowback which will in turn lead publishers to opt out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: "This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives."
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can't publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what's incentivized today on the platform is at odds with the goal of healthy dialoguehttps://t.co/31FYslbePA— Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google's best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: "Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help."
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. ... Why are large influential companies constraining how new and small publishers operate? ... The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. ... We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: "Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. ... more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all."
Google is so for the little guy that for their local news experiments they've partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn't Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU's regulations (and prior link tax attempts) are in addition to the three multi billion Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like the one Google pushed for against Microsoft in Europe years ago) Yandex's share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay welcome AdSense ads on sites as you browse the web - the very type of ad they were allegedly against when promoting AMP.
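At their core, content blockers just match outgoing request URLs against filter patterns before the browser fetches them. A crude sketch of the idea (real lists like EasyList use a far richer rule syntax; the URLs and patterns here are hypothetical):

```python
# Toy ad blocker: block a request if any filter pattern occurs in its URL.
def is_blocked(url, patterns):
    """Return True if the URL matches any blocklist pattern (substring match)."""
    return any(pattern in url for pattern in patterns)

blocklist = ["/ads/", "doubleclick.net", "pagead"]
print(is_blocked("https://example.com/ads/banner.js", blocklist))  # True
print(is_blocked("https://example.com/article.html", blocklist))   # False
```

Because the check runs per request, publishers relying on third-party ad scripts lose the impression entirely, which is why blocker adoption translates so directly into lost revenue.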
There has been much more press in the past week about ad blocking as Google's influence is being questioned as it rolls out ad blocking as a feature built into Google's dominant Chrome web browser. https://t.co/LQmvJu9MYB— Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries "Similar results are delivered through both Google Home and Google Home Hub without the sponsored links." https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M— Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft's newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services to both promote Chrome with pop-overs AND break its services in competing web browsers:
"All of this is stuff you're allowed to do to compete, of course. But we were still a search partner, so we'd say 'hey what gives?' And every time, they'd say, 'oops. That was accidental. We'll fix it in the next push in 2 weeks.' Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?" - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H— DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit.— DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube's faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR— Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: "if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers."
Apple is also going to spend over half a billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. ... Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. ... While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: "With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web."
Google will also launch their own game studio making exclusive games for their platform.
When consoles don't use discs or cartridges so they can sell a subscription access to their software library it is hard to be a game retailer! GameStop's stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+ and ESPN+, as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we're going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple's Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won't kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. ... they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won't be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: "Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take."
As the big media companies & big tech companies race to create subscription products they'll spend many billions on exclusives. And they will be training consumers that there's nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: 
publishing & media
source http://www.seobook.com/fractured from Rising Phoenix SEO http://risingphoenixseo.blogspot.com/2019/04/the-fractured-web.html
bambiguertinus · 6 years ago
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome derived from it. But when an outcome flows from a source of power, is known, and doesn't change, at some point the intent is inferred.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don't spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric.Products which optimize for it become worse. People who optimize for it become less happy.It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops becoming a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn't look much different than eHow, in spite of having a much higher cost structure.
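The stock-versus-flow dynamic above can be sketched with a toy decay model. All numbers below are invented for illustration (not actual traffic data): a news article spikes at launch and decays within days, while an evergreen reference page earns steady search traffic that out-accumulates the spike over a year.

```python
# Toy model: daily visits to a news article vs. an evergreen page.
# The figures are invented; only the shape of the curves matters.

def news_visits(day, launch_visits=10_000, half_life_days=2):
    """News traffic decays exponentially after publication."""
    return launch_visits * 0.5 ** (day / half_life_days)

def evergreen_visits(day, steady_visits=500):
    """An evergreen page draws roughly steady search traffic."""
    return steady_visits

total_news = sum(news_visits(d) for d in range(365))
total_evergreen = sum(evergreen_visits(d) for d in range(365))

# The news article wins launch day...
assert news_visits(0) > evergreen_visits(0)
# ...but is effectively dead within two weeks,
assert news_visits(14) < 0.01 * news_visits(0)
# while the evergreen page out-earns it over the year.
assert total_evergreen > total_news
```

This is why an abandoned news archive looks algorithmically like eHow: once the flow stops, almost nothing accumulates.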
According to SEMrush's traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That's wasteful, but necessary: "In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine."
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third-party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They've made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
"Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. ... Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper."
Google claims they have no idea how content publishers feel about the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It's good to be the "revenue regulator" (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
The aggregate numbers for news publishers are worse than shown above, as Google is ramping up ads in video games quite hard. They've partnered with Unity & promptly took away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat-thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user's account: "We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing."
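The segmentation policy described in that quote reduces to a simple rule: score each player's likelihood of making in-app purchases, then serve ads only below some threshold. The sketch below is a toy illustration with an invented scoring function and threshold, not AdMob's actual model or API.

```python
# Toy sketch of "smart segmentation": show ads only to players the
# model predicts will NOT spend on in-app purchases. The scoring
# rule, signals, and threshold are all invented for illustration.

def predicted_spend_probability(player):
    """Stand-in for a trained model: score from simple behavior signals."""
    score = 0.0
    if player["past_purchases"] > 0:
        score += 0.6          # past spend is the strongest signal
    if player["sessions_per_week"] > 10:
        score += 0.2          # heavy players are likelier to convert
    if player["days_since_install"] < 7:
        score += 0.1          # new installs still deciding
    return min(score, 1.0)

def should_show_ads(player, spend_threshold=0.5):
    """Serve ads only to predicted non-spenders."""
    return predicted_spend_probability(player) < spend_threshold

whale = {"past_purchases": 3, "sessions_per_week": 14, "days_since_install": 90}
casual = {"past_purchases": 0, "sessions_per_week": 2, "days_since_install": 30}

assert not should_show_ads(whale)   # predicted spender: no ads shown
assert should_show_ads(casual)      # predicted non-spender: sees ads
```

The monetization logic is the point: two players in the same game get different experiences based on a surveillance-derived prediction of their wallet.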
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I'd recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States. Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual "seas" upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! ... you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. ... if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. ... US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote "national champion" company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. ... Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shoppee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, then combine that with algorithms that promote extremist views & it is clearly causing increasing conflict.
If you are a parent and you think you child has no shot at a brighter future than your own life it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it's not their problem to help people in trouble, not their job to see the world from someone else's perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation ... The new rule for empathy seems to be: reserve it, not for your "enemies," but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That's practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
"Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. ... They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. ... [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say 'thank you.' You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero." - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven't even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won't get caught downstream from platform blowback, as he spends $20 million a year on his security.
The web is a mirror, with engagement-based algorithms reinforcing our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may "learn" vaccines don't work. Others may learn the vaccines their own children took did not work, as it failed to protect them from the antivax content spread by Facebook & Google, absorbed by people spreading measles & Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: "There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content."
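That asymmetry of passion feeds straight into engagement ranking: if a feed sorts purely by total engagement, a small group that interacts far more often can dominate what everyone sees. A minimal simulation of that dynamic (all numbers invented):

```python
# Toy feed ranking: items sorted by raw engagement. A passionate
# minority that engages 20x as often controls the top of the feed
# despite being 10% of the audience. All numbers are invented.

majority_users = 900    # each engages once, with mainstream content
minority_users = 100    # each engages 20 times, with fringe content

engagement = {
    "mainstream_story": majority_users * 1,   # 900 interactions
    "fringe_story": minority_users * 20,      # 2,000 interactions
}

ranked = sorted(engagement, key=engagement.get, reverse=True)

# The fringe story tops the engagement-ranked feed...
assert ranked[0] == "fringe_story"
# ...even though 90% of users never touched it.
assert minority_users / (majority_users + minority_users) == 0.1
```

Nothing in the ranking rule is malicious; it simply rewards whoever produces the most interactions, and counter-content from indifferent majorities never materializes.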
As the costs of "free" become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up carrying embedded regulatory compliance costs that exceed any prospective revenue they could hope to generate there.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like Google did in mainland China with search after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed "for my protection," you would be correct. Talk about Big Brother...— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they'll probably end up losing those markets: "One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services."
Publishers do not have "their mojo back" because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they've earned blowback, which will in turn lead publishers to opt out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: "This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives."
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can't publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what's incentivized today on the platform is at odds with the goal of healthy dialoguehttps://t.co/31FYslbePA— Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google's best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: "Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help."
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. ... Why are large influential companies constraining how new and small publishers operate? ... The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. ... We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: "Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. ... more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all."
Google is so for the little guy that for their local news experiments they've partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn't Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU's regulations (and prior link tax attempts) are in addition to the three multi-billion-euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex's share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay welcome AdSense ads on sites as you browse the web - the very type of ad they were allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google's influence is being questioned as it rolls out ad blocking as a feature built into Google's dominant Chrome web browser. https://t.co/LQmvJu9MYB— Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries "Similar results are delivered through both Google Home and Google Home Hub without the sponsored links." https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M— Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft's newest Edge web browser is based on Chromium, the open-source project behind Google Chrome. While Mozilla Firefox gets most of its revenue from a search deal with Google, Google has still gone out of its way both to promote Chrome with pop-overs on its services AND to break those services in competing web browsers:
"All of this is stuff you're allowed to do to compete, of course. But we were still a search partner, so we'd say 'hey what gives?' And every time, they'd say, 'oops. That was accidental. We'll fix it in the next push in 2 weeks.' Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?" - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H— DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit.— DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube's faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR— Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall, a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & from ads in its App Store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: "if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers."
Apple is also going to spend over a half-billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. ... Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. ... While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
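The even split described above makes the streamer-side economics easy to work out. A minimal sketch, assuming the article's $4.99 basic subscription price and treating tips as a simple add-on (the tip handling is an assumption for illustration):

```python
SUB_PRICE = 4.99  # basic Twitch subscription price per the article, split evenly

def streamer_monthly_revenue(subscribers: int, tips: float = 0.0) -> float:
    """Streamer's cut: half the subscription price per subscriber, plus any tips."""
    return subscribers * (SUB_PRICE / 2) + tips

# A streamer with 1,000 basic subscribers and no tips:
print(streamer_monthly_revenue(1000))  # 2495.0
```

The point of the split mattering so much is visible here: subscriber count, not ad impressions, is the revenue lever, which is why hosts compete on connection with viewers.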
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: "With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web."
Google will also launch their own game studio making exclusive games for their platform.
When consoles don't use discs or cartridges so they can sell subscription access to their software library, it is hard to be a game retailer! GameStop's stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+, ESPN+ as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we're going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple's Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won't kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. ... they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won't be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: "Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take."
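That 14% estimate can be sanity-checked with back-of-the-envelope math. The figures below ($10B of commission revenue against $43B of pretax earnings) are illustrative placeholders, not actual Apple or Alphabet financials; Epic's roughly 12% store rate is the assumed competing take:

```python
def pretax_hit(commission_revenue: float, pretax_earnings: float,
               old_rate: float = 0.30, new_rate: float = 0.12) -> float:
    """Estimate the fractional hit to pretax earnings if store commissions drop
    from old_rate to new_rate, assuming costs stay fixed so the lost commission
    falls straight through to pretax earnings."""
    lost = commission_revenue * (1 - new_rate / old_rate)
    return lost / pretax_earnings

# Hypothetical: $10B commission revenue, $43B pretax earnings.
print(f"{pretax_hit(10e9, 43e9):.1%}")  # 14.0%
```

Under those assumed inputs the hit lands right around the analyst's 14% figure, which is the shape of the argument even if the real segment numbers differ.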
As the big media companies & big tech companies race to create subscription products they'll spend many billions on exclusives. And they will be training consumers that there's nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: publishing & media
from Digital Marketing News http://www.seobook.com/fractured
srasamua · 6 years ago
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome derived from it. But when the outcome is known and doesn't change, and it flows from a source of power, at some point the intent is inferred.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don't spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric. Products which optimize for it become worse. People who optimize for it become less happy. It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops becoming a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn't look much different than eHow, in spite of having a much higher cost structure.
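The stock-vs-flow point can be made concrete with a toy traffic model in which an article's visits decay sharply after publication. The launch traffic and two-day half-life below are arbitrary assumptions chosen only to show the shape of the curve:

```python
def article_traffic(day: int, launch_visits: float = 10_000,
                    half_life_days: float = 2.0) -> float:
    """Toy model: a news article's daily traffic halves every
    half_life_days after publication."""
    return launch_visits * 0.5 ** (day / half_life_days)

# A week-old story draws under a tenth of its launch-day traffic,
# which is why a site that stops publishing fades fast.
print(round(article_traffic(7)))  # 884
```

Under any similar decay rate, an archive with no fresh flow converges toward the traffic profile of a static reference site, regardless of how much the archive cost to produce.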
According to SEMrush's traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That's wasteful, but necessary: "In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine."
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third-party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They've made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
"Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. ... Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper."
Google claims they have no idea how happy content publishers are with the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It's good to be the "revenue regulator" (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
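The ratio in that chart is computable directly from the two segment revenue lines Alphabet reports. The quarterly figures below are made-up placeholders purely to show the calculation, not actual 10-Q numbers:

```python
def network_share(o_and_o: float, network: float) -> float:
    """Google Network (publisher partner) revenue as a share of total ad revenue."""
    return network / (o_and_o + network)

# (quarter, owned & operated ad revenue, network ad revenue) in hypothetical $B.
quarters = [("Q1", 20.0, 5.0), ("Q2", 22.0, 5.1), ("Q3", 24.5, 5.2)]

for label, o_and_o, network in quarters:
    print(f"{label}: network share of ad revenue = {network_share(o_and_o, network):.1%}")
```

Even with network revenue growing in absolute terms, as in the placeholder data, the publisher share of the total shrinks each quarter, which is the pattern the article describes.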
The aggregate numbers for news publishers are worse than shown above as Google is ramping up ads in video games quite hard. They've partnered with Unity & promptly took away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user's account: "We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing."
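The gating logic AdMob describes amounts to a threshold rule on a predicted spend probability. The model, field names, and cutoff below are illustrative assumptions, not Google's actual API:

```python
from dataclasses import dataclass

@dataclass
class Player:
    user_id: str
    predicted_spend_prob: float  # output of some spend-prediction model

SPEND_THRESHOLD = 0.5  # illustrative cutoff, not a documented AdMob value

def should_show_ads(player: Player) -> bool:
    """Show ads only to players predicted NOT to spend on in-app purchases;
    predicted spenders see no ads, per the quoted description."""
    return player.predicted_spend_prob < SPEND_THRESHOLD

print(should_show_ads(Player("whale", 0.9)))   # False: likely spender, no ads
print(should_show_ads(Player("casual", 0.1)))  # True: unlikely spender, show ads
```

Note what this optimizes: the player's surveilled behavior determines whether their experience is interrupted by ads, maximizing combined ad plus in-app-purchase revenue per account.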
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I'd recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
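John's advice about owning your CDN hostname is easier to see with a concrete URL. Google's AMP cache URLs generally follow a /c/s/<host>/<path> shape ("/c/" for content, "/s/" for HTTPS origins); the parser below is a rough sketch of that convention, not an official library, and real cache URLs have more variants than it handles:

```python
from urllib.parse import urlparse

def amp_cache_to_origin(cache_url: str) -> str:
    """Map a Google AMP cache URL back to the publisher's own URL.
    Sketch of the common /c/s/<host>/<path> layout only."""
    path = urlparse(cache_url).path
    if not path.startswith("/c/"):
        raise ValueError("not a recognized AMP cache content URL")
    rest = path[len("/c/"):]
    scheme = "http"
    if rest.startswith("s/"):  # "/s/" marks an HTTPS origin
        scheme = "https"
        rest = rest[len("s/"):]
    return f"{scheme}://{rest}"

print(amp_cache_to_origin(
    "https://www-example-com.cdn.ampproject.org/c/s/www.example.com/news/story.amp.html"
))  # https://www.example.com/news/story.amp.html
```

The asymmetry is the point: the publisher's brand lives in the path of a Google-controlled hostname, so traffic, analytics, and the address bar all accrue to the cache domain rather than the publisher's own.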
China has a different set of tech leaders than the United States: Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China's tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual "seas" upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! ... you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. ... if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. ... US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote "national champion" company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. ... Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shoppee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, then combine that with algorithms that promote extremist views, & the result is clearly increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life, it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it's not their problem to help people in trouble, not their job to see the world from someone else's perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation ... The new rule for empathy seems to be: reserve it, not for your "enemies," but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That's practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
"Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. ... They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. ... [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say 'thank you.' You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero." - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven't even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won't get caught downstream from platform blowback as he spends $20 million a year on his security.
The web is a mirror, with engagement-based algorithms reinforcing our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may "learn" vaccines don't work. Others may learn the vaccines their own children took did not protect them, as antivax content spread by Facebook & Google was absorbed by people now spreading measles & other Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: "There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content."
As the costs of "free" become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up having embedded regulatory compliance costs which will end up exceeding any sort of prospective revenue they could hope to generate.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel are worth serving. It will be like when Google pulled search out of mainland China after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed "for my protection," you would be correct. Talk about Big Brother...— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they'll probably end up losing those markets: "One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services."
Publishers do not have "their mojo back" because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they've earned serious blowback, which will in turn lead publishers to opt out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: "This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives."
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can't publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what's incentivized today on the platform is at odds with the goal of healthy dialoguehttps://t.co/31FYslbePA— Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google's best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: "Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help."
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. ... Why are large influential companies constraining how new and small publishers operate? ... The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. ... We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: "Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. ... more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all."
Google is so for the little guy that for their local news experiments they've partnered with a private equity backed newspaper roll-up firm & another newspaper chain which made overpriced acquisitions & is trying to act like a PE firm (while trying not to get eaten by one).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn't Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU's regulations (and prior link tax attempts) are in addition to the three multi-billion-Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex's share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay welcome AdSense ads on sites as you browse the web - the very type of ad they were allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google's influence is being questioned as it rolls out ad blocking as a feature built into Google's dominant Chrome web browser. https://t.co/LQmvJu9MYB— Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries "Similar results are delivered through both Google Home and Google Home Hub without the sponsored links." https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M— Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft's newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services both to promote Chrome with pop-overs AND to break its services in competing web browsers:
"All of this is stuff you're allowed to do to compete, of course. But we were still a search partner, so we'd say 'hey what gives?' And every time, they'd say, 'oops. That was accidental. We'll fix it in the next push in 2 weeks.' Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?" - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H— DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit.— DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube's faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR— Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: "if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers."
Apple is also going to spend over a half-billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. ... Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. ... While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: "With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web."
Google will also launch their own game studio making exclusive games for their platform.
When consoles don't use discs or cartridges so they can sell a subscription access to their software library it is hard to be a game retailer! GameStop's stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+ and ESPN+, as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we're going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple's Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won't kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. ... they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won't be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: "Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take."
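The quoted estimate is easier to follow with the arithmetic spelled out. A minimal sketch, assuming a purely hypothetical gross-sales figure; the 12% rate used for comparison is Epic's published store commission, not a number from the quote above:

```python
# Illustrative arithmetic only: the gross-sales figure below is hypothetical,
# not a reported number. 30% is the traditional app store cut; 12% is the
# commission Epic's store charges, used here as the comparison point.
def commission_revenue(gross_sales, rate):
    """Commission a store keeps on third-party sales at a given take rate."""
    return gross_sales * rate

gross_sales = 50_000_000_000  # hypothetical annual third-party app sales, USD
current = commission_revenue(gross_sales, 0.30)
matched = commission_revenue(gross_sales, 0.12)
print(f"at 30%: ${current / 1e9:.1f}B, at 12%: ${matched / 1e9:.1f}B")
print(f"commission revenue lost by matching Epic: {1 - matched / current:.0%}")
```

Whatever the gross figure, cutting the take rate from 30% to 12% wipes out 60% of commission revenue, which is why even a modest hit to pretax earnings is plausible for companies of that scale.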
As the big media companies & big tech companies race to create subscription products they'll spend many billions on exclusives. And they will be training consumers that there's nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: 
publishing & media
from Digital Marketing News http://www.seobook.com/fractured
evaaguilaus · 6 years ago
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome derived from it. But when the outcome is known, at some point the intent is inferred if the outcome is derived from a source of power & the outcome doesn't change.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don't spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric. Products which optimize for it become worse. People who optimize for it become less happy. It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops being a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn't look much different than eHow, in spite of having a much higher cost structure.
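The stock-vs-flow dynamic can be sketched as a toy decay model (every number below is invented purely for illustration): each article's traffic decays quickly after publication, so total site traffic is dominated by the flow of fresh pieces, and an archive that stops publishing fades fast.

```python
# Toy model (all parameters invented): a news article's daily traffic decays
# with a short half-life, so a site's traffic depends on a constant flow of
# new articles rather than the stock of its archive.
def article_traffic(initial_visits, age_days, half_life_days=2.0):
    """Daily visits to one article, decaying with a short half-life."""
    return initial_visits * 0.5 ** (age_days / half_life_days)

def site_traffic_on_day(day, published_through, initial_visits=10_000):
    """Total daily visits for a site that published one article per day
    through `published_through`, then stopped."""
    return sum(
        article_traffic(initial_visits, day - pub_day)
        for pub_day in range(min(day, published_through) + 1)
    )

while_publishing = site_traffic_on_day(30, published_through=30)
ten_days_after_stop = site_traffic_on_day(40, published_through=30)
print(f"{ten_days_after_stop / while_publishing:.1%} of traffic remains")
```

With a two-day half-life, ten days after the last article the site keeps only a few percent of its former traffic, which is the "abandoned archive" effect described above.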
According to SEMrush's traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That's wasteful, but necessary: "In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine."
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They've made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
"Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. ... Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper."
Google claims they have no idea how happy content publishers are with the trade off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It's good to be the "revenue regulator" (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
The aggregate numbers for news publishers are worse than shown above as Google is ramping up ads in video games quite hard. They've partnered with Unity & promptly taken away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user's account: "We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing."
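The segmentation logic in that quote reduces to a simple gate: score each player's likelihood to spend, then serve ads only to predicted non-spenders. A hedged sketch of the idea follows; the model, features & threshold are all invented for illustration and are not AdMob's actual implementation:

```python
# Illustrative sketch of spend-likelihood segmentation: a propensity model
# scores each player, and ads are shown only to players predicted NOT to
# make in-app purchases. All features, weights & the threshold are invented.
from dataclasses import dataclass

@dataclass
class Player:
    sessions_per_week: float
    past_purchases: int

def spend_probability(p: Player) -> float:
    """Stand-in for a trained propensity model."""
    score = 0.05 + 0.3 * min(p.past_purchases, 3) + 0.01 * p.sessions_per_week
    return min(score, 1.0)

def should_show_ads(p: Player, threshold: float = 0.5) -> bool:
    # Predicted spenders see no ads; predicted non-spenders see ads.
    return spend_probability(p) < threshold

whale = Player(sessions_per_week=20, past_purchases=5)
casual = Player(sessions_per_week=3, past_purchases=0)
print(should_show_ads(whale), should_show_ads(casual))  # False True
```

The point of the design is exactly what makes it unsettling in the article's framing: the ad load a player sees is a function of surveillance-derived predictions about their wallet.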
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I'd recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States. Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual "seas" upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! ... you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. ... if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. ... US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote "national champion" company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. ... Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shopee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, then combine that with algorithms that promote extremist views & it is clearly causing increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it's not their problem to help people in trouble, not their job to see the world from someone else's perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation ... The new rule for empathy seems to be: reserve it, not for your "enemies," but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That's practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
"Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. ... They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. ... [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say 'thank you.' You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero." - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven't even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won't get caught downstream from platform blowback as he spends $20 million a year on his security.
The web is a mirror. Engagement-based algorithms reinforce our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may "learn" vaccines don't work. Others may learn the vaccines their own children took did not work, as it failed to protect them from the antivax content spread by Facebook & Google, absorbed by people spreading measles & Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: "There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content."
As the costs of "free" become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up having embedded regulatory compliance costs which will end up exceeding any sort of prospective revenue they could hope to generate.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like what Google did in mainland China with search after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed "for my protection," you would be correct. Talk about Big Brother...— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they'll probably end up losing those markets: "One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services."
Publishers do not have "their mojo back" because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they've earned so much blowback which will in turn lead publishers to opting out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: "This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives."
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can't publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what's incentivized today on the platform is at odds with the goal of healthy dialoguehttps://t.co/31FYslbePA— Axios (@axios) April 16, 2019
In Europe there is GDPR, which aimed to protect user privacy, but ultimately acted as a tax on innovation by local startups while being a subsidy to the big online ad networks. They also have Article 11 & Article 13, which passed in spite of Google's best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: "Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help."
Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:
Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. ... Why are large influential companies constraining how new and small publishers operate? ... The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. ... We believe the information we show should be based on quality, not on payment.
Facebook claims there is a local news problem: "Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. ... more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all."
Google is so for the little guy that for their local news experiments they've partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).
Does the above stock chart look in any way healthy?
Does it give off the scent of a firm that understood the impact of digital & rode it to new heights?
If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?
If Patch is profitable & Google were a neutral ranking system based on quality, couldn't Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU's regulations (and prior link tax attempts) are in addition to the three multi billion Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like the one Google pushed for against Microsoft in Europe years ago) Yandex's share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay welcome AdSense ads on sites as you browse the web - the very type of ad they were allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google's influence is being questioned as it rolls out ad blocking as a feature built into Google's dominant Chrome web browser. https://t.co/LQmvJu9MYB— Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
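The mechanics behind that revenue loss are simple: blockers match outgoing ad requests against community-maintained filter lists before they ever load. A minimal sketch of that matching, in the spirit of EasyList-style `||domain^` rules (the entries and regex translation below are illustrative, not a real blocker implementation):

```python
import re

# Simplified filter rules in the spirit of EasyList syntax:
# "||domain^" blocks requests to that domain or any of its subdomains.
# These example entries are illustrative, not copied from a real list.
FILTERS = ["||pagead2.googlesyndication.com^", "||doubleclick.net^"]

def _to_regex(rule: str) -> re.Pattern:
    # "||" anchors at a domain boundary; "^" matches a separator or end.
    body = re.escape(rule.strip("|^"))
    return re.compile(r"^https?://([^/]+\.)?" + body + r"([/:?].*)?$")

COMPILED = [_to_regex(rule) for rule in FILTERS]

def is_blocked(url: str) -> bool:
    """True if any filter rule matches the request URL."""
    return any(pattern.match(url) for pattern in COMPILED)
```

Every request an ad server never receives is an impression a publisher never gets paid for — which is why built-in browser blocking moves so much money.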
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries "Similar results are delivered through both Google Home and Google Home Hub without the sponsored links." https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M— Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft's newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services both to promote Chrome with pop overs AND to break its own sites in competing web browsers:
"All of this is stuff you're allowed to do to compete, of course. But we were still a search partner, so we'd say 'hey what gives?' And every time, they'd say, 'oops. That was accidental. We'll fix it in the next push in 2 weeks.' Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?" - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H— DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit.— DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube's faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR— Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall, a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: "if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers."
Apple is also going to spend over half a billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. ... Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. ... While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: "With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web."
Google will also launch their own game studio making exclusive games for their platform.
When consoles don't use discs or cartridges so they can sell subscription access to their software library, it is hard to be a game retailer! GameStop's stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+, ESPN+ as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we're going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple's Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won't kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. ... they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won't be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: "Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take."
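Mr. Schachter's estimate is straightforward to reproduce in outline: commission revenue is nearly pure margin, so a rate cut flows almost dollar-for-dollar to pretax earnings. The figures below are assumed for illustration, not Apple's or Google's actual financials — only the mechanics matter:

```python
# Illustrative, assumed figures -- not actual company financials.
gross_app_billings = 60e9   # assumed annual billings processed by a store
current_rate = 0.30         # the standard 30% commission
matched_rate = 0.12         # Epic's take, per the quoted estimate
pretax_earnings = 75e9      # assumed company-wide pretax earnings

# Commission is nearly all margin, so forgone commission ~= forgone pretax profit.
lost_commission = gross_app_billings * (current_rate - matched_rate)
hit = lost_commission / pretax_earnings
print(f"Pretax earnings hit: {hit:.1%}")  # Pretax earnings hit: 14.4%
```

With these assumed inputs the hit lands in the ~14% range Schachter describes; the real sensitivity depends on actual billings and margins, which are not public in that detail.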
As the big media companies & big tech companies race to create subscription products they'll spend many billions on exclusives. And they will be training consumers that there's nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: 
publishing & media
from Digital Marketing News http://www.seobook.com/fractured
0 notes
oscarkruegerus · 6 years ago
Text
The Fractured Web
Anyone can argue about the intent of a particular action & the outcome that is derived by it. But when the outcome is known, at some point the intent is inferred if the outcome is derived from a source of power & the outcome doesn't change.
Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.
If they don't spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.
Engagement is a toxic metric. Products which optimize for it become worse. People who optimize for it become less happy. It also seems to generate runaway feedback loops where most engagable people have a) worst individual experiences and then b) end up driving the product bus.— Patrick McKenzie (@patio11) April 9, 2019
News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops becoming a habit & quickly loses relevancy. Algorithmically an abandoned archive of old news articles doesn't look much different than eHow, in spite of having a much higher cost structure.
According to SEMrush's traffic rank, ampproject.org gets more monthly visits than Yahoo.com.
That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.
Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.
Some news organizations like The Guardian have a team of about a dozen people reformatting their content to the duplicative & proprietary AMP format. That's wasteful, but necessary: "In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine."
It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.
And now there are AMP Stories to eat up even more visual real estate.
If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high-spend flagship content appear on a third party website rather than your own?
Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view - IMHO - is rather suspect.
Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.
Amazon generally treats consumers well, but they have been much harder on business partners with tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop over ads for the private label products on product level pages where brands already spent money to drive traffic to the page, etc.
They've made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:
"Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. ... Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper."
Google claims they have no idea how happy content publishers are with the trade off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight that ratio has grown worse for publishers.
When Google tells industry about how much $ it funnels to rest of ecosystem, just show them this chart. It's good to be the "revenue regulator" (note: G went public in 2004). pic.twitter.com/HCbCNgbzKc— Jason Kint (@jason_kint) February 5, 2019
The aggregate numbers for news publishers are worse than shown above as Google is ramping up ads in video games quite hard. They've partnered with Unity & promptly took away the ability to block ads from appearing in video games using googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!)
They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user's account: "We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing."
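Stripped of the machine learning, the policy Google describes is a simple gate: score each player's likelihood of spending, then suppress ads for anyone above a threshold. A toy sketch of that logic — the scoring heuristic and threshold below are invented for illustration, since AdMob's actual models are not public:

```python
def predicted_spend_probability(player: dict) -> float:
    # Stand-in for the real model: a hand-tuned score from a couple of
    # engagement signals. Purely illustrative, not AdMob's method.
    score = 0.1
    score += 0.4 if player.get("past_purchases", 0) > 0 else 0.0
    score += 0.2 if player.get("sessions_per_week", 0) >= 5 else 0.0
    return min(score, 1.0)

def should_show_ads(player: dict, threshold: float = 0.3) -> bool:
    # Per the quoted policy: ads only for players predicted NOT to
    # spend on in-app purchases; likely spenders play ad-free.
    return predicted_spend_probability(player) < threshold

whale = {"past_purchases": 3, "sessions_per_week": 10}
casual = {"past_purchases": 0, "sessions_per_week": 1}
print(should_show_ads(whale), should_show_ads(casual))  # False True
```

The business logic is the point: the experience each player gets is tuned to whichever monetization path surveillance predicts will extract the most from them.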
And how does the growth of ampproject.org square against the following wisdom?
If you do use a CDN, I'd recommend using a domain name of your own (eg, https://t.co/fWMc6CFPZ0), so you can move to other CDNs if you feel the need to over time, without having to do any redirects.— John (@JohnMu) April 15, 2019
Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.
China has a different set of tech leaders than the United States: Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FANG). China tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.
Part of the US-China trade war is about who controls the virtual "seas" upon which value flows:
it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! ... you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. ... if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. ... US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?
China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.
India has taken notice of the success of Chinese tech companies & thus began to promote "national champion" company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.
The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. ... Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.
Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.
Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shoppee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.
It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.
If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.
At the same time, the web is incredibly deflationary. Every free form of entertainment which consumes time is time that is not spent consuming something else.
Add the technological disruption to the wealth polarization that happened in the wake of the great recession, then combine that with algorithms that promote extremist views & it is clearly causing increasing conflict.
If you are a parent and you think your child has no shot at a brighter future than your own life, it is easy to be full of rage.
Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:
Starting around 2000, the line starts to slide. More students say it's not their problem to help people in trouble, not their job to see the world from someone else's perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation ... The new rule for empathy seems to be: reserve it, not for your "enemies," but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That's practically a taboo.
A complete lack of empathy could allow a psychopath (hi Chris!) to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:
"Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. ... They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. ... [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say 'thank you.' You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero." - Fritz Breithaupt
News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.
Many people will end up murdered by algorithmically generated empathy.
As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.
The more I think about and discuss it, the more I think WhatsApp is simultaneously the future of Facebook, and the most potentially dangerous digital tool yet created. We haven't even begun to see the real impact yet of ubiquitous, unfettered and un-moderatable human telepathy.— Antonio García Martínez (@antoniogm) April 15, 2019
When politicians throw fuel on the fire it only gets worse:
It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?
Mark Zuckerberg won't get caught downstream from platform blowback as he spends $20 million a year on his security.
The web is a mirror. Engagement-based algorithms reinforcing our perceptions & identities.
And every important story has at least 2 sides!
The Rohingya asylum seekers are victims of their own violent Jihadist leadership that formed a militia to kill Buddhists and Hindus. Hindus are being massacred, where’s the outrage for them!? https://t.co/P3m6w4B1Po— Imam Tawhidi (@Imamofpeace) May 23, 2018
Some may "learn" vaccines don't work. Others may learn the vaccines their own children took did not work, as it failed to protect them from the antivax content spread by Facebook & Google, absorbed by people spreading measles & Medieval diseases.
Passion drives engagement, which drives algorithmic distribution: "There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content."
As the costs of "free" become harder to hide, social media companies which currently sell emerging markets as their next big growth area will end up having embedded regulatory compliance costs which will end up exceeding any sort of prospective revenue they could hope to generate.
The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.
As governments around the world see the real-world cost of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel worth serving. It will be like what Google did with search in mainland China after discovering pervasive hacking of activist Gmail accounts.
Just tried signing into Gmail from a new device. Unless I provide a phone number, there is no way to sign in and no one to call about it. Oh, and why do they say they need my phone? If you guessed "for my protection," you would be correct. Talk about Big Brother...— Simon Mikhailovich (@S_Mikhailovich) April 16, 2019
Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.
The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they'll probably end up losing those markets: "One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services."
Publishers do not have "their mojo back" because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they've earned so much blowback which will in turn lead publishers to opting out of future deals, which will eventually lead more people back to the trusted brands of yesterday.
Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: "This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives."
When Facebook goes down direct visits to leading news brand sites go up.
When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.
The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.
If the platforms can change their focus overnight then why can't publishers band together & choose to dump them?
CEO Jack Dorsey said Twitter is looking to change the focus from following specific individuals to topics of interest, acknowledging that what's incentivized today on the platform is at odds with the goal of healthy dialogue https://t.co/31FYslbePA— Axios (@axios) April 16, 2019
If Patch is profitable & Google were a neutral ranking system based on quality, couldn't Google partner with journalists directly?
Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU's regulations (and prior link tax attempts) are in addition to the three multi billion Euro fines the European Union has levied against Alphabet for shopping search, Android & AdSense.
Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex's share of mobile search grew quickly.
The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:
Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.
If web publishers should monitor inbound links to look for anything suspicious then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.
Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.
There are other layers of fracturing going on in the web as well.
Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay "welcome" AdSense ads on sites as you browse the web - the very type of ad Google was allegedly against when promoting AMP.
There has been much more press in the past week about ad blocking as Google's influence is being questioned as it rolls out ad blocking as a feature built into Google's dominant Chrome web browser. https://t.co/LQmvJu9MYB — Jason Kint (@jason_kint) February 19, 2018
Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.
Voice search & personal assistants are now ad channels.
Google Assistant Now Showing Sponsored Link Ads for Some Travel Related Queries. "Similar results are delivered through both Google Home and Google Home Hub without the sponsored links." https://t.co/jSVKKI2AYT via @bretkinsella pic.twitter.com/0sjAswy14M — Glenn Gabe (@glenngabe) April 15, 2019
App stores are removing VPNs in China, removing Tiktok in India, and keeping female tracking apps in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.
Microsoft's newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of their revenue from a search deal with Google, Google has still gone out of its way to use its services to both promote Chrome with pop overs AND break its services in competing web browsers:
"All of this is stuff you're allowed to do to compete, of course. But we were still a search partner, so we'd say 'hey what gives?' And every time, they'd say, 'oops. That was accidental. We'll fix it in the next push in 2 weeks.' Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?" - former Firefox VP Jonathan Nightingale
This is how it spreads. Google normalizes “web apps” that are really just Chrome apps. Then others follow. We’ve been here before, y’all. Remember IE? Browser hegemony is not a happy place. https://t.co/b29EvIty1H — DHH (@dhh) April 1, 2019
In fact, it’s alarming how much of Microsoft’s cut-off-the-air-supply playbook on browser dominance that Google is emulating. From browser-specific apps to embrace-n-extend AMP “standards”. It’s sad, but sadder still is when others follow suit. — DHH (@dhh) April 1, 2019
YouTube page load is 5x slower in Firefox and Edge than in Chrome because YouTube's Polymer redesign relies on the deprecated Shadow DOM v0 API only implemented in Chrome. You can restore YouTube's faster pre-Polymer design with this Firefox extension: https://t.co/F5uEn3iMLR — Chris Peterson (@cpeterso) July 24, 2018
As phone sales fall & app downloads stall a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in their app store.
Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.
Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: "if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers."
Apple is also going to spend over half a billion dollars exclusively licensing independently developed games:
Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. ... Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.
Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.
The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. ... While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.
Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: "With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web."
Google will also launch their own game studio making exclusive games for their platform.
When consoles don't use discs or cartridges, so they can sell subscription access to their software library, it is hard to be a game retailer! GameStop's stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”
The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.
How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?
Disney alone will operate Disney+ and ESPN+, as well as Hulu.
And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we're going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple's Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.
The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won't kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.
So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. ... they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.
The publisher wins because there is a competitive bid. There won't be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: "Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take."
As the big media companies & big tech companies race to create subscription products they'll spend many billions on exclusives. And they will be training consumers that there's nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.
Categories: 
publishing & media
from Digital Marketing News http://www.seobook.com/fractured
davidrsmithlove · 6 years ago
Text
How to Estimate the Total Available Organic Search Traffic for a Website
Whether you’re expanding an existing business into a new market, bringing out a new product range, or building a brand new website from scratch, a key question you’ll need to answer is how much organic search traffic is reasonable to expect, and what the ceiling on that number might be. It’s also useful for an established site to know how close to the ceiling you are. In this post, I outline a process for coming to a reasonable estimate for that number.
How we do it
This is all an estimate: while it tries to be as rigorous as possible, there are a lot of unknowns that could cause the estimate to be off in either direction. The intention is to get a ballpark estimate in order to set expectations and give an indication of the level of investment that should be devoted to a site.
In order to work out how much organic traffic we can get, we need four things:
1. A list of keywords we can reasonably expect to rank for
In order to gather this list of keywords, a level of keyword research is required. There are many resources available online for keyword research, including the Keyword Research module on DistilledU. In order to get the most accurate estimates, we want to aim for as wide as possible a set of keywords that are actually relevant to the site we’re proposing.
This methodology focuses on the core non-branded keywords a site could rank for, e.g. the names of products and services offered by your website. Branded keywords should be considered separately, as they are much more prone to changes in search volume when a new brand is being launched.
Tangential or unrelated keywords (that might be the subject of a blog or resources page) are generally better considered separately, as there's often an unbounded number of these. My recommendation would be, when deciding on content to create, to run this process at a smaller scale over those particular keywords to estimate the organic traffic available for a piece of content.
Below is a scaled-down example list of keywords that we could put together for a site selling colourful knitwear. In reality, your keyword list should probably run to hundreds or thousands of keywords.
2. Search volume for each keyword in our list
There are many SEO tools available for search volume data. At Distilled we like to use Ahrefs, but any SEO tool will have this data available. Google Keyword Planner also gives keyword volume for free, although that data tends to be grouped into ranges. For the purposes of this exercise, you can take the midpoint of those ranges.
No keyword volume source, including Google, will be completely accurate, and they will generally give a monthly average not taking into account seasonal fluctuations. You should also be aware of keyword volumes for close variants being grouped together - there’s a risk you’ll be double counting without taking this into account.
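Where your volume source gives bucketed ranges, the midpoint step is mechanical. A minimal sketch, assuming ranges arrive as "low-high" strings (the string format here is an assumption for illustration, not Keyword Planner's actual export format):

```python
def midpoint(volume_range: str) -> int:
    """Take the midpoint of a 'low-high' search-volume range,
    e.g. '100-1000' -> 550."""
    low, high = (int(x) for x in volume_range.split("-"))
    return (low + high) // 2

print(midpoint("100-1000"))  # 550
```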
3. For each keyword, the highest ranking we can reasonably expect to achieve
Once we have our list of keywords with search volume, we should look at what’s currently ranking in the search results, in order to estimate where our site could expect to rank.
Note that what I’m about to outline is not a rigorous process. As mentioned above, the idea is to get a ballpark figure, and this method relies on some assumptions that are not to be held as claims about how SEO works. Specifically, we’re assuming that:
We can make pages of high enough quality, and well-targeted enough, to rank well for a given keyword
With this being the case, we could aspire to rank as high as anyone currently ranking in the SERP who has a less strong domain (in terms of backlink profile) than our site.
Of course, SEO is realistically much more complicated than that, but for the purposes of developing an estimate, we can hold our nose and go ahead with these assumptions.
This process is replicating some of the inputs that a lot of keyword tools use to calculate their difficulty scores. The reason we’re doing this is that we need to use a specific ranking position in order to project the amount of traffic we can get, which difficulty scores don’t give us.
In order to work out where we can rank, we need to see who's currently ranking for each keyword. Many rank tracking tools (including STAT, which we use at Distilled) will give you this data, as will search analytics tools such as Ahrefs and SEMrush. Let's take the top 20 ranking URLs for each keyword. Don't forget to get the rankings for the country/market you're doing this analysis on!
We then want to cross-check these URLs against backlink data. Select your favourite backlink tool (Moz/Ahrefs/Majestic), and look up the domain-level strength of each domain that ranks for your keywords. If you’re doing this at a large scale, I’d recommend using the API of your tool, or URL Profiler. For this example, I’ve used Ahrefs domain rating (with 100 for any ranking positions that don’t have an organic result).
We are making the assumption that we can rank as well as any currently ranking site with a lower domain score than our site. If you’re running this analysis for a new site that doesn’t exist (and therefore doesn’t have a backlink profile), you can use a hypothetical target backlink score based on what you believe to be achievable.
For each keyword, find the highest ranking position which is currently occupied by a weaker domain than yours. If no such pages exist in the top 20 results, we can discount those keywords from our analysis, and assume that we can't rank for them.
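That rule can be sketched in a few lines. The function and variable names below are my own, not from any particular tool:

```python
def highest_achievable_rank(serp_ratings, our_rating):
    """serp_ratings: domain ratings of the current top 20, in position order.

    Return the best (lowest-numbered) position currently held by a weaker
    domain, or None if every ranking domain is stronger than ours, in which
    case we discount the keyword."""
    for position, rating in enumerate(serp_ratings, start=1):
        if rating < our_rating:
            return position
    return None

print(highest_achievable_rank([90, 85, 40, 70], our_rating=60))  # 3
```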
4. For each ranking position, the amount of traffic we could expect to get each month
Now that we have our highest achievable rank, and monthly search volume, we can combine them to get the maximum monthly traffic we can expect from each keyword. To do this, we use a clickthrough rate model to estimate what percentage of searches lead to a click on a search result in each ranking position.
There are some very good public resources - I like Advanced Web Ranking’s data. While generic data like this will never be perfect, it will get us a good-enough estimate that we can use for this task.
If we multiply the monthly search volume by the click-through rate estimate for the highest achievable ranking position, that gives us the traffic expectation for each keyword. Add these together, and we get the total traffic we can expect for the site. We’re done!
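Putting the pieces together, here is a simple sketch of that final multiply-and-sum step. The CTR curve below is made up for illustration; swap in real figures from whichever CTR study you use:

```python
# Illustrative CTR-by-position curve (assumed values, not AWR's actual data).
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def total_expected_traffic(keywords):
    """keywords: list of (monthly_volume, achievable_rank) pairs, where
    achievable_rank is None for keywords we assume we can't rank for."""
    total = 0.0
    for volume, rank in keywords:
        if rank is not None:
            total += volume * CTR_BY_POSITION.get(rank, 0.02)  # small tail CTR
    return total

print(total_expected_traffic([(1000, 1), (500, None), (2000, 3)]))  # 500.0
```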
What we can do with this information
Now that we have a number for the traffic we can expect, what can we do with it? Let’s revisit our assumptions. We’ve gone into this analysis assuming a static list of keywords, and a fixed backlink profile. What if we change that up?
What if our backlink profile improves?
Because we’ve set this up as a model in a spreadsheet, it’s relatively simple to tweak the domain score rating. By doing this, we can see how performance would improve by, for example, adding 10 points to our domain rating. The sensitivity of the traffic levels to small changes in this would be a good indication that it’s a good idea to invest in growing your backlink profile.
What if we target a broader range of keywords?
The other way we could get more traffic would be by targeting more keywords. This could be via blog or resources style content, or by adding more transactional keywords targeting more terms. In order to update the model, it’s a case of essentially repeating the above process with a new list of keywords, and adding it into the initial model. This can be a great way to identify new keyword opportunities.
Improving the model
As mentioned above, this was (intentionally) a very simple model. We’ve glossed over a lot of nuance in order to come to a ballpark figure of achievable search traffic. For a more rigorous estimate of the available search traffic, there are a few modifications I would make:
Use a tailored CTR model
The AWR link I provided above provides a good breakdown of clickthrough rate by industry and device, as well as different types of search intent. If you have the time, you should go through your keywords and pick out the appropriate CTR curve per keyword.
Break it up by device type
Some keywords can have dramatically different search results by device type. Tracking keywords separately on mobile and desktop, as is possible using some rank tracking tools including STAT, would allow you to separately estimate the highest achievable rank on each device. Unfortunately, there’s no source of search volume that I’m aware of that allows you to split search volumes by device type (please correct me in the comments if you know of one!), so you can split the search volume in half between mobile and desktop, or in a different proportion if you know that the particular space has a bias to mobile or desktop.
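The volume split itself is trivial; a sketch with a configurable mobile share (the 50/50 default matches the suggestion above, the 0.6 in the example is arbitrary):

```python
def split_volume(volume: int, mobile_share: float = 0.5) -> dict:
    """Split a monthly search volume between mobile and desktop."""
    mobile = round(volume * mobile_share)
    return {"mobile": mobile, "desktop": volume - mobile}

print(split_volume(1000, mobile_share=0.6))  # {'mobile': 600, 'desktop': 400}
```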
Look at the SERPs
The above methodology focuses on backlinks as the key decider of whether a page can rank or not. This is not how search works - there are hundreds of factors that go into deciding who ranks where for which keywords. Looking more closely at who is currently ranking will give great insight, as it might show opportunities that wouldn't otherwise be apparent by looking at links alone.
For example, if a weaker domain ranks in the top three positions for a keyword it is an exact match for, that suggests relevance rather than links is holding it in place, which would indicate that those results aren't a good opportunity. Conversely, if the pages ranking for a given keyword aren't a very good match for the intent of the search, and you can create a page that matches the intent better, that represents a better opportunity than link numbers would suggest.
Also, if all of the pages ranking for a search term aren’t a close match to your site (e.g. if all of the results are informational for a search that you would target with a transactional page), it’s probably not a good opportunity and you should remove it from your list.
Conclusion
Thanks for reading through this post! I’d love to hear if anyone has any feedback on the process or ways it could be improved. Let me know in the comments!
0 notes
dillenwaeraa · 6 years ago
Text
How to Estimate the Total Available Organic Search Traffic for a Website
Whether you’re expanding an existing business into a new market, bringing out a new product range, or building a brand new website from scratch, a key question you’ll need to answer is how much organic search traffic is reasonable to expect, and what the ceiling on that number might be. It’s also useful for an established site to know how close to the ceiling you are. In this post, I outline a process for coming to a reasonable estimate for that number.
How we do it
This is all an estimate, while it tries to be as rigorous as possible there are a lot of unknowns that could cause the estimate to be off in either direction. The intention is to get a ballpark estimate in order to set expectations and give an indication of the level of investment that should be devoted to a site.
In order to work out how much organic traffic we can get, we need four things:
1. A list of keywords we can reasonably expect to rank for
In order to gather this list of keywords, a level of keyword research is required. There are many resources available online for keyword research, including the Keyword Research module on DistilledU. In order to get the most accurate estimates, we want to aim for as wide as possible a set of keywords that are actually relevant to the site we’re proposing.
The intention of this methodology is focussed on the core non-branded keywords a site could rank for, e.g. the names of products and services offered by your website. Branded keywords should be considered separately, as they are much more prone to changes in search volume for a new brand being launched.
Tangential or unrelated keywords (that might be the subject of a blog or resources page) are generally better considered separately, as there’s often an unbounded number of these. My recommendation would be that when deciding on content to create, to run this process at a smaller scale over those particular keywords to estimate the organic traffic available for a piece of content.
Below is a scaled-down example list of keywords that we could put together for a site selling colourful knitwear. In reality, your keyword list should probably run to hundreds or thousands of keywords.
2. Search volume for each keyword in our list
There are many SEO tools available to get search volume data from. At Distilled we like to use Ahrefs, but any SEO tool will have this data available. Google Keyword Planner also gives ranges of keyword volume for free, although that data tends to be grouped into ranges. For the purposes of this exercise, you can take the midpoint of those ranges.
No keyword volume source, including Google, will be completely accurate, and they will generally give a monthly average not taking into account seasonal fluctuations. You should also be aware of keyword volumes for close variants being grouped together - there’s a risk you’ll be double counting without taking this into account.
3. For each keyword, the highest ranking we can reasonably expect to achieve
Once we have our list of keywords with search volume, we should look at what’s currently ranking in the search results, in order to estimate where our site could expect to rank.
Note that what I’m about to outline is not a rigorous process. As mentioned above, the idea is to get a ballpark figure, and this method relies on some assumptions that are not to be held as claims about how SEO works. Specifically, we’re assuming that:
We can make pages of high enough quality, and well-targeted enough, to rank well for a given keyword
With this being the case, we could aspire to rank as high as anyone currently ranking in the SERP who has a less strong domain (in terms of backlink profile) than our site.
Of course, SEO is realistically much more complicated than that, but for the purposes of developing an estimate, we can hold our nose and go ahead with these assumptions.
Want more advice like this in your inbox? Join the monthly newsletter.
// <![CDATA[ hbspt.forms.create({ target: '#bottom-of-blog-hubspot-cta-form', css: '', cssRequired: '', errorClass: 'none', errorMessageClass: 'hide', submitButtonClass: 'button orange', portalId: '2124102', formId: '8813300d-507f-42eb-94fe-b6452a7cc124' }); // ]]>
This process is replicating some of the inputs that a lot of keyword tools use to calculate their difficulty scores. The reason we’re doing this is that we need to use a specific ranking position in order to project the amount of traffic we can get, which difficulty scores don’t give us.
In order to work out where can rank, we need to see who’s currently ranking for each keyword. Many rank tracking tools (including STAT, which we use at Distilled) will give you this data, as will search analytics tools such as Ahrefs and SEMrush. Let’s take the top 20 ranking URLs for each keyword. Don’t forget to get the rankings for the country/market you’re doing this analysis on!
View full size image
We then want to cross-check these URLs against backlink data. Select your favourite backlink tool (Moz/Ahrefs/Majestic), and look up the domain-level strength of each domain that ranks for your keywords. If you’re doing this at a large scale, I’d recommend using the API of your tool, or URL Profiler. For this example, I’ve used Ahrefs domain rating (with 100 for any ranking positions that don’t have an organic result).
View full size image
We are making the assumption that we can rank as well as any currently ranking site with a lower domain score than our site. If you’re running this analysis for a new site that doesn’t exist (and therefore doesn’t have a backlink profile), you can use a hypothetical target backlink score based on what you believe to be achievable.
For each keyword, find the highest ranking position which is currently occupied by a weaker domain than yours. If none such pages exist in the top 20 results, we can discount those keywords from our analysis, and assume that we can’t rank for them.
View full size image
4. For each ranking position, the amount of traffic we could expect to get each month
Now that we have our highest achievable rank, and monthly search volume, we can combine them to get the maximum monthly traffic we can expect from each keyword. To do this, we use a clickthrough rate model to estimate what percentage of searches lead to a click on a search result in each ranking position.
There are some very good public resources - I like Advanced Web Ranking’s data. While generic data like this will never be perfect, it will get us a good-enough estimate that we can use for this task.
If we multiply the monthly search volume by the click-through rate estimate for the highest achievable ranking position, that gives us the traffic expectation for each keyword. Add these together, and we get the total traffic we can expect for the site. We’re done!
What we can do with this information
Now that we have a number for the traffic we can expect, what can we do with it? Let’s revisit our assumptions. We’ve gone into this analysis assuming a static list of keywords, and a fixed backlink profile. What if we change that up?
What if our backlink profile improves?
Because we’ve set this up as a model in a spreadsheet, it’s relatively simple to tweak the domain score rating. By doing this, we can see how performance would improve by, for example, adding 10 points to our domain rating. The sensitivity of the traffic levels to small changes in this would be a good indication that it’s a good idea to invest in growing your backlink profile.
What if we target a broader range of keywords?
The other way we could get more traffic would be by targeting more keywords. This could be via blog or resources style content, or by adding more transactional keywords. To update the model, essentially repeat the above process with a new list of keywords and add it into the initial model. This can be a great way to identify new keyword opportunities.
Improving the model
As mentioned above, this was (intentionally) a very simple model. We’ve glossed over a lot of nuance in order to come to a ballpark figure of achievable search traffic. For a more rigorous estimate of the available search traffic, there are a few modifications I would make:
Use a tailored CTR model
The AWR resource mentioned above provides a good breakdown of clickthrough rate by industry and device, as well as for different types of search intent. If you have the time, you should go through your keywords and pick out the appropriate CTR curve for each one.
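One simple way to wire this in is to key the CTR lookup on a per-keyword intent label. The curves and labels below are illustrative placeholders, not AWR's real numbers:

```python
# Illustrative CTR curves per search intent (replace with real data)
CTR_CURVES = {
    "transactional": {1: 0.25, 2: 0.13, 3: 0.09},
    "informational": {1: 0.32, 2: 0.16, 3: 0.11},
}

def ctr_for(intent, position):
    """Look up CTR on the curve for this keyword's intent."""
    return CTR_CURVES[intent].get(position, 0.01)  # assume ~1% beyond the curve

# Hypothetical keywords: (keyword, intent, monthly volume, achievable position)
keywords = [
    ("buy rainbow jumper", "transactional", 500, 2),
    ("how to knit a jumper", "informational", 3000, 3),
]
total = sum(vol * ctr_for(intent, pos) for _, intent, vol, pos in keywords)
print(round(total))  # volume-weighted estimate using per-intent curves
```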
Break it up by device type
Some keywords can have dramatically different search results by device type. Tracking keywords separately on mobile and desktop, as some rank tracking tools (including STAT) allow, would let you estimate the highest achievable rank on each device separately. Unfortunately, there’s no source of search volume that I’m aware of that splits volumes by device type (please correct me in the comments if you know of one!), so you can split the search volume in half between mobile and desktop, or in a different proportion if you know that the particular space skews towards mobile or desktop.
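A sketch of the per-device version of the estimate, with an assumed mobile share and illustrative CTR curves (all numbers hypothetical):

```python
def device_traffic(volume, mobile_share, pos_mobile, pos_desktop,
                   ctr_mobile, ctr_desktop):
    """Estimate monthly traffic with separate ranks and CTR curves per device.

    mobile_share is an assumption (use 0.5 if you have no better data)."""
    mobile_vol = volume * mobile_share
    desktop_vol = volume - mobile_vol
    traffic = 0.0
    if pos_mobile is not None:  # None means no achievable rank on that device
        traffic += mobile_vol * ctr_mobile.get(pos_mobile, 0.01)
    if pos_desktop is not None:
        traffic += desktop_vol * ctr_desktop.get(pos_desktop, 0.01)
    return traffic

# Illustrative curves; this keyword ranks position 2 on mobile, 4 on desktop
ctr_m = {1: 0.28, 2: 0.14, 3: 0.09, 4: 0.06}
ctr_d = {1: 0.31, 2: 0.16, 3: 0.10, 4: 0.07}
print(device_traffic(1000, 0.6, 2, 4, ctr_m, ctr_d))  # combined monthly estimate
```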
Look at the SERPs
The above methodology focuses on backlinks as the key decider of whether a page can rank or not. This is not how search works - there are hundreds of factors that go into deciding who ranks where for which keywords. Looking more closely at who is currently ranking will give great insight, as it might show opportunities that wouldn’t otherwise be apparent by looking at links alone.
For example, if a weaker domain ranks in the top three positions because it is an exact match for the keyword, that would indicate that those results aren’t a good opportunity. Conversely, if the pages ranking for a given keyword aren’t a very good match for the intent of the search, and you can create a page that matches the intent better, that represents a better opportunity than link numbers would suggest.
Also, if all of the pages ranking for a search term aren’t a close match to your site (e.g. if all of the results are informational for a search that you would target with a transactional page), it’s probably not a good opportunity and you should remove it from your list.
Conclusion
Thanks for reading through this post! I’d love to hear if anyone has any feedback on the process or ways it could be improved. Let me know in the comments!
from Marketing https://www.distilled.net/resources/how-to-estimate-the-total-available-organic-search-traffic-for-a-website/ via http://www.rssmix.com/
heavenwheel · 6 years ago
Text
How to Estimate the Total Available Organic Search Traffic for a Website
Whether you’re expanding an existing business into a new market, bringing out a new product range, or building a brand new website from scratch, a key question you’ll need to answer is how much organic search traffic is reasonable to expect, and what the ceiling on that number might be. It’s also useful for an established site to know how close to the ceiling you are. In this post, I outline a process for coming to a reasonable estimate for that number.
How we do it
This is all an estimate, while it tries to be as rigorous as possible there are a lot of unknowns that could cause the estimate to be off in either direction. The intention is to get a ballpark estimate in order to set expectations and give an indication of the level of investment that should be devoted to a site.
In order to work out how much organic traffic we can get, we need four things:
1. A list of keywords we can reasonably expect to rank for
In order to gather this list of keywords, a level of keyword research is required. There are many resources available online for keyword research, including the Keyword Research module on DistilledU. In order to get the most accurate estimates, we want to aim for as wide as possible a set of keywords that are actually relevant to the site we’re proposing.
The intention of this methodology is focussed on the core non-branded keywords a site could rank for, e.g. the names of products and services offered by your website. Branded keywords should be considered separately, as they are much more prone to changes in search volume for a new brand being launched.
Tangential or unrelated keywords (that might be the subject of a blog or resources page) are generally better considered separately, as there’s often an unbounded number of these. My recommendation would be that when deciding on content to create, to run this process at a smaller scale over those particular keywords to estimate the organic traffic available for a piece of content.
Below is a scaled-down example list of keywords that we could put together for a site selling colourful knitwear. In reality, your keyword list should probably run to hundreds or thousands of keywords.
2. Search volume for each keyword in our list
There are many SEO tools available to get search volume data from. At Distilled we like to use Ahrefs, but any SEO tool will have this data available. Google Keyword Planner also gives ranges of keyword volume for free, although that data tends to be grouped into ranges. For the purposes of this exercise, you can take the midpoint of those ranges.
No keyword volume source, including Google, will be completely accurate, and they will generally give a monthly average not taking into account seasonal fluctuations. You should also be aware of keyword volumes for close variants being grouped together - there’s a risk you’ll be double counting without taking this into account.
3. For each keyword, the highest ranking we can reasonably expect to achieve
Once we have our list of keywords with search volume, we should look at what’s currently ranking in the search results, in order to estimate where our site could expect to rank.
Note that what I’m about to outline is not a rigorous process. As mentioned above, the idea is to get a ballpark figure, and this method relies on some assumptions that are not to be held as claims about how SEO works. Specifically, we’re assuming that:
We can make pages of high enough quality, and well-targeted enough, to rank well for a given keyword
With this being the case, we could aspire to rank as high as anyone currently ranking in the SERP who has a less strong domain (in terms of backlink profile) than our site.
Of course, SEO is realistically much more complicated than that, but for the purposes of developing an estimate, we can hold our nose and go ahead with these assumptions.
Want more advice like this in your inbox? Join the monthly newsletter.
// <![CDATA[ hbspt.forms.create({ target: '#bottom-of-blog-hubspot-cta-form', css: '', cssRequired: '', errorClass: 'none', errorMessageClass: 'hide', submitButtonClass: 'button orange', portalId: '2124102', formId: '8813300d-507f-42eb-94fe-b6452a7cc124' }); // ]]>
This process is replicating some of the inputs that a lot of keyword tools use to calculate their difficulty scores. The reason we’re doing this is that we need to use a specific ranking position in order to project the amount of traffic we can get, which difficulty scores don’t give us.
In order to work out where can rank, we need to see who’s currently ranking for each keyword. Many rank tracking tools (including STAT, which we use at Distilled) will give you this data, as will search analytics tools such as Ahrefs and SEMrush. Let’s take the top 20 ranking URLs for each keyword. Don’t forget to get the rankings for the country/market you’re doing this analysis on!
View full size image
We then want to cross-check these URLs against backlink data. Select your favourite backlink tool (Moz/Ahrefs/Majestic), and look up the domain-level strength of each domain that ranks for your keywords. If you’re doing this at a large scale, I’d recommend using the API of your tool, or URL Profiler. For this example, I’ve used Ahrefs domain rating (with 100 for any ranking positions that don’t have an organic result).
View full size image
We are making the assumption that we can rank as well as any currently ranking site with a lower domain score than our site. If you’re running this analysis for a new site that doesn’t exist (and therefore doesn’t have a backlink profile), you can use a hypothetical target backlink score based on what you believe to be achievable.
For each keyword, find the highest ranking position which is currently occupied by a weaker domain than yours. If none such pages exist in the top 20 results, we can discount those keywords from our analysis, and assume that we can’t rank for them.
View full size image
4. For each ranking position, the amount of traffic we could expect to get each month
Now that we have our highest achievable rank, and monthly search volume, we can combine them to get the maximum monthly traffic we can expect from each keyword. To do this, we use a clickthrough rate model to estimate what percentage of searches lead to a click on a search result in each ranking position.
There are some very good public resources - I like Advanced Web Ranking’s data. While generic data like this will never be perfect, it will get us a good-enough estimate that we can use for this task.
If we multiply the monthly search volume by the click-through rate estimate for the highest achievable ranking position, that gives us the traffic expectation for each keyword. Add these together, and we get the total traffic we can expect for the site. We’re done!
What we can do with this information
Now that we have a number for the traffic we can expect, what can we do with it? Let’s revisit our assumptions. We’ve gone into this analysis assuming a static list of keywords, and a fixed backlink profile. What if we change that up?
What if our backlink profile improves?
Because we’ve set this up as a model in a spreadsheet, it’s relatively simple to tweak the domain score rating. By doing this, we can see how performance would improve by, for example, adding 10 points to our domain rating. The sensitivity of the traffic levels to small changes in this would be a good indication that it’s a good idea to invest in growing your backlink profile.
What if we target a broader range of keywords?
The other way we could get more traffic would be by targeting more keywords. This could be via blog or resources style content, or by adding more transactional keywords targeting more terms. In order to update the model, it’s a case of essentially repeating the above process with a new list of keywords, and adding it into the initial model. This can be a great way to identify new keyword opportunities.
Improving the model
As mentioned above, this was (intentionally) a very simple model. We’ve glossed over a lot of nuance in order to come to a ballpark figure of achievable search traffic. For a more rigorous estimate of the available search traffic, there are a few modifications I would make:
Use a tailored CTR model
The AWR link I provided above provides a good breakdown of clickthrough rate by industry and device, as well as different types of search intent. If you have the time, you should go through your keywords and pick out the appropriate CTR curve per keyword.
Break it up by device type
Some keywords can have dramatically different search results by device type. Tracking keywords separately on mobile and desktop, as is possible using some rank tracking tools including STAT, would allow you to separately estimate the highest achievable rank on each device. Unfortunately, there’s no source of search volume that I’m aware of that allows you to split search volumes by device type (please correct me in the comments if you know of one!), so you can split the search volume in half between mobile and desktop, or in a different proportion if you know that the particular space has a bias to mobile or desktop.
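As a quick sketch of that split (the 50/50 ratio is just the default assumption mentioned above):

```python
# 50/50 is the default assumption; pass a different share if the niche
# is known to skew mobile or desktop.
def split_volume(monthly_volume, mobile_share=0.5):
    mobile = monthly_volume * mobile_share
    return {"mobile": mobile, "desktop": monthly_volume - mobile}

print(split_volume(1000))        # even split
print(split_volume(1000, 0.7))   # mobile-heavy niche
```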
Look at the SERPs
The above methodology focuses on backlinks as the key decider of whether a page can rank or not. This is not how search works - there are hundreds of factors that go into deciding who ranks where for which keywords. Looking more closely at who is currently ranking will give great insight, as it might show opportunities that wouldn’t otherwise be apparent by looking at links alone.
For example, if a weaker domain ranks in the top three positions for a keyword it is an exact match for, that would indicate that those results aren’t a good opportunity. Conversely, if the pages ranking for a given keyword aren’t a very good match for the intent of the search, and you can create a page that matches the intent better, that represents a better opportunity than link numbers would suggest.
Also, if all of the pages ranking for a search term aren’t a close match to your site (e.g. if all of the results are informational for a search that you would target with a transactional page), it’s probably not a good opportunity and you should remove it from your list.
Conclusion
Thanks for reading through this post! I’d love to hear if anyone has any feedback on the process or ways it could be improved. Let me know in the comments!
from Digital https://www.distilled.net/resources/how-to-estimate-the-total-available-organic-search-traffic-for-a-website/ via http://www.rssmix.com/
0 notes
bethobriendesign-blog1 · 7 years ago
Text
SEMrush Ranking Factors Study 2017 -- Methodology Demystified
In the second edition of the SEMrush Ranking Factors Study 2017 we’ve added 5 more backlink-related factors and compared the strength of their influence on a particular URL vs. an entire domain. Following tradition, we offer you a deeper look at our methodology.

Back in June, when the first edition of the study was published, many brows were raised in disbelief: direct website visits are usually assumed to be the result of higher SERP positions, not vice versa. And yet site visits is exactly what our study confirmed to be the most important Google ranking factor among those we analyzed, both times. Moreover, the methodology we used was unique to the field of SEO studies: we traded correlation analysis for the Random Forest machine learning algorithm. As the ultimate goal of our study was to help SEOs prioritize tasks and do their jobs more effectively, we would like to reveal the behind-the-scenes details of our research and bust some popular misconceptions, so that you can safely rely on our takeaways.
Tumblr media
Jokes aside, this post is for real nerds, so here is a short glossary:

Decision tree: a tree-like structure that represents a machine learning algorithm, usually applied to classification tasks. It splits a training sample dataset into homogeneous groups/subsets based on the most significant of all the attributes.

Supervised machine learning: a type of machine learning algorithm that trains a model to find patterns in the relationship between input variables (features, A) and an output variable (target value, B): B = f(A). The goal of SML is to train this model on a sample of the data so that, when offered the out-of-sample data, the algorithm is able to predict the target value precisely, based on the feature set offered. The training dataset represents the teacher looking after the learning process. The training is considered successful and terminates when the algorithm achieves an acceptable performance quality.

Feature (or attribute, or input variable): a characteristic of a separate data entry used in analysis. For our study and this blog post, features are the alleged ranking factors.

Binary classification: a type of classification task that falls into the supervised learning category. The goal of this task is to predict a target value (= class) for each data entry, and for binary classification it can be either 1 or 0 only.

Using the Random Forest Algorithm for the Ranking Factors Study

The Random Forest algorithm was developed by Leo Breiman and Adele Cutler in the mid-1990s. It hasn’t undergone any major changes since then, which proves its high quality and universality: it is used for classification, regression, clustering, feature selection and other tasks. Although the Random Forest algorithm is not very well known to the general public, we picked it for a number of good reasons: it is one of the most popular machine learning algorithms, and it features unexcelled accuracy.
Its first and foremost application is ranking the importance of variables (and its nature is perfect for this task, as we’ll cover later in this post), so it seemed an obvious choice. The algorithm treats data in a way that minimizes errors:

The random subspace method offers each learner random samples of features, not all of them. This guarantees that the learner won’t be overly focused on a pre-defined set of features and won’t make biased decisions about an out-of-sample dataset.

The bagging (or bootstrap aggregating) method also improves precision. Its main point is offering learners not a whole dataset, but random samples of data.

Given that we do not have a single decision tree, but rather a whole forest of hundreds of trees, we can be sure that each feature and each pair of domains will be analyzed approximately the same number of times. Therefore, the Random Forest method is stable and operates with minimum errors.

The Pairwise Approach: Pre-Processing Input Data

We decided to base our study on a set of 600,000 keywords from the worldwide database (US, Spain, France, Italy, Germany and others), the URL position data for the top 20 search results, and a list of alleged ranking factors. As we were not going to use correlation analysis, we had to conduct binary classification prior to applying the machine learning algorithm. This task was implemented with the Pairwise approach, one of the most popular machine-learned ranking methods, used, among others, by Microsoft in its research projects. The Pairwise approach implies that instead of examining an entire dataset, each SERP is studied individually: we compare all possible pairs of URLs (the first result on the page with the fifth, the seventh result with the second, etc.) in regard to each feature. Each pair is assigned a set of absolute values, where each value is a quotient after dividing the feature value for the first URL by the feature value for the second URL.
On top of that, each pair is also assigned a target value that indicates whether the first URL is positioned higher than the second one on the SERP (target value = 1) or lower (target value = 0).

Procedure outcomes: each URL pair receives a set of quotients for each feature and a target value of either 1 or 0. This variety of numbers will be used as a training dataset for the decision trees. We are now able to make statistical observations that certain feature values and their combinations tend to result in a higher SERP position for a URL. This allows us to build a hypothesis about the importance of certain features and make a forecast about whether a certain set of feature values will lead to higher rankings.

Growing the Decision Tree Ensemble: Supervised Learning

The dataset we received after the previous step is absolutely universal and can be used for any machine learning algorithm. Our preferred choice was Random Forest, an ensemble of decision trees. Before the trees can make any reasonable decisions, they have to train; this is when the supervised machine learning takes place. To make sure the training is done correctly and unbiased decisions about the main dataset are made, the bagging and random subspace methods are used.
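A minimal sketch of this pairwise transform, using made-up feature values for a single three-result SERP (real input would be one row per ranking URL):

```python
from itertools import permutations

# Toy SERP with invented feature values: (position, {feature: value}).
serp = [
    (1, {"visits": 120000.0, "backlinks": 3400.0}),
    (2, {"visits": 45000.0, "backlinks": 5100.0}),
    (3, {"visits": 30000.0, "backlinks": 900.0}),
]

pairs = []
for (pos_a, feats_a), (pos_b, feats_b) in permutations(serp, 2):
    # One quotient per feature: first URL's value over the second URL's.
    quotients = {name: feats_a[name] / feats_b[name] for name in feats_a}
    # Target value: 1 if the first URL ranks higher, 0 otherwise.
    target = 1 if pos_a < pos_b else 0
    pairs.append((quotients, target))

for quotients, target in pairs:
    print(quotients, target)
```

Ordered pairs are taken in both directions, so the training data contains both 1 and 0 target values for every comparison.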
Tumblr media
Bagging is the process of creating a training dataset by sampling with replacement. Let’s say we have X lines of data. According to bagging principles, we are going to create a training dataset for each decision tree, and this set will have the same number of X lines. However, these sample sets will be populated randomly and with replacement, so each will include only approximately two-thirds of the original X lines, and there will be value duplicates. About one-third of the original values remain untouched and will be used once the learning is over. We did a similar thing for the features using the random subspace method: the decision trees were trained on random samples of features instead of the entire feature set. Not a single tree uses the whole dataset and the whole list of features, but having a forest of multiple trees allows us to say that every value and every feature are very likely to be used approximately the same number of times.

Growing the Forest

Each decision tree repetitively partitions the training sample dataset based on the most important variable and does so until each subset consists of homogeneous data entries. The tree scans the whole training dataset and chooses the most important feature and its precise value, which becomes a kind of pivot point (node) and splits the data into two groups: for one group the condition chosen above is true, for the other false (the YES and NO branches). All final subgroups (node leaves) receive an average target value based on the target values of the URL pairs that were placed into a certain subgroup. Since the trees use the sample dataset to grow, they learn while growing. Their learning is considered successful and high-quality when a target percentage of correctly guessed target values is achieved. Once the whole ensemble of trees is grown and trained, the magic begins: the trees are now allowed to process the out-of-sample data (about one-third of the original dataset).
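To make bagging and the random subspace method concrete, here is a toy sketch using scikit-learn's Random Forest on synthetic stand-in data. We have no access to the study's actual dataset or code, so every number and the relationship between features is invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the pairwise dataset: each row is a URL pair,
# each column a feature quotient; y = 1 if the first URL ranked higher.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(500, 6))
y = (X[:, 0] * X[:, 1] > 1.0).astype(int)  # invented relationship

# bootstrap=True is the bagging step; max_features="sqrt" is the random
# subspace method. oob_score uses the ~1/3 of rows each tree never saw,
# matching the out-of-sample check described above.
forest = RandomForestClassifier(n_estimators=300, bootstrap=True,
                                max_features="sqrt", oob_score=True,
                                random_state=0).fit(X, y)
print(f"out-of-bag accuracy: {forest.oob_score_:.2f}")

# "Voting": predict_proba averages the per-tree verdicts into a probability.
print(forest.predict_proba(X[:1]))
```

The out-of-bag score is the same idea as offering a URL pair only to trees that never saw it during training.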
A URL pair is offered to a tree only if it hasn’t encountered the same pair during training. This means that a given URL pair is not offered to 100 percent of the trees in the forest. Then, voting takes place: for each pair of URLs, a tree gives its verdict, aka the probability of one URL taking a higher position in the SERP compared to the second one. The same action is taken by all other trees that meet the "haven’t seen this URL pair before" requirement, and in the end each URL pair gets a set of probability values. All the received probabilities are then averaged. Now there is enough data for the next step.

Estimating Attribute Importance with Random Forest

Random Forest produces extremely credible results when it comes to attribute importance estimation. The assessment is conducted as follows: the attribute values are mixed up across all URL pairs, and these updated sets of values are offered to the algorithm. Any changes in the algorithm’s quality or stability are measured (whether the percentage of correctly guessed target values remains the same or not). Then, based on the values received, conclusions can be made: if the algorithm’s quality drops significantly, the attribute is important, and the heavier the slump in quality, the more important the attribute is; if the algorithm’s quality remains the same, the attribute is of minor importance. The procedure is repeated for all the attributes. As a result, a rating of the most important ranking factors is obtained.

Why We Think Correlation Analysis is Bad for Ranking Factors Studies

We intentionally abandoned the general practice of using correlation analysis, and we have still received quite a few comments like "Correlation doesn’t mean causation" and "Those don’t look like ranking factors, but more like correlations." Therefore we feel this point deserves a separate paragraph.
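The shuffle-and-measure procedure from the attribute importance estimation above can be sketched as follows. The dataset is synthetic (we picked which feature matters), so the importances it recovers are only illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Shuffle one attribute at a time and measure the drop in accuracy -
# a toy version of the mixing-up procedure described above.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)  # column 0 matters most

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
baseline = model.score(X, y)

drops = []
for j in range(X.shape[1]):
    X_mixed = X.copy()
    rng.shuffle(X_mixed[:, j])            # destroy this feature's signal
    drops.append(baseline - model.score(X_mixed, y))

# Rank attributes: the heavier the slump in quality, the more important.
for j in np.argsort(drops)[::-1]:
    print(f"feature {j}: accuracy drop {drops[j]:.3f}")
```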
First and foremost, we would like to stress again that the initial dataset used for the study is a set of highly changeable values. Remember that we examined not one, but 600,000 SERPs. Each SERP is characterized by its own average attribute value, and this uniqueness is completely disregarded in the process of correlation analysis. That being said, we believe that each SERP should be treated separately and with respect to its originality.

Correlation analysis gives reliable results only when examining the relationship between two variables (for example, the impact of the number of backlinks on a SERP position). "Does this particular factor influence position?" - this question can be answered quite precisely, since only one impacting variable is involved. But are we in a position to study each factor in isolation? Probably not, as we all know that there is a whole bunch of factors that influence a URL’s position in a SERP.

Another quality criterion for correlation analysis is the variety of the received correlation ratios. For example, if there is a lineup of correlation ratios like (-1, 0.3 and 0.8), then it is pretty fair to say that one parameter is more important than the others. The closer the ratio’s absolute value (modulus) is to one, the stronger the correlation. If the ratio’s modulus is under 0.3, such a correlation can be disregarded: the dependency between the two variables in this case is too weak to make any trustworthy conclusions. For all the factors we analyzed, the correlation ratio was under 0.3, so we had to shed this method.

One more reason to dismiss this analysis method was the high sensitivity of the correlation value to outliers and noise, and the data for various keywords suggests a lot of them. If one extra data entry is added to the dataset, the correlation ratio changes immediately. Hence this metric can’t be viable in the case of multiple variables, e.g. in a ranking factors study, and can even lead to incorrect deductions. Coming down to the final curtain, it is hard to believe that one or two factors with a correlation ratio modulus so close to one exist; if this were true, anyone could easily hack Google’s algorithms, and we would all be in position 1!

Frequently Asked Questions

Although we tried to answer most of the frequently raised questions above, here are some more for the more curious readers.

Why didn’t we use artificial neural networks (ANNs)?

Although artificial neural networks are perfect for tasks with a large number of variables, e.g. image recognition (where each pixel is a variable), they produce results that are difficult to interpret and don’t allow you to compare the weight of each factor. Besides, ANNs require a massive dataset and a huge number of features to produce reliable results, and the input data we had collected didn’t match this description. Unlike Random Forest, where each decision tree votes independently and thus a high level of reliability is guaranteed, neural networks process data in one pot. There is nothing to indicate that using ANNs for this study would result in more accurate results. Our main requirements for a research method were stability and the ability to identify the importance of the factors. That being said, Random Forest was a perfect fit for our task, which is proven by numerous ranking tasks of a similar nature, also implemented with the help of this algorithm.

Why are website visits the most important Google ranking factor?

Hands down, this was probably the most controversial takeaway of our study. When we saw the results of our analysis, we were equally surprised. At the same time, our algorithm was trained on a solid scope of data, so we decided to double-check the facts.
We excluded the organic and paid search data, as well as social and referral traffic, took into account only the direct traffic, and the results were pretty much the same: the position distribution remained unchanged (the graphs on pp. 40-41 of the study illustrate this point). To us, this finding makes perfect sense and confirms that Google prioritizes domains with more authority, as described in its Search Quality Evaluator Guidelines. Although it may seem that domain authority is just a lame excuse and a very vague and ephemeral concept, these guidelines dispel this myth completely. Back in 2015 Google introduced this handbook to help estimate website quality and reflect what Google thinks search users want. The handbook lists E-A-T, which stands for Expertise, Authoritativeness, and Trustworthiness, as an important webpage-quality indicator. Main content quality and amount, website information (i.e. who is responsible for the website), and website reputation all influence the E-A-T of a website.

We suggest thinking of it in the following way: if a URL ranks in the top 10, by default it contains content that is relevant to a user’s search query. But to distribute the places between these ten leaders, Google starts to count the additional parameters. We all know that there is a whole team of search quality raters behind the scenes, which is responsible for training Google’s search algorithms and improving search results’ relevance. As advised by the Google Quality Evaluator Guidelines, raters should give priority to high-quality pages and teach the algorithms to do so as well. So, the ranking algorithm is trained to assign a higher position to pages that belong to trusted and highly authoritative domains, and we think this may be the reason behind the data we received for direct traffic and for its importance as a signal. For more information, check out our "EAT and YMYL: New Google Search Guidelines Acronyms of Quality Content" blog post.
Tumblr media
Here’s more: at the recent SMX East conference, Google’s Gary Illyes confirmed that how people perceive your site will affect your business. And although this, according to Illyes, does not necessarily affect how Google ranks your site, it still seems important to invest in earning users’ loyalty: happy users = happy Google.

"The Google algorithm is like a human. It looks at brand sentiment and online reputation to understand your website better." @methode #SMX - Ari Finkelstein (@arifinkels) October 26, 2017

What does this mean to you? Brand awareness (estimated, among other things, by your number of direct website visits) strongly affects your rankings and deserves your effort on par with SEO.

Difference in Ranking Factors’ Impact on a URL vs. a Domain

As you may have spotted, every graph from our study shows a noticeable spike for the second position. We promised to have a closer look at this deviation and thus added a new dimension to our study. The second edition covers the impact of the three most important factors (direct website visits, time on site and the number of referring domains) on the rankings of a particular URL, rather than just the domain that it resides on. One would assume that the websites in the first position are the most optimized, and yet we saw that every trend line showed a drop at the first position. We connected this deviation with branded keyword search queries. A domain will probably take the first position in the SERP for any search query that contains its branded keywords. And regardless of how well a website is optimized, it will rank number one anyway, so it has nothing to do with SEO efforts. This explains why ranking factors affect a SERP’s second position more than the first one. To prove this, we decided to look at our data from a new angle: we investigated how the ranking factors impact single URLs that appear on the SERP.
For each factor, we built separate graphs showing the distribution of URLs and domains across the first 10 SERP positions (please see pp. 50-54). Although the study includes graphs only for the top three most influential factors, the tendency that we discovered persists for other factors as well. What does this mean to you as a marketer? When a domain is ranking for a branded keyword, many factors lose their influence. However, when optimizing for non-branded keywords, keep in mind that the analyzed ranking factors have more influence on the positions of the particular URL than on the domain on which it resides. That means that the rankings of a specific page are more sensitive to on-page optimization, link-building efforts and other optimization techniques.

Conclusion: How to Use the SEMrush Ranking Factors Study

There is no guarantee that, if you improve your website’s metrics for any of the above factors, your pages will start to rank higher. We conducted a very thorough study that allowed us to draw reliable conclusions about the importance of these 17 factors to ranking higher on Google SERPs. Yet, this is just a reverse-engineering job well done, not a universal action plan, and this is what each and every ranking factors study is about. No one but Google knows all the secrets. However, here is a workflow that we suggest for dealing with our research:

Step 1. Understand which keywords you rank for: do they belong to low, medium or high search volume groups?

Step 2. Benchmark yourself against the competition: take a closer look at the methods they use to hit the top 10 and at their metrics. Do they have a large scope of backlinks? Are their domains secured with HTTPS?

Step 3. Using this study, pick and start implementing the optimization techniques that will yield the best results based on your keywords and the competition level on SERPs.
Once again, we encourage you to take a closer look at our study, reconsider the E-A-T concept and get yourself a good, fact-based SEO strategy!

https://www.semrush.com/blog/semrush-ranking-factors-study-2017-methodology-demystified/
0 notes