In the summer of 2014, I gave birth to a baby boy. He was born with a perfect Apgar score, after a very easy delivery. But my labor had not been smooth—in fact, throughout the day and a half of contractions, I believed there was something decidedly wrong. I also felt that way as I held him for the first time, and he writhed violently under my hands. In a video taken about 10 minutes after he was born, he can be seen lifting his head up off my chest. “Ooooh, look at how advanced he is!” someone can be heard trilling in the background, before her voice is overtaken by my own. “Don’t do that, love,” I say. Then, to the camera: “Does he seem like he’s in pain to you?”
It took my husband and me three years to understand that in fact I was right that day in the delivery room. Our son was hurt. And it will take him years to heal—longer than it should have, and that is on top of the injustice of the original wound—though I thank God every day that we figured it out.
The first breakthrough came when my husband David remembered a book about brain science he had read a decade earlier, by a doctor named Norman Doidge. It changed our lives, by allowing us to properly understand our son’s injury (and to understand why we couldn’t manage to get a straight answer about it from any of the “experts” we had seen). It’s been a tough road, but from that moment on, we at least knew what to do—and why.
A year or so later, we met Doidge and his wife, Karen, for dinner, and it is here that the story may become pertinent for you.
After we ordered, I told Norman I had a question I’d been wanting to ask—and that I wanted his honest answer to it, even if it meant that I had done something wrong. I proceeded to relay to him the entire tale, from the very beginning to that very moment, of what felt to me like our Kafkaesque medical mystery journey.
How was it, I then asked, that it took my husband and me—both children of doctors, both people with reporting and researching backgrounds, among the lucky who have health insurance, and with access through family and friends to what is billed as the best medical care in the country—years to figure this out, and that in the end we only did so basically by accident?
Norman looked at us sympathetically. “I don’t know how else to tell you this but bluntly,” he said. “There are still many good individuals involved in medicine, but the American medical system is profoundly broken. When you look at the rate of medical error—it's now the third leading cause of death in the U.S.—the overmedication, creation of addiction, the quick-fix mentality, not funding the poor, quotas to admit from ERs, needless operations, the monetization of illness vs. health, the monetization of side effects, a peer review system run by journals paid for by Big Pharma, the destruction of the health of doctors and nurses themselves by administrators, who demand that they rush through 10-minute patient visits, when so often an hour or more is required, and which means that in order to be ‘successful,’ doctors must overlook complexity rather than search for it ... Alana, the unique thing here isn’t that you fell down so many rabbit holes. What’s unique is that you found your way out at all.”
I had barely started processing this when Norman moved to change the subject: “Now, can I ask you two something? How come so much of the journalism I read seems like garbage?”
Oh, God.
David and I looked at each other, simultaneously realizing that the after-school special we thought we were in was actually a horror movie. If the medical industry was comprehensively broken, as Norman said, and the media was irrevocably broken, as we knew it was ... Was everything in America broken? Was education broken? Housing? Farming? Cities? Was religion broken?
Everything is broken.
…
For seven decades, the country’s intellectual and cultural life was produced and protected by a set of institutions—universities, newspapers, magazines, record companies, professional associations, cultural venues, publishing houses, Hollywood studios, think tanks, etc. Collectively, these institutions reflected a diversity of experiences and then stamped them all as “American”—conjuring coherence out of the chaos of a big and unwieldy country. This wasn’t a set of factories pumping out identical widgets, but rather a broad and messy jazz band of disparate elements that together produced something legible, clear, and at times even beautiful when each did their part.
…
This was the tinder. The tech revolution was the match—one-upping the ’70s economy by demanding more efficiency and more speed and more boundarylessness, and demanding it everywhere. The tech companies introduced not only a host of inhuman wage-suppressing tactics, like replacing full-time employees who had benefits with gig workers who had lower wages and none, but also a whole new aesthetic that has come to dominate every aspect of our lives—a set of principles that collectively might be thought of as flatness.
Flatness is the reason the three jobs with the most projected growth in your country all earn less than $27,000 a year, and it is also the reason that all the secondary institutions that once gave structure and meaning to hundreds of millions of American lives—jobs and unions but also local newspapers, churches, Rotary Clubs, main streets—have been decimated. And flatness is the mechanism by which, over the past decade and with increasing velocity over the last three years, a single ideologically driven cohort captured the entire interlocking infrastructure of American cultural and intellectual life. It is how the Long March went from a punchline to reality, as one institution after another fell and then entire sectors, like journalism, succumbed to control by narrow bands of sneering elitists who arrogated to themselves the license to judge and control the lives of their perceived inferiors.
Flatness broke everything.
…
Today’s revolution has been defined by a set of very specific values: boundarylessness; speed; universal accessibility; an allergy to hierarchy, so much so that the weighting or preferring of some voices or products over others is seen as illegitimate; seeing one’s own words and face reflected back as part of a larger current; a commitment to gratification at the push of a button; equality of access to commodified experiences as the right of every human being on Earth; the idea that all choices can and should be made instantaneously, and that the choices made by the majority in a given moment, on a given platform represent a larger democratic choice, which is therefore both true and good—until the next moment, on the next platform.
…
“You might not even realize you’re not where you started.” The machines trained us to accept, even chase, this high. Once we accepted it, we turned from willful individuals into parts of a mass that could move, or be moved, anywhere. Once people accepted the idea of an app, you could get them to pay for dozens of them—if not more. You could get people to send thousands of dollars to strangers in other countries to stay in homes they’d never seen in cities they’d never visited. You could train them to order in food—most of their food, even all of their food—from restaurants that they’d never been to, based on recommendations from people they’d never met. You could get them to understand their social world not as consisting of people whose families and faces one knew, which was literally the definition of social life for hundreds of thousands of years, but rather as composed of people who belonged to categories—“also followed by,” “friends in common,” “BIPOC”—that didn’t even exist 15 years ago. You could create a culture in which it was normal to have sex with someone whose two-dimensional picture you saw on a phone, once.
You could, seemingly overnight, transform people’s views about anything—even everything.
The Obama administration could swiftly overturn the decision-making space in which Capitol Hill staff and newspaper reporters functioned so that Iran, a country that had killed thousands of Americans and consistently announces itself to be America’s greatest enemy, is now to be seen as inherently as trustworthy and desirable an ally as France or Germany. Flatness, frictionlessness.
The biological difference between the sexes, which had been a foundational assumption of medicine as well as of the feminist movement, was almost instantaneously replaced not only by the idea that there are numerous genders but that reference in medicine, law or popular culture to the existence of a gender binary is actually bigoted and abusive. Flatness.
Facebook’s longtime motto was, famously, “Move fast and break things”—which is exactly what Silicon Valley enabled others to do.
The internet tycoons used the ideology of flatness to hoover up the value from local businesses, national retailers, the whole newspaper industry, etc.—and no one seemed to care. This heist—by which a small group of people, using the wiring of flatness, could transfer to themselves enormous assets without any political, legal or social pushback—enabled progressive activists and their oligarchic funders to pull off a heist of their own, using the same wiring. They seized on the fact that the entire world was already adapting to a life of practical flatness in order to push their ideology of political flatness—what they call social justice, but which has historically meant the transfer of enormous amounts of power and wealth to a select few.
Because this cohort insists on sameness and purity, they have turned the once-independent parts of the American cultural complex into a mutually validating pipeline for conformists with approved viewpoints—who then credential, promote and marry each other. A young Ivy League student gets A’s by parroting intersectional gospel, which in turn means that he is recommended by his professors for an entry-level job at a Washington think tank or publication that is also devoted to these ideas. His ability to widely promote those viewpoints on social media is likely to attract the approval of his next possible boss or the reader of his graduate school application or future mates. His success in clearing those bars will in turn open future opportunities for love and employment. Doing the opposite has an inverse effect, which is nearly impossible to avoid given how tightly this system is now woven. A person who is determined to forgo such worldly enticements—because they are especially smart, or rich, or stubborn—will see only examples of even more talented and accomplished people who have seen their careers crushed and reputations destroyed for daring to stick a toe over the ever multiplying maze of red lines.
So, instead of reflecting the diversity of a large country, these institutions have now been repurposed as instruments to instill and enforce the narrow and rigid agenda of one cohort of people, forbidding exploration or deviation—a regime that has ironically left homeless many, if not most, of the country’s best thinkers and creators. Anyone actually concerned with solving deep-rooted social and economic problems, or God forbid with creating something unique or beautiful—a process that is inevitably messy and often involves exploring heresies and making mistakes—will hit a wall. If they are young and remotely ambitious they will simply snuff out that part of themselves early on, strangling the voice that they know will get them in trouble before they’ve ever had the chance to really hear it sing.
…
I’m not looking to rewind the clock back to a time before we all had email and cellphones. What I want is to be inspired by the last generation that made a new life-world—the postwar American abstract expressionist painters, jazz musicians, and writers and poets who created an alternate American modernism that directly challenged the ascendant Communist modernism: a blend of forms and techniques with an emphasis not on the facelessness of mass production, but on individual creativity and excellence.
Like them, our aim should be to take the central, unavoidable and potentially beneficent parts of the Flatness Aesthetic (including speed, accessibility; portability) while discarding the poisonous parts (frictionlessness; surveilled conformism; the allergy to excellence). We should seek out friction and thorniness, hunt for complexity and delight in unpredictability. Our lives should be marked not by “comps” and metrics and filters and proofs of concept and virality but by tight circles and improvisation and adventure and lots and lots of creative waste.
And not just to save ourselves, but to save each other. The vast majority of Americans are not ideologues. They are people who wish to live in a free country and get along with their neighbors while engaging in profitable work, getting married, raising families, being entertained, and fulfilling their American right to adventure and self-invention. They are also the consumer base for movies, TV, books, and other cultural products. Every time Americans are given the option to ratify progressive dictates through their consumer choices, they vote in the opposite direction. When HBO removed Gone with the Wind from its on-demand library last year, it became the #1 bestselling movie on Amazon. Meanwhile, endless numbers of Hollywood right-think movies and supposed literary masterworks about oppression are dismal failures for studios and publishing houses that would rather sink into debt than face a social-justice firing squad on Twitter.
Spirituality Defined
Where did our working definition of spirituality come from? How has it evolved over centuries of research, ritual and belief? Philosophy grad Brayte Singletary stopped by the blog this week to take us on a deep dive into the ever-elusive meaning of spirituality. Enjoy!
What even is spirituality? Rachel asks that very question in one of this blog’s first posts, and gives her answer there too. It’s one of the fundamental questions of spiritual direction. Those seeking or giving spiritual direction are liable to stumble on it sooner or later, through education or reflection. This post is one of those trips—and since it’s a bone we may need help chewing, I attempt to shine some Sirius-light on the best research I could dig up. Hopefully it’s illuminating.
In 2016 some researchers in Germany and the U.S. published the results of a formal investigation into the meaning of spirituality [A]. They based their investigation on a 2011 survey of Germans and Americans that asked, among other questions, “How would you define the term ‘spirituality’?” Approximately eighteen hundred different definitions came back, about forty percent German and sixty percent American. Quantifying these samples, the researchers started running statistical analysis.
First they looked for categories of response, grouping similar categories together and narrowing the list down to just those that make the most sense of overall response patterns [B]. They found that ten basically distinct concept clusters [C] come under the heading of spirituality, almost always in some combination [D]:
A keenly-felt connection to and harmony with nature, humanity, the world, the universe, or the whole of reality.
Dependence on, relationship to, or union with the divine; a part of religion, esp. Christianity.
A search for one’s higher or true inner self, meaning, purpose; knowledge of these things; attainment of peace or enlightenment, esp. in terms of a path or journey [E].
Holding and daily acting according to ethical values, especially in relation to others, one’s community, or humanity; a moral way of life [F].
Faith or belief in transmundane forces, energies, beings, a higher power, gods or God.
A noncommittal, indefinite, but intensely emotional, maybe loving sense that there is some thing(s) or being(s) higher than and beyond this world, this life, or oneself [G].
Experience and contemplation of reality and the truth, meaning, purpose, and wisdom, esp. if considered beyond scientific or rational understanding, inexplicable and indemonstrable.
Awareness of and attunement to another, immaterial or supernatural realm and its denizens (spirits, angels, ghosts, etc.); feeling their presence; using special techniques to perceive and interact with them (tarot, crystals, seances, etc.).
Opposite religion, dogma, rules, traditions; unstructured, irreverent, religious individualism.
Individual or private religious practice; prayer, worship, or meditation; relationship-deepening or connection-fostering personal rituals and devotional acts.
Repeating this grouping and narrowing to unearth anything deeper, they found that all ten clusters fall somewhere on three scales, which they call the dimensions of spirituality [H]:
I. Vertical vs. horizontal general terminology for transcendence [I]
II. Theistic vs. non-theistic specific terminology for transcendence
III. Individual vs. institutional mediation of transcendence
Finally they found that this analysis confirms their larger research team’s theoretically-grounded hypothesis that the root definition of spirituality is:
Individually-mediated, experience-directed religion, esp. among religious nones [J]: i.e., religion oriented away from mediation through institutions, dependence on organizational structures and absolute authority claims, toward the immediacy of firsthand experience, emancipatory independence and value relative to the individual [K].
All this verbiage cries out for explanation. But for the moment let’s step back to marvel at our good luck in having research like this. Its conclusions about the meaning of spirituality—at least the ten concept clusters and three scales—came through something nearer experimentation in a laboratory than reflection in an armchair. In philosophical jargon, this argus-eyed approach was a posteriori rather than a priori; in anthropological jargon, emic rather than etic. As a result, we better see wrinkles in the meaning of spirituality, including internal inconsistencies that a cyclopic definitional scheme might smooth over, e.g., as a part of religion (2) and as opposite it (10).
For starters then, we see that this definition of spirituality is tripartite: “individually-mediated”, “experience-directed”, and “religion”. Since spirituality here is a kind of religion, religion is the core concept, so we’ll take it from there. That will lead to the three scales of spirituality, ‘vertical vs. horizontal terminology’ (I), ‘theistic vs. non-theistic terminology’ (II), and ‘individual vs. institutional mediation’ (III). “Individually-mediated” will come along with the third. That leaves only “experience-directed” and closing remarks. Now where did I put my patience for dry exposition…?
First “religion”: For these researchers religion is any socially constructed system of symbols and rituals that interprets transcendent experience in ultimate terms [L]. This applies even to people who don’t consider themselves religious, including those who would self-describe as “spiritual but not religious”. But precisely what do transcendent experience and ultimate mean here? Transcendent experience—or simply ‘transcendence’—is any experience of “distance and departure from [the] everyday”, above and beyond the boundaries of ordinary experience [M]. More than just extraordinary, it exceeds our expectations of life and the world as we know it, e.g., by excelling in its class or defying classification (almost) altogether: the weirder and more wonderful, the more transcendent. So transcendent experience is often what we would traditionally call ‘religious experience’, but they make the distinction that it only counts as religious if on interpretation it’s cast in ultimate terms. Turning to “ultimate” then, here this is really elliptical for ‘of ultimate concern or importance to a person’. The ultimate is what “gives depth, direction and unity to all other concerns”, as theologian Paul Tillich puts it, from whom they draw the idea—e.g., our answers to basic questions about the world and our place in it [N]. Bringing these ideas together, a merely transcendent experience becomes genuinely religious when we see in it something all-important to us, and it becomes full-fledged religion when we build around it a symbolic-ritualistic framework of beliefs and practices. One’s framework needn’t be grand or widely-shared: it might be a slim private affair, like a single-person tent that’s as easy to pitch as to pack up and carry. Likewise a person can bring to transcendent experience a religious interpretive lens, or craft one afterwards just to come to terms with it. Either priority fits.
Before we move on to the next concept, let’s clear up some potentially misleading language in this definition of religion. To start, “socially constructed” here doesn’t necessarily mean ‘made up’, ‘fake’, or otherwise unreal. It just means that if nobody thought or talked about religion, there wouldn’t be any: its existence depends on its exercise. Likewise the claim that it “interprets” transcendent experience doesn’t imply that it therefore misinterprets it. Indeed the opposite may well be true. Even elementary sense perception needs interpretation to become understanding: naked experience unclothed by categories or classifications is at best a muddle. Round an unfamiliar corner in the city, or come out without warning on an open expanse in the country, and the sudden change of scenery produces a visual experience of undifferentiated shape and color; it is all optical nonsense until reason and intellect, as it were, catch up and organize the sense data into a coherent picture. Only when interpretation goes to work does one finally know what she’s looking at. Although we may at times be apt to make meaning where there is none, often enough we find it right where it belongs. So this definition doesn’t debunk religion; it merely says that, assuming it has this experiential basis, it’s imbued with the meaning we give it, veracious or fallacious.
The terminology of our interpretation, i.e., our way of using terms for and ideas about the ultimate, admits of a couple distinctions. These are also the first and second scales of spirituality above (I-II): vertical-horizontal, and within that, theistic and non-theistic [O]. The former measures the metaphysical distance transcendent experience crosses. The latter measures the unity and personality and sometimes also the clarity of the religious object. Vertical terminology characteristically evokes what we would traditionally call the transcendent, e.g., God and heaven—generally, the otherworldly. It aims at things other than and over this world and oneself in it. Horizontal terminology tends the other way, toward the traditionally immanent, e.g., nature and humanity. Leaning this-worldly, it aims at things in and of the world and the world itself. Notably, whereas the vertical is often explicitly religious, the horizontal’s religiosity can even escape the notice of the person professing it [P]. Within this distinction is that between theistic and non-theistic terminology. The apparent presence of God, gods, and god-like beings or forces maps an important area of vertically transcendent experience, as their apparent absence does an antipodean area of horizontally transcendent experience. But this also sheds light on terminology between vertical and horizontal. This family of views sees the ultimate as in neither our world nor a world beyond, but rather in “a world behind”, i.e., behind and beneath the world’s surface appearances [Q]. Typically this is non-theistic, e.g., about ghosts, spirits, energies, or forces.
A gloss of the third scale (III) now moves into view, and with it “individually-mediated”: Individual-institutional mediation of transcendence measures the directness or indirectness of a person’s access to transcendent experience, i.e., the extent and power of the gatekeepers standing in her way. As these researchers put it, “Institutionalized mediation says that ... there is no other way to transcendence than through the church, sacraments, and priests; that there is no other truth than the sanctioned teachings; and that the ultimate concern is determined by the institution and its tradition” [R]. By contrast, and often in vociferous reply, individual-mediation says, “there is no or very little mediation of transcendence, but rather the experiential immediacy of the individual; there are no claims of absoluteness, but the individualistic evidence of experience; there is no or very little organization or structure" [S]. In this way, against so-called organized religion’s usual mediation by institutions, esp. hierarchical structures operating them, spirituality favors an unpatrolled, gates-wide-open setup. Yet it doesn’t follow from such independence that spirituality is therefore a lonely pursuit—though “flight of the alone to the Alone”, i.e., hermetic mysticism, is surely right at home here too [T]. We’re able to have experiences with others, just not for them, so it can be equally possible to pursue direct experience of transcendence with others as by oneself.
Lastly, “experience-directed”: This means that, whereas transcendent experience might play no ongoing role in a religion’s usual exercise, e.g., as none other than an oft-remembered historical event, in spirituality it takes the lead. Ritual, symbol, etc., become at best aids to pursuit of transcendence, but at worst impediments. Therefore spirituality in its purest, i.e., barest, form may focus on such experience exclusively; and since “directed” here means both ‘directed to’ and ‘directed by’, the religious ideal may resemble an upward spiral of being led from transcendence to transcendence by transcendence. Still this isn’t to say that spirituality takes direction from nothing else, or that by focusing on transcendence even exclusively, the rest of familiar religion vanishes. A spiritual purist may disavow religious side projects in pursuit of her wonted mode of transcendence, or she may simply subordinate them to it as various means to this end. Yet while she might style herself as therefore unencumbered in her pursuit of raw experience, her religious interpretive lens remains ever-present, however unwittingly. It must, or else her chase after the spiritual would be of the wild-goose variety. E.g., someone undergoing a crisis of faith might discover to her horror that she’s no longer able to participate in her favorite religious exercises, since the vinegar of doubt now spoils every well from which she used to draw joy. Since her experiences can’t mean what they used to, they can’t be what they used to either.
Let’s sum up with a little illustration. Consider two spiritual foils: an atheistic nature lover and a Catholic anchoress. The former’s approach is thoroughly horizontal and non-theistic. She takes regular hikes to feast on natural beauty and sublimity, but deems it all mere serendipity in a chaotic cosmos. She’s a proficient adventurer, as comfortable with friends as without. She might not spurn a Beatrice to guide her through some earthly paradise, but her trust would be that when she came face to facelessness with wild abundance, her delight would need no shepherd. The abundance itself would call out of her everything necessary for its appreciation. In this way she mediates her own pursuit of these experiences. Their ultimacy for her comes not only from her denial of the otherworldly, but also from her judgment that nature is intrinsically, i.e., ultimately, good—or at least that the immersion in it that stirs and sustains her is. Conversely, the latter’s approach is thoroughly theistic and vertical, and manifestly ultimate. She spends her life in solitary prayer. Sometimes during contemplation of the divine she has ecstatic visions or auditions. But whatever happens, her daily goal is total abandonment to God. Even with the individuality of her self-mediating lifestyle, it retains considerable institutionality: she holds fast to piety towards the Church, its orthodoxy and orthopraxy. Yet despite this rigid adherence to ecclesiastical authority—or, she would say, because of it—she lives as a recluse whose sole aim is attaining union with Him Whom she worships as Transcendence Itself. Both, in their disparate ways, exemplify individually-mediated, experience-directed religion.
Here we are then! We’ve gained at long last the real meaning of spirituality, right? Well, maybe: We have to trust not only that German and American ideas of spirituality are the same as everybody else’s, but also that the notions of these particular people are the same as those of other Germans and Americans [U]. Moreover we must take for granted that what they put in Tweet-sized writing when a survey bluntly asked them their opinion is the same as what they think all the time, even when they’re not thinking about what they think [V]. Still, science has yet to master the art of mind-reading. So even if this isn’t the definitive definition of ‘spirituality’, it’s got my money for our best guess yet.
In Rachel’s post, she’s wise to the width of variety, saying, “Spirituality has been defined and redefined throughout human history, and it is now my intention to shout yet another definition to the abyss.” For her, its definition is: “the practice of deriving any amount of meaning from any event, thought, or activity.” Looking back at the ten concept clusters above, this bears a striking resemblance to parts of (3) and (7). She’s in good company. Clinicians and care professionals typically promote this conception: e.g., psychological measures of wellbeing that account for spirituality usually cast it in these terms, viz., purpose and meaning. Though some have wondered whether this confuses spirituality with a part of mental health, the findings above resoundingly vindicate it as an important part of the spiritual puzzle [W]. If they also solve that puzzle, hopefully they do so more in the spirit of Ariadne’s clue out of the Labyrinth than Alexander’s sword through the Knot. At the very least, such research is a waypoint on the path to understanding. If none of it jibes with your own sense of spirituality, all the better! We all have much to learn, and outliers—you whose lives are led under stones yet unturned by science—have much to teach us. So it’s still worth asking:
What does spirituality mean to you? Please share your definition in the comments.
Unpack what spirituality uniquely means to you through the ancient practice of spiritual direction. Schedule a free online session through the link in the comments.
Endnotes:
A. Eisenmann, Clemens, et al. “Dimensions of ‘Spirituality’: The Semantics of Subjective Definitions.” Semantics and Psychology of Spirituality: A Cross-Cultural Analysis, ed. by Heinz Streib & Ralph W. Hood, Jr., Springer, 2016, p. 125.
B. Op. cit., pp. 129-35. Before grouping them together and narrowing them down, these were the forty-four recurring categories they found:
Faith and belief, believing, belief system
Connectedness, relationship, in touch with, harmony
Individual, personal, private, subjective
Everyday, daily life, way of life, to act
Values, (higher) order, morals, karma
God (also the Father, Lord, Creator, the Divine)
Unspecified transcendent: something bigger, beyond, greater; “may be”
Feeling, emotion, intuition, empathy, heart, love
Within, self, higher Self, inner core, essence
Seeking, path, journey, reaching, to evolve, to achieve
Awareness, consciousness, sense of, feeling a presence, in tune
Supernatural, non-material, cannot see or touch
Transcendental higher power/forces/energy
Thinking about, to understand, to reflect, contemplation
Relation to the world, nature, environment, universe
Cannot be explained or scientifically proven, beyond understanding
Higher/beyond/greater/other than oneself/humans/this life
Relation to others, community, all humanity, humankind
Experience, sensory perception
Spirit and mind
Rest (i.e., the remainder of uncategorized responses)
Practices, to practice (one’s faith), music, prayer, worship, meditation
(Inner) peace, enlightenment and other attitudes and states of being
Guided, destined, controlled, saved, healed, dependent
Part of religion, Christian, biblical
All-connectedness, part of something bigger
Meaning and (higher) purpose, questions and answers
Transcendental absolute, “unity of existence,” omnipresent and indiscriminate, the one
Otherworldly, beyond this world, “spiritual” realms
Acknowledge, to recognize, to accept, to realize
Vague, unclear, unsure; bullshit, fantasy, hocus pocus
Without rules, tradition, norms, dogma, structure, directions
Something else than religion, without worship
Energies, vital principle, ghosts, angels and demons, spirits
The truth, true nature of existence, wisdom, reality
Jesus, Christ, Holy Spirit, the Son
Greater being/person, deities, gods
Soul
Universal category, basis of mankind
Esoteric, occultism, spiritism, mystic, magic
Deal with, interest in, engagement, focus
Part and beyond religion
Obedience and devotion
Life after death
C. I borrow the notion of concept clusters from passing familiarity with Ludwig Wittgenstein’s philosophy of language.
D. Op. cit., pp. 137-8. Paraphrase.
E. Whereas spirituality conceived of as a part of religion (2) fits nicely with its mostly premodern history as just that, the conception immediately following of it as a journey to one’s true inner self (3) sits well with modern social movements toward individualism and subjectivism: op. cit., p. 146.
F. Spirituality conceived of as living out one’s values may partly underlie the self-identification “spiritual but not religious”. Here ’spirituality’ primarily indicates an ethical concern that being merely ‘religious’ doesn’t—not just talking the talk but walking the walk: ibid. More clearly this identification involves some combination of clusters with (9).
G. The much-maligned vagueness of spirituality’s meaning may come from this conception of it as a sense of something indefinite and beyond: ibid. N.b., philosophers of language usually distinguish vagueness, i.e., unclear meaning due to imprecise extension over borderline cases, from ambiguity, i.e., unclear meaning due to polysemy—having multiple meanings.
H. Op. cit., p. 143. Paraphrase. Their dimensions are: (I) mystical vs. humanistic transcending; (II) theistic vs. non-theistic transcendence; and (III) individual “lived” experience vs. dogmatism.
I. I use “transcendence” and “transcendent experience” interchangeably throughout this post. Though there may be other forms of transcendence than experience, talk of ‘transcendence’ as an event and not, e.g., as a divine attribute, usually means ‘experience of transcendence’, i.e., ‘transcendent experience’.
J. Religious nones get their name from answering “none” when demographic polls ask their religious affiliation. In other words, they are the religiously unaffiliated. Cf. unchurched.
K. Op. cit., p. 148. Paraphrase. Their definition is privatized experience-oriented religion, following research by other members of their team: Streib, Heinz, & Hood, Jr., Ralph W. “Understanding ‘Spirituality’: Conceptual Considerations.” Semantics and Psychology of Spirituality: A Cross-Cultural Analysis, ed. by Heinz Streib & Ralph W. Hood, Jr., Springer, 2016, p. 9. Ensuing fns. refer to that ch.
L. Op. cit., p. 11. Cf. Emile Durkheim’s definition of religion, popular esp. in U.S. religious studies depts.: “a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden—beliefs and practices which unite into one single moral community called a Church, all those who adhere to them”: The Elementary Forms of Religious Life. trans. Carol Cosman, Oxford Univ. Press, 2001, p. 46.
M. Op. cit., p. 10.
N. Op. cit., p. 11.
O. Strictly speaking, non-theistic terminology could be either vertical or horizontal, while theistic terminology is by definition vertical. As it happens however, or at least according to this research, our thinking about spirituality typically separates out the theistic and vertical from the non-theistic and horizontal.
P. Op. cit., p. 12.
Q. Ibid.
R. Op. cit. p. 14.
S. Ibid. They also mention here sectarian middle mediation “through a prophetic and charismatic person”.
T. Famous last words of the Neoplatonic classic: Plotinus. Enneads. VI.9.11. trans. Andrew Louth, qtd. in The Origins of the Christian Mystical Tradition: From Plato to Denys, Oxford Univ. Press, 1981, p. 51.
U. Cf. WEIRD bias (Western, educated, industrialized, rich, and democratic), an ongoing problem for representative sampling: Henrich, Joseph, Heine, Steven J., & Norenzayan, Ara. “The weirdest people in the world?” Behavioral and Brain Sciences, 33, 2-3, 2010, 61–83. In fact there were some statistically significant differences between German and American responses. American definitions of spirituality were more Christian or otherwise traditionally religious, mentioning Jesus and the Holy Spirit much more, but God only a little more, presumably because theism goes beyond Christianity. Still, when they did mention God, it was more often in Christian terms of a personal and sovereign lord. Likewise they mentioned faith and belief much more often, and this was more often faith or belief in something beyond, higher power(s), god(s), or God (5). Their notions of spiritual power were also further outside and over themselves, as in talk of guidance or obedience. By contrast, German definitions of spirituality were warier of dogma and authority, whether religious orthodoxy or scientific consensus. They mentioned experience, as opposed to belief, more often, and were generally more esoteric, occult, and magical in their terminology, talking of the otherworldly in more universal but impersonal or abstract terms. They were also more critical of spirituality, more often complaining of its vagueness or even dismissing it as bovine fecal material. Still, despite all this, the researchers noted that American and German definitions were much, much more alike than different. These differences should therefore be understood as differences in emphasis, not substance. Their considerable overlap, striking in itself, forms the basis of the ten concept clusters and the three scales.
V. We must also assume that the scientific method deserves our confidence, and that the concept of spirituality, if not spirituality itself, is amenable to investigation by it. Other assumptions include those about word meaning, natural kinds, and other hot topics of debate in the philosophy of language and science—all of which would take us far afield of the present discussion. May curious readers experience transcendence of this post!
W. Eisenmann, Clemens, et al., p. 147.
Text
Summary of Notes
What’s expected?
Requirements: filled-in and signed cover page uploaded to Turnitin; attendance; final presentation uploaded to your Tumblr blog.
Learning Outcomes On successful completion of this paper students will be able to:
1. Demonstrate knowledge of fundamental interaction design principles and concepts.
2. Analyse digital user interface (UI) problems, then undertake user research and evaluation.
3. Explore, experiment and analyse a range of creative ideas and concepts.
4. Prototype, further analyse, refine and communicate concepts.
5. Reflect on design processes and learning.
Siobhan’s Notes
Week 1.2
First assignment is an infographic of what you've learnt.
Taxonomy/diagram/infographic
Put it all together in the last week
Section 2
making - based on systems - grids, typography etc.
design system (music app)
do it with prototyping software
hand in design system, asset library, all the icons, all stages, plan for colours, typography, artwork
Taxonomy (general) is the practice and science of classification of things or concepts, including the principles that underlie such classification. Originally applied only to biological classification, taxonomy has developed into a synonym for classification (see Classification (general theory)).
Design for finding
Design for understanding
The hamburger menu, or the hamburger icon, is the button in websites and apps that typically opens up into a side menu or navigation drawer. It was created by interaction designer Norm Cox for the Xerox Star personal workstation in 1981 as an easy way to communicate to users that the button contained a list of items.
The Power of Design Jams (And How They Can Help Your Startup)
we work for findability.
classify things
basically organise our places of information - we are placemakers
we make online environments structured in such a way so that people understand
the understanding is driven from context, content and user.
Design for Understanding
Some organising principles that carry over to information environments from physical environments include: structure and order, rhythm, typologies and modularity and extensibility.
We experience information environments as places where we go to transact, learn and connect with other people, among many other activities.
How people make sense of where they are and what they can do there
Placemaking in the physical world and in information environments
Design for Finding.
Information needs are like fishing: sometimes people know exactly what they're looking for, but often they're casting a wider net.
Everything (exhaustive research)
A few Good things (exploratory seeking)
The right-thing (known item seeking)
Need it again (re-finding)
IA starts with people and the reason they use your product or service: they have an information need.
Three circles of information architecture
We need to understand the business goals behind the project and the resources available for design and implementation.
Context | Business goals, funding, politics, culture, technology, resources and constraints. We’re looking to understand goals, budgets, schedules, technology infrastructure, human resources, corporate culture, and politics.
Content | Document/data types, content objects, volume, existing structure. We’re looking to understand “the stuff in the information environment”.
Users | Audience, tasks, needs, information-seeking behaviour, experience. We’re looking to understand the people – real, living human beings – who will be using the information environment.
Finding and managing
The organisations and people who manage information are important, too. An information architecture must balance the needs of users with the goals of the business. Efficient content management and clear policies and procedures are essential.
Findability is a critical success factor for overall usability. If users can't find what they need through some combination of browsing, searching, and asking, then the system fails. But designing for the needs of the user isn't enough.
Structuring, organising and labelling.
Labelling means figuring out what to call those categories and the navigation structure elements that lead to them.
Organising involves grouping those components into meaningful and distinctive categories, creating the right contexts for users to understand the environment they are in and what they're looking at.
Structuring involves determining the appropriate levels of granularity for the information "atoms" in your product or service and deciding how to relate them to one another.
Information
We are concerned with information of all shapes and sizes: websites, documents, software applications, images, and more. We are also concerned with metadata: terms used to describe and represent content objects such as documents, people, processes, and organisations.
Knowledge managers develop tools, processes, and incentives to encourage people to share that stuff. Information exists in the messy middle. With information systems, there's often no single "right" answer to a given question.
We use the term "information" to distinguish information architecture from data and knowledge management. Data is facts and figure. Relational databases are highly structured and produce specific answers to specific questions. Knowledge is the stuff in people's head.
Information Architecture
The art and science of shaping information products and experiences to support usability, findability, and understanding.
The synthesis of organisation, labelling, search, and navigation systems within digital, physical, and cross-channel ecosystems.
The structural design of shared information environments.
It does this by asking the designer to think about problems through two important perspectives: that our products and services are perceived as places made of information, and that they function as ecosystems that can be designed for maximum effectiveness.
Information architecture is focused on making information findable and understandable. Because of this, it is uniquely well suited to address these issues.
This has had two important effects in our time: information is more abundant than ever before, and we have more ways of interacting with it than ever before.
Historically, information has shown a tendency to dematerialise, going from having a one-to-one relationship with its containers to being completely detached from its containers.
Week 2.1
The Anatomy of Information Architecture
Organisation systems present the site’s information to us in a variety of ways, such as by content categories, or tailored to specific audiences.
Navigation systems help users move through the content, such as with the custom organisation of the individual drop-down menus in the main navigation bar.
the only thing is sometimes organising info can hide info.
Search systems allow users to search the content; when the user starts typing in the site’s search bar, a list of suggestions is shown with potential matches for the user’s search term.
Labelling systems describe categories, options, and links in language that (hopefully) is meaningful to users; you will see examples throughout the page.
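As a toy illustration of the search-suggestion behaviour just described, here is a minimal prefix-matching sketch. The catalogue and function are hypothetical, not any particular site's implementation:

```python
# Minimal sketch of a search-suggestion system: as the user types,
# suggest content labels that start with (or, failing that, contain)
# the typed prefix. The catalogue below is made up for illustration.

CATALOGUE = [
    "Arthritis", "Card sorting", "Contextual links",
    "Navigation systems", "Navigation drawer", "Labelling systems",
]

def suggest(prefix, catalogue=CATALOGUE, limit=5):
    """Return up to `limit` labels matching the typed prefix, case-insensitively."""
    p = prefix.lower()
    starts = [c for c in catalogue if c.lower().startswith(p)]
    contains = [c for c in catalogue if p in c.lower() and c not in starts]
    return (starts + contains)[:limit]

print(suggest("nav"))  # → ['Navigation systems', 'Navigation drawer']
```

Real search systems add ranking, typo tolerance, and query logs on top, but the user-facing behaviour is the same: type a little, see candidate matches.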
Top-Down Information Architecture
In top-down info architecture, the environment's designers posit a structure that aims to answer users' questions such as these.
The form that the environment takes – its content, page layout, etc. - is designed and produced to support this structure that has been centrally defined “from above”.
Categories are used to group pages and applications throughout the site.
Labels systematically stand for the site’s content.
Navigation systems and a search system can be used to move through the site
Questions users will ask from a top-down perspective.
Where am I? I know what I am looking for; how do I search for it? How do I get around this site? What’s important and unique? What’s available on this site? What’s happening there? How do I engage with them via various other popular digital channels? How can I contact a human? What’s their address? How can I access my account?
Bottom-Up Information Architecture
Instead of being dictated from above, bottom-up info architecture is suggested by and inherent in the system's content (e.g., Netflix, as it's based on what you've been watching; Spotify is a mixture of top-down and bottom-up).
It’s important because users are increasingly likely to bypass your system’s top-down information architecture; instead, they’re using web-wide search tools like Google Search, clicking through ads, or following links from social media, and finding themselves deep in your site.
UNDERSTAND: Top-down vs Bottom Up - Initially I didn't really grasp what this meant
Week 2.2
DEFINE High-fidelity prototype?
Organising information then visualising information
Grid systems
Slides readable
Make sure you can communicate your research to your team and stakeholders
Fruit stall - hands on card game.
Assumptions VS. Facts
Week 3.1 | INRD | Card Sorting
Top-Down Information Architecture
LABELS systematically represent the site’s content.
We label things all the time.
Labelling is the most obvious way to show our organisation schemes across multiple systems and contexts.
We must try to design labels that speak the same language as our environment’s users, while also reflecting its content.
Textual labels are the most common type we encounter in our work; they include contextual links, headings, navigation system options, and index terms.
Iconic labels are less common, but the widespread adoption of devices with less screen real estate means that they are an important component of many information environments.
Designing labels is one of the most difficult aspects of information architecture.
TEXTUAL LABEL TYPES:
Contextual links
Hyperlinks to chunks of information on other pages or to other locations on the same page.
Headings
Labels that simply describe the content that follows them, just as print headings do.
Navigation system choices
Labels representing the options in navigation systems.
Index terms
Keywords, tags, and subject headings that represent content for searching or browsing.
Content, users, and context affect all aspects of an information architecture, and this is particularly true with labels. Any of the variables attached to users, content, and context can drag a label into the land of ambiguity.
Presentation
Similarly, consistent application of fonts, font sizes, colours, whitespace, and grouping can help visually reinforce the systematic nature of a group of labels.
Syntax
It's not uncommon to find the following mixed together:
Verb-based labels (e.g., “Grooming Your Dog”)
Noun-based labels (e.g., “Diets for Dogs”)
Question-based labels (e.g., “How do you paper train your dog?”)
Within a specific labelling system, consider choosing a single syntactical approach and sticking with it.
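This advice can even be checked mechanically. A rough sketch, assuming a deliberately crude heuristic (a first word ending in “-ing” marks a verb-based label); the function names and sample labels are my own:

```python
import re

def label_style(label):
    """Crudely classify a label's syntax: question, verb-based (gerund), or noun-based.

    This is a heuristic sketch only: a noun label like "Housing" would be
    misclassified as verb-based. Real audits need human review.
    """
    if label.strip().endswith("?"):
        return "question"
    first = re.split(r"\s+", label.strip())[0].lower()
    if first.endswith("ing"):
        return "verb"
    return "noun"

def mixed_syntax(labels):
    """Return True if a labelling system mixes more than one syntactical style."""
    return len({label_style(l) for l in labels}) > 1

labels = ["Grooming Your Dog", "Diets for Dogs", "How do you paper train your dog?"]
print(mixed_syntax(labels))  # True: gerund, noun, and question styles are mixed
```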
Granularity
Within a labelling system, it can be helpful to present labels that are roughly equal in their specificity. Exceptions (such as site indexes) aside, it’s confusing to encounter a set of labels that cover differing levels of granularity: for example, “Chinese restaurants,” “restaurants,” “taquerias,” “fast food franchises,” “Burger King.”
Comprehensiveness
People can be tripped up by noticeable gaps in a labelling system. Aside from improving consistency, a comprehensive scope also helps people do a better job of quickly scanning and inferring the environment’s content.
Audience
Consider the languages of your environment's major audiences. If each audience uses a very different terminology, you may have to develop a separate labelling system for each audience, even if these systems are describing the same content.
Open Card Sorts vs. Closed Card Sorts vs. Hybrid Card Sorts
Open card sorts allow participants to cluster labels for existing content into their own categories and then label those categories. (Clearly, card sorting is useful when designing organisation systems as well as labelling systems.)
Closed card sorts provide participants with existing categories and ask them to sort content into those categories. At the start of a closed card sort, you can ask users to explain what they think each category label represents and compare these definitions to your own.
Hybrid card sorts use elements of both.
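Open-sort results are commonly analysed by counting how often participants place two cards in the same group; high co-occurrence suggests the cards belong in one category. A minimal sketch with made-up participant data (the helper and card names are hypothetical):

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count, across all participants, how often each pair of cards shares a group.

    `sorts` is a list of participants; each participant is a list of groups;
    each group is a list of card names. Pairs are stored in sorted order so
    ("A", "B") and ("B", "A") count as the same pair.
    """
    counts = Counter()
    for participant in sorts:
        for group in participant:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Hypothetical data: two participants sorting four cards.
sorts = [
    [["Shipping", "Returns"], ["Sign in", "My account"]],
    [["Shipping", "Returns", "My account"], ["Sign in"]],
]
counts = co_occurrence(sorts)
print(counts[("Returns", "Shipping")])  # → 2: both participants grouped them together
```

The resulting matrix is often fed into clustering or a dendrogram to propose candidate categories.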
Week 3.2
Recap card sorting
Good research means asking the right questions, and choosing the right questions requires a conceptual framework of the broader environment.
We use our Content/Context/Users conceptual framework as the basis of our research.
Diagramming Information Architecture
Provide multiple ‘views’ of your information architecture.
Information environments are too complex to show all at once; a diagram that tries to be all things to all people is destined to fail.
Instead consider using a variety of techniques to display different aspects of the architecture.
No single view takes in the whole picture, but the combination of multiple diagrams might come close.
Develop those views for a specific audience and needs.
You might find that a visually stunning diagram is compelling to client prospects, therefore justifying its expense.
However, it probably requires too many resources to use in a production environment, where diagrams may change multiple times per day.
Whenever possible, determine what others need from your diagrams before creating them.
You may need very different diagrams for communicating “upstream” with stakeholders and executives than for communicating “downstream” with designers and developers.
Content components.
What constitutes a unit of content, and how those components should be grouped and sequenced.
Connections between content components
How content components are linked to enable actions such as navigating between them
Sitemaps show the relationships between information elements such as pages and other content components, and can be used to portray organisation, navigation, and labelling systems.
Both the diagram and the navigation system display the ‘shape’ of the information space in overview, functioning as a condensed map for site developers and users, respectively.
As you create sitemaps, it’s important to avoid getting locked into a particular type of layout. Instead, let form follow function.
Keeping sitemaps simple.
As a project moves from strategy to design to implementation, sitemaps become more utilitarian.
At this stage, they are focused more on communicating the information architecture to others involved in design and development, and less on strategy and product.
Bring detail to your sitemaps.
As you move deeper into the implementation stage, your focus naturally shifts from external to internal.
Rather than communicating high-level architectural concepts to the client, your job is now to communicate detailed organisation, labelling, and navigation decisions to your colleagues on the development team.
Modularise your sitemap.
The top-level sitemap links to subsidiary sitemaps, and so on. These diagrams are tied together through a scheme of unique IDs.
UNDERSTAND upstream and downstream communication
Content Components
How they are being grouped or sequenced - their relationships to each other
Let form follow function - don't get locked down into a particular type of layout. Don't force it into a certain shape
Sitemaps become more utilitarian as a project moves from design to implementation.
You may need a second sitemap to explain smaller, itemised components from the bigger picture of the first sitemap.
Top-level sitemaps link to subsidiary sitemaps, and so on. These diagrams are tied together through a scheme of unique IDs (e.g., colour, or how components relate to each other) to keep people oriented and make understanding easy. For example, if certain colours refer to certain levels, then when you zoom in on the sitemap you know exactly where you are. Keep it organised and easy to understand.
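The unique-ID scheme tying top-level and subsidiary sitemaps together can be modelled as a simple tree, where a subsidiary sitemap is just the subtree rooted at one ID. A sketch, in which the ID convention ("1.0", "1.1", …) and page titles are hypothetical:

```python
# Minimal sketch of a modular sitemap: each node has a unique ID, and a
# subsidiary sitemap is the subtree rooted at one of those IDs.
# The "1.0" / "1.1" numbering is an illustrative convention, not a standard.

sitemap = {
    "1.0": {"title": "Home", "children": ["1.1", "1.2"]},
    "1.1": {"title": "Products", "children": ["1.1.1"]},
    "1.1.1": {"title": "Product detail", "children": []},
    "1.2": {"title": "About", "children": []},
}

def subsidiary(sitemap, root_id):
    """Extract the subsidiary sitemap rooted at `root_id` (by unique ID)."""
    sub = {}
    stack = [root_id]
    while stack:
        node_id = stack.pop()
        sub[node_id] = sitemap[node_id]
        stack.extend(sitemap[node_id]["children"])
    return sub

print(sorted(subsidiary(sitemap, "1.1")))  # → ['1.1', '1.1.1']
```

Because every node carries a stable ID, a detailed subsidiary diagram can change daily without renumbering the top-level map.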
interaction-design.org
informationisbeautiful.net
Week 4.1
Wireframes
Consistency is key, especially when presenting multiple wireframes.
More importantly, colleagues take wireframes quite literally, so consistency makes their design and production work go more smoothly.
Callouts – small notes placed around and over your wireframes – are an effective way to provide details about the functionality of page elements. Be sure to leave room for them at the sides and top of your wireframes.
Like any other deliverable, wireframes should be usable and professionally developed. So, tie your collection of wireframes together with page numbers, page titles, project titles, and last revision dates.
When more than one team member is creating a project’s wireframes, be sure to establish procedures for developing, sharing, and maintaining common templates and components.
Schedule time in your project plan for synchronising the team’s wireframes to ensure consistent appearance, and for confirming that these discrete documents do indeed fit together functionally.
In this design phase, the emphasis of the project moves from process to deliverables – it's where the information architecture starts to become manifest.
These deliverables aren’t the whole story – process is as important during this phase as it is during research and strategy.
Make sure your team are all working on up-to-date versions, and title your work clearly so it's obvious which is which. Establish procedures for developing, sharing and maintaining common templates and components. It's about how to design the design. You will never work by yourself in the real world; you will always work in a team.
Schedule time in your project plan for synchronising the team's wireframes to ensure consistent appearance, and for confirming that these discrete documents do indeed fit together functionally.
Make sure there is consistency in your work.
In your wireframe you can show your mistakes or opportunities for improvement in your sitemap. The process is still a way to figure things out.
Information Architecture diagrams define content components and the connections between them.
Sitemaps show the relationships between information elements such as pages and other content components, and can be used to portray organisation, navigation, and labelling systems.
Wireframes depict how an individual page or template should look from an architectural perspective. (Wireframes are working documents.)
A2 diagram/infographic of everything you've learnt so far.
Peer session with post it notes to comment on people's work.
You learn by giving feedback.
A2 poster
Choose a visual language; decide how you want it to look.
Wireframes - how are you going to link your map to the wireframe.
Wed: wireframe to a prototype on Wednesday
Do this in Xd or Figma?
A2 examples of other's work will be shown.
Wireframes
-
Grid systems: the easiest way to work out the grid system for a website you're looking at is that it's almost never touching content; there will always be margins, gutters, etc. We go deeper into grid systems in Project 2, but it's important to keep in mind here.
Each website will have a grid system; examine them and get your head around them.
Never try and make a grid system where one row interacts with the other.
Text
Iris Publishers-Open Access Journal of Rheumatology & Arthritis Research
Authored by Hicks A*
Introduction
Rheumatic diseases (RDs) in aged individuals facing housing instability have been generally overlooked [1]. Most homeless research in recent years has mainly focused on substance abuse and mental health, and public health interventions are generally directed towards youths and young adults [2]. However, available data suggest that individuals facing multiple social burdens, termed intersectional vulnerability, have worse health outcomes when compared to the at-risk population. We argue that a lack of more targeted studies on the aged homeless might contribute to the invisibility surrounding this particular sub-population. Thus, the goal of this study is to generate knowledge and increase awareness of the health of individuals living at the intersection of aging and homelessness. More specifically, the research intends to explore demographic data, clinical patterns, and outcomes of rheumatic disorders in the North American homeless population. We also analyze what model could be derived from this interaction to explain increased mortality in the aged homeless in the US and Canada. Findings from this work will shed new light on the pathways to premature death and provide grounds for a renewed interest in health disparities in line with the Healthy People 2020 Agenda [3].
Method
This work uses a secondary data analysis design. Relevant articles were retrieved from different online databases and screened for both qualitative and quantitative data on age, homelessness, and rheumatic disorders.
Search strategy
We used both an electronic and a manual search. A first line of search was conducted on PubMed using the Boolean operators “AND” and “OR” with different MeSH terms: “Homeless person” and a series of rheumatic diseases selected based on their prevalence in the general US and Canadian populations. Hence, the following MeSH terms were used: “arthritis”; “osteoarthritis”; “gout”; “lupus”; “spondylarthritis”. We also included “chronic pain” as a search term, as most RDs include pain among their clinical symptoms. A second line of search was conducted on the Web of Science database using the same method. A manual search then followed, using a snowballing technique earlier described by Claes. The technique consists of screening the references of articles retrieved electronically for relevant publications until a point of saturation is reached, where no new articles are found [4].
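A Boolean search like the one described can be composed programmatically before pasting it into PubMed's search box. A small sketch using the terms listed in the text; the helper function is my own, and the output is a plain query string rather than a call to any PubMed API:

```python
def build_query(population, conditions):
    """Combine a population term with OR-joined condition terms, Boolean-style."""
    ors = " OR ".join(f'"{c}"' for c in conditions)
    return f'"{population}" AND ({ors})'

# Search terms taken from the study's search strategy.
terms = ["arthritis", "osteoarthritis", "gout", "lupus",
         "spondylarthritis", "chronic pain"]
query = build_query("Homeless person", terms)
print(query)
# "Homeless person" AND ("arthritis" OR "osteoarthritis" OR "gout" OR
#  "lupus" OR "spondylarthritis" OR "chronic pain")
```

Keeping the query in code makes the search strategy reproducible: rerunning the review with an added term is a one-line change.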
Inclusion criteria
Abstracts of the articles were screened for eligibility. To be included in the analysis, extracted data had to satisfy a PICO framework (Schardt C, et al. [5]), where the study Population (P) included at least older homeless individuals; the Intervention (I) was any type of clinical or public health program targeting rheumatologic disorders; the Comparison (C) was a pre- or post-intervention analysis within the aged homeless population or with other populations; and the Outcomes (O) had to include the results of the intervention and their impact on homelessness. While our study population was the aged homeless in the United States and Canada, studies conducted elsewhere were included, but only for the discussion portion of the research. Articles were not tested for quality, and no publication date restriction was imposed.
Descriptive Data of the Aged Homeless in The US
Definition of aged homeless
If aged homelessness appears intuitively understandable, the practical definition of the concept is much more nuanced. A first complicating factor is the distinction that should be made between the trajectories of aged individuals into homelessness. Two distinct trajectories are described: the first group includes individuals who have been without secure housing, living in shelters or on the streets for a significant time, and have now reached a certain age. The second group instead comprises individuals who experience homelessness for the first time at an advanced age [6]. That distinction in trajectory is important from an interventional standpoint, since it represents a different social dynamic with correspondingly different health outcomes. Another element adding to the definitional complexity comes from the notion of age itself. Indeed, it is difficult to set a precise definition of advanced age when dealing with a population whose life expectancy is somewhere between 42 and 52 [7]. The homeless population experiences an earlier onset of age-related disorders compared to the general population [8]. Thus, the age threshold varies according to authors: Grenier and colleagues establish it at 50, rather than the standard 65, and see it as the most appropriate marker of old age in a homeless population [9]. But there is a general consensus among researchers that the physical age of the homeless runs about 10 to 20 years ahead of their chronological age [6].
Demographics
According to the 2018 Annual Homeless Assessment Report (AHAR), the Point-In-Time (PIT) count in January 2018 determined 552,830 people to be experiencing homelessness in the United States, making it the largest homeless population in Western countries, with an estimated number of over 3 million over the course of the year [10]. At the national level, the ethnoracial distribution of the aged homeless population reflects significant disparities in risk. Among both sheltered and unsheltered homeless, Caucasian Americans represent 47%, followed by African Americans at 40%, as per a 2019 PIT Report from HUD (https://files.hudexchange.info/reports/published/CoC_PopSub_NatlTerrDC_2019.pdf). There was also a strong male predominance, with 60% males versus 38% females, and less than 1% transgender people. Additionally, US service veterans compose approximately 6% of the homeless population. However, one factor that stood out from the different studies is the new age distribution in that group. Indeed, the aging of the homeless population follows the general trend of the US population, only at an accelerated pace. In the 1990s, only 11% of the homeless population was 50 years of age or older [11]. A decade later, that proportion rose to 30% [12], and it is now believed to be at 50% [13]. In Canada, the 2014 PIT count was estimated at 35,000. It is also estimated that between 13,000 and 33,000 people are affected by chronic homelessness [14]. If the increase in the homeless population is relatively slower when compared to the US trend, it is nonetheless concerning [14]. This rapid increase in the aged homeless population in both the US and Canada poses new challenges for health interventions and reflects a shift in their health needs.
Rheumatic Diseases in Aged Homeless
Arthritis
The American College of Rheumatology (ACR) defines arthritis as an inflammatory process affecting the osteoarticular system. It encompasses a broad category of diverse diseases with different etiological patterns, all of which affect the musculoskeletal system, either primarily or secondarily, on a chronic, sub-chronic, or acute basis. In 2011, a cross-sectional study in the Boston, MA area aimed to establish the prevalence of geriatric syndromes in older homeless adults. The study included 247 homeless adults aged 50–69 recruited from eight homeless shelters, and compared the prevalence of geriatric syndromes in this group to the prevalence reported for the general elderly adult population, drawing comparison data from the MOBILIZE Boston Study (MBS), the National Health and Nutrition Examination Survey (NHANES), and the National Health Interview Survey (NHIS). The prevalence of arthritis in aged homeless adults was 44.9%. This was similar to that reported for the general population in MBS (48%, p = 0.31), but the mean age of those suffering from arthritis differed between the two populations by 22 years (56.0 ± 4.8 vs. 78.1 ± 5.4, p < 0.001). The authors also found arthritis to be more prevalent in the homeless population than national averages: 44.9% vs. 35% (p < 0.001) for NHANES, and 44.9% vs. 38.6% (p < 0.04) for NHIS. Arthritis also presented more often as a comorbid condition with asthma or COPD and depression in the aged homeless than in the study's reference populations (p < 0.001). As a consequence, activities of daily living (ADL), instrumental activities of daily living (IADL), and mobility were significantly more impaired in the study group than in the MBS (respectively, 30 vs. 22, p = 0.004) [12].
Similar findings are reported by Gutman and coworkers in their exploratory research in New York [14]. The study consisted of home safety visits to formerly homeless adults, aged 40–64 years, who were residing in supportive housing at the time of the study. Arthritis was the second most common condition reported by the surveyed group (52%), behind only coronary artery disease (60%); its prevalence equaled that of diabetes (52%) and exceeded those of high cholesterol (44%) and hypertension (44%). That homelessness is an important contributing factor to arthritis development is also corroborated by Jutkowitz et al. in a study of veterans admitted to community nursing homes with a prior history of homelessness [15]. The authors reported that formerly homeless veterans admitted to the facility were slightly more at risk of developing rheumatic diseases than their stably housed counterparts, with an adjusted relative risk of 1.1 (98% CI).
Clinical presentations
Another aspect that has been investigated is the clinical presentation of arthritis in the aged homeless subpopulation. Chronic pain in general, and chronic articular pain in particular, are constant companions of aged homelessness. A Canadian study by Hwang and collaborators investigated chronic pain in the aged homeless using a cross-sectional design based on the Chronic Pain Grade questionnaire. The onset of pain occurred as early as the 30s, and pain localization involved nearly every joint of the body, with a significantly higher frequency for the knees (36.4%) and equal frequencies for the back and shoulders (18.2% each). Grade IV pain was also common, especially for the back (63%). Notably, injuries, along with arthritis, were the principal causes of that chronic pain [15]. Data from the UK provide further support for Hwang's findings. Fisher et al. evaluated chronic pain in the UK aged homeless half a decade ago: the lower limbs were the most common site of pain (51.4%), and polyarticular involvement was common (27.9%). In the US, a recent (2017) study in California by Landefeld et al., based on interviews of 350 homeless individuals aged 50 and older, concluded that almost half of the study group (44.3%) experienced chronic pain from arthritis, varying from moderate to severe. As with the Canadian and UK findings, the authors reported the premature onset of symptoms, with some participants reporting pain dating back 10 years. In addition to the premature, polyarticular onset of pain typical of arthritis in the aged homeless, another particularity resides in the misleading clinical forms that arthritis takes in this vulnerable, comorbid population. Kristopher, Menachof, and Fathi report a case of scurvy in a homeless adult mimicking reactive arthritis [16]. Also prevalent in this debilitated population are secondary articular localizations of infectious diseases such as hepatitis C or TB.
Gonococcal and other forms of septic arthritis are regularly reported in the homeless population in general [17-19].
Falls
With arthritis, falls form a vicious cycle in the aged homeless. As reported in a study of aged homeless women, management of arthritic pain and access to quality treatment are very challenging: when individuals are unable to control when and where they rest, elevate affected limbs, apply heat or cold, or access specialist treatment or physical therapy, control of the diagnosed condition is very difficult [20]. The poor management of arthritic pain thus exposes this subpopulation to an increased risk of falls compared to their stably housed counterparts. This, in turn, significantly increases the risk of ankylosing spondylitis, which further deepens the rheumatic symptomatology [21]. A very recent study by Abbs and coworkers, part of the Health Outcomes in People Experiencing Homelessness in Older Middle agE (HOPE HOME) project, investigated risk factors for falls in the aged homeless. The study design consisted of a longitudinal multiethnic cohort of 350 aged homeless individuals in Oakland, California, with participant interviews every 6 months for 3 years [22, 23]. Adjusted odds ratios (AORs) were estimated for specific risk factors. Falls were more frequent in the aged homeless than in the general population, with risk increasing with age and for women (AOR 1.45, 95% CI 1.02–2.04). Whites were at higher risk (AOR 1.65, 95% CI 1.12–2.43) than African Americans. Arthritis was also identified as a risk factor for falls, along with pain, the social and physical environment, and physical assault in the previous 6 months [23]. Another recent study investigated the adequacy of the Permanent Supportive Housing (PSH) system for the aged homeless in regard to their increased risk of falls, and also examined the risk factors for falls in aged adults with prior experience of homelessness.
Interestingly, the authors found that although PSH has been credited with the decline in homelessness in the US since 2007, tenants have a high prevalence of falls and serious fall-related injuries. This risk was correlated with lifetime years of homelessness. Additional risk included functional impairment, frailty, and persistent pain [24].
Connective tissue disorders (CTDs)
The literature on CTDs in older homeless populations is scarce. Reasons for this scarcity relate to the nature of CTDs as a complex and polymorphic group of diseases that are difficult to diagnose and even more arduous to investigate in a public health setting and in the context of homelessness. Nonetheless, the white paper referenced earlier, which focuses on aged homeless women, acknowledges that CTDs such as lupus are not rare in that population [20]. Tran and Panush, from Los Angeles County General Hospital/Medical Center, reported a case of rheumatoid arthritis (RA) in an aged African American homeless patient two years ago. The authors described the physical, mental, and social suffering of the patient, sitting uncomfortably in a wheelchair, grimacing, with floridly active RA. While it is well established that patients diagnosed and treated early respond well to treatment, for an aged homeless patient, diagnosed on average 6 years younger and living on the street, the outcomes of RA, or of CTDs in general, are significantly different, and the progression of disease can be irreversible, as the authors acknowledge [25].
Psycho-social Interactions With the Care Personnel
Management of rheumatic disorders often requires prolonged follow-up, which creates a relationship between the care provider and the patient or the patient's family. In their study of the doctor-patient relationship in rheumatic diseases, Haugli, Strand, and Finset interviewed two groups of patients: one with a well-defined inflammatory condition (rheumatoid arthritis (RA) or ankylosing spondylitis) and one with non-inflammatory widespread chronic pain such as fibromyalgia. Both groups regarded it as centrally important "to be seen" and "to be believed" [26]. The authors explain the concept of visibility, "to be seen", as a positive affirmation of the patient as a human being. The credibility concept, "to be believed", relates to pain and suffering, including all those subjective symptoms that the care provider cannot observe or measure but that the patient nonetheless experiences. When the rheumatic patient stands at the intersection of old age, ethnicity, and homelessness, these two concepts of visibility and credibility resonate in a very particular manner. In the case reported by Tran and Panush discussed earlier, the authors share how they felt during the first encounter with that elderly African American homeless woman suffering from RA. The psycho-social interaction between aged homeless patients affected with RD and their care providers is fraught with complications. These interactions come with the frustration of seeing a patient in great suffering and needing ongoing care, while knowing that he or she is at great risk of being lost to follow-up [25]. At the same time, the prevalence of comorbidities in older homeless persons, including mental health issues, requires a tactful approach by healthcare providers [27]. The stigma of homelessness and mental health problems can be a barrier to effective communication between patients and providers.
Management of Rheumatic Diseases in Aged Homeless
The management of RD in aged homeless patients often lacks coherent evidence-based practice and policies. Findings from multiple studies in high-income countries reveal that although the homeless population makes increased use of the ER, there are in general no specific protocols for discharge and follow-up treatment. This lack of standardization is seen in community health interventions as well as in medical care for aged homeless patients affected with RD [28]. It is corroborated by the uncertainty raised by the authors of the RA case report as to whether appropriate follow-up of RA is even possible in a patient with no fixed domicile living in unpredictable conditions [25]. The lack of specific programs or medical approaches is one of the key factors furthering functional impairment in aged homeless individuals, as reported by Brown and coworkers [29].
Discussion
Most health interventions and programs for the homeless over recent decades fall into three major categories: mental health, substance abuse and addiction, and infectious diseases [30]. While such a triple focus is amply justified given the relative prevalence of these conditions, an insidious group of conditions is being overlooked: the necessity of focusing on RD in the homeless population is generally unrecognized. In a context where comorbidity is common, it is necessary from a policy standpoint to clearly identify interventional priorities to maximize health gains [9]. Given their multifactorial nature, RD greatly increase the suffering of homeless populations once they reach a certain age. One study of the aged homeless population in California illustrated that becoming unable to perform ADL is a significant fear for this population [31]. Figure 1 depicts the vicious cycle that deepens and compromises the pathway out of homelessness; patients remain at risk even years after exiting homelessness [24]. In other words, the cumulative long-term effects of homelessness on RD form a sequence that may be irreversible. This deterministic effect of homelessness on health in general has been described by formerly homeless women [32]. The implications for health policy in homeless populations are two-fold. Primary prevention of homelessness is always the best avenue, given its detrimental effect on health. The second option is secondary prevention, with a focus on high-impact diseases, which should at least include RD. Some researchers recommend lowering the age of eligibility for health benefits from 65 to 50 for individuals with a past or present experience of homelessness. Specific rheumatological programs combining access to medical care, falls prevention, and physical rehabilitation should become standardized protocols for the care of RD in the aged homeless.
Future research should focus on how to develop such policies and turn them into actual community health programs (Figure 1).
Conclusion
Exploratory studies can address emerging issues in a direct, concise, and timely manner [33, 34]. With the aging of the general population in the US and Canada, new research fields are quickly emerging. The homeless population is also exhibiting a shift in age, manifesting changing health needs. Our analysis aimed to explore the prevalence, clinical presentation, management, and outcomes of RD in aged homeless individuals. It appears that this category of diseases, although highly prevalent among individuals facing housing instability, is often neglected in health interventions. This overview of the body of evidence suggests that RD are part of a morbid cycle contributing to falls, pain, and physical impairment, and ultimately to premature death, in the aged homeless. More research should be done on individuals facing this intersectional type of vulnerability, here age and homelessness, to inform population health action.
Movement Statement
Over the last three weeks, each of the lecturers has presented us with a different definition of movement. This has been really helpful, as the broadness of 'movement' makes it easy to latch onto the first definition you come to. I spent the three weeks exploring the concept of movement and motion to find a direction that I thought was interesting and that had enough scope to be turned into a concept for studio.
To me, one of the core aspects and abilities of Creative Technologies is to translate a complex concept into something more accessible. The technology allows the project to exist and function, either in a physical or digital space where an audience can experience it.
The art side of Creative Tech informs the concept, and the conscious and subconscious effect the piece has. It develops the message, and can elicit emotion and thoughtfulness.
These two hemispheres do not work apart, but instead feed to and from each other in a constant feedback loop. As a concept informs the specifics, so too do the technicalities affect the aesthetic. This dichotomy is at the heart of the creative technology discipline, and a practitioner must have 'a foot in each'. Technology and art are hugely broad terms, and each project will enlist one or more previously discrete practices into each category. The outcome of this hybrid practice is greater than the sum of its parts, and not only the work but the workflow itself is unique and distinctive.
This is the aspect of Creative Technologies that I am most passionate about – creating moving and thoughtful outcomes that have a valid message or question, and communicating them in a way enabled by technology. The thoughtful and critically creative application of technology can create incredible, impossible experiences for audiences: a conceptual and ideological synaesthesia, a potent translation between once disparate pillars of human experience.
One of my main influences from the lectures was "Meeting the Universe Halfway" [2], a book by Karen Barad. The book is about agential realism: the idea that existence is secondary to intra-action, and that agents exist because of the action, not the other way around. This seems counterintuitive and farfetched, but if you take a second to ponder it you begin to look at the world differently.
I like to think there is a difference between ‘knowing’ and ‘believing’ – knowing being conscious thought and memory, and believing being the domain of the subconscious.
By focusing, it seems we can manually interface with our subconscious. You can convince yourself that no object exists without intra-action and that agential realism is the true mechanic of our universe, in a real and tangible sense. You can bridge the gap between knowing and believing, and for a moment your world changes because you've changed the way you perceive it. In that moment, things have a deeply different instinctive (subconscious / believed) feeling, because you are able to believe a theory like agential realism instead of just knowing and accepting it, and it becomes the truth that your brain runs on, the unconscious absolutes of your reality.
But the things experienced and intra-acted with exist differently now because they are being perceived and intra-acted with differently. The act of engaging with an agent creates and defines that agent; that is the core idea of agential realism, demonstrated by thinking about agential realism. I love the idea of being able to have influence over your own mind, to have the feeling of experiencing reality through another lens, one that you didn’t know could be changed. Because of this, I was drawn to research how humans experience movement.
First, how do humans perceive or detect movement? Movement can't exist as an instant; it needs a frame of reference and context. To identify a movement, we need to know something's state in the past. With that, we can tell how fast and in what direction something is moving. This isn't limited to physical, mechanical movement: it could be a social movement, where ideas or opinions spread through people and communities. What interests me most, however, is physical movement, because of its effect on humans. An object moving towards you will trigger a subconscious reaction and sensation.
This has a lot to do with how humans experience time. Unlike computers, time as a human experience is not definite, as we have no internal clock running to second or millisecond accuracy.
As far as movement, an important part is what we experience as “now” – the present. According to many sources [1] our experience of now is not a linear “knife edge” but instead “has a complex texture”. This is known as the specious present – a short duration of time (less than a second) that we experience as ‘now’. One key point is that there is always a ‘center’ to the experience of the present [1], with the trailing events moving from ‘now’ to ‘now but not central now’ to ‘past’.
During my interrogation of what movement was, I had an idea for a project. The idea came at first from Physical Computing, as a test application for some stepper motors I had bought. I thought it would be interesting to enter a location on Earth into a Processing sketch and have a 'pointer' driven by two stepper motors point at that location. Because we live on a sphere, the location would generally be downwards. As I thought more about this idea, I began to see the merits of its concept. We know that a country like, for example, Germany exists, but in a somewhat abstract sense. Maybe you think of it on a map, or even as "up", as it would be in relation to us on a model globe. What we don't think of is literally 'where' a country would be.
If someone asked you where the Sky Tower or Rangitoto was, you would be able to point at them. They are of a scale that we can understand. This goes back to my 'knowing' vs. 'believing' idea – we know we live on a sphere (an oblate spheroid, strictly), but we don't believe it. It isn't how we relate to the world, because it's too large.
This little pointing gadget would be a perfect example of that core Creative Technology tenet of translation or transposition. Our ability to understand the gesture of a point, and apply that to the knowledge that this technology has calculated its point to show exactly where a given location is to us can give a momentary collapse of knowing into believing. Even when thinking through the experience of interacting with this piece, you are suddenly aware of the planet, the sphere hurtling through space that we live on. You feel yourself making the connection between the pointer aiming itself at a seemingly arbitrary point on the floor, the significance of that motion, and then the implications of where it is pointing. The planet is at a scale too large for us to believe, we can’t hold that in our subconscious. For a second the Earth shrinks to the size of a ball in front of you, and it feels tangible. This fades and as the instinctive and abstract reconcile, your experience of life and knowledge of where it takes place mesh.
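The geometry behind the pointer is simple enough to sketch. As a rough illustration (in Python rather than Processing, using a spherical-Earth approximation; the function names and example coordinates are illustrative), the device only needs to convert two latitude/longitude pairs into an azimuth and elevation for the stepper motors:

```python
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=6371.0):
    # Spherical-Earth approximation: latitude/longitude to
    # 3D Cartesian coordinates in kilometres.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

def pointer_angles(obs_lat, obs_lon, tgt_lat, tgt_lon):
    # Direction from the observer to the target, expressed as a
    # local azimuth (degrees clockwise from north) and elevation
    # (degrees above the horizon; negative means below it).
    ox, oy, oz = latlon_to_xyz(obs_lat, obs_lon)
    tx, ty, tz = latlon_to_xyz(tgt_lat, tgt_lon)
    dx, dy, dz = tx - ox, ty - oy, tz - oz

    lat, lon = math.radians(obs_lat), math.radians(obs_lon)
    # Project the chord vector onto the observer's local
    # east / north / up axes.
    east  = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up    = (math.cos(lat) * math.cos(lon) * dx
             + math.cos(lat) * math.sin(lon) * dy
             + math.sin(lat) * dz)

    azimuth = math.degrees(math.atan2(east, north)) % 360
    elevation = math.degrees(math.atan2(up, math.hypot(east, north)))
    return azimuth, elevation

# From Auckland (-36.85, 174.76) toward Berlin (52.52, 13.40):
az, el = pointer_angles(-36.85, 174.76, 52.52, 13.40)
```

For a far-away target the elevation comes out strongly negative: the pointer aims into the floor, which is exactly the moment of 'believing' the sphere.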
This is where I found my starting point for the studio project, and what aspect of movement I wanted to focus on – the human sensation of movement and how that can be triggered to communicate an idea. Because these reactions are so deeply embedded in our subconscious, they have a large and lasting effect on us – more than a static piece of the same caliber could. Movement can tap into the most primal of human instincts.
The feeling of really 'being on a planet' that the pointer idea gave me stuck with me. I wanted to interrogate further down this path of inquiry, and see how it could link to movement. The obvious answer is the Earth's own motion: the surface rotates at up to about 460 metres per second, and the planet orbits the Sun at roughly 30 kilometres per second, so we are all in motion, but the frame of reference and scale mean we don't experience it. Our senses work on a smaller scale, and depend on the gravity of Earth as a stationary constant.
Maybe a pointer that followed the Sun or other celestial bodies could give people the sensation of orbiting by providing an ‘artificial reference point’ for the movement.
[1] "The Specious Present: A Neurophenomenology of Time Consciousness", in Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science, edited by Jean Petitot, Francisco J. Varela, Bernard Pachoud and Jean-Michel Roy, Stanford University Press, Stanford, Chapter 9, pp. 266–329.
[2] Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Karen Barad, 2007.
Typography for Developers
Taimur Abdaal leads design at Retool, a fast way to build internal tools. They're working on a new design system for their platform, to let anyone easily build beautiful custom apps. Typography will be a huge part of this and Taimur wrote this based on that experience.
You may have read the title for this post and thought, "Why on earth does a developer need to know anything about typography?" I mean, there’s already a lot on your plate and you’re making hundreds of decisions a day. Should you use React or Vue? npm or Yarn? ES6 or ES7? Sadly, this often leaves something like type as an afterthought. But, let’s remember that web design is 95% typography:
95% of the information on the web is written language. It is only logical to say that a web designer should get good training in the main discipline of shaping written information, in other words: Typography.
Even though we deal with content every day — whether reading, writing, or designing it — typography can be daunting to delve into because it’s filled with jargon and subjectivity, and it’s uncommon to see it taught extensively in school.
This is intended as a practical guide for developers to learn web typography. We’ll cover a range of practical and useful topics, like how to choose and use custom fonts on the web, but more importantly, how to lay text out to create a pleasant user experience. We’ll go over the principles of typography and the CSS properties that control them, as well as handy tips to get good results, quickly.
What is typography?
First and foremost, typography is about usability. Type is the user interface for conveying information, and conveying information is what we’re here to do on the web. There are many levers we can pull to affect the usability of reading text, and only by being deliberate about these can we create a pleasant experience for our users.
After (and only after) usability, typography is about emotion. Do the letters complement your content, or contradict it? Do they amplify your brand’s personality, or dampen it? Applied to the same text, different type will make people feel different things. Being deliberate about typography lets us control these feelings.
Same text, different personalities. I’ll bet the first experience is much more expensive. Typefaces: Bodoni 72 (Top), Tsukushi A Round Gothic (Bottom)
Despite what many golden ratio enthusiasts might try to tell you, typography isn’t an exact science. Good guidelines will get you most of the way there, but you’ll need to apply a little intuition too. Luckily, you’ve been reading text your whole life — books, magazines, the Internet — so you have a lot more untapped intuition than you think!
What’s in a font?
Let’s start off by getting a handle on some basic terminology and how fonts are categorized.
Typeface vs. Font
This is how the two have traditionally been defined and distinguished from one another:
Typeface: The design of a collection of glyphs (e.g. letters, numbers, symbols)
Font: A specific size, weight, or style of a typeface (e.g. regular, bold, italic)
In essence, a typeface is like a song and a font is like its MP3 file. You’ll see both terms used in typography literature, so it’s worth knowing the distinction. "Font vs. Typeface" is also a bit of a meme in the design community — you might see it crop up on Twitter, so it’ll help to be "in the know."
HOW 2 MAJOR IN GRAPHIC DESIGN: - CHOOSE A FONT - "ITS CALLED A TYPEFACE NOT A FONT" - CHOOSE A GODDAMN TYPEFACE - U MADE THE WRONG CHOICE
— Deena (@itsDeenasaur) November 2, 2014
I may have used font and typeface interchangeably in this story. Please send all complaints to [email protected]. https://t.co/IuHHmlaNAT
— Tristin Hopper (@TristinHopper) January 14, 2019
But, more recently, you can essentially use both terms interchangeably and people will know what you mean.
How fonts are categorized
The broadest split is between serif and sans-serif typefaces. It’s likely you’ve come across these terms just by seeing some typeface names floating around (like, ahem, Comic Sans).
A "serif" is a small stroke attached to the ends of letters, giving them a traditional feel. In fact, most books and newspapers are set in serif typefaces. By contrast, sans-serif typefaces don’t have these extra strokes, giving them a smooth, modern feel.
Times (left) and Helvetica Neue (right)
Both serif and sans-serif have categories within them. For example, serif has sub-categories including didone, slab and old style.
Didot (left), Rockwell (center) and Hoefler Text (right)
As far as sans-serif goes, it includes humanist, geometric and grotesk as sub-categories.
Gill Sans (left), Futura (center) and Aktiv Grotesk (right)
Monospace fonts (yes, fonts) are a noteworthy category all their own. Each glyph (i.e. letter, number, or symbol) in a monospace font has the same width (hence the "mono" spacing terminology), so it’s possible to arrange them into visual structures. You may be well familiar with monospace because we see it often when writing code, where it’s helpful to make brackets and indents line up visually. The code editor you have open right now is likely using a monospace font.
Monaco
How to choose fonts and font pairings
This is subjective, and depends on what you’re trying to do. Each typeface has its own personality, so you’ll have to find one that aligns with your brand or the content it’s communicating. Typefaces have to be considered within the context that they’re being applied. Are we looking for formality? Maybe start with the serif family. Warm, fun and friendly? That might be a cue for sans-serif. Heck, there’s a time and a place even for Comic Sans… really! Just know that there is no hard science to choosing a font, and even the most trained of typographers are considering contextual cues to find the "right" one.
But fonts can be paired together in the same design. For example, some designs use one typeface for headings and another for body text. This is a good way to get a unique look, but it does take a fair bit of work to get it right. Like colors in a palette, some typefaces work well together while others clash. And even purposeful clashing can make sense, again, given the right context.
The only way to find out if two fonts are complementary is to see them together. Tools like FontPair and TypeConnection can help with this. If a font catches your eye when surfing the web, WhatFont is a great browser extension to help identify it. Another great resource is Typewolf which allows you to see examples of great web typography, including different font combinations in use:
Alice Lee (left), American Documentary (center) and Studio Stereo (right)
While there’s a lot of subjectivity in choosing fonts, there are some objective considerations.
Font weight as a consideration
Some font families have a wide range of font weights — Light, Book, Regular, Medium, Semi-Bold, Bold, Black — whereas others just have a couple.
Inter UI comes with a fantastic range of weights
If you’re building a complex web app UI, you might need a range of font weights to establish hierarchy in different contexts. For something less complex, say a blog, you’ll likely be fine with just a couple.
Complex UI hierarchy (left) and Simple blog hierarchy (right)
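As a sketch of how a range of weights can establish that hierarchy (the class names and specific weight values here are illustrative, not a recommendation):

```css
/* Assumes a family such as Inter UI loaded at multiple weights. */
.page-title   { font-weight: 700; } /* bold: top-level headings */
.section-head { font-weight: 600; } /* semi-bold: section labels */
.body-text    { font-weight: 400; } /* regular: running copy */
.caption      { font-weight: 300; } /* light: secondary metadata */
```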
Variable fonts are an exciting new technology that provides precise control over a font's characteristics. For font weights, this means you're no longer limited to "Light," "Regular," and "Bold"; any weight in between is available. Try some variable fonts out here and check out Ollie Williams' post as well.
It's still early days — there aren't many variable fonts available right now, and browser support is limited. But definitely a space worth watching!
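As a minimal sketch, loading and using a variable font looks something like this (the file name is a placeholder; the `font-weight: 100 900` range declaration tells the browser the whole weight axis is available in one file):

```css
@font-face {
  font-family: 'Inter var';
  src: url('Inter-var.woff2') format('woff2-variations');
  font-weight: 100 900; /* declare the full weight axis */
}

h1 {
  font-family: 'Inter var', sans-serif;
  font-weight: 650; /* any in-between value, not just 400 or 700 */
}
```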
Consider the legibility of a font
Some typefaces are harder to read than others. Stay away from elaborate fonts when it comes to paragraphs, and if you must have tiny text somewhere, make sure its typeface is legible at those tiny sizes.
Of these two examples, which is easier on the eyes? (There is a right answer here. 🙂)
See the Pen Font Comparison: Body by Geoff Graham (@geoffgraham) on CodePen.
Remember, fonts come in a variety of styles
Making a font bold isn’t as simple as adding an outline to the text, and italics are more than slanted letters. Good fonts have specific bold and italic styles, without which the browser tries to "fake" it:
Lato
These faux styles tend to reduce legibility and break the text’s visual cohesion because the browser can only do so much with what it’s given. Stick to fonts that offer true bold and italic styles, if you can.
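If you would rather the browser not fake these styles at all, the `font-synthesis` property can switch them off (browser support varies, so treat this as a progressive enhancement):

```css
/* Ask the browser not to fabricate bold or oblique glyphs
   when the loaded family lacks a true style. */
body {
  font-synthesis: none;
}
```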
Other things worth consideration
What we’ve looked at so far are the high level characteristics and features of fonts that can help us decide which to choose for a particular design or context. There are many other things that can (and probably should) be taken into consideration, including:
Language support: Some fonts have glyphs for foreign characters, which may be a requirement for a project geared toward a multilingual audience or that contains multilingual content.
Ligatures: Some fonts (typically serifs) have glyphs that replace awkward character combinations, like ffi and ffl, with a single combined glyph.
File size: You’re a developer, so I know you care about performance. Some fonts come in bigger files than others and those can cause a site to take a hit on load times.
Using fonts on the world wide web
A mere 10 years ago, there were two complex ways to use custom fonts on the web:
sIFR: Embedding a Flash (remember that thing?) widget to render text.
Cufón: Converting the font to a proprietary format and using a JavaScript rendering engine to draw the text with VML or an HTML <canvas>.
Today, all we need is a single, simple method: @font-face. It lets you specify a font file URL, which the browser downloads and uses on your site.
You can use the @font-face declaration to define your custom font:
@font-face {
  font-family: 'MyWebFont';
  src: url('webfont.eot'); /* IE9 Compat Modes */
  src: url('webfont.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */
       url('webfont.woff2') format('woff2'), /* Super Modern Browsers */
       url('webfont.woff') format('woff'), /* Pretty Modern Browsers */
       url('webfont.ttf') format('truetype'), /* Safari, Android, iOS */
       url('webfont.svg#svgFontName') format('svg'); /* Legacy iOS */
}
And then refer to it as usual, via the font-family property:
body { font-family: 'MyWebFont', Fallback, sans-serif; }
Even though @font-face is light years ahead of the old school approaches to custom web fonts, there is quite a bit to consider as far as browser support goes. Here’s a guide to getting the right amount of cross-browser support.
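For comparison, if you only need to support modern browsers, a much leaner declaration is often enough. A sketch (woff2 with a woff fallback covers the vast majority of today's traffic):

```css
@font-face {
  font-family: 'MyWebFont';
  src: url('webfont.woff2') format('woff2'),
       url('webfont.woff') format('woff');
  font-display: swap; /* show fallback text while the font loads */
}
```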
The impact of fonts on performance
Performance is the only downside to using custom fonts on the web. Fonts are relatively large assets, often hundreds of kilobytes in size, and will have an adverse effect on the speed of any site.
Here are some tips to soften the blow:
Use GZIP compression on your web font files.
Disable font features you don’t need, e.g. hinting, kerning.
Remove glyphs that you don’t need, e.g. foreign language characters.
One tool that can help tailor a font file to suit your needs is Transfonter. Just make sure you have the rights to any font that you want to use on your site because, well, it’s just the right thing to do.
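If you self-host your font files, another easy win is to preload them, so the browser starts fetching the font early instead of discovering it late, deep inside the CSS. A sketch (the file name is a placeholder):

```html
<!-- Fetch the font early, in parallel with the CSS.
     The crossorigin attribute is required for font preloads,
     even when the file is served from the same origin. -->
<link rel="preload" href="webfont.woff2" as="font" type="font/woff2" crossorigin>
```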
Some services will host web fonts for you
One other significant way to shed some of the weight of using custom web fonts is to use a service that will host and serve them for you. The benefit here is that third-parties store the files on a speedy CDN, optimize them, and serve them to your site via a JavaScript snippet that gets dropped into the document head. In short: it takes a lot of hassle off your hands.
How easy is it to use a hosted service? Consider that using anything on Google Fonts only requires a single <link> in the HTML head:
<html>
<head>
  <link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">
  <style>
    body {
      font-family: 'Roboto', sans-serif;
    }
  </style>
</head>
<!-- Rest of the document -->
This example was taken from Google Fonts — the most popular hosted font service. It’s free to use, and, at the time of writing, has a catalogue of 915 font families. The quality of these can be hit-or-miss. Some are truly top-notch — beautiful design, many weights, true bold/italics, and advanced features like ligatures. Others, typically the more novel designs, are bare-bones, and might not be suitable for serious projects with evolving needs.
Adobe Fonts is also very popular. It comes bundled with Adobe’s Creative Cloud offering, which starts at $20.99 per month. At the time of writing, it has a catalogue of 1,636 font families. These are typically high quality fonts that you’d have to pay for, but with Adobe Fonts, you can access them all in a Netflix all-you-can-eat model.
Here are some other options:
Adobe Edge Web Fonts: This is the free version of Adobe Fonts. Adobe partnered with Google on this, so there’s a lot of overlap with the Google Fonts library. It has 503 font families in total.
Fontspring: This is a massive library of over 14,000 font families, but with an individual licensing model. This means paying more up-front, starting at ~$20 per individual font (and per weight or style), but no recurring costs. It’s also self-serve — you’ll have to host the files yourself.
MyFonts by Monotype: This is a major type foundry. Similar to Fontspring: a massive library of font families with an individual licensing model.
Fonts.com: This is similar to Fontspring. Options for one-off or pay-as-you go pricing (based on page views).
Cloud.typography by Hoefler&Co: This is a major type foundry. Over 1,000 font families (all by Hoefler&Co), with options for hosted (subscription, starting at $99 per year) or self-hosted (individually licensed) web fonts. The benefit here is that you get access to a library of fonts you cannot find anywhere else.
Fontstand: This service lets you "rent" individual fonts cheaply on a monthly basis (starting at ~$10 per month), and you own them automatically after 12 months. To date, it boasts 1,500 families, along with a hosted web font service.
CSS and Typography
OK, let’s move on to the intersection of design and development. We’ve covered a lot of the foundational terms and concepts of typography, but now we can turn our attention to what affordances we have to manipulate and style type, specifically with CSS.
Adjusting font sizing
Changing the size of a font is something that inevitably pops up in a typical project. Size is important because it creates hierarchy, implicitly guiding the user through the page. So, in this section, we’re going to look at two CSS features that are available in our tool belt for adjusting font sizes to create better content hierarchy.
Sizes can be expressed in different units of measure
Did you know that there are 15 different units for sizing in CSS? Crazy, right? Most of us are probably aware of and comfortable working with pixels (px) but there are so many other ways we can define the size of a font.
Some of these are relative units, and others are absolute. The actual size of an element expressed in relative units depends on other parts of the page, while the actual size of an element expressed in absolute units is always the same. And, as you might expect, that difference is important because they serve different functions, depending on a design’s needs.
Of the 15 units we have available, we will selectively look at two in particular: px and em. Not only are these perhaps the two most used units that you’ll see used to define font sizes, but they also perfectly demonstrate the difference between absolute and relative sizing.
First off, px is an absolute unit and it actually doesn’t have much to do with pixels (it’s considered an angular measurement), but it’s defined so that a line of width 1px appears sharp and visible, no matter the screen resolution. The px value corresponds, roughly, to a little more than the distance from the highest ascender (e.g. the top of the letter A) to the lowest descender (e.g. the bottom of the letter p) in the font. Some fonts have taller letters than others, so 16px in one font might look noticeably "bigger" than 16px in another. Yet another consideration to take into account when choosing a font!
Interestingly enough, when it comes to typography, the px is a unit to be understood, but not used. The true purpose of px units is to serve as the foundation of a type system based on relative units. In other words, it’s an absolute value that a relative unit can point to in order to define its own size relative to that value.
Which takes us to em, a relative unit. Text with a font size of 2em is twice as big as the font-size of its parent element. At the top of the tree, the root element has no parent, so it falls back to the browser's default font-size. Desktop browsers usually default to 16px, while other devices (e.g. mobile phones and TVs) may have different defaults optimized for their form factors.
The em lets you reason about your typography intuitively: "I want this title to be twice as big as the paragraph text" corresponds directly to 2em. It’s also easy to maintain — if you need a media query to make everything bigger on mobile, it’s a simple matter of increasing one font-size attribute. Dealing in em units also lets you set other typographical elements in relation to font-size, as we’ll see later.
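Here is a small sketch of that idea: headings sized relative to the body text, and a single media query that rescales everything at once (the breakpoint and values are just examples):

```css
body {
  font-size: 1em; /* inherits the browser default, usually 16px */
}

h1 {
  font-size: 2em; /* "twice as big as the paragraph text" */
}

/* One change rescales the whole hierarchy */
@media (max-width: 600px) {
  body {
    font-size: 1.125em;
  }
}
```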
There are semantic HTML elements that CSS can target to adjust size
You’re sold on the em, but how many em units should each element be? You’ll want a range of text sizes to establish hierarchy between the different elements of your page. More often than not, you’ll see hierarchies that are six levels deep, which corresponds to the HTML heading elements, h1 through h6.
The right size scale depends on the use case. As a guideline, many people choose to use a modular scale. That refers to a scale based on a constant ratio for successive text elements in the hierarchy.
Example of a modular scale with a ratio of 1.4
Modular Scale is a tool by Tim Brown, who popularized this approach; it makes it easy to visualize different size scales.
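A modular scale is simple to express directly in CSS by repeated multiplication. A sketch using a custom property for the ratio (1.4 here, matching the example above):

```css
:root {
  --ratio: 1.4;
}

p  { font-size: 1em; }
h3 { font-size: calc(1em * var(--ratio)); }                               /* 1.4em   */
h2 { font-size: calc(1em * var(--ratio) * var(--ratio)); }                /* 1.96em  */
h1 { font-size: calc(1em * var(--ratio) * var(--ratio) * var(--ratio)); } /* ~2.74em */
```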
Adjusting a font’s vertical spacing and alignment
The physical size of a font is a super common way we use CSS to style a font, but vertical spacing is another incredibly powerful way CSS can help improve the legibility of content.
Use line-height to define how tall to make a line of text
Line height is one of the most important factors affecting readability. It’s controlled by the line-height property, best expressed as a unit-less number that corresponds to a multiple of the defined font size.
Let’s say we are working with a computed font size of 16px (specified in the CSS as em units, of course), a line-height of 1.2 would make each line 19.2px tall. (Remember that px aren’t actually pixels, so decimals are valid!)
Most browsers default to a line-height of 1.2, but that is usually too tight. You probably want something closer to 1.5, because it provides a little more breathing room for your eyes when reading.
Here are some general guidelines to define a good line height:
Increase line-height for thick fonts
Increase line-height when fonts are a dark color
Increase line-height for long-form content
Increasing the line-height can drastically improve legibility
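In CSS, that simply means setting a unitless line-height on the body, which every element then inherits as a multiplier of its own font size:

```css
body {
  font-size: 1em;   /* typically computes to 16px */
  line-height: 1.5; /* each line is 1.5 × the font size, i.e. 24px */
}
```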
Fonts can’t dance, but they still have rhythm
Rhythm describes how text flows vertically on a page. Much like music, some amount of consistency and structure usually leads to good "rhythm." Like most design techniques, rhythm isn’t an exact science but there are some sensible guidelines that give good results.
One school of thought prescribes the use of paragraph line height as a baseline unit from which all other spacing is derived. This includes gaps between paragraphs and headings, and padding between the text and other page elements.
Under this system, you might set the line height of a heading to twice the line height of paragraphs, and leave a one-line gap between a heading and paragraph. Handily, you can use the same em units to define margins and padding, so there’s really no need to hard-code anything.
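A sketch of that system, assuming the browser-default 16px body text, with the paragraph line (1.5 × 16px = 24px) used as the unit for heading line height and the gaps between blocks:

```css
p {
  line-height: 1.5;
  margin: 0 0 1.5em;  /* one line (24px) of space below each paragraph */
}

h2 {
  font-size: 2em;     /* 32px */
  line-height: 1.5;   /* 48px: twice the paragraph line height */
  margin: 0 0 0.75em; /* 0.75em × 32px = 24px: one paragraph line of space */
}
```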
Read more about rhythm here.
Horizontal Shape
Hopefully you’re convinced by now that vertical spacing is an important factor in improving legibility that we can control in CSS. Equally important is the horizontal spacing of individual characters and the overall width of content.
CSS can control the space between letters
Letter spacing is one of the most important factors affecting legibility. It is controlled by the CSS letter-spacing property, best expressed (again) in em units to keep everything relative. Tracking (the typography term for "letter spacing") depends entirely on the font — some fonts will look great by default, while others might need a little tweaking.
In general, you only need to worry about letter spacing for text elements that are particularly big or small because most fonts are spaced well at a typical paragraph size.
For larger text, like headings and titles, you’ll often want to reduce the space between letters. And a little bit of space goes a long way. For example, -0.02em is a tiny decimal, but a good starting point, which can be tweaked until it looks just right to your eye. The only time you should think about increasing letter spacing is when dealing with stylistically unusual text — things like all-capped titles and even some number sequences.
Adding or subtracting as little as 0.02em can refine the appearance of words
Some specific pairs of letters, like AV, can look awkwardly spaced without a little manual tweaking. Well-crafted fonts typically specify custom spacing for such pairs, and setting the font-kerning property to normal lets you enable this. Browsers disable this on smaller text by default. Here is a little more on letter spacing, as well as other CSS tools we have to control font spacing in general.
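Putting these ideas into CSS: slightly tighter tracking for large headings, a little extra space for small all-caps text, and kerning switched on. The values are starting points to tweak by eye, and the .label class is hypothetical:

```css
h1 {
  font-size: 3em;
  letter-spacing: -0.02em; /* tighten large text a touch */
}

.label {
  font-size: 0.75em;
  text-transform: uppercase;
  letter-spacing: 0.05em;  /* all-caps text benefits from extra space */
}

body {
  font-kerning: normal;    /* use the font's built-in kerning pairs */
}
```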
The length of a line of text is more important than you might think
It’s unpleasant for our eyes to move long distances while reading, so the width of lines should be deliberate. General consensus is that a good width is somewhere between 60 and 70 characters per line. If you find that a line of text (especially for long-form content) drastically falls outside this range, then start adjusting.
The ch is a little-known CSS unit that we didn’t cover earlier, but it can be helpful for keeping line length in check. It’s a relative unit, defined as the width of the 0 character in the element’s font. Because narrow characters like l and i are relatively frequent, setting the width of a text container to something like 50ch should result in lines that are 60-70 characters long.
p { width: 50ch; }
CSS can smooth otherwise crispy fonts
-webkit-font-smoothing is a nifty CSS property that controls how fonts are anti-aliased. This is a fancy way of saying it can draw shades of gray around otherwise square pixels to create a smoother appearance. Do note, however, the -webkit prefix there, because that indicates that the property is only supported by WebKit browsers, e.g. Safari.
The default setting is subpixel-antialiased, but it’s worth seeing what your type looks like when it’s set to just antialiased. It can improve the way many fonts look, especially for text on non-white backgrounds.
At small font sizes, this should be used with caution — it lowers contrast, affecting readability. You should also make sure to adjust letter-spacing when using this, since greater anti-aliasing will increase the space between letters.
Anti-aliased (left) Subpixel Anti-aliased (right)
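Because the property is non-standard, it is usually paired with Firefox's macOS-only equivalent. A sketch:

```css
/* Non-standard, vendor-prefixed properties; harmless where unsupported */
body {
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale; /* Firefox on macOS */
}
```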
Wrapping up
Phew! We covered a lot of ground in a relatively short amount of space. Still, this is by no means an exhaustive guide, but rather, something that I hope will encourage you to take more control over the typography in your future projects, and to seek an even deeper knowledge of the topic. For that, I’ve compiled a list of additional resources to help you level up from here.
Learning about typography
The Elements of Typographic Style Applied to the Web
Butterick’s Practical Typography
Better Web Type
Typography Inspiration
Typewolf
Fonts in Use
Awwwards
Identifying Fonts
WhatFont
Identifont
WhatTheFont
Typography in CSS
CSS Reference
CSS-Tricks Guide to Typography
The post Typography for Developers appeared first on CSS-Tricks.
Typography for Developers
Taimur Abdaal leads design at Retool, a fast way to build internal tools. They're working on a new design system for their platform, to let anyone easily build beautiful custom apps. Typography will be a huge part of this and Taimur wrote this based on that experience.
You may have read the title for this post and thought, "Why on earth does a developer need to know anything about typography?" I mean, there’s already a lot on your plate and you’re making hundreds of decisions a day. Should you use React or Vue? npm or Yarn? ES6 or ES7? Sadly, this often leaves something like type as an afterthought. But, let’s remember that web design is 95% typography:
95% of the information on the web is written language. It is only logical to say that a web designer should get good training in the main discipline of shaping written information, in other words: Typography.
Even though we deal with content every day, whether reading, writing, or designing it, typography can be daunting to delve into because it’s filled with jargon and subjectivity, and it’s uncommon to see it taught extensively at school.
This is intended as a practical guide for developers to learn web typography. We’ll cover a range of practical and useful topics, like how to choose and use custom fonts on the web, but more importantly, how to lay text out to create a pleasant user experience. We’ll go over the principles of typography and the CSS properties that control them, as well as handy tips to get good results, quickly.
What is typography?
First and foremost, typography is about usability. Type is the user interface for conveying information, and conveying information is what we’re here to do on the web. There are many levers we can pull to affect the usability of reading text, and only by being deliberate about these can we create a pleasant experience for our users.
After (and only after) usability, typography is about emotion. Do the letters complement your content, or contradict it? Do they amplify your brand’s personality, or dampen it? Applied to the same text, different type will make people feel different things. Being deliberate about typography lets us control these feelings.
Same text, different personalities. I’ll bet the first experience is much more expensive. Typefaces: Bodoni 72 (Top), Tsukushi A Round Gothic (Bottom)
Despite what many golden ratio enthusiasts might try to tell you, typography isn’t an exact science. Good guidelines will get you most of the way there, but you’ll need to apply a little intuition too. Luckily, you’ve been reading text your whole life — books, magazines, the Internet — so you have a lot more untapped intuition than you think!
What’s in a font?
Let’s start off by getting a handle on some basic terminology and how fonts are categorized.
Typeface vs. Font
This is how the two have traditionally been defined and distinguished from one another:
Typeface: The design of a collection of glyphs (e.g. letters, numbers, symbols)
Font: A specific size, weight, or style of a typeface (e.g. regular, bold, italic)
In essence, a typeface is like a song and a font is like its MP3 file. You’ll see both terms used in typography literature, so it’s worth knowing the distinction. "Font vs. Typeface" is also a bit of a meme in the design community; you might see it crop up on Twitter, so it’ll help to be "in the know."
HOW 2 MAJOR IN GRAPHIC DESIGN: - CHOOSE A FONT - "ITS CALLED A TYPEFACE NOT A FONT" - CHOOSE A GODDAMN TYPEFACE - U MADE THE WRONG CHOICE
— Deena (@itsDeenasaur) November 2, 2014
I may have used font and typeface interchangeably in this story. Please send all complaints to [email protected]. https://t.co/IuHHmlaNAT
— Tristin Hopper (@TristinHopper) January 14, 2019
But, more recently, you can essentially use both terms interchangeably and people will know what you mean.
How fonts are categorized
The broadest split is between serif and sans-serif typefaces. It’s likely you’ve come across these terms just by seeing some typeface names floating around (like, ahem, Comic Sans).
A "serif" is a small stroke attached to the ends of letters, giving them a traditional feel; in fact, most books and newspapers are set in serif typefaces. Sans-serif typefaces, by contrast, don’t have these extra strokes, which gives them a smooth, modern feel.
Times (left) and Helvetica Neue (right)
Both serif and sans-serif have categories within them. For example, serif has sub-categories including didone, slab and old style.
Didot (left), Rockwell (center) and Hoefler Text (right)
As far as sans-serif goes, it includes humanist, geometric and grotesk as sub-categories.
Gill Sans (left), Futura (center) and Aktiv Grotesk (right)
Monospace fonts (yes, fonts) are a noteworthy category all their own. Each glyph (i.e. letter, number, or symbol) in a monospace font has the same width (hence the "mono" spacing terminology), so it’s possible to arrange them into visual structures. You may be well familiar with monospace fonts because we often see them when writing code, where it’s helpful to make brackets and indents line up visually. The code editor you have open right now is likely using one.
Monaco
How to choose fonts and font pairings
This is subjective, and depends on what you’re trying to do. Each typeface has its own personality, so you’ll have to find one that aligns with your brand or the content it’s communicating. Typefaces have to be considered within the context that they’re being applied. Are we looking for formality? Maybe start with the serif family. Warm, fun and friendly? That might be a cue for sans-serif. Heck, there’s a time and a place even for Comic Sans… really! Just know that there is no hard science to choosing a font, and even the most trained of typographers are considering contextual cues to find the "right" one.
But fonts can be paired together in the same design. For example, some designs use one typeface for headings and another for body text. This is a good way to get a unique look, but it does take a fair bit of work to get it right. Like colors in a palette, some typefaces work well together while others clash. And even purposeful clashing can make sense, again, given the right context.
The only way to find out if two fonts are complementary is to see them together. Tools like FontPair and TypeConnection can help with this. If a font catches your eye when surfing the web, WhatFont is a great browser extension to help identify it. Another great resource is Typewolf which allows you to see examples of great web typography, including different font combinations in use:
Alice Lee (left), American Documentary (center) and Studio Stereo (right)
While there’s a lot of subjectivity in choosing fonts, there are some objective considerations.
Font weight as a consideration
Some font families have a wide range of font weights — Light, Book, Regular, Medium, Semi-Bold, Bold, Black — whereas others just have a couple.
Inter UI comes with a fantastic range of weights
If you’re building a complex web app UI, you might need a range of font weights to establish hierarchy in different contexts. For something less complex, say a blog, you’ll likely be fine with just a couple.
Complex UI hierarchy (left) and Simple blog hierarchy (right)
Variable fonts are an exciting new technology that provide precise control over a font's characteristics. For font weights, this means there's no longer limitations to using "Light," "Regular," and "Bold," but really any weight in between. Try some variable fonts out here and check out Ollie Williams' post as well.
It's still early days — there aren't many variable fonts available right now, and browser support is limited. But definitely a space worth watching!
Consider the legibility of a font
Some typefaces are harder to read than others. Stay away from elaborate fonts when it comes to paragraphs, and if you must have tiny text somewhere, make sure its typeface is legible at those tiny sizes.
Of these two examples, which is easier on the eyes? (There is a right answer here. 🙂)
See the Pen Font Comparison: Body by Geoff Graham (@geoffgraham) on CodePen.
Remember, fonts come in a variety of styles
Making a font bold isn’t as simple as adding an outline to the text, and italics are more than slanted letters. Good fonts have specific bold and italic styles, without which the browser tries to "fake" it:
Lato
These faux styles tend to reduce legibility and break the text’s visual cohesion because the browser can only do so much with what it’s given. Stick to fonts that offer true bold/italics styles, if you can.
Other things worth consideration
What we’ve looked at so far are the high level characteristics and features of fonts that can help us decide which to choose for a particular design or context. There are many other things that can (and probably should) be taken into consideration, including:
Language support: Some fonts have glyphs for foreign characters, which may be a requirement for a project geared toward a multilingual audience or that contains multilingual content.
Ligatures: Some fonts (typically serifs) have glyphs to replace otherwise awkward characters, like ffi and ffl which are multiple characters in a single glyph.
File size: You’re a developer, so I know you care about performance. Some fonts come in bigger files than others and those can cause a site to take a hit on load times.
Using fonts on the world wide web
A mere 10 years ago, there were two complex ways to use custom fonts on the web:
SiFR: Embedding a Flash (remember that thing?) widget to render text.
cufon: converting the font to a proprietary format, and using a specific JavaScript rendering engine to generate the text using VML on an HTML <canvas>.
Today, there’s all we need is a single simple one: @font-face. It lets you specify a font file URL, which the browser then downloads and uses on your site.
You can use the @font-face declaration to define your custom font:
@font-face { font-family: 'MyWebFont'; src: url('webfont.eot'); /* IE9 Compat Modes */ src: url('webfont.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */ url('webfont.woff2') format('woff2'), /* Super Modern Browsers */ url('webfont.woff') format('woff'), /* Pretty Modern Browsers */ url('webfont.ttf') format('truetype'), /* Safari, Android, iOS */ url('webfont.svg#svgFontName') format('svg'); /* Legacy iOS */ }
And then refer to it as normal, by the font-family attribute:
body { font-family: 'MyWebFont', Fallback, sans-serif; }
Even though @font-face is light years ahead of the old school approaches to custom web fonts, there is quite a bit to consider as far as browser support goes. Here’s a guide to getting the right amount of cross-browser support.
The impact of fonts on performance
Performance is the only downside to using custom fonts on the web. Fonts are relatively large assets — often hundreds of kilobytes in size — and will have and adverse effect on the speed of any site.
Here are some tips to soften the blow:
Use GZIP compression on your web font files.
Disable font features you don’t need, e.g. hinting, kerning.
Remove glyphs that you don’t need, e.g. foreign language characters.
One tool that can help tailor a font file to suit your needs is Transfonter. Just make sure you have the rights to any font that you want to use on your site because, well, it’s just the right thing to do.
Some services will host web fonts for you
One other significant way to shed some of the weight of using custom web fonts is to use a service that will host and serve them for you. The benefit here is that third-parties store the files on a speedy CDN, optimize them, and serve them to your site via a JavaScript snippet that gets dropped into the document head. In short: it takes a lot of hassle off your hands.
How easy is it to use a hosted service? Consider that using anything on Google Fonts only requires a single <link> in the HTML head:
<html> <head> <link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet"> <style> body { font-family: 'Roboto', sans-serif; } </style> </head> <!-- Rest of the document -->
This example was taken from Google Fonts — the most popular hosted font service. It’s free to use, and, at the time of writing, has a catalogue of 915 font families. The quality of these can be hit-or-miss. Some are truly top-notch — beautiful design, many weights, true bold/italics, and advanced features like ligatures. Others, typically the more novel designs, are bare-bones, and might not be suitable for serious projects with evolving needs.
Adobe Fonts is also very popular. It comes bundled with Adobe’s Creative Cloud offering, which starts at $20.99 per month. At the time of writing, it has a catalogue of 1,636 font families. These are typically high quality fonts that you’d have to pay for, but with Adobe Fonts, you can access them all in a Netflix all-you-can-eat model.
Here are some other options:
Adobe Edge Web Fonts: This is the free version of Adobe Fonts. Adobe partnered with Google on this, so there’s a lot of overlap with the Google Fonts library. It has 503 font families in total.
Fontspring: This is a massive library of over 14,000 font families, but with an individual licensing model. This means paying more up-front, starting at ~$20 per individual font (and per weight or style), but no recurring costs. It’s also self-serve — you’ll have to host the files yourself.
MyFonts by Monotype: This is major type foundry. Similar to Fontspring: massive library of font families with an individual licensing model.
Fonts.com: This is similar to Fontspring. Options for one-off or pay-as-you go pricing (based on page views).
Cloud.typography by Hoefler&Co: This is a major type foundry. Over 1,000 font families (all by Hoefler&Co), with options for hosted (subscription, starting at $99 per year) or self-hosted (individually licensed) web fonts. The benefit here is that you get access to a library of fonts you cannot find anywhere else.
Fontstand: This services allows you to "rent" individual fonts cheaply on a monthly basis, starting ~$10 per month), which you get to own automatically after 12 months. To date, it boasts 1,500 families, with a hosted web font service.
CSS and Typography
OK, let’s move on to the intersection of design and development. We’ve covered a lot of the foundational terms and concepts of typography, but now we can turn our attention to what affordances we have to manipulate and style type, specifically with CSS.
Adjusting font sizing
Changing the size of a font is something that inevitably pops up in a typical project. Size is important because it creates hierarchy, implicitly guiding the user through the page. So, in this section, we’re going to look at two CSS features that are available in our tool belt for adjusting font sizes to create better content hierarchy.
Sizes can be expressed in a different units of measure
Did you know that there are 15 different units for sizing in CSS? Crazy, right? Most of us are probably aware of and comfortable working with pixels (px) but there are so many other ways we can define the size of a font.
Some of these are relative units, and others are absolute. The actual size of an element expressed in relative units depends on other parts of the page, while the actual size of an element expressed in absolute units is always the same. And, as you might expect, that difference is important because they serve different functions, depending on a design’s needs.
Of the 15 units we have available, we will selectively look at two in particular: px and em. Not only are these perhaps the two most used units that you’ll see used to define font sizes, but they also perfectly demonstrate the difference between absolute and relative sizing.
First off, px is an absolute unit and it actually doesn’t have much to do with pixels (it’s considered an angular measurement), but it’s defined so that a line of width 1px appears sharp and visible, no matter the screen resolution. The px value corresponds, roughly, to a little more than the distance from the highest ascender (e.g. the top of the letter A) to the lowest descender (e.g. the bottom of the letter p) in the font. Some fonts have taller letters than others, so 16px in one font might look noticeably "bigger" than 16px in another. Yet another consideration to take into account when choosing a font!
Interestingly enough, when it comes to typography, the px is a unit to be understood, but not used. The true purpose of px units is to serve as the foundation of a type system based on relative units. In other words, it’s an absolute value that a relative unit can point to in order to define its own size relative to that value.
Which takes us to em, a relative unit. Text with a font size of 2em would be twice as big as the font-size of its parent element. At the root of the page there is no parent to inherit from, so each browser supplies its own default font-size. For example, desktop browsers usually default to 16px. Other devices (e.g. mobile phones and TVs) might have different defaults that are optimized for their form factors.
The em lets you reason about your typography intuitively: "I want this title to be twice as big as the paragraph text" corresponds directly to 2em. It’s also easy to maintain: if you need a media query to make everything bigger on mobile, it’s a simple matter of increasing one font-size declaration. Dealing in em units also lets you set other typographical elements in relation to font-size, as we’ll see later.
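As a quick sketch of that idea (the selectors and exact values here are illustrative, not prescriptive):

```css
/* Desktop browsers usually default the root font-size to 16px */
p  { font-size: 1em; }   /* = 16px at the default */
h1 { font-size: 2em; }   /* twice the parent's font-size: 32px */

@media (max-width: 600px) {
  /* bump one value and the whole em-based hierarchy scales with it */
  body { font-size: 1.125em; }
}
```

One caveat worth knowing: em compounds through nesting, so an element at 2em inside another element at 2em computes to four times the root size, which is why some developers prefer rem for font sizing.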
There are semantic HTML elements that CSS can target to adjust size
You’re sold on the em, but how many em units should each element be? You’ll want to have a range of text sizes to establish hierarchy between different elements of your page. More often than not, you’ll see hierarchies that are six levels deep, which corresponds to the HTML heading elements, h1 through h6.
The right size scale depends on the use case. As a guideline, many people choose to use a modular scale. That refers to a scale based on a constant ratio for successive text elements in the hierarchy.
Example of a modular scale with a ratio of 1.4
Modular Scales, a tool by Tim Brown (who popularized this approach), makes it easy to visualize different size scales.
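For instance, a modular scale with a ratio of 1.4 could be sketched in CSS like this (the mapping of scale steps to elements, and the rounding, are illustrative):

```css
/* Each step multiplies the previous one by the ratio (1.4) */
small { font-size: 0.714em; }  /* 1 ÷ 1.4 */
p     { font-size: 1em; }      /* base size */
h3    { font-size: 1.4em; }    /* 1.4¹ */
h2    { font-size: 1.96em; }   /* 1.4² */
h1    { font-size: 2.744em; }  /* 1.4³ */
```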
Adjusting a font’s vertical spacing and alignment
The physical size of a font is a super common way we use CSS to style a font, but vertical spacing is another incredibly powerful way CSS can help improve the legibility of content.
Use line-height to define how tall to make a line of text
Line height is one of the most important factors affecting readability. It’s controlled by the line-height property, best expressed as a unit-less number that corresponds to a multiple of the defined font size.
Let’s say we are working with a computed font size of 16px (specified in the CSS as em units, of course); a line-height of 1.2 would then make each line 19.2px tall. (Remember that px values aren’t actually pixels, so decimals are valid!)
Most browsers default to a line-height of around 1.2, but that is usually too tight; you probably want something closer to 1.5, which provides a little more breathing room for your eyes when reading.
Here are some general guidelines to define a good line height:
Increase line-height for thick fonts
Increase line-height when fonts are a dark color
Increase line-height for long-form content
Increasing the line-height can drastically improve legibility
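A minimal starting point that follows the guidelines above (these values are a matter of taste and should be tuned per font):

```css
p  { line-height: 1.5; }  /* at a 16px font-size, each line is 24px tall */
h1 { line-height: 1.2; }  /* short, large headings can afford to be tighter */
```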
Fonts can’t dance, but they still have rhythm
Rhythm describes how text flows vertically on a page. Much like music, some amount of consistency and structure usually leads to good "rhythm." Like most design techniques, rhythm isn’t an exact science but there are some sensible guidelines that give good results.
One school of thought prescribes the use of paragraph line height as a baseline unit from which all other spacing is derived. This includes gaps between paragraphs and headings, and padding between the text and other page elements.
Under this system, you might set the line height of a heading to twice the line height of paragraphs, and leave a one-line gap between a heading and paragraph. Handily, you can use the same em units to define margins and padding, so there’s really no need to hard-code anything.
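Assuming a 16px base, one "line" of rhythm works out to 24px; a sketch of that system (the specific selectors and values are illustrative) could be:

```css
/* One baseline unit = paragraph line-height = 1.5 × 16px = 24px */
p  { font-size: 1em; line-height: 1.5; margin: 0 0 1.5em; } /* one-line gap below */

h2 { font-size: 2em;    /* 32px */
     line-height: 1.5;  /* 1.5 × 32px = 48px: twice the paragraph line height */
     margin: 0.75em 0; }/* 0.75 × 32px = 24px: one line above and below */
```

Because the margins are in em, they track the element’s own font-size, so nothing is hard-coded.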
Read more about rhythm here.
Horizontal Shape
Hopefully you’re convinced by now that vertical spacing is an important factor in improving legibility that we can control in CSS. Equally important is the horizontal spacing of individual characters and the overall width of content.
CSS can control the space between letters
Letter spacing is one of the most important factors affecting legibility. It is controlled by the CSS letter-spacing property, best expressed (again) in em units to keep everything relative. Tracking (the typography term for "letter spacing") depends entirely on the font — some fonts will look great by default, while others might need a little tweaking.
In general, you only need to worry about letter spacing for text elements that are particularly big or small because most fonts are spaced well at a typical paragraph size.
For larger text, like headings and titles, you’ll often want to reduce the space between letters. And a little bit of space goes a long way. For example, -0.02em is a tiny decimal, but a good starting point, which can be tweaked until it looks just right to your eye. The only time you should think about increasing letter spacing is when dealing with stylistically unusual text — things like all-capped titles and even some number sequences.
Adding or subtracting as little as 0.02em can refine the appearance of words
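In CSS, that might look like the following (the class name is made up for illustration, and the exact values should be tuned by eye):

```css
h1 { letter-spacing: -0.02em; }  /* tighten large display text slightly */

/* all-caps labels usually benefit from a little extra tracking */
.caps-label {
  text-transform: uppercase;
  letter-spacing: 0.05em;
}
```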
Some specific pairs of letters, like AV, can look awkwardly spaced without a little manual tweaking. Well-crafted fonts typically specify custom spacing for such pairs, and setting the font-kerning property to normal lets you enable this. Browsers disable this on smaller text by default. Here is a little more on letter spacing, as well as other CSS tools we have to control font spacing in general.
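Enabling the kerning data built into the font is a one-liner; a minimal sketch:

```css
body { font-kerning: normal; } /* honor kerning pairs (e.g. AV) defined in the font */
```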
The length of a line of text is more important than you might think
It’s unpleasant for our eyes to move long distances while reading, so the width of lines should be deliberate. General consensus is that a good width is somewhere between 60 and 70 characters per line. If you find that a line of text (especially for long-form content) drastically falls outside this range, then start adjusting.
The ch is a little-known CSS unit that we didn’t cover earlier, but can be helpful to keep line length in check. It’s a relative unit, defined as the width of the 0 character in the element’s font. Because narrow characters like l and i are relatively frequent, setting the width of a text container to something like 50ch should result in lines that are 60-70 characters long.
p { width: 50ch; }
CSS can smooth otherwise crispy fonts
-webkit-font-smoothing is a nifty CSS property that controls how fonts are anti-aliased. This is a fancy way of saying it can draw shades of gray around otherwise square pixels to create a smoother appearance. Do note, however, the -webkit prefix there, because that indicates that the property is only supported by WebKit browsers, e.g. Safari.
The default setting is subpixel-antialiased, but it’s worth seeing what your type looks like when it’s set to just antialiased. It can improve the way many fonts look, especially for text on non-white backgrounds.
At small font sizes, this should be used with caution — it lowers contrast, affecting readability. You should also make sure to adjust letter-spacing when using this, since greater anti-aliasing will increase the space between letters.
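A hedged example (the class name is hypothetical, and the property is non-standard, so treat it as progressive enhancement):

```css
/* Firefox on macOS has an analogous -moz-osx-font-smoothing property */
.on-dark {
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}
```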
Anti-aliased (left) Subpixel Anti-aliased (right)
Wrapping up
Phew! We covered a lot of ground in a relatively short amount of space. Still, this is by no means an exhaustive guide, but rather, something that I hope will encourage you to take more control over the typography in your future projects, and to seek an even deeper knowledge of the topic. For that, I’ve compiled a list of additional resources to help you level up from here.
Learning about typography
The Elements of Typographic Style Applied to the Web
Butterick’s Practical Typography
Better Web Type
Typography Inspiration
Typewolf
Fonts in Use
Awwwards
Identifying Fonts
WhatFont
Identifont
WhatTheFont
Typography in CSS
CSS Reference
CSS-Tricks Guide to Typography
The post Typography for Developers appeared first on CSS-Tricks.
Introduction
We are at an impasse. The current politics of trans liberation have staked their claims on a redemptive understanding of identity. Whether through a doctor or psychologist’s diagnosis, or through a personal self-affirmation in the form of a social utterance, we have come to believe that there is some internal truth to gender that we must divine. An endless set of positive political projects have marked the road we currently travel; an infinite set of pronouns, pride flags, and labels. The current movement within trans politics has sought to try to broaden gender categories, in the hope that we can alleviate their harm. This is naive. Judith Butler refers to gender as, “the apparatus by which the production and normalization of masculine and feminine take place along with the interstitial forms of hormonal, chromosomal, psychic, and performative that gender assumes.” If the current liberal politics of our trans comrades and siblings are rooted in trying to expand the social dimensions created by this apparatus, our work is a demand to see it burned to the ground. We are radicals who have had enough of attempts to salvage gender. We do not believe we can make it work for us. We look at the transmisogyny we have faced in our own lives, the gendered violence that our comrades, both trans and cis, have faced, and we realize that the apparatus itself makes such violence inevitable. We have had enough. We are not looking to create a better system, for we are not interested in positive politics at all. All we demand in the present is a relentless attack on gender and the modes of social meaning and intelligibility it creates. At the core of this Gender Nihilism lie several principles that will be explored in detail here: antihumanism as foundation and cornerstone, gender abolition as a demand, and radical negativity as method.
Antihumanism
Antihumanism is a cornerstone which holds gender nihilist analysis together.
It is the point from which we begin to understand our present situation; it is crucial. By antihumanism, we mean a rejection of essentialism. There is no essential human. There is no human nature. There is no transcendent self. To be a subject is not to share in common a metaphysical state of being (ontology) with other subjects. The self, the subject, is a product of power. The “I” in “I am a man” or “I am a woman” is not an “I” which transcends those statements. Those statements do not reveal a truth about the “I,” rather they constitute the “I.” Man and Woman do not exist as labels for certain metaphysical or essential categories of being, they are rather discursive, social, and linguistic symbols which are historically contingent. They evolve and change over time; their implications have always been determined by power. Who we are, the very core of our being, might perhaps not be found in the categorical realm of being at all. The self is a convergence of power and discourses. Every word you use to define yourself, every category of identity within which you find yourself placed, is the result of a historical development of power. Gender, race, sexuality, and every other normative category is not referencing a truth about the body of the subject or about the soul of the subject. These categories construct the subject and the self. There is no static self, no consistent “I”, no history-transcending subject. We can only refer to a self with the language given to us, and that language has radically fluctuated throughout history, and continues to fluctuate in our day-to-day life. We are nothing but the convergence of many different discourses and languages which are utterly beyond our control, yet we experience the sensation of agency. We navigate these discourses, occasionally subverting, always surviving.
The ability to navigate does not indicate a metaphysical self which acts upon a sense of agency, it only indicates that there is symbolic and discursive looseness surrounding our constitution. We thus understand gender through these terms. We see gender as a specific set of discourses embodied in medicine, psychiatry, the social sciences, religion, and our daily interactions with others. We do not see gender as a feature of our “true selves,” but as a whole order of meaning and intelligibility which we find ourselves operating in. We do not look at gender as a thing which a stable self can be said to possess. On the contrary we say that gender is done and participated in, and that this doing is a creative act by which the self is constructed and given social significance and meaning. Our radicalism cannot stop here, we further state that historical evidence can be provided to show that gender operates in such a manner. The work of many decolonial feminists has been influential in demonstrating the ways that western gender categories were violently forced onto indigenous societies, and how this required a complete linguistic and discursive shift. Colonialism produced new gender categories, and with them new violent means of reinforcing a certain set of gendered norms. The visual and cultural aspects of masculinity and femininity have changed over the centuries. There is no static gender. There is a practical component to all of this. The question of humanism vs antihumanism is the question upon which the debate between liberal feminism and nihilist gender abolitionism will be based. The liberal feminist says “I am a woman” and by that means that they are spiritually, ontologically, metaphysically, genetically, or any other modes of “essentially” a woman. The gender nihilist says “I am a woman” and means that they are located within a certain position in a matrix of power which constitutes them as such. 
The liberal feminist is not aware of the ways power creates gender, and thus clings to gender as a means of legitimizing themselves in the eyes of power. They rely on trying to use various systems of knowledge (genetic sciences, metaphysical claims about the soul, Kantian ontology) in order to prove to power they can operate within it. The gender nihilist, the gender abolitionist, looks at the system of gender itself and sees the violence at its core. We say no to a positive embrace of gender. We want to see it gone. We know appealing to the current formulations of power is always a liberal trap. We refuse to legitimize ourselves. It is imperative that this be understood. Antihumanism does not deny the lived experience of many of our trans siblings who have had an experience of gender since a young age. Rather we acknowledge that such an experience of gender was always already determined through the terms of power. We look to our own childhood experiences. We see that even in the transgressive statement of “We are women” wherein we deny the category power has imposed onto our bodies, we speak the language of gender. We reference an idea of “woman” which does not exist within us as a stable truth, but references the discourses by which we are constituted. Thus we affirm that there is no true self that can be divined prior to discourse, prior to encounters with others, prior to the mediation of the symbolic. We are products of power, so what are we to do? So we end our exploration of antihumanism with a return to the words of Butler: “My agency does not consist in denying this condition of my constitution. If I have any agency, it is opened up by the fact that I am constituted by a social world I never chose. That my agency is riven with paradox does not mean it is impossible.
It means only that paradox is the condition of its possibility.”
Gender Abolition
If we accept that gender is not to be found within ourselves as a transcendent truth, but rather exists outside us in the realm of discourse, what are we to strive for? To say gender is discursive is to say that gender occurs not as a metaphysical truth within the subject, but occurs as a means of mediating social interaction. Gender is a frame, a subset of language, and a set of symbols and signs, communicated between us, constructing us and being reconstructed by us constantly. Thus the apparatus of gender operates cyclically; as we are constituted through it, so too do our daily actions, rituals, norms, and performances reconstitute it. It is this realization which allows for a movement against the cycle itself to manifest. Such a movement must understand the deeply penetrative and pervasive nature of the apparatus. Normalization has an insidious way of naturalizing, accounting for, and subsuming resistance. At this point it becomes tempting to embrace a certain liberal politics of expansion. Countless theorists and activists have laid stake to the claim that our experience of transgender embodiment might be able to pose a threat to the process of normalization that is gender. We have heard the suggestion that non-binary identity, trans identity, and queer identity might be able to create a subversion of gender. This cannot be the case. In staking our claim on identity labels of non-binary, we find ourselves always again caught back in the realm of gender. To take on identity in a rejection of the gender binary is still to accept the binary as a point of reference. In the resistance to it, one only reconstructs the normative status of the binary. Norms have already accounted for dissent; they lay the frameworks and languages through which dissent can be expressed.
It is not merely that our verbal dissent occurs in the language of gender, but that the actions we take to subvert gender in dress and affect are themselves only subversive through their reference to the norm. If an identity politics of non-binary identity cannot liberate us, it is also true that a queer or trans identity politics offers us no hope. Both fall into the same trap of referencing the norm by trying to “do” gender differently. The very basis of such politics is grounded in the logic of identity, which is itself a product of modern and contemporary discourses of power. As we have already shown quite thoroughly, there can be no stable identity which we can reference. Thus any appeal to a revolutionary or emancipatory identity is only an appeal to certain discourses. In this case, that discourse is gender. This is not to say that those who identify as trans, queer, or non-binary are at fault for gender. This is the mistake of the traditional radical feminist approach. We repudiate such claims, as they merely attack those most hurt by gender. Even if deviation from the norm is always accounted for and neutralized, it sure as hell is still punished. The queer, the trans, the non-binary body is still the site of massive violence. Our siblings and comrades still are murdered all around us, still live in poverty, still live in the shadows. We do not denounce them, for that would be to denounce ourselves. Instead we call for an honest discussion about the limits of our politics and a demand for a new way forward. With this attitude at the forefront, it is not merely certain formulations of identity politics which we seek to combat, but the need for identity altogether. Our claim is that the ever-expanding list of personal preferred pronouns, the growing and ever more nuanced labels for various expressions of sexuality and gender, and the attempt to construct new identity categories more broadly is not worth the effort.
If we have shown that identity is not a truth but a social and discursive construction, we can then realize that the creation of these new identities is not the sudden discovery of previously unknown lived experience, but rather the creation of new terms upon which we can be constituted. All we do when we expand gender categories is to create new, more nuanced channels through which power can operate. We do not liberate ourselves, we ensnare ourselves in countless and even more nuanced and powerful norms. Each one a new chain. To use this terminology is not hyperbolic; the violence of gender cannot be overstated. Each trans woman murdered, each intersex infant coercively operated on, each queer kid thrown onto the streets is a victim of gender. The deviance from the norm is always punished. Even though gender has accounted for deviation, it still punishes it. Expansion of norms is an expansion of deviance; it is an expansion of ways we can fall outside a discursive ideal. Infinite gender identities create infinite new spaces of deviation which will be violently punished. Gender must punish deviance, thus gender must go. And thus we arrive at the need for the abolition of gender. If all of our attempts at positive projects of expansion have fallen short and only snared us in a new set of traps, then there must be another approach. That the expansion of gender has failed does not imply that contraction would serve our purposes. Such an impulse is purely reactionary and must be done away with. The reactionary radical feminist sees gender abolition as such a contraction. For them, we must abolish gender so that sex (the physical characteristics of the body) can be a stable material basis upon which we can be grouped. We reject this wholeheartedly. Sex itself is grounded in discursive groupings, given an authority through medicine, and violently imposed onto the bodies of intersex individuals. We decry this violence.
No, a return to a simpler and smaller understanding of gender (even a supposedly material conception) will not do. It is the very normative grouping of bodies in the first place which we push back against. Neither contraction nor expansion will save us. Our only path is that of destruction.
Radical Negativity
At the heart of our gender abolition is a negativity. We seek not to abolish gender so that a true self can be returned to; there is no such self. It is not as though the abolition of gender will free us to exist as true or genuine selves, freed from certain norms. Such a conclusion would be at odds with the entirety of our antihumanist claims. And thus we must take a leap into the void. A moment of lucid clarity is required here. If what we are is a product of discourses of power, and we seek to abolish and destroy those discourses, we are taking the greatest risk possible. We are diving into an unknown. The very terms, symbols, ideas, and realities by which we have been shaped and created will burn in flames, and we cannot know or predict what we will be when we come out the other side. This is why we must embrace an attitude of radical negativity. All the previous attempts at positive and expansionist gender politics have failed us. We must cease to presume a knowledge of what liberation or emancipation might look like, for those ideas are themselves grounded upon an idea of the self which cannot stand up to scrutiny; it is an idea which for the longest time has been used to limit our horizons. Only pure rejection, the move away from any sort of knowable or intelligible future, can allow us the possibility for a future at all. While this risk is a powerful one, it is necessary. Yet in plunging into the unknown, we enter the waters of unintelligibility. These waters are not without their dangers, and there is a real possibility of a radical loss of self. The very terms by which we recognize each other may be dissolved.
But there is no other way out of this dilemma. We are daily being attacked by a process of normalization that codes us as deviant. If we do not lose ourselves in the movement of negativity, we will be destroyed by the status quo. We have only one option, risks be damned. This powerfully captures the predicament that we are in at this moment. While the risk of embracing negativity is high, we know the alternative will destroy us. If we lose ourselves in the process, we have merely suffered the same fate we would have otherwise. Thus it is with reckless abandon that we refuse to postulate about what a future might hold, and what we might be within that future. A rejection of meaning, a rejection of known possibility, a rejection of being itself. Nihilism. That is our stance and method. Relentless critique of positive gender politics is thus a starting point, but one which must occur cautiously. For if we are to criticize their own normative underpinnings in favor of an alternative, we only fall prey once again to the neutralizing power of normalization. Thus we answer the demand for a clearly stated alternative and for a program of actions to be taken with a resolute “no.” The days of manifestos and platforms are over. The negation of all things, ourselves included, is the only means through which we will ever be able to gain anything
Gender Nihilism - Alyson Escalante
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re eCommerce, ensure that your refund policy and customer service information is clearly accessible.
Make sure your site is secure (HTTPS).
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict known facts or the scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (i.e. PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you could use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than your competitors’, then you’ll be one step ahead.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you use UTM tracking parameters, then you can update the tracking so that you can attribute it correctly to organic.
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, reviews and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plug in other data such as log file data, Google Analytics, XML sitemaps and backlinks to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
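As a sketch of the custom-extraction idea above: assuming a competitor's product page marks recommendations up with a (hypothetical) `recommended-item` class, a crawler's custom extraction rule, or a few lines of Python, can pull out names and prices for comparison:

```python
import re

# Hypothetical HTML from a competitor product page; a real site needs its
# own selectors (XPath/CSS in your crawler, or a proper HTML parser).
html = """
<div class="recommended-item"><span class="name">Widget Pro</span><span class="price">£24.99</span></div>
<div class="recommended-item"><span class="name">Widget Case</span><span class="price">£9.99</span></div>
"""

pattern = re.compile(
    r'class="recommended-item">'
    r'<span class="name">(?P<name>[^<]+)</span>'
    r'<span class="price">£(?P<price>[\d.]+)</span>'
)

# Extract (product, price) pairs from the recommendation widgets
recommendations = [(m["name"], float(m["price"])) for m in pattern.finditer(html)]
print(recommendations)  # → [('Widget Pro', 24.99), ('Widget Case', 9.99)]
```

From here you can compare recommended-item prices against the main product, exactly as suggested above.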
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behaviour tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist, by creating scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: The reason that influencer marketing works is due to the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by incorporating two models to model clothing, to increase the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause. For example, Pampers has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proof: It’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase its products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverabilty powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
Merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow for the independence of teams when it’s necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (i.e. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script to do any repeatable task (i.e. reporting or keyword research) then you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use regex. This gives a huge advantage over Excel because it is far more efficient at accounting for things like misspellings - giving you, for example, a far more accurate way to capture branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
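For instance, a single pattern can bucket branded query permutations, misspellings included, in a way that Excel formulas struggle with (the brand and misspellings below are invented for illustration):

```python
import re

# A deliberately forgiving pattern for a hypothetical brand "distilled":
# [ei] catches a common vowel swap, [il]+ catches dropped/doubled letters.
branded = re.compile(r"d[ei]st[il]+ed", re.IGNORECASE)

queries = ["distilled searchlove", "Destilled seo", "distiled conference", "seo split testing"]
labels = {q: ("branded" if branded.search(q) else "non-branded") for q in queries}
print(labels)
```

Run over a full keyword export in a notebook, this kind of classification is both repeatable and auditable, which is exactly Robin's point.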
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz uses the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX; however, bear in mind that this often presents a host of SEO difficulties. Don’t rely on the mobile-friendly testing tool to see if Google is able to crawl your JavaScript - this tool actually uses different software to Googlebot (a common misconception!) The URL inspection tool is a bit better for checking this, but bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so it isn’t 100% accurate either.
SearchLove London 2019 - Jes Scholz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating the value in ranking for a particular query without necessarily getting traffic.
With recent changes to Google’s search appearance, ads have become more seamless in the SERP. This has led to paid click-through rate rising a lot. However, if history is any guide, it will probably decline slowly until the next big change in search.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split-testing SEO changes allows us to say with confidence whether a specific change helps or hurts organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to:
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers causes a particular test to be positive or negative is challenging: because they all impact each other, the data is noisy. Measuring organic sessions relieves us of this noise.
Following “best practices” or what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding quantity of products in your titles or structured data for breadcrumbs might actually negatively impact your SEO, even if it seems like everyone else is doing so.
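Real split-testing platforms compare the test group against a forecast counterfactual, but the core question - "is this uplift bigger than chance?" - can be sketched with a simple permutation test. The session counts below are invented, and this is a toy illustration, not the methodology from Emily's talk:

```python
import random
import statistics

# Hypothetical daily organic sessions for pages in the variant vs control buckets
variant = [132, 145, 129, 150, 148, 139, 155, 142]
control = [120, 131, 118, 127, 125, 122, 130, 124]

observed = statistics.mean(variant) - statistics.mean(control)

# Permutation test: how often does a random relabelling of the pages
# produce an uplift at least as large as the one we observed?
random.seed(42)
pooled = variant + control
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(variant)]) - statistics.mean(pooled[len(variant):])
    if diff >= observed:
        hits += 1

p_value = hits / trials
print(f"uplift: {observed:.1f} sessions/day, p ~ {p_value:.4f}")
```

A small p-value suggests the uplift is unlikely to be noise; a large one means you cannot distinguish the change from random variation.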
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlights how “average data gives you average insights”, and discusses the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the SERP displayed meta description 84% of the time (it thinks it’s smarter than us!) However, we can use this rewrite data to our advantage.
The best ways to get SERP data are crawling SERPs in Screaming Frog, using the scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse it.
Scraping SERPs, product reviews, comments, or Reddit forums can be really powerful: it gives you a data source that reveals what your customers want. You can then optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
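Once the rewritten descriptions are scraped, the analysis step can be as simple as counting which words Google chooses to surface. A sketch on invented snippets:

```python
import re
from collections import Counter

# Hypothetical rewritten meta descriptions scraped from a SERP
snippets = [
    "Compare lightweight running shoes with free next-day delivery.",
    "Lightweight, cushioned running shoes for beginners. Free returns.",
    "Best running shoes of 2019: lightweight picks for every budget.",
]

STOPWORDS = {"with", "for", "of", "the", "and", "a"}
words = Counter(
    w for s in snippets for w in re.findall(r"[a-z]+", s.lower()) if w not in STOPWORDS
)
print(words.most_common(3))  # 'lightweight', 'running' and 'shoes' dominate
```

The terms Google keeps surfacing are a strong hint about the language and intent to build into your own titles, descriptions and CTAs.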
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the web dev roles at your company, it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language that they’re using. Do some research into the terminology as well as possible limitations of your ask. This will make you more credible and you’re more likely to be taken seriously.
A team of developers may have different KPIs than you. It may be beneficial to use something like revenue as a way to get them on board with the change you want to make.
Try to make every ask more collaborative rather than instructive. For example, instead of simply presenting “insert this code,” try “here’s some example code, maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams is always a good idea. It will give them the most information on what needs to get done in what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight on how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs as well as how to analyse them effectively to reveal big wins that you may have otherwise been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when you’re analysing log files:
Crawl behaviour: look at the most and least crawled URLs, and at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted navigation, query parameters, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter for log file analysis is great because the analysis becomes a repeatable script you can run again and again.
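The crawl-behaviour part of that analysis boils down to a few lines once the logs are parsed. A sketch for combined-log-format lines (sample lines invented; note that in production you should verify Googlebot via reverse DNS, since the user-agent string can be spoofed):

```python
import re
from collections import Counter

# One regex for Apache/nginx combined log format
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

sample_logs = [
    '66.249.66.1 - - [14/Oct/2019:10:00:00 +0000] "GET /widgets HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [14/Oct/2019:10:00:02 +0000] "GET /widgets?sort=price HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [14/Oct/2019:10:00:03 +0000] "GET /widgets HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

# Count which URLs Googlebot actually requests
crawled = Counter()
for line in sample_logs:
    m = LOG_LINE.match(line)
    if m and "Googlebot" in m["ua"]:
        crawled[m["path"]] += 1

print(crawled.most_common())
```

Parameterised URLs climbing this list (like `?sort=price` above) are exactly the budget-waste signal described in the second bullet.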
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Myers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Myers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
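The formula is trivial to apply to an exported keyword set. A sketch using invented keywords and RankVol exactly as defined above:

```python
import math

def rank_vol(rank: int, volume: int) -> float:
    """Dr Pete's RankVol: 1 / (rank * sqrt(volume))."""
    return 1 / (rank * math.sqrt(volume))

# (keyword, current rank, monthly search volume) - sample data
keywords = [
    ("sofas", 1, 90_000),
    ("john lewis sofa", 3, 12_000),
    ("mid century sofa uk", 2, 700),
]

# Sort by RankVol instead of raw volume
for kw, rank, vol in sorted(keywords, key=lambda k: rank_vol(k[1], k[2]), reverse=True):
    print(f"{kw}: {rank_vol(rank, vol):.5f}")
```

Note how the well-ranking, lower-volume keyword floats to the top, which is the point: raw volume alone would bury it.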
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are mis-spellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ is on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
SearchLove London 2019 - Dr. Pete Meyers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (subfolder? subdomain? ccTLD?), bear in mind that there are no one-size-fits-all solutions. It may be best to have a mixture of solutions, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
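Lindsay links to a specific script in her deck; purely as an illustration of the output format (URLs hypothetical), generating the hreflang XML from a mapping takes only a few lines. Note that every locale's `<url>` entry must list all alternates, itself included:

```python
from xml.etree import ElementTree as ET

# Hypothetical hreflang mapping for one page across three locales
HREFLANG_MAP = {
    "en-gb": "https://example.com/uk/widgets",
    "en-us": "https://example.com/us/widgets",
    "de-de": "https://example.com/de/widgets",
}

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"
XHTML = "http://www.w3.org/1999/xhtml"

urlset = ET.Element(f"{{{SM}}}urlset")
for url in HREFLANG_MAP.values():
    node = ET.SubElement(urlset, f"{{{SM}}}url")
    ET.SubElement(node, f"{{{SM}}}loc").text = url
    # every variant references every alternate, itself included
    for lang, href in HREFLANG_MAP.items():
        ET.SubElement(node, f"{{{XHTML}}}link",
                      rel="alternate", hreflang=lang, href=href)

xml_out = ET.tostring(urlset, encoding="unicode")
print(xml_out[:120] + "...")
```

Feeding a script like this from a Google Sheet (as Lindsay suggests) keeps the reciprocal links consistent, which is where manual hreflang implementations usually break.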
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, and transactional. However, it’s often unclear where things are categorised because sometimes keywords are really ambiguous. Often you can break these categories down into more specific categories.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational keyword with a transactional result, you’re not going to rank. This can be an opportunity for you to create the kind of page that will rank for a select query. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional result.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade. For example, if you set a specific directive to disallow Googlebot from /example, that is the only group it will follow. Even if you specify elsewhere in the file that * (all user agents) is disallowed from /dont-crawl, Googlebot will follow only its own directive not to crawl /example, and will still be able to crawl /dont-crawl.
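The /example vs /dont-crawl scenario can be checked directly. Python's stdlib parser happens to apply the same "a matching specific group wins, groups don't combine" behaviour for this case, though, as Will notes, real-world parsers disagree on edge cases, so test against Google's own tools too:

```python
import urllib.robotparser

ROBOTS = """\
User-agent: *
Disallow: /dont-crawl

User-agent: Googlebot
Disallow: /example
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Googlebot matches its own group, so the * group is ignored entirely for it
print(rp.can_fetch("Googlebot", "/example"))     # False - its own group disallows this
print(rp.can_fetch("Googlebot", "/dont-crawl"))  # True - the * rule does not cascade
print(rp.can_fetch("OtherBot", "/dont-crawl"))   # False - falls back to the * group
```

If you intended Googlebot to obey both rules, the /dont-crawl disallow must be repeated inside the Googlebot group.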
The Google documentation, robots.txt checker in GSC, and the open source parser tend to disagree on what is allowed and disallowed. So, you’ll need to do some testing to ensure that the directives you’re setting are what you intended.
We often have a lot of intuition about how things like pagerank work, but too many of our recommendations are based on misconceptions about how authority flows
There are some huge changes coming to major browser cookie handling. The cookie window will be shorter, which means that a lot of traffic that’s currently classified as organic will be classified as direct. Understanding the language around the changes that are happening is, and will be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
Source: https://www.distilled.net/resources/searchlove-london-2019-round-up/
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re eCommerce, ensure that your refund policy and customer service information is clearly accessible.
Make sure your site is secure (HTTPS)
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict any known facts or the scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (i.e. PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you can use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than competitors, then you’ll be one step ahead of competitors.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you use UTM tracking parameters, then you can update the tracking so that you can attribute it correctly to organic.
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, review and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plugin other data such as log file data, Google Analytics, XML sitemaps and backlinks to try to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behavior tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist, by creating scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: influencer marketing works because of the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by incorporating two models to model clothing, to increase the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause, for example, Pampers who has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proof: it’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase its products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverability powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
Merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow teams the independence necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (i.e. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script to do any repeatable task (i.e. reporting or keyword research) then you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use regex. This is a huge advantage over Excel because it is far more efficient at accounting for misspellings, giving you, for example, a far more accurate way to capture branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
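As a toy illustration of the regex point, a single pattern can capture a brand's misspellings and permutations in one pass. The brand and the misspellings below are invented for the example:

```python
import re

# One pattern covering common permutations of a hypothetical brand,
# "distilled" (the variant spellings are made up for illustration).
BRAND = re.compile(r"\bd[ei]st[ia]ll?ed\b", re.IGNORECASE)

queries = [
    "distilled seo conference",
    "destilled searchlove",
    "distiled london tickets",
    "water distillation kit",   # not the brand - should be excluded
]
branded = [q for q in queries if BRAND.search(q)]
print(branded)  # the first three queries
```

Doing the same classification in Excel would take a pile of nested `IF`/`SEARCH` formulas, one per misspelling, which is exactly the fragility Robin was warning about.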
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz used the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX; however, bear in mind that it often presents a host of SEO difficulties. Make sure that you don’t rely on the mobile-friendly testing tool to see if Google is able to crawl your JavaScript - this tool actually uses different software to Googlebot (this is a common misconception!). The URL inspection tool is a bit better for checking this; however, bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so it isn’t 100% accurate.
SearchLove London 2019 - Jes Scholtz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating the value in ranking for a particular query without necessarily getting traffic.
With recent changes to Google’s search appearance, ads have become more seamless in the SERP. This has led to paid click-through rate rising a lot. However, if history is any guide, it will probably decline slowly until the next big change to search.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split-testing SEO changes allows us to say with confidence whether a specific change hurts or helps organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to:
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers causes a particular test to be positive or negative is challenging: because they all impact each other, the data is noisy. Measuring organic sessions relieves us of this noise.
Following “best practices” or what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding quantity of products in your titles or structured data for breadcrumbs might actually negatively impact your SEO, even if it seems like everyone else is doing so.
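A core mechanic behind SEO split tests is bucketing a group of similar pages (templates, not users) into stable control and variant sets. This is a minimal sketch of that idea, not Distilled's actual tooling; the deterministic hash just keeps the assignment stable between crawls and deploys:

```python
import hashlib

def bucket(url: str) -> str:
    """Deterministically assign a page to control or variant.

    SEO split tests are run across sets of similar pages rather than
    users, so the assignment must not change between visits or deploys.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

# Hypothetical section of similar templated pages:
pages = [f"/product/{i}" for i in range(1000)]
variant = [p for p in pages if bucket(p) == "variant"]
print(len(variant))  # roughly half of the pages
```

The change under test is then applied only to the variant pages, and organic sessions for the two groups are compared over time.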
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlights how “average data gives you average insights”, and discusses the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the meta description displayed in the SERP 84% of the time (it thinks it’s smarter than us!). However, we can use this rewrite data to our advantage.
The best ways to get SERP data are crawling SERPs in Screaming Frog, the scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse the results.
Scraping of SERPs, product reviews, comments, or reddit forums can be really powerful in that it will give you a data source that can reveal insight about what your customers want. Then you can optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
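As a small sketch of that last point, once you have scraped the descriptions Google chooses to display, a simple word count surfaces the language it favours for a query. The descriptions below are invented stand-ins for real scraped SERP data:

```python
import re
from collections import Counter

# Invented meta descriptions as Google might rewrite them for one query;
# in practice you'd load the output of a SERP scrape here.
descriptions = [
    "Free next-day delivery on all ballet shoes. Shop the full range.",
    "Compare the best ballet shoes for beginners. Free returns.",
    "Ballet shoes from top brands. Free delivery and easy returns.",
]

STOPWORDS = {"the", "on", "all", "for", "and", "from"}
words = Counter(
    w for d in descriptions
    for w in re.findall(r"[a-z']+", d.lower())
    if w not in STOPWORDS
)
print(words.most_common(3))
```

Recurring terms like these ("free", "returns", "delivery") are candidates to fold back into your own titles, descriptions and CTAs.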
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the Web Dev roles at your company, then it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language that they’re using. Do some research into the terminology as well as possible limitations of your ask. This will make you more credible and you’re more likely to be taken seriously.
A team of developers may have different KPIs than you. It may be beneficial to use something like revenue as a way to get them on board with the change you want to make.
Try to make every ask more collaborative rather than instructive. For example, instead of simply presenting “insert this code,” try “here’s some example code, maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams is always a good idea. It will give them the most information on what needs to get done in what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight into how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs, as well as how to analyse them effectively to reveal big wins that you may otherwise have been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when analysing log files are:
Crawl behavior: look at most and least crawled URLs, look at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted nav, query params, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter to do log file analysis is great because it’s reusable and you’ll be able to use it again and again.
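A minimal sketch of this kind of analysis in a notebook: parse combined-format access log lines, keep the Googlebot requests, and count hits per URL. The log lines and IPs below are invented, and in production you would also verify Googlebot by reverse DNS rather than trusting the user-agent string:

```python
import re
from collections import Counter

# Invented lines in common Apache/nginx combined log format.
LOG = """\
66.249.66.1 - - [14/Oct/2019:10:00:01 +0000] "GET /products/widget HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.1 - - [14/Oct/2019:10:00:02 +0000] "GET /search?q=widget HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.5 - - [14/Oct/2019:10:00:03 +0000] "GET /products/widget HTTP/1.1" 200 512 "-" "Mozilla/5.0"
"""

LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}).*"(?P<ua>[^"]*)"$'
)

googlebot_hits = Counter()
for line in LOG.splitlines():
    m = LINE.search(line)
    if m and "Googlebot" in m.group("ua"):
        googlebot_hits[m.group("path")] += 1

print(googlebot_hits)  # which URLs Googlebot actually requests, and how often
```

From here the crawl-behaviour, budget-waste and site-health questions above become simple filters and group-bys over the same parsed records.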
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Meyers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Meyers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are mis-spellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ is on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
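Pete's RankVol formula from above is straightforward to compute and sort by; the keywords, ranks and volumes here are invented, and interpretation of the resulting scale is covered in his deck:

```python
import math

def rankvol(rank: int, volume: int) -> float:
    """RankVol as given in the talk: 1 / (rank * sqrt(volume))."""
    return 1 / (rank * math.sqrt(volume))

# Invented rows: (keyword, current rank, monthly search volume)
keywords = [
    ("sofas", 9, 110_000),
    ("john lewis sofas", 2, 1_900),
    ("grey corner sofa", 4, 720),
]

# Sort by the rank-weighted metric instead of raw search volume:
for kw, rank, vol in sorted(keywords, key=lambda r: rankvol(r[1], r[2]), reverse=True):
    print(f"{kw}: {rankvol(rank, vol):.5f}")
```

Sorting by raw volume would put "sofas" first; RankVol demotes it because the site barely ranks for it, which is exactly the relevance problem Pete described.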
SearchLove London 2019 - Dr. Pete Meyers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (sub-folder? subdomain? ccTLD?), bear in mind that there are no one-size-fits-all solutions. It may be best to have a mixture of solutions, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
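For reference, the XML-sitemap flavour of hreflang that Lindsay mentioned looks like this (the example.com URLs and locales are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en-gb/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/"/>
    <xhtml:link rel="alternate" hreflang="de-de" href="https://example.com/de-de/"/>
    <xhtml:link rel="alternate" hreflang="x-default" href="https://example.com/"/>
  </url>
</urlset>
```

Each international variant needs its own `<url>` entry carrying the same full set of alternate links: hreflang annotations must be reciprocal, which is why a spreadsheet-plus-script workflow for managing the mappings pays off.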
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, or transactional. However, it’s often unclear how a query should be categorised, because keywords can be really ambiguous. Often you can break these broad categories down into more specific ones.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational keyword with a transactional result, you’re not going to rank. This can be an opportunity for you to create the kind of page that will rank for a select query. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional result.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade: a crawler obeys only the most specific user-agent group that matches it. For example, if a Googlebot group disallows /example, that is the only group Googlebot will follow. Even if * (all user agents) is disallowed from /dont-crawl elsewhere in the file, Googlebot will honour only its own group’s directive not to crawl /example, and can still crawl /dont-crawl.
The Google documentation, robots.txt checker in GSC, and the open source parser tend to disagree on what is allowed and disallowed. So, you’ll need to do some testing to ensure that the directives you’re setting are what you intended.
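The non-cascading behaviour Will described can be sanity-checked with Python's standard-library parser (the paths here are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Groups don't cascade: a crawler obeys only the most specific
# user-agent group that matches it, not the * group as well.
ROBOTS = """\
User-agent: Googlebot
Disallow: /example

User-agent: *
Disallow: /dont-crawl
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Googlebot matches its own group, so the * rule doesn't apply to it:
print(rp.can_fetch("Googlebot", "https://example.com/dont-crawl"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/example"))     # False
# Other crawlers fall back to the * group:
print(rp.can_fetch("Bingbot", "https://example.com/dont-crawl"))    # False
```

Note that Python's parser is itself just one more implementation with its own quirks, which rather proves Will's point: test directives against the tools that matter to you before relying on them.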
We often have a lot of intuition about how things like pagerank work, but too many of our recommendations are based on misconceptions about how authority flows
There are some huge changes coming to major browser cookie handling. The cookie window will be shorter, which means that a lot of traffic that’s currently classified as organic will be classified as direct. Understanding the language around the changes that are happening is, and will be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
from Marketing https://www.distilled.net/resources/searchlove-london-2019-round-up/ via http://www.rssmix.com/
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re eCommerce, ensure that your refund policy and customer service information is clearly accessible.
Make sure your site is secure (HTTPS).
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict well-established facts or the scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (i.e. PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you could use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than your competitors’, you’ll be one step ahead.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you use UTM tracking parameters, then you can update the tracking so that you can attribute it correctly to organic.
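A quick sketch of that tracking fix: tag the website URL you give Google My Business so those visits are attributed to organic rather than direct. The parameter values below are a common convention, not a requirement:

```python
from urllib.parse import urlencode

# UTM-tag the GMB website link so its clicks stop reporting as direct.
params = {
    "utm_source": "google",
    "utm_medium": "organic",
    "utm_campaign": "gmb",
}
gmb_url = "https://example.com/?" + urlencode(params)
print(gmb_url)
# → https://example.com/?utm_source=google&utm_medium=organic&utm_campaign=gmb
```

Paste the tagged URL into the GMB website field; the clicks then land in your analytics with the source and medium you chose.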
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, review and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plugin other data such as log file data, Google Analytics, XML sitemaps and backlinks to try to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behavior tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist, by creating scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: The reason that influencer marketing works is due to the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by incorporating two models to model clothing, to increase the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause, for example, Pampers who has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proofing: It’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase their products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverabilty powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
Merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow for independence of teams when it’s necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (i.e. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script to do any repeatable task (i.e. reporting or keyword research) then you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use regex. This gives a huge advantage over Excel because it is far more efficient at accounting for misspellings. This, for example, gives you a far more accurate chance of capturing things like branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
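As a toy illustration of the regex point (the brand name and queries here are hypothetical, not from the talk), a single permissive pattern can classify branded queries that exact-match spreadsheet filters would miss:

```python
import re

# Hypothetical brand: one permissive pattern catches common misspellings
# ("destilled", "distiled"...) that exact-match spreadsheet formulas miss.
BRANDED = re.compile(r"d[ei]st[il]+ed", re.IGNORECASE)

queries = ["distilled seo", "Destilled searchlove", "seo conference london"]
branded = [q for q in queries if BRANDED.search(q)]
print(branded)  # the first two queries match despite the spelling variants
```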
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz uses the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX; however, bear in mind that it often presents a host of SEO difficulties. Don’t rely on the mobile-friendly testing tool to see whether Google can crawl your JavaScript - this tool actually uses different software to Googlebot (a common misconception!). The URL inspection tool is a bit better for checking this, but bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so it isn’t 100% accurate.
SearchLove London 2019 - Jes Scholz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating the value in ranking for a particular query without necessarily getting traffic.
With recent changes to Google’s search appearance, ads have become more seamless in the SERP. This has led to paid click-through-rate rising a lot. However, if history is any guide, it will probably slowly decline until the next big search change.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split testing SEO changes allows us to say with confidence whether or not a specific change hurts or helps organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers causes a particular test to be positive or negative is challenging because they all impact each other, which makes the data noisy. Measuring organic sessions relieves us of this noise.
Following “best practices” or what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding quantity of products in your titles or structured data for breadcrumbs might actually negatively impact your SEO, even if it seems like everyone else is doing so.
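As a deliberately simplified sketch of the idea (real SEO split tests, including Distilled’s, compare the variant bucket against a forecast counterfactual rather than raw group means, and all numbers here are made up):

```python
from statistics import mean

# Made-up daily organic sessions for the control and variant page buckets.
# Real SEO split tests forecast what the variant bucket *would* have done;
# this simplified sketch just compares the two group means.
control = [1040, 980, 1010, 995, 1025]
variant = [1100, 1070, 1090, 1060, 1115]

uplift = (mean(variant) - mean(control)) / mean(control)
print(f"Estimated uplift: {uplift:.1%}")
```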
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlighted how “average data gives you average insights” and discussed the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the meta description displayed in the SERP 84% of the time (it thinks it’s smarter than us!). However, we can use this rewrite data to our advantage.
The best ways to get SERP data are crawling SERPs in Screaming Frog, using the Scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse it.
Scraping of SERPs, product reviews, comments, or reddit forums can be really powerful in that it will give you a data source that can reveal insight about what your customers want. Then you can optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
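As a small sketch of the analysis step (the meta descriptions here are invented), a word-frequency count over scraped SERP copy quickly surfaces the language Google keeps choosing:

```python
import re
from collections import Counter

# Hypothetical scraped meta descriptions. A quick frequency count shows
# which terms keep appearing when Google rewrites descriptions.
descriptions = [
    "Free next day delivery on all ballet shoes. Shop the full range.",
    "Ballet shoes with free delivery and easy returns.",
]
stopwords = {"on", "all", "the", "and", "with"}
words = Counter(
    w
    for d in descriptions
    for w in re.findall(r"[a-z]+", d.lower())
    if w not in stopwords
)
print(words.most_common(3))
```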
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the Web Dev roles at your company, then it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language that they’re using. Do some research into the terminology as well as possible limitations of your ask. This will make you more credible and you’re more likely to be taken seriously.
A team of developers may have different KPIs than you. It may be beneficial to use something like revenue as a way to get them on board with the change you want to make.
Try to make every ask more collaborative rather than instructive. For example, instead of simply presenting “insert this code,” try “here’s some example code, maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams is always a good idea. It will give them the most information on what needs to get done in what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight on how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs as well as how to analyse them effectively to reveal big wins that you may have otherwise been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when you’re analysing log files:
Crawl behaviour: look at the most and least crawled URLs, and at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted nav, query params, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter to do log file analysis is great because it’s reusable and you’ll be able to use it again and again.
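A minimal sketch of that kind of notebook analysis might look like this - the regex assumes the common “combined” access-log format, so adapt it to your server’s configuration:

```python
import re
from collections import Counter

# Sketch of log-file analysis in a notebook. The regex assumes the common
# "combined" access-log format - adapt it to your server's configuration.
LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_hits(lines):
    """Count requests per URL path from user agents claiming to be Googlebot."""
    hits = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and "Googlebot" in m.group("ua"):
            hits[m.group("path")] += 1
    return hits

sample = ('66.249.66.1 - - [14/Oct/2019:10:00:00 +0000] '
          '"GET /shoes HTTP/1.1" 200 512 "-" '
          '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')
print(googlebot_hits([sample]))
```

(Note that a user agent string alone only *claims* to be Googlebot; verifying real Googlebot traffic needs a reverse-DNS check.)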
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Myers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Myers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
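A quick sketch of the metric in practice (the keywords, ranks and volumes below are made up):

```python
import math

def rankvol(rank, volume):
    # Dr Pete's metric: RankVol = 1 / (rank * sqrt(volume))
    return 1 / (rank * math.sqrt(volume))

# Made-up example rows: a huge head term ranking poorly vs. a modest
# branded term ranking #1.
keywords = [
    ("sofa", 40, 550_000),
    ("john lewis sofa", 1, 8_100),
]
scored = sorted(keywords, key=lambda k: rankvol(k[1], k[2]), reverse=True)
print(scored[0][0])  # the #1 branded term outranks the big head term
```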
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are mis-spellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ is on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
SearchLove London 2019 - Dr. Pete Myers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (subfolder? subdomain? ccTLD?), bear in mind that there are no one-size-fits-all solutions. It may be best to have a mixture of solutions, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, and transactional. However, it’s often unclear where things are categorised because sometimes keywords are really ambiguous. Often you can break these categories down into more specific categories.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational keyword with a transactional result, you’re not going to rank. This can be an opportunity for you to create the kind of page that will rank for a select query. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional result.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade. For example, if you set a specific directive disallowing Googlebot from /example, that is the only group of rules it will follow. Even if you specify elsewhere in the file that * (all user agents) are disallowed from /dont-crawl, Googlebot will only follow its own directive not to crawl /example and will still be able to crawl /dont-crawl.
The Google documentation, robots.txt checker in GSC, and the open source parser tend to disagree on what is allowed and disallowed. So, you’ll need to do some testing to ensure that the directives you’re setting are what you intended.
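Python’s standard-library parser applies the same group-matching rule, so the non-cascading behaviour can be sketched like this (paths borrowed from the example above; for Google’s exact behaviour, test against its open-source parser):

```python
from urllib import robotparser

# Two groups: a specific one for Googlebot and a generic one for all agents.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /example

User-agent: *
Disallow: /dont-crawl
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot matches its own group only - the * group does not also apply.
print(rp.can_fetch("Googlebot", "/example"))     # blocked by its own group
print(rp.can_fetch("Googlebot", "/dont-crawl"))  # allowed - no cascade
print(rp.can_fetch("Otherbot", "/dont-crawl"))   # blocked by the * group
```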
We often have a lot of intuition about how things like pagerank work, but too many of our recommendations are based on misconceptions about how authority flows
There are some huge changes coming to major browser cookie handling. The cookie window will be shorter, which means that a lot of traffic that’s currently classified as organic will be classified as direct. Understanding the language around the changes that are happening is, and will be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
from Digital https://www.distilled.net/resources/searchlove-london-2019-round-up/ via http://www.rssmix.com/
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re an eCommerce site, ensure that your refund policy and customer service information are clearly accessible.
Make sure your site is secure (HTTPS)
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict known facts or the scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (i.e. PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you could use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than your competitors’, then you’ll be one step ahead.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you use UTM tracking parameters, then you can update the tracking so that you can attribute it correctly to organic.
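A sketch of what that tagging might look like (example.com and the campaign name are placeholders; these parameter values are a common convention, not a requirement):

```python
from urllib.parse import urlencode

# Tag the website link in your GMB listing so those clicks report as
# organic in analytics instead of direct. The values here are a common
# convention - pick whatever fits your own tracking scheme.
params = {
    "utm_source": "google",
    "utm_medium": "organic",
    "utm_campaign": "gmb-listing",
}
url = "https://www.example.com/?" + urlencode(params)
print(url)
```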
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, reviews and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plugin other data such as log file data, Google Analytics, XML sitemaps and backlinks to try to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
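In practice you’d do this with a crawler’s custom extraction feature (XPath/CSS/regex), but the idea can be sketched with the standard library - the span/class markup here is an assumption about the target site, so inspect its HTML first:

```python
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Pull prices out of product listings, assuming <span class="price"> markup."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag being opened.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

extractor = PriceExtractor()
extractor.feed('<span class="price">£19.99</span><span class="title">Ballet flats</span>')
print(extractor.prices)  # ['£19.99']
```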
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behavior tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist, by creating scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: The reason that influencer marketing works is due to the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by incorporating two models to model clothing, to increase the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause, for example, Pampers who has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proofing: It’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase their products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverabilty powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
It’s not unlikely that merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow for independence of teams when it’s necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (e.g. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script for any repeatable task (e.g. reporting or keyword research), you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use regex. This gives a huge advantage over Excel because it is far more efficient at accounting for misspellings - giving you, for example, a far more accurate chance of catching things like branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
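To make the regex point concrete, here is a minimal sketch (the queries and the brand pattern are made up) of the kind of misspelling-tolerant matching that is painful in a spreadsheet but trivial in a notebook:

```python
import re

# Hypothetical search queries, e.g. exported from Search Console.
queries = [
    "distilled seo",
    "distiled searchlove",   # misspelling
    "distillled blog",       # misspelling
    "log file analysis",
]

# One regex catches the brand plus common misspellings - far terser than
# chained IF(ISNUMBER(SEARCH(...))) formulas in a spreadsheet.
brand = re.compile(r"d\w{0,2}st\w{0,2}l+ed", re.IGNORECASE)

branded = [q for q in queries if brand.search(q)]
print(branded)
```

The same pattern scales to hundreds of thousands of queries without the formula breakage a spreadsheet is prone to.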
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz uses the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX; however, bear in mind that it often presents a host of SEO difficulties. Don’t rely on the mobile-friendly testing tool to see whether Google can crawl your JavaScript - it actually uses different software to Googlebot (this is a common misconception!). The URL inspection tool is a bit better for checking this, but bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so it isn’t 100% accurate either.
SearchLove London 2019 - Jes Scholtz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating whether ranking for a particular query has value even without necessarily getting traffic from it.
With recent changes to Google’s search appearance, ads have become more seamless in the SERP. This has led to paid click-through rate rising a lot. However, if history is any guide, it will probably decline slowly until the next big search change.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split-testing SEO changes allows us to say with confidence whether a specific change hurts or helps organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers made a particular test positive or negative is challenging: because they all impact each other, the data is noisy. Measuring organic sessions instead relieves us of this noise.
Following “best practices” or copying what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding the quantity of products to your titles, or structured data for breadcrumbs, might actually hurt your SEO, even if it seems like everyone else is doing it.
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlights how “average data gives you average insights”, and discusses the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the meta description displayed in the SERP 84% of the time (it thinks it’s smarter than us!). However, we can use this rewrite data to our advantage.
The best ways to get SERP data are crawling the SERPs in Screaming Frog, the Scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse it.
Scraping SERPs, product reviews, comments, or Reddit threads can be really powerful: it gives you a data source that reveals insights about what your customers want. You can then optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
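As a tiny sketch of that idea, once descriptions are scraped you can surface the language Google favours with a few lines of Python (the SERP snippets below are made up):

```python
import re
from collections import Counter

# Hypothetical meta descriptions scraped from a SERP, e.g. via Screaming Frog
# or a SERP analysis tool.
descriptions = [
    "Free next day delivery on ballet shoes. Shop the best brands.",
    "Compare the best ballet shoes for beginners. Free returns.",
    "Best ballet shoes 2019 - expert reviews and buying guide.",
]

# Count words across snippets, skipping a few obvious stopwords.
words = Counter(
    w
    for d in descriptions
    for w in re.findall(r"[a-z']+", d.lower())
    if w not in {"the", "and", "on", "for", "a", "of"}
)
print(words.most_common(3))
```

Terms that dominate the snippets Google chooses to display ("best", "free", and so on) are candidates for your own titles, descriptions and CTAs.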
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti-Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the web dev roles at your company, it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language they’re using. Do some research into the terminology, as well as the possible limitations of your ask. This makes you more credible, and you’re more likely to be taken seriously.
Recognising that a team of developers may have different KPIs than you do. It may be beneficial to frame your request in terms of something like revenue to get them on board with the change you want to make.
Making every ask collaborative rather than instructive. For example, instead of simply presenting “insert this code”, try “here’s some example code - maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams. This gives them the most information on what needs to get done and on what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight into how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs, as well as how to analyse them effectively to reveal big wins that you may otherwise have been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when you’re analysing log files:
Crawl behaviour: look at the most and least crawled URLs, and at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted navigation, query parameters, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter to do log file analysis is great because it’s reusable and you’ll be able to use it again and again.
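A minimal sketch of this kind of analysis (the combined-format log lines are made up; a real notebook would read millions of lines from disk):

```python
import re
from collections import Counter

# Hypothetical Apache/Nginx combined-format access log lines.
log_lines = [
    '66.249.66.1 - - [14/Oct/2019:10:00:00 +0000] "GET /category/sofas HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [14/Oct/2019:10:00:05 +0000] "GET /category/sofas?sort=price HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [14/Oct/2019:10:00:07 +0000] "GET /category/sofas HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

# Pull the requested path out of each line.
request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP')

# Count Googlebot hits per URL - the starting point for spotting budget
# waste on query-parameter URLs and the like.
hits = Counter(
    request_re.search(line).group(1)
    for line in log_lines
    if "Googlebot" in line and request_re.search(line)
)
print(hits.most_common())
```

(In production you would also verify Googlebot by reverse DNS rather than trusting the user-agent string alone.)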
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Meyers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Meyers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are misspellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ is on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
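As a sketch, RankVol can be computed and used for sorting in a few lines (the keyword data here is made up):

```python
from math import sqrt

# Hypothetical (keyword, rank, monthly search volume) rows.
keywords = [
    ("john lewis", 1, 550000),
    ("sofas", 8, 74000),
    ("mens waterproof jackets", 3, 2400),
]

def rankvol(rank, volume):
    # Dr Pete's metric: RankVol = 1 / (rank x square root of volume).
    return 1 / (rank * sqrt(volume))

# Sorting by RankVol (descending) surfaces well-ranking, relevant terms
# that a raw volume sort would bury under huge brand queries.
ranked = sorted(keywords, key=lambda kw: rankvol(kw[1], kw[2]), reverse=True)
for kw, rank, vol in ranked:
    print(f"{kw}: {rankvol(rank, vol):.6f}")
```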
SearchLove London 2019 - Dr. Pete Meyers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (subfolder? subdomain? ccTLD?), bear in mind that there are no one-size-fits-all solutions. It may be best to have a mixture, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
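In the spirit of that Sheets-plus-script workflow, here is a minimal sketch of generating the hreflang annotations for an XML sitemap from a simple mapping (the URLs are hypothetical, and a real sitemap also needs its urlset wrapper and namespace declarations):

```python
# Hypothetical language -> URL mapping for one page across three markets.
page_versions = {
    "en-gb": "https://www.example.com/uk/",
    "en-us": "https://www.example.com/us/",
    "de-de": "https://www.example.de/",
}

def sitemap_entry(loc, versions):
    # Every version of the page must reference all versions, itself included -
    # missing return links are the classic hreflang relationship issue.
    lines = ["  <url>", f"    <loc>{loc}</loc>"]
    for lang, href in versions.items():
        lines.append(
            f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{href}"/>'
        )
    lines.append("  </url>")
    return "\n".join(lines)

xml = "\n".join(sitemap_entry(url, page_versions) for url in page_versions.values())
print(xml)
```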
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, and transactional. However, it’s often unclear where things are categorised because sometimes keywords are really ambiguous. Often you can break these categories down into more specific categories.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational keyword with a transactional result, you’re not going to rank. This can be an opportunity for you to create the kind of page that will rank for a select query. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional result.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
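The broad-to-specific categorisation step can be sketched as a simple modifier lookup (the modifier lists below are hypothetical - in practice you would refine them per vertical, and check ambiguous phrases against what actually ranks):

```python
# Hypothetical intent modifiers; "best" is treated as informational here
# because, as in the ballet shoes example, its SERPs tend to be guides.
INTENT_MODIFIERS = {
    "transactional": ["buy", "cheap", "price", "deal"],
    "informational": ["how", "what", "guide", "best"],
}

def classify(keyword):
    words = keyword.lower().split()
    for intent, modifiers in INTENT_MODIFIERS.items():
        if any(m in words for m in modifiers):
            return intent
    return "ambiguous"  # needs a manual look at the live SERP

print(classify("buy ballet shoes"))   # transactional
print(classify("best ballet shoes"))  # informational
print(classify("ballet shoes"))       # ambiguous
```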
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade. For example, if you set a specific directive disallowing Googlebot from /example, that is the only group it will follow. Even if you specify elsewhere in the file that * (all user agents) is disallowed from /dont-crawl, Googlebot will only obey its own directive not to crawl /example, and can still crawl /dont-crawl.
The Google documentation, the robots.txt checker in GSC, and the open-source parser tend to disagree on what is allowed and disallowed, so you’ll need to do some testing to ensure that the directives you’re setting do what you intended.
We often have a lot of intuition about how things like PageRank work, but too many of our recommendations are based on misconceptions about how authority flows.
There are some huge changes coming to major browser cookie handling. The cookie window will be shorter, which means that a lot of traffic that’s currently classified as organic will be classified as direct. Understanding the language around the changes that are happening is, and will be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
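The non-cascading robots.txt behaviour described above is easy to verify with Python’s standard-library parser (the rules and URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Two groups: a generic one and a Googlebot-specific one. Googlebot matches
# its own group only, so the * rules do not cascade down to it.
robots_txt = """
User-agent: *
Disallow: /dont-crawl

User-agent: Googlebot
Disallow: /example
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Blocked by its own group:
print(parser.can_fetch("Googlebot", "https://example.com/example"))     # False
# Still crawlable, because the * group doesn't apply to Googlebot:
print(parser.can_fetch("Googlebot", "https://example.com/dont-crawl"))  # True
# Other crawlers fall back to the * group:
print(parser.can_fetch("OtherBot", "https://example.com/dont-crawl"))   # False
```

As the talk notes, real parsers disagree on edge cases, so treat any single checker as indicative rather than definitive.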
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re an eCommerce site, ensure that your refund policy and customer service information are clearly accessible.
Make sure your site is secure (HTTPS)
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict known facts or scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (i.e. PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you could use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than your competitors’, you’ll be one step ahead.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you use UTM tracking parameters, you can update the tracking so that these visits are attributed correctly to organic.
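For example (using a placeholder domain and campaign name), the GMB website link could be tagged like this:

```python
from urllib.parse import urlencode

# Hypothetical tracking parameters for the Google My Business website field,
# so that GMB clicks show up as organic rather than direct in analytics.
params = {"utm_source": "google", "utm_medium": "organic", "utm_campaign": "gmb"}
url = "https://www.example.com/?" + urlencode(params)
print(url)
```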
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, review and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plug in other data such as log file data, Google Analytics, XML sitemaps and backlinks to try to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
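As a minimal illustration of custom extraction (the competitor HTML below is made up - a crawler like Screaming Frog would do this with CSS selectors or XPath instead):

```python
from html.parser import HTMLParser

# Hypothetical fragment of a competitor product page.
html = """
<div class="recommended">
  <span class="price">£24.99</span>
  <span class="price">£39.99</span>
</div>
"""

class PriceExtractor(HTMLParser):
    # Collects the text of every <span class="price"> element.
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

p = PriceExtractor()
p.feed(html)
print(p.prices)  # ['£24.99', '£39.99']
```

Run across a crawl of recommendation modules, this kind of extraction lets you compare prices and spot the products competitors push together.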
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behaviour tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist, by creating scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: The reason that influencer marketing works is due to the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by incorporating two models to model clothing, to increase the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause, for example, Pampers who has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proofing: It’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase their products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverabilty powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
It’s not unlikely that merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow for independence of teams when it’s necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (i.e. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script for any repeatable task (e.g. reporting or keyword research), you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use regex. This gives a huge advantage over Excel because it is far more efficient at accounting for misspellings. This gives you, for example, a far better chance of capturing things like branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
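Robin's regex point is easy to illustrate: one misspelling-tolerant pattern can classify branded queries that would take a tangle of Excel formulas. A minimal Python sketch - the brand name and the misspellings it tolerates are hypothetical:

```python
import re

# Misspelling-tolerant pattern for a hypothetical "distilled" brand:
# [ei]/[iy] cover common vowel slips, l+ covers dropped doubled letters.
BRAND_PATTERN = re.compile(r"d[ei]st[iy]l+ed", re.IGNORECASE)

def is_branded(query: str) -> bool:
    """Return True if the query looks like a branded search."""
    return bool(BRAND_PATTERN.search(query))

queries = [
    "distilled seo",
    "distiled searchlove",    # dropped letter
    "destilled agency",       # vowel slip
    "keyword research tips",  # non-brand
]
branded = [q for q in queries if is_branded(q)]
```

In a notebook you would run this over the full query export from Search Console rather than a toy list.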
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz uses the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX; however, bear in mind that it often presents a host of SEO difficulties. Make sure that you don’t rely on the mobile-friendly testing tool to see whether Google is able to crawl your JavaScript - this tool actually uses different software to Googlebot (a common misconception!). The URL inspection tool is a bit better for checking this, but bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so isn’t 100% accurate.
SearchLove London 2019 - Jes Scholz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating the value in ranking for a particular query without necessarily getting traffic.
With changes in Google search appearance recently, ads have become more seamless in the SERP. This has led to paid click-through-rate rising a lot. However, if history is correct, then it will probably slowly decline until the next big search change.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split-testing SEO changes allows us to say with confidence whether a specific change helps or hurts organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to:
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers causes a particular test to be positive or negative is challenging: because they all impact each other, the data is noisy. Measuring organic sessions instead relieves us of this noise.
Following “best practices” or what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding quantity of products in your titles or structured data for breadcrumbs might actually negatively impact your SEO, even if it seems like everyone else is doing so.
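As a toy illustration of comparing organic sessions between variant and control pages - the session numbers below are invented, and real SEO split tests use forecast-based counterfactuals and confidence intervals rather than a simple mean comparison:

```python
from statistics import mean

# Hypothetical daily organic sessions during a test period.
variant_sessions = [120, 135, 128, 140, 132]   # pages with the change applied
control_sessions = [118, 121, 117, 122, 119]   # similar pages left unchanged

def percent_uplift(variant, control):
    """Relative difference in mean daily sessions, as a percentage."""
    v, c = mean(variant), mean(control)
    return (v - c) / c * 100

uplift = percent_uplift(variant_sessions, control_sessions)
```

Even a naive comparison like this makes the business case concrete: a measured uplift in sessions can be multiplied out into revenue.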
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlights how “average data gives you average insights”, and discusses the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
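To make Jill's custom-dimension idea concrete, here is a minimal sketch of segmenting exported session rows by a form-captured dimension - the field names and rows are hypothetical:

```python
# Hypothetical exported session rows, each carrying a custom dimension
# ("profession") captured from an on-site form, as in the education example.
sessions = [
    {"profession": "headteacher", "converted": True,  "time_on_site": 310},
    {"profession": "headteacher", "converted": False, "time_on_site": 250},
    {"profession": "teacher",     "converted": False, "time_on_site": 40},
    {"profession": "parent",      "converted": True,  "time_on_site": 120},
]

def segment(rows, **criteria):
    """Keep only rows whose fields match every criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

headteachers = segment(sessions, profession="headteacher")
best_buyers = segment(sessions, converted=True)
```

The same filtering is what a GA segment does for you in the interface; scripting it is mainly useful once you export the data elsewhere.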
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the displayed meta description in the SERP 84% of the time (it thinks it’s smarter than us!). However, we can use this rewrite data to our advantage.
The best ways to get SERP data are crawling SERPs in Screaming Frog, the Scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse it.
Scraping SERPs, product reviews, comments, or Reddit threads can be really powerful: it gives you a data source that reveals insight into what your customers want. You can then optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
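One way to put the rewrite statistic to work is to flag which of your pages Google rewrites. A rough sketch using the stdlib's difflib - the 0.6 threshold is an arbitrary assumption for illustration, not a figure from Rory's talk:

```python
from difflib import SequenceMatcher

def is_rewritten(declared: str, displayed: str, threshold: float = 0.6) -> bool:
    """Flag a SERP snippet as rewritten when it shares little text with
    the page's declared meta description (threshold is a rough guess)."""
    ratio = SequenceMatcher(None, declared.lower(), displayed.lower()).ratio()
    return ratio < threshold

declared = "Buy handmade leather boots with free UK delivery."
# Identical text means Google kept our description as written.
unchanged = is_rewritten(declared, declared)
```

Running this across scraped SERP snippets gives you a shortlist of pages whose descriptions Google evidently didn't think matched the query.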
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti-Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the Web Dev roles at your company, then it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language that they’re using. Do some research into the terminology as well as possible limitations of your ask. This will make you more credible and you’re more likely to be taken seriously.
A team of developers may have different KPIs than you. It may be beneficial to use something like revenue as a way to get them on board with the change you want to make.
Try to make every ask more collaborative rather than instructive. For example, instead of simply presenting “insert this code,” try “here’s some example code, maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams is always a good idea. It will give them the most information on what needs to get done in what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight on how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs as well as how to analyse them effectively to reveal big wins that you may have otherwise been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when you’re analysing log files are:
Crawl behavior: look at most and least crawled URLs, look at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted nav, query params, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter to do log file analysis is great because it’s reusable and you’ll be able to use it again and again.
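A minimal sketch of the kind of parsing involved - a few invented combined-log-format lines, counted by Googlebot hits per URL (real analysis would also bucket by status code and crawl depth):

```python
import re
from collections import Counter

# Hypothetical combined-log-format lines.
LOG_LINES = [
    '66.249.66.1 - - [14/Oct/2019:10:01:02 +0000] "GET /products/ HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [14/Oct/2019:10:01:05 +0000] "GET /products/?sort=price HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '81.2.69.160 - - [14/Oct/2019:10:02:00 +0000] "GET /products/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]

# Pull out the request path, status code, and user agent from each line.
LINE_RE = re.compile(
    r'"(?:GET|POST) (?P<url>\S+) HTTP[^"]*" (?P<status>\d{3}).*"(?P<agent>[^"]*)"$'
)

googlebot_hits = Counter()
for line in LOG_LINES:
    m = LINE_RE.search(line)
    if m and "Googlebot" in m.group("agent"):
        googlebot_hits[m.group("url")] += 1
```

Note how the parameterised URL (`?sort=price`) shows up as a separate crawl target - exactly the kind of budget waste the talk describes.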
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Meyers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Meyers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
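In code, RankVol is straightforward to compute and sort by - the ranks and volumes below are invented for illustration:

```python
from math import sqrt

def rankvol(rank: int, volume: int) -> float:
    """Dr Pete's RankVol metric: 1 / (rank * sqrt(volume))."""
    return 1 / (rank * sqrt(volume))

# Hypothetical keyword data: a huge head term ranked poorly, plus two
# relevant terms ranked well.
keywords = [
    {"keyword": "sofa", "rank": 48, "volume": 301000},
    {"keyword": "john lewis sofa", "rank": 1, "volume": 8100},
    {"keyword": "mid century sofa", "rank": 3, "volume": 1900},
]
for kw in keywords:
    kw["rankvol"] = rankvol(kw["rank"], kw["volume"])

# Highest RankVol first: relevance wins over raw volume.
keywords.sort(key=lambda kw: kw["rankvol"], reverse=True)
```

Sorting by RankVol pushes the high-volume-but-irrelevant head term to the bottom, which is exactly the point of the metric.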
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are mis-spellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ is on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
SearchLove London 2019 - Dr. Pete Meyers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (subfolder? subdomain? ccTLD?), bear in mind that there are no one-size-fits-all solutions. It may be best to have a mixture of solutions, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
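This is not the script Lindsay linked, but a minimal sketch of generating reciprocal hreflang entries for an XML sitemap from a Sheets-style mapping - the URLs and locales are placeholders:

```python
from xml.sax.saxutils import escape

# Hypothetical mapping: each row is one page with its locale variants.
hreflang_rows = [
    {
        "en-gb": "https://example.com/uk/widgets/",
        "de-de": "https://example.com/de/widgets/",
    },
]

def url_entry(row):
    """Build <url> elements with reciprocal hreflang links for one row."""
    links = "".join(
        f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{escape(url)}"/>\n'
        for lang, url in sorted(row.items())
    )
    # Every variant gets its own <url> entry pointing at all variants.
    return "".join(
        f"  <url>\n    <loc>{escape(url)}</loc>\n{links}  </url>\n"
        for url in sorted(row.values())
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + "".join(url_entry(row) for row in hreflang_rows)
    + "</urlset>\n"
)
```

Generating the sitemap from the mapping sheet means one edit in the sheet updates every reciprocal annotation, which is where hand-maintained hreflang usually breaks.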
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, and transactional. However, it’s often unclear where a query belongs, because some keywords are really ambiguous. Often you can break these broad categories down into more specific ones.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational keyword with a transactional page, you’re not going to rank. This is an opportunity to create the kind of page that will rank for a given query. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional result.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
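A crude way to start bucketing queries by intent is modifier matching - the modifier lists below are hypothetical and should really come from your own SERP research, since (as Stacey notes) modifiers like “best” often behave informationally even though they sound commercial:

```python
# Hypothetical modifier lists; build your own from SERP observation.
TRANSACTIONAL = {"buy", "price", "cheap", "discount"}
INFORMATIONAL = {"how", "what", "why", "best", "guide"}

def classify_intent(query: str) -> str:
    """Assign a broad intent bucket based on query modifiers."""
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "ambiguous"
```

The "ambiguous" bucket is the interesting one: those are the queries where you need to check the live SERP to see what Google thinks the intent is.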
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade. For example, if you set a specific directive to disallow Googlebot from /example, that is the only group it will follow. Even if you specify elsewhere in the file that * (all user agents) is disallowed from /dont-crawl, Googlebot will follow only its own group’s directive not to crawl /example, and will still be able to crawl /dont-crawl.
The Google documentation, robots.txt checker in GSC, and the open source parser tend to disagree on what is allowed and disallowed. So, you’ll need to do some testing to ensure that the directives you’re setting are what you intended.
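You can check the non-cascading behaviour yourself. Python's stdlib robotparser is yet another implementation (so, per Will's point, treat it as one opinion among several), but it does illustrate the grouping rule:

```python
import urllib.robotparser

# The non-cascading example from the talk.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /example

User-agent: *
Disallow: /dont-crawl
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot matches its own group only, so the * rules never apply to it.
googlebot_example = rp.can_fetch("Googlebot", "/example")        # blocked
googlebot_dont_crawl = rp.can_fetch("Googlebot", "/dont-crawl")  # allowed
other_dont_crawl = rp.can_fetch("SomeOtherBot", "/dont-crawl")   # blocked
```

Running the same file through the GSC robots.txt tester and Google's open-source parser is the cross-check the talk recommends.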
We often have a lot of intuition about how things like pagerank work, but too many of our recommendations are based on misconceptions about how authority flows
There are some huge changes coming to how major browsers handle cookies. The cookie window will be shorter, which means that a lot of traffic currently classified as organic will be classified as direct. Understanding the language around these changes is, and will continue to be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
from Marketing https://www.distilled.net/resources/searchlove-london-2019-round-up/ via http://www.rssmix.com/
SearchLove London 2019: The Great Big Round Up
On 14th and 15th October, we made our annual visit to The Brewery in London for our UK edition of SearchLove. This year’s conference was our most successful yet, not only in terms of the number of folks attending but also with regard to the high calibre of speakers who joined us over the jam-packed two days to share their invaluable industry insights.
Let the show begin! #searchlove #seo pic.twitter.com/zDIRbbX2KG
— Udo Leinhäuser (@u_leinhaeuser) October 14, 2019
This post is a quick-fire summary of the knowledge our speakers had to share, plus their slides & a few photos from across the conference. All sessions in their entirety will be available with a DistilledU membership in a couple of weeks' time. And don’t forget that if you feel you missed out this year, make sure you sign up to our mailing list to be the first in the know for next year’s conference! Are you ready? Let’s get started!
Marie Haynes - ‘Practical Tips For Improving E-A-T’
Google’s algorithms are increasingly considering E-A-T components (expertise, authority and trust) when evaluating sites. Marie shared why and how to improve E-A-T so that you have the best chance at winning in the current and future search landscape.
One of the most important things to focus on is the accuracy of the information on your site. This is especially important if your pages are primarily YMYL (‘your money or your life’, in other words, content that can affect someone’s health, safety, financial stability, etc.).
Google’s quality raters use the quality raters guidelines as their textbook. If you take a look at the guidelines, you can get a better idea about what Google is actually looking at when they’re evaluating E-A-T components. Try doing a CTRL+F for your industry to see what they suggest for your vertical.
There are some practical things you can do on your site to help Google understand that you’re trustworthy and authoritative:
Have contact information available.
If you’re eCommerce, ensure that your refund policy and customer service information is clearly accessible.
Make sure your site is secure (HTTPS)
Have correct grammar. How your page reads is important!
Make sure that the information on your site doesn’t contradict known facts or established scientific consensus. Cite all sources as necessary.
SearchLove London 2019 - Marie Haynes - Practical Tips for Improving E-A-T from Distilled
Sarah Gurbach - ‘Using Qualitative Data To Make Human-Centered Decisions’
SEOs have a huge amount of data to work with, but often, the data that gets overlooked is that which comes directly from the humans who are driving all of our data points.
By performing qualitative research in tandem with quantitative, we can get insights on the actual human wants, barriers, and confusions that drive our customers to make their decisions and move through the funnel.
Sarah’s steps to conducting qualitative research include:
Defining your objective. Write it as a question. Keep it specific, focused and simple.
Asking open-ended questions to customers to define the personas you should be targeting. Sarah recommends surveys of 10 questions to 5 customers that should only take around 20 minutes each. More than this will likely be redundant.
Actually observing our users to figure out what and how they’re searching and moving through the funnel.
You can then quantify this data by combining it with other data sources (PPC data, conversion data, etc.).
If you don’t have time to conduct surveys, then you can go to social media and ask a question!
Want more on questions you can ask your customers? Check out this resource from Sarah.
SearchLove London 2019 - Sarah Gurbach - Using Qualitative Data to Make Human-centered Decisions from Distilled
Greg Gifford - ‘Doc Brown’s Plutonium-Powered SEO Playbook’
Greg delivered an entertaining, informative and best of all highly actionable talk on local SEO. If you have physical locations for your business, you should not be neglecting your local SEO strategy! It’s important to remember that there is a different algorithm for local SEO compared to the traditional SERP, and therefore you need to approach local SEO slightly differently.
Greg’s key tips to nailing your local SEO strategy are as follows:
Links are weighted differently for local SEO! Make sure you acquire local links - quality, and whether these are follow or nofollow, matters far less than in the standard SERP. The key is to make sure your links are local - get your hands dirty with some old-school marketing and get out into your local community to build links from churches, businesses and community websites in your area.
Content needs to actually be about your business and local area. If you could use your website copy for a site in another area, you’re doing it wrong. Also, make sure that your blog is a local destination - if your content is more localised than your competitors’, you’ll be one step ahead.
Citations are also important, but you only need a handful! Make sure you link to your website from places that customers will actually see, such as your Facebook, Twitter and other social profiles. Ensure your business information is accurate across platforms.
Reviews need to be strong across platforms - there’s no use having excellent reviews in Google My Business, and then bad reviews on TripAdvisor!
Google My Business is your new homepage, so make sure you give it some attention!
Bear in mind that users can not only ask questions but also answer them - make sure you create your own Q&A here and upvote your answers so that they appear at the top.
Also be aware that clicks from GMB are recorded as direct! If you add UTM tracking parameters to your GMB website link, you can attribute this traffic correctly to organic.
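For example, tagging the website link on a GMB listing - the domain and campaign name here are hypothetical, and the exact parameter values are a matter of convention in your own analytics setup:

```python
from urllib.parse import urlencode

# Hypothetical website URL for a GMB listing, tagged so that clicks are
# attributed to organic instead of direct in analytics.
base = "https://example.com/"
params = {
    "utm_source": "google",
    "utm_medium": "organic",
    "utm_campaign": "gmb-listing",  # hypothetical campaign name
}
gmb_url = base + "?" + urlencode(params)
```

Paste the resulting URL into the listing's website field; sessions then report under the source/medium you chose rather than direct.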
SearchLove London 2019 - Greg Gifford - Doc Brown's Plutonium-powered Local SEO Playbook from Distilled
Luke Carthy - ‘Finding Powerful CRO and UX Opportunities Using SEO Crawlers’
Luke Carthy discussed the importance of not always striving to drive more traffic, but making the most of the traffic you currently do have. More traffic does not necessarily equal more conversions! He explored different ways to identify opportunities using crawl software and custom extraction, and to use these insights to improve conversion rates on your website.
His top recommendations include:
Look at the internal search experience of users - do they get a ‘no results found’ page? What does this look like - does it provide a good user experience? Does it guide users to alternative products?
Custom extraction is an excellent way to mine websites for information (your own and especially competitors!)
Consider scraping product recommendations:
What products are competitor sites recommending? These are often based on dynamic algorithms, so provide a good insight into what products customers buy together
Also pay attention to the price of the recommended products vs. the main product - recommended items are often more expensive, to encourage users to spend more
Also consider scraping competitor sites for prices, review and stock
Are you cheaper than competitors?
Do competitors have popular products that you don’t have? What are their best and worst-performing products? Often category or search results pages are ordered by best-sellers, and you can take advantage of this by mining this information
To deepen your analysis, plugin other data such as log file data, Google Analytics, XML sitemaps and backlinks to try to understand how you can improve your current results, and to obtain comprehensive insights that you can share with the wider team
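As an illustration of the custom-extraction idea, here is a sketch using Python's stdlib HTML parser against invented competitor markup; in practice you would configure extraction selectors in your crawler (e.g. Screaming Frog's custom extraction) against the live pages:

```python
from html.parser import HTMLParser

# Hypothetical competitor markup for a "recommended products" module.
HTML = """
<div class="recommended">
  <span class="product">Widget Pro</span><span class="price">24.99</span>
  <span class="product">Widget Max</span><span class="price">39.99</span>
</div>
"""

class RecommendationParser(HTMLParser):
    """Collect product names and prices from the markup above."""

    def __init__(self):
        super().__init__()
        self.current = None  # class of the span we are currently inside
        self.products = []
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self.current = dict(attrs).get("class")

    def handle_data(self, data):
        if self.current == "product":
            self.products.append(data.strip())
        elif self.current == "price":
            self.prices.append(float(data))

    def handle_endtag(self, tag):
        if tag == "span":
            self.current = None

parser = RecommendationParser()
parser.feed(HTML)
recommendations = list(zip(parser.products, parser.prices))
```

Scale this across a crawl and you have the recommended-product and pricing dataset Luke describes, ready to join against your own catalogue.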
SearchLove London 2019 - Luke Carthy - Finding Powerful CRO and UX Opportunities Using SEO Crawlers from Distilled
Andi Jarvis - ‘The Science of Persuasion’
Human psychology affects consumers’ buying behavior tremendously. Andi covered how we as SEOs can better understand these factors to influence our SEO strategy and improve conversions.
Scarcity: you can create the impression of scarcity even when it doesn’t exist - for example, scarcity of time to drive demand. An example of this is how Hotels.com creates a sense of urgency by including things like “Only 4 rooms left!” Test and learn with different time scales (hours, days, weeks or more) to see what works best for your product offering.
Authority: building authority helps people understand who they should trust. When you’ve got authority, you are more likely to persuade people. You can build authority simply by talking about yourself, and by labelling yourself as an authority in your industry.
Likeability: The reason that influencer marketing works is due to the principle of liking: we prefer to buy from people who we are attracted to and who we aspire to be. If we can envision ourselves using a product or service by seeing ourselves in its marketing, then we are more likely to convert.
Pretty Little Thing has started doing this by using two different models to showcase each item of clothing, increasing the likelihood of users identifying with their models
Purpose: People are more likely to buy when they feel they are contributing to a cause, for example, Pampers who has a partnership with Unicef, so consumers feel like they are doing a good deed when they buy Pampers products. This is known as cause-based or purpose-based marketing.
Social proofing: It’s been known for a long time that people are influenced by the behaviour of others. In the early 1800s, theatres would pay people to clap at the right moments in a show, to encourage others to join in. Similarly today, if a brand has several endorsements from celebrities or users, people are more likely to purchase their products.
Reciprocation: Offering customers a free gift (even if small) can have a positive impact on re-purchase rates. Make sure though that you evolve what you do if you have a regular purchase cycle - offer customers different gifts so that they don’t know what to expect, otherwise the positive effect wears off.
SearchLove London 2019 - Andi Jarvis - The Science of Persuasion from Distilled
Heather Physioc - ‘Building a Discoverability Powerhouse: Lessons From Integrating Organic Search, Paid Search & Performance Content’
Organic, paid content and the like all impact discoverability. Yet, in many organisations, these teams are siloed. Heather discussed tips for integrating and collaborating between teams to build a “discoverabilty powerhouse”.
There are definite obstacles to integrating marketing teams like paid, social, or organic.
It’s not unlikely that merging teams too much can actually diminish agility. Depending on what marketing needs are at different times, allow for independence of teams when it’s necessary to get a job done.
Every team has their own processes for getting things done. Don’t try to overhaul everything at once. Talk with each other to see where integration makes the most sense.
There are also clear wins when you’re able to collaborate effectively.
When you’re in harmony with each team, you can more seamlessly find opportunities for discoverability. This can ultimately lead to up-sells or cross-sells.
By working together, we can share knowledge more deeply and have richer data. We can then leverage this to capture as much of the SERP as possible.
Cross-training teams can help build empathy and trust. When separate teams gain an understanding of how and why certain tasks (i.e. keyword research) are done, it can help everyone work better together and streamline processes.
SearchLove London 2019 - Heather Physioc - Building a Discoverability Powerhouse from Distilled
Robin Lord - ‘Excel? It Would Be Easier To Go To Jupyter’
Robin, a senior consultant here at Distilled, demonstrated the various shortcomings of Excel and showed an easier, repeatable, and more effective way to get things done - using Jupyter Notebooks and Python.
Below we outline Robin’s main points:
Excel and Google Sheets are very error-prone - especially if you’re dealing with larger data sets! If you need to process a lot of data, then you should consider using Jupyter Notebooks, as it can handle much bigger data sets (think: analysing backlinks, doing keyword research, log file analysis)
Jupyter Notebooks are reusable: if you create a Jupyter script to do any repeatable task (i.e. reporting or keyword research) then you can reuse it. This makes your life much easier because you don’t have to go back and dissect an old process.
Jupyter allows you to use Regex. This gives a huge advantage over excel because it is far more efficient at allowing you to account for misspellings. This, for example, can give you a far more accurate chance at accounting for things like branded search query permutations.
Jupyter allows you to write notes and keep every step in your process ordered. This means that your methodology is noted and the next time you perform this task, you remember exactly the steps you took. This is especially useful for when clients ask you questions about your work weeks or months down the line!
Finally - Jupyter notebooks allow us to get answers that we can’t get from Excel. We’re able to not only consider the data set from new angles, but we also have more time to go about other tasks, such as thinking about client strategy or improving other processes.
Robin has so many slides it breaks Slideshare. Instead, you can download his slides from Dropbox.
Jes Scholz - ‘Giving Robots An All Access Pass’
Jes Scholz uses the analogy of a nightclub to explain how Googlebot interacts with your website. The goal? To become part of the exclusive “Club Valid”. Her main points are outlined below:
As stated by John Mueller himself, “crawl budget is overrated - most sites never need to worry about this”. So instead of focusing on how much Google is crawling your site, you should be most concerned with how Google is crawling it
Status codes are not good or bad - there are right codes and wrong codes for different situations
In a similar vein, duplicate content is not “bad”, in fact, it’s entirely natural. You just need to make sure that you’re handling it correctly
JavaScript is your ticket to better UX, however, bear in mind that this often presents a host of SEO difficulties. Don’t rely on the mobile-friendly testing tool to see if Google is able to crawl your JavaScript - this tool actually uses different software to Googlebot (a common misconception!) The URL inspection tool is a bit better for checking this, however, bear in mind it’s more patient than Googlebot when it comes to rendering JavaScript, so isn’t 100% accurate.
SearchLove London 2019 - Jes Scholz - Giving Robots an All Access Pass from Distilled
Rand Fishkin - ‘The Search Landscape in 2019’
As the web evolves, it’s important to evaluate the areas you could invest in carefully. Rand explored the key changes affecting search marketers and how SEOs can take these changes into account when determining strategy.
Should you invest in voice search? It’s probably a bit too early. There is little difference in the results you get from a voice search vs. a manual search.
Both mobile and desktop are big - don’t neglect one at the expense of the other!
The zero-click search is where the biggest search growth is happening right now. It now accounts for about half (48.96% in the US) of all searches!
If you could benefit from answering zero-click searches, then you should prepare for that. You can determine whether you’d benefit by evaluating the value in ranking for a particular query without necessarily getting traffic.
With changes in Google search appearance recently, ads have become more seamless in the SERP. This has led to paid click-through-rate rising a lot. However, if history is correct, then it will probably slowly decline until the next big search change.
As Google’s algorithms evolve, you’ll likely receive huge ranking benefits from focusing on growing authority signals (E-A-T).
Check out Rand’s slides to see where you should be spending your time and money as the search landscape evolves.
SearchLove London 2019 - Rand Fishkin - The Search Landscape in 2019 from Distilled
Emily Potter - ‘Can Anything in SEO Be Proven? A Deep-Dive Into SEO Split-Testing’
Split-testing SEO changes allows us to say with confidence whether a specific change hurts or helps organic traffic. Emily discussed various SEO split tests she’s run and potential reasons for their outcomes.
The main levers for SEO tend to boil down to:
1. Improving organic click-through-rate (CTR)
2. Improving organic rankings of current keywords
3. Ranking for new keywords
Split testing changes that we want to make to our site can help us to make business cases, rescue sessions, and gain a competitive advantage.
Determining which of the three levers causes a particular test to be positive or negative is challenging because they all impact each other, which makes the data noisy. Measuring organic sessions instead relieves us of this noise.
Following “best practices” or what your competitors are doing is not always going to result in wins. Testing shows you what actually works for your site. For example, adding quantity of products in your titles or structured data for breadcrumbs might actually negatively impact your SEO, even if it seems like everyone else is doing so.
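A minimal sketch of the underlying comparison (all session numbers are invented; a real platform such as Distilled's ODN uses forecast-based counterfactuals rather than this simple difference-in-differences):

```python
# Toy difference-in-differences sketch for an SEO split test:
# compare the change in organic sessions on pages that received the
# change (variant) against unchanged pages (control).

def pct_change(before, after):
    return (after - before) / before * 100

# Hypothetical organic sessions summed over the test period
variant = {"before": 10_000, "after": 11_200}  # pages with the change
control = {"before": 10_000, "after": 10_300}  # unchanged pages

variant_change = pct_change(variant["before"], variant["after"])  # +12.0%
control_change = pct_change(control["before"], control["after"])  # +3.0%

# The control's movement approximates seasonality/algorithm noise,
# so subtracting it isolates the effect of the change itself.
uplift = variant_change - control_change
print(f"Estimated uplift from the change: {uplift:.1f}%")  # 9.0%
```

Measuring sessions rather than rankings, as the talk suggests, means this one number already rolls up CTR, ranking, and new-keyword effects.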
Check out Emily’s slides to see more split test case studies and learnings!
Lessons from another year in SEO A/B Testing - SearchLove London 2019 from Emily Potter
Jill Quick - ‘Segments: How To Get Juicy Insights & Avoid The Pips!’
In her excellent talk, Jill highlights how “average data gives you average insights”, and discusses the importance of segmenting your data to gain deeper insights into user behaviour. While analytics and segments are awesome, don’t become overwhelmed with the possibilities - focus on your strategy and work from there.
Jill’s other tips include:
Adding custom dimensions to forms on your website allows you to create more relevant and specific data segments
For example, if you have a website in the education sector, you can add custom dimensions to a form that asks people to fill in their profession. You can then create a segment where custom dimension = headteacher, and you can then analyse the behaviour of this specific group of people
Build segments that look at your best buyers (people who convert well) as well as your worst customers (those who spend barely any time on site and never convert). You can learn a lot about your ideal customer, as well as what you need to improve on your site, by doing this.
Use your segments to build retargeting lists - this will usually result in lower CPAs for paid search, helping your PPC budget go further
Don’t forget to use advanced segments (using sequences and conditions) to create granular segments that matter to your business
You can use segments in Google Data Studio, which is awesome! Just bear in mind that in Data Studio you can’t see if your segment data is sampled, so it’s best to go into the GA interface to check
If you want to hear more about Jill's session, she's written a post to supplement her slides.
Segments in Google Analytics from The Coloring In Department
Rory Truesdale - ‘Using The SERPs to Know Your Audience’
It can be easy to get lost in evaluating metrics like monthly search volume, but we often forget that for each query, there is a person with a very specific motivation and need. Rory discussed how we can utilise Google’s algorithmic re-writing of the SERP to help identify those motivations and more effectively optimise for search intent - the SERPs give us amazing insight into what customers want!
Google rewrites the SERP displayed meta description 84% of the time (it thinks it’s smarter than us!). However, we can use this rewrite data to our advantage.
The best ways to get SERP data are by crawling SERPs in Screaming Frog, using the Scraper API or Chrome extension, or “Thruuu” (a SERP analysis tool), and then using Jupyter Notebooks to analyse it.
Scraping SERPs, product reviews, comments, or Reddit forums can be really powerful: it gives you a data source that can reveal insight about what your customers want. You can then optimise the content on your pages to appeal to them.
If you can get a better idea about what language and tone resonates with users, you can incorporate it into CTAs and content.
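As a small sketch of this kind of analysis (the descriptions below are invented stand-ins for a real SERP scrape), counting recurring terms across the descriptions Google chooses to display can surface the language it thinks searchers want:

```python
# Toy term-frequency analysis of scraped SERP descriptions.
# In practice the descriptions would come from a crawl of the SERPs
# with a tool like those mentioned above; these are invented examples.
import re
from collections import Counter

descriptions = [
    "Free next day delivery on ballet shoes. Shop the full range...",
    "Ballet shoes with free delivery and easy 30-day returns.",
    "Compare the best ballet shoes of 2019. Free delivery available.",
]

STOPWORDS = {"the", "of", "and", "on", "with", "a", "to"}

terms = Counter()
for d in descriptions:
    terms.update(w for w in re.findall(r"[a-z0-9']+", d.lower())
                 if w not in STOPWORDS)

for term, count in terms.most_common(4):
    print(term, count)  # "ballet", "shoes", "free", "delivery" dominate
```

Here the repetition of "free delivery" across rewritten descriptions would be a strong hint to lead with it in your own meta descriptions and CTAs.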
Check out Rory’s slides as well as the Jupyter notebook he uses to analyse SERP data.
SearchLove London 2019 - Rory Truesdale - Using the SERPs to Know Your Audience from Distilled
Miracle Inameti-Archibong - ‘The Complete Guide To Actionable Speed Audits: Getting Your Developer To Work With You’
It can be a huge challenge to get devs to implement our wishlist of SEO recommendations. Miracle discussed the practical steps to getting developers to take your recommendations seriously.
If you take some time to understand the Web Dev roles at your company, then it will help you better communicate your needs as an SEO and get things rolled out. You can do this by:
Learning the language that they’re using. Do some research into the terminology as well as possible limitations of your ask. This will make you more credible and you’re more likely to be taken seriously.
A team of developers may have different KPIs than you. It may be beneficial to use something like revenue as a way to get them on board with the change you want to make.
Try to make every ask more collaborative rather than instructive. For example, instead of simply presenting “insert this code,” try “here’s some example code, maybe we can incorporate x elements. What do you think?” A conversation may be the difference in effecting change.
Prioritising your requests in an easily readable way for web dev teams is always a good idea. It will give them the most information on what needs to get done in what timeline.
SearchLove London 2019 - Miracle Inameti-Archibong - The Complete Guide to Actionable Speed Audits from Distilled
Faisal Anderson - ‘Spying On Google: Using Log File Analysis To Reveal Invaluable SEO Insights’
Log files contain hugely valuable insight on how Googlebot and other crawlers behave on your site. Faisal uncovered why you should be looking at your logs as well as how to analyse them effectively to reveal big wins that you may have otherwise been unable to quantify.
Looking at log files is a great way to see the truest and freshest data on how Google is crawling your site. It’s most accurate because it’s the actual logs of how Google (and any other bot) is crawling your website.
Getting log file data can be tricky, so it’s helpful to ask devs about your hosting setup (if your server uses load balancing, the log files may be split between various hosts). You’ll want to get 6 months of data if you can.
The three main things to evaluate when you’re analysing log files:
Crawl behaviour: look at your most and least crawled URLs, and at crawl frequency by depth and internal links
Budget waste: find low-value URLs (faceted navigation, query parameters, etc.); there are likely some subdirectories you want crawled more than others
Site health: look for inconsistent server responses
Using Jupyter to do log file analysis is great because it’s reusable and you’ll be able to use it again and again.
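A minimal notebook-style sketch of the first step (the log lines are invented; field positions follow the standard Apache/Nginx combined log format, and a real analysis should also verify Googlebot by reverse DNS, since user agents can be spoofed):

```python
# Count requests per URL made by Googlebot in a combined-format access log.
import re
from collections import Counter

LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

# Invented sample lines; in practice, read these from your server logs.
log_lines = [
    '66.249.66.1 - - [20/Oct/2019:10:00:01 +0000] "GET /category/shoes HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [20/Oct/2019:10:00:02 +0000] "GET /category/shoes?sort=price HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [20/Oct/2019:10:00:03 +0000] "GET /category/shoes HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]

crawled = Counter()
for line in log_lines:
    m = LINE.search(line)
    if m and "Googlebot" in m.group("ua"):
        crawled[m.group("path")] += 1

print(crawled.most_common())
```

From here, grouping paths by subdirectory or query parameter quickly exposes budget waste like the faceted-navigation crawling mentioned above.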
SearchLoveLondon 2019 - Faisal Anderson - Spying on Google: Using Log File Analysis to Reveal Invaluable SEO Insights from Distilled
Dr Pete Meyers - ‘Scaling Keyword Research: More Isn’t Better’
Dr Pete Meyers discussed how more is not better when it comes to keyword research! Ditch the thousands of keywords and instead focus on a smaller set of keywords that actually matter for you or your clients. Below are his top tips:
Pete has developed a simple metric called RankVol to help determine the importance of a keyword
RankVol = 1 / (rank x square root of volume)
Using this metric is better than sorting by search volume, as often the highest volume keywords that a site is appearing for are not the most relevant
Lots of data in keyword research can be irrelevant. Using John Lewis as an example:
9% of keywords John Lewis ranks for are misspellings
Almost 20% of keywords they rank for are very close variants (plural vs. singular, for example)
Dr Pete provides a short script in his deck to group keywords to help strip out noise in your data set
If sitelinks appear for your website, Google thinks you’re a brand
A new SERP feature (‘best of’ carousel) is appearing in the US, and will likely be rolled out into Europe soon
This feature takes you to a heavily paid SERP, with lots of ads (some well-disguised!)
If a keyword has a heavily paid SERP, you should probably not bother trying to rank for it, as the pay-off will be small
‘People also ask’ appears on 90% of searches - be sure to try and take advantage of this SERP space
To summarise, perception is everything with keyword research - make sure you filter out the noise!
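As an illustration, the RankVol metric above takes only a few lines to apply (the keyword data here is invented):

```python
import math

def rankvol(rank, volume):
    # RankVol = 1 / (rank x sqrt(volume)), as defined above
    return 1 / (rank * math.sqrt(volume))

# Hypothetical keyword data: (keyword, current rank, monthly volume)
keywords = [
    ("john lewis", 1, 550_000),        # huge-volume branded head term
    ("desk lamps", 4, 2_400),
    ("oak bedside table", 2, 1_900),
]

ranked = sorted(keywords, key=lambda k: rankvol(k[1], k[2]), reverse=True)
for kw, rank, vol in ranked:
    print(f"{kw:20s} rank={rank:2d} vol={vol:7d} rankvol={rankvol(rank, vol):.6f}")
```

Note how sorting by RankVol, rather than raw volume, pushes the brand query to the bottom and the smaller, more actionable terms to the top, which is exactly the point of the metric.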
SearchLove London 2019 - Dr. Pete Meyers - Scaling Keyword Research: More Isn't Better from Distilled
Lindsay Wassell - ‘Managing Multinational & Multilingual SEO in Motion’
Lindsay covered the many challenges involved in handling migrations involving multiple international site variants. Her key points are highlighted below:
Ask your dev team to make sure it’s possible to implement hreflang via XML sitemaps or on-page; then if there are problems implementing one method, you have another as a fall-back option
When deciding site structure and where international sites should be located (sub-folder? Subdomain? ccTLD?) bear in mind that there are no one-size-fits all solutions. It may be best to have a mixture of solutions, depending on each market.
If you have hreflang relationship issues, Lindsay advises to use Google Sheets to manage hreflang mappings, in combination with a script that can automatically generate XML sitemaps (link provided in her deck)
In order to encourage more people in your organisation to understand the importance of SEO and to make it a priority, highlight statistics such as traffic levels and revenue coming from organic search
Also keep in mind that every department has a wish list when it comes to a migration! Be tactical and tack onto other people’s wishlists to get SEO items implemented
As a final tip - check redirects before going live, as often dev teams will say it’s under control, and then there can be problems at the last minute
SearchLove London 2019 - Lindsay Wassell - Managing Multinational & Multilingual SEO in Motion from Distilled
Stacey MacNaught - ‘Actioning Search Intent - What To Do With All That Data’
By analysing search intent, you can gain a ton of really insightful data. Stacey discussed how you can utilise all of this data to optimise your site for organic search and ultimately increase revenue and traffic.
Traditionally, search intent is categorised broadly as navigational, informational, and transactional. However, it’s often unclear where things are categorised because sometimes keywords are really ambiguous. Often you can break these categories down into more specific categories.
In terms of targeting keywords on your site, look out for opportunities where you may not be delivering the right content based on what query you’re targeting.
For example, if you’re targeting an informational query with a transactional page, you’re not going to rank. If the phrase is “best ballet shoes” and the results are informational pages, then you shouldn’t be serving a transactional page - instead, this is an opportunity to create the kind of page that will rank for that query.
If you can be objective about the topic at hand and you have someone qualified to write that content, then you should definitely do it.
If your rankings drop but revenue is unaffected, it’s likely you’ve lost rankings on informational keywords
Don’t assume that users will come back of their own accord - work with PPC and get them to retarget to users who have read your content
Build out different audience lists according to the types of content or topics that users have been reading
Build out separate PPC campaigns for this so you can easily monitor results
Stacey saw CPA fall by 34% when she did this for a healthcare site
To generate content ideas, talk to the sales and customer service teams to find out what users are asking, then build content around it
You can also use Google Forms to survey previous customers to find out what drove their purchase
SearchLove London 2019 - Stacey MacNaught - Actioning Search Intent: What to Do with All That Data from Distilled
Will Critchlow - ‘Misunderstood Concepts at the Heart of SEO - Get An Edge By Understanding These Areas’
Most things in SEO can be boiled down to technical accessibility, relevance, quality, and authority. Or: can it be crawled, does it meet a keyword need, and is it trustworthy? However, some of the foundational elements of SEO are misunderstood.
Regarding crawlability, it’s important to understand how setting directives in robots.txt will impact your site if handled incorrectly.
Robots.txt directives do not cascade. For example, if you set a specific directive disallowing Googlebot from /example, that is the only group it will follow. Even if you specify elsewhere in the file that * (all user agents) is disallowed from /dont-crawl, Googlebot will only follow its own directive not to crawl /example and will still be able to crawl /dont-crawl.
The Google documentation, robots.txt checker in GSC, and the open source parser tend to disagree on what is allowed and disallowed. So, you’ll need to do some testing to ensure that the directives you’re setting are what you intended.
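A quick way to sanity-check the non-cascading behaviour is with Python's standard-library parser (which is just one of the several parsers that, as noted above, don't always agree):

```python
# Demonstrate that robots.txt groups don't cascade: Googlebot follows
# only its own group, so the * group's Disallow doesn't apply to it.
from urllib import robotparser

rules = """
User-agent: Googlebot
Disallow: /example

User-agent: *
Disallow: /dont-crawl
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "/example"))        # False - own group
print(rp.can_fetch("Googlebot", "/dont-crawl"))     # True - * group ignored
print(rp.can_fetch("SomeOtherBot", "/dont-crawl"))  # False - falls back to *
```

Running the same robots.txt through GSC's checker and the open-source parser, per the talk, is still worth doing before shipping directives to production.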
We often have a lot of intuition about how things like pagerank work, but too many of our recommendations are based on misconceptions about how authority flows
There are some huge changes coming to major browser cookie handling. The cookie window will be shorter, which means that a lot of traffic that’s currently classified as organic will be classified as direct. Understanding the language around the changes that are happening is, and will be, important
There are common misconceptions too about the meaning of ‘long tail keywords’
50% of Twitter respondents incorrectly think it means that there are many words in a query
40% understand the correct meaning, which is that they are keywords with low search volume
SearchLove London 2019 - Will Critchlow - Misunderstood Concepts at the Heart of SEO from Distilled
That's it for our London conference for another year. But the good news is we are heading to San Diego in March where we'll be getting some sun, sea and search at SearchLove San Diego!
If you have any questions about our conferences please leave a comment below or come and say hello over on Twitter.
SearchLove London 2019: The Great Big Round Up was originally posted by Video And Blog Marketing
Market Penetration Guide for Chiropractors
Market penetration for chiropractors is a business development approach where a clinic carries out efforts to broaden its client base for its services within a particular local market. Market penetration is both a measurement and a forecast of how successful a chiropractor has been and will be when compared to the existing competition.
When Is Chiropractic Market Penetration Appropriate?
The short answer is: always.
New chiropractic practices and under-performing practices have a short window of opportunity to be successful. While new chiropractors attempt to enter the marketplace, successfully established chiropractic practices must maintain their positioning.
One of the more effective strategies is actually to not compete, but to rather expand the size of the marketplace by defining what specifically your ideal target patients want and where to find them. They may not be searching for solutions within the context of the chiropractic services.
Traditional businesses will consider whether their market share appears to be increasing or decreasing along with sales conversions. If sales are decreasing and market share is decreasing, then significant changes must be applied to the overall marketing strategy. One way to define market share is to research how many people online are searching for keywords relevant to the services your clinic offers and compare this with the volume of sales for those specific services.
If there are significant searches but your clinic is not converting sales in those specific niches, then a market penetration strategy must be implemented.
Your Chiropractic Clinic’s Market Penetration Planning
Every chiropractic clinic will have the elements that make it unique and the following considerations will generally apply to all clinics:
What are The Big-Picture Demographics of Your Ideal Chiropractic Patients
Who do you want to work with? Is there a specific age, gender, lifestyle or income? If you can segment your ideal chiropractic patients into categories, do they have overlaps in terms of their interests, values or lifestyles?
What are the Geographic Demographics of Your Target Patients?
Geography is important because your clinic’s brand elements that appeal to one demographic in a geographic area may not appeal to another demographic in a different suburb or part of the city. Every suburb or neighborhood typically has a unique vibe, feel or set of values. These may not cross over, and there might even be class issues that come up.
Knowing your clinic’s general area of appeal in a geographic radius helps you calculate the gross population of potential patients. For example, if your immediate chiropractic market exists within a 5-mile radius where approximately 50,000 people live but only 20% of those people fall within your ideal target patient profile, then your potential local market is only 10,000.
Have Extremely Realistic Expectations as a Chiropractor for your Market Penetration
The flaw that most entrepreneurs make is to overestimate the reception of their business by their target market. It is going to take time and a lot of repeated visibility before your branding takes hold.
Going back to the example above of the chiropractic clinic in an area of 50,000 people with around 10,000 people meeting their target demographic – it might take an average of 15 different visibility impressions before this target market begins to associate chiropractic services with their problem or desired outcomes and consciously connect these to your clinic’s brand.
If you can get a response and convert 1% of your ideal target market, you can build off of this. A higher figure will obviously generate greater revenues and that conversion process is the art and science of marketing, persuasion and salesmanship.
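Putting the numbers from this example together (all figures are the hypothetical ones used above):

```python
# Back-of-the-envelope reach math using the worked example's numbers.
local_population = 50_000
target_share = 0.20        # 20% fit the ideal patient profile
impressions_needed = 15    # average touches before the brand registers
conversion_rate = 0.01     # 1% of the target market responds

target_market = int(local_population * target_share)    # 10,000 people
total_impressions = target_market * impressions_needed  # 150,000 touches
new_patients = int(target_market * conversion_rate)     # 100 patients

print(target_market, total_impressions, new_patients)
```

Seeing that 100 new patients requires on the order of 150,000 brand impressions is what makes the "extremely realistic expectations" advice concrete.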
The Challenges of Market Penetration in a Saturated Chiropractic Market
Chiropractic market saturation is a significant challenge for most chiropractors. You likely chose a location to establish your clinic for a variety of (probably) personal reasons over purely economic ones, and you’re likely in competition with a number of other clinics. In any niche or local market there is a predictable, measurable point when all potential customers of a specific product or service have been reached by a business. This is a saturated market.
What can be done in this scenario? What we recommend and can help you to do… place your brand in front of people looking for solutions in crossover niches so that you can go deeper into the market. Don’t fight over smaller pieces of an ever-shrinking pie… Make a bigger pie!
Market Penetration vs. Chiropractic Market Development
The difference between chiropractic market development and chiropractic market penetration is that market development creates a new target market. In the market penetration context, the target market has a fixed size. In market development, the size of the market is yet to be determined and the message/offering can be evolved to further expand the potential market.
The Art of Stealing Chiropractic Patients from Other Chiropractors
It isn’t that difficult. You can’t steal what doesn’t belong to someone else. There will always be price-shopping patients with no loyalty and the perspective that all chiropractic services are the same. They’ll drive halfway across town for an internet sale not realizing that they spent the sale difference in gas money; not to mention their time. You don’t want them. You want loyal patients who are raving fans.
Every clinic will have patients that are halfway out the door. You can’t win them all – nor should you try. You just do your best and the rest will sort itself out.
How can you target the chiropractic patients who would leave a competing clinic – for reasons other than price? Here’s what you do:
Enhance Your Chiropractic Clinic’s Appearance and Aesthetic Appeal
Have a comfortable, elegant space that is congruent with your brand and clinic’s primary focus. People like to have a complete experience and the more your clinic matches their expectations of what a clinic should be like, the more likely they will try it if they see it promoted on social media. If your clinic looks/feels zen and their current DC’s clinic is a bit shabby and run down, they might come check you out.
Clearly Define Your Chiropractic Philosophy to Your Target Market
Your branding must unconsciously appeal to the values and desired outcomes of your target market. Encapsulate your beliefs, mission and successes in a way that sticks in the minds of your potential patients. The clearer you are in your communication, the more memorable it becomes to your patients.
Maximize Your Chiropractic Patients’ Overall Experiences
Give your patients a premium treatment that they want to brag about and want to share. Remember classical conditioning? Peak state+environmental constant+anchor? When your patients are feeling good, send them out the door with the post-hypnotic suggestions to bring you their friends and family who need to feel better…
Limit Chiropractic Patients’ Risk of Dissatisfaction
Maybe create a special guarantee for new patients like a money-back guarantee or a satisfaction guarantee with their treatment so that they can rest assured that they will be satisfied with their treatment.
Market Yourself as a Chiropractic Expert in A Niche Area
You don’t have to be the jack-of-all-trades (stop trying to do your own web design, social media management and SEO also…) Pick a few niches that you really enjoy working on and brand yourself as the expert in solving those problems. People want to be treated by a specialist, not a generalist. If you dominate a couple of chiropractic niches you will get general chiropractic patients along the way.
Sapid Agency Will Help You Penetrate Your Local Market
We know what it takes to make your clinic stand out in local search and how to maximize your online visibility. We can consult you regarding strategies and run campaigns that will bring you to the top of your market. Contact us today and we will analyze your current marketing results, and the potential opportunities we can help you develop.
The post Market Penetration Guide for Chiropractors appeared first on Sapid Agency.
source https://www.sapidagency.com/market-penetration-guide-for-chiropractors/ from Sapid Agency https://sapidagency.blogspot.com/2018/09/market-penetration-guide-for.html
0 notes
Text
Market Penetration Guide for Chiropractors
Market penetration for chiropractors is a business development approach where a clinic carries out efforts to broaden its client base for its services within a particular local market. Market penetration is both a measurement and a forecast of how successful a chiropractor has been and will be when compared to the existing competition.
When Chiropractic Market Penetration Appropriate?
The short answer is: always.
New chiropractic practices and under-performing practices have a short window of opportunity to be successful. While new chiropractors attempt to enter the marketplace, successfully established chiropractic practices must maintain their positioning.
One of the more effective strategies is actually to not compete, but to rather expand the size of the marketplace by defining what specifically your ideal target patients want and where to find them. They may not be searching for solutions within the context of the chiropractic services.
Traditional businesses will consider if their market share appears to be increasing or decreasing along with sales conversions. If sales are decreasing and market share is decreasing, then significant changes must be applied to the overall marketing strategy. The way to define market share is to research how many people online are searching for keywords relevant to the services your clinic offers and contrast this by the amount of sales for such specific services.
If there are significant searches but your clinic is not converting sales in those specific niches, then a market penetration strategy must be implemented.
Your Chiropractic Clinic’s Market Penetration Planning
Every chiropractic clinic will have the elements that make it unique and the following considerations will generally apply to all clinics:
What are The Big-Picture Demographics of Your Ideal Chiropractic Patients
Who do you want to work with…? Is there a specific age, gender, lifestyle or income. If you can segment your ideal chiropractic patients into categories, do they have overlaps in terms of their interests, values or lifestyles?
What are the Geographic Demographics of Your Target Patients?
Geography is important because your clinic’s brand elements that appeal to one demographic in a geographic area may not appeal to another demographic in a different suburb or part of the city. Every suburb or neighborhood typically has a unique vibe, feel or set of values. These may not crossover and their might even be class issues that come up.
Knowing your clinic’s general area of appeal in a geographic radius will helps to calculate the gross population of potential patients. For example, if your immediate chiropractic market exists within a 5 mile radius where approximately 50,000 people live but only 20% of those people fall within your ideal target patient profile, then your potential local market is only 10,000.
Have Extremely Realistic Expectations as a Chiropractor for your Market Penetration
The flaw that most entrepreneurs make is to overestimate the reception of their business by their target market. It is going to take time and a lot of repeated visibility before your branding takes hold.
Going back to the example above of the chiropractic clinic in an area of 50,000 people with around 10,000 people meeting their target demographic – it might take an average of 15 different visibility impressions before this target market begins to associate chiropractic services with their problem or desired outcomes and consciously connect these to your clinic’s brand.
If you can get a response and convert 1% of your ideal target market, you can build off of this. A higher figure will obviously generate greater revenues and that conversion process is the art and science of marketing, persuasion and salesmanship.
The Challenges Market Penetration in a Saturated Chiropractic Market
Chiropractic market saturation is a significant challenge for most chiropractors. You likely choose a location to establish your clinic for a variety of (probably) personal reasons over pure economic reasons and you’re likely in competition with a number of other clinics. In any niche or local market there is a predictable, measurable point when all potential customers of a specific product/service have been reached by a business. This is a saturated market.
What can be done in this scenario? What we recommend and can help you to do… place your brand in front of people looking for solutions in crossover niches so that you can go deeper into the market. Don’t fight over smaller pieces of an ever-shrinking pie… Make a bigger pie!
Market Penetration vs. Chiropractic Market Development
The difference between chiropractic market development and chiropractic market penetration is that market development creates a new target market. In the market penetration context, the target market has a fixed size. In market development, the size of the market is yet to be determined and the message/offering can be evolved to further expand the potential market.
The Art of Stealing Chiropractic Patients from Other Chiropractors
It isn’t that difficult. You can’t steal what doesn’t belong to someone else. There will always be price-shopping patients with no loyalty and the perspective that all chiropractic services are the same. They’ll drive halfway across town for an internet sale not realizing that they spent the sale difference in gas money; not to mention their time. You don’t want them. You want loyal patients who are raving fans.
Every clinic will have patients that are halfway out the door. You can’t win them all – nor should you try. You just do your best and the rest will sort itself out.
How can you target the chiropractic patients who would leave a competing clinic – for reasons other than price? Here’s what you do:
Enhance Your Chiropractic Clinic’s Appearance and Aesthetic Appeal
Have a comfortable, elegant space that is congruent with your brand and clinic’s primary focus. People like to have a complete experience and the more your clinic matches their expectations of what a clinic should be like, the more likely they will try it if they see it promoted on social media. If your clinic looks/feels zen and their current DC’s clinic is a bit shabby and run down, they might come check you out.
Clearly Define Your Chiropractic Philosophy to Your Target Market
Your branding must appeal, even unconsciously, to the values and desired outcomes of your target market. Encapsulate your beliefs, mission, and successes in a way that sticks in the minds of your potential patients. The clearer your communication, the more memorable it becomes.
Maximize Your Chiropractic Patients’ Overall Experiences
Give your patients a premium treatment that they want to brag about and share. Remember classical conditioning? Peak state + environmental constant + anchor? When your patients are feeling good, send them out the door with the post-hypnotic suggestion to bring you their friends and family who need to feel better.
Limit Chiropractic Patients’ Risk of Dissatisfaction
Consider offering new patients a special guarantee, such as a money-back or satisfaction guarantee, so they can rest assured they’ll be happy with their treatment.
Market Yourself as a Chiropractic Expert in A Niche Area
You don’t have to be a jack-of-all-trades (and stop trying to do your own web design, social media management, and SEO while you’re at it…). Pick a few niches you really enjoy working in and brand yourself as the expert in solving those problems. People want to be treated by a specialist, not a generalist. If you dominate a couple of chiropractic niches, you will pick up general chiropractic patients along the way.
Sapid Agency Will Help You Penetrate Your Local Market
We know what it takes to make your clinic stand out in local search and how to maximize your online visibility. We can advise you on strategy and run campaigns that will bring you to the top of your market. Contact us today and we will analyze your current marketing results and the opportunities we can help you develop.
The post Market Penetration Guide for Chiropractors appeared first on Sapid Agency.