#about analyzing an argumentative editorial from a popular news source
forgive me father for im insane
#im literally gonna lose it ndjva#im tryin to concentrate on my college work bc I was busy with my HIGH SCHOOL work over the past week#now i have to write a stupid 5 page paper#about analyzing an argumentative editorial from a popular news source#and i really liked the one i picked and I'm over halfway done and i just realized it really doesn't fit ANY of the criteria i need#so either i restart completely even though its due at midnight#or i keep going and fight for my life trying to get it to work with the assignment#plus my room is dirty but i have no motivation to clean it#and ive got school tomorrow#and my hoco dress hasnt come yet which is making me anxious#but when it does come that means i have to try it on#which is a NIGHTMARE#cause i dont wanna look at my legs they're gross#anyway
Gayborhoods aren't dead. In fact, there are more of them than you think.
Open up a travel guide and you're likely to see multiple passages about where to find the local "gayborhood," a neighborhood disproportionately populated by LGBTQ people. In San Francisco, there's the Castro. In Chicago, you have Boystown. And in Mexico City, there's Zona Rosa.
Walk through any of these neighborhoods, and you'll discover blocks of rainbow flags and queer clubs pulsing with extremely corny but good '90s house music. Yet for over a decade, critics have been lamenting the alleged "death" and "demise" of these gayborhoods, accusing them of being "passé" or surrendering to gentrification.
"There goes the gayborhood," The New York Times proclaimed in one 2017 headline.
But Amin Ghaziani, assistant professor of sociology at the University of British Columbia, isn't exactly grieving. In his recently published piece, "Cultural Archipelagos: New Directions in the Study of Sexuality and Space," Ghaziani analyzes new research to make a bold hypothesis: The gayborhood hasn't died, and it isn't being diluted out of existence. Instead, gayborhoods are multiplying and diversifying.
Gayborhoods, Ghaziani argues, aren't singular sites but have instead become cultural archipelagos: a series of queer islands, connected by sexuality and gender. And cities will often have more than one of them.
What's a gayborhood, anyway?
The Castro district, historically the center of the queer community in San Francisco, is now one of multiple gayborhoods.
Image: smith collection/gabo/Getty Images
If you're queer and live in an urban area, chances are decent that you've at least traveled to a "gayborhood," maybe to stand among a crowd of sweaty bears in thongs during a pride parade or to puke your guts out outside the local queer karaoke bar.
Ghaziani defines a gayborhood by four features: It's a geographical center of LGBTQ people (including queer tourists), it has a high density of LGBTQ residents, it's a commercial center of businesses catering to the queer and trans community, and it's a cultural concentration of power. It's the neighborhood where you'll see pride parades begin, dyke marches take off, and street parties go into the night.
Want to buy an anal plug from a queer-affirming sex shop? Go to the gayborhood. Need advice from a trans-friendly psychic? The gayborhood awaits. Looking for support as you prepare to come out to your family? Attend a group therapy session held at your local LGBT center in ... the gayborhood, of course.
"The gayborhood is home to large amounts of organizations, businesses, and nonprofits that cater to the LGBTQ individuals," Ghaziani told Mashable in a phone interview. "Not everyone who lives in a gayborhood self-identifies as LGBTQ, though a statistically sizable portion does."
Gayborhoods formed as gay culture itself emerged in the postwar period and began to flourish: think New York City's West Village in the Stonewall era or San Francisco's Castro District in the 1950s. These were radical communities, home to intergenerational bath houses, butch femme bars, and sites of protest.
In the early 2000s, critics began to lament the supposed loss of these neighborhoods, citing "late-stage gentrification, the global circulation of capital, changes in the flow of migration, liberalizing attitudes toward homosexuality, social acceptance and assimilation, and the normalization of geo-coded mobile apps (which have altered how places facilitate social and sexual connections)," Ghaziani writes.
The critics weren't entirely wrong. Many traditional gayborhoods have indeed gentrified, and queer people have dispersed to other neighborhoods. But even as they've changed, gayborhoods have yet to disappear. Actually, they continue to bloom — you just won't see them if you're looking in the same singular places. That's partly because it's a "misconception" that "cities have only one gayborhood," Ghaziani told Mashable.
Historically, some cities have had more than one gayborhood, but not all of them have made it to the map. And even as queer people disperse from recognized gayborhoods, they cluster and form new gayborhoods in areas not traditionally mapped as queer.
The country has emerging queer neighborhoods that act "as cultural archipelagoes. The imagery of an archipelago suggests a chain or a cluster of islands. That's a more apt way of thinking about sexuality in a city," Ghaziani says. "LGBTQ Americans are diverse people. Why wouldn't they express that diversity in the places they call home?"
There are more gayborhoods than you'll ever find in a travel guide.
The Gayborpelago
Northampton, Massachusetts, may not be known as a traditional gayborhood, but it's home to a number of women in same-sex relationships.
Image: flickr Editorial/Getty Images
Ghaziani cites multiple pieces of research to back his claim that gayborhoods function more like archipelagoes than they do singular sites within a city.
First, he uses US Census data to examine the geographic distribution of lesbians, noting that census data only captures information from same-sex couples, not individuals.
What the data reveals is clear: Lesbian couples do exhibit clustering behavior. They just appear less visible because they often live outside traditional gayborhoods, in less urban areas. Lesbian couples reside both in traditional gayborhoods like Provincetown, where they make up 5.1 percent of all households, and outside of them, in areas not traditionally recognized as gayborhoods.
While gay men make up 14.2 percent of all households in the Castro, the well-known San Francisco gayborhood, lesbian couples make up 3.3 percent of all households in Northampton, Massachusetts. Yet in the popular imagination, San Francisco, not Northampton, is the epicenter of queer culture.
In Wellfleet, Massachusetts, 2.2 percent of all households are same-sex households led by women, making it the seventh most concentrated lesbian area in the country. But Wellfleet is a town of 3,171 people, not exactly a standard gayborhood you could identify on a map.
Wellfleet, Massachusetts, isn't exactly a dense metropolis.
Image: UIG via Getty Images
Ghaziani attributes this unique clustering to multiple factors: Lesbians may feel more accepted in rural areas, where female masculinity isn't as tightly policed as male femininity; lesbians have less capital than gay men (women, including queer women, continue to make less than men) and therefore may not be able to afford urban neighborhoods; lesbians are statistically more likely to have children (and therefore different housing requirements).
"Only 12 percent of LGBTQ Americans aged 18 and above currently live in a gayborhood," Ghaziani says. "We're limiting our understanding if we focus on singular parts of the city."
Like lesbians, queer people of color often reside outside popularly known gayborhoods. Black same-sex couples, for example, are more likely to live in areas where other black people concentrate than in areas where other LGBTQ people do. Cases in point: Same-sex black couples are disproportionately concentrated in Baltimore City, Maryland (where they make up 4.15 per 1,000 households), and Lee County, South Carolina (where that number stands at 3.69).
Lee County, South Carolina, isn't exactly a well-known gayborhood. Parts of that county nonetheless exhibit a key element of the gayborhood: residential concentration.
"Zip codes associated with traditional gayborhoods are largely white," Ghaziani writes. "The assumption of spatial singularity is epistemologically harmful because it ignores the 'spatial capital' and creative placemaking efforts of queer people of color. This includes youth of color, many of whom respond to the racial exclusions of the gayborhood by building separate communities."
By focusing solely on historically celebrated gayborhoods, sociologists run the risk of ignoring both old and new gayborhoods of color.
Meanwhile, trans people are often excluded from conversations about the gayborhood entirely. Disproportionately low-income, they often lack the capital needed to live in traditional gayborhoods. They report discrimination from both straight people and cis gays in gayborhoods. Even then, trans people can form their own cultural islands simply by sharing residential space together — an apartment, a building, wherever it may be.
The existence of other gayborhoods out there also provides a source of comfort. Ghaziani cites a recent study that found that "if you know your city has a gayborhood and you self-identify as trans, you're more likely to think your city is safer for trans people — even if you don't necessarily feel all that safe in the gayborhood."
When the gayborhoods of queer people of color, women, and trans folks are included, the gayborhood no longer looks passé. It looks vibrant. It's more diffuse than traditionally conceptualized.
Throw in digital queer neighborhoods, and the number of islands on the LGBTQ archipelago multiplies.
The Digital Queerborhood
It's a beautiful day in the digital queerborhood.
Image: leon neal/Getty Images
Critics have long blamed the rise of digital queer culture for the supposed demise of the gayborhood. Because many queer people have access to mobile technology and no longer need to find one another in bars, the argument goes, the need for gayborhoods diminishes.
The thesis isn't without merit: New York, once an oasis of lesbian bars, now only has three. Los Angeles has zero lesbian bars. San Francisco, also zero. Seriously.
But instead of looking at digital culture from a deficit-based perspective, consider reframing: Digital gayborhoods continue to thrive. Between Grindr and Scruff and Her, there are now dozens of location-based dating apps that bring people in neighboring zip codes together. Unlike historical gayborhoods, which tend to be white, digital gayborhoods are often more open to diversity, giving room for trans and POC queers to connect.
If users are connecting in a neighborhood without traditional gay establishments, they can nonetheless create a "gayborhood" online or create "pop-up" physical gayborhoods. Using Facebook's events planner, they can plan a trans-centered party at a local straight bar or hold an LGBTQ health fair at a nearby field, thereby temporarily transforming these spaces into "gayborhood" spaces.
Here's how Ghaziani describes it:
"You can queer any given space by logging on to see any queers near you. It undermines the [traditional] gayborhood as the sole locus ... Many more areas of the city can now function as queer spaces [because of digital culture]."
These digital queer neighborhoods may lack the charms of more traditional physical ones. Pop-up parties planned on Facebook don't quite lend the same sense of stability as your local gay bar. And it probably feels different to connect to your lesbian neighbor on an app than it does to share a beer with them at a local queer restaurant (though participating in the former can lead to the latter).
These neighborhoods matter nonetheless. Their existence should be registered.
If there are so many gayborhoods, why doesn't it feel that way?
Let's say you agree with Ghaziani's central thesis that gayborhoods aren't dying but instead exist as an archipelago. If you've grown up in or around a traditional gayborhood, you might still experience the transformation of some of these neighborhoods as loss.
The West Village, home to the famous Stonewall Inn, is now also home to some of the city's wealthiest residents. The neighborhood remains queer, but queer parties also happen throughout the city's outer boroughs. The Village still serves as a point of culture; it's just no longer the only point.
That can feel like a death.
These centralized gayborhoods once provided "very powerful political functions," Ghaziani says. "Having a residential concentration of queer people in particular parts of the city means we can exert political influence. The election of Harvey Milk is one of the most visible ones. So when you see and hear reports that show [some] residential integration, [it can feel like] dissolution."
With dissolution comes a feeling of invisibility:
"Sexuality is unlike other major demographic characteristics," Ghaziani adds. "It's not visible on the body in the same way. So the visibility functions of queer spaces is still very important [for queer people to feel like they exist]."
Reframing is critical. By de-centering the idea of a singular gayborhood and traveling to other gayborhoods within a city, maybe even spending some time in digital ones, people can transform their feelings of loss into strength and multiply their cultural power.
The gayborhood isn't dead. It isn't even dying. It's just ready to thrive in a different way.
Why Statistical Significance Is Killing Science
In 2016, the American Statistical Association[1] released an editorial warning against the misuse of statistical significance in interpreting scientific research. Another commentary was recently published in the journal Nature,[2] calling for the research community to abandon the concept of statistical significance.
Before publication in Nature,[3] the commentary was endorsed by more than 800 statisticians and scientists from around the world. Why are so many researchers concerned about the P-value in statistical analysis?
In 2014, George Cobb, a professor emeritus of mathematics and statistics, posed two questions to members of an American Statistical Association discussion forum.[4] First, he asked why colleges and grad schools teach P = 0.05, and found the answer was that this is the value the scientific community uses. Second, he asked why the scientific community uses this particular P-value, and found the answer was that this is what is taught in school.
In other words, circular logic sustains the belief in an arbitrary value of P = 0.05. Additionally, researchers and manufacturers can alter the perception of statistical significance, making a positive response in an experimental group look larger or smaller relative to the control group simply by choosing to report either relative or absolute risk.
However, since many readers are not statisticians, it's helpful to first understand the mathematical basis of P-values and confidence intervals, and how absolute and relative risk may be easily manipulated.
Probability Frameworks Define How Researchers Present Numbers
At the beginning of a study, researchers define a hypothesis, a proposed explanation based on limited evidence, which they hope the research will either support or refute. Once the data are gathered, researchers employ statisticians to analyze the information and determine whether the experiment supports the hypothesis.
The world of statistics is all about probability: how likely it is that something will or will not happen, based on the data. Data collected from samples are used in science to infer whether what happens in the sample would likely happen in the entire population.[5]
For instance, if you wanted to find the average height of men around the world, you couldn't measure every man's height, so researchers would estimate the number from samples gathered across subpopulations. These numbers are then evaluated within a probability framework. Medical research[6] draws on both Bayesian[7] and frequentist frameworks.
Under a Bayesian framework, researchers treat probability as a degree of belief, so the framework has no problem assigning probabilities to nonrepeatable events.
The frequentist framework defines the probability of a repeatable random event as its long-run frequency of occurrence. In other words, frequentists don't attach probabilities to hypotheses or to fixed but unknown values.[8]
The P-value is a frequentist tool. The researcher first defines a null hypothesis, which states there is no difference or no change between the control group and the experimental group.[9] The alternative hypothesis is the opposite of the null hypothesis, stating there is a difference.
What’s Behind the Numbers?
The P-value is often loosely described as the probability that the null hypothesis is true, but that is a misreading. Formally, it is the probability of observing results at least as extreme as those found, assuming the null hypothesis is true.[10] If P = 0.25, it means data this extreme would turn up about 25% of the time even if there were truly no difference between the experimental group and the control group. In the medical field,[11] the accepted cutoff is P = 0.05, the threshold below which a result is considered statistically significant.
When the P-value is 0.05, or 5%, researchers say they have a confidence level of 95% that the difference between the two observations is real rather than the product of random variation, and the null hypothesis is rejected.[12]
Researchers look for a small P-value, typically less than 0.05, to indicate strong evidence that the null hypothesis may be rejected. P-values close to the cutoff may be considered marginal, able to be read either way.[13]
Since "perfectly" random samples cannot be obtained, and definitive conclusions are difficult to confirm without them, the P-value attempts to quantify the uncertainty that sampling introduces.[14]
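To make the distinction concrete, here is a minimal sketch in Python of how a P-value is produced in practice, using a two-sample t-test from SciPy. The measurements are invented for illustration; the point is only what the resulting number does, and does not, mean.

```python
# A made-up two-group comparison; the values are illustrative only.
from scipy import stats

control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7]
treatment = [5.4, 5.6, 5.2, 5.7, 5.3, 5.5, 5.6, 5.1]

# Null hypothesis: the two groups share the same mean.
# The P-value is the probability of a difference at least this large
# arising by chance IF the null hypothesis is true -- not the
# probability that the null hypothesis itself is true.
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

# The conventional (and, as argued above, arbitrary) cutoff:
if p_value < 0.05:
    print("'Statistically significant' under the P < 0.05 convention")
```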
The P-value is closely related to the confidence interval and confidence level. Imagine you're trying to find out how many people from Ohio have taken two weeks of vacation in the past year. You could ask every resident in the state, but to save time and money you could sample a smaller group, and the answer would be an estimate.[15] Each time you repeat the survey, the results may be slightly different.
When using this type of estimate, researchers use a confidence interval: a range of values above and below the finding within which the actual value is likely to fall. If the margin of error is 4 percentage points and 47% of the sample took a two-week vacation, researchers infer that, had they asked the entire relevant population, between 43% and 51% would have taken one.
The confidence level expresses how often the true population value would fall within that interval if the survey were repeated. At a confidence level of 95%, the researcher is 95% confident that between 43% and 51% would have gone on a two-week vacation.[16]
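A short sketch shows where the Ohio numbers come from. The sample size here is an assumption (the text doesn't give one); roughly 600 respondents happens to reproduce the 43% to 51% interval under the standard normal approximation.

```python
import math

n = 600        # assumed sample size; not stated in the text
p_hat = 0.47   # 47% of the sample took a two-week vacation
z = 1.96       # z-score corresponding to a 95% confidence level

# Normal-approximation margin of error for a sample proportion.
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
# Prints roughly 43.0% to 51.0%, matching the interval above.
```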
Scientists Rebelling Against Statistical Significance
Kenneth Rothman, professor of epidemiology and medicine at Boston University, took to Twitter with a copy of a letter to the editor of JAMA after the medical journal rejected it.[17] In the letter, Rothman and two of his Boston University colleagues outline their agreement with the American Statistical Association statement:[18] "Scientific conclusions and business or policy decisions should not be based only on whether a P-value passes a specific threshold."
William M. Briggs, Ph.D., author and statistician, writes that all statisticians have felt clients' stinging disappointment when P-values do not fit expectations, despite explanations that this kind of significance has no bearing on real life and that there may be better methods of evaluating an experiment's success.[19]
After receiving emails from other statisticians defending the status quo of using P-values to ascertain the value of a study, while ignoring the arguments he lays out, Briggs goes on to say:[20]
“A popular thrust is to say smart people wouldn’t use something dumb, like P-values. To which I respond smart people do lots of dumb things. And voting doesn’t give truth.”
Numbers May Not Accurately Represent Results
A recent editorial in the journal Nature delves into why P-values, confidence intervals and confidence levels are not accurate representations of whether a study has proven or disproven its hypothesis. The authors urge researchers to:[21]
“[N]ever conclude there is ‘no difference’ or ‘no association’ just because a P value is larger than a threshold such as 0.05 or, equivalently, because a confidence interval includes zero. Neither should we conclude that two studies conflict because one had a statistically significant result and the other did not. These errors waste research efforts and misinform policy decisions.”
The authors compare two studies of the effects of the same anti-inflammatory drugs. Although both studies found the exact same risk ratio of 1.2, one study had more precise measurements and so reported a statistically significant risk, while the second did not. The authors wrote:[22]
“It is ludicrous to conclude that the statistically non-significant results showed ‘no association,’ when the interval estimate included serious risk increases; it is equally absurd to claim these results were in contrast with the earlier results showing an identical observed effect. Yet these common practices show how reliance on thresholds of statistical significance can mislead us.”
The authors call for the entire concept of statistical significance to be abandoned and urge researchers to embrace uncertainty. Scientists should describe the practical implications and limits of the values their data support, rather than relying on disproving a null hypothesis and claiming "no association" whenever the interval is deemed unimportant.[23]
They believe treating confidence intervals as ranges to be described, rather than thresholds to be passed, will eliminate bad practices and may introduce better ones. Instead of leaning on significance tests alone, they hope scientists will include more detailed methods sections and present their estimates explicitly, discussing the upper and lower limits of their confidence intervals.
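The two-study comparison is easy to reproduce. In the sketch below, both hypothetical studies report the same risk ratio of 1.2; the confidence-interval bounds are assumed values, invented to show how precision alone flips the significance verdict.

```python
# Two hypothetical studies with an identical risk ratio of 1.2.
# The interval bounds are assumed for illustration.
studies = {
    "Study A (large, precise)": (1.09, 1.2, 1.33),
    "Study B (small, imprecise)": (0.95, 1.2, 1.51),
}

for name, (low, rr, high) in studies.items():
    # A risk-ratio interval that includes 1.0 is conventionally
    # labeled "not significant", even when most of it lies above 1.0.
    verdict = "significant" if low > 1.0 else "not significant"
    print(f"{name}: RR = {rr}, 95% CI [{low}, {high}] -> {verdict}")
```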
Relative Risk or Absolute Risk?
George Canning was a British statesman who served briefly as prime minister in 1827.[24] He was quoted in the Dictionary of Thoughts, published in 1908, as saying, "I can prove anything by statistics except the truth."[25]
As you read research or media stories, the risk associated with a particular action is usually expressed as relative risk or absolute risk. Unfortunately, the type of risk may not be identified. For instance, you may hear a particular action will reduce the risk of prostate cancer by 65%.
Unless you know whether this refers to absolute or relative risk, it's difficult to determine how much this action would affect you. Relative risk compares the risk between two different groups, often an experimental group and a control group. Absolute risk stands on its own and does not require comparison.[26]
For instance, imagine there were a clinical trial to evaluate a new medication researchers hypothesized would prevent prostate cancer, and 200 men signed up for the trial. The researchers split the group into two, with 100 men receiving a placebo and 100 men receiving the experimental drug.
In the control group, two men developed prostate cancer; in the treatment group, only one did. Compared in relative terms, that is a 50% reduction in prostate cancer, since one is half of two. A number like that can sound very good, and could encourage someone to take a medication with significant side effects in the belief that it will cut their risk of prostate cancer in half.
The absolute difference, however, is far smaller. In the control group, 98 men never developed cancer; in the treatment group, 99 never did. Put another way, the risk of developing prostate cancer was 2% in the control group, since 2 out of 100 got cancer, while in the treatment group the risk was 1%.
This means the absolute risk of developing prostate cancer with the medication is 1%, compared to 2% without it. The difference, the absolute risk reduction, is not 50% but 1 percentage point (2 minus 1). Knowing this, taking the drug may not seem worth it.
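The arithmetic fits in a few lines; this sketch simply restates the trial's counts in code.

```python
# Counts from the hypothetical prostate cancer trial above.
control_cases, control_n = 2, 100
treated_cases, treated_n = 1, 100

risk_control = control_cases / control_n   # 2 out of 100 -> 2%
risk_treated = treated_cases / treated_n   # 1 out of 100 -> 1%

relative_reduction = 1 - risk_treated / risk_control
absolute_reduction = risk_control - risk_treated

print(f"Relative risk reduction: {relative_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_reduction:.0%}")  # 1%
```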
Big Pharma Would Like You to Look the Other Way
Now imagine your profit depends upon which risk ratio you publicize. Knowledge of the relative risk, without understanding overall mortality, does not tell the true story to those who need to make a decision. If the experiment were run with 1,000 people per group instead of 100, and the same numbers of cases occurred, the relative risk reduction would remain 50%, but the absolute risk reduction would shrink from 1 percentage point to 0.1.
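Rerunning that arithmetic at both trial sizes, assuming the same two-versus-one case counts (the text does not restate them), makes the divergence plain.

```python
# Same case counts (2 vs. 1), two different group sizes.
for n in (100, 1000):
    risk_control, risk_treated = 2 / n, 1 / n
    rrr = 1 - risk_treated / risk_control   # relative risk reduction
    arr = risk_control - risk_treated       # absolute risk reduction
    print(f"n = {n}: relative {rrr:.0%}, absolute {arr:.1%}")
# n = 100:  relative 50%, absolute 1.0%
# n = 1000: relative 50%, absolute 0.1%
```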
You may not be motivated to find and read the research to determine whether the publicized numbers accurately represent the overall mortality rate of individuals taking experimental medications. However, without knowledge of the mortality rate, it is nearly impossible to determine the actual risk you're undertaking.
While it may not be possible to stop taking all medications, consider having a conversation with your physician to discuss what medications you may be able to stop taking as you change lifestyle choices, such as increasing exercise, changing your nutritional habits and improving your sleep habits.
You may be surprised by the number of ways these simple changes improve overall health. Although making changes can feel overwhelming, if you make one change at a time and integrate each slowly into your routine, you'll likely find it far less difficult than trying to make several changes at once. See my previous articles to get started:
Go With Your Gut
Top 33 Tips to Optimize Your Sleep Routine
My Updated Nutrition Plan — Your Guide to Optimal Health
4-Minute Daily Workout
from Articles http://articles.mercola.com/sites/articles/archive/2019/04/24/why-statistical-significance-is-killing-science.aspx source https://niapurenaturecom.tumblr.com/post/184405769831