#sub2
Text
I love to see a Woman Win
Tumblr media
(Amanda Nunes)
September 19, 2024 Rookie's Playbook
I’m back at it again deep diving into things I have no concept of, and today it’s the world of the UFC! For my first read-through I like something that’s an explanation of sorts, so I can try and figure out exactly what is happening, and today I’m diving into the beauty of women champions.
From my understanding there are four women’s weight classes: featherweight, bantamweight, flyweight, and strawweight. I’m not gonna lie, half of these sound to me like the same thing with AI-generated names. After doing further research, it all depends on the weight of the fighter, which when I was younger used to confuse the hell out of me because I thought it was so rude to segregate people into weight categories; now, however, I know it’s so you don’t get absolutely murdered.
Featherweight (145 pounds) has no current champion, but it does have three past champions. Amanda Nunes won it in 2018 with a round 1 TKO over her opponent Cris Cyborg, and went on to defend it twice. I’m not going to sit here and lie to you either: if my last name was Cyborg, no one could talk to me. That is just badass. The only knowledge I have of a TKO (technical knockout) is that when one is executed, the other fighter is left unable to continue the fight, so that is insanely hardcore to me. Currently, the Brazilian fighter has insane stats: TKO 13-2, Sub 4-2, and W-L-D 23-5-0. Now, again, I will not lie to you, I only know what the TKO is.
A sub (submission) is when a fighter forces their opponent to ‘tap out’, yielding the fight. So the two submission losses on that 4-2 take the badass levels down some, but hey, who knows, that could be as low as it gets for someone, and she may still be the most dominant woman to have ever lived. Then W-L-D is wins, losses, and draws. The thing that I’m getting at for Nunes is that she doesn’t believe in draws and eats up her opponent every time. 23 wins! Insane!
Now that I know the terms, let’s move on to bantamweight, which has a 135-pound limit. The current champion of this category is Raquel Pennington, who won with a UD over Mayra Bueno Silva. Now comes my next question: what the hell is a UD? This is a term I have heard and often yell out when I want to look knowledgeable; it means unanimous decision. That is when all the judges agree on which fighter is the winner, so good for Miss Pennington!
Pennington’s stats are W-L-D 16-9-0, TKO 1-1, and Sub 1-1; not bad, but not ‘my new favorite fighter Amanda Nunes’ good.
Women’s flyweight is next on our list, and its poundage (?) is 125. I’m going to be honest, after all this searching, the rules about the fighters’ weight are insane to me; what is a one-pound difference in the grand scheme of it all, you know? Anyway, the current champion of this category is Valentina Shevchenko, who won in a unanimous decision against Alexa Grasso! Look at me knowing my terms, this is what we call development, guys. Shevchenko’s stats are W-L-D 24-4-1, TKO 8-1, and Sub 7-1. I’m gonna be honest, Miss Nunes has my heart, but Shevchenko scares the shit outta me the most.
The final category is strawweight, with a 115-pound limit; again, the names all seem too similar for my taste and I find them very confusing, but nonetheless I digress. The current champion is the lady with the best cheekbones I have ever seen, Zhang Weili, who won against Carla Esparza with a round 2 submission (Sub2). Her stats rival Nunes’s to me, with W-L-D 25-3-0, TKO 11-1, and Sub 8-0. She is a Chinese fighter whose career I feel I will be following closely.
After all that research I can confirm that this is the most intriguing of the sports I have covered so far, and I simply cannot wait to learn more about it.
2 notes · View notes
penny-anna · 1 year
Text
okay sub legacy thoughts w spoilers (by 'spoilers' i mean comments on changes made from the OG not plot spoilers) (i am a little way into the lab)
keeping the coin but moving the code to get the fuse elsewhere? DICK move. took me ages to find it. not sure if it's randomised but if it is then i think i got unlucky with the second 2 digits being 11 bcos it wasn't actually obvious that they were numbers when i found them
oughhhhh took me till sub2 to recognise the dimensional co-ordinates i was running into that's really neat and im so hype to see where we're going with that
where is the spoon and the fork... where is my classic submachine cutlery puzzle :(
i appreciate that there's more explanation for what you're doing w the chemicals in the lab (that's always been a bit of a ?? puzzle as it seems to hinge on knowledge that the player doesn't actually have so you're basically just combining items till you hit on the right combo). however i always liked the detail that you're straight up breaking in rather than hunting around for the code for entry so i wish that had been kept!!
!!!!! when re-opening the game and spotting sub 0 added to the level map
ok i need to stop now bcos it's late and also im supposed to be reducing screen time bcos of my eyes. off to bed shortly zzz good night.
14 notes · View notes
locoier · 5 months
Text
i remember like. literally two years ago talking to someone about how sub2:30 seemed almost impossible for AA and now fein has a sub2:15. that is fucking insane but also if anyone could do it it would be feinberg.
3 notes · View notes
raedas · 11 months
Text
it is a good day bc i got a sub2 time in set yayyyyyyy
7 notes · View notes
mwseo · 9 months
Text
Tumblr media
TWIN PEAKS
s01 - sUB
s02 - sUB1 / sUB2
Fuego Camina Conmigo (1992)
s03 - sUB
5 notes · View notes
moviecode · 2 years
Photo
Tumblr media
In Salvation S1E13, some code described as a never-before-seen encryption level (or something like that) is just a piece of stuxnet2 code
TV Series: https://www.imdb.com/title/tt7270746/?ref_=ttep_ep13
Code: https://github.com/throwaway2266/stuxnet2.0/blob/master/sub2
Twitter post: https://twitter.com/bit_man/status/1567839124161167362
23 notes · View notes
Link
This is an example of religious rules/laws made humorous.
9 notes · View notes
beautybodies · 1 year
Text
Get your drive back and feel great again
Testogen boosts your testosterone naturally and reverses the symptoms of low testosterone.
So you can feel better, every day.
Complete testosterone support for male health and wellness
100% safe and natural ingredients backed by clinical studies
Improves energy, performance, muscle growth, libido and fat loss.
https://testogen.com/?_ef_transaction_id=&oid=10&affid=8664&source_id=Facebook%20&sub1=Tumblr%20&sub2=Instagram%20&sub3=Alnnamir
9 notes · View notes
miki3aqors · 2 years
Text
Tumblr media
Soooo I wrote an Incubus AU AeBedo fic a while ago and I designed what Albedo and Sub2 (Rubedo in the fic) look like in their Incubus forms~
I have the naked ver on Poipiku but now I gotta figure out if there's another host site just in case things happen...
Also spreading the trans Albedo agenda, thank you~
5 notes · View notes
aprendizaje-datos · 2 days
Text
Alcohol consumption: A family affair? An analysis of the relationship between family consumption and individual consumption.
Assignment 3
The code and the output can be found below, including a brief summary.
Data-management decisions are documented in the code comments:
Removal of incomplete data.
Grouping of the data according to drinking status.
Transformation of numeric data into percentages.
Creation of tables to make comparison easier.
Code:
#Import the necessary libraries and alias them for easier use
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

#Import the dataset
data = pd.read_csv('nesarc_pds.csv', low_memory=False)

data.columns = map(str.upper, data.columns)
pd.set_option('display.float_format', lambda x:'%f'%x)

#Convert the NESARC variables to numeric form and mark incomplete responses (code 9) as missing
data['CONSUMER'] = pd.to_numeric(data['CONSUMER'].replace(9, np.nan)) #DRINKING STATUS
data['S2DQ1'] = pd.to_numeric(data['S2DQ1'].replace(9, np.nan)) #NATURAL/BLOOD FATHER EVER AN ALCOHOLIC OR PROBLEM DRINKER?
data['S2DQ2'] = pd.to_numeric(data['S2DQ2'].replace(9, np.nan)) #NATURAL/BLOOD MOTHER EVER AN ALCOHOLIC OR PROBLEM DRINKER?
data['S2DQ3C2'] = pd.to_numeric(data['S2DQ3C2'].replace(9, np.nan)) #ANY FULL BROTHERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ4C2'] = pd.to_numeric(data['S2DQ4C2'].replace(9, np.nan)) #ANY FULL SISTERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ5C2'] = pd.to_numeric(data['S2DQ5C2'].replace(9, np.nan)) #ANY NATURAL SONS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ6C2'] = pd.to_numeric(data['S2DQ6C2'].replace(9, np.nan)) #ANY NATURAL DAUGHTERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ7C2'] = pd.to_numeric(data['S2DQ7C2'].replace(9, np.nan)) #ANY OF THE NATURAL FATHER'S FULL BROTHERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ9C2'] = pd.to_numeric(data['S2DQ9C2'].replace(9, np.nan)) #ANY OF THE NATURAL MOTHER'S FULL BROTHERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ10C2'] = pd.to_numeric(data['S2DQ10C2'].replace(9, np.nan)) #ANY OF THE NATURAL MOTHER'S FULL SISTERS EVER ALCOHOLICS OR PROBLEM DRINKERS?
data['S2DQ11'] = pd.to_numeric(data['S2DQ11'].replace(9, np.nan)) #NATURAL GRANDFATHER ON FATHER'S SIDE EVER AN ALCOHOLIC OR PROBLEM DRINKER?
data['S2DQ12'] = pd.to_numeric(data['S2DQ12'].replace(9, np.nan)) #NATURAL GRANDMOTHER ON FATHER'S SIDE EVER AN ALCOHOLIC OR PROBLEM DRINKER?
data['S2DQ13A'] = pd.to_numeric(data['S2DQ13A'].replace(9, np.nan)) #NATURAL GRANDFATHER ON MOTHER'S SIDE EVER AN ALCOHOLIC OR PROBLEM DRINKER?
data['S2DQ13B'] = pd.to_numeric(data['S2DQ13B'].replace(9, np.nan)) #NATURAL GRANDMOTHER ON MOTHER'S SIDE EVER AN ALCOHOLIC OR PROBLEM DRINKER?
c1 = data.groupby('CONSUMER').size()* 100 / len(data)
#Create the filters used to classify the data according to drinking status
sub1 = data[(data['CONSUMER'] == 1)] #current drinkers
sub2 = data[(data['CONSUMER'] == 2)] #ex-drinkers
sub3 = data[(data['CONSUMER'] == 3)] #lifetime abstainers
#Convert counts to percentages within each filtered group
padre = sub1["S2DQ1"].value_counts(sort=False, normalize=True) * 100
padre2 = sub2["S2DQ1"].value_counts(sort=False, normalize=True) * 100
padre3 = sub3["S2DQ1"].value_counts(sort=False, normalize=True) * 100

madre = sub1["S2DQ2"].value_counts(sort=False, normalize=True) * 100
madre2 = sub2["S2DQ2"].value_counts(sort=False, normalize=True) * 100
madre3 = sub3["S2DQ2"].value_counts(sort=False, normalize=True) * 100

hermanos = sub1["S2DQ3C2"].value_counts(sort=False, normalize=True) * 100
hermanos2 = sub2["S2DQ3C2"].value_counts(sort=False, normalize=True) * 100
hermanos3 = sub3["S2DQ3C2"].value_counts(sort=False, normalize=True) * 100

hermanas = sub1["S2DQ4C2"].value_counts(sort=False, normalize=True) * 100
hermanas2 = sub2["S2DQ4C2"].value_counts(sort=False, normalize=True) * 100
hermanas3 = sub3["S2DQ4C2"].value_counts(sort=False, normalize=True) * 100

hijos = sub1["S2DQ5C2"].value_counts(sort=False, normalize=True) * 100
hijos2 = sub2["S2DQ5C2"].value_counts(sort=False, normalize=True) * 100
hijos3 = sub3["S2DQ5C2"].value_counts(sort=False, normalize=True) * 100

hijas = sub1["S2DQ6C2"].value_counts(sort=False, normalize=True) * 100
hijas2 = sub2["S2DQ6C2"].value_counts(sort=False, normalize=True) * 100
hijas3 = sub3["S2DQ6C2"].value_counts(sort=False, normalize=True) * 100

tio_p = sub1["S2DQ7C2"].value_counts(sort=False, normalize=True) * 100
tio_p2 = sub2["S2DQ7C2"].value_counts(sort=False, normalize=True) * 100
tio_p3 = sub3["S2DQ7C2"].value_counts(sort=False, normalize=True) * 100

tio_m = sub1["S2DQ9C2"].value_counts(sort=False, normalize=True) * 100
tio_m2 = sub2["S2DQ9C2"].value_counts(sort=False, normalize=True) * 100
tio_m3 = sub3["S2DQ9C2"].value_counts(sort=False, normalize=True) * 100

tia_m = sub1["S2DQ10C2"].value_counts(sort=False, normalize=True) * 100
tia_m2 = sub2["S2DQ10C2"].value_counts(sort=False, normalize=True) * 100
tia_m3 = sub3["S2DQ10C2"].value_counts(sort=False, normalize=True) * 100

abuelo_p = sub1["S2DQ11"].value_counts(sort=False, normalize=True) * 100
abuelo_p2 = sub2["S2DQ11"].value_counts(sort=False, normalize=True) * 100
abuelo_p3 = sub3["S2DQ11"].value_counts(sort=False, normalize=True) * 100

abuela_p = sub1["S2DQ12"].value_counts(sort=False, normalize=True) * 100
abuela_p2 = sub2["S2DQ12"].value_counts(sort=False, normalize=True) * 100
abuela_p3 = sub3["S2DQ12"].value_counts(sort=False, normalize=True) * 100

abuelo_m = sub1["S2DQ13A"].value_counts(sort=False, normalize=True) * 100
abuelo_m2 = sub2["S2DQ13A"].value_counts(sort=False, normalize=True) * 100
abuelo_m3 = sub3["S2DQ13A"].value_counts(sort=False, normalize=True) * 100

abuela_m = sub1["S2DQ13B"].value_counts(sort=False, normalize=True) * 100
abuela_m2 = sub2["S2DQ13B"].value_counts(sort=False, normalize=True) * 100
abuela_m3 = sub3["S2DQ13B"].value_counts(sort=False, normalize=True) * 100
#Create and print frequency tables (with labels) using the first filter
print('Percentage of alcohol consumption among the relatives of current drinkers')
print('Alcohol consumption is coded as: 1-Yes, 2-No')
tabla = {'padre': padre, 'madre': madre, 'hermanos': hermanos, 'hermanas': hermanas, 'hijos': hijos, 'hijas': hijas, 'tio paterno': tio_p, 'tio materno': tio_m, 'tia materna': tia_m, 'abuelo paterno': abuelo_p, 'abuela paterna': abuela_p, 'abuelo materno': abuelo_m, 'abuela materna': abuela_m}
df = pd.DataFrame(tabla)
print(df)

#Create and print frequency tables (with labels) using the second filter
print('Alcohol consumption is coded as: 1-Yes, 2-No')
tabla2 = {'padre': padre2, 'madre': madre2, 'hermanos': hermanos2, 'hermanas': hermanas2, 'hijos': hijos2, 'hijas': hijas2, 'tio paterno': tio_p2, 'tio materno': tio_m2, 'tia materna': tia_m2, 'abuelo paterno': abuelo_p2, 'abuela paterna': abuela_p2, 'abuelo materno': abuelo_m2, 'abuela materna': abuela_m2}
df2 = pd.DataFrame(tabla2)
print('Percentage of alcohol consumption among the relatives of ex-drinkers')
print(df2)

#Create and print frequency tables (with labels) using the third filter
print('Alcohol consumption is coded as: 1-Yes, 2-No')
tabla3 = {'padre': padre3, 'madre': madre3, 'hermanos': hermanos3, 'hermanas': hermanas3, 'hijos': hijos3, 'hijas': hijas3, 'tio paterno': tio_p3, 'tio materno': tio_m3, 'tia materna': tia_m3, 'abuelo paterno': abuelo_p3, 'abuela paterna': abuela_p3, 'abuelo materno': abuelo_m3, 'abuela materna': abuela_m3}
df3 = pd.DataFrame(tabla3)
print('Percentage of alcohol consumption among the relatives of lifetime abstainers')
print(df3)
Output:
Tumblr media
Percentage table of the history of alcohol abuse among the relatives of current drinkers.
Tumblr media
Percentage table of the history of alcohol abuse among the relatives of ex-drinkers.
Tumblr media
Percentage table of the history of alcohol abuse among the relatives of lifetime abstainers.
BRIEF SUMMARY:
A higher percentage of alcohol abuse is observed among the relatives of the "ex-drinker" subjects, followed very closely by the relatives of the "current drinkers".
This suggests that family influence can draw people toward alcohol consumption and, later, toward giving it up.
0 notes
gurney59 · 2 days
Text
https://asa.doctormstr.com/l/?sub1=%5BID%5D&sub2=%5BSID%5D&sub3=3&sub4=bodyclick
0 notes
bilvensstuff · 1 month
Text
https://earnoppcenter.com/?utm_campaign={replace}&sub2=&sub3=&sub4=165889403678&sub5=708764116728&sub6=21493327657&sub7=m&sub8=&sub9=ytv&sub10=youtube.com&utm_source=Google&wbraid=&gbraid=&ref_id=CjwKCAjw8fu1BhBsEiwAwDrsjIT4VcJMh_SqzrRMm-cy4VlTSyXP180kIBzrKQomdxnGKFmsKaoDhBoC0ocQAvD_BwE&gclid=CjwKCAjw8fu1BhBsEiwAwDrsjIT4VcJMh_SqzrRMm-cy4VlTSyXP180kIBzrKQomdxnGKFmsKaoDhBoC0ocQAvD_BwE
0 notes
adaraygr · 2 months
Text
PART 2: Week 2 Assignment; Run your first program.
Data Management and Visualization
Course on Coursera, by Wesleyan University
Completed by: Jose Alonso Adaray Gomez Rivera.
'S3AQ3B1 = 1' (Smoked every day):
Tumblr media
'S1Q10B' (Annual Income count):
Tumblr media
'S2AQ5B = 5' (Drank beer once a week count):
Tumblr media
Now, the results of all the created data subsets are shown:
Tumblr media
" Here I used a group by function in order to visualize the mexican distribution in different kind of jobs: "
Tumblr media
"Here I also used a group by function in order to visualize the mexican distribution in different annual incomes: "
Tumblr media
Description
A variable called 'mxn' was generated, which stores the dataframe resulting from the code 'df['S1Q1E']==35'.
Afterwards, a count and percentage were computed for all the variables under consideration. These variables were described in the codebook for the WEEK 1 assignment.
Since the question seeks to evaluate the relationship between alcoholism, smoking, and annual income in the Mexican population, it was considered important to analyze the distribution of the Mexican population across:
How often they smoke 1 cigarette a day (line 83: sub1).
How often they drink beer once a week (line 87: sub2).
Their distribution across the job categories considered (line 113: ctd).
Their distribution across the annual income ranges considered (line 143: cai).
For now, the frequency distributions were successfully visualized. In later weeks, the combined variables will be plotted in order to obtain results and observe trends. A minimal sketch of the subsetting and groupby flow described above follows below.
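Since the post only shows output screenshots, here is a minimal sketch of what that subsetting and groupby flow might look like in pandas. The column names S1Q1E, S3AQ3B1, S2AQ5B, and S1Q10B come from the post itself, and the origin code 35 comes from the description above; the file name and prints are illustrative, not the author's exact code.
#Minimal sketch (illustrative) of the subsets and groupby counts described above
import pandas as pd

df = pd.read_csv('nesarc_pds.csv', low_memory=False)

#'mxn' stores the subset of respondents of Mexican origin (code 35, per the description)
mxn = df[df['S1Q1E'] == 35]

#sub1: respondents who smoked every day (S3AQ3B1 == 1)
sub1 = mxn[mxn['S3AQ3B1'] == 1]
print(len(sub1), 'smoked every day')

#sub2: respondents who drank beer once a week (S2AQ5B == 5)
sub2 = mxn[mxn['S2AQ5B'] == 5]
print(len(sub2), 'drank beer once a week')

#cai: distribution across annual income ranges, as counts and then percentages
cai = mxn.groupby('S1Q10B').size()
print(cai)
print(cai * 100 / len(mxn))
#ctd (distribution across job categories) would follow the same groupby pattern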
0 notes
wolves58 · 3 months
Text
https://jiveschwer.shop/?encoded_value=223GDT1&sub1=263dd2d1b3554834991d53cc78806af1&sub2=&sub3=&sub4=&sub5=18852&source_id=20021&ip=109.52.150.233&domain=www.todaystrackisfast.com
0 notes
rogerscode · 3 months
Text
ANOVA (analysis of variance)
import numpy
import pandas
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
data = pandas.read_csv('nesarc.csv', low_memory=False)
#setting variables you will be working with to numeric
data['S3AQ3B1'] = pandas.to_numeric(data['S3AQ3B1'], errors='coerce')
data['S3AQ3C1'] = pandas.to_numeric(data['S3AQ3C1'], errors='coerce')
data['CHECK321'] = pandas.to_numeric(data['CHECK321'], errors='coerce')
#subset data to young adults age 18 to 25 who have smoked in the past 12 months
sub1=data[(data['AGE']>=18) & (data['AGE']<=25) & (data['CHECK321']==1)]
#SETTING MISSING DATA
sub1['S3AQ3B1']=sub1['S3AQ3B1'].replace(9, numpy.nan)
sub1['S3AQ3C1']=sub1['S3AQ3C1'].replace(99, numpy.nan)
#recoding number of days smoked in the past month
recode1 = {1: 30, 2: 22, 3: 14, 4: 5, 5: 2.5, 6: 1}
sub1['USFREQMO']=sub1['S3AQ3B1'].map(recode1)
#converting new variable USFREQMMO to numeric
sub1['USFREQMO']=pandas.to_numeric(sub1['USFREQMO'], errors='coerce')
# Creating a secondary variable multiplying the days smoked/month and the number of cig/per day
sub1['NUMCIGMO_EST']=sub1['USFREQMO'] *sub1['S3AQ3C1']
sub1['NUMCIGMO_EST']=pandas.to_numeric(sub1['NUMCIGMO_EST'], errors='coerce')
ct1 =sub1.groupby('NUMCIGMO_EST').size()
print (ct1)
# using ols function for calculating the F-statistic and associated p value
model1 =smf.ols(formula='NUMCIGMO_EST ~ C(MAJORDEPLIFE)', data=sub1)
results1 = model1.fit()
print (results1.summary())
sub2 = sub1[['NUMCIGMO_EST', 'MAJORDEPLIFE']].dropna()
print ('means for numcigmo_est by major depression status')
m1=sub2.groupby('MAJORDEPLIFE').mean()
print (m1)
print ('standard deviations for numcigmo_est by major depression status')
sd1 = sub2.groupby('MAJORDEPLIFE').std()
print (sd1)
#i will call it sub3
sub3 = sub1[['NUMCIGMO_EST', 'ETHRACE2A']].dropna()
model2 =smf.ols(formula='NUMCIGMO_EST ~ C(ETHRACE2A)', data=sub3).fit()
print (model2.summary())
print ('means for numcigmo_est by ethnicity')
m2 = sub3.groupby('ETHRACE2A').mean()
print (m2)
print ('standard deviations for numcigmo_est by ethnicity')
sd2 = sub3.groupby('ETHRACE2A').std()
print (sd2)
mc1=multi.MultiComparison(sub3['NUMCIGMO_EST'], sub3['ETHRACE2A'])
res1 = mc1.tukeyhsd()
print(res1.summary())
Tumblr media (five output screenshots: frequency counts, OLS model summaries, group means and standard deviations, and the Tukey HSD results)
0 notes
deploy111 · 4 months
Text
This assignment aims to statistically assess the evidence, provided by the NESARC codebook, in favour of or against the association between cannabis use and major depression in U.S. adults. More specifically, I examined the statistical interaction between frequency of cannabis use (10-level categorical explanatory variable "S3BD5Q2E") and major depression diagnosis in the last 12 months (categorical response variable "MAJORDEP12"), moderated by the categorical variable "S1Q231", which indicates whether the respondent lost a family member or a close friend in the last 12 months. This effect is characterised statistically as an interaction, that is, a third variable that affects the direction and/or the strength of the relationship between the explanatory and the response variable and helps us understand the moderation. Since I have a categorical explanatory variable (frequency of cannabis use) and a categorical response variable (major depression), I ran a Chi-square Test of Independence (crosstab function) to examine the patterns of the association between them (C->C), directly measuring the chi-square value and the p-value. In addition, in order to visualise this association graphically, I used the factorplot function (seaborn library) to produce a bivariate graph. Furthermore, in order to determine which frequency groups differ from the others, I performed a post hoc test using the Bonferroni Adjustment approach, since my explanatory variable has more than 2 levels. In the case of ten groups, I would actually need to conduct 45 pairwise comparisons, but in fact I examined two indicatively and compared their p-values with the Bonferroni-adjusted p-value, which is calculated by dividing p=0.05 by 45. In this way it is possible to identify the situations where the null hypothesis can be safely rejected without making an excessive number of type 1 errors.
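The post hoc step is described here but not shown in the script at the end of the post, so below is a minimal sketch of how the 45 Bonferroni-adjusted pairwise comparisons might look. It assumes the subsetc1 dataframe and the S3BD5Q2E/MAJORDEP12 columns defined in the code further down; the loop is an illustration, not the author's actual script.
# Minimal sketch (illustrative): Bonferroni-adjusted pairwise chi-square tests,
# assuming the subsetc1 dataframe built in the code below
import itertools
import pandas
import scipy.stats

levels = sorted(subsetc1['S3BD5Q2E'].dropna().unique())
# ten frequency levels give 45 pairwise comparisons, so each test is judged
# against an adjusted alpha of 0.05 / 45
adjusted_alpha = 0.05 / (len(levels) * (len(levels) - 1) / 2)

for a, b in itertools.combinations(levels, 2):
    pair = subsetc1[subsetc1['S3BD5Q2E'].isin([a, b])]
    ct = pandas.crosstab(pair['MAJORDEP12'], pair['S3BD5Q2E'])
    chi2, p, dof, expected = scipy.stats.chi2_contingency(ct)
    if p < adjusted_alpha:
        print(a, 'vs', b, ': p =', p, '(significant after Bonferroni adjustment)')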
Regarding the third variable, I examined whether the fact that a family member or a close friend died in the last 12 months moderates the significant association between cannabis use frequency and major depression diagnosis. Put another way, is frequency of cannabis use related to major depression at each level of the moderating variable (1=Yes and 2=No), that is, for those for whom a family member or a close friend died in the last 12 months and for those for whom nobody did? Therefore, I set up new data frames (sub1 and sub2), each including the individuals who fell into one category (Yes or No), and ran a Chi-square Test of Independence for each subgroup separately, measuring both chi-square values and p-values. Finally, with the factorplot function (seaborn library) I created two bivariate line graphs, one for each level of the moderating variable, in order to visualise the differences and the effect of the moderator upon the statistical relationship between frequency of cannabis use and major depression diagnosis. For the code and the output I used Spyder (IDE).
The moderating variable that I used for the statistical interaction is "S1Q231" (whether a family member or a close friend died in the last 12 months).
Output
A Chi Square test of independence revealed that among cannabis users aged between 18 and 30 years old (subsetc1), the frequency of cannabis use (explanatory variable collapsed into 9 ordered categories) and past year depression diagnosis (response binary categorical variable) were significantly associated, X2 =29.83, 8 df, p=0.00022.
In the bivariate graph (C->C) presented above, we can see the correlation between frequency of cannabis use (explanatory variable) and major depression diagnosis in the past year (response variable). Obviously, we have a left-skewed distribution, which indicates that the more an individual (18-30) smoked cannabis, the greater the chances of having experienced depression in the last 12 months.
In the first place, for the moderating variable equal to 1, that is, those for whom a family member or a close friend died in the last 12 months (sub1), a Chi Square test of independence revealed that among cannabis users aged between 18 and 30 years old, the frequency of cannabis use (explanatory variable) and past year depression diagnosis (response variable) were not significantly associated, X2 =4.61, 9 df, p=0.86. As a result, since the chi-square value is quite small and the p-value is significantly large, we can assume that there is no statistical relationship between these two variables, when taking into account the subgroup of individuals who lost a family member or a close friend in the last 12 months.
In the bivariate line graph (C->C) presented above, we can see the correlation between frequency of cannabis use (explanatory variable) and major depression diagnosis in the past year (response variable), in the subgroup of individuals for whom a family member or a close friend died in the last 12 months (sub1). In fact, the direction of the distribution (fluctuation) does not indicate a positive relationship between these two variables, for those who experienced a family/close death in the past year.
Subsequently, for the moderating variable equal to 2, that is, those for whom no family member or close friend died in the last 12 months (sub2), a Chi Square test of independence revealed that among cannabis users aged between 18 and 30 years old, the frequency of cannabis use (explanatory variable) and past year depression diagnosis (response variable) were significantly associated, X2 =37.02, 9 df, p=2.6e-05 (p-value is written in scientific notation). As a result, since the chi-square value is quite large and the p-value is significantly small, we can assume that there is a positive relationship between these two variables, when taking into account the subgroup of individuals who did not lose a family member or a close friend in the last 12 months.
In the bivariate line graph (C->C) presented above, we can see the correlation between frequency of cannabis use (explanatory variable) and major depression diagnosis in the past year (response variable), in the subgroup of individuals for whom no family member or close friend died in the last 12 months (sub2). Obviously, the direction of the distribution indicates a positive relationship between these two variables, which means that the frequency of cannabis use directly affects the proportions of major depression, for the individuals who did not experience a family/close death in the last 12 months.
Summary
It seems that both the direction and the size of the relationship between frequency of cannabis use and major depression diagnosis in the last 12 months are heavily affected by the death of a family member or a close friend in the same period. In other words, when a family/close death is present, the correlation is considerably weak, whereas when it is absent, the correlation is significantly strong and positive. Thus, the third variable moderates the association between cannabis use frequency and major depression diagnosis.
# -*- coding: utf-8 -*-
"""
Created on Sun Mar 17 18:11:22 2019

@author: Voltas
"""

import pandas
import numpy
import seaborn
import scipy.stats
import matplotlib.pyplot as plt
nesarc = pandas.read_csv ('nesarc_pds.csv', low_memory=False)
# Set PANDAS to show all columns in DataFrame
pandas.set_option('display.max_columns', None)

# Set PANDAS to show all rows in DataFrame
pandas.set_option('display.max_rows', None)

nesarc.columns = map(str.upper, nesarc.columns)

pandas.set_option('display.float_format', lambda x:'%f'%x)

# Change my variables to numeric
nesarc['AGE'] = pandas.to_numeric(nesarc['AGE'], errors='coerce')
nesarc['MAJORDEP12'] = pandas.to_numeric(nesarc['MAJORDEP12'], errors='coerce')
nesarc['S1Q231'] = pandas.to_numeric(nesarc['S1Q231'], errors='coerce')
nesarc['S3BQ1A5'] = pandas.to_numeric(nesarc['S3BQ1A5'], errors='coerce')
nesarc['S3BD5Q2E'] = pandas.to_numeric(nesarc['S3BD5Q2E'], errors='coerce')

# Subset my sample: ages 18-30, cannabis users
subset1 = nesarc[(nesarc['AGE']>=18) & (nesarc['AGE']<=30) & (nesarc['S3BQ1A5']==1)]
subsetc1 = subset1.copy()

# Setting missing data
subsetc1['S1Q231'] = subsetc1['S1Q231'].replace(9, numpy.nan)
subsetc1['S3BQ1A5'] = subsetc1['S3BQ1A5'].replace(9, numpy.nan)
subsetc1['S3BD5Q2E'] = subsetc1['S3BD5Q2E'].replace(99, numpy.nan)
subsetc1['S3BD5Q2E'] = subsetc1['S3BD5Q2E'].replace('BL', numpy.nan)

# Frequency of cannabis use variable reverse-recode
recode1 = {1: 9, 2: 8, 3: 7, 4: 6, 5: 5, 6: 4, 7: 3, 8: 2, 9: 1}
# Change the variable name from S3BD5Q2E to CUFREQ
subsetc1['CUFREQ'] = subsetc1['S3BD5Q2E'].map(recode1)

subsetc1['CUFREQ'] = subsetc1['CUFREQ'].astype('category')
# Rename graph labels for better interpretation
subsetc1['CUFREQ'] = subsetc1['CUFREQ'].cat.rename_categories(["2 times/year","3-6 times/year","7-11 times/year","Once a month","2-3 times/month","1-2 times/week","3-4 times/week","Nearly every day","Every day"])
# Contingency table of observed counts of major depression diagnosis (response
# variable) within frequency of cannabis use groups (explanatory variable), in ages 18-30
contab1 = pandas.crosstab(subsetc1['MAJORDEP12'], subsetc1['CUFREQ'])
print (contab1)

# Column percentages
colsum = contab1.sum(axis=0)
colpcontab = contab1/colsum
print (colpcontab)

# Chi-square calculations for major depression within frequency of cannabis use groups
print ('Chi-square value, p value, expected counts, for major depression within cannabis use status')
chsq1 = scipy.stats.chi2_contingency(contab1)
print (chsq1)

# Bivariate bar graph for major depression percentages with each cannabis smoking frequency group
plt.figure(figsize=(12,4)) # Change plot size
ax1 = seaborn.factorplot(x="CUFREQ", y="MAJORDEP12", data=subsetc1, kind="bar", ci=None)
ax1.set_xticklabels(rotation=40, ha="right") # X-axis labels rotation
plt.xlabel('Frequency of cannabis use')
plt.ylabel('Proportion of Major Depression')
plt.show()

# Frequency of cannabis use variable reverse-recode
recode2 = {1: 10, 2: 9, 3: 8, 4: 7, 5: 6, 6: 5, 7: 4, 8: 3, 9: 2, 10: 1}
# Change the variable name from S3BD5Q2E to CUFREQ2
subsetc1['CUFREQ2'] = subsetc1['S3BD5Q2E'].map(recode2)

sub1 = subsetc1[(subsetc1['S1Q231']== 1)]
sub2 = subsetc1[(subsetc1['S1Q231']== 2)]
print ('Association between cannabis use status and major depression for those who lost a family member or a close friend in the last 12 months')
contab2 = pandas.crosstab(sub1['MAJORDEP12'], sub1['CUFREQ2'])
print (contab2)

# Column percentages
colsum2 = contab2.sum(axis=0)
colpcontab2 = contab2/colsum2
print (colpcontab2)

# Chi-square
print ('Chi-square value, p value, expected counts')
chsq2 = scipy.stats.chi2_contingency(contab2)
print (chsq2)

# Line graph for major depression percentages within each frequency group, for those who lost a family member or a close friend
plt.figure(figsize=(12,4)) # Change plot size
ax2 = seaborn.factorplot(x="CUFREQ", y="MAJORDEP12", data=sub1, kind="point", ci=None)
ax2.set_xticklabels(rotation=40, ha="right") # X-axis labels rotation
plt.xlabel('Frequency of cannabis use')
plt.ylabel('Proportion of Major Depression')
plt.title('Association between cannabis use status and major depression for those who lost a family member or a close friend in the last 12 months')
plt.show()

print ('Association between cannabis use status and major depression for those who did NOT lose a family member or a close friend in the last 12 months')
contab3 = pandas.crosstab(sub2['MAJORDEP12'], sub2['CUFREQ2'])
print (contab3)

# Column percentages
colsum3 = contab3.sum(axis=0)
colpcontab3 = contab3/colsum3
print (colpcontab3)

# Chi-square
print ('Chi-square value, p value, expected counts')
chsq3 = scipy.stats.chi2_contingency(contab3)
print (chsq3)

# Line graph for major depression percentages within each frequency group, for those who did NOT lose a family member or a close friend
plt.figure(figsize=(12,4)) # Change plot size
ax3 = seaborn.factorplot(x="CUFREQ", y="MAJORDEP12", data=sub2, kind="point", ci=None)
ax3.set_xticklabels(rotation=40, ha="right") # X-axis labels rotation
plt.xlabel('Frequency of cannabis use')
plt.ylabel('Proportion of Major Depression')
plt.title('Association between cannabis use status and major depression for those who did NOT lose a family member or a close friend in the last 12 months')
plt.show()
0 notes