#Nassociates
batmillersamuelus · 4 years
Photo
Tumblr media
We are NS & Associates, a team of talented and experienced Indian tax agents. We can help you with all of your financial decisions and tax advice. Visit us to learn more about our services and rates.
0 notes
sizzlenut · 4 years
Text
Test a Multiple Regression Model
Multiple Regression
import pandas
import numpy
import scipy.stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf
data=pandas.read_csv('_7548339a20b4e1d06571333baf47b8df_gapminder.csv',low_memory=False)
data['incomeperperson'] = data['incomeperperson'].replace(' ', numpy.nan)
data['lifeexpectancy'] = data['lifeexpectancy'].replace(' ', numpy.nan)
data['alcconsumption'] = data['alcconsumption'].replace(' ', numpy.nan)
data['employrate'] = data['employrate'].replace(' ', numpy.nan)
data['urbanrate'] = data['urbanrate'].replace(' ', numpy.nan)
# errors='coerce' turns any remaining non-numeric entries into NaN
data['lifeexpectancy'] = pandas.to_numeric(data['lifeexpectancy'], errors='coerce')
data['incomeperperson'] = pandas.to_numeric(data['incomeperperson'], errors='coerce')
data['alcconsumption'] = pandas.to_numeric(data['alcconsumption'], errors='coerce')
data['employrate'] = pandas.to_numeric(data['employrate'], errors='coerce')
data['urbanrate'] = pandas.to_numeric(data['urbanrate'], errors='coerce')
sub1 = data[['incomeperperson', 'lifeexpectancy', 'alcconsumption', 'employrate', 'urbanrate']].dropna()
print('\nThe mean of the explanatory variable is:')
print(data['incomeperperson'].mean())
print('\nThe values of incomeperperson after centering:')
# center using the mean of the analysis subset so the centered mean is ~0
sub1['incomeperperson_m'] = sub1['incomeperperson'] - sub1['incomeperperson'].mean()
print(sub1['incomeperperson_m'].mean())
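The point of centering is only to shift the explanatory variable so that its mean is zero; the regression slope is unchanged and the intercept becomes the predicted response at the mean income. A minimal numpy sketch with made-up values (not the gapminder data):

```python
import numpy as np

# toy income values (hypothetical, for illustration only)
income = np.array([500.0, 1200.0, 4300.0, 9000.0, 21000.0])

# center by subtracting the sample mean
income_centered = income - income.mean()

# the centered values now have mean ~0 (up to floating-point error)
print(income_centered.mean())
```

Note that only the location of the variable changes: the spread of the values, and therefore any slope estimated from them, stays exactly the same.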
print('\nassociation between incomeperperson and life expectancy after centering:')
print(scipy.stats.pearsonr(sub1['incomeperperson_m'], sub1['lifeexpectancy']))

print('\nOLS regression model for incomeperperson and life expectancy after centering:')
model2 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m', data=sub1).fit()
print(model2.summary())

#multiple regression: add alcconsumption
print('\nOLS regression model for multiple regression')
model3 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption', data=sub1).fit()
print(model3.summary())

#multiple regression: add employrate
print('\nOLS regression model for multiple regression')
model3 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption + employrate', data=sub1).fit()
print(model3.summary())

#multiple regression: add urbanrate
print('\nOLS regression model for multiple regression')
model3 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption + employrate + urbanrate', data=sub1).fit()
print(model3.summary())
Tumblr media
Interpretation: the explanatory and response variables are related, as the p-value is almost zero. The 95% confidence interval for the intercept is 68.906 to 71.389, and the estimate is 70.1473.
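The interval quoted above is read straight off the statsmodels summary. To illustrate where such an interval comes from, here is a rough normal-approximation sketch for the mean of a toy sample (the data values and the 1.96 multiplier are assumptions for illustration, not the assignment's numbers, which use the exact t-distribution):

```python
import numpy as np

# toy life-expectancy sample (hypothetical values)
y = np.array([68.0, 70.5, 71.2, 69.8, 72.0, 70.1, 69.5, 71.8])

mean = y.mean()
se = y.std(ddof=1) / np.sqrt(len(y))          # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se   # approximate 95% interval

print(f"{lo:.3f} to {hi:.3f}, estimate {mean:.4f}")
```

The summary table applies the same idea to each regression coefficient: estimate plus or minus a critical value times its standard error.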
Tumblr media
Interpretation (multiple regression): alcohol consumption does not confound the relationship between the primary variables, although it is itself related to the response variable. We can see an improvement in the R-squared value.
Tumblr media
Interpretation (multiple regression): employment rate does not confound the relationship between the primary variables, although it is related to the response variable and it affects the relationship between alcohol consumption and the response variable. The R-squared value increases further.
Tumblr media
Interpretation (multiple regression): urban rate does not confound the relationship between the primary variables, although it is strongly related to the response variable and it affects the relationship between employment rate and the response variable. The R-squared value increases further.
Regression diagnostic
#multiple regression
print('\nOLS regression model for multiple regression')
model3 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption', data=sub1).fit()
print(model3.summary())

#multiple regression
print('\nOLS regression model for multiple regression')
model4 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption + employrate', data=sub1).fit()
print(model4.summary())

#multiple regression
print('\nOLS regression model for multiple regression')
model5 = smf.ols(formula='lifeexpectancy ~ incomeperperson_m + alcconsumption + employrate + urbanrate', data=sub1).fit()
print(model5.summary())
#QQ plot for normality
fig1 = sm.qqplot(model5.resid, line='r')

#simple plot of standardized residuals
stdres = pandas.DataFrame(model5.resid_pearson)
fig2 = plt.plot(stdres, 'o', ls='None')
l = plt.axhline(y=0, color='r')
plt.ylabel('Standardized Residual')
plt.xlabel('Observation Number')
print(fig2)

#additional regression diagnostic plots
fig3 = plt.figure(figsize=(12, 8))
fig3 = sm.graphics.plot_regress_exog(model5, 'incomeperperson_m', fig=fig3)
print(fig3)
Tumblr media
Interpretation: the lower and higher values do not follow the linear regression line, so the relationship should perhaps be modeled as curvilinear.
Tumblr media
Interpretation: looking at the vertical axis, most of the standardized residuals lie between -1 and 1, so the amount of error is acceptable. The few observations beyond the additional horizontal line represent an unacceptable amount of error.
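The rule of thumb behind this reading: if the residuals are well behaved, roughly 68% of standardized residuals fall within plus or minus 1 and about 95% within plus or minus 2, so only a few percent should land beyond the outer lines. A small sketch with simulated standard-normal residuals (simulated data, not the model's actual residuals):

```python
import numpy as np

rng = np.random.default_rng(0)
stdres = rng.standard_normal(1000)  # simulated standardized residuals

# fraction of observations beyond the +/-2 lines; ~0.05 under normality
frac_beyond_2 = np.mean(np.abs(stdres) > 2)
print(frac_beyond_2)
```

If a real model produced a much larger fraction beyond the outer lines, that would flag poor fit or outliers.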
Tumblr media
Interpretation: the residual plot is somewhat funnel-shaped, and the CCPR plot suggests the same. The partial regression plot suggests a linear relationship, although there are some outlying values.

Note: the summary output appears below every graph.
0 notes
dskashyap · 5 years
Text
Effect of moderator in regression
# -*- coding: utf-8 -*-
"""
Spyder Editor
DK, 03/24/2020

Script to test the correlation between income levels and life expectancy
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
import statistics as st
import scipy.stats as scst
import seaborn
import matplotlib.pyplot as plt
info = pd.read_csv('gapminder.csv')#, low_memory=False)
#Using low values to replace missing values to minimize data skew
for i in range(len(info.lifeexpectancy)):
    # use .loc to avoid chained-assignment warnings
    if info.loc[i, 'alcconsumption'] == ' ':
        info.loc[i, 'alcconsumption'] = 0.05
    if info.loc[i, 'lifeexpectancy'] == ' ':
        info.loc[i, 'lifeexpectancy'] = 50
    if info.loc[i, 'incomeperperson'] == ' ':
        info.loc[i, 'incomeperperson'] = 100
info['lifeexpectancy'] = info.lifeexpectancy.astype(float)
info['alcconsumption'] = info.alcconsumption.astype(float)
info['incomeperperson'] = info.incomeperperson.astype(float)
#%% Effect of alcohol consumption on the relation between income and life expectancy
alc_cat = []
alc = np.array(np.quantile(info.alcconsumption, [0.25, 0.5, 0.75, 1], axis=0))
for i in range(len(info.alcconsumption)):
    if info.alcconsumption[i] < alc[0]:
        alc_cat.append(0.25)
    elif info.alcconsumption[i] < alc[1]:
        alc_cat.append(0.5)
    elif info.alcconsumption[i] < alc[2]:
        alc_cat.append(0.75)
    else:
        alc_cat.append(1)
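The quartile loop above can also be written without an explicit loop; this is a sketch using np.searchsorted against the same quantile cut points (the toy values are made up, and the labels mirror the 0.25/0.5/0.75/1 scheme used above):

```python
import numpy as np

# toy consumption values (hypothetical)
vals = np.array([0.5, 2.0, 5.0, 9.0, 12.0, 1.0, 7.0, 15.0])

# quartile cut points, as in the loop above
cuts = np.quantile(vals, [0.25, 0.5, 0.75])

labels = np.array([0.25, 0.5, 0.75, 1.0])
# side='right' matches the loop's strict '<' comparisons:
# a value below the first cut gets bucket 0, and so on
cat = labels[np.searchsorted(cuts, vals, side='right')]
print(cat)
```

The result is identical to running the loop over the same values, but it vectorizes over the whole column at once.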
#dfx = pd.DataFrame(inccat, alc_cat, cncr_cat, columns = ['incomecategory', 'alcoholcons', 'cancerrate'])
dfx = pd.DataFrame(alc_cat, columns=['alcoholcons'])
df_fin = pd.concat([info, dfx], sort=False, axis=1)  # concatenate onto the cleaned frame
#data_clean['incomegrp'] = data_clean.apply (lambda row: incomegrp (row),axis=1) #chk1 = data_clean['incomegrp'].value_counts(sort=False, dropna=False) #print(chk1)
alc_cat01 = df_fin[(df_fin.alcoholcons == 0.25)]
alc_cat02 = df_fin[(df_fin.alcoholcons == 0.5)]
alc_cat03 = df_fin[(df_fin.alcoholcons == 0.75)]
alc_cat04 = df_fin[(df_fin.alcoholcons == 1)]

print('\nassociation between lifeexpectancy and incomeperperson OVERALL')
print(scst.pearsonr(df_fin['lifeexpectancy'], df_fin['incomeperperson']))
print('\nassociation between lifeexpectancy and incomeperperson for LOWER QUARTILE alcoholism')
print(scst.pearsonr(alc_cat01['lifeexpectancy'], alc_cat01['incomeperperson']))
print('\nassociation between lifeexpectancy and incomeperperson for 2nd QUARTILE alcoholism')
print(scst.pearsonr(alc_cat02['lifeexpectancy'], alc_cat02['incomeperperson']))
print('\nassociation between lifeexpectancy and incomeperperson for 3rd QUARTILE alcoholism')
print(scst.pearsonr(alc_cat03['lifeexpectancy'], alc_cat03['incomeperperson']))
print('\nassociation between lifeexpectancy and incomeperperson for UPPER QUARTILE alcoholism')
print(scst.pearsonr(alc_cat04['lifeexpectancy'], alc_cat04['incomeperperson']))
#The overall p-value, without any moderation, suggests that we should reject the null hypothesis.
#Further investigation suggests that with low alcoholism, that may not be true.
Tumblr media
0 notes
Fast Memory Improvement
Improving memory will give you a lot of benefits for your student's life as well as for your daily routine.

Associations
When working on memorizing something, create your own associations with each topic or piece of reading. They may not make any sense to anyone but you; they can be strange, funny, unexpected. The idea is that when you create an association with something, you create another link in your memory, and it helps you to divide blocks of information into smaller pieces and gives you a chance to find every block faster in the storehouse of your memory. Your associations will become keys for every block; when remembering it, you will have easy access to the whole piece of information.

Sleep on it
Unfortunately, it doesn't mean that if you put a book under your pillow, the next morning you will wake up knowing everything that the book has in it. It would have made our lives so much easier, wouldn't it? The idea of sleeping on something you are trying to memorize means that it is very useful to study in the evening, before you go to sleep. Make sure that you do not watch TV or read a relaxing book afterwards; go to sleep as soon as you are finished with studying. Our brain keeps on working while we are asleep: it generalizes and sorts out all the information it has received through the day. This way your brain will put what you had learnt in a reachable place, so your memory has easy access to it when it needs it. This simple trick is used not only by students all over the world, but by many actors as well.

Who knows, maybe Johnny Depp learns his parts in the same way, and by sleeping on what you had learnt you will have something in common with him!

Make breaks
You surely have noticed that after some time spent on working or learning, your brain starts to work much slower: it becomes harder for you to concentrate, and you keep catching yourself thinking about something else. It is very important to give yourself breaks, so your brain can reload itself. It is said that we rest when we change the type of activity. On those breaks it is important to get some fresh air and to do some physical exercises; it is a great idea to go for a walk with a dog or to go jogging. Don't think that you lose time with such breaks: they help you to focus and to be more attentive.

Move
Not only is moving while you are on a break very useful, it also plays an important role in the act of studying. Walk, jump, tiptoe while studying: it stimulates the work of the brain and helps your memory. It is also useful to change the place where you are learning; when you start working on a new chapter or a new block of information, go to another room or go to study outside. Changing the surroundings contributes to your brain activity and helps you to remember more.

Eat brain-friendly food
It is always important to have enough sleep and to eat healthy food. It is an interesting fact that there are some types of food that stimulate brain activity and improve your memory! Among them are nuts (especially walnuts), pumpkin seeds, fish, dark chocolate, cocoa, avocados, wholemeal products, blueberries, broccoli, spinach, tomatoes, and olive oil. Enjoy tasty food and improve your memory!

If you want to get a full essay, order it on our website: custom essay writing service, free essay/order revisions, essays of any complexity, courseworks, term papers, research papers. 100% confidential! Homework live help. Custom Essay Order is available 24/7!
0 notes
sizzlenut · 4 years
Text
Centering of quantitative explanatory variable
Program
print('\nThe mean of the explanatory variable is:')
print(data['incomeperperson'].mean())

print('\nThe values of incomeperperson after centering:')
data['incomeperperson'] = data['incomeperperson'] - data['incomeperperson'].mean()
print(data['incomeperperson'])

print('\nassociation between incomeperperson and life expectancy after centering:')
print(scipy.stats.pearsonr(sub1['incomeperperson'], sub1['lifeexpectancy']))

print('\nOLS regression model for incomeperperson and life expectancy after centering:')
# life expectancy is the response; the centered income is the explanatory variable
model2 = smf.ols(formula='lifeexpectancy ~ incomeperperson', data=data).fit()
print(model2.summary())
Output
Tumblr media Tumblr media
0 notes
sizzlenut · 4 years
Text
Testing a Potential Moderator
a) Defining moderation, a.k.a. statistical interaction
import pandas
import numpy
import statsmodels.formula.api as smf

data = pandas.read_csv('_7548339a20b4e1d06571333baf47b8df_gapminder.csv', low_memory=False)
copy = data.copy()

copy['lifeexpectancy'] = copy['lifeexpectancy'].replace(' ', numpy.nan)
copy['lifeexpectancy'] = pandas.to_numeric(copy['lifeexpectancy'], errors='coerce')
#categorical variable
copy['Glifeexpactancy'] = pandas.cut(copy.lifeexpectancy, bins=[0, 50, 75, 100], labels=['50', 'NaN', '75'], right=True, include_lowest=True)
copy['Glifeexpactancy'] = pandas.to_numeric(copy['Glifeexpactancy'], errors='ignore')

#explanatory variable
copy['employrate'] = copy['employrate'].replace(' ', numpy.nan)
copy['employrate'] = pandas.to_numeric(copy['employrate'], errors='coerce')

model1 = smf.ols(formula='employrate ~ C(Glifeexpactancy)', data=copy).fit()
print(model1.summary())
Tumblr media
b) Testing moderation in the context of ANOVA
#moderator is suicides per 100,000
copy['suicideper100th'] = copy['suicideper100th'].replace(' ', numpy.nan)
copy['suicideper100th'] = pandas.to_numeric(copy['suicideper100th'], errors='coerce')
copy['qsuicideper100th'] = pandas.qcut(copy.suicideper100th, q=2, labels=['low', 'high'])

sub2 = copy[(copy['qsuicideper100th'] == 'low')]
sub3 = copy[(copy['qsuicideper100th'] == 'high')]

print('\nassociation of employment rate and life expectancy with low suicide rate')
model2 = smf.ols(formula='employrate ~ C(Glifeexpactancy)', data=sub2).fit()
print(model2.summary())

print('\nassociation of employment rate and life expectancy with high suicide rate')
model3 = smf.ols(formula='employrate ~ C(Glifeexpactancy)', data=sub3).fit()
print(model3.summary())

print("\n mean for sub2")
m1 = sub2.groupby('Glifeexpactancy').mean()
print(m1)

print("\n mean for sub3")
m2 = sub3.groupby('Glifeexpactancy').mean()
print(m2)
Tumblr media Tumblr media Tumblr media
c) Testing moderation in the context of Chi-Square
import pandas
import numpy
import scipy.stats

data = pandas.read_csv('_7548339a20b4e1d06571333baf47b8df_gapminder.csv', low_memory=False)
copy = data.copy()
#response variable
data['lifeexpectancy'] = data['lifeexpectancy'].replace(' ', numpy.nan)
data['lifeexpectancy'] = pandas.to_numeric(data['lifeexpectancy'], errors='coerce')
copy['Glifeexpactancy'] = pandas.cut(data.lifeexpectancy, bins=[0, 50, 75, 100], labels=['<50', '50-75', '75<'], right=True, include_lowest=True)
#explanatory variable
copy['suicideper100th'] = copy['suicideper100th'].replace(' ', numpy.nan)
copy['suicideper100th'] = pandas.to_numeric(copy['suicideper100th'], errors='coerce')
copy['qsuicideper100th'] = pandas.qcut(copy.suicideper100th, q=2, labels=['low', 'high'])
#moderator
copy['incomeperperson'] = copy['incomeperperson'].replace(' ', numpy.nan)
copy['incomeperperson'] = pandas.to_numeric(copy['incomeperperson'], errors='coerce')
copy['qincomeperperson'] = pandas.qcut(copy.incomeperperson, q=2, labels=['low', 'high'])

sub2 = copy[(copy['qincomeperperson'] == 'low')]
sub3 = copy[(copy['qincomeperperson'] == 'high')]

print('\n____for sub2____')
#contingency table of observed counts - sub2
print('\ncontingency table of observed counts')
ct1 = pandas.crosstab(sub2['Glifeexpactancy'], sub2['qsuicideper100th'])
print(ct1)
#chi-square - sub2
print('\nchi-square')
print('chi-square value, p value, expected counts')
cs1 = scipy.stats.chi2_contingency(ct1)
print(cs1)

print('\n____for sub3____')
#contingency table of observed counts - sub3
print('\ncontingency table of observed counts')
ct2 = pandas.crosstab(sub3['Glifeexpactancy'], sub3['qsuicideper100th'])
print(ct2)
#chi-square - sub3
print('\nchi-square')
print('chi-square value, p value, expected counts')
cs2 = scipy.stats.chi2_contingency(ct2)
print(cs2)
Tumblr media
d) Testing moderation in the context of correlation
import pandas
import numpy
import scipy.stats
import matplotlib.pyplot as plt

data = pandas.read_csv('_7548339a20b4e1d06571333baf47b8df_gapminder.csv', low_memory=False)
copy = data.copy()
#moderator
copy['incomeperperson'] = copy['incomeperperson'].replace(' ', numpy.nan)
copy['incomeperperson'] = pandas.to_numeric(copy['incomeperperson'], errors='coerce')
copy['qincomeperperson'] = pandas.qcut(copy.incomeperperson, q=2, labels=['low', 'high'])
#variables
copy['alcconsumption'] = copy['alcconsumption'].replace(' ', numpy.nan)
copy['alcconsumption'] = pandas.to_numeric(copy['alcconsumption'], errors='coerce')
copy['suicideper100th'] = copy['suicideper100th'].replace(' ', numpy.nan)
copy['suicideper100th'] = pandas.to_numeric(copy['suicideper100th'], errors='coerce')

sub1 = copy[(copy['qincomeperperson'] == 'low')].dropna()
sub2 = copy[(copy['qincomeperperson'] == 'high')].dropna()

print('association for sub1')
print(scipy.stats.pearsonr(sub1['suicideper100th'], sub1['alcconsumption']))
plt.xlabel('suicideper100th')
plt.ylabel('alcconsumption')
plt.scatter(sub1['suicideper100th'], sub1['alcconsumption'])
plt.show()

print('\n\nassociation for sub2')
print(scipy.stats.pearsonr(sub2['suicideper100th'], sub2['alcconsumption']))
plt.xlabel('suicideper100th')
plt.ylabel('alcconsumption')
plt.scatter(sub2['suicideper100th'], sub2['alcconsumption'])
plt.show()
Tumblr media
SUMMARY/CONCLUSION
In part (a), the p-value is greater than 0.05, so there appears to be no relation between employment rate and life expectancy, but we will still run further analysis to be sure.

In part (b), the p-value is also very large, rejecting the possibility that a relation exists between employment rate and life expectancy, even when considering suicide rate (suicides per 100,000) as a moderator.

In part (c), the chi-square value is not very large and the p-value is much larger than 0.05. Hence there is no relation between suicide rate and life expectancy, either overall or when considering income per person as a moderator.

In part (d), we can see that there exists a linear relationship between alcohol consumption and suicide rate for sub2, that is, countries with high income per person, whereas there is no relation between alcohol consumption and suicide rate for sub1, that is, countries with low income per person.
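The subgroup-correlation logic of part (d) can be sketched end to end with synthetic data in which the moderator really does change the strength of the association (everything below is simulated for illustration, not gapminder data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# simulate a 'high income' stratum with a strong linear relation ...
x_high = rng.normal(size=n)
y_high = 0.9 * x_high + 0.3 * rng.normal(size=n)

# ... and a 'low income' stratum with no relation at all
x_low = rng.normal(size=n)
y_low = rng.normal(size=n)

# Pearson correlation within each stratum
r_high = np.corrcoef(x_high, y_high)[0, 1]
r_low = np.corrcoef(x_low, y_low)[0, 1]
print(r_high, r_low)  # strong vs. near-zero correlation
```

A large gap between the stratum correlations, as here, is exactly the pattern that suggests the third variable moderates the association between the other two.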
0 notes