Sadea Shahan
Hi there! I am a freelance writer with experience in biological research, psychological research and language education. My husband and I collaborate on a photo blog called 'Tales of Trails'. Photo Blog: https://talesoftrails.com/
Our Spectacular Countryside Retreat!
This was our first long-distance trip with our eleven-month-old daughter, and also the first time we stayed overnight in a hotel with our little one. As you can imagine, my husband and I were both very apprehensive. We were worried about how our girl would behave on the 3+ hour train ride, as she had recently been very difficult to manage even on 10-minute train rides!
But fortunately for us, everything went very smoothly. Our daughter did fuss a little, but it was nowhere near what we expected! She actually enjoyed the train ride, the hotel stay and the rural landscape. And she was utterly astonished at seeing the sun rise over the Pacific Ocean, not to mention the huge waves, right in front of her eyes. The sunrise was spectacular, but her expression was simply priceless!
We visited the countryside of Chiba Prefecture in Japan. We stayed at an inn on the Boso Peninsula, which is famous among surfers for its beaches, for its rural landscape and, of course, for its position facing the vast Pacific Ocean. The inn was only a two-minute walk from the water! On the way there, we took in a panoramic view of the ocean and of snow-capped Mount Fuji from the summit of Mount Nokogiri. I could never have imagined that I would visit the Pacific coast one day, let alone stay there overnight, watch the sun rise on New Year's Day and celebrate it all with my eleven-month-old daughter, who was left speechless by the view!
Needless to say, I am truly grateful for this once-in-a-lifetime experience that I was able to enjoy so peacefully with my family on New Year's Day. On that note, I would like to wish you all a very Happy New Year! May all of you have such lovely experiences, and more, throughout this year and the years to come!
Life Lessons Learned from Outliers: The Story of Success by Malcolm Gladwell
Outliers is a phenomenal book of short stories about how people from various fields attained success, or became "outliers," as Gladwell defines them. It is such a thought-provoking book that I want to share some of the knowledge I gained from it with you. So let's jump straight into some of these factors that fuel success, as this is a rather long blog post!
1. The Matthew Effect AKA Success Fuels Success
Those with a comparative advantage, because they were born earlier in the year or just made the cut-off date for entering a team, are more likely to succeed. They are more likely to succeed simply because they were born at the right time. Their birth date, specifically their birth month, gives them this comparative advantage over peers born later in the year. So, a word to the wise for potential parents: plan your family so that your baby arrives early in the year. Sounds silly, doesn't it? Well, unfortunately, this is the reality we live in: in this day and age, cut-off dates quietly sort out who gets the extra attention and opportunities that fuel later success.
2. The 10,000-Hour Rule AKA Practice Makes Perfect
Researchers agree that the magic number for true expertise is 10,000 hours of practice. That is, 10,000 hours of practice are required to achieve world-class mastery of any given subject. It appears to take the brain that long to learn everything it needs to achieve true mastery of a field, and this holds for everyone, from the greatest genius to the slowest learner on the planet. Ten thousand practice hours roughly translates to ten years! To have those hours under your belt, you need to have spent a significant amount of time practicing whatever it is you want to truly master. But unfortunately, not all of us have that much spare time and opportunity, due to financial or other limitations. So in essence, it is luck, in the end, that decides who gets to practice for those 10,000 hours!
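Just to sanity-check that "roughly ten years" figure, here is a tiny back-of-the-envelope sketch (the daily practice loads below are my own illustrative assumptions, not numbers from the book):

```java
// How long does it take to accumulate 10,000 hours at different daily loads?
public class PracticeYears {
    public static void main(String[] args) {
        final double targetHours = 10_000;
        // Illustrative schedules: a hobbyist, a serious student, a full-timer.
        for (double hoursPerDay : new double[] {1, 3, 8}) {
            double years = targetHours / (hoursPerDay * 365);
            System.out.printf("%.0f h/day -> %.1f years%n", hoursPerDay, years);
        }
    }
}
```

At about three hours of practice every single day, the total works out to just over nine years, which is exactly why Gladwell ties the rule to sustained opportunity and not just raw talent.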
3. Do all geniuses become successful?
No. Why? Because it is rare to find geniuses who both have a high IQ and are socially savvy. IQ is, to some degree, a measure of innate ability, whereas social savvy is a set of skills that has to be learned from our families. And let's face it: not all geniuses are born into supportive families. So what becomes of those who are not? These geniuses never become famous; they lead very average lives, because they fear authority and lack the social skills to challenge it. The geniuses who attain fame, on the other hand, are the ones with supportive families, the ones who learned at home how to be socially savvy. These are the people who know how to get their way; they know how to talk and when to talk.
4. Is there a golden era for lawyers, entrepreneurs and the like?
Yes. There is a right time and place to seize every opportunity. Historically, to become a great lawyer in New York, it was ideal to be born in the early 1930s, to have parents who did meaningful work and to come from an immigrant family. Just as the early 1930s were prime years for a future New York lawyer, 1955 was the magic year for a software programmer and 1835 was the perfect time for an entrepreneur. As you might have guessed, the giants of these fields (e.g. Bill Gates, Steve Jobs, etc.) were all born during the golden period for their line of work. Quite a coincidence, isn't it? It also spells tragedy for those who were just as competent but were not born in the golden years for their intended field.
5. Can cultural legacies define success?
Yes, absolutely! Where we come from plays a central role in where we end up in life. As much as we think our culture, family upbringing and legacy will fade away, they stay with us and leave a lasting impact. Gladwell illustrates this with a string of plane crashes that had a cultural explanation. Communication among the first officer, the flight engineer and the captain in the cockpit varies depending on their country of origin. In some countries, such as the U.S.A., a flight engineer can speak very directly to the captain about any concerns with the captain's decision making, and the two are considered to be on the same level. In a place like South Korea, by contrast, flight engineers could not voice their opinions directly to the captain, because doing so was socially unacceptable. These communication differences can cause massive problems when a tired captain fails to pick up on the flight engineer's indirect hints. We might assume that plane crashes happen because of major engineering failures, but they often happen because of this kind of cultural miscommunication, combined with tired pilots and other minor technical problems that together create fatal damage.
6. Is there any logical reason why Asians are so good at math?
Yes, there is! First of all, it is easier to think about numbers in many Asian languages than in English. These linguistic differences make Asian number systems more transparent, systematic and regular than the comparatively ambiguous and irregular number words of Western languages. Have you ever wondered why there are so many rice paddies in parts of Asia like Japan, China and Korea? They are there because people were willing to put in the time and effort needed to cultivate the rice that is so central to these cultures' diets. Of all the farming done historically, rice farming has been documented as the most labor-intensive, which makes Asian rice farmers some of the hardest workers in the world. It is also one of the most mechanical and technical types of work. No wonder, then, that the descendants of rice farmers excel at math: they inherited this technical, mechanical mentality from an intensive agricultural lifestyle. They are also likely to stick with a problem longer than Westerners before giving up, which naturally translates to higher success in math. And where did this trait come from? You got it: from the endless hours spent toiling under the scorching sun to make sure the season's harvest would give a productive yield.
7. A fact shown by studies: children from low-income families perform just as well as their counterparts from high-income families.
This shocking fact holds when you look at standardized tests given at the end of the school year, right before students go into summer break (in the U.S., that is). However, when the standardized tests were given at the beginning of the school year, right after summer vacation, the students from high- and middle-income families outperformed those from low-income homes. Why is that? Because of differences in how their families spend the summer. Kids from middle- and high-income families are more likely to attend summer camps that strengthen their reading and math, to have libraries at home where parents encourage reading and, of course, to have parents who implement structured play and study time. Kids from low-income families, with no summer camp, no library at home and no structured play or study time, spend their summer vacation just having fun while their middle- and high-income peers advance their education. This is the achievement gap that causes low-income students to fall behind, and it is mainly due to the long summer vacations built into the U.S. school calendar.
All of the points above are examples of outliers that you have encountered at some point in your life. But have you ever thought about them as carefully as Malcolm Gladwell does, so smoothly, in this profound book? If not, this was definitely the right place to start, and I hope you too can glean some insights from these unique and powerful stories.
What I Have Realized in My Mid-Twenties
1. Experience is everything!
I can't emphasize this point enough! Seriously, we know to go to our elders for advice because they have more experience. Similarly, it's our own experiences (mistakes and misunderstandings) that lead us, hopefully, to a more improved current self. There are so many items that my spouse and I bought when we first got married that we now wonder how we could ever have thought to buy! But when we bought them in our inexperienced past, we thought we had found the perfect item at the best deal. Looking back makes us realize how much more time, effort and thought we now put in before purchasing anything. This leads to my second point:
2. Your search engine becomes 10x better than that of Google!
Because of all the research you do to find the best product or the best travel deal, you learn so many things along the way that you would never have imagined. Your inner search engine practically becomes ten times better than Google's, because after all that research you know exactly what you are looking for. At first it seems very tedious and time-consuming, but the end result is a fabulous gain in knowledge. You learn so much about that item or offer that you could probably write an article about it.
3. Family time beats everything
No matter how much you love your work or studies, at the end of the day, it's your family that will be there for you. The family I am referring to here is your spouse and kids, because, let's face it, our definition of family changes as we mature. This realization took some time; it did not click immediately, even after my daughter was born. Don't get me wrong though! Your mom, dad, siblings and other relatives are also important, but you slowly learn to give more priority to the family you work so hard to create in your own home.
4. Learning to enjoy cooking
I have to make a small confession, and anyone who knows me well enough should know this already! I disliked cooking so much that I hated even the thought of walking into the kitchen. That was before I got married; two years into my marriage, the situation remained the same. Now, in my third year of marriage, I can confidently say that I can cook! Moreover, I enjoy cooking now. Every day and every night, I am thinking about what ingredients I need, what I want to eat and what I should cook. I never thought meal preparation would take up so much of my brain matter!
What My Half-Year-Old Taught Me
1. Keep in mind everything is temporary
-You can’t eat out at your favorite restaurant with your significant other…..don’t worry! In a few months, when your baby enjoys eating solids, they will also enjoy going out to restaurants and of course join in on your meals (even without your permission)!
-You are getting tired of waking up every three hours at night to feed… even this will pass. Soon enough, your baby will sleep for longer stretches, and even if you do have to breastfeed at night, you will become comfortable feeding while lying down, so you won't even need to get out of bed! Phew… what a blessing! Sadly, you still have to wake up, but trust me, waking every few hours becomes second nature, so it won't be as painful as it is in the first month or so while you are still adjusting to the schedule.
2. Your baby will grow up very quickly!
-Literally, your baby will develop both mentally and physically every single day. If I were you, I would not buy many items in the first few months, as you are likely to receive a lot of gifts for your little one.
-Your baby will develop so quickly and learn so many new tricks that even you will be amazed! Your baby will learn what they like, as well as what you like and dislike. They will try to grab your phone, enjoy watching videos of themselves and love looking for their reflection in the mirror.
3. Now you and your partner can completely sympathize with other mommies
-Before you had your baby, you always thought: ugh, why is that lady bringing such a bulky stroller onto this crowded train? Can't she just avoid the crowded lines and rush hour? Fast forward to the present, and now you are the one making space for another mommy with a stroller while holding your own ground with your bulky stroller. This is when you realize you are not the only one struggling, and it's a really good feeling to know you have a support system of others facing exactly what you are facing. As a bonus, you might even learn something from these random mommies just by observing! (Like figuring out what kind of drinking bottle may be ideal for your baby by comparing the ones you see during your commute.)
4. Priority seats are your new friend!
-Thank goodness for priority seats; they are a must when you have your baby in a carrier or a stroller. Priority seats are usually at the ends of each car, which makes them very convenient spots to park a stroller, since few people enter or exit the train there and your stroller won't be in anyone's way.
5. Babies are too cute to scold!
-This is the dangerous truth! As stressed, mad or frustrated as you get with your baby, one look at their face makes you feel guilty about your negative emotions. Gosh, babies, why do you have to be so cute?? And even if you try to scold them lightly, they will most likely respond cluelessly or with a smile… so there is no point wasting your breath and your brain matter on negativity.
6. Storage space becomes a limitation
-Trust me, you will need a lot of toys to keep your baby entertained as they grow. At two or three months of age, your baby will be content playing with one toy for a while, since they still nap for long stretches. But as your baby grows, nap time < awake time, and in that awake time you need to entertain your baby. You will soon learn that they get bored very easily, and one toy won't last them even 15 minutes. That's when you realize you literally need a whole room just for your baby's stuff.
7. Your baby will make you more social
-Whenever you step outside, a friendly stranger will come up to tell you how cute your baby is, play with your baby and talk to your baby. Of course, none of this happens without your own active participation! So you have to talk, meet new people, interact with other humans and, overall, become more social (if you weren't already)!
Babies come with so many benefits… just writing this blog post made me realize it. Feeling thankful and blessed for all of these wonderful perks of having a baby! Rock on, babies!
Why We Can Emulate the Japanese Education System
Prior to moving to Japan, I lived in New York. Two months after arriving in Japan, I started working at an international school in Hiroshima. I had my first culture shock right after leaving the airport, when I found almost everything around me written in Japanese (even the labels on the CD player in the taxi)! I quickly realized that most people here do not communicate in English, so I was very nervous when I started working at the international school. But to my utter surprise, I adjusted successfully to the work environment despite my lack of Japanese language skills.
I am indeed very fortunate to work at this school with such a diverse and supportive staff. Furthermore, working here has given me a greater appreciation for the Japanese lifestyle and schooling system. In a recent discussion, my supervisor and I reflected on how different the Japanese education system is from the American one.
One of these differences is in disciplinary education. At this international school, I have observed how discipline is taught from a very early age and nurtured as children grow. I also have the unique opportunity to work with children across age groups (from two to ten). I cannot overstate how much emphasis the Japanese education system places on teaching discipline while still giving students room to have fun. The American education system does not usually foster the same level of discipline. This may be because the U.S. has a far more diverse student population in terms of race, religion and ethnicity, whereas Japanese society is more homogeneous; different modes of teaching may therefore be necessary within a single American class to develop the level of discipline that Japanese students embody.
Another stark difference between the U.S. and Japanese education systems is visible in lunch etiquette. In the U.S., school lunches are often served from frozen and are not as fresh or tasty as home-cooked meals, so American children add a lot of unhealthy condiments, such as ketchup, to give their food flavor. In Japan, school lunches are prepared the way they would be at home, and they are very nutritious, as the food is grown locally. Furthermore, Japanese children are taught to finish their entire lunch box (called a 'bento'). To facilitate this, most schools in Japan hire nutritionists who work with students who are picky or unhealthy eaters. From a very young age, then, Japanese children are taught to eat healthy, balanced meals, whereas American children are not really immersed in these values.
The Japanese education system, especially as practiced in its preschools, can be emulated in other parts of the world. Not only does it nurture discipline and a healthy lifestyle, but it also lays the basis for a strong higher education system.
How Android 5.0 Lollipop Screwed Up My Life!
As die-hard Android fans, our hearts bleed to write this post. But the truth must be told! Who cares about the new so-called "material design" when a shiny, expensive smartphone cannot perform the simple tasks a Nokia 1100 handled back in 2003?
Silent mode with vibration off: Yes, the most basic smartphone function. Before upgrading to Lollipop, all you had to do was press the volume button to lower the ringtone all the way, then press the same button once more to turn off vibration. As of today, you cannot do that on Lollipop. If you press the volume button, you will see three options on the screen: None, Priority, and All. Now we are expected to go through this extra step just to silence the ringtone (without also killing the notifications for our alarms)? Well, let us tell you, Lollipop: life is too short to figure out how to silence this phone.
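For the technically inclined, here is a minimal sketch of what "silent" used to mean to an app before Lollipop (the helper class and its method names are ours, purely illustrative; AudioManager.setRingerMode is the long-standing Android call). As far as we can tell, Lollipop reroutes this same ringer-mode setting through the new None/Priority/All interruption modes, which is exactly where the confusion comes from:

```java
import android.content.Context;
import android.media.AudioManager;

// Illustrative helper: pre-Lollipop, one ringer-mode call covered every case.
public final class RingerHelper {
    private RingerHelper() {}

    /** Mute the ringtone AND turn vibration off in one step. */
    public static void silence(Context context) {
        audioManager(context).setRingerMode(AudioManager.RINGER_MODE_SILENT);
    }

    /** Mute the ringtone but keep vibration on. */
    public static void vibrateOnly(Context context) {
        audioManager(context).setRingerMode(AudioManager.RINGER_MODE_VIBRATE);
    }

    /** Restore normal ringing. */
    public static void restore(Context context) {
        audioManager(context).setRingerMode(AudioManager.RINGER_MODE_NORMAL);
    }

    private static AudioManager audioManager(Context context) {
        return (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }
}
```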
Connecting to WiFi: "What is your WiFi password?" is perhaps one of the top 10 FAQs in the world. But if you have Lollipop installed, you may first have to ask, "Excuse me, do you have WiFi here?", because with Lollipop it takes forever just to list the WiFi networks active at a given location. You will be lucky to find and connect to the WiFi at McDonald's by the time you have finished your meal and are heading out!
Tethering: If you have an unlimited data plan on your phone, I bet you are a huge fan of tethering, and most likely you prefer tethering via USB over wireless. But it's 2014 and you are using Lollipop, and USB sounds more like a 1998 gadget, at least to Lollipop. Lollipop fails to recognize a connected USB cable almost all of the time, so you can't choose to tether over USB; you have to create a WiFi hotspot instead. Thanks, Android 5.0!
Google Maps and the Facebook app: The Facebook app fails to load new posts into the newsfeed even when you are connected to the internet, and Google Maps has no idea where you are when you turn on navigation. We are not sure whether these are bugs in the respective apps, but the loss of such basic functions is definitely a struggle to deal with at times.
All that said, we have to admit Lollipop does have some cool features, such as the Battery Saver mode, and so far the OS seems to run quite fast. But it is really striking that Google shipped such a major version upgrade without fixing these seemingly trivial but practically significant issues. Even worse, although it has been quite a while since the release, they have yet to roll out any updates for these bugs. So if you are reading this, have not yet upgraded to Android 5.0 Lollipop and your phone keeps reminding you to do so, please don't upgrade yet (you can think about upgrading after Google releases a fix for these bugs)!
 Written in collaboration with Zilhaz Jalal Chowdhury
Risk Factors Associated with Psychosurgery
Psychosurgery (frontal leucotomy) has proven to cause more harm than benefit, which raises questions about whether it should be performed at all. The effects of these surgeries need further study to determine whether they are effective in the long term. Since psychosurgery directly alters the neural basis of an individual's thoughts and behavior, it has great potential to modify one's personality altogether. Do we as a society wish to risk altering an individual's personality in order to treat a psychological illness? Furthermore, psychosurgery only provides a treatment for psychiatric disorders; it does not cure them. Therefore, the risks and benefits of psychosurgery need to be validated reliably and on a larger scale in the psychology literature before we actually engage in these procedures.
Currently, we know that there are many more risk factors associated with psychosurgery than there are benefits. At the most basic level, psychosurgery interrupts the connections between the prefrontal lobes and other parts of the brain (Moniz, 1994). According to rule utilitarianism, psychosurgery would not be justified, as it violates the professional rules of the physician, who must hold the patient's safety above everything else.
In some cases, patients become apathetic or more aggressive, display amnesia and have difficulty controlling impulses; suicide can also be a significant outcome after surgery (Glannon, 2007). These changes essentially amount to a change in the individual's personality, which leads us to ask: is personality biologically disposed, given that we can change someone's personality by altering the circuits in their brain?
Research has shown that deteriorated patients often obtain only slight benefit from treatment. According to utilitarianism, then, is psychosurgery worth the risk? Similarly, the principle of non-maleficence states that surgeons have a duty not to expose their patients to undue risk; a strict non-maleficence theorist would therefore never accept psychosurgery as an option.
We must also recognize that psychosurgery is an experimental field whose procedures are still being refined. Thus, the risks of permanent damage to brain circuits and of neurological and psychological disability are significant but poorly characterized (Glannon, 2007). Moreover, some of the methods of psychosurgery are inhumane and violate both the moral norm of respecting a person's autonomy and the physician's professional guidelines. For instance, some surgeries involve injecting alcohol into the subcortical white matter of the prefrontal region, and leucotomy involves lacerations in the brain. But if neuroscientists do not have full knowledge of the brain's parts and their functions, how can surgeons be confident that they are cutting the right area? Furthermore, these procedures require a degree of accuracy that is not achievable even with all of our new technologies. If we cut even slightly outside the intended area, there is the potential to cause severe and irreversible damage to the patient.
According to the ethics-of-care principle, psychosurgery would not even be a possibility, because these ethicists hold that the physician should be emotionally connected to his patients and act on the patient's behalf. This implies that the physician should show empathy toward the patient, and he can only show that empathy by refusing this surgical procedure, given the disturbances that can result from it. Someone taking this stance could further assert that the patient's consent is the final consent required to carry out the procedure, even if the patient cannot reasonably make decisions because he or she has lost cognitive capacity (Glannon, 2007).
Similarly, Kantians would insist that the patient be fully informed of all the effects of the surgery, including all the risks. Kant would also hold that the patient should be aware of the probabilities of failure and success, because concealing facts or lying is unacceptable. In the case of a patient who has lost full cognitive capacity, obtaining consent becomes quite difficult, and the physician may fail to obtain valid consent to surgery (Glannon, 2007). Moreover, individuals' brains are wired differently, and the source of mental disorder differs from person to person, so no two patients will respond the same way to neurosurgery or neurostimulation. This is why the threshold of competency for consent should be higher for psychosurgery than for all other brain interventions, including neurostimulation (Glannon, 2007).
As has been pointed out several times, the use of psychosurgery is discouraged by various moral principles and theories. Furthermore, the fact that psychosurgery has a relatively poor history of usage attests that it was not well supported historically either. To this day, research in psychosurgery continues to reveal severe and permanent side effects that can alter the entire fabric of an individual. If we as a society cannot yet embrace genetic engineering, cloning and other techniques that could alter the genetic basis of a human being, how can we suppose that we are ready to embrace psychosurgery? If anything, we would expect psychosurgery to receive even more stigma from the larger community, simply because it is associated with psychological rather than biological illness. Psychosurgery therefore does not seem to have a bright future and should be halted now to prevent further damage.
Pharmacological Enhancements: Good or Bad?
Various psychotropic medications work the same way in those diagnosed with the disorder the treatment targets as in those without that diagnosis. Adderall is one such medicine: it produces the same effects in ADHD-diagnosed and non-ADHD-diagnosed individuals. This raises the possibility that drugs like Adderall can be used by the general public as pharmacological enhancements, to sharpen cognitive abilities such as attention to detail and alertness in people without ADHD. Considering the principle of utility, the use of pharmacological enhancement could be justified if it produces the maximal balance of positive value over disvalue.
An ethical dilemma surrounding pharmacological enhancement is whether it is ethically permissible to increase our cognitive ability, considering respect for individual autonomy. A person who decides to take enhancement drugs is not upholding the principle of the autonomy of the will, because they violate the categorical imperative by acting on their desires. Kant would instead assert that this person is acting on the principle of heteronomy, meaning that they are not acting on their own will as motivated by moral principles. Furthermore, Kant would claim that a person who uses pharmacological enhancements is using artificial means to attain an end, another idea that contradicts the categorical imperative and the maxim that "one must act to treat every person as an end and never as a means only" (Beauchamp and Childress).
According to Kant, a student should not take pharmacological enhancements as a result of peer pressure. For Kant, such an act would violate the requirement of acting for the sake of obligation, because the student would be taking medication out of fear of not fitting in with their peers: "For Kant, one must act not only in accordance with but for the sake of obligation" (Beauchamp and Childress). Kant also claims that we should cultivate our talents, because doing so is a way of respecting our own rational agency. When we use drugs instead, we fail to cultivate the capacity for self-knowledge and self-insight, and thus sacrifice self-development and a more pleasurable lifestyle for other desires.
Pragmatically speaking, it would be difficult to punish anyone for using cognitive enhancers, which raises the question of why not make them as available as other over-the-counter medicines. In a college setting, it would be nearly impossible to detect which students are taking these enhancers in an efficient and cost-effective manner. College administrators might stipulate in the honor code that students may not use any "device" beyond their natural mental capacities during an exam, but how would they enforce such a code against cognitive enhancers? Since there is no clear and simple way to punish those who take these medicines, allowing more access to them might actually decrease demand. This scenario also begs the question of whether we should punish those who engage in cosmetic neurology, and if we do, on what basis. And should I care whether others use the drugs; what are the possible consequences for the other individuals taking them?
If we relate neurocognitive enhancement to genetic engineering, we see a few similarities in the ethics surrounding the two debates. Through both genetic engineering and neurocognitive enhancement, we attempt to harness favorable traits in organisms; the difference lies in the method employed. One debate surrounding this topic is whether we are allocating more power to God or to science when we practice genetic engineering or cosmetic neurology.
In a similar vein, sociocultural concerns come to light when we discuss neurocognitive enhancements and the rewards we associate with them. When someone uses a drug like Adderall to ace an exam, are they using their natural intellectual abilities or "artificial", drug-enhanced ones? This raises an interesting point: when we as a society condone the use of neurocognitive enhancers for non-pharmaceutical purposes, are we not rewarding individuals for artificial intellectual capability rather than for their natural cognitive abilities? A related concern is how we as a society define intelligence, and whether we can expand that definition to include the added advantage derived from neurocognitive enhancers.
Overall, there may be more shortcomings than benefits to using neurocognitive enhancers if we follow Kant's ethical analysis. Kant rejects these enhancements outright because they violate the categorical imperative and the autonomy of the will, in addition to failing to cultivate the capacity for self-development. The usage of neurocognitive enhancers also raises sociocultural concerns and public policy questions. For instance, using drugs to increase cognitive ability or to tackle certain "life" issues can hinder self-knowledge in the pursuit of more expedient results. When a physician allows his patient to ingest pills to "fix" her life problems, he encourages the patient to treat herself as a means to an end, and thus fails to respect her humanity (Manninen). Kant would therefore encourage patients to use the more self-exploratory method (namely, no cognitive enhancers) so that they can better cultivate their talents to achieve an "end" of pleasure or enjoyment while abiding by the principle of self-autonomy (Manninen).
Ethics of Neuroimaging
Neuroimaging is appropriate in some circumstances but not others. It clearly benefits society when it is used to develop treatments for brain diseases that might not have been possible through other means. Because its procedures are non-invasive, and because across multiple studies it can home in on individual brain structures whose effects are too subtle to notice at the behavioral level of analysis, neuroimaging can be said to offer more than simple observational studies of patients with severe brain damage or intellectual disability. Brain imaging can also be useful for preliminary personality profiles of potential employees and/or psychiatric patients, but only as a supplement to other methods of personality assessment. Just as combining psychological therapy with psychotropic medication can produce more favorable outcomes, neuroimaging combined with another assessment can be useful in building a full personality profile.
Perhaps the greatest ethical drawback of neuroimaging lies in its supposed power to predict future psychopathology. Proposed legislation under Britain's Mental Health Act would label a certain group of individuals who have not yet committed a crime, but who might, with the term "Dangerous Severe Personality Disorder", without any pre-defined or sanctioned legal or medical status. This begs the question "to what extent would it be possible to identify those who will behave violently in the future?" (Canli and Amin). The legislation also assumes that personality is quite unstable, whereas psychological studies have asserted numerous times that personality is quite stable over an individual's lifetime. A related ethical debate arises between the public's right to safety and the individual's right to freedom or autonomy: should we forgo an individual's right to autonomy in order to protect the greater society's right to safety?
In statistical terms, a major limitation of neuroimaging is that when one sees images of the brain's activation patterns, one can easily be fooled into treating the visual image as absolute truth rather than statistical inference. This is complicated by the fact that even "rest" baseline conditions can involve some degree of brain activation, skewing the apparent difference in activation between control and test conditions. A further statistical obstacle arises when we ask what counts as a standard "normal" brain against which to compare abnormality. In deciding whether a given brain structure falls within the normative range, one may also find that the answer varies from one structure to the next, so that a given brain may qualify as "normal" by one measure but not by another (Canli and Amin).
The ethical dilemmas raised by neuroimaging revolve around four issues. Firstly, brain structure and function are not equivalent: the brain does not necessarily behave the way it looks! Without evidence of functional impairment, a brain may appear to look bad without being defective in a given function. Secondly, using brain images to predict long-term outcomes can introduce considerable bias. That bias could take the form of benefits, such as more careful treatment options, increased attention and greater access to therapeutic services, or of detriments, such as assuming inevitable impairment, overprotectiveness and imposing unnecessary interventions on an individual who is otherwise "normal". Thirdly, neuroimaging findings are only as good as the available imaging techniques; any invalid conclusions are the fault of the techniques, not of the findings themselves. Lastly, predicting more than is possible, and treating associated risk factors as causal agents, is more than problematic.
Beyond the scientific and ethical realms, neuroimaging is heavily debated in philosophy as well, for instance when it is used to test for consciousness (Racine, Bell and Illes). Consciousness is a very complex topic to define, both pragmatically and philosophically, so assessing consciousness through neuroimaging can be seen as overstepping the boundaries of what is possible. How do we measure consciousness with neuroimaging if we as a society have not even arrived at a mutually consensual definition of it? Moving from philosophy to the sociocultural realm, there is a serious challenge in how functional neuroimaging is interpreted and used by the general public. The concepts of neuro-essentialism, neuro-realism and neuro-policy shed light on these complications: they show how the media can exploit the visual format of neuroimages to make it appear as if neuroimaging reveals deep "secrets" about ourselves (Racine, Bell and Illes).
In essence, ethical concerns arise when brain images are used to identify behavior, because behavior is difficult to measure and subject to biased interpretation. Beyond its potential to confirm diagnoses and to determine who may benefit from certain pharmacological interventions, neuroimaging is a relatively unbiased diagnostic procedure: it is less susceptible than most other cognitive assessments, such as standardized exams, to subjective factors like cultural and socioeconomic influences. Nonetheless, neuroimaging is expensive, and although it has potential benefits, they have not all been empirically validated. So although neuroimaging has its advantages, it raises various ethical, scientific and philosophical concerns.
Consciousness and Brain Death
Death can be characterized either as a continuous process or as a categorical event. It is more feasible to determine death using the categorical method, in which an individual is either alive or dead; they cannot be in both states at once, nor in a state of "dying". Considering death as a continuous process, on the other hand, makes it harder to distinguish between the dying phase and actual death. This conflict is quite visible in the brain death literature, where there are multiple states such as the vegetative state, coma and the minimally conscious state. But could we not cluster all of these states as parts of the process of dying rather than as types of death? And if we did, what would that indicate about the way we define death, and how different is that definition from the mere absence of life?
Brain death can be a very misleading term in that it implies two types of death: brain death and regular death. But that reading is not valid, because brain death is simply death defined in neurological terms (Laureys). This begs the question of why we distinguish brain death from death itself at all. As the literature presents it, the main reason for the creation of the term lay in organ procurement (Bernat).
The three formulations of brain death, namely whole-brain, brainstem and neocortical death, all share the definition of death as the "permanent cessation of the critical functions of the organism as a whole" (Laureys). The neocortical definition leads to the most ethical debate, in that it assesses death by an added criterion: consciousness and social interaction. On the neocortical view, death is the loss of consciousness and social interaction, so patients in a vegetative state are considered dead, unlike under the brainstem and whole-brain theories, and removing life-sustaining equipment from them would be justified. But research has shown that patients in the vegetative state are in a paradoxical state of "wakeful unresponsiveness", in which the eyes are open but there is no awareness of the outer environment, and it has been noted that such patients can regain some sense of consciousness before entering the permanent vegetative state (Fins). Where, then, do we draw the line on withdrawing life-sustaining equipment if we are not completely sure that a patient in the vegetative state is indeed dead? This raises the broader question of what constitutes death: is it that the person is not conscious, or that they have lost all mental capabilities along with respiratory and cardiac function?
The definition of death has been further complicated by modern technology, through which patients now have access to artificial ventilators and the like. The permanent vegetative state is an artifact of modern technology, and unlike brain death, the vegetative state can be reversible. Unlike brain-dead patients, patients in a vegetative state can open their eyes and breathe spontaneously without assistance; yet according to the neocortical view, these patients would be considered dead. If we define death as a categorical state, however, those in the vegetative state are clearly not dead. As it is, the vegetative state functions more like a stage of dying, since it is reversible. This raises a profound ethical problem: a physician who assessed a patient in a vegetative state according to the neocortical theory would declare an unwarranted death, which clearly violates all professional ethical norms. We therefore need to be wary when applying the neocortical theory of death.
Research has shown that brain-dead patients never display facial expressions and are mute, whereas a patient in a vegetative state can occasionally smile, grunt or scream (Laureys). This poses an ethical dilemma for the conscientious physician who must decide whether to remove a patient in a vegetative state from life support because that ICU room is needed for another critically ill patient who is not in a vegetative state. How does the physician judge whom to prioritize? Utilitarian theory would propose that the physician offer the ICU room to the patient more likely to recover, since this produces the maximal balance of positive value over disvalue. But if we read utilitarianism as producing the maximum good for all involved parties, we would expect the physician to try his best to secure ICU placement for both patients, possibly by transferring one of them to another hospital. A utilitarian physician would also weigh each patient's probability of recovery and potential for further harm before deciding who stays in the ICU room.
Under Kantianism, the scenario above would look something like this: the patient in the vegetative state should stay in the ICU room, because the physician has a moral obligation to treat and save that patient. If the vegetative patient came from a lower socioeconomic background than the newly arrived critical patient, the physician would have, if anything, a greater moral obligation to keep treating the vegetative patient in the ICU. In that case, Kant would argue that a physician who keeps the vegetative patient in the ICU because he or she intends to do what is morally required is acting for the sake of obligation, which is what gives the action moral worth.
Work Smarter, NOT Harder!
We are often told that working harder will help us go further and reach our full potential. But does working harder really guarantee success? Well… this may not be the case. Working harder, or putting in longer hours, is no sign that one will reach the top of one's field. Instead, we must be able to multitask and be efficient at what we do; to put it simply, work smarter.
In my undergraduate years, I quickly learned that I needed to work smart, not hard. I couldn't possibly read all the assigned textbook pages, but I could figure out which parts of the textbook I actually needed to read. During my first and second years, I tried to do all the assigned readings, take notes on them and study my lecture notes. I soon realized this was impossible. I became so stressed and overwhelmed that even when I had read all the notes and assigned readings, I still was not performing as well as I would have liked. I talked to my peers and friends about these concerns and came to a shocking conclusion: I had to change the way I worked in order to achieve the level of success I was aiming for. I learned to focus on what was most important and to skim the less important material. I stopped dwelling on minor details in my readings and notes and began focusing on understanding the broader concepts. And voila! I was working smarter, not harder. Not surprisingly, this tactic reduced my stress level while improving my overall class performance.
That was just one example; the idea of working smarter, not harder, applies to every realm. Even at my job, I applied this principle and was quite satisfied with my performance. Just as my peers had offered advice on tackling my course load, my colleagues at work shared many valuable tips and tricks that helped me become more productive in my day-to-day tasks.
All of this goes to show that working smarter frees up time for other tasks, enabling us to complete more work in less time. When we work harder, we attempt to accomplish everything and eventually tire out to the point where we can't possibly finish it all, even with our best effort. When we work smarter, we spend less time on the easier tasks and allot more time to the more challenging ones. This not only gets us to the desired result more efficiently, but also leaves us less stressed at the end of the day. In the long run, working smarter can therefore lead to greater success and work satisfaction than working harder.