biogenericpublishers · 3 years ago
Emergency at the Zoo: An Orangutan Bite Resulting in Thumb Amputation and Forearm Laceration by Shaza Aouthmany MD
Abstract
Zoos and aquariums receive over 180 million visitors per year. Although this traffic is equivalent to more than half the US population, traumatic injuries at modern zoo enclosures are rare. Despite adequate safety standards, proper training, and experience with animals, incidents may still occur that require extensive medical management. We report the case of a 57-year-old woman who presented to the emergency department after being bitten by an orangutan, resulting in disarticulation of her right thumb and a laceration to her right forearm.
Introduction
Primates are an order within the class Mammalia and include humans, gorillas, orangutans, and chimpanzees, among many others. Simians are an infraorder within the order Primates that encompasses the species listed above [1,2]. Such exotic animals are kept in zoos all over the world and require close human interaction for feeding and medical care. These interactions pose a risk of serious injury. While there are countless studies on bites from dogs, cats, and other animals, there are no case reports on orangutan bites. We present a case of thumb amputation and forearm laceration from an orangutan bite to an animal caretaker.
Case Summary
A 57-year-old, right-handed female patient with no significant past medical history arrived via EMS from a local zoo with a right thumb amputation and forearm laceration after being bitten by an orangutan. The patient, an animal caretaker, was feeding the primate when it grabbed her right upper extremity through its cage. The orangutan initially bit into the right hand, disarticulating the thumb, and then bit into the right medial forearm. EMS reported an arterial bleed and applied a tourniquet at the scene.
On presentation, the patient was alert and oriented x4, with a blood pressure of 149/71 mmHg, pulse rate of 95 beats/min, respirations of 12 breaths/min, and pulse oximetry of 100% on room air. Bleeding was well controlled with the tourniquet in place. There was obvious amputation of the right thumb, which was hanging by approximately 15 cm of avulsed extensor pollicis longus and flexor pollicis longus tendons. The right forearm had significant soft tissue damage from a laceration on the medial aspect, with musculature and tendon exposed. There was no evidence of compartment syndrome, and distal circulation was intact. The patient was able to move her right second through fifth digits. No sensory deficits were elicited. The remainder of the physical exam was unremarkable.
The patient received a 0.5 mL intramuscular Tdap injection and 3000 mg of IV ampicillin-sulbactam along with 50 mcg of IV fentanyl, and the tourniquet was removed. A radiograph of the right hand revealed complete amputation of the first digit at the metacarpophalangeal joint (Figure 1). Radiographs of the right forearm demonstrated a minimally displaced distal ulnar styloid fracture and a large soft tissue laceration with associated soft tissue hematoma (Figure 2). Orthopedic surgery and vascular surgery were consulted, and the patient was taken emergently to the operating room for wound exploration, replantation, and wound closure.
Discussion and Conclusion
Patients with bite injuries comprise about 1% of visits to the emergency department annually [3,4]. In 2009, animal-related injuries accounted for 1.3 million emergency department visits, 60,800 of which led to in-patient stays. Of these in-patient stays, 9,700 were related to animals other than dogs [5]. These 'other animal' injuries may not be as well studied as dog bites, but the available information allows physicians to make clinically sound decisions. As far as we know, there are no published studies on orangutan bites specifically, and as such we review information regarding the complications of, treatment of, and risk of infection from nonhuman primate bites.
Bite injuries from mammals have the potential for a wide range of damage to the recipient, from a simple abrasion to the more concerning laceration, avulsion, fracture, and amputation [4]. To put the potential for injury from an orangutan bite into perspective, consider that an African lion has a bite force of over 4000 N, an orangutan almost 2000 N, and a human about a third that of an orangutan [6].
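The relative magnitudes in this comparison can be worked out explicitly. This is a minimal sketch using only the approximate figures cited above; the human value is inferred from the "about a third" statement and is not a measured number.

```python
# Approximate bite forces from the comparison above (newtons).
# These are the article's rough figures; HUMAN_N is inferred from
# "about a third of an orangutan" and is not a measured value.
LION_N = 4000
ORANGUTAN_N = 2000
HUMAN_N = ORANGUTAN_N / 3  # roughly 667 N

# An orangutan bites with roughly 3x the force of a human,
# and a lion with roughly 2x the force of an orangutan.
orangutan_vs_human = ORANGUTAN_N / HUMAN_N  # roughly 3
lion_vs_orangutan = LION_N / ORANGUTAN_N    # roughly 2
```

In other words, an orangutan bite delivers roughly twice the force of the strongest human bite the patient could encounter in a clinical setting, which helps explain the severity of the injuries described here.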
The most common complication of mammal bite injuries is infection, which in turn can progress to osteomyelitis, tenosynovitis, and cellulitis [4]. Current data suggest a similarity between the bacteria isolated from simian bites and those isolated from human bites, which include alpha-hemolytic streptococci, S. aureus, Neisseria species, Eikenella corrodens, anaerobes, and Enterobacteriaceae [7,8]. Left untreated, human bites have an infection rate of 48%, so it is imperative to adequately treat orangutan bites [4].
Treatment principles outlined for animal bites include some combination of irrigation, prophylactic antibiotics, targeted antibiotics, tetanus immunization, debridement, wound closure, and reporting if legally required [3,8-10]. Prophylactic antibiotics should be given based on the risk posed by the bite itself, the animal and its immunization status, and the time elapsed from injury to presentation. Antibiotic selection should be based on coverage of the aforementioned organisms and further guided by results from aerobic and anaerobic cultures. Medications with proper coverage include oral amoxicillin-clavulanate, IV ampicillin-sulbactam, or doxycycline for patients allergic to penicillin [3,8,10]. An orangutan bite of the magnitude our patient suffered was indeed high risk, and as such, 3000 mg of IV ampicillin-sulbactam was administered upon arrival to the emergency department.
Immunization and post-exposure prophylaxis after zoo animal bites merit special consideration. Although they are exotic animals, primates in captivity follow vaccination schedules similar to those of humans. Orangutans share 97% of human DNA, so it is no surprise that they can transmit, and become infected with, similar respiratory and gastrointestinal diseases. In fact, the Orangutan Care Manual published by the Association of Zoos and Aquariums (AZA) recommends following the American Academy of Pediatrics immunization schedule. If an orangutan was not immunized early, the AZA strongly recommends that all orangutans receive yearly influenza vaccines, tetanus toxoid every 5-10 years, a one-time pneumococcal vaccine, and the Haemophilus influenzae type b vaccine. Lastly, since rabies vaccination is not routine in humans, the decision to vaccinate an orangutan against rabies should be based on the exhibit's exposure risk [11]. The CDC recommends pre-exposure rabies vaccination for high-risk groups such as veterinarians and animal handlers. Although this does not completely eliminate the need for therapy after rabies exposure, it eliminates the need for rabies immunoglobulin and reduces the number of vaccine doses [12]. It is reasonable to conclude that the decision to immunize an individual bitten by an orangutan should be based on the vaccination histories of both the patient and the animal. In the emergency setting this information may not be immediately available, so it is prudent for the physician to act on their clinical acumen. In our patient's case, we chose to administer only the Tdap vaccine.
The patient experienced a tragic encounter while feeding an orangutan at a local zoo and suffered tremendous damage to her arm and thumb. Following her evaluation in the emergency department she was taken to the operating room, where orthopedic surgeons extensively irrigated and debrided the wound. Surgeons then closed the forearm laceration and replanted the right thumb. Since 1990, there have been 16 primate escapes or attacks at AZA-accredited facilities resulting in human injury, and zero reported cases of primate attacks resulting in death [13]. This indicates that the safeguards currently in place are generally effective. It is difficult to ascertain what precautions should be instituted to prevent such incidents from recurring, but most emergency departments are equipped to treat them acutely.
For more information regarding this article, visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00200.pdf https://biogenericpublishers.com/jbgsr-ms-id-00200-text/
biogenericpublishers · 4 years ago
Necrotizing Sialometaplasia of Palate by Radka Cholakova in Open Access Journal of Biogeneric Science and Research
Abstract
Necrotizing sialometaplasia (NS) is a rare, benign, inflammatory condition, occasionally with ulcers, which is self-limiting and affects mainly salivary tissue. Purpose: To present a case of NS associated with a systemic connective tissue disease in a female patient. Material/Methods: A 56-year-old female patient presented with ulcers of the palate, surrounded by a red halo, of 2 weeks' duration, which had not resolved with topical antiseptics. Clinical and radiological methods, together with pathohistological analysis, were used to make the diagnosis. Immunohistochemical analyses to diagnose the patient's systemic disease were performed at a rheumatology clinic. Results: The definitive diagnosis was made based on the pathological examination and the tests performed at the rheumatology clinic, which established that this was a case of necrotizing sialometaplasia of the minor salivary glands with a concomitant systemic connective tissue disease. Conclusions: NS is a rare disease with an excellent prognosis and no known preventive strategies.
Keywords: Necrotizing sialometaplasia; diseases of the minor salivary glands; tumour-like lesion
Introduction
Necrotizing Sialometaplasia (NS) is a rare, benign, inflammatory condition, sometimes with ulcers, which is self-limiting and affects mainly salivary tissue. This disease is classified as a "tumour-like lesion" in the WHO classification of salivary gland tumours. It was first described in 1973 by Abrams, Melrose and Howell [1], and in the following year Dunlap and Barker reported five diagnosed cases [2]. This lesion can be confused with a malignant disease, potentially resulting in unnecessary radical surgery.
Materials and Methods
A 56-year-old female patient, whose apparent age corresponded to her calendar age, presented with complaints of a non-healing "aphthous ulcer" that had appeared 2 weeks earlier. Treatment with antiseptic mouthwashes and topical application of propolis had been administered; the ulcer did not resolve, but the pain subsided over time. The initial examination showed two ulcers about 2 mm in size, located on either side of the median palatine suture, with a red halo around them. After treatment with SOLCOSERYL (MEDA Pharma GmbH & Co. KG, Germany), the ulcers resolved, but the red spots remained (Figure 1). The patient was referred for CBCT to detect any erosions in the palatine bone under the ulcers (Figure 2). Because the size of the formation did not decrease, a full-thickness excisional biopsy of one of the lesions was performed and submitted for histopathological examination. The wound was covered with a PRF membrane and healed without complications. Pathological examination showed cords of the lining non-keratinizing stratified squamous epithelium extending deep below it, without atypia, and rapid proliferation of granulation tissue covering the minor salivary gland ducts, which showed squamous cell metaplasia.

The patient was a moderate smoker (up to 10 cigarettes/day). She had cholelithiasis without clinical manifestations and had undergone surgery for a benign neoplasm of the breast several years earlier. Intermittent complaints of swelling and rash on the upper eyelids, an erythematous rash on the nasal dorsum, and photosensitivity were present, for which the patient had been admitted to a rheumatology clinic for examination. A systemic connective tissue disease was suspected there, as elevated total AHA, Anti-SS-A and Anti-SS-B antibodies with very high titers and a low C4 complement level, without significant proteinuria, were found.
The dermatological examination showed pathological skin lesions of the nose with erythematous-edematous plaque, 6 cm in diameter. Sjögren's syndrome or systemic lupus erythematosus were suspected in this patient. The pathohistological examination of the minor salivary glands from the lower lip mucosa found that this was not a case of Sjögren's syndrome. Biopsy of the plaque of the nasal lesion was recommended.
Results
The definitive diagnosis was made based on a pathological examination and the tests performed in the rheumatology clinic, which found that this was a case of necrotizing sialometaplasia of the minor salivary glands and a concomitant systemic connective tissue disease.
Discussion
Necrotizing sialometaplasia is extremely rare, accounting for less than 1% of oral lesion biopsies [3]. The mean age of onset is 46 years, and the male:female ratio is 2.7:1 [4]. The disease is prevalent among Caucasians, with a Caucasian:African American ratio of 5:1, according to Brannon [5]. In most cases (80%), the minor palatine salivary glands are affected. Rarely, it can occur in the retromolar space, gingiva, lips, tongue, cheeks, and nasal cavity [1,6]. The disease can also affect the major salivary glands in more than 10% of cases [3]. The etiopathogenesis of necrotizing sialometaplasia is unknown, but it is thought that the lesion develops as a result of prior ischemia in the salivary gland. In experimental models, disruption of the arterial blood supply to the salivary glands of rodents produces an NS-like histopathological picture. The disease is found in patients with sickle cell disease, Buerger's disease, and Raynaud's phenomenon, which are all vasculopathies that predispose to ischemia. Other risk factors for the development of NS include smoking (and alcohol consumption), use of cocaine and anabolic steroids, hot food, fellatio, traumatic vascular injury, and bulimia [7-9]. The synergistic action of NSAIDs and alcohol over a long period results in a change in oral mucosal function due to suppression of prostaglandin production and reduction in the blood supply to the minor salivary glands, which causes ischemic events [10]. Iatrogenic factors for the development of NS include the use of local infiltration anesthesia with an anesthetic containing a higher concentration of correctives, intubation, bronchoscopy, local radiotherapy, and surgical procedures in the vicinity of the affected area [3,7,11,12], none of which had been performed in our patient within the preceding 6 months. Senapati et al. [13] reported that NS is a manifestation of local vasculitis.
NS may be associated with other tumours, in particular Warthin's tumour, Abrikossoff's tumour, lip cancer, rapidly growing malignant mesenchymal disease, and salivary gland tumours. There is also a connection with upper respiratory tract diseases (chronic sinusitis and allergies) in the preceding few weeks. It is possible that the ischemia is due to immune complexes, resembling the pathogenesis of erythema multiforme or benign trigeminal sensory neuropathy. In our case, levels of immune complexes in the body were elevated.
Anneroth and Hansen described the following five clinical stages in the development of necrotizing sialometaplasia: infarction, sequestration, ulceration, repair, and healing [6]. A subacute variant of this condition has also been described in the literature. Histological features are ischemic lobular necrosis of the seromucinous glands, with maintenance of intact lobular architecture despite coagulative necrosis of the mucinous acini. Pale acinar outlines often persist, but the cell nuclei are hypochromatic or absent. Mucin extravasation into the adjacent tissues triggers an inflammatory reaction dominated by histiocytes and granulation tissue. Within the necrotic lobules, the inflammatory component is often minimal, but it is usually found in the surrounding tissues. Although squamous metaplasia of ducts and acini is typical (which makes the diagnosis challenging because of its similarity to malignancies), the metaplastic cells have benign nuclear morphology, with minimal pleomorphism or hyperchromatism and few mitotic figures. Nests of squamous epithelium usually have smooth contours but may occasionally have an irregular outline.
Pseudoepitheliomatous hyperplasia, in which the overlying or adjacent epithelium is markedly hyperplastic, together with extensive ductal metaplasia, may resemble a malignant epithelial condition, which can lead to misdiagnosis and radical ablative surgery. It may be difficult to distinguish NS from squamous cell carcinoma, low-grade mucoepidermoid carcinoma, and oncocytic tumours. Specific histopathological characteristics relate to the "age" of the lesion at biopsy: coagulative necrosis is more common in "new" lesions, whereas fibrosis and squamous metaplasia are typical of "older" lesions. In our case, the biopsy was taken nearly 2 weeks after the onset of the first symptoms, so the changes correspond to an "old" lesion. Management of this disease includes monitoring, use of topical antiseptics, and pain control until recovery [3]. In the presence of predisposing factors, their correction is necessary.
Conclusion
This work was supported with grants from the URPC, University of Benin, Benin City.
For more information regarding this article, visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00155.pdf https://biogenericpublishers.com/jbgsr.ms.id.00155.text/
For more open access journals click on https://biogenericpublishers.com/
biogenericpublishers · 4 years ago
Latent Crohn’s Disease Uncovered During Treatment with Secukinumab in a Patient with Ankylosing Spondylitis by Moșteanu Elena Ofelia* in Open Access Journal of Biogeneric Science and Research
Keywords: Crohn’s disease, IL-17A Inhibitors, Secukinumab
Introduction
The inflammatory bowel diseases (IBD) are a group of heterogeneous disorders with multifactorial etiology, in which chronic inflammation of the digestive tract is caused by disturbances in the immune response to a pathogenic, low-diversity gut microbiota [1]. IBD includes two major entities: Crohn’s disease (CD) and ulcerative colitis (UC). Current data estimate an impressive global prevalence of over 0.3% [2]. CD is defined as transmural inflammation of any part of the gastrointestinal tract, from the mouth to the perianal area, with a mainly ileal localization, while UC is limited, in both localization and histology, to the colorectal mucosa. The clinical presentation of CD typically includes non-specific symptoms such as abdominal pain, diarrhea, low-grade fever, and weight loss, and, with a low prevalence, extraintestinal manifestations. Complications of the disease include stricturing with secondary bowel obstruction, penetrating fistulas, and abscesses.
Ankylosing spondylitis (AS) belongs to the spondyloarthropathy (SpA) class, which includes arthritides of different etiologies. AS is a chronic inflammatory rheumatic disease involving primarily the axial skeleton, clinically expressed as back pain. Its progressive character leads to continuously increasing stiffness of the spine, culminating in spinal fusion in late stages. Epidemiologic studies of SpA associated with IBD have found an association of these pathologies in a third of patients [3,4]. The overexpression and complex immunoregulation of the IL-17/IL-23 axis is of central importance in the interconnection of these entities [5]. Secukinumab is a human monoclonal antibody, an IL-17A inhibitor, that was approved in 2015, 20 years after Th17 cells, and consequently IL-17, were discovered. The novelty of the current case report is the long-term follow-up of a patient with new-onset CD during treatment with Secukinumab, describing the resolution of the newly emerged disease.
Case Report
We present the case of a 40-year-old male, a smoker, diagnosed with AS in 2006, for which he was treated with sulphasalazine for 4 years, etoricoxib for the next 2 years, and etanercept (a TNFα inhibitor) for 7 years, until 2019, when treatment with Secukinumab started. In 2016, he began having brief episodes of mesogastric abdominal pain, nausea, and diarrhea; these episodes did not remit with any medication but resolved spontaneously after approximately 24 hours. Because of their short duration and low frequency, one episode every 6 months, the patient was not referred. After only 2 weeks of treatment with Secukinumab, in February 2019, these episodes worsened and became more frequent, occurring almost weekly by the time of the first gastrointestinal assessment in April. Abdominal ultrasound and colonoscopy did not reveal any pathologic changes, and the symptoms were initially controlled by diet. In November, the patient was admitted again as the symptomatology worsened. Physical evaluation revealed pain on abdominal palpation in the right flank. Paraclinical examination showed only an elevated C-reactive protein of 2.88 mg/dl. Stool examination excluded any gastrointestinal infection. Abdominal ultrasound was normal. A colonoscopy was performed and revealed cecal pseudopolyps, but the terminal ileum could not be inspected. MRI enterography revealed an ileal inflammatory stenosis immediately adjacent to the ileocecal valve, explaining the impossibility of ileocecal valve intubation (Figures 1&2).
The diagnosis was ileocecal CD due to the inflammatory stenosis with subsequent upstream bowel dilatation (Figure 3). Secukinumab was discontinued and treatment with Adalimumab was initiated. At the 3-month follow-up, the patient's gastrointestinal symptomatology had completely remitted, the remission being maintained with continued Adalimumab, daily administration of inulin and simethicone, and diet.
Discussion
It is widely known that there is a close association between IBD and SpA in terms of genetics, microbiota, and immunologic disorders [3,4]. In the etiopathogenesis of IBD, CD4 Th cells play a major role by initiating and maintaining the autoimmune inflammation of the gastrointestinal tract through the production of pro-inflammatory cytokines. Besides CD4 Th cells, another subset of Th cells, Th17 cells, is overexpressed at this level and positively regulated by IL-23, another pro-inflammatory cytokine produced by antigen-presenting cells after contact with components of the pathological microbiota [6]. Activation of the IL-17/IL-23 axis is fundamentally connected to the etiology of both CD and AS, IL-17 being found extensively in the blood and synovial fluid in AS and in the intestinal lamina propria of CD patients [7,8].
Human interleukin IL-17 has been found to be involved in various autoimmune diseases, such as systemic sclerosis, multiple sclerosis, systemic lupus erythematosus, psoriasis, asthma, SpAs, and IBD [9-14]. While IL-17 is a pro-inflammatory cytokine, current data suggest a protective role in the gastrointestinal tract of IBD patients [15]. The importance of this paradox is apparent in patients who receive biological treatment with IL-17 inhibitors and may have a latent IBD.
Secukinumab has been studied in CD animal models and in one human trial, which had to be terminated prematurely because of the disease's unfavorable evolution compared with placebo, reinforcing the theory that IL-17 is, in an as-yet-unknown manner, a protective factor in the natural inflammatory evolution of CD. It has been suggested that this difference may be due to the pathological microbiota found in IBD [16-18]. Given the data from these trials and the statistical association between CD and SpA, with the IL-17 molecule overexpressed and playing different roles in both diseases, it follows that inhibiting the IL-17/IL-23 pathway in SpA might worsen inflammation of the gut.
In Secukinumab's summary of product characteristics, IBDs are mentioned in the section on special warnings and precautions for use, warning that patients should be closely monitored. The latest retrospective analysis of pooled data from 21 clinical trials, comprising 7355 patients, concluded that cases of IBD during treatment with Secukinumab were uncommon [19]. The reported incidence of CD among 794 patients treated for SpA was 0.1 per 100 patient-years, with 5 new-onset cases of CD (0.63%) and an exacerbation in 3 of the 5 patients with a history of CD. The question this case raised was whether our patient had a silent CD that was activated by this treatment or developed the disease in a previously normal gastrointestinal tract. Anamnestically, the patient did associate the onset of gastrointestinal manifestations with the use of Secukinumab.
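The proportion of new-onset cases quoted from the pooled analysis can be sanity-checked from the reported counts. This minimal sketch uses only the figures cited in the text (5 new-onset CD cases among 794 SpA patients), not the raw trial data:

```python
# Figures from the pooled analysis cited above (not raw trial data):
# 5 new-onset CD cases were reported among 794 SpA patients
# treated with Secukinumab.
new_onset_cd = 5
spa_patients = 794

# Proportion of treated patients with new-onset CD, as a percentage.
proportion_pct = new_onset_cd / spa_patients * 100  # about 0.63%
```

The computed value of about 0.63% matches the percentage reported in the analysis, while the 0.1 per 100 patient-years figure additionally depends on follow-up duration, which the text does not give.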
Diarrhea is a common side effect of this treatment, and patients should be informed of the possibility of an inactive IBD being activated during treatment; they should be advised to see a specialist if gastrointestinal symptoms persist. IL-17-positive cells are not detected in the mucosa of healthy individuals or of patients with infectious or ischaemic colitis, but IL-17 levels are significantly elevated in active and even inactive CD [20]. Therefore, we cannot state that a subclinical CD was not present, especially considering how frequently IBD and SpA are associated. Based on the patient's history and the data currently available in the literature, we strongly believe that Secukinumab was the trigger of CD in this case.
Currently there are treatment guidelines for IBD and SpA individually, but none for their combination; such guidelines will be needed in the near future as more specific biological therapies emerge, targeting different inflammatory pathways. As stated in Secukinumab's summary of product characteristics, close follow-up of IBD patients is needed; but even though new-onset CD is reported to have a low incidence, gastroenterological monitoring would be advisable even for apparently healthy individuals, because of the reported prevalence of the IBD-SpA association.
Conclusion
A gastroenterological consult before initiation of treatment would be beneficial, since SpA may precede the onset of IBD [21,22] or may be associated with underlying asymptomatic intestinal inflammation [23]. This case report also reinforces the current data on IL-17 having a protective role in IBD, inhibition of its pathway leading to exacerbation or activation of a silent IBD. Among biological therapies, there are safer treatment schemes for a patient with AS and symptomatic or silent CD, which could include Ustekinumab, a human monoclonal antibody targeting the IL-12/IL-23 pathway; Infliximab, a chimeric monoclonal antibody that inhibits tumor necrosis factor alpha (TNFα); or Adalimumab, the first human monoclonal antibody that binds and neutralizes TNFα. The statistics and this case report underline the importance of a multidisciplinary approach when prescribing biological therapy, involving both rheumatologists and gastroenterologists. Why the IL-17/IL-23 axis has a "paradoxical" role in the gastrointestinal tract is still an unanswered question and represents a future research direction.
For more information regarding this article, visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00156.pdf https://biogenericpublishers.com/jbgsr.ms.id.00156.text/
biogenericpublishers · 4 years ago
Meta-Cognition as a Predictor of Productive Learning Among Out-of-School Emerging Adults (15-25) Engaged in Mechanic Work in Buea Municipality, South West Region of Cameroon by Busi Ernest Neba in Open Access Journal of Biogeneric Science and Research
Abstract
The study investigated "Meta-cognition as a determinant of productive learning among out-of-school emerging adults engaged in mechanic work in Buea Municipality". The specific objective was to ascertain the extent to which executive functioning impacts productive learning among out-of-school emerging adults engaged in mechanic work in Buea Municipality. Methodologically, the study used a quasi-experimental and exploratory research design with the aid of a pretest and posttest. The population of the study was made up of all apprentices between the ages of 15 and 25 in mechanic garages, together with their trainers. The target population was 26 apprentices and 4 trainers, and the accessible population was 12 apprentices and 4 trainers. The study took place in two mechanic garages in Buea, Cameroon. The instruments used for data collection were a questionnaire, an observational checklist, and an interview guide. Data were collected by observing and training the experts in psycho-pedagogy skills; the experts later trained the apprentices, who were then tested with a pretest and posttest. Each of the garages had experimental and control groups. The sampling technique was purposive. Quantitative data were entered using EpiData Version 3.1 (EpiData Association, Odense, Denmark, 2008) and analyzed using the Statistical Package for Social Sciences (SPSS) Standard version, Release 21.0 (IBM Inc., 2012). Data collected from the field were subjected to both descriptive and inferential statistics. For the descriptive data, frequency distribution tables and charts were used to present and describe the data obtained. Cohen's d was used to assess the magnitude of the differences identified by the inferential statistics.
The findings suggest that executive functioning should be consistently applied in mechanic garages by trainers so that apprentices can gain knowledge, aptitudes, and competencies and become productive.
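Since the abstract reports using Cohen's d to compare the experimental and control groups, a minimal sketch of that computation may help. The scores below are hypothetical illustrations, not the study's actual data:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled
    sample standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical posttest scores (illustrative only; the study's
# real apprentice scores are not reproduced here).
experimental = [14, 16, 15, 18, 17, 16]
control = [11, 12, 13, 12, 11, 13]
d = cohens_d(experimental, control)
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), a value of d above 0.8 would indicate a large effect of the executive-functioning training on posttest performance.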
Keywords: Metacognition; executive functioning; productive learning; out-of-school emerging adults; mechanic work.
Introduction
Metacognition entails that learners reflect with accuracy on their cognition [1]. Metacognition also requires planning, self-regulation of both cognition and affective or motivational states, and allocation of attention and other intellectual resources; executive functioning forms part of the construct. Carlson & Moses (2001) argue that executive functioning may be a prerequisite skill for the development of metacognition. Schraw & Moshman [1] posit that metacognitive development proceeds as follows: cognitive knowledge appears first, with learners reflecting on the accuracy of their cognition and consolidating these skills; the ability to regulate follows, with dramatic improvement in monitoring, regulation, and evaluation in the form of planning; and finally comes the construction of metacognitive theories, which allow for the integration of cognitive knowledge and cognitive regulation as learners construct their own theories and come to reflect on their own thinking and learning. Productive learning involves bringing in skills and strategies that make learning fruitful among out-of-school emerging adults aged 15 to 25.
Objective of the Study
1.1. General Objective
1.       To determine the extent to which meta-cognition predicts productive learning among out-of-school emerging adults engaged in mechanic work in Buea Municipality
1.2. Specific Objectives of the Study
Specifically, this study is intended:
1.       To investigate the extent to which executive functioning impacts productive learning among out-of-school emerging adults engaged in mechanic work in Buea municipality
Background to the Study
Metacognition is awareness of one's own cognitive structure and learning characteristics. According to Flavell [2], metacognition is a system which organizes information, experiences, objectives and strategies. Metacognition also refers to thinking about thinking, and generally covers various inter-related thinking and learning skills: critical thinking, reflective thinking, problem solving, executive functioning, and decision making in the course of problem solving. The main indicator in this study is executive functioning, which consists of those capacities that enable a person to engage successfully in independent, purposive, self-serving behaviour. Executive functioning asks how, and whether, a person goes about doing something. It is conceptualized as having four components, volition, planning, purposive action and effective performance, each of which involves a distinctive set of activity-related behaviours. According to Tchombe [3], cognitive learning takes place through mutual reciprocity. This is determined via participation, which is oriented by cultural beliefs about knowledge and by parent-child expectations and aspirations.
Therefore, the skills and competencies learners employ for meaning-making through shared activities have an enormous impact on how learners make decisions, which rests on an understanding of the dynamics of their cognitive developmental sequence. Productive learning here is skill learning: learning through experience that solves real-life problems. Emerging adults are young people between the ages of 15 and 25 who are out of school and engaged in mechanic work as apprentices under their experts. According to Lo-oh (2009), the implication for the life course is evidenced in how young people conceive and define adult status today. In his view, in the African sub-region in general and in Cameroon in particular, the transition to adulthood is an arduous task characterized by several challenges.
Statement of the Problem
The problem addressed in this study is productive learning. The researcher observed that most apprentices in mechanic workshops turn out to be less skilful at solving problems when cars break down, especially when the expert is not around, and in particular at thinking outside the box to resolve complex breakdowns such as removing the engine and other complicated mechanical repairs.
Literature Review
Metacognition
Research activity in metacognition began with John Flavell, who is considered the father of the field. Metacognition is a concept that has been used to refer to a variety of epistemological processes. It essentially means cognition about cognition: thoughts about thoughts, knowledge about knowledge. So, if cognition involves perceiving, remembering and so forth, then metacognition involves thinking about one's own perception, understanding and remembering.
Executive Functioning
Executive functioning includes the processes responsible for directing focus and for managing and integrating cognitive functions in everyday tasks as well as in new and complex problems. As used in this study, it describes a set of mental processes that helps connect past experience with present action through critical thinking, including thinking outside the box in order to solve a problem. Executive function is engaged when planning, organizing, strategizing, paying attention to and remembering details.
Productive Learning
Some scholars see productive learning as learning through human development.
Productive learning is learning on the basis of productive activity in serious social situations: learning from experience by being able to achieve something important in one's environment. It involves active and inquiry learning, integrating creativity and competencies and fostering learners' ability to think critically and learn autonomously, grounded in constructivism, so that learners produce creative things through problem solving.
The Concept of Mechanic Work
Mechanic work is a craft trade. It involves the application of specific knowledge in the design, selection, construction, operation and maintenance of automobiles, and is geared towards testing, diagnosing, servicing and fully repairing faults in the conventional automobile assembly across vehicles of different brands.
Theoretical Review
Vygotsky Socio-Cultural Theory (1896-1934)
Vygotsky believed that individual development cannot be understood without reference to the social and cultural context within which it is embedded. He states that, using activity mediators, the human being is able to modify the environment, and that this is our way of interacting with nature. The Zone of Proximal Development is the gap between the actual competence level (the problem level a learner is able to solve independently) and the potential development level (the problem level she could solve with guidance from a tutor). It supports a view of intellectual development based on continuity.
Transformative Learning Theory by Mezirow (1978)
According to Mezirow [4], transformative learning is the process of making meaning from our experiences through reflection, critical reflection and critical self-reflection. Meaning-making, for Mezirow, is making sense of the day-to-day flow of our experiences. He eventually named this process perspective transformation, to reflect change within the core and central meaning structures. Perspectives are made up of sets of beliefs, values and assumptions that we have acquired through our life's experiences.
Method
1.1. Research Design
The researcher used a triangulation approach, combining qualitative and quantitative methods of data collection. An experimental design, supported by an exploratory sequential design, was used in this study. The specific type was a quasi-experimental design, chosen to identify an intervention effect by comparing an experimental group with a control group through a pre-test and a post-test. The procedure tests a causal effect (X on Y) and a causal hypothesis: it begins by ensuring through the pre-test that the two groups are comparable, and a researcher-constructed post-test repeating the pre-test then measures the change.
1.2. Instrument for Data Collection
The following instruments were used to gather information from the respondents. A questionnaire of 5 items per objective was administered, and an observational checklist and an interview guide were also designed, with statements covering the following measures: coaching, scaffolding, executive functioning and productive learning, assessed through aptitude, mastery experience, attitude, discipline, knowledge, skills and competency development, as shown in the appendices. A lesson note was prepared for the mechanic-work intervention using experiential learning as the teaching method.
1.3. Findings
Research hypothesis three: there is no significant relationship between executive functioning and productive learning among out-of-school emerging adults engaged in mechanic work in Buea Municipality. Using Cohen's d, if the theoretical effect size is smaller than the calculated one, we reject the hypothesis that the means are not significantly different, at 90% power and a 95% confidence level, with a cohort sample of 3 and a total sample size of 6, as in our study context. For the total score in executive functioning for mechanics in the experimental group, the mean rose from 9.5 at pretest to 11.5 at posttest, and this increase was significant (Cohen's d negative, since the smaller pretest mean is compared against the posttest mean). As the theoretical effect size is smaller than the calculated one, we reject the hypothesis that the means are not different, implying a significant progression from pre-test to post-test. The significant improvement in productive learning score resulted from the improvement in executive functioning, because no such improvement was obtained in the control group, where no significant improvement in coaching was realized from pretest to posttest.
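As a rough illustration of the effect-size test used here, the pooled-standard-deviation form of Cohen's d can be sketched as follows. The score lists are hypothetical values chosen only to reproduce the reported pretest and posttest means (9.5 and 11.5); they are not the study's raw data.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical cohort of 3, consistent with the reported means 9.5 -> 11.5
pretest = [9, 10, 9.5]
posttest = [11, 12, 11.5]
d = cohens_d(pretest, posttest)  # negative: posttest mean exceeds pretest mean
```

A negative d here simply reflects that the pretest mean is subtracted from the larger posttest mean; its magnitude is then compared against the theoretical effect size.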
The null hypothesis as stated is therefore rejected.
Executive Functioning and Productive Learning among Out-Of-School Emerging Adults Engaged In Mechanic Work
There was no significant difference between the two workshops for both masters and mechanics at baseline. For the total score in executive functioning for mechanics in the experimental group, the mean rose from 9.5 at pretest to 11.5 at posttest, and this increase was significant, implying a significant progression from pre-test to post-test. This matches Flavell's [5] metacognitive theory, which refers to metacognition as thinking about one's own thinking: the root "meta" means "beyond", so the term refers to thinking beyond thinking [6-15].
Conclusion
In conclusion, thinking outside the box enables out-of-school emerging adults to reflect, think critically and work towards a positive growth mindset, so that they gain skills, aptitude, knowledge and competencies and are able to repair and maintain cars and resolve breakdowns [15-23].
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00152.pdf
https://biogenericpublishers.com/jbgsr.ms.id.00152.text/ For more open access journals click on https://biogenericpublishers.com/
biogenericpublishers · 4 years ago
Text
A Case-Study of the Physico-Chemical Parameters of the Public Water Supply in the University of Port Harcourt by Johnson Ajinwo OR in Open Access Journal of Biogeneric Science and Research
Abstract
Waterborne diseases are currently on the rise in third-world countries as a result of a lack of routine water analysis checks to ensure that the desired quality of drinking water is upheld. In light of the above, this research aimed at determining the physico-chemical properties and mineral content of seventeen water samples from the students' residential areas and environs of the Main Campus of the University of Port Harcourt, Choba, Rivers State, Nigeria. The results showed that most of the physico-chemical quality indices of the water samples were within acceptable limits, except the nitrate levels of samples 13 and 14. The pH of all the samples was found to be acidic, with sample 12 having the lowest pH of 4.44. The hardness levels placed the samples in the very soft category, affirming the relationship between acidic pH and soft water. The resulting increase in the corrosivity and plumbosolvency of the samples may pose a long-term risk of metal poisoning from plumbing materials. However, the metal analysis showed only slight sodium and calcium contamination, which may pose no health risk.
Introduction
About 829,000 people die annually from diarrhoea caused by poor sanitation, poor hand hygiene and drinking contaminated water. A number of diseases, including cholera, dysentery, diarrhoea, polio, typhoid and hepatitis A, are transmitted through contaminated water and poor hygiene. Deaths from contaminated water are preventable, and efforts aimed at tackling this menace should be put in place. The 2010 UN General Assembly emphasised that access to water and sanitation is a basic human right. But water, the number one liquid for life, has come under intense pressure owing to climate change, population explosion, urbanization and scarcity of water in many places. According to the WHO, about 50% of the world's population will be living in water-stressed areas by 2025 [1].
Water quality can be compromised by the presence of unwanted chemicals, micro-organisms and even radiological hazards. The provision of good-quality water for human consumption in Nigeria has been a major challenge that has received little or no attention. The National Agency for Food and Drug Administration and Control (NAFDAC) is the body charged with the responsibility of ensuring the provision of good-quality drinking water through the registration and quality assurance of commercially available drinking water [2]. However, the majority of the Nigerian populace, students in particular, shun commercially available water, possibly owing to the cost, and still resort to water sources that lack quality assurance.
The vital roles water plays include its ability to dissolve a wide range of substances, which has earned it the tag of 'universal solvent'. Two-thirds of the human body is made up of water, the basic component of cells, tissues and the circulatory system. Owing to the solvation character of water, cells are able to access nutrients in the body, produce energy, undergo metabolism and excrete waste. Similarly, for drugs to elicit their desired activities, the drug substances must first be dissolved prior to absorption into the systemic circulation. It is well known that acute dehydration may lead to death, which underscores the role of water as a life-sustaining fluid of great value and importance.
The University of Port Harcourt is sited in Choba community, Obio/Akpor Local Government Area of Rivers State, Nigeria. The state is one of the South-South states that constitute the oil-rich Niger Delta area, which has been the subject of oil exploration for more than 50 years. During this time there have been oil spillages, resulting in air, soil and water pollution. This is evidenced in the recent United Nations Environment Programme (UNEP) report on the effects of oil spillages in Ogoniland in Rivers State, for which water samples were obtained from boreholes drilled specifically for the research. The findings revealed high levels of hydrocarbons and some organic and inorganic substances, some of which were carcinogenic [3]. The results further showed that in many locations petroleum hydrocarbons had migrated to the groundwater. Furthermore, the host community of the University has also played host to an American oil exploration company for over two decades. It is therefore expected that both soil and water in and around the community will be contaminated, especially with hydrocarbons and heavy metals.
This research aims to determine the physico-chemical parameters and the mineral content of water sourced from the deep water table within the students' residential area and environs of the main campus of the University of Port Harcourt, and to ascertain whether any contamination is within safe limits. The standards by which this research judges water quality are those prescribed by the World Health Organization (WHO), the United States Environmental Protection Agency (EPA) and the Nigerian Industrial Standard developed by the Standards Organization of Nigeria (SON).
Materials and Methods
1.1. Materials
1.1.1.        Water Samples
Drinking water samples were collected from students' residential areas and environs at the University of Port Harcourt Main Campus (Unipark, Abuja); the samples were collected from seventeen locations, as described in Table 1. The samples were collected in 2 L glass bottles fitted with an inner cork and an outer screw cap. The bottles were initially washed with detergent, rinsed thoroughly with tap water and then rinsed with distilled water. Prior to sample collection, each bottle was rinsed three times with the sample to be collected. The samples were stored at room temperature. All titrations in the physico-chemical analysis were done in triplicate for each sample and the average titre calculated.
1.2. Methods
1.2.1.        pH Determination
Apparatus: pH Meter.
The pH meter was calibrated with standardized solutions of pH 4.0 and 9.1 respectively. The pH was read after inserting the electrode of the pH meter into the sample and allowing the reading to stabilize.
1.2.2.        Total Alkalinity
Apparatus/Reagents: Burette, pipette, conical flasks, 0.001105 M HCl, phenolphthalein indicator, and methyl orange indicator. 25 ml of the sample was pipetted into a conical flask and 2 drops of phenolphthalein indicator were added; there was no colour change (indicating the absence of carbonate and hydroxyl alkalinity). 2 drops of methyl orange indicator were then added and the sample was titrated with the acid to a yellow endpoint.
Calculation:
Total Alkalinity (mg CaCO3/L) = (M × V × 50000) / V_sample
Bicarbonate Alkalinity (mg CaCO3/L) = (M × V × 30500) / V_sample
Where M = molarity of HCl, V = titre value (ml), and V_sample = volume of sample (ml)
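The alkalinity conversion above can be sketched as a small helper; the titre of 5.0 ml below is purely illustrative (not a result from the paper), while the 0.001105 M HCl and 25 ml sample volume are the paper's stated values.

```python
def total_alkalinity_mg_caco3_per_l(molarity_hcl, titre_ml, sample_ml):
    # Total Alkalinity (mg CaCO3/L) = M x V x 50000 / V_sample
    return molarity_hcl * titre_ml * 50000 / sample_ml

def bicarbonate_alkalinity_mg_caco3_per_l(molarity_hcl, titre_ml, sample_ml):
    # Bicarbonate Alkalinity (mg CaCO3/L) = M x V x 30500 / V_sample
    return molarity_hcl * titre_ml * 30500 / sample_ml

# Illustrative titre of 5.0 ml with 0.001105 M HCl and a 25 ml sample
ta = total_alkalinity_mg_caco3_per_l(0.001105, 5.0, 25.0)  # 11.05 mg CaCO3/L
```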
1.2.5.        Dissolved CO2 Content
Apparatus/Reagents: Burette, pipette, conical flasks, 0.01 M NaOH, phenolphthalein indicator.
25 ml of the sample was pipetted into a conical flask and 2 drops of phenolphthalein indicator was added. Titration was done against the base. Endpoint was determined by colour change from colourless to pink.
Calculation
Dissolved CO2 (mg/L) = (V × N × E × 1000) / V_sample
Where V = titre value (ml), N = normality of the base (0.0128), E = equivalent weight of CO2 (22), and V_sample = volume of sample (ml)
1.2.6.        Chloride Determination (Precipitation Titration)
Principle:
The principle behind this titration is the precipitation of chloride (Cl-) as AgCl by AgNO3 before red silver chromate (Ag2CrO4) forms at the endpoint.
Apparatus/Reagents: Burette, pipette, conical flasks, 0.014N AgNO3 and K2CrO4 indicator
25 ml of sample was pipetted into a conical flask, 2 drops of the indicator were added, and this was titrated against the AgNO3 solution until there was a colour change from yellow to brick red.
 Calculation:
Chloride (mg/L) = (V × N × E × 1000) / V_sample
Where V = titre value (ml), N = normality of AgNO3 (0.014), E = equivalent weight of chloride ion (35.5), and V_sample = volume of sample used (ml)
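Both the dissolved CO2 and chloride determinations use the same (V × N × E × 1000) / V_sample conversion, differing only in the normality and equivalent weight. A generic sketch, with purely illustrative titre values (the normalities, equivalent weights and 25 ml sample volume are the paper's):

```python
def titration_mg_per_l(titre_ml, normality, equiv_weight, sample_ml):
    """Generic (V x N x E x 1000) / V_sample conversion for acid-base
    and precipitation titrations reported in mg/L."""
    return titre_ml * normality * equiv_weight * 1000 / sample_ml

# Illustrative titres of 3.0 ml (CO2) and 4.0 ml (chloride), 25 ml samples
co2 = titration_mg_per_l(3.0, 0.0128, 22, 25.0)   # dissolved CO2, mg/L
cl = titration_mg_per_l(4.0, 0.014, 35.5, 25.0)   # chloride, mg/L
```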
1.2.7.        Silica Determination (Molybdosilicate Method)
Principle
The molybdosilicate method is based on the principle that, at a pH of about 1.2, ammonium molybdate ((NH4)6Mo7O24·4H2O) reacts with any silica and phosphate present in a sample to form hetero-polyacids. Oxalic acid is then added to neutralize any molybdophosphoric acid present. The reaction produces a yellow colour whose intensity is proportional to the silica that reacted with the molybdate. Standard colour solutions of silica are also prepared, and the colour intensity can be compared visually or its absorbance measured.
Apparatus: Conical flasks, beakers, pipettes, ammonium molybdate reagent ((NH4)6Mo7O24·4H2O), 1:1 HCl, oxalic acid (H2C2O4·2H2O)
Ammonium molybdate: prepared by dissolving 10 g of (NH4)6Mo7O24·4H2O in distilled water.
Oxalic acid: prepared by dissolving 7.5 g of H2C2O4·2H2O in 100 ml of distilled water.
Potassium chromate (K2CrO4) solution: prepared by dissolving 315 mg of K2CrO4 in distilled water and made up to 500 ml.
Borax solution: prepared by dissolving 2.5 g of borate decahydrate (Na2B4O7·10H2O) in distilled water and made up to 250 ml.
The standard colour solutions, of concentrations 0.00 to 1.00 mg Si/L, were prepared by mixing volumes of distilled water, potassium chromate and borax in the proportions given in Table 2.
The absorbance of the standards was measured using a UV spectrophotometer at 390 nm. 50 ml of sample was pipetted into a beaker, and 2 ml of ammonium molybdate and 1 ml of 1:1 HCl were added. The resulting solution was thoroughly mixed and allowed to stand for 7 minutes. 2 ml of oxalic acid was then added and, after 2 minutes, the absorbance of the solution was measured at 390 nm.
Calculation:
The silica content of each sample was determined by simple proportion, using the formula:
(Absorbance of standard) / (Concentration of silica in standard) = (Absorbance of sample) / (Concentration of silica in sample)
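The proportion above rearranges to a one-line conversion; the absorbance values below are illustrative, not measurements from the paper.

```python
def silica_mg_per_l(abs_sample, abs_standard, conc_standard_mg_per_l):
    # A_std / C_std = A_sample / C_sample  =>  C_sample = A_sample * C_std / A_std
    return abs_sample * conc_standard_mg_per_l / abs_standard

# Illustrative: sample absorbance 0.25 against a 1.0 mg Si/L standard reading 0.50
si = silica_mg_per_l(0.25, 0.50, 1.0)  # 0.5 mg Si/L
```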
1.2.8.        Total Hardness Determination (EDTA Titrimetric Method)
Principle
Ethylenediaminetetraacetic acid (EDTA) and its sodium salt form a soluble chelated complex when added to a solution of certain metal cations. The addition of a small amount of a dye such as Eriochrome Black T to an aqueous solution containing calcium and magnesium ions at a pH of about 10 results in a wine-red coloured solution. If EDTA is added as a titrant, any magnesium and calcium will be complexed and the solution will turn from wine red to blue.
Apparatus/Reagents: Burette, pipette, conical flasks, 0.01 M EDTA, Ammonia buffer, Eriochrome Black T indicator. 50 ml of sample was pipetted into the conical flask and 5 drops of indicator was added. 20 ml of Ammonia buffer was added and the resulting mixture was titrated with 0.01 M EDTA solution. The endpoint was determined by a colour change from wine red to blue.
Calculation
Total Hardness (mg CaCO3/L) = (V × M × E × 2.5 × 1000) / V_sample
Where V = titre value (ml), M = concentration of EDTA, E = equivalent weight of Ca2+ (40), 2.5 = (molecular mass of CaCO3) / (atomic mass of Ca2+), and V_sample = volume of sample (ml)
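As a sketch of the hardness conversion, note that E × 2.5 = 40 × 2.5 = 100, the molecular mass of CaCO3, so the formula scales millimoles of EDTA to mg CaCO3/L. The 2.0 ml titre below is illustrative; the 0.01 M EDTA and 50 ml sample are the paper's stated values.

```python
def total_hardness_mg_caco3_per_l(titre_ml, molarity_edta, sample_ml):
    # (V x M x E x 2.5 x 1000) / V_sample, with E = 40 and 2.5 = 100/40
    return titre_ml * molarity_edta * 40 * 2.5 * 1000 / sample_ml

# Illustrative titre of 2.0 ml of 0.01 M EDTA against a 50 ml sample
th = total_hardness_mg_caco3_per_l(2.0, 0.01, 50.0)  # 40.0 mg CaCO3/L
```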
1.2.9.        Sulphate Determination (Turbidimetric Method)
Principle:
Sulphate ion is precipitated in a hydrochloric acid medium with barium chloride (BaCl2) to form barium sulphate (BaSO4) crystals of uniform size. The absorbance of the BaSO4 suspension is measured using a UV spectrophotometer and the sulphate ion concentration is determined from the calibration curve developed.
Apparatus: UV spectrophotometer, conical flasks, pipettes, beakers, spatula, sulphate conditioning reagent, sulphate stock solution.
Preparation of conditioning reagent: the conditioning reagent was prepared by mixing 45 g of NaCl, 18 ml of conc. HCl, 60 ml of 20% isopropyl alcohol, 30 ml of glycerol and 180 ml of distilled water in a beaker, stirring thoroughly with a glass rod until the solution was clear.
Preparation of sulphate stock solution: prepared by dissolving 147.9 mg of anhydrous sodium sulphate (Na2SO4) in 1000 ml of distilled water.
Preparation of sulphate standard solutions: 0.1, 0.2, 0.3, 0.4 and 0.5 ml respectively of the stock solution were pipetted into five 100 ml volumetric flasks and made up to the 100 ml mark with distilled water to produce 1, 2, 3, 4 and 5 ppm sulphate standards. These were then transferred into appropriately labelled stoppered reagent bottles.
Formation of BaSO4 turbidity: 5 ml of the conditioning reagent was added to each of the 100 ml standard solutions as well as to 100 ml of each sample, and stirred for one minute. During stirring, a spatula-full of BaCl2 crystals was added. The absorbance of each standard as well as each sample was measured using the UV spectrophotometer at 420 nm; the agitated samples were allowed to stand in the UV spectrophotometer for 4 minutes before the reading was recorded.
Calculation
The absorbances of the five standard solutions were plotted against their concentrations to obtain a calibration curve. The equation of the resulting line (Equation 1) was used to calculate the sulphate ion content of each sample.
y = 0.0054x + 0  ----------(equation 1)
(R2 = 0.971)
Where y = absorbance, x = sulphate ion content (mg/L), 0.0054 = slope, 0 = intercept, R2 = extent of linearity
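Assuming, as the plotting procedure describes, that absorbance was regressed on concentration, a sample's sulphate content follows by inverting the calibration line. The absorbance value below is illustrative only.

```python
def conc_from_absorbance(absorbance, slope, intercept=0.0):
    # Invert the calibration line: absorbance = slope * conc + intercept
    return (absorbance - intercept) / slope

# Illustrative absorbance of 0.027 against the reported sulphate slope 0.0054
so4 = conc_from_absorbance(0.027, 0.0054)  # 5.0 mg/L
```

The same inversion applies to the nitrate calibration later in the methods, with its own slope.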
1.2.10.    Nitrate Determination (Brucine Colorimetric Method)
Apparatus/Reagents: UV spectrophotometer, volumetric flasks, pipettes, beakers, brucine sulphanilic acid (brucine), conc. H2SO4, 30% NaCl, conc. HNO3, stock nitrate solution.
Preparation of Nitric Acid Stock Solution: 8.5 ml of conc. HNO3 was dissolved in distilled water and diluted to 500 ml in a 1000 ml measuring cylinder.
Preparation of Nitrate Standard Solution: 0.1, 0.2, 0.3, 0.4 and 0.5 ml respectively of the stock solution was pipetted into five 100 ml measuring cylinders and made up to the 100 ml mark with distilled water to produce 1, 2, 3, 4 and 5 ppm of the nitrate stock solution. These were then transferred into appropriately labelled conical flasks.
5 ml of the 1 ppm standard solution was pipetted into a volumetric flask. 1 ml of 30% NaCl and 10 ml of conc. H2SO4 were added gently, followed by 0.1 g of brucine. Upon mixing, a deep red colour that turned yellow was produced. The absorbance of the resulting solution was measured using a UV spectrophotometer at 410 nm. The above procedure was repeated using 5 ml of each of the remaining standards as well as each sample.
Calculation:
The absorbance of each of the five standard solutions was plotted against its concentration to obtain a calibration curve. The equation of the resulting line (Equation 2) was used to calculate the nitrate content of each sample.
y = 0.0038x + 0 ----------------- (Equation 2)
                                                               (R2=0.9747)
Where y = absorbance, x = nitrate content (mg/L), 0.0038 = slope, 0 = intercept, R2 = extent of linearity
1.2.11.    Determination of Calcium, Iron, Zinc, Lead, Chromium, Cadmium and Sodium Content by Atomic Absorption Spectroscopy
The levels of the above-mentioned heavy metals and non-heavy metals were determined using an atomic absorption spectrometer (Bulk Scientific 205 AAA Model 210 VGP, with an air-acetylene flame in absorbance mode and an aspiration rate of 7 ml/min). Calcium was determined at a wavelength of 423 nm, sodium at 589 nm, iron at 248 nm, zinc at 214 nm, chromium at 357 nm, cadmium at 228 nm and lead at 283 nm.
Standard metal solutions for each metal were prepared, and calibration curves were obtained from a linear plot of the absorbances of the standards against their concentrations in mg/L. These were used to determine the concentration of each metal in each sample by reading off the calibration curves. The instrument was first zeroed by aspirating a blank solution in the nebulizer. The samples were then aspirated into the nebulizer at 7 ml/min and the absorbance of each sample recorded.
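The calibration procedure described here, fitting a line to standard absorbances and reading a sample concentration from it, can be sketched as follows. The standard concentrations and absorbances below are hypothetical, not the study's data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical standards (mg/L) and their measured absorbances
concs = [1, 2, 3, 4, 5]
absorbances = [0.02, 0.04, 0.06, 0.08, 0.10]
slope, intercept = fit_line(concs, absorbances)

# Read a sample concentration off the calibration line (absorbance 0.05)
sample_conc = (0.05 - intercept) / slope  # 2.5 mg/L under these assumptions
```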
Where M= molarity of HCI, V= titre value, and Vsample= Volume of Sample
1.2.5.        Dissolved Co2 Content
Apparatus/Reagents: Burette, pipette, conical flasks, 0.01 M NaOH, phenolphthalein indicator.
25 ml of the sample was pipetted into a conical flask and 2 drops of phenolphthalein indicator was added. Titration was done against the base. Endpoint was determined by colour change from colourless to pink.
Calculation
Where  V=titre value , N=normality of the base (0.0128), E=equivalent
Weight of co2(22),Vsample=Volume of Sample
1.2.6.        Chloride Determination (Precipitation Titration)
Principle:
               The principle behind this titration is the precipitation of C1 as AgCl by AgNO3 before AgCrO4 (red) is formed at the endpoint
Apparatus/Reagents: Burette, pipette, conical flasks, 0.014N AgNO3 and K2CrO4 indicator
25 ml of sample was pipetted into a conical flask, 2 drops of the indicator was added and this was titrated against AgNO3 solution until there was a colour change form yellow to brick red.
 Calculation:
Where V= titre value, N= normality of AgNO3 (0.014), E= equivalent
Weight of chloride ion (35.5),Vsample=Volume of sample used
1.2.7.        Silica Determination (Molybdosilicate Method)
Principle
The Molybdosilicate Method is based on the principle that at a pH of about 1.2, ammonium molybdate ((NH4)6M07024.4H20) reacts with any silica and phosphate present in a sample to form hetero-polyacids. Oxalic acid is then added no neutralize any molybdophosphoric acid present. This reaction produces a yellow colour whose intensity is proportional to the silica that reacted with the molybdate. Standard colour solutions of silica are also prepared and the colour intensity can be visually compared or its absorbance can be measured.
Apparatus: Conical flasks, beakers, pipettes, ammonium molybdate reagent: (NH4)6MO7O24.4H2O), 1:1 HCI, oxalic acid (H2C204.2H20)
Ammonium molybdate: prepared by dissolving 10g of (NH4)6M07024.4H20) in distilled water.
Oxalic acid: prepared by dissolving 7.5 g of H2C204.2H20 in 100 ml of distilled water.
Potassium Chromate (K2CrO4) Solution: prepared by dissolving 315 mg of K2CrO4 in distilled water and made up to 500 ml.
Borax Solution: prepared by dissolving 2.5 g of borate decahydrate Na2B407.10H20 in distilled water and made up to 250 ml.
The standard colour solution of concentrations 0.00 — 1.00 (mg Si/L) was prepared by mixing volumes of distilled water, potassium chromate and borax in the proportion given in (Table 2).
The absorbance of the standard was measured using a UV spectrophotometer at 390 nm. 50 ml of sample was pipetted into a beaker and 2 ml of ammonium molybdate and 1 ml of 1:1 HC1 were added to the beaker. The resulting solution was thoroughly mixed and allowed to stand for 7 minutes. 2 ml of oxalic acid was then added and after 2 minutes, the absorbance of the solution was measured at 390 nm.
1.2.8.        Total Hardness Determination (Edta Titrimetric Method)
Principle
Ethylene Diaminetetraacetic Acid, (EDTA) and its sodium salt forms chelated soluble complex when added to a solution of certain metal cations. The addition of a small amount of a dye such as Eriochrome Black T to an aqueous solution containing calcium and magnesium ions at pH of about 10, results in a wine red coloured solution. If EDTA is added as a titrant, any magnesium or calcium will be complexed and the solution will turn from wine red to blue.
Apparatus/Reagents: Burette, pipette, conical flasks, 0.01 M EDTA, Ammonia buffer, Eriochrome Black T indicator. 50 ml of sample was pipetted into the conical flask and 5 drops of indicator was added. 20 ml of Ammonia buffer was added and the resulting mixture was titrated with 0.01 M EDTA solution. The endpoint was determined by a colour change from wine red to blue.
1.2.9.        Sulphate Determination (Turbidimetric Method)
Principle:
Sulphate ion is precipitated in a hydrochloric acid medium with barium chloride (BaCI2) to form barium sulphate (BaSO4) crystals of uniform size.  The absorbance of the BaSO4 suspension is measured using a UV spectrophotometer and the sulphate ion concentration is determined from the calibration curved developed
Apparatus: UV spectrophotometer, conical flasks, pipettes, beakers, spatula, sulphate conditioning reagent, sulphate stock solution.
Preparation Of Conditioning Reagent: the conditioning reagent was prepared by mixing 45 g of NaCI, 18 ml of conc. HCI, 60 ml of 20 % isopropyl alcohol, 30 ml of glycerol and 180 ml of distilled water in a beaker and stirred thoroughly with a glass rod until the solution was clear. Preparation of Sulphate Stock Solution: this was prepared by dissolving 147.9 mg of anhydrous sodium sulphate (Na2SO4) in 1000 ml of distilled water. Preparation of Sulphate Standard Solution: 0.1, 0.2, 0.3, 0.4 and 0.5 ml respectively of the stock solution was pipetted into five 100 ml volumetric flasks and made up to the 100 ml mark with distilled water to produce 1, 2, 3, 4 and 5 ppm of the sulphate stock solution. These were then transferred into appropriately labelled stopper reagent bottles.
Formation Of Baso4 Turbidity: 5 ml of the conditioning reagent was added to the each of the 100 ml standard solution as well as to 100 ml of each sample. This was stirred for one minute. During stirring, a spatula full of BaCl2 crystals was added. The absorbance or each standard as well as each sample was measured using the UV spectrophotometer at 420 nm. The agitated samples were allowed to stand the in UV spectrophotometer for 4 minutes before recording the reading.
Calculation
The absorbance of the five standard solutions were plotted against their concentrations to obtain a calibration curve. The equation of the resulting curve (Equation 1) was used to calculate the sulphate ion content for each sample.
y = 0.0054x + 0  ----------(equation 1)
(R2 = 0.971)
Where y = sulphate ion content (mg/L), 0.0054 = slope, 0 = intercept, R2 = extent of linearity
1.2.10.        Nitrate Determination (Brucine Colorimetric Method)
Apparatus/Reagents: UV Spectrophotometer, volumetric flasks, pipettes, beakers, brucine sulphanilic acid (brucine), conc. H2SO4, 30 % NaCl, conc. HNO3, stock nitrate solution.
Preparation of Nitric Acid Stock Solution: 8.5 ml of conc. HNO3 was dissolved in distilled water and diluted to 500 ml in a 1000 ml measuring cylinder.
Preparation of Nitrate Standard Solutions: 0.1, 0.2, 0.3, 0.4 and 0.5 ml respectively of the stock solution were pipetted into five 100 ml measuring cylinders and made up to the 100 ml mark with distilled water to produce 1, 2, 3, 4 and 5 ppm nitrate standards. These were then transferred into appropriately labelled conical flasks.
5 ml of the 1 ppm standard solution was pipetted into a volumetric flask. 1 ml of 30 % NaCl and 10 ml of conc. H2SO4 were added gently to the 1 ppm solution, followed by the addition of 0.1 g of brucine. Upon mixing, a deep red colour which turned yellow was produced. The absorbance of the resulting solution was measured using a UV spectrophotometer at 410 nm. The above procedure was repeated using 5 ml each of the remaining standards as well as for each sample.
Calculation:
The absorbance of each of the five standard solutions was plotted against its concentration to obtain a calibration curve. The equation of the resulting line (Equation 2) was used to calculate the nitrate content of each sample.
y = 0.0038x + 0 ----------------- (Equation 2)
                                                               (R2=0.9747)
Where y = absorbance, x = nitrate concentration (mg/L), 0.0038 = slope, 0 = intercept, R2 = extent of linearity
1.2.11.        Determination of Calcium, Iron, Zinc, Lead, Chromium, Cadmium and Sodium Content by Atomic Absorption Spectroscopy
The levels of the above-mentioned heavy metals and non-heavy metals were determined using an atomic absorption spectrometer (Buck Scientific 205 AAA Model 210 VGP, with air-acetylene flame on absorbance mode and an aspiration rate of 7 ml/min). Calcium was determined at a wavelength of 423 nm, sodium at 589 nm, iron at 248 nm, zinc at 214 nm, chromium at 357 nm, cadmium at 228 nm and lead at 283 nm.
Standard metal solutions for each metal were prepared and calibration curves for each metal were obtained from a linear plot of the absorbance of the standard against their concentrations in mg/L. This was used to determine the concentration of each metal in each sample by extrapolation from the calibration curves.  The instrument was first calibrated to zero by aspirating a blank solution in the nebulizer. The samples were then aspirated in the nebulizer at 7 ml/min and the absorbance of each sample recorded.
Results and Discussion
The results of the physico-chemical characteristics of the sampled water sources are presented in Table 3 below. From the results, the samples can be classified as generally soft: the highest hardness value was 14.67 ± 0.00, which falls in the soft water category of the Twort hardness classification [4]. Hardness is directly related to the calcium levels of the samples; calcium accounts for about two-thirds of water hardness. The recommended upper limit of calcium in drinking water is 50 mg/L. The calcium values were all less than 6.0 mg/L, and this is reflected in the low hardness values obtained.
The pH values of all samples were outside the acceptable range for safe drinking-water; they were generally acidic, ranging from 4.44 to 6.06. Samples 3, 4, 5, 7, 8, 10, 12 and 17 all had values below 5.0, with sample 12 having the lowest value of 4.44. The acidic nature of most samples can be attributed to their low hardness (soft water). Soft water is known to be acidic, and this increases the 'plumbosolvency' of such water.
Dissolved CO2 is one of the components of the carbonate equilibrium in water. The highest value of CO2 was 12.02 ± 1.50 mg/L. Dissolved CO2 is significant in that high values (usually above 10 mg/L for surface waters) indicate significant biological oxidation of the organic matter in water. Dissolved CO2 also has a direct relationship with pH and alkalinity. From the results, the dissolved CO2 level is low for all samples, indicating little biological oxidation of organic matter. At pH values between 4.6 and 8.3, bicarbonate alkalinity is in equilibrium with dissolved CO2. The generally low values of dissolved CO2 therefore correspond to the generally low (bicarbonate) alkalinity.
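The CO2/bicarbonate balance referred to here follows the first carbonate dissociation equilibrium, pH = pK1 + log10([HCO3-]/[CO2]). A minimal sketch of that relationship; the default pK1 ≈ 6.35 (at 25 °C) is a textbook value assumed here, not a figure from the document.

```python
def co2_to_bicarbonate_ratio(ph: float, pk1: float = 6.35) -> float:
    """[CO2]/[HCO3-] ratio from the first carbonate dissociation:
    pH = pK1 + log10([HCO3-]/[CO2])  =>  ratio = 10**(pK1 - pH)."""
    return 10 ** (pk1 - ph)
```

At lower pH the ratio grows, i.e. a larger share of the carbonate system is present as dissolved CO2, consistent with the acidic, low-alkalinity samples described above.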
Chloride in water does not have a negative health impact. Its impact is aesthetic in nature, with high concentrations exceeding 250 mg/L producing a salty taste (when the associated cation is sodium). The chloride levels of all samples were quite low, the highest value being 66.28 ± 1.33 mg/L.
The silica and sulphate concentrations were very low. The limits are 1-30 mg/L and 250 mg/L, respectively [5]. The silica content was almost insignificant (all less than 0.1 mg/L). The sulphate content was also very low, the highest being 2.96 mg/L for sample 14, and in some cases not determinable (samples 11 and 15). Nitrate is naturally present in soil, water and food due to the nitrogen cycle. The activities of man also increase the nitrate levels in the environment. To this end, WHO and NIS set a limit of 50 mg/L, while EPA stipulates a stricter standard of not more than 10 mg/L (nitrate as nitrogen). The range of nitrate concentration for the samples was 11.32–58.68 mg/L, against the 50 mg/L limit set by WHO and NIS [6].
Samples 13 and 14 exceeded the nitrate limit (58.68 and 52.11 mg/L respectively). The nitrate concentration of sample 12 was just at the threshold (50 mg/L). Nitrate levels can become dangerously elevated with the increased use of nitrogen-based fertilizers and manure, coupled with the fact that nitrate is extremely soluble. The environment around the boreholes supports the thriving of bacteria which play a significant role in the nitrogen cycle, and nitrogen easily leaches into groundwater from runoff [7]. Since the sample area is inhabited mainly by adults, the most lethal health effect of nitrate poisoning is not expected to be seen (infants are much more sensitive than adults to methaemoglobinaemia caused by nitrate, and essentially most deaths due to nitrate poisoning have been in infants). However, long-term exposure to nitrates can, apart from causing methaemoglobinaemia and anaemia, cause diuresis, starchy deposits and haemorrhaging of the spleen. Nitrites in the stomach can react with food proteins to form nitrosamines; these compounds can also be produced when meat containing nitrites or nitrates is cooked, particularly using high heat. While these compounds are carcinogenic in test animals, evidence is inconclusive regarding their potential to cause cancer (such as stomach cancer) in humans. The levels of some selected heavy and non-heavy metals in the water samples were determined and the results are shown in Table 4.
The AAS determination of heavy and non-heavy metals showed that the samples were free from these metals except for sodium and calcium. The range of values for sodium was 0.40–16.30 mg/L, well below the guideline value of 50 mg/L for sodium [8]. Sample 17 was the only sample with a trace of zinc (0.13 mg/L), and this was well below the limit of 3 mg/L set by NIS [9] and 5 mg/L set by EPA [10]. The increased corrosivity of these samples therefore carries an increased associated risk of dissolving metals and other materials including lead, iron, zinc, nickel, brass, copper and cement/concrete [8]. If the water distribution system was laid with pipes containing any of these metals, then the risk of increased levels of these, especially lead, would be high. However, this seems not to be the case, because the lead levels obtained from AAS analysis of all the samples were either zero or very low.
Conclusion and Recommendation
The physico-chemical analyses performed on the samples demonstrated that the physico-chemical quality of the water samples was mostly within the limits specified by WHO and EPA. The health implications of the physico-chemical quality were considered to be important on a long-term basis, since these contaminants, at the levels at which they occurred in the water samples, can accumulate over time. The pH of the samples was found to be acidic, and it can be concluded that the same acidic aquifer serves the entire sample area. The pH of the water must be controlled by increasing alkalinity and calcium levels, since acidic water tends to be corrosive and can dissolve metal fittings and cement into the water, leading to contamination. Also, the construction materials that have been used, and that will be used in the future, should be reviewed to ensure that they can withstand the acidity of the water. It was not within the scope of this research to determine the size of the underground aquifer; it is recommended that this be determined in order to ascertain the extent to which the remediation measures proposed herein should be implemented. The nitrate levels of two samples were found to exceed the acceptable limit (50 mg/L as nitrate ion), while one sample had exactly 50 mg/L. It is recommended that biological denitrification for surface water and ion exchange for groundwater be employed in order to reduce the nitrate levels.
Conflict of Interest
The authors have no conflict of interest to declare.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00151.pdf
https://biogenericpublishers.com/jbgsr.ms.id.00151.text/ For more open access journals click on https://biogenericpublishers.com/
biogenericpublishers · 4 years ago
Text
Assessment of Hygiene in Collective Restaurants of Abidjan City (Côte d'Ivoire) by Kouamé Kohi Alfred in Open Access Journal of Biogeneric Science and Research
Abstract
In collective restaurants especially, the large quantities of food prepared on a daily basis mean that the basic rules of hygiene are often neglected. This is particularly true in our countries, where the workforce often has a low level of training. The aim of this study was to assess the effectiveness of hygiene measures implemented in collective restaurants in the industrial zone of Yopougon (YOP1 and YOP2) and a University Hospital Center (CHU) in Abidjan to ensure food safety for the guests. An inspection of three collective restaurants in the city of Abidjan was carried out. Samples of the dishes, the dishwashing areas and the hands of food handlers taken just before the completion of their task were collected for the detection and enumeration of Mesophilic Aerobic Germs, Staphylococcus aureus, coliforms and Salmonella. It was found that the food, the hands of the producers and the dishwashing areas were contaminated with Mesophilic Aerobic Germs and Staphylococcus aureus. The loads ranged from 0 to (2.6±0.3)×10⁹ CFU/g and 0 to (1.57±0.1)×10⁶ CFU/g, respectively. These loads were not in compliance with EC standard No. 2073/2005. The food supplied by these restaurants was therefore of unsatisfactory microbiological quality.
Keywords: Collective restaurants, hygiene, microorganisms
Introduction
The reference to quality in its various meanings has become omnipresent, as the food product undergoes transformations and manipulations of which the consumer knows neither the nature nor the manipulators. The perceived quality of food refers to a complex set of expected qualities spanning several aspects: nutritional, organoleptic, functional, social and health; not least of these, the health quality of food refers to chemical and bacteriological safety. Collective catering is an economic activity that aims to ensure the common intake of food by a group of people outside the domestic setting. It includes the preservation and distribution of meals for collective use. Collective restaurants are defined as public or private establishments that provide a catering service free of charge or for a fee and where at least part of the clientele is made up of a community of regular consumers [1,2]. The safety of the food served by these establishments remains a major concern for the official services in charge of control. Ready-made meals are obtained from various foodstuffs, each with a specific flora. In mass catering, respect for hygiene principles is a vital issue because food poisoning can be a sign of food insalubrity for the consumer. In industry, food poisoning can affect company performance through increased absenteeism [3]. In France between 1995 and 2005, 5847 outbreaks of collective foodborne illness (TIAC), 80351 patients, 7364 hospitalisations and 45 deaths were recorded; 64% of these outbreaks occurred in collective or commercial catering. Food poisoning also mobilizes the media, which tend to amplify such accidents: the slightest toxi-infection is considered a disaster. The discredit thus cast can weigh heavily on the future of a company that prepares meals in advance [4]. This is why hygiene rules must be enforced in restaurants in order to prevent various food-borne diseases.
The purpose of this work was to assess the effectiveness of the hygiene measures implemented in collective restaurants in the industrial zone of Yopougon and a University Hospital Center (CHU) in Abidjan to ensure food safety for diners.  
Materials and Methods
1.1.             Study Sites
Two restaurants from two companies in the industrial zone of Yopougon, as well as a restaurant from a University Hospital Center (CHU) in Abidjan, were selected for this study. The two restaurants in the Yopougon industrial zone provided food to the workers of these enterprises, while the University Hospital Center restaurant provided food to the patients of this center. These two restaurants were selected for their willingness to participate in this study, but also because of the economic importance of the businesses that host them. The CHU restaurant was chosen because the patients to whom it provides meals are people at risk.
1.2.             Sampling
An inspection of the restaurants was carried out according to the method of [5]. The places where utensils were stored, and where food was stored before being served to customers, were inspected. Ready-made, ready-to-serve dishes were collected from each restaurant. Samples were taken from the hands of the producers just prior to serving, and from the surfaces of the dishwashing areas and kitchen utensils, using the method of Kouame et al. [6]. Three samples were taken in each restaurant. After sampling, the samples were placed in a cooler containing dry ice and transported to the laboratory within four hours of collection for the various analyses.
1.3.             Isolation and Enumeration of Bacteria
The stock solution and decimal dilutions were prepared according to the methods of [7]. For the analyses, ten grams (10 g) of sample were crushed and handled under sterile conditions created by the flame of a Bunsen burner and mixed in a "stomacher" bag with 90 mL of buffered peptone water (AES Laboratoire, Combourg, France), previously sterilized and used as diluent. Mesophilic aerobic germs (MAG) were counted on PCA (Plate Count Agar) agar (Oxoid Ltd, Basingstoke, Hampshire, England) after two (2) days of incubation at 30 °C according to AFNOR Standard NF V08-051, 1999. The detection and counting of Staphylococcus aureus were done on Baird-Parker agar after one (1) day of incubation at 30 °C using the method of [8]. Crystal violet neutral red bile lactose agar (VRBL agar) was used for the coliform count, after one (1) day of incubation at 30 °C for total coliforms and 44 °C for faecal coliforms, according to AFNOR Standard NF ISO 4832, July 1991. The isolation and enumeration of Salmonella were carried out using the Hendriksen [9] method in several steps: pre-enrichment in a non-selective medium, followed by enrichment in a selective medium and culture on selective agar. For pre-enrichment in non-selective medium, twenty-five grams (25 g) of sample were homogenized with 225 mL of peptone water in a sterile jar and incubated at 37 °C for 24 h. For selective enrichment, one millilitre (1 mL) of the pre-enriched culture was transferred using a sterile pipette into 10 mL of previously prepared sterile Rappaport-Vassiliadis broth and incubated for 24 h at 37 °C. Salmonella enumeration was performed on Salmonella-Shigella agar (Oxoid): each enrichment culture was streaked on Salmonella-Shigella (SS) agar and incubated at 37 °C for 24 h. On Salmonella-Shigella agar, the presumptive colonies were colourless and transparent, with or without a black centre.
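Counts read from the plates are mapped back to loads in the original sample through the decimal dilution series. A minimal sketch of that arithmetic; the helper name and the default 1 mL plated volume are illustrative assumptions, not details from the protocol above.

```python
def cfu_per_gram(colonies: int, dilution_factor: float, plated_volume_ml: float = 1.0) -> float:
    """Back-calculate CFU per gram (or per mL) of original sample from a plate count.
    dilution_factor is the reciprocal of the decimal dilution plated, e.g. 1e6 for 10^-6."""
    return colonies * dilution_factor / plated_volume_ml

# Example: 38 colonies on the 10^-6 plate correspond to 3.8e7 CFU/g.
```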
1.4.             Statistical Analysis
The R software (version 3.01) was used for the statistical analysis; an ANOVA test and Duncan's post-hoc test were performed at the 5% significance level. This software made it possible to calculate the means and standard deviations of the microbiological parameters, to compare the means of the microbiological parameters of the samples, and to determine whether the differences observed in these means were significant at the 5% threshold.
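The comparison of group means rests on the one-way ANOVA F statistic (between-group mean square over within-group mean square). A minimal, stdlib-only sketch of that computation, not a reproduction of the authors' R analysis:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of measurements:
    between-group mean square divided by within-group mean square."""
    values = [x for g in groups for x in g]
    grand_mean = mean(values)
    k, n = len(groups), len(values)
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F is compared against the F distribution with (k-1, n-k) degrees of freedom at the 5% level; Duncan's test then localizes which pairs of means differ.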
Results
For ethical reasons, the two restaurants in the industrial zone of Yopougon are coded YOP 1 and YOP 2, and the one at the University Hospital Center is coded CHU.
1.1.             Microbial Load of the Menus, the Hands of the Producers, the Utensils and the Dishwashing Area of YOP1
The ordinary sauce made from vegetables, tomatoes and fish, the kitchen utensils and the hands of the food service staff were free of microorganisms. The raw vegetables (starter dish) were contaminated with Mesophilic Aerobic Germs (MAG) and Staphylococcus aureus, with respective loads of (3.8±0.4)×10⁷ CFU/g and (1.47±0.1)×10⁶ CFU/g. The special sauce (for the company's staff), the rice, the ready-to-eat potatoes and the dishwashing area were contaminated with Mesophilic Aerobic Germs (MAG), with respective loads of (1.2±0.7)×10⁷ CFU/ml, (3.2±0.1)×10⁵ CFU/g, (6.1±0.7)×10⁵ CFU/g and (9.1±0.8)×10⁶ CFU/cm². All samples were free of Salmonella (Tables 1 & 2).
1.2.             Microbial Load of Menus, Producers' Hands, Utensils and Dishwashing Area of YOP2
The raw vegetables (starter dish), the rice dish and the hands of the food service staff were free of microorganisms. The special sauce (intended for the company's staff), the ordinary sauce, the ready-to-eat potatoes, the kitchen utensils and the dishwashing area were contaminated with Mesophilic Aerobic Germs (MAG), with respective loads of (2±0.3)×10⁶ CFU/ml, (1.5±0.1)×10⁶ CFU/ml, (1.5±0.2)×10⁵ CFU/g, (5±0.6)×10⁷ CFU/cm² and (2.6±0.3)×10⁹ CFU/cm². All samples were free of Salmonella (Tables 3 & 4).
1.3.             Microbial Load of Menus, Producers' Hands, Utensils and Dishwashing Area of CHU
All samples tested were coliform-free except for the attiéké. In addition, the samples of attiéké were contaminated with all the germs tested, with a predominance of Mesophilic Aerobic Germs at a load of (2±1.2)×10⁴ CFU/g. Staphylococcus aureus predominated in the fried fish with a load of (2±0.1)×10⁴ CFU/g, while Mesophilic Aerobic Germs predominated in the peanut sauce with a load of (2.5±0.9)×10⁵ CFU/ml. All samples were free of Salmonella (Tables 5 & 6).
Table 1: Microbial loads in YOP1 menus.
Table 2: Microbial load on the hands of producers, utensils and the dishwashing area of YOP1.
Table 3: Microbial loads in YOP2 menus.
Table 4: Microbial load on the hands of producers, utensils and the dishwashing area of YOP2.
Table 5: Microbial loads in CHU menus.
Table 6: Microbial load on the hands of producers, utensils and the dishwashing area of CHU.
Discussion
Collective restaurants in companies and hospitals are becoming more and more indispensable nowadays. They allow a company to keep its employees on site and to control their food in order to avoid possible food poisoning problems; for hospitals, they make it possible to follow and control the diet of patients. However, poor hygiene management in these restaurants can be a source of problems for these companies and hospitals. The objective of this study was to assess the effectiveness of hygiene measures implemented in collective restaurants in the industrial zone of Yopougon and a University Hospital Center (CHU) in Abidjan in order to ensure the food safety of the guests.
The meals served, as well as the hands of the providers and the dishwashing areas of the restaurants YOP 1, YOP 2 and CHU, were of unsatisfactory microbiological quality according to EC standard No. 2073/2005, except for the fish soup served at the CHU restaurant. Similar results were found in Senegal by Tayou [10] in a study of the hygiene of modern collective restaurants in Dakar. The menus served by the restaurants in this study were colonized by microorganisms with loads exceeding the standard. The presence of these microorganisms reflects exposure of the dishes to a soiled environment (air, spoon pots, plates, etc.). Their presence also provides information on the state of cleanliness of food handlers, the conditions of conservation, and the efficiency of product treatment processes; it remains the best indicator of the application of good hygiene practices. Staphylococci are of human origin (skin, hair, nostrils, mouth) and indicate a lack of hygiene; their presence in the dishes of these restaurants points to a lack of personal hygiene among the food handlers. In addition, the microbiological quality of the meals served to consumers depends on the initial contamination of raw materials, the possibility of additional contamination at each stage of the production process, the possibility of residual contamination when a sanitizing treatment is applied, and the potential for multiplication of the microorganisms present in the food [11]. A poorly adapted hygiene policy will result in an increase in biological contamination, with the possibility of development of pathogenic microorganisms (Salmonella, coliforms, Staphylococci) and a risk of food poisoning [3]. The poor quality of the food produced by these companies could affect the health of workers and consequently increase the rate of absenteeism and impair company performance.
Conclusion
In our country, collective restaurants are growing every day, particularly in companies. When the hygienic conditions of this catering are not respected, the meals present a considerable risk due to the possible presence of microorganisms pathogenic for the consumer. The distribution of meals to communities therefore requires special control in order to protect the health of the guests. The aim of this study was to assess the effectiveness of hygiene measures implemented in collective restaurants in the industrial zone of Yopougon and a University Hospital Center (CHU) in Abidjan to ensure the food safety of the guests. It was found that the dishes, as well as the dishwashing areas and the hands of food handlers, contained germs such as Staphylococcus and Mesophilic Aerobic Germs. The loads of these germs in most cases exceeded EC standard No. 2073/2005; the dishes were therefore of unsatisfactory microbiological quality. The poor quality of the food from these companies could affect the health of the workers and consequently increase the rate of absenteeism and impair company performance.
Competing Interests
The authors declare that there is no competing interest related to this manuscript.
Authors' Contributions
This work was carried out in collaboration among all authors. Authors KKA, BKJP and BZBIA designed the study, performed the statistical analysis, wrote the protocol and wrote the first draft of the manuscript. Author DKM managed the analyses of the study. Author KKA managed the literature searches. All authors read and approved the final manuscript.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00159.pdf
https://biogenericpublishers.com/jbgsr.ms.id.00159.text/ For more open access journals click on https://biogenericpublishers.com/
biogenericpublishers · 4 years ago
Text
Does the Measles, Mumps and Rubella (MMR) Vaccine Enhance One or More Specific Functions in Children and Can it Help against this Novel Paediatric Inflammatory Multisystem Syndrome? by Carl Dowling in Open Access Journal of Biogeneric Science and Research
Introduction
On 31st December 2019, the World Health Organisation (WHO) was informed of a novel virus known as Covid-19. This virus originated from Wuhan, China, from where it rapidly spread to different parts of the world and became a global pandemic. Covid-19 mainly affected elderly and vulnerable adults. However, in April 2020 children started to present with a rare, dangerous reaction which was unknown to healthcare providers. The novel syndrome seen in children has now been named Paediatric Inflammatory Multisystem Syndrome (PIMS). Some experts say that this new syndrome seen in children is related to Covid-19 and resembles Kawasaki Disease (KD) and Toxic Shock Syndrome (TSS). According to the Centers for Disease Control and Prevention [1], it is recommended that all children receive two doses of the measles, mumps and rubella (MMR) vaccine. This editorial will analyse key concepts from research collected through different studies, in order to gain a better understanding of the MMR vaccine and whether it has any benefit for a child's immune response when fighting against this novel PIMS.
What is Covid-19?
According to the WHO [2], Covid-19 is a viral infectious disease which causes Severe Acute Respiratory Syndrome. Ferretti [3] states that the virus can be transmitted through exhaled droplets and contamination of surfaces. According to Singal [4], symptoms of Covid-19 include fever, sore throat and cough, followed by breathing difficulties. Furthermore, Singal [4] explains that symptoms of Covid-19 in neonates, infants and children are significantly milder than they are in adults. Roser [5] states that as of May 14th 2020 there were 4,477,573 reported cases, including 299,958 reported deaths, with 2.2% of those deaths being related to children aged 0-17 years of age. Verdoni [6] states that in children, the respiratory involvement in Covid-19 takes a more benign course. Mehta [7] stated that Covid-19 carried a 3.7% mortality rate, compared to a mortality rate of less than 1% from influenza. Furthermore, Mehta [7] mentions that Covid-19 can cause cytokine storms within the body, and that it is advantageous to identify and treat the hyperinflammation using existing approved therapies where possible, in order to reduce the rise in mortality.
What is Paediatric Inflammatory Multisystem Syndrome?
According to Herman [8], approximately a month after the first surge of Covid-19 cases in New York, at least 50 children developed a multisystem inflammatory syndrome, suggesting it is a post-infectious immune response related to Covid-19. The European Centre for Disease Prevention and Control [1] stated that a total of 230 suspected cases of this novel PIMS associated with Covid-19 had been reported within Europe, with ages ranging between 0 and 19 years. Riphagen [9] explains that symptoms of this PIMS in children include fever, rash, conjunctivitis, peripheral oedema, extremity pain, abdominal pain, gastrointestinal symptoms and cardiac problems. Verdoni [6] states that there is evidence which proposes that tissue damage from Covid-19 is mediated by innate immunity; however, this novel PIMS causes a similar reaction when compared to Covid-19, as cytokine storms are caused by macrophage activation. According to Ford [10], these new cases of the novel PIMS have common overlapping features of TSS and KD. In the Riphagen [9] study, 8 children were admitted to the Paediatric Intensive Care Unit (PICU); none of them had any underlying health issues and all tested negative for Covid-19, although all had known family exposure to Covid-19. Six were of Afro-Caribbean descent and 5 were boys. Furthermore, the Riphagen [9] study mentions that the children were given intravenous (IV) immunoglobulin 2 g/kg in the first 24 hours of arrival in the PICU, followed by aspirin if needed. Shekerdemian [11] conducted a study in Italy on this novel PIMS and found that children in group 2 were older than those typically seen with KD, and had a higher rate of cardiac involvement and macrophage activation syndrome (MAS). Furthermore, Shekerdemian [11] states that all children made a full recovery; however, all patients received immunoglobulin, and 80% required further treatment with steroids.
To know more about open access Journal of Biogeneric Science and Research click on https://biogenericpublishers.com/ To know more about this article click on https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00057.pdf https://biogenericpublishers.com/jbgsr.ms.id.00057.text/ For Online Submissions Click on https://biogenericpublishers.com/submit-manuscript/
biogenericpublishers · 4 years ago
Text
Acute Coronary Syndrome by Ugur Koca in Open Access Journal of Biogeneric Science and Research
Summary
Cardiovascular events related to ischemic coronary diseases are among the leading causes of death in the world. The most common of these diseases fall into the diagnosis group called acute coronary syndromes (ACS). According to the 2012 data of the World Health Organization (WHO), ischemic heart disease killed 7.4 million people, stroke 6.7 million, and Chronic Obstructive Pulmonary Disease (COPD) 3.1 million. Among cardiac markers, cardiac troponins (cTn) are very sensitive and specific indicators of myocardial damage; the cardiac troponin group includes troponin T (cTnT) and troponin I (cTnI). In ACS, increased cTn levels are important in terms of both prognosis and treatment. In international algorithms, they are accepted as standard markers in the diagnosis and treatment of ACS. In many studies, cTn elevation was found to have negative short-term prognostic value, with or without myocardial infarction (MI), in patients hospitalized in the intensive care unit. However, the level of cTn in the blood can increase even without ischemic coronary disease, and even for non-cardiac reasons. Acute coronary syndrome occurs as a result of impaired integrity of an atherosclerotic plaque in a coronary vessel. The clot formed on the plaque impairs coronary blood flow to various degrees, and different degrees of coronary spasm may accompany the picture. As a result of these changes, ST elevation myocardial infarction (STEMI), non-ST elevation acute myocardial infarction (NSTEMI) or unstable angina pectoris (Unstable Angina Pectoris, UAP) may occur clinically.
Keywords: Troponin; Heart failure; Unstable angina
Introduction
Cardiovascular events related to ischemic coronary diseases are the leading causes of death in the world [1]. Those who suffer from ischemic coronary disease often present to emergency departments for initial diagnosis and treatment.
The most common of these diseases fall into the diagnosis group called acute coronary syndromes (ACS) [2]. An approach consisting of clinical history, ECG and cardiac markers is used in the diagnosis of ACS in the emergency department. Although the history and ECG elements of the clinical approach have remained essentially unchanged, the cardiac markers have changed frequently in recent years, and they continue to feature in diagnostic approaches as they are updated. Among the cardiac markers, cardiac troponins (cTn) are highly sensitive and specific indicators of myocardial damage; the cardiac troponin group includes troponin T (cTnT) and troponin I (cTnI). In ACS, increased cTn levels are important in terms of both prognosis and treatment [3]. In international algorithms, they are accepted as standard markers in the diagnosis and treatment of ACS [4,5]. In many studies, cTn elevation was found to have negative short-term prognostic value, with or without myocardial infarction (MI), in patients hospitalized in the intensive care unit. However, the level of cTn in the blood can increase even without ischemic coronary disease [6]. The number of clinical studies on increases in blood cTn levels unrelated to ischemic coronary disease is low. To know more about open access Journal of Biogeneric Science and Research click on https://biogenericpublishers.com/ To know more about this article click on https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00031.pdf https://biogenericpublishers.com/jbgsr.ms.id.00031.text/ For Online Submissions Click on https://biogenericpublishers.com/submit-manuscript/
1 note · View note
biogenericpublishers · 2 years ago
Text
Short Course Digoxin in Acute Heart Failure by Nouira Semir in Open Access Journal of Biogeneric Science and Research
Tumblr media
ABSTRACT
Background: Despite many critical voices regarding its efficacy and safety, digoxin may still have a role in the management of heart failure. The objective of this study was to evaluate the efficacy and safety of a short course of digoxin therapy started in the emergency department, based on clinical outcomes 30 days after hospital discharge.
Methods: From the Great Tunisian registry, acute decompensated heart failure (ADHF) patients treated from January 2016 to January 2018 were identified. Patients with incomplete data were excluded. Digoxin-treated and non-treated patients were compared in a matched control study with respect to the primary outcomes of all-cause mortality and HF readmission. Secondary outcomes included changes in cardiac output (CO) and left ventricular ejection fraction (LVEF) 72 hours after hospital admission.
Results: The study population comprised 104 digoxin-treated and 229 matched non-treated patients with a mean age of 67.4±12.8 years. At 72 hours after ED admission, there was a larger increase in CO (17.8% vs 14%; p=0.015) and LVEF (14.4% vs 3.5%; p=0.003) in the digoxin group compared to the control group. At 30 days post hospital discharge, 34 (10.2%) patients had died and 72 (21.6%) had been readmitted. Use of digoxin was associated with a decreased risk of death and hospital readmission [odds ratio, 0.79 (95% CI, 0.71-0.89)].
Conclusion: In ADHF patients, treatment with digoxin was associated with a significantly decreased risk of 30-day mortality and hospital readmission, along with improvements in cardiac output and left ventricular ejection fraction.
Key words: Acute heart failure; digoxin; mortality; rehospitalization; emergency department.
INTRODUCTION
Heart failure (HF) is a major worldwide health problem and one of the most important causes of hospital admissions [1,2]. These hospitalizations are responsible for an important economic burden and are associated with high mortality rates, up to 20% following hospital discharge [3,4]. Acute decompensated HF (ADHF) management is difficult given the heterogeneity of the patient population, incomplete understanding of its pathophysiology, and a lack of evidence-based guidelines. Although the majority of patients with ADHF appear to respond well to initial therapies consisting of loop diuretics and vasoactive agents, these first-line treatments have failed to decrease post-discharge mortality and readmission rates [5,6]. Investigations of novel therapies such as serelaxin have not shown a significant clinical benefit: in a recent multicenter, double-blind, placebo-controlled trial of patients hospitalized for acute heart failure, the risk of death at 180 days was not lower in patients who received intravenous serelaxin for 48 hours than in those who received placebo [7]. Numerous other clinical trials of ADHF treatments have been published, with disappointing results in terms of efficacy and/or safety [8-11]. Digoxin is one of the oldest compounds in cardiovascular medicine, but its beneficial effect is very controversial [12]. Yet digoxin has many potentially beneficial properties for heart failure: it is the only available oral inotrope that alters neither blood pressure nor renal function. Despite its useful hemodynamic, neurohormonal, and electrophysiological effects in patients with chronic congestive HF, concerns about digoxin safety have been constantly highlighted [13]. Consequently, the use of digoxin has decreased considerably in the last 15 years [12]. Underprescribing of digoxin is problematic for several reasons.
First, it disregards the substantial beneficial effect of digoxin in reducing hospital admissions in HF patients. Second, given its low cost, digoxin's favorable cost-effectiveness ratio is highly desirable in low-income countries. Moreover, whether a short course of digoxin is useful in ADHF has not previously been investigated in the era of newer heart failure therapies, including β-blockers, angiotensin-converting enzyme inhibitors, and angiotensin-receptor blockers [12]. The objective of this study was to assess the efficacy and safety of a short course of digoxin in patients admitted to the ED with ADHF (Figures 1 and 2).
PATIENTS AND METHODS
Data Source
We conducted a retrospective matched case-control study to assess the association between digoxin treatment and 30-day outcome in patients with ADHF. ADHF patients were identified from the Great Tunisian database between January 2016 and January 2018. The patients included were residents of a community of 500,000 inhabitants in eastern Tunisia, served by two university hospitals (Fattouma Bourguiba Monastir and Sahloul Sousse). ADHF was defined as acute onset of symptoms within 48 hours preceding presentation, dyspnea at rest or with minimal exertion, evidence of pulmonary congestion on chest radiograph or lung ultrasound, and NT-proBNP ≥1400 pg/ml. The electronic medical recording system provided details of each patient admitted to the emergency department (ED) for acute undifferentiated non-traumatic dyspnea.
Study Population
Patients were included if the following data were available: demographic characteristics, comorbidities, current drug use, baseline NYHA functional class, physical exam findings, standard laboratory tests, brain natriuretic peptide levels at ED admission, echocardiographic results, bioimpedance-measured cardiac output at ED admission and at hospital discharge, digoxin daily dose, and 30-day follow-up information including ED readmission and survival status. All the listed criteria had to be fulfilled for a patient's inclusion. A patient who received at least 0.25 mg of oral digoxin (one tablet) daily for three days during the hospital stay was defined as a case; patients who did not receive digoxin were selected as controls. The protocol used in this study was approved by the ethics committee of our institution, and all subjects gave written informed consent to be included in the database. Exclusion criteria were ongoing treatment with digoxin, pregnancy or breast-feeding, known severe or terminal renal failure (eGFR <30 ml/min/1.73 m2), altered consciousness (Glasgow coma score <15), and the need for immediate hemodynamic or ventilatory support. We performed individual matching: each patient under digoxin (case) was matched with two patients who did not receive digoxin (controls), first for sex, then for age (±2 years) and New York Heart Association (NYHA) functional class. Reviewers were limited to matching-criteria data only (i.e., blinded to 30-day outcomes) to eliminate potential sources of bias. Patients treated with digoxin and those who did not receive digoxin were otherwise managed clinically in the same way.
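The 1:2 individual matching described above can be sketched in a few lines (a minimal illustration only; the patient fields, toy data, and greedy first-eligible selection are assumptions, not the registry's actual procedure):

```python
def match_controls(case, controls, used, n_controls=2):
    """Match one digoxin-treated case to up to n_controls untreated
    patients on sex (exact), age (within +/-2 years), and NYHA class."""
    eligible = [c for c in controls
                if c["id"] not in used
                and c["sex"] == case["sex"]
                and abs(c["age"] - case["age"]) <= 2
                and c["nyha"] == case["nyha"]]
    picked = eligible[:n_controls]          # take the first eligible controls
    used.update(c["id"] for c in picked)    # each control is used only once
    return picked

# Toy example: one case and three candidate controls
case = {"id": 1, "sex": "F", "age": 67, "nyha": 3}
controls = [
    {"id": 10, "sex": "F", "age": 66, "nyha": 3},
    {"id": 11, "sex": "F", "age": 72, "nyha": 3},   # age gap > 2 years -> excluded
    {"id": 12, "sex": "F", "age": 68, "nyha": 3},
]
used = set()
matched = match_controls(case, controls, used)
print([c["id"] for c in matched])   # -> [10, 12]
```

Tracking the `used` set ensures no control is matched to two different cases, mirroring the one-to-two design.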
Outcome Measures
The main end points included death or rehospitalization within 30 days after hospital discharge, and 30-day combined death-rehospitalization outcomes. Secondary end-points included CO change from baseline and length of stay in the hospital during the index episode.
Statistical Analysis
Baseline characteristics were compared between groups to detect any differences between cases and controls: independent t-tests were performed for normally distributed variables, Mann-Whitney U tests for continuous non-normally distributed variables, and chi-square analyses for categorical variables. Logistic regression analysis was performed to identify the odds ratios (ORs) and 95% confidence intervals (95% CIs) for hospital readmission and/or death risk with respect to digoxin treatment. Data are reported as means ± standard deviations unless otherwise noted, and a p-value less than 0.05 on two-sided testing was considered statistically significant. Data were analyzed using SPSS version 18.
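An odds ratio with its 95% CI of the kind reported here can be computed from a 2x2 table using Woolf's logit method (a simplified stand-in for the full logistic regression the authors used; the counts below are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = treated with event, b = treated without event,
    c = untreated with event, d = untreated without event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts only
or_, lo, hi = odds_ratio_ci(12, 92, 58, 171)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR below 1 with an upper CI bound below 1, as in the reported 0.79 (0.71-0.89), indicates a statistically significant risk reduction.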
RESULTS
The initial study population comprised 1727 participants registered in the database. From this initial population, we excluded 956 patients with a non-cardiac cause of dyspnea and 211 with incomplete data. Of the remaining patients, 104 were included in the digoxin group and 229 in the control group. Digoxin was administered orally once a day, and almost all patients received the same dose (0.25 mg, one tablet) each day for at least three days; only a few patients received a lower (0.125 mg) or higher (0.5 mg) dose. Baseline characteristics of both groups are shown in Table 1. Demographic characteristics were comparable between the study groups, with no relevant differences in age, sex, or NYHA classification. The NYHA class collected reflected baseline medical status (within three months before the ongoing exacerbation). Cardiovascular medical history was comparable for both groups, and there were no significant differences between cases and controls regarding other underlying comorbidities. Fifty-two percent of the patients had ischaemic cardiomyopathy as the primary aetiology of their heart failure (47-57%) (Table 1). Principal baseline medication consisted of diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, and nitrates. Mean baseline vital signs were comparable between the two groups with respect to heart rate, respiratory rate, and blood pressure. NT-proBNP levels ranged from 1412 to 8615 pg/ml; 61% of the digoxin group and 59% of the control group had reduced LVEF (<45%) (p=0.77). At 72 hours after ED admission, there was a larger increase in CO (17.8% vs 14%; p=0.015) and LVEF (14.4% vs 3.5%; p=0.003) in the digoxin group compared to the control group (Figure 1); NT-proBNP levels decreased in the digoxin group (2%) and in the control group (1.2%), but the difference was not significant (p=0.06). Digoxin treatment was associated with a reduced length of hospital stay (10.1±7.2 days versus 6.6± days; p<0.01).
At 30-day follow-up, the digoxin group showed significantly lower all-cause (p=0.04) and heart failure (p=0.02) hospital readmission rates compared to the control group, as well as lower mortality (11.8% versus 6.7%; p=0.03) (Table 2). Digoxin treatment significantly decreased the odds of the combined event of mortality and hospital readmission [odds ratio, 0.79 (95% CI, 0.71-0.89)]. No major side effects were observed in relation to digoxin therapy.
DISCUSSION
Our results demonstrate that digoxin is associated with a lower risk of 30-day hospital readmission among ED patients with decompensated HF. Compared with the control group, LVEF and cardiac output increased and length of hospital stay decreased significantly in the digoxin-treated group. Most available studies analyzed the long-term effect of digoxin in patients with chronic heart failure; data on the effect of a short course of digoxin on early clinical outcome and related physiological parameters in acute heart failure are scarce. The concordance between physiological and clinical outcomes supports the validity of our results. Digoxin is one of the oldest drugs in cardiology practice, and a few decades ago it was prescribed to more than 60% of heart failure patients in the United States [14]. Digoxin is the only inotropic drug known to increase cardiac output and reduce pulmonary capillary pressure without increasing heart rate or lowering blood pressure, in contrast to other oral inotropes. However, despite evidence of its beneficial effects on hemodynamic, neuro-hormonal, and electrophysiological parameters, great concern regarding its safety profile has been raised, and the use of digoxin has declined significantly over the past two decades [15]. Indeed, in the 2016 ESC guidelines, the indication for digoxin was limited to patients with AF and a rapid ventricular rate [16]. This is understandable given the scarcity of randomized trials specifically aimed at testing digoxin safety in heart failure patients. The Digitalis Investigation Group (DIG) trial, the only large randomized trial of digoxin in heart failure, reported a significant reduction in heart failure hospitalizations [17]. Most of the identified studies against the use of digoxin had many potential sources of bias requiring careful assessment.
In fact, the digoxin safety concern comes from very heterogeneous, non-experimental observational studies carrying a high risk of misinterpretation [18-20]. A recent study concluded that prescription of digoxin is an indicator of disease severity rather than the cause of a worse prognosis, meaning that a significant prescription bias may arise because sicker patients, who have a higher mortality risk, receive additional treatment with digoxin [21]. Notably, in the DIG trial there was no evidence of an increased risk with digoxin treatment. Importantly, the DIG trial demonstrated that the beneficial effects of digoxin were mainly observed in patients with HFrEF and in those with serum digoxin concentrations ≤0.9 ng/ml. Digoxin efficacy may be attributed in part to its neurohormonal-inhibiting properties, especially at lower doses; it may also be related to its synergistic effects with beta-blockers, as the pro-arrhythmic effects of digoxin would be expected to be attenuated by β-blockers [22].
Our study has several limitations. First, as this is a retrospective analysis, our results describe only associations, not causality. Second, the study is limited by its small sample size. Third, as in all case-control studies, bias due to unmeasured confounders remains possible; propensity-score matching might have balanced our two groups better, although most confounding variables influencing outcome were well balanced between the two groups. Fourth, we had no data on post-discharge adherence to prescribed treatment, nor did we have information on serum digoxin concentration or the incidence of digoxin toxicity; such information would have been valuable in demonstrating a correlation between serum digoxin levels and clinical outcome in our patients. In addition, only 30% of our patients were receiving aldosterone antagonists, and none were receiving cardiac resynchronization therapy, which may limit the generalizability of our results.
CONCLUSIONS
Our findings provide additional data supporting an association between digoxin use and clinical benefit in HF patients with reduced LVEF. Digoxin may serve as an inexpensive tool for reducing short-term mortality and hospital readmissions, an important objective for health systems, especially in low-income countries.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00258.pdf https://biogenericpublishers.com/jbgsr-ms-id-00258-text/
0 notes
biogenericpublishers · 2 years ago
Text
Noise Pollution is One of the Main Health Impacts in Big Cities Today by Tamaz Patarkalashvili* in Open Access Journal of Biogeneric Science and Research
Tumblr media
ABSTRACT
Noise pollution is today one of the biggest health risks in big cities, along with air pollution. It must be admitted that noise pollution has lately been overlooked by scientists and city authorities. Noise pollution has adverse effects on all living organisms. Scientists have confirmed that noise stimulates the central nervous system, triggering the release of hormones that increase the risk of hypertension. Hypertension in turn is related to many other cardiovascular and cerebrovascular diseases, such as infarction and stroke. Nowadays this tendency is at last changing, and noise pollution is often considered not only as harmful as air pollution but sometimes even more so. European and North American countries have taken a number of measures to reduce noise levels in big cities. Examples of popular measures include replacing older paved roads with smoother asphalt, better management of traffic flows, reducing speed limits to 30 km/h, and using less noisy modes of transport, such as electric vehicles, cycling, and walking.
KEYWORDS: Noise; Pollution; Health; Traffic; Aviation; Vehicle; Electric Car; Cycling; Walking
INTRODUCTION
Noise pollution is a constantly growing problem in all big cities of the world, and many people may not be aware of its adverse impacts on their health. Noise pollution is a major problem both for human health and for the environment [1,2]. Long-term exposure to noise pollution can induce a variety of adverse health effects, including increased annoyance, sleep disturbance, negative effects on the cardiovascular and metabolic systems, and cognitive impairment in children. Millions of people in big cities suffer from chronic high annoyance and sleep disturbance, and it is estimated that schoolchildren suffer reading impairment as a result of aircraft noise. Although noise pollution is one of the major public health problems in most big cities of the world, there has been a tendency to underestimate it, with the focus placed mostly on air pollution [3].
World Health Organization (WHO) guidelines for community noise recommend less than 30 A-weighted decibels, dB(A), in bedrooms at night for good-quality sleep and less than 35 dB(A) in classrooms to allow good teaching and learning conditions. The WHO guidelines for night noise recommend less than 40 dB(A) annual average (Lnight) outside bedrooms to prevent adverse health effects from night noise.
According to a European Union (EU) publication:
about 40% of the population in EU countries is exposed to road traffic noise at levels exceeding 55 dB(A);
20% is exposed to levels exceeding 65 dB(A) during the daytime; and
more than 30% is exposed to levels exceeding 55 dB(A) at night.
Some groups of people are more vulnerable to noise. For example, children, who spend more time in bed than adults, are more exposed to night noise. Chronically ill and elderly people are more sensitive to disturbance, and shift workers are at increased risk because their sleep structure is under stress. Nighttime nuisance can lead to more visits to medical clinics and extra spending on sleeping pills, which affects family budgets and countries' health expenditure [4,5].
FACTS AND ANALYSIS
Adverse effect of noise is defined as a change in the morphology and physiology of an organism that results in impairment of functional capacity. This definition includes any temporary or long-term lowering of the physical, psychological or social functioning of humans or human organs. The health significance of noise pollution is given according to the specific effects: Noise-induced hearing impairment, Cardiovascular and physiological effects, Mental health effects, Sleep disturbance and Vulnerable groups.
Noise-Induced Hearing Impairment
The International Organization for Standardization standard ISO 1999 gives a method for calculating noise-induced hearing impairment in populations exposed to all types of noise (continuous, intermittent, impulsive) during working hours. Noise exposure is characterized by LAeq over 8 hours (LAeq,8h). In the standard, the relationships between LAeq,8h and noise-induced hearing impairment are given for frequencies of 500-6000 Hz and for exposure times of up to 40 years. These relationships show that noise-induced hearing impairment occurs predominantly in the high-frequency range of 3000-6000 Hz, the effect being largest at 4000 Hz [6,7].
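The LAeq metric used throughout this section is an energy average of A-weighted sound levels over the exposure period, which can be sketched as follows (the sample shift values are invented for illustration):

```python
import math

def laeq(levels_db):
    """Equivalent continuous sound level: the energy average of
    A-weighted levels sampled over equal time intervals,
    LAeq = 10 * log10(mean(10^(Li/10)))."""
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Eight hourly A-weighted levels over a working shift (hypothetical)
shift = [78, 80, 85, 83, 79, 88, 84, 81]
print(f"LAeq,8h = {laeq(shift):.1f} dB(A)")
```

Because the average is taken on the energy scale, short loud episodes dominate: an hour at 90 dB and an hour at 70 dB average to about 87 dB(A), not 80.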
Hearing impairment in young adults and children has been assessed by LAeq on a 24-hour basis [7-9], including exposure to pop music in discotheques and at rock concerts [8], pop music through headphones [10,11], and music played by brass bands and symphony orchestras [11,12]. There is also literature showing hearing impairment in people exposed to specific types of non-occupational noise, such as shooting, motorcycling, children's noisy toys, and fireworks [13,14].
In Europe, environmental noise causes a disease burden second in magnitude only to that from air pollution. At least 113 million people suffer from traffic-related noise above 55 dB Lden, which costs the EU about €57.1 billion a year. Additionally, 22 million Europeans are exposed to railway noise, 4 million to aircraft noise, and about 1 million to industrial noise. These exposures cause about 1.6 million healthy life years lost annually, about 12,000 premature deaths, and 48,000 cases of ischemic heart disease. About 22 million people suffer from chronic high annoyance and 6.5 million from sleep disturbance [15-17].
Cardiovascular and Physiological Effects
Laboratory studies of workers exposed to occupational noise and of people on noisy streets indicate that noise can have temporary as well as permanent impacts on physiological functions. Acute noise exposure activates the autonomic and hormonal systems, leading to temporary changes such as the hypertension and ischemic heart disease associated with long-term exposure to high sound pressure levels [7,11,18]. The magnitude and duration of the effects are determined in part by individual characteristics, lifestyle behaviors, and environmental conditions. Sounds also evoke reflex responses, particularly when they are unfamiliar and have a sudden onset. Most occupational and community noise studies have focused on the possibility that noise may be a risk factor for cardiovascular disease. Studies in occupational settings have shown that workers exposed to high levels of industrial noise for many years at their workplaces have increased blood pressure and risk of hypertension compared to control workers [19,20]. Adverse cardiovascular effects are associated with long-term exposure to LAeq,24h values in the range of 65-70 dB or more, for both air- and road-traffic noise.
Mental Health Effects
Environmental noise accelerates and intensifies the development of adverse mental health effects, with a variety of symptoms including anxiety, emotional stress, nervous complaints, nausea, headaches, mood changes, increased social conflict, and psychiatric disorders such as neurosis, psychosis, and hysteria [21-32]. Noise also adversely affects cognitive performance. Two types of memory deficit have been identified under experimental noise exposure: incidental memory and memory for materials that the observer was not explicitly instructed to focus on during the learning period. Schoolchildren in the vicinity of Los Angeles airport were found to be deficient in proofreading and in persistence with challenging puzzles [20]. Adverse effects on cognitive task performance have been documented both following exposure to aircraft noise and in workers exposed to occupational noise. In children, environmental noise likewise impairs a number of cognitive and motivational parameters [21-24].
Sleep Disturbance
Annoyance in populations exposed to environmental noise varies not only with the acoustical characteristics of the noise but also with many non-acoustical factors of a social, psychological, or economic nature [7,17]. These factors include fear associated with the noise source, the conviction that the noise could be reduced by third parties, individual noise sensitivity, and the degree to which an individual feels able to control the noise.
At night, environmental noise starting at Lnight levels below 40 dB can cause negative effects on sleep such as body movements, awakenings, and sleep disturbance, as well as effects on the cardiovascular system that become apparent above 55 dB [24-27]. This especially concerns vulnerable groups such as children, the chronically ill, and elderly people. All these impacts contribute to a range of health effects, including mortality. During the COVID-19 pandemic, European cities experienced a substantial reduction in noise pollution due to reduced road traffic.
The WHO recommends reducing road traffic noise to 53 dB during the daytime (Lden) and 45 dB during the night (Lnight). However, the Environmental Noise Directive (END) sets mandatory reporting of noise exposure at 55 dB Lden and 50 dB Lnight [26-28]. This means we do not yet have an accurate understanding of the exact number of people exposed to noise levels that are harmful as defined by the WHO [5,6].
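The gap between the WHO health-based limits and the END reporting thresholds can be made concrete with a small sketch (illustrative only; the dB values are those quoted above, and the dwelling in the example is hypothetical):

```python
WHO_LDEN, WHO_LNIGHT = 53, 45   # WHO recommended limits, dB
END_LDEN, END_LNIGHT = 55, 50   # END mandatory reporting thresholds, dB

def classify(lden, lnight):
    """Return (harmful per WHO limits, reportable per END thresholds)."""
    harmful_who = lden > WHO_LDEN or lnight > WHO_LNIGHT
    reported_end = lden >= END_LDEN or lnight >= END_LNIGHT
    return harmful_who, reported_end

# A dwelling at 54 dB Lden / 47 dB Lnight exceeds the WHO limits
# yet falls below the END reporting thresholds, so it goes uncounted.
print(classify(54, 47))   # -> (True, False)
```

Any exposure falling in the band between the two sets of thresholds is exactly the population the passage says remains unaccounted for.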
Vulnerable Groups
Vulnerable groups include people with decreased abilities: people with particular diseases and medical problems; blind people or those with hearing impairment; babies and small children; and elderly people. These people are less able to cope with the impairments caused by noise pollution and are at greater risk of harmful effects. People with impaired hearing are the most affected with respect to speech intelligibility, and from the age of 40, people begin to demonstrate difficulty understanding spoken messages, so a majority of this population can be assigned to the vulnerable group. Children are also included among those vulnerable to noise exposure [29], so monitoring should be organized at schools and kindergartens to protect them from noise effects. Specific regulations and recommendations should take into account the types of noise effects relevant to children, such as communication, recreation, listening to loud music through headphones, music festivals, and motorcycling.
CONCLUSIONS
Our cities have already witnessed a welcome period of unusual quiet during the confinement periods of the Covid-19 pandemic, but noise pollution is rising again, in some cases even above pre-crisis levels. It is clear that we cannot live without sound, and reducing noise pollution to zero is unrealistic. However, we must work to reduce noise to levels less harmful to the environment and human health. Example measures include installing road and rail noise barriers, optimizing aircraft movements around airports, and urban planning measures. The most effective actions, though, reduce noise at the source: reducing the number of vehicles, introducing quieter tires for road vehicles, and laying quieter road surfaces. Even so, noise pollution is unlikely to decrease significantly in the near future, since transport demand is expected to increase and air traffic noise is predicted to grow along with city populations. Effective responses include raising awareness and changing people's behavior toward less noisy modes of transport, such as electric vehicles, cycling, and walking. Zero-emission buses must be welcomed in big cities, as must zero-emission refuse collection trucks and municipal vans. The infrastructure required for safe cycling must be built in cities, together with an available public bike fleet. Motorcycles and scooters should be banned in big cities because they produce extremely loud noise that adversely impacts citizens. Municipalities and mayors of big cities should organize so-called quiet areas, such as parks and other green spaces, where people can go to escape city noise.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00257.pdf https://biogenericpublishers.com/jbgsr-ms-id-00257-text/
0 notes
biogenericpublishers · 2 years ago
Text
Effect of Qishan Formula Granules on Interventing Obesity Intestinal Microflora and Immune-Inflammatory by Wei Yan in Open Access Journal of Biogeneric Science and Research
Tumblr media
ABSTRACT
Objective: To investigate the effects of Qishan Formula Granules on simple obesity and the intestinal microflora-inflammatory immune pathway. Methods: Eighty patients with simple obesity in our hospital were randomly divided into two groups: a traditional Chinese medicine (TCM) group and a placebo group. The TCM group was treated with lifestyle intervention plus Qishan Formula Granules, while the placebo group received lifestyle intervention plus placebo. Therapeutic effect, biochemical indexes, clinical symptoms, the number and composition of intestinal bacteria, the proportion of Th17/Treg cells in serum, and inflammatory factors were measured before and after treatment. Results: After treatment, the total effective rate in patients with simple obesity was significantly higher in the TCM group than in the placebo group (P < 0.05). Compared with the placebo group after treatment, biochemical indexes and clinical symptoms improved significantly in the TCM group. Further tests showed that Qishan Formula Granules significantly improved intestinal bacterial abundance, species, and quantity in patients with simple obesity. The levels of IL-17, TNF-α, Th17/Treg, and LPS in the TCM group were significantly lower than in the placebo group (P < 0.05). Conclusion: Qishan Formula Granules can alleviate the clinical symptoms of simple obesity and improve treatment efficiency through the intestinal flora-inflammatory immune pathway.
KEYWORDS: Qishan formula granule; Obesity; Intestinal flora; Immune inflammatory; Th17/Treg
In recent years, with changes in lifestyle, the incidence of obesity has increased rapidly, and the prevalence of chronic metabolic disorders related to obesity and overweight, such as diabetes and cardiovascular and cerebrovascular diseases, has increased year by year [1]. Obesity can not only lead to diabetes and a high incidence of cardiovascular and cerebrovascular events but is also closely related to cancer, depression, asthma, apnea syndrome, infertility, osteoarthropathy, fatty liver, and many other diseases [2-5]. Obesity has therefore become a serious threat to people's health, and it is urgent to find reasonable and effective interventions. At present, the main weight-loss drugs include non-central drugs, central drugs, and hypoglycemic drugs, which suffer from problems such as a low effective response rate, large side effects, and weight rebound after discontinuation [6]. The combination of diet and exercise is often difficult to adhere to long term, and compliance is poor [7]. Finding effective drugs or methods to treat simple obesity has thus become an important research focus in recent years.
Intestinal flora is believed to play an important role in the regulation of immune inflammation and glucose and lipid metabolism [8,9]. Studies have shown that obese people exhibit chronic low-grade inflammation, which may promote the occurrence and development of metabolic disorders [10]. At the same time, it has been found that disordered intestinal flora and metabolites in obese patients, and the obvious imbalance in their proportions, can affect the formation and differentiation of immune cells such as Th17 and Treg cells, leading to chronic low-grade immune inflammation and obesity [11-13]. A number of studies have shown that traditional Chinese medicine has an important effect on intestinal flora; berberine, Gegenqinlian decoction, tonifying Chinese medicines, and others can correct bacterial dysbiosis to a certain degree [14,15]. Qishan Formula Granules, a modification of Gegenqinlian decoction, is the experiential prescription of a nationally renowned senior Chinese medicine practitioner. Our previous study showed that the Qishan formula reduces blood sugar, improves insulin resistance, and reduces body weight. However, the mechanism is not clear, and the simple obesity population has not been studied. Therefore, this study explores the efficacy of Qishan Formula Granules in the treatment of simple obesity and their effect on the intestinal flora-immune inflammatory pathway, in order to provide a reference for traditional Chinese medicine in the treatment of obesity.
MATERIALS AND METHODS
General Information
In this study, a randomized (random number method), double-blind (subjects, researchers, assessors and data analysts were unaware of treatment allocation), placebo-controlled, prospective design was used; patients with simple obesity were recruited through health check-ups, community screening and outpatient visits. All patients received a lifestyle intervention: a low-sugar diet of 200-350 g of staple food per day, with carbohydrate at 50%-65% of total calories; a low-fat diet, with fat intake within 50 g, about 30% of total calories; balanced protein at about 15% of total calories; encouragement of foods rich in dietary fiber and vitamins; total daily energy controlled within 100 kJ/kg; and moderate-intensity aerobic exercise (post-exercise heart rate of 170 minus age) at least 3-5 days a week, maintained for half a year. Those with diabetes were treated with gliclazide sustained-release tablets, and those with hypertension with amlodipine. The study included 80 patients who still met the criteria after a 1-month washout period. A random number table was generated using Excel, and the 80 eligible patients were randomized evenly into two groups: the traditional Chinese medicine group (40) and the placebo group (40). In the traditional Chinese medicine group, 14 patients were male and 26 female; ages ranged from 25 to 50 years (mean 38.74±10.23 years), and the mean disease course was 6.63±3.55 years. In the placebo group there were 16 men and 24 women; ages ranged from 25 to 50 years (mean 39.61±9.83 years), and the mean disease course was 6.17±3.82 years.
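The dietary and exercise targets above can be turned into per-patient numbers. A minimal sketch, reading the exercise target as a post-exercise heart rate of 170 minus age, and assuming standard energy densities of ~17 kJ/g for carbohydrate and protein and ~38 kJ/g for fat (these conversion factors are assumptions, not values given in the text):

```python
def lifestyle_targets(weight_kg: float, age_years: int, carb_fraction: float = 0.6):
    """Per-day targets under the study's lifestyle intervention rules."""
    energy_kj = 100 * weight_kg                # total energy capped at 100 kJ/kg
    carb_g = carb_fraction * energy_kj / 17    # carbohydrate at 50%-65% of energy
    fat_g = min(0.30 * energy_kj / 38, 50)     # fat ~30% of energy, within 50 g
    protein_g = 0.15 * energy_kj / 17          # protein ~15% of energy
    target_hr = 170 - age_years                # post-exercise heart rate target
    return energy_kj, carb_g, fat_g, protein_g, target_hr

# Illustrative: a 70 kg, 40-year-old patient
print(lifestyle_targets(70, 40))
```

For a 70 kg patient the carbohydrate figure (~247 g) falls inside the 200-350 g staple-food range stated above, which is a useful consistency check on the protocol.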
Inclusion and Exclusion Criteria
Inclusion Criteria
(1) Simple obesity (waist circumference ≥90 cm in men, ≥85 cm in women, and BMI ≥25 kg/m2) according to the 2011 edition of the Expert Consensus on the Prevention and Control of Adult Obesity in China; (2) age greater than 25 and less than 70 years; (3) TCM syndrome differentiation of obesity with damp-heat accumulation in the spleen, scored as follows: obese body (25 kg/m2 ≤ BMI < 28 kg/m2: +1; 28 kg/m2 ≤ BMI < 30 kg/m2: +2; BMI ≥ 30 kg/m2: +3); abdominal fullness (+1); poor appetite and fatigue (+1); heaviness of the head as if wrapped (+1); loose, unsatisfying stools (+1); yellow urine (+1); generalized damp-heat jaundice (+1); enlarged tongue (+2); yellow greasy coating (+2); slippery pulse (+2); (4) discontinuation of drugs affecting weight for at least 4 weeks; (5) signed informed consent.
Exclusion Criteria
(1) Weight gain due to drugs, endocrine disease or other illness; (2) severe hepatic or renal dysfunction or other severe primary disease; (3) acute cardiovascular or cerebrovascular event or myocardial infarction within the previous 6 months; (4) stress states, secondary elevation of blood glucose or secondary hypertension; (5) weight-loss surgery within the previous year; (6) severe dyslipidemia; (7) inability or unwillingness to cooperate (patients who cannot comply with dietary control or do not take drugs as prescribed); (8) mental illness or tumor; (9) women who are pregnant or breast-feeding, planning pregnancy, or not practicing contraception; (10) possible allergy to the study drugs; (11) patients with diabetes already receiving medication.
Treatment
(1) The traditional Chinese medicine group received the lifestyle intervention plus Qishan Formula Granules orally, one pack at a time, twice a day (composition: Pueraria root 15 g, Scutellaria baicalensis 10 g, Coptis chinensis 10 g, rhubarb 3 g, Gynostemma pentaphyllum 10 g, raw Astragalus 20 g, Huai yam 20 g, Atractylodes chinensis 15 g, Poria cocos 15 g, fried Fructus Aurantii 10 g, raw hawthorn 10 g, Chuanxiong 10 g; produced by our hospital's traditional Chinese medicine preparation room). (2) The placebo group received the lifestyle intervention plus an oral placebo, one pack at a time, twice daily (composition: starch, pigment and binder; produced by our hospital's traditional Chinese medicine preparation room).
Indicator Measurements
Fasting plasma glucose (FPG), blood lipids, blood pressure, waist-to-hip ratio, body mass index (BMI), body fat content, TCM symptom score and other biochemical indicators and clinical symptoms were measured every 4 weeks. HbA1c, fasting insulin (FINS), fecal intestinal flora, the proportion of Th17/Treg cells, serum IL-17 and TNF-α, liver and kidney function, blood routine, urine routine and electrocardiogram were measured at weeks 0 and 12.
Determination of Flora Size and Composition
Quantified intestinal excreta were diluted and inoculated onto BS medium (Bifidobacterium selective, anaerobic culture for 48 h), BBE medium (Bacteroides selective, anaerobic culture for 24 h), lactic acid bacteria selective medium (24 h), enterococcal agar (24 h), FS medium (Clostridium selective, 72 h), KF streptococcus agar (Streptococcus selective, anaerobic culture for 24 h) and eosin-methylene blue agar (24 h). After colony growth, the target bacteria were identified by colony morphology, Gram staining and biochemical reactions. The colonies on the various media were identified, the count of each bacterium was compared with reference values, and the B/E value was calculated to evaluate the number and composition of the intestinal flora in patients with simple obesity after oral Qishan Formula intervention.
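The B/E value mentioned above is conventionally the ratio of the log10 colony count of Bifidobacterium to that of Enterobacteriaceae, with a value above 1 taken to indicate intact colonization resistance. A minimal sketch of the calculation, with illustrative CFU counts rather than study data:

```python
import math

def b_e_value(bifido_cfu_per_g: float, entero_cfu_per_g: float) -> float:
    """B/E value: log10(Bifidobacterium count) / log10(Enterobacteriaceae count)."""
    return math.log10(bifido_cfu_per_g) / math.log10(entero_cfu_per_g)

# Illustrative counts (not study data): 1e9 vs 1e7 CFU/g
ratio = b_e_value(1e9, 1e7)
print(f"B/E = {ratio:.2f}")  # a value > 1 suggests Bifidobacterium dominance
```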
PCR-DGGE Analysis Intestinal Flora Composition
Fecal specimen collection and DNA extraction: approximately 1 g of feces from each patient with simple obesity was collected aseptically into a 2 mL EP tube, and fecal genomic DNA was extracted according to the instructions of the DNA extraction kit. PCR: the V3 region of bacterial 16S rDNA was amplified with universal primers under the following conditions: pre-denaturation at 94°C for 3 min; 36 cycles of denaturation at 94°C for 1 min, annealing at 55°C for 1 min and extension at 72°C for 1 min; final extension at 72°C for 10 min; hold at 4°C. PCR products were checked by 2% agarose gel electrophoresis and stored at -20°C. DGGE: the PCR products were separated on an 8% polyacrylamide gel; after the run the gel was stained with GelRed and imaged on a GS-800 grayscale scanner, and correlation analysis of the DGGE molecular fingerprints was performed with BioNumerics software.
Real-Time Quantitative PCR Analysis of Intestinal Microflora
Fecal specimen collection and DNA extraction: approximately 1 g of feces was collected aseptically into a 2 mL EP tube, and fecal genomic DNA was extracted according to the instructions of the DNA extraction kit. PCR primer design: primers for each bacterium were designed according to the 16S rDNA gene sequences of Bifidobacterium, Lactobacillus, Escherichia coli, Bacteroides, Clostridium and Streptococcus, and their specificity was checked by BLAST against the corresponding sequences in GenBank. Preparation of the standard curve: the PCR products of each bacterium in the control group were purified according to the instructions of the DNA purification kit, the absorbance (A value) and concentration of the purified products were measured, and the copy number per 1 μl of each standard was calculated and used to construct the standard curve.
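The conversion from measured DNA concentration to copy number for the standard curve is not spelled out above; the usual formula divides the DNA mass by the molar mass of the amplicon, taking roughly 660 g/mol per base pair of double-stranded DNA. A sketch with illustrative values (the 500 bp amplicon length and 10 ng/μl concentration are assumptions for the example, not figures from the study):

```python
AVOGADRO = 6.022e23          # molecules per mole
BP_MOLAR_MASS = 660.0        # g/mol per base pair of double-stranded DNA

def copies_per_ul(conc_ng_per_ul: float, amplicon_bp: int) -> float:
    """Convert a purified PCR product's concentration to DNA copies per microliter."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    return grams_per_ul * AVOGADRO / (amplicon_bp * BP_MOLAR_MASS)

# Illustrative: a 500 bp amplicon at 10 ng/ul
print(f"{copies_per_ul(10, 500):.3e} copies/ul")
```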
Detection of Biochemical Indexes
Blood glucose, blood lipids and other biochemical indicators were determined on an Olympus 2000 large automatic biochemical analyzer. Serum insulin and HbA1c were determined on our ADVIA Centaur® XP automated chemiluminescence immunoanalyzer. Serum LPS was detected by ELISA. Determination of intestinal flora and SCFAs: 2 g of each patient's fresh feces was frozen at -20°C in a collection tube (containing stabilizer) provided by the Institute of Microbiology of Zhejiang Province, which was commissioned to perform the testing. Body fat content was determined with the department's own body fat analyzer. HOMA-IR was calculated as HOMA-IR = FPG × FINS / 22.5, and HOMA-IS as HOMA-IS = 1/HOMA-IR = 22.5/(FPG × FINS).
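The HOMA formulas above can be computed directly, with FPG in mmol/L and FINS in μU/mL; the example values below are illustrative, not patient data:

```python
def homa_ir(fpg_mmol_l: float, fins_uu_ml: float) -> float:
    """HOMA insulin-resistance index: FPG x FINS / 22.5."""
    return fpg_mmol_l * fins_uu_ml / 22.5

def homa_is(fpg_mmol_l: float, fins_uu_ml: float) -> float:
    """HOMA insulin-sensitivity index: the reciprocal of HOMA-IR, 22.5 / (FPG x FINS)."""
    return 22.5 / (fpg_mmol_l * fins_uu_ml)

# Illustrative: FPG = 5.0 mmol/L, FINS = 10 uU/mL
print(homa_ir(5.0, 10.0))
```

By construction the two indexes are exact reciprocals, so their product is always 1.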
Changes of Th17/Treg Cells Before and After Intervention
Peripheral blood from both groups of patients was collected, allowed to stand for 1 h, and centrifuged at 2000 rpm and 4°C for 10 min; the supernatant was collected, aliquoted and stored at -20°C if not assayed immediately. Serum IL-17 and TNF-α were determined by double-antibody sandwich enzyme-linked immunosorbent assay (ELISA), strictly following the kit instructions. CD3-PE-Cy7 and CD4-PE antibodies (0.5 μg each) were added, mixed by vortexing, and incubated at room temperature for 30 min, then centrifuged at 300 g for 5 min. After washing with cold PBS, 1 mL of diluted fixation/permeabilization reagent was added, and after a 50-min reaction the proportions of Th17 and Treg cells were determined by flow cytometry.
Safety Evaluation and Adverse Reaction Management
If ALT rises during medication, the dose is adjusted or treatment interrupted according to the following principles: (1) if the ALT elevation is within 2 times the upper limit of normal, continue observation; (2) if ALT rises to 2-3 times normal, halve the dose and continue observation; if ALT continues to rise or remains between 80 and 120 U·L-1, interrupt treatment; (3) if ALT rises above 3 times normal, stop the drug. Once values return to normal after withdrawal, the drug may be resumed, with hepatoprotective treatment and follow-up strengthened. If leukopenia occurs during medication, the principles are: (1) if the white blood cell count is not lower than 3.0×109·L-1, continue medication and observe; (2) if it falls to between 2.0 and 3.0×109·L-1, halve the dose and observe; most patients recover during continued medication, but if the count remains below 3.0×109·L-1 on review, interrupt treatment; (3) if it falls below 2.0×109·L-1, interrupt treatment.
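These stepped safety rules reduce to simple threshold logic. A minimal sketch, assuming an ALT upper limit of normal of 40 U/L (an assumption, though it is consistent with the 80-120 U/L band corresponding to 2-3 times normal):

```python
def alt_action(alt_u_l: float, uln: float = 40.0) -> str:
    """Dose-adjustment rule for an ALT elevation, per the protocol above."""
    ratio = alt_u_l / uln
    if ratio <= 2:
        return "continue observation"
    if ratio <= 3:
        return "halve dose and observe"
    return "stop drug"

def wbc_action(wbc_10e9_l: float) -> str:
    """Dose-adjustment rule for leukopenia, per the protocol above."""
    if wbc_10e9_l >= 3.0:
        return "continue medication"
    if wbc_10e9_l >= 2.0:
        return "halve dose and observe"
    return "interrupt treatment"

print(alt_action(100))  # 2.5x normal: halve dose and observe
print(wbc_action(1.8))  # below 2.0: interrupt treatment
```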
CRITERIA FOR EVALUATION OF SYNDROME EFFICACY
Clinical recovery: TCM clinical symptoms and signs disappear or basically disappear, syndrome score reduction ≥90%, and weight loss ≥15%. Markedly effective: TCM clinical symptoms and signs are obviously improved, syndrome score reduction ≥70%, and weight loss ≥10%. Effective: TCM clinical symptoms and signs are improved, syndrome score reduction ≥30%, and weight loss ≥5%. Invalid: TCM clinical symptoms and signs are not significantly improved or are aggravated, syndrome score reduction <30%, or weight loss <5%. Total effective rate = (clinical recovery + markedly effective + effective) / total number × 100%.
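The four outcome categories above reduce to a cascade of threshold checks; a minimal sketch:

```python
def efficacy_category(score_reduction: float, weight_loss: float) -> str:
    """Classify an outcome from fractional syndrome-score reduction and
    fractional weight loss, per the criteria above."""
    if score_reduction >= 0.90 and weight_loss >= 0.15:
        return "clinical recovery"
    if score_reduction >= 0.70 and weight_loss >= 0.10:
        return "markedly effective"
    if score_reduction >= 0.30 and weight_loss >= 0.05:
        return "effective"
    return "invalid"

# Illustrative: 75% score reduction with 12% weight loss
print(efficacy_category(0.75, 0.12))  # markedly effective
```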
Statistical Analysis
The statistical software SPSS 17.0 was used for all comparisons. Measurement data are expressed as mean ± standard deviation; measurement data that were normally distributed and met the assumptions of the t-test were compared with the t-test, and count data were compared with the χ2 test. P < 0.05 was considered statistically significant.
Estimation of Sample Size
The sample size was estimated using the method for comparing two sample means in clinical experimental research, consulting the table of sample sizes required for a two-sample mean comparison. With two-sided α = 0.05 and power (1-β) = 0.9, μ0.05 = 1.96 and μ0.1 = 1.28; based on previous research experience, σ (the estimated common standard deviation of the two samples) was 16 and δ (the difference between the two sample means) was 3.0, giving n = 34 per group. Allowing for a 15% dropout rate, 40 cases in the experimental group and 40 in the control group were planned.
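The standard formula behind such sample-size tables is n per group = 2[(μα + μβ)σ/δ]², rounded up. A minimal sketch using the z-values quoted above; the σ and δ in the example call are illustrative placeholders, not the study's values:

```python
import math

def n_per_group(sigma: float, delta: float,
                z_alpha: float = 1.96, z_beta: float = 1.28) -> int:
    """Sample size per group for a two-sample mean comparison:
    n = 2 * ((z_alpha + z_beta) * sigma / delta)^2, rounded up."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Illustrative: sigma = 10, delta = 5
print(n_per_group(10, 5))
```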
RESULTS
Dropout rates and baselines were compared between the two groups. In the placebo group, 40 cases were enrolled, 2 dropped out and 38 were observed, a dropout rate of 5%. In the traditional Chinese medicine group, 40 cases were enrolled, 3 dropped out and 37 were observed, a dropout rate of 7.5%. The placebo group included 12 patients with diabetes and 15 with hypertension; the traditional Chinese medicine group included 13 with diabetes and 14 with hypertension, with no significant between-group difference in these proportions. Age, gender, baseline FPG, total cholesterol, triglycerides, LDL, BMI, body fat content, HbA1c, FINS, waist-to-hip ratio, TCM syndrome score and the counts of specific intestinal flora were comparable.
Comparison of Clinical Efficacy Between the Two Groups After Treatment
After treatment, the total effective rate in patients with simple obesity was significantly higher in the traditional Chinese medicine group than in the placebo group (81.1% vs 50.0%; P < 0.05). The clinical efficacy of the two groups is compared in Table 1. Further observation found no adverse reactions in either group, including abnormalities of liver function, renal function or white blood cell counts (Table 1).
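The group comparison above is a 2×2 χ² test on responder counts. A minimal sketch of the statistic, using the counts implied by the reported rates (30/37 ≈ 81.1% vs 19/38 = 50.0%; these counts are inferred, not stated in the text):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Responders/non-responders implied by the reported rates
stat = chi_square_2x2(30, 7, 19, 19)
print(f"chi2 = {stat:.2f}")  # well above the 3.84 cutoff for P < 0.05 at 1 df
```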
Comparison of Biochemical Indexes in Patients with Simple Obesity Before and After Treatment in the Two Groups
Compared with before treatment, the indexes of patients with simple obesity in the placebo group did not change significantly after treatment (P > 0.05). After treatment, FPG, total cholesterol, triglycerides, low-density lipoprotein, BMI, body fat, HbA1c, FINS, waist-to-hip ratio and TCM syndrome scores were significantly lower in the traditional Chinese medicine group than in the placebo group, while HDL was significantly higher (P < 0.05). The biochemical indexes of the two groups before and after treatment are compared in Table 2.
Comparison of Intestinal Flora in Patients with Simple Obesity Before and After Treatment
In the placebo group, the colony counts of Bifidobacterium, Bacteroides fragilis, Lactobacillus, Enterococcus and Escherichia coli did not differ significantly after treatment from those before treatment (P > 0.05). In the traditional Chinese medicine group, the colony counts of Bifidobacterium and Lactobacillus after treatment were significantly higher than those in the placebo group (P < 0.05). A similar result was obtained for the bacterial copy numbers in each stool sample detected by real-time fluorescence quantitative PCR, shown in Table 4.
Comparison of Inflammatory Indexes in Patients with Simple Obesity Before and After Treatment in the Two Groups
Compared with before treatment, the inflammatory indexes of patients with simple obesity in the placebo group did not change significantly after treatment (P > 0.05). After treatment, the levels of IL-17, TNF-α and Th17/Treg were significantly lower in the traditional Chinese medicine group than in the placebo group (P < 0.05); the inflammatory indexes of the two groups before and after treatment are compared in Table 5. According to the relationship between the post-treatment Th17/Treg level and the normal range after Qishan Formula Granules, patients were divided into a normal group (Th17/Treg within the normal range) and a high-level group (Th17/Treg above the normal range). Further analysis found that the total amount of intestinal flora after treatment in the normal group was significantly higher than in the high-level group (P < 0.05, Figure 1, Table 3).
DISCUSSION
In recent years, with rapid social development, people's lifestyles, diets and nutritional structure have changed greatly, so the incidence of obesity remains high, and the incidence of diabetes, cardio-cerebrovascular disease and other obesity-related diseases is increasing year by year. Several studies have shown that Qishan Formula Granules play an important role in reducing body weight. This study therefore examined the effect of Qishan Formula Granules on simple obesity and further explored the underlying mechanism.
This study found that the total effective rate in patients with simple obesity increased significantly after intervention with Qishan Formula Granules, and that the intervention improved clinical symptoms and biochemical indexes such as blood glucose and blood lipids; the application of Qishan Formula Granules is thus beneficial in treating simple obesity. Traditional Chinese medicine theory holds that obesity arises from a fatty diet and inactivity, dysfunction of the spleen in transportation, and accumulation of phlegm, dampness and fat, producing a condition characterized by obesity and fatigue whose most common pattern is damp-heat accumulation in the spleen [16,17]. Qishan Formula Granules are derived from Gegenqinlian Decoction and Six Gentlemen Decoction: raw Astragalus, Huai yam and Poria tonify qi and invigorate the spleen; Scutellaria baicalensis and Coptis chinensis clear damp-heat from the middle burner; rhubarb and Gynostemma pentaphyllum remove dampness; Atractylodes dries dampness aromatically; Fructus Aurantii and hawthorn move qi, promote digestion and resolve phlegm; and Pueraria and Chuanxiong clear heat, generate fluids, and move qi and blood. The formula thus tonifies qi and invigorates the spleen, clears heat, removes dampness and eliminates turbidity, restoring the spleen's transport function and the distribution of body fluids so that internal phlegm-turbidity and heat are cleared and accumulated fat is eliminated. Glycyrrhiza was removed from the original Gegenqinlian Decoction to avoid raising blood glucose and promoting water and sodium retention. The application of Qishan Formula Granules therefore significantly improves the total effective rate and significantly improves clinical symptoms, blood glucose, blood lipids and other biochemical indicators, although the underlying cellular and molecular mechanisms had not been elucidated (Table 5).
Several studies have indicated that differences in the composition of the intestinal flora are one of the most important causes of obesity, through mechanisms that mainly involve activating inflammatory responses, promoting energy absorption and regulating intestinal permeability [18,19]. Studies have also shown that obese patients often exhibit pathological changes such as inflammatory signaling pathway activation and immune cell infiltration [20]. Starting from the intestinal flora-inflammatory immune pathway should therefore help elucidate the mechanism by which Qishan Formula Granules improve simple obesity. This study examined the number and composition of the intestinal flora and found that after treatment with Qishan Formula Granules, the colony counts of lactic acid bacteria, Bifidobacterium and Bacteroides in patients with simple obesity were significantly higher than in the placebo group, showing that Qishan Formula significantly affects the intestinal flora of these patients. He Xuyun and colleagues found that Astragalus polysaccharide, the main active component of Astragalus membranaceus, significantly inhibits the development of obesity in mice and significantly restores disordered intestinal flora [21]. Several studies have likewise indicated that Radix Puerariae, Scutellaria, Coptis, Huai yam, Ligusticum chuanxiong, Poria cocos and Astragalus membranaceus can affect the composition and richness of the intestinal flora [22-25], further supporting an effect of Qishan Formula Granules on the intestinal flora. The intestinal flora in turn plays an important regulatory role in the balance of immune cells: Fang Qian et al. found a linear positive correlation between the Bifidobacterium/Escherichia coli ratio and Treg/Th17 in children with asthmatic bronchitis [26].
Numerous studies have also found that berberine regulates the intestinal flora and the Th17/Treg balance in rats, and disruption of the balance between pro-inflammatory Th17 cells and inhibitory Treg cells is a key factor in many immune and metabolic diseases [27-29]. The effect of Qishan Formula Granules on inflammatory cells and inflammatory factors was therefore examined further. The results showed that IL-17, TNF-α and Th17/Treg levels were significantly reduced in patients with simple obesity after intervention with Qishan Formula Granules, indicating that the granules affect the inflammatory response in these patients. Further exploration of the relationship between Th17/Treg levels and the intestinal flora found that the total amount of intestinal flora in patients with Th17/Treg within the normal range was significantly higher than in patients with Th17/Treg above normal, indicating an association between the intestinal flora and the Treg/Th17 ratio in patients with simple obesity. In summary, Qishan Formula Granules can improve the symptoms of obesity by increasing the richness and diversity of the intestinal flora of patients with simple obesity and inhibiting the inflammatory response.
In summary, this study found that Qishan Formula Granules can alleviate the clinical symptoms of simple obesity and improve treatment efficacy through the intestinal flora-inflammatory immune pathway. By influencing the composition and richness of the intestinal flora, Qishan Formula Granules can regulate the proportions of Th17 and Treg cells and the secretion of inflammatory factors, thereby reshaping body composition and improving biochemical indexes, and so achieving the goal of treating simple obesity. This study still has limitations, however, and further work is needed to identify the formula's principal active components and to clarify how the intestinal flora affects the inflammatory response.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00256.pdf https://biogenericpublishers.com/jbgsr-ms-id-00256-text/
biogenericpublishers · 2 years ago
Text
Complex Kinesiological Conundrum: Could Microzyman Machinations/Inhibitions Explain the Relative Tardiness of Initial Infantile Human Locomotion? Seun Ayoade* in Open Access Journal of Biogeneric Science and Research
ABSTRACT
“The onset of walking is a fundamental milestone in motor development of humans and other mammals, yet little is known about what factors determine its timing. Hoofed animals start walking within hours after birth, rodents and small carnivores require days or weeks, and nonhuman primates take months and humans approximately a year to achieve this locomotor skill”.
Introduction
We, mankind, are the tardiest living things in terms of the age at which we start walking. This is highly embarrassing. It is embarrassing to evolutionists, who declare man the most biologically advanced and evolved species. It is equally embarrassing to creationists, who insist that man was made in the image of God. If man is the most advanced and evolved animal, why do our babies take so long to learn to walk? Why are we, the "peak of God's creation," carried around by our mothers for a year while zebras, goats and horses are proudly walking and cavorting just hours after delivery? Creationists have a ready excuse: the fall of man and his expulsion from the Garden of Eden caused man to become genetically degraded [1]. After all, they argue, the first humans, Adam and Eve, walked and talked the very day they were created. Evolutionists, on the other hand, put forth other arguments for the very embarrassing ambulatory limitations of Homo sapiens. I hereby refute these arguments. Refuting the gestation argument: this argument states that humans are pregnant for 9 months, unlike other animals that are pregnant for shorter periods. However, baby elephants walk hours after birth, and the gestation period in elephants is 18 to 22 months! Refuting the life span relativity argument: this argument states that because horses and dogs have shorter life spans than we humans, their apparent early walking is not really that early [2]. I refute this argument in the table below by showing at what age human babies would walk if we had the life span of cats, dogs, etc. (Table 1).
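The arithmetic behind such a life-span table is simple proportional scaling. A sketch with illustrative figures (a dog walking at roughly 2 weeks with a roughly 13-year life span, against a roughly 79-year human life span; these numbers are assumptions for the example, not values from the article's table):

```python
def scaled_walking_age_weeks(animal_walk_weeks: float,
                             animal_lifespan_years: float,
                             human_lifespan_years: float = 79.0) -> float:
    """Walking age a human would have if walking onset scaled with life span."""
    return animal_walk_weeks * human_lifespan_years / animal_lifespan_years

# Illustrative: dog walks at ~2 weeks, lives ~13 years
print(f"{scaled_walking_age_weeks(2, 13):.1f} weeks")  # far earlier than the ~52 weeks observed in humans
```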
Refuting the Brain Development Argument
This argument states that all animals start walking when their brains reach a particular stage of development [3]. Then why do humans reach the stage so late if we are the most evolved animal?
Refuting the Bipedal Argument
This argument states that walking on two legs involves much more balance and coordination than walking on all fours and so should take longer to develop. If this argument were true, human babies would start crawling hours after birth, yet human babies don't crawl until 4-7 months! Also, studies by Francesco Lacquaniti at the University of Rome Tor Vergata, Italy, have shown that despite Homo sapiens' unique gait, the motor patterns controlling walking in other animals are nearly identical to those in man!
Intelligence Argument
This argument claims that since humans are more intelligent than other animals, we start walking later because we have so many other things to do with our minds apart from walking [4-7]. However, ravens are very intelligent birds, yet raven chicks walk and fly at one month old. Monkeys are intelligent, yet start walking at 6 weeks!
MY HYPOTHESIS AND PROPOSAL
The key to cracking this mystery will be to do a comparative study of the cellular dust [8-10] of various animals. This is not likely to happen any time soon however as the mainstream scientific community continues to deny the existence of the microzymas [11].
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00254.pdf https://biogenericpublishers.com/jbgsr-ms-id-00254-text/
biogenericpublishers · 2 years ago
Text
Learning Difficulties and Reading Comprehension in the First Grades of Primary School by Theofilidis Antonis* in  Open Access Journal of Biogeneric Science and Research
ABSTRACT
This paper is a study on the concept of learning disabilities and reading comprehension. Specifically, it studies the learning difficulties and the reading ability in terms of the school performance of the students of the first grades of primary school.
Aim: In our work we try to analyze and present the following topics:
The definition of the term "Learning Disabilities", their etiology and the correlation of learning difficulties and reading.
What learning difficulties we face in reading and writing, and how they affect the school context and students' performance.
How to diagnose learning disabilities in reading ability and interventions that need to be implemented in the classroom to reduce them.
Method: We followed the most up-to-date literature on the subject
Conclusions: Learning difficulties can also cause emotional problems in children as they feel that they are lagging behind compared to the rest of the class. Our goal is to include children with learning disabilities in the classroom by adapting the lesson to the children and not the children in it. Our main concern should be the valid and timely diagnosis of difficulties and effective intervention to address them.
Keywords: Learning difficulties, reading comprehension, primary school
LEARNING DIFFICULTIES - DEFINITION AND ETIOLOGY
Learning disabilities are a generalized expression of some of the individual difficulties encountered by students. More specifically, the concept of learning disabilities refers to a variety of heterogeneous disorders, which result in difficulty in learning, speaking, writing, reading, information processing, mathematical computation, attention retention and in the coordination of movements. Every disorder that is part of the learning disability is differentiated in terms of the intensity of its manifestation, the nature and the symptoms of the difficulties, as well as the consequences that they have. These disorders can cause problems throughout the life of the individual [1]. An important point in the study of learning difficulties concerns the impossibility of having common symptoms of expression of these difficulties, a fact that leads to their delayed recognition, which usually occurs during school age [2].
The goal of any educational system is students' success in their academic performance, as well as their acceptance by the school environment. Many children with learning difficulties, however, do not achieve the expected school performance. The learning difficulties that can occur in each individual child vary, which makes the teacher's role difficult, since he or she is called to deal with each case separately [3].
The origin of learning difficulties seems to lie mainly in dysfunction of the individual's central nervous system [5]; learning disabilities thus arise through the pathogenesis of the individual himself. However, apart from the neurological factors that seem to play a major role in the development of learning disabilities, the great heterogeneity of their symptoms leads many researchers to conclude that their etiology is very likely multifactorial, with epigenetic factors also playing a role. More specifically, researchers should not neglect environmental, cultural and emotional factors, which seem to play a significant role in the occurrence of learning disabilities. Such factors may include an inappropriate school environment, a difficult family environment, depression, anxiety, the child's personality, psychological neglect, etc. Nor should we overlook the cognitive factors that emerge through low performance in almost all learning activities. In addition, there seems to be a differentiation of learning difficulties in relation to the sex of the child: in contrast to girls, boys show a higher rate of learning difficulties, especially in behavior and language learning [4]. Therefore, as learning is a complex, multifactorial process, all the complex factors that may affect it should be considered. However, the literature has not yet clarified whether the above are factors that cause learning disabilities or simply predisposing or risk factors [6].
Papadakou, Palili, Gini, (2014) studied some epidemiological factors that seem to be directly related to the occurrence or not of learning difficulties. The research process showed that children diagnosed with learning disabilities had at least one first-degree relative with learning disabilities. The researchers also found that learning disabilities were directly linked to sleep disorders, attention deficit hyperactivity disorder (ADHD), and a variety of other emotional and social problems. Such problems may be related to anxiety, depression, adaptive disorders and social interaction, etc. The researchers emphasize that the above factors could be considered as prognostic and gain important function in the design of targeted early interventions, with the aim of better academic and social development [7].
Finally, the correlation of learning difficulties with the various emotional problems that children may manifest is considered important. Although no precise explanation has been given, children with learning disabilities seem to tend to develop fewer positive and more negative emotions, which reduce their willpower and prevent them from making the necessary effort in the school context (Gugumi, 2015). Behavioral problems can be characterized as internal or external. Problems such as stress, melancholy, depression, obsessive-compulsive disorder, dysthymia and social phobia, as well as disorders such as bulimia, nightmares, anorexia, shyness and isolation, are considered internal, while problems such as aggression, destructiveness, negativity, rudeness, attention deficit hyperactivity disorder, adjustment and conduct disorders, hostility and theft are considered external [8] (Bornstein, Hahn & Suwalsky, 2013). Children's intrapersonal or interpersonal adjustment is directly affected by these emotional and behavioral problems, which may stem from the low self-esteem caused by the learning difficulties students face (Koliadis, 2010).
LEARNING DISABILITIES AND READING
Most children with Learning Disabilities have problems in the cognitive process of reading and understanding written text. Reading is a complex cognitive task involving the processing and analysis of graphemes, phonemes and semantic information of the written language. It is closely related to a variety of other cognitive functions of the child, which must be activated for full reading comprehension, such as the degree of phonological awareness, the capacity of short-term memory, perception, concentration, attention, language and thinking, as well as sensory and motor skills such as vision [9] (Jordão, Kida, de Aquino, Costa, & de Avila, 2019). More specifically, reading refers to the process in which the student decodes the written symbols of the language and converts them into speech. The graphic processing of these written symbols can convey phonemic, phonological and semantic information to the receiver (Porpodas, 2002).
In order for the reading process to take place correctly, it is necessary not only to decode the printed symbols, but also to understand their conceptual content. Reading is thus an important process of processing and extracting information, deeply connected to and dependent on decoding and comprehension (Porpodas, 2002; Tsesmeli, 2012).
We understand, then, that decoding and comprehension are the two cognitive functions that play the most important role in the reading process. More specifically, decoding is the ability to recognize written symbols and automatically convert them into a phonological representation. An important role in correct decoding is played by the state of long-term memory, access to it and retrieval of any information necessary for correct letter-phoneme matching [19] (Tzivinikou, 2015; Kim, Bryant, Bryant, & Park, 2017) [11]. Comprehension, the second, equally important cognitive function of reading, presupposes recognition of the semantic content of words, which can come from knowledge of the meaning of words and understanding of their grammatical form and syntactic structure (Kokkinaki, 2014; Westwood, 2016). For correct understanding of a text, cognitive strategies should be used correctly, and the words of the text should be recognized and combined with prior knowledge [12] (Westwood, 2016; Tzivinikou, 2015) [13]. According to Kaldenberg, Watt, & Therrien (2015), reading comprehension is directly related to the reader's prior knowledge: for the information obtained from a text to be generalized and understood, it must be processed and connected to the knowledge the individual already possesses. Lack of this knowledge leads to an inability to use the metacognitive strategies needed for reading (Kaldenberg et al., 2015) [14].
Reading, then, can be said to be the product of the two factors presented above, decoding and comprehension. As a result, a malfunction in even one of these two factors can lead to so-called Reading Difficulties. Difficulty in reading comprehension can manifest itself at all levels of the reading process, from the simple learning of individual graphemes to the reading, comprehension and retention of the acquired textual information. The reading difficulties that children present are mainly based on neurological abnormalities and can be combined with delayed speech and with language problems in general (Kokkinaki, 2014).
LEARNING DIFFICULTIES AND READING IN PRIMARY SCHOOL
In terms of reading comprehension and the school performance of primary school students, we must keep in mind the basic cognitive development of the reading process. A student is expected to have successfully mastered decoding by the second grade of elementary school. By the third grade, a child should not only be able to easily decode the written text, but also understand the meaning of what they are reading. With this in mind, we can speak of learning difficulties related to reading comprehension only if the child has received education appropriate to the grade in which he or she is. Students with learning disabilities typically deviate from the rest of the class by about one to two years.
Learning disabilities seem to slow down children's performance at school. The motivation and enthusiasm that each student has for learning does not seem to exist in students with learning difficulties, resulting in a low academic level [15] (Lama, 2019). We understand, then, that early diagnosis and treatment of learning difficulties in the first grades of primary school is vital. The main problem lies in the field of decoding, since the reader's difficulty in decoding words hinders entry into the process of comprehension (Kokkinaki, 2014) [16].
LEARNING DIFFICULTIES AND WRITING SPEECH PROBLEMS IN PRIMARY SCHOOL
The role of writing in school is very important since, apart from being a means of communication, it is also one of the basic skills that will accompany the child throughout the school years and later in life. Necessary prerequisites for the production of written speech are the linguistic and metalinguistic, as well as the cognitive and metacognitive, skills of the individual. An important role in the production of written speech is also played by the individual's existing knowledge and experiences, motivations, feelings and goals (Panteliadou, 2000; Vasarmidou & Spantidakis, 2015). Students with learning disabilities have difficulty using the metacognitive strategies that should be employed to produce written communication. These strategies would support the student in planning, in production and, at the end, in checking and evaluating the result, allowing the necessary corrections to be made (Panteliadou, 2000; Vasarmidou, 2015). However, in addition to cognitive and metacognitive skills, difficulties may also arise in the student's mechanistic skills. These skills include handwriting, spelling, vocabulary development and the use of punctuation, accentuation and uppercase and lowercase letters. Difficulties in some or all of these skills create problems in writing [17] (Vasarmidou, 2015; Panteliadou, 2000). Features of written speech difficulty can also include the reversal or confusion of letters, omissions or additions of letters, illegible letters and permutations. Identifying more serious issues related to speech and reading in combination with writing is much more complicated before the first grade of elementary school, since difficulty in organizing speech is common in preschool children (Tzouriadou, 2008) [18].
DIAGNOSIS AND TEACHING INTERVENTIONS IN CHILDREN WITH LEARNING DISABILITIES
There are many students with learning difficulties who are able to keep up with the class curriculum without particular problems. In most cases, however, the difficulties are quite intense, and the curriculum content needs to be adapted so that these students can follow it [20] (Tzouriadou, 2011).
Learning Disabilities are a common problem for many students, but their specific nature enables us to carry out effective intervention programs. Intervention must be timely, in order to address each difficulty at its onset and to avoid creating negative feelings regarding children's self-esteem, self-image and confidence in their abilities and school performance. Proper intervention stems from a valid assessment of students' dysfunctions and weaknesses and is a multifactorial process, requiring adequate knowledge of the child's weaknesses, strengths and personality, cooperation with parents, study of the child's social, family and cultural environment, and much other important information (Porpodas, 2002). Assessment of cognitive skills should focus on phonological awareness, short-term memory, decoding and, finally, comprehension of the text read (Kokkinaki, 2014).
Therefore, a necessary condition for an effective didactic intervention is a correct diagnosis. For reading ability specifically, starting with a well-targeted assessment based on the difficulties we detect in the child, we can take a specialized approach to the teaching of reading. This assessment must take as its guide the level of reading ability the student already possesses. Correct evaluation in kindergarten and in the first grades of elementary school leads to timely intervention and prevention of the student's difficulties [19] (Tzivinikou, 2015).
Today, we are given the opportunity, with the use of appropriate tools, to understand the learning difficulties that a child faces from an early age. Some of the most common screening tests that are mainly related to reading ability are:
The Predictive Assessment of Reading (PAR). It is considered one of the most reliable and valid tests for kindergarten children, through which we have the ability to predict the reading ability of children up to high school.
The Texas Primary Reading Inventory (K-2). It focuses on children from pre-school to the third grade and has the ability to recognize and evaluate the developmental stages of their reading [21].
The Dynamic Indicators of Basic Early Literacy Skills (DIBELS). It is a test that deals with the processes of phonemic awareness, the alphabetic principle and phonological awareness, the ease and accuracy of reading a text, the development of vocabulary and the process of comprehension.
Finally, the AIMS-Web test, which enables the use of RtI programs and multilevel teaching in schools. It covers children from kindergarten to high school and provides a basis for detailed curricula in mathematics and reading [21] (Tzivinikou, 2015).
So, after completing our diagnosis and evaluating the capabilities and needs of the student, we turn to the right teaching intervention. In order to eliminate the differences between the students, we focus on adapting the curriculum to the needs of the student and not the other way around. Thus, the goals we have set in a classroom do not change according to the abilities of each student, but the way they are approached and their degree of difficulty differ (Skidmore 2004). The ultimate goal is to develop learning strategies and use them within the classroom. Strategies such as group research or peer-to-peer teaching can be particularly useful, not only for students with difficulties but also for all students in the class (Tzouriadou, 2011). By learning to use cognitive and metacognitive strategies, students have the opportunity to process and use the information they will receive, to think and perform a task, and to evaluate their performance in it [22] (Luke 2006).
Students with reading difficulties can improve their reading skills through various educational approaches applied during teaching. Indicatively, we can refer to direct teaching, the formation of small groups that enhance discussion and support, vocal thinking, etc. (Tzivinikou, 2015). Also, strategies that have to do with reading are:
The Reading Analysis, Merge and Decoding strategy. It refers to students with learning difficulties aged 7 to 12 years and has to do mainly with the connection of sounds with voices.
Auditory Discrimination in Depth. It emphasizes the posture of the mouth: students learn the feeling that each sound gives at the moment of its pronunciation. They thus analyze words and recognize sounds according to the placement of the tongue and mouth.
Analysis for Decoding Only. A strategy that is implemented in order to teach students to analyze letter patterns in small words that they often come across. For example, the word "beyond" combined with the words "day, wedding ring, good morning".
The Read-By-Ratio Approach. This strategy is based on phonemes aimed at word recognition. Through the same spelling patterns of words, students are taught how to analyze and decode unknown words (Tzivinikou, 2015).
CONCLUSIONS
In recent years, there has been a rapid increase in learning disabilities. The percentage of children diagnosed with learning disabilities, which can cause serious developmental problems inside and outside the school context, is increasing. Children diagnosed with learning disabilities appear to have serious difficulty adapting to the classroom and thus lag behind in academic performance. In the past these children were perceived as "lazy", "bad students" or "stupid", but we now know that this perception is wrong: we are not talking about bad or lazy students, but about students who, due to a disorder of their nervous system, do not have the same capabilities as the rest. Learning difficulties can also cause emotional problems in children, as they feel they are lagging behind the rest of the class. Our goal is to include children with learning disabilities in the classroom by adapting the lesson to the children and not the children to it. Our main concern should be the valid and timely diagnosis of difficulties and effective intervention to address them.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00255.pdf https://biogenericpublishers.com/jbgsr-ms-id-00255-text/
biogenericpublishers · 2 years ago
Marine Drugs as a Valuable Source of Natural Biomarkers Used in the Treatment of Alzheimer’s Disease by UMA NATH U* in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Alzheimer’s disease (AD) is a multifactorial neurodegenerative disorder. Currently approved drugs may only ameliorate symptoms in a restricted number of patients and for a restricted period of time. There is currently a translational research challenge in identifying new effective drugs and their respective therapeutic targets in AD and other neurodegenerative disorders. In this review, selected examples of marine-derived compounds in neurodegeneration, specifically in the AD field, are reported. Emphasis is placed on the compounds and their possible relevant biological activities. The proposed drug development paradigm and current hypotheses should be investigated carefully in future AD therapy directions.
KEYWORDS: Marine drugs; Alzheimer’s disease; Mechanisms of activity
Introduction
At present, 46.8 million persons in the world are suffering from dementia, and this number is expected to increase to 74.7 million in 2030 and 131.5 million in 2050. Alzheimer’s disease (AD) is the main cause of dementia in the elderly. AD is a progressive, continuous and incurable brain disorder leading to increasingly severe disability, including memory loss (amnesia), minimal to no communication (aphasia), the inability to perform activities of daily living (ADL) (apraxia), and impairment of sensory input (development of agnosias). In brief, AD is a multifactorial neurodegenerative disorder that affects cognition (memory, thinking, and language abilities), quality of life and self-sufficiency in the elderly [2]. AD is strictly related to aging; indeed, the majority of cases (≥ 90%) are initially diagnosed among persons ≥ 65 years of age (late-onset AD, LOAD). In particular, genes involved in the production of the amyloid β (Aβ) peptides, such as amyloid precursor protein (APP), Presenilin 1 (PSEN1) and Presenilin 2 (PSEN2), may account for as much as 5%–10% of the incidence of early-onset AD (EOAD).
BRYOSTATIN
Bryostatin 1 is a natural product derived from the marine invertebrate Bugula neritina. It has potent and broad antitumor activity. Bryostatin 1 activates protein kinase C (PKC) family members, with nanomolar potency for the PKCα and PKCε isotypes.
In the central nervous system, bryostatin 1 activation of PKC boosts synthesis and secretion of the neurotrophic factor BDNF, a synaptic growth factor linked to learning and memory. The compound also activates nonamyloidogenic, α-secretase processing of amyloid precursor protein.
Preclinical work on bryostatin in nervous system diseases has come mainly from the Alkon lab. In their studies, intraperitoneal administration activated brain PKCε and prevented synaptic loss, Aβ accumulation and memory decline in Alzheimer’s disease transgenic mice. The drug preserved synapses and improved memory in aged rats and in rodent models of stroke and Fragile X syndrome. In a different lab, bryostatin given by mouth improved memory and learning in an AD model. In a mouse model of multiple sclerosis, bryostatin promoted anti-inflammatory immune responses and improved neurologic deficits.
MACROALGAE
Acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) are important enzymes involved in the regulation of acetylcholine (ACh) in the synaptic cleft of neurons, supporting cognitive function. Loss or rapid degradation of acetylcholine leads to cholinergic dysfunction and, ultimately, to synaptic and memory impairment. Hence, cholinesterase inhibitors have been developed to alleviate the cholinergic deficit by restoring ACh levels and improving cognitive function. Seaweed-derived biologically active compounds have been reported to exhibit inhibitory effects on enzymes associated with Alzheimer’s disease. Studies revealed that aqueous-ethanol extracts rich in phlorotannins, phenolic acids and flavonoids from Ecklonia maxima, Gelidium pristoides, Gracilaria gracilis and Ulva lactuca exhibit acetylcholinesterase and butyrylcholinesterase inhibitory activities. Furthermore, sulfated polysaccharides obtained from Ulva rigida, as well as from the aforementioned algal species, also showed potent inhibitory effects on BChE and AChE in vitro. Purified fractions of Gelidiella acerosa showed AChE and BChE inhibitory activity, with phytol identified as the most effective constituent of the fraction. In the same study, molecular docking analysis revealed that phytol binds tightly to an arginine residue at the active site of the enzyme, thereby changing its conformation and exerting its inhibitory effect. AChE inhibitory activity has also been reported for Codium duthieae, Amphiroa beauvoisii, Gelidium foliaceum, Laurencia complanata and Rhodomelopsis africana, whereas Hypnea musciformis and Ochtodes secundiramea extracts showed weak inhibitory activity (less than 30% inhibition) on AChE. Jung et al. also reported AChE and BChE inhibitory effects of methanol extracts of Ecklonia cava, Ecklonia kurome and Myelophycus simplex. A glycoprotein isolated from Undaria pinnatifida showed dose-responsive inhibitory effects on butyrylcholinesterase and acetylcholinesterase activities.
MEDITERRANEAN RED SEAWEED HALOPITHYS INCURVA
The close relationship between the amyloid aggregation process and the onset of amyloidosis constantly encourages scientific research in the identification of new natural compounds capable of suppressing the formation of toxic amyloid aggregates. For the first time, our findings demonstrated the in vitro anti-amyloidogenic role of the H. incurva, whose metabolic composition and bioactivity were strongly influenced by seasonality. This work focused on the bioactivity of H. incurva phytocomplex to evaluate the synergistic action of its various constituents, while the structure and functionality of its secondary metabolites will be the subject of further studies.
FASCAPLYSIN
Fascaplysin, a bis-indole alkaloid, was isolated from the marine sponge Fascaplysinopsis Bergquist sp. It is a specific kinase inhibitor of CDK4.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00253.pdf https://biogenericpublishers.com/jbgsr-ms-id-00253-text/
biogenericpublishers · 3 years ago
Forest Biodiversity Degradation: Assessment of Deforestation in Ohaji Egbema Forest Reserve, Imo State, Nigeria Using GIS Approach by Egbuche Christian Toochi in Open Access Journal of Biogeneric Science and Research
ABSTRACT
This research is focused on a spatial analysis of deforestation in a reserved forest over a period of time, using a GIS approach, in Ohaji Egbema Local Government Area, Imo State, Nigeria. It aimed to assess and analyze deforestation in the Ohaji Egbema forest reserve and to examine the possible effects of deforestation on the forest environment. The assessment concentrated on when and where forestlands changed within the reserve over the period 1984 to 2040 (forecast). The key objective was to assess the impact of land use and land cover changes on forest cover over the past 36 years, with sub-objectives to map the different land covers in the Ohaji Egbema forest reserve, to assess land cover changes in the forest reserve susceptible to long-term degradation from 1984 to 2020, to evaluate forest loss in the area over the past 36 years, and to predict the state of the land cover (forest) for the next 20 years (to 2040). Primary data (200 ground truth points) were systematically collected from four different LULC classes in the study area using a geographical positioning system (GPS), while secondary data (satellite Landsat imageries of 1984, 2002 and 2020) of the study area were acquired. The imageries were processed, enhanced and classified into four LULC classes using supervised classification in the Idrisi and ArcGIS software. Ground truth points were utilized to assess the accuracy of the classifications. The data collected were analyzed in tables and figures and represented with bar and pie charts. Results showed that forest land, built-up, grassland and water body were the four LULC classes in the study area. Kappa coefficient values of 91%, 85% and 92% for 1984, 2002 and 2020 respectively show the accuracy of the classifications. Classifying the land uses into built-up and forest lands revealed that the built-up lands rose constantly while the forest lands kept dropping.
The built-up lands increased by 49.30% between 1984 and 2002, 50.00% between 2002 and 2020, and 28.40% between 2020 and 2040, at the expense of the forest portion of the area, which fell by 33.88% between 1984 and 2002, 46.45% between 2002 and 2020, and 49.22% between 2020 and 2040. Increases in population, per capita income and land use activities, and by extension urban expansion, were found to be the major factors causing deforestation in the forest reserve. It is likely that in the near future the remaining forest lands will be gradually wiped out and, consequently, the environmental crisis will be aggravated. Based on the findings of the study, there is a need to urgently limit and control the high rate of deforestation going on in the Ohaji Egbema forest reserve and to embark on tree replanting campaigns without delay. It is recommended that higher-quality satellite imagery offering up to 4 m resolution be used and that a forest relic analysis be conducted.
KEYWORDS: Biodiversity; Forest degradation; GIS; Forest Reserve; LULC; Deforestation and Satellite Imagery
Introduction
Deforestation constitutes one of the most serious threats to forest biodiversity and poses long-term environmental and development challenges at both the regional and global levels. According to [1] and [2], the degradation of the forest ecosystem has obvious ecological effects on the immediate environment and forested areas. Deforestation can result in erosion, which in turn may lead to desertification. The economic and human consequences of deforestation include the loss of potential wood used as fuel wood for cooking and heating, among others. The transformation of forested lands by human actions represents one of the great forces in global environmental change and is considered one of the great drivers of biodiversity loss. Forests are cleared, degraded and fragmented by timber harvest, conversion to agriculture, road construction, human-caused fire, and a myriad of other forms of degradation. According to [5], deforestation refers to the removal of trees from a forested site and the conversion of the land to another use, most often agriculture. There is growing concern over the shrinking area of forests in recent times [7]. The livelihoods of over two hundred million forest dwellers and poor settlers depend directly on food, fibre, fodder, fuel and other resources taken from the forest or produced on recently cleared forest soils. Furthermore, deforestation has become an issue of global environmental concern, in particular because of the value of forests in biodiversity conservation and in limiting the greenhouse effect [8]. Globally, deforestation of this kind has been described as the major problem facing the forest ecosystem. The extent of deforestation in any particular location or region can be viewed in terms of its economic, ecological and human consequences, as well as the scramble for land.
Forest degradation may in many ways be irreversible because, owing to its extensive nature, the impact of activities altering forest condition may not be immediately apparent; as a result, these impacts are largely ignored by those who cause them. Forest is often perceived as a stock resource, always and freely available for conversion to other uses, without considering the consequences for the productive services and environmental roles of the forest. As environmental degradation and its consequences become a global issue, the world is faced with the danger that renewable forest resources may be exhausted and that man stands the risk of destroying his environment if all the impacts of deforestation are allowed to go unchecked. It therefore becomes important to evaluate the level of deforestation and degradation in the Ohaji Egbema forest reserve using a GIS application. The effect of deforestation and degradation of the only forest reserve in South East Nigeria has recently become a serious problem; the main drivers identified in the area are the quest for fuel wood, grazing and agricultural use. One of the effects of deforestation is global warming, since trees use carbon dioxide during photosynthesis: deforestation leads to an increase of carbon dioxide in the environment, which traps heat in the atmosphere. It therefore became an objective to assess the impact of land use and land cover changes on forest cover over the past 36 years, with further interest in mapping the different land covers in the Ohaji Egbema forest reserve, assessing land cover changes from 1984 to 2020 in intervals of about 20 years, evaluating forest loss in the area over the past 36 years, and predicting the state of the land cover (forest) for the next 20 years (to 2040). Deforestation and degradation of the forest pose a serious problem, especially in this era of global climate change.
According to the United Nations Food and Agriculture Organization [12], deforestation can be defined as the permanent destruction of forests in order to make the land available for other uses. Deforestation is said to be taking place when forest is cut down on a massive scale without a proportionate effort at replanting. Deforestation is also the conversion of forest to an alternative permanent non-forested land use such as agriculture, grazing or urban development [5]. Deforestation is primarily a concern for the developing countries of the tropics [6], as it is shrinking the area of tropical forests [3], causing loss of biodiversity and enhancing the greenhouse effect [8]. Forest degradation occurs when the ecosystem functions of the forest are degraded but the area remains forested rather than cleared [9]. The available literature shows that forest deforestation and degradation are caused by expansion of farming land, logging and fuel wood collection, overgrazing, fire outbreaks, release of greenhouse gases, and urbanization/industrialization, as well as the provision of infrastructure. Moreover, agents of deforestation in agricultural terms include slash-and-burn farmers, commercial farmers, ranchers, loggers, firewood collectors, etc. The Center for Biodiversity and Conservation (CBC, 1998) established remote sensing and geographic information system (RS/GIS) facilities as technologies that help identify potential survey sites, analyze deforestation rates in focal study areas, incorporate spatial and non-spatial databases, and create persuasive visual aids to enhance reports and proposals. Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times [11]. It is an important process in monitoring and managing natural resources and urban development because it provides quantitative analysis of the spatial distribution of the population of interest.
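As a minimal sketch of this kind of quantitative change detection, per-class pixel counts from two classified rasters of the same area can be compared directly. The function name and the tiny 2×2 rasters below are illustrative, not the study's actual data:

```python
import numpy as np

def lulc_change(map_t1, map_t2, classes):
    """Per-class pixel counts at two dates and the percentage change.

    map_t1, map_t2: integer-coded classified rasters of equal shape.
    classes: iterable of class codes to tabulate.
    Returns {class: (count_t1, count_t2, pct_change)}.
    """
    out = {}
    for c in classes:
        n1 = int((map_t1 == c).sum())
        n2 = int((map_t2 == c).sum())
        # Percentage change relative to the first date; undefined if absent.
        pct = float('nan') if n1 == 0 else (n2 - n1) / n1 * 100
        out[c] = (n1, n2, pct)
    return out

# Toy example: class 1 (forest) shrinks, class 2 (built-up) grows.
t1 = np.array([[1, 1], [2, 2]])
t2 = np.array([[1, 2], [2, 2]])
change = lulc_change(t1, t2, [1, 2])
```

Multiplying the pixel counts by the pixel area (e.g. 30 m × 30 m for Landsat) converts these tallies into hectares per class.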
Study Area
Ohaji Egbema lies in the southwestern part of Imo State and shares common boundaries with Owerri to the east, Oguta to the north, and Ogba/Egbema/Ndoni in Rivers State to the southwest. The 2006 census put the population of the study area at over 182,500 inhabitants, but recently, owing to industrialization and urbanization, Ohaji/Egbema has witnessed a great deal of population influx. The study area lies within latitudes 5°11′N and 5°35′N and longitudes 6°37′E and 6°57′E. It covers an area of about 890 km2.
The study area is largely drained by the Otamiri River and other tributaries of the Imo River. It belongs to one major physiographic region, the undulating lowland plain, which bears a relationship with its geology. The lowland areas are largely underlain by the younger and loosely consolidated Benin Formation [12]. The vegetation and climate of the study area are delineated into two distinct seasons, both of which are warm: the dry and the rainy season.
Climate and Vegetation
The dry season occurs between November and March, while the rainy season occurs between April and October. The high temperatures, humidity and precipitation of the area favour quick plant growth and hence the vegetation cover of the area, which is characterized by trees and shrubs of the rainforest belt of Nigeria.
Geology and Soil
The study area is located in the Eastern Niger Delta sedimentary basin, characterized by the three lithostratigraphic units of the Niger Delta. These units are the Akata, Agbada and Benin Formations, in order of decreasing age [13]. The overall thickness of the Tertiary sediments is about 10,000 meters.
Method of Data Collection
Data are based on field observation and on monitoring the real situation; they are collected as facts or evidence that may be processed to give them meaning and turn them into information, in line with [14] (Heywood, 1988). A Geographical Positioning System (GPS) was used to collect fifty (50) coordinate points for each land use land cover class, totaling 200 points for the four major land use and land cover classes identified in the study area. Landsat imageries of one season (paths 188, 189 and row 56) were acquired from the United States Geological Survey (USGS) in time series: 1984 Thematic Mapper (TM), 2002 Enhanced Thematic Mapper (ETM) and 2020 Operational Land Imager (OLI), as shown in Table 3.
Data Analysis and Data Processing
The acquired Landsat imageries were pre-processed for geometric corrections and for stripe and cloud removal. Image enhancement was carried out on the acquired imageries, employing bands 4, 3, 2 for Landsat TM and ETM and bands 5, 4, 3 for Landsat OLI/TIRS, to produce false colour composites using the Idrisi and ArcGIS software. In the resulting false colour composite, built-up areas appear in cyan blue; vegetation in shades of red, differentiating dense forest from grass or farm lands; water bodies from dark blue to black; and bare lands from white to brown [15]. This was necessary to enhance visualization and interpretability of the scenes for classification. The study area was clipped out using an administrative map of Nigeria containing Imo State and Local Government shape files in ArcMap (Tables 1 and 2).
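The band-stacking step behind a false colour composite can be sketched as follows. The helper name, the simple min-max contrast stretch, and the toy arrays are assumptions for illustration; a real workflow would read the bands from the Landsat files (e.g. bands 4, 3, 2 for TM/ETM or 5, 4, 3 for OLI, as used above):

```python
import numpy as np

def false_colour_composite(bands, order):
    """Stack three selected bands (dict: band number -> 2-D array)
    into an RGB false colour composite, linearly stretched per
    channel to the 0-255 display range."""
    rgb = np.dstack([np.asarray(bands[b], dtype=float) for b in order])
    for c in range(3):
        ch = rgb[..., c]
        lo, hi = ch.min(), ch.max()
        # Min-max stretch; a flat band maps to zero to avoid division by zero.
        rgb[..., c] = (ch - lo) / (hi - lo) * 255 if hi > lo else 0
    return rgb.astype(np.uint8)

# Toy 2x2 "bands"; for TM/ETM the near-infrared band 4 is mapped to red,
# which is why vegetation appears in shades of red in the composite.
bands = {4: np.arange(4).reshape(2, 2), 3: np.ones((2, 2)), 2: np.zeros((2, 2))}
fcc = false_colour_composite(bands, (4, 3, 2))
```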
Land Use Land Cover Classification
The false colour composite images were subjected to supervised classification based on ground-truth information. The maximum likelihood classifier was adopted to assign areas of the Landsat images to thematic classes as determined by their spectral signatures, according to [16]. The maximum likelihood algorithm considers the average characteristics of the spectral signature of each category and the covariance among all categories, thus allowing for precise discrimination of categories. The land cover was classified into four major land use land cover classes: built-up, forest cover, grass cover and water body. Forest vegetation covers the areas dominated by trees and shrubs; grass land covers the areas dominated by grasses, including farm lands and gardens; water body covers the areas occupied by streams, rivers and inland waters; while built-up areas are those occupied by built structures, including residential, commercial, schools, churches and tarred roads, together with land surface features devoid of any type of vegetation cover or structures, including rocks. Four applications (ArcGIS 10.5, IDRISI, Excel and Microsoft Word) were used in this study.
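The maximum likelihood decision rule described above can be sketched in a few lines. This is an illustrative re-implementation with invented class statistics, not the ArcGIS/IDRISI classifier: each pixel is assigned to the class whose multivariate Gaussian (mean vector and covariance matrix estimated from its training samples) gives the highest log-likelihood.

```python
import numpy as np

def maximum_likelihood_classify(pixels, class_stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels: (n, bands) array of spectral values.
    class_stats: dict mapping class name -> (mean vector, covariance matrix),
    estimated from the training samples of each land cover class.
    """
    names = list(class_stats)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, cov = class_stats[name]
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        # log of the multivariate normal density, constant term dropped
        mahalanobis = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores[:, j] = -0.5 * (np.log(np.linalg.det(cov)) + mahalanobis)
    return [names[i] for i in scores.argmax(axis=1)]
```

In practice the per-class means and covariances come from the GPS-referenced training polygons; here any two-band statistics will do to exercise the rule.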
Accuracy Assessment of The Classification
The aim of accuracy assessment is to quantitatively assess how effectively the pixels were sampled into the correct land cover classes. A confusion matrix was used for accuracy assessment of the classification procedure, with the training samples and the ground truth points as reference. This approach has also been adopted effectively in similar studies by [17]; [18]. The accuracy assessments of the classified maps for 1984, 2002 and 2020 were evaluated using the base error matrix. The base error matrix evaluates accuracy using parameters such as agreement/accuracy, overall accuracy, commission error, omission error and the Kappa coefficient. The agreement/accuracy is the probability (%) that the classifier has labeled an image pixel into the ground truth class, i.e. the probability of a reference pixel being correctly classified. The overall accuracy specifies the total of correctly classified pixels, and is determined by dividing the number of correctly classified pixels by the total number of pixels in the error matrix. Commission error represents pixels that belong to another class but are labeled as belonging to the class, while omission error represents pixels that belong to the truth class but fail to be classified into the proper class. Finally, the Kappa coefficient (Khat) measures the agreement between the classification map and the reference data, as expressed below:
Kappa coefficient (Khat) = (Observed accuracy − Chance agreement) / (1 − Chance agreement)
It is stated that Kappa values of more than 0.80 (80%) indicate good classification performance. Kappa values between 0.40 and 0.80 indicate moderate classification performance and Kappa values of less than 0.40 (40%) indicate poor classification performance [19].
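The overall accuracy and Kappa formula above can be computed directly from an error (confusion) matrix. A minimal sketch with a made-up two-class matrix:

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy and Kappa coefficient (Khat) from a confusion matrix.

    Rows hold classified labels, columns hold ground-truth labels."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total          # overall accuracy
    # chance agreement: sum over classes of (row total * column total) / total^2
    chance = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
    return observed, (observed - chance) / (1 - chance)
```

For the matrix `[[45, 5], [5, 45]]` this yields an overall accuracy of 0.90 and a kappa of 0.80, which by the thresholds of [19] sits at the boundary between moderate and good performance.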
Change Detection Analysis
Spatio-temporal changes in the four classified land use classes over the past 36 years were analyzed by comparing the area coverage of the classified maps. Change detection was carried out for each of the classes to ascertain the changes over time in terms of area and percentage coverage, according to [18]. This was done by computing the area coverage of each feature class in each epoch from the classified images in the IDRISI and ArcMap software, following the expressions below:
Area (ha) = (Cell size × Count) / 10,000
Percentage cover (%) = (Area / Total area) × 100
where the cell size (the area of one pixel in m²) and the count were obtained from the properties of the raster attributes.
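Applied in code, the two expressions look like this. A sketch in which "cell size" is taken to be the area of one pixel in m², so that dividing by 10,000 yields hectares; for 30 m Landsat pixels that is 900 m²:

```python
def area_hectares(cell_size_m2, count):
    """Area (ha) = (Cell size x Count) / 10,000.

    cell_size_m2: area of one raster cell in square metres (900 for Landsat).
    count: number of cells in the class, from the raster attribute table."""
    return cell_size_m2 * count / 10_000

def percentage_cover(area, total_area):
    """Percentage cover (%) = (Area / Total area) x 100."""
    return area / total_area * 100
```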
The extent of land use land cover over change, land use encroachment as well as gains and losses experienced within the study period were analyzed and presented in maps and charts.
Prediction Analysis
The classified land use imageries were subjected to land change modelling in the IDRISI software, using the Cellular Automata and Markov Chain (CA-Markov) algorithm for prediction. The land cover scenario under prevailing conditions for the year 2040 was then modelled (Table 6).
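The Markov chain half of the CA-Markov procedure can be illustrated with a toy projection. The transition probabilities below are invented for illustration; IDRISI derives them from the cross-classification of two dates and adds a cellular automata step for spatial allocation, which this sketch omits:

```python
import numpy as np

def project_markov(proportions, transition, steps=1):
    """Project land cover class proportions forward in time.

    transition[i][j] = probability that a cell of class i becomes class j
    in one period (each row must sum to 1)."""
    p = np.asarray(proportions, dtype=float)
    t = np.asarray(transition, dtype=float)
    for _ in range(steps):
        p = p @ t
    return p

# Toy example: forest vs built-up over one period.
# 10% of forest converts to built-up; built-up never reverts.
projected = project_markov([0.8, 0.2], [[0.9, 0.1], [0.0, 1.0]])
```

Running the projection for more `steps` extrapolates further epochs under the same transition regime.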
RESULTS AND DISCUSSION: LAND USE LAND COVER CHANGES AND CLASSIFICATION FROM 1984-2040
The results are presented starting with the land use and land cover classification for the years 1984, 2002, 2020 and 2040, shown in Figures 4.1, 4.2, 4.3 and 4.4 below. Dark green represents forest vegetation, light green represents grass land, blue represents water bodies, while orange represents built-up areas. Figure 4.1, the LULC classification of 1984, shows that the study area is largely covered with deep dark green (forest vegetation), with scattered patches of light green and orange (grass land and built-up), while blue (water bodies) covers a small part of the study area. This depicts that the study area was predominantly forest vegetation in 1984 (Table 7).
In Figure 4.2, the dark green colour is reduced, there is a slight increase in blue, a slight reduction in light green, and a marked increase in orange, mainly around Obofia, Awarra, Amafor, Ohoba, Umukani, the Ohaji Egbema forest reserve and the Adapalm axis. This indicates that by the year 2002 considerable forest land had been deforested and converted to residential, commercial, agricultural and other land uses, which could be attributed to infrastructural development, urbanization, industrialization and human population increase in the area — the drivers of deforestation (Tables 4 and 5).
In Figure 4.3, deforestation continued. More orange and light green colours were observed around the Umukani and Umuakpu axis of the map, while the dark green colour gradually decreased and the blue colour was rarely seen. This indicates that as the years passed there was more built-up area and grass land, while the forest land was gradually degraded and converted to built-up and agricultural uses.
In Figure 4.4 below, the map is mostly covered with orange, with scattered patches of light green, the dark green colour largely deforested, and the blue colour hardly seen. This shows that the forest land cover has been steadily decreasing, while built-up areas and grass lands have been steadily increasing.
Area coverage, percentage cover and change detection land use and land cover 1984 - 2040
The area coverage and percentage cover of the different land use classes are presented below. Forest land covered about 723.26 km² (81.31%), making it the major land cover of the study area in 1984; more than half of the study area was under forest cover. Built-up areas covered about 128.40 km² (14.43%) in 1984. Areas covered by grass land (sparse vegetation, farmland or grasses) were minimal in 1984, at 32.57 km² (3.66%), while water bodies covered 5.34 km² (0.6%).
Figure 5: The land use land cover classification of 2040
Figures 4.7 and 4.8 below show the area and percentage coverage for the year 2002. Over time the forest land decreased from 723.26 km² (81.31%) in 1984 to 699.68 km² (78.68%) in 2002, a loss of forest cover, while the built-up area increased from 128.40 km² to 162.46 km² (18.26%). Areas covered by grass land gradually decreased to 21.41 km² (2.41%) in 2002, while the water bodies increased slightly from 5.34 km² to 5.80 km² (0.65%).
In the year 2020, forest land covered about 589.73 km² (66.30%), a decrease between 2002 and 2020, while the built-up area increased to 280.98 km² (31.59%). Grass land covered about 15.53 km² (1.75%) and water bodies about 3.27 km² (0.37%). This shows that the expanding built-up areas were forest lands and water bodies in previous years.
For 2040 the forest cover was predicted to be about 497.67 km² (55.95%), while the built-up area was predicted to increase to 334.11 km² (37.56%). Grass land was predicted to increase to 55.92 km² (6.29%); this grass land was formerly forest land, and the change occurred mainly at Ohoba, Awarra, Umuakpu, Umukani and the Ohaji Egbema forest reserve — the only forest reserve in the south east of Nigeria — which has been deforested and used for agricultural purposes. Water bodies were predicted to cover about 1.82 km² (0.20%). This implies that forest land has been deforested and degraded to other land uses in the study area within the study period. All this is shown in Figures 4.11 and 4.12.
Change Detection Observed Between (1984-2040)
The change detected in the forest land from 1984 to 2002 was approximately 23.41 km², with a percentage change of 33.88%, a decrease. For the built-up area the change detected was -34.06 km², with a percentage change of 49.30%, an increase. For grass land the change observed was 11.16 km², with a percentage change of 16.15%, and for the water bodies -0.46 km², with a percentage change of 0.67%, an increase.
The change detection shown in Table 2 below indicates that between 2002 and 2020 the forest cover decreased sharply, with a change of about 110.12 km² and a percentage change of 46.45%, while the built-up area changed by about -118.53 km², a percentage change of 50.00%, an increase. The change observed in grass land was 5.88 km² (2.48%), a decrease, and for the water bodies 2.53 km² (1.07%), also a decrease. Overall, the built-up area increased strongly relative to the other land classes.
The 2020 to 2040 change detection table shows further growth of the built-up area, with a change of about -53.12 km² and a percentage change of 28.40%, an increase, while the forest cover decreased, with a change of 92.06 km² and a percentage change of 49.22%. Grass land was predicted to increase, with a change of about -40.39 km² (21.60%), while the water bodies were predicted to decrease, with a change of about 1.45 km² (0.78%), implying that water body area has been lost to other land uses.
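The per-class change figures in the tables above follow a simple pattern: earlier area minus later area, so a positive value marks a loss and a negative value a gain. A sketch with invented areas (not the study's figures), where each class's percentage is its share of the total absolute change:

```python
def change_detection(area_t1, area_t2):
    """Per-class change between two epochs (km^2).

    Positive change = loss, negative change = gain, matching the sign
    convention of the change detection tables above. Percentages are
    each class's share of the total absolute change."""
    change = {c: area_t1[c] - area_t2[c] for c in area_t1}
    total = sum(abs(v) for v in change.values())
    percent = {c: abs(v) / total * 100 for c, v in change.items()}
    return change, percent
```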
Land Use Land Cover Classification Accuracy
The results of the accuracy assessment for 1984, 2002 and 2020 are presented in Tables 4 to 6 below. The overall accuracy for the 1984 classification was 94%, with an overall kappa of 0.91; for the 2002 classification, the overall accuracy was 89% with a kappa of 0.85; and for the 2020 classification, the overall accuracy was 94% with a kappa of 0.92.
Discussion
Findings from the study showed that the land cover of the study area was heavily deforested and degraded over the study period (1984, 2002 and 2020), a trend that will continue if control measures are not taken. Forest land decreased while built-up areas and grass lands increased. This is in line with the findings of [16], [23] and [30]. These drastic changes in the original land cover of the study area can be linked to human population growth, unsustainable human activities, unsustainable environmental management practices and weak environmental policies. As the human population increases, more land is needed for settlements and other commercial activities, gradually leading to rapid industrialization, infrastructural development and urbanization. Population increase also raises the level of anthropogenic activities such as deforestation, intensive farming and sand mining. In other words, the large extent of forest land in 1984 can be linked to low population, low productivity and less socio-economic activity. The forest lands have been drastically converted to built-up and other land uses in the study area, without consideration of the many environmental services that forests provide. Hence loss of biodiversity, land degradation, noise pollution, air pollution and climate change can be traced to changes in the land cover. Over the past two centuries the impact of human activities on the land has grown enormously, altering entire landscapes and ultimately affecting the earth's nutrient and hydrological cycles as well as the climate. The classification accuracies for the three years represent strong agreement: according to [31], values between 0.4 and 0.8 represent moderate agreement, values below 0.4 poor agreement, and values above 0.81 strong agreement.
CONCLUSION AND RECOMMENDATION
In this study, four land use land cover classes were identified and tracked through time. The results show a rapid change in the vegetation cover of the study area between 1984 and 2040. Within this period, 225.59 km² of forest land and 3.52 km² of water body were lost and converted to other land uses, while built-up areas and grass land expanded at the expense of the forest and water body. If this pattern of degradation continues, it is likely that in the near future the remaining forest land will be wiped out and the environmental crisis aggravated. The assessment of the level of deforestation in Ohaji Egbema using GIS is thus a vital tool for sustainable forest management and environmental planning of the area, especially at the only forest reserve in the South East of Nigeria.
Based on the findings, there is a need to urgently limit and control the high rate of deforestation in Ohaji Egbema and to embark on tree planting campaigns without delay. It is also recommended that an Environmental Impact Statement (EIS) be carried out. Furthermore, policy makers should ensure that existing and future policies on environmental and forest degradation are fully implemented. There is a need for an awareness programme for all stakeholders on the issues at hand and on the adoption of sustainable use of natural resources, sustainable living habits and minimal impact on the environment. Finally, since [2] conducted species relic studies in this forest reserve, further research should be carried out using higher-resolution satellite imagery (up to 4 m resolution) as well as forest relic analysis.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00252.pdf https://biogenericpublishers.com/jbgsr-ms-id-00252-text/
biogenericpublishers · 3 years ago
Wilms’s Tumor Gene Mutations: Loss of Tumor Suppresser Function: A Bioinformatics Study by Uzma Jabbar in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Introduction: Mutations in the Wilms's Tumor (WT1) gene product have been detected in both sporadic and familial cases, suggesting that alterations in WT1 may disrupt its normal function. The study aims to identify the key amino acid residues in the WT1 protein by mutating these residues to other amino acids.
Material and Methods: A 3D modeling approach using MODELLER 9.0 was utilized to build a homology model of the WT1 protein. The quality of the WT1 model was verified by predicting 10 models of WT1 and selecting the best one. The stereochemistry of the model was evaluated with PROCHECK. Mutational studies were done with WHAT IF. Five human WT1 mutations were modeled: Lys371→Ala371, Ser415→Ala415, Cys416→Ala416, His434→Asp434 and His434→Arg434.
Result: Mutations were analysed based on the active site of the WT1 protein and its role in DNA binding. No significant change was observed when Lys371 was mutated to Ala371 or Ser415 to Ala415. A significant change was observed when Cys416 was mutated to Ala416: in mutant Ala416, loss of coordination with the zinc metal ion was predicted. In the His434→Asp434 mutant, there was a loss of coordination of the metal ion (Zn203) with Asp434. In the His434→Arg434 mutant, there was likewise a loss of Zn203 coordination with Arg434. His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with a DNA base.
Conclusion: It is concluded that mutation of amino acid residue Cys416→Ala416, His434→Asp434 and His434→Arg434 may lose the proto-oncogenic function of WT1.
Keywords: WT1 protein, MODELLER 9.0, Mutation, Active site residues
Introduction
WT1 is a protein which in humans is encoded by the WT1 gene on chromosome 11p13. WT1 is responsible for normal kidney development. Mutations in this gene are reported to cause tumors and developmental abnormalities in the genitourinary system. Conversion of the proto-oncogenic function of WT1 to an oncogenic one has also been documented as a cause of various hematological malignancies. (***)
The multifaceted protein of the WT1 gene has transcription factor activity [1]. It regulates the expression of the insulin-like growth factor and transforming growth factor systems, implicated in breast tumorigenesis [2]. A main function of WT1 is to regulate transcription, controlling the expression of genes involved in proliferation and differentiation [3]. In a wide range of tumors, WT1 has been shown to be a predisposing factor for cancer; it has therefore become a hot target in the search for inhibitors that could safely be used in cancer treatment. It can induce apoptosis in embryonic cancer cells, presumably through the withdrawal of a required growth factor survival signal [4]. WT1 is involved in normal tissue homeostasis and acts as an oncogene in solid tumors such as breast cancer [5]. Increased expression of WT1 is related to poor prognosis in breast cancer [6]. A number of hypotheses have been proposed for the relationship of WT1 with tumorigenesis. According to one hypothesis, elevated levels of WT1 in tumors may be related to increased proliferation, because WT1 normally has a role in apoptosis [7,8]. Another study proposed that WT1 can alter many genes of the BCL2 family [9,10] and also has a role in regulating the Fas death-signaling pathway [11]. Furthermore, it is suggested that WT1 can encourage cell proliferation by up-regulating the protein cyclin D1 [12].
A group of workers hypothesized that the WT1 expression observed in the vasculature of some tumour types [13] may be related to angiogenesis, especially in endometrial cancer [14]. Another hypothesis is based on the fact that WT1 is a main regulator of the epithelial/mesenchymal balance and may have a role in the epithelial-to-mesenchymal transition of tumor cells [3]. Expression of WT1 is higher in estrogen receptor (ER) positive than in ER negative tumors; it is therefore possible that WT1 not only interacts with ER alpha but may orchestrate its expression [15]. A study of triple negative breast cancers [7] has shown that high WT1 levels are associated with poor survival, due to increased angiogenesis [16,17], altered proliferation/apoptosis [10,11], and induction of the cancer epithelial-to-mesenchymal transition [4]. In breast tumors, WT1 is mainly associated with a mesenchymal phenotype and increased levels of CYP3A4 [18]. A mutation in the zinc finger region of the WT1 protein has been identified in patients that abolished its DNA binding activity [19]. Another study observed that mutations in the WT1 gene product occur in both sporadic and familial cases, suggesting that alterations in WT1 may disrupt its normal function [20]. Bioinformatics approaches are being utilized to resolve such biological problems, starting with the prediction of 3D structures. To this end, the study was designed to view the 3D structure of WT1, a tumor suppressor protein, predicted by homology modeling, and to study the role of crucial residues in the WT1 protein by mutating these residues to other amino acids.
Material and Methods
The 3D structure of human WT1 was taken as the target. Figure 1 shows the normal interaction of WT1 with DNA strands, based on the crystal structure of a zinc finger protein.
The 449 amino acid sequence of WT1 was used for homology modeling. The sequence of WT1 was retrieved from the Swiss-Prot data bank in FASTA format [21]. The most suitable templates were used for 3D structure prediction. The retrieved amino acid sequence of WT1 was subjected to BLAST [22], and templates were retrieved on the basis of query coverage and identity. The 3D structures were predicted with MODELLER 9.0 [23], which requires a template for building the 3D structure of the target protein. Tools including stereochemistry checks and Ramachandran plots were used for structure evaluation [24]. Template identification and sequence alignment were carried out using FASTA and BLAST. The quality of the WT1 model was verified, and the stereochemistry of the model was evaluated with PROCHECK [25]. Mutational studies were done with WHAT IF [26]. Five human WT1 mutants were modeled: Lys371→Ala371, Ser415→Ala415, Cys416→Ala416, His434→Asp434 and His434→Arg434.
Results and Discussion
The study was largely based on the active site of WT1 and its role in DNA binding. The zinc finger binding domain interacts selectively and non-covalently with DNA. This domain is the classical zinc finger domain, in which two conserved cysteines and two histidines coordinate a zinc ion at the active site.
Cys416→Ala416 MUTANT
A significant change was observed when Cys416 was mutated to Ala416. In mutant Ala416 there was a reduction in the van der Waals contacts between the amino acids; loss of coordination with the zinc metal ion was also predicted (Figure 6 A and B).
Figure 6 A and B: Wild type (Cys416) and mutated (Ala416) WT1. The distance between Zn and Cys416 is increased in the mutated (Ala416) model. Cys416 is predicted to lie in the vicinity of His434 and His438, which are implicated in catalysis (6A), while Ala416 can only interact with His434 and not with His438 in the mutated model (6B).
Cys416 is located at the domain interface with its polar side chain completely buried (accessibility 0.00 Å). Replacement of this amino acid may account for considerable changes in the interior of the protein (Table 1). We predicted the possible changes arising from the mutation of Cys to Ala by molecular modeling. The amino acids Pro419, Ser420, Cys421, His434 and some atoms of His438 (ND1, NE2, CD2 and CE1) are present near Cys416. Zinc (Zn203) is also present in the vicinity (1.82 Å) of Cys416 (Figure 6). The mutated residue, Ala, is also predicted to remain buried (0.00 Å) in the interior of the protein. A significant change is observed, however, in the surroundings of the mutated Ala416: only a few atoms of His434 (CD2 and NE2) and His438 (CE1) remain nearby. This may reduce the van der Waals contacts between the respective amino acids. Loss of coordination with the zinc metal ion was also predicted, as the distance increased from 1.82 Å to 3.12 Å. It is therefore predicted that Cys416 plays a vital role in the interaction with other amino acid residues as well as in metal coordination, and that these interactions may be lost on replacement of Cys416.
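The coordination argument above rests on simple interatomic distances. A minimal sketch: the coordinates and the ~2.6 Å cutoff are illustrative assumptions (typical Zn-S/Zn-N coordination bonds are roughly 2.0-2.4 Å), not values taken from the model itself.

```python
import math

def atom_distance(a, b):
    """Euclidean distance (Angstrom) between two atoms given (x, y, z)."""
    return math.dist(a, b)

def coordination_lost(wild_dist, mutant_dist, cutoff=2.6):
    """Flag loss of metal coordination: the wild-type ligand lies within the
    coordination cutoff but the mutant ligand has moved beyond it."""
    return wild_dist <= cutoff < mutant_dist

# Cys416-Zn at 1.82 A coordinates the zinc; Ala416 at 3.12 A does not.
lost = coordination_lost(1.82, 3.12)
```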
His434→Arg434 MUTANT
In the His434→Arg434 mutant, there was a loss of Zn203 coordination with Arg434. His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with the DNA base A1. This suggests that the change might affect the DNA binding pattern (Figure 7 A and B).
Figure 7 A and B: Wild type (His434) and mutated (Arg434) WT1. The distance between Zn and His434 is increased in the mutated model. Arg434 is predicted to bind the DNA base A1 (B), while His434 in the original model (A) shows no bonding with a DNA base.
In the His434→Arg434 mutation, the distance between the mutated Arg and zinc (Zn203) increased from 2.28 Å to 5.00 Å, suggesting a loss of coordination with the metal ion. Mutational studies have shown that the hydrogen bonding network close to the zinc-binding motif plays a significant role in stabilizing the coordination of the zinc ion to the protein [23]. The mutated amino acid, Arg434, also moved considerably from a buried to a relatively exposed environment (2.28 Å to 5.35 Å). The presence of a positively charged Arg on the surface could account for additional interactions of the protein with other proteins or with surrounding water molecules. His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with the DNA base adenine, A1 (Figure 7). This suggests that the change might alter the DNA binding pattern.
Lys371→Ala371 and Ser415→Ala415 MUTANTS
No significant change was observed when Lys371 was mutated to Ala371 or Ser415 to Ala415. The changes arising in the overall structure and the surrounding amino acid residues were examined (Table 1). Lys371 is present on the surface (accessibility = 47.04 Å) of the WT1 molecule, so the internal protein structure was not affected considerably. In the original model, Lys371 stacks against thymine and forms a water-mediated contact with the side-chain hydroxyl of Ser367. Ala371 also stacks against the same DNA base, but the distance is slightly altered, and the hydrogen bond between Ala371 and Ser367 is not predicted in the mutated model. It has been demonstrated that mutations within fingers 2 and 4 abolish sequence-specific binding of WT1 to DNA bases [19], and mutation of the corresponding lysine in a peptide could reduce its affinity for DNA sevenfold [27]. On the other hand, it has been reported [28] that a surface mutation would not cause a significant change in the internal structure of a protein; however, the replacement of a basic polar residue with a non-polar one could account for a reduction in polarity. The modeling studies of the Lys to Ala mutation do not support this finding, however, and require further analysis.
The mutation Ser415→Ala415 was also modeled in the WT1 structure (Table 1). Ser415 is located near the active center of WT1, and it has been demonstrated that Ser415 makes a water-mediated contact with the phosphate of a DNA base, guanine [20]. In our predicted model of WT1, Ser415 makes two water-mediated contacts (water numbers 516 and 568). Mutation of this Ser to Ala resulted in the loss of one of these contacts, leading to a loss of binding. The replacement of a relatively polar residue, Ser, by a non-polar one, Ala, could account for this reduced interaction. This is also evident from a slight decrease in the accessibility of Ala (Ser415 = 7.96 Å; Ala415 = 7.61 Å).
His434→Asp434 MUTANT
In the His434→Asp434 mutant, there was a loss of coordination of the metal ion (Zn203) with Asp434, and Glu430 moved from a relatively exposed to a completely buried environment. His434 is also present at the active center of WT1. We modeled two mutants, His434→Asp434 and His434→Arg434 (Table 1). In the His434→Asp434 mutation, the water-mediated contact is lost. The distance between the mutated Asp and zinc (Zn203) also increased from 2.28 Å to 3.57 Å, suggesting a loss of coordination with the metal ion as well. The amino acid Glu430, which is present near His434, also moved considerably from a relatively exposed to a completely buried environment (14.83 Å to 0.00 Å).
Conclusion
It is concluded that mutation of amino acid residues Cys416→Ala416, His434→Asp434 and His434→Arg434 of WT1 may abolish its function of regulating genes by binding to specific parts of DNA. With the mutation of the above-mentioned residues, the roles of WT1 in cell growth, cell differentiation, apoptosis and tumor suppression may also be lost.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00250.pdf https://biogenericpublishers.com/jbgsr-ms-id-00250-text/