Search results for: current deflecting wall
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10143

273 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City

Authors: Lia Mania, Ketevan Nanobashvili

Abstract:

Background: The coronavirus disease COVID-19 has caused a global health crisis during the current pandemic. This study aims to address the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi, the capital of Georgia, among 7- to 12-year-old children with PCR- or rapid-test-confirmed past COVID-19 infection across all 10 districts of the city. A total of 332 children who had been infected with COVID-19 within the preceding year were included in the study. Participants were selected in Tbilisi schools by cluster sampling, with simple random selection within the selected clusters, so that an equal number of participants was drawn from each district. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample-size calculation. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of the data was done with SPSS-23. Risk factors were estimated by odds ratios and logistic regression with 95% confidence intervals. Results: Statistically significant differences between the mean oral health indicators of the asymptomatic and symptomatic groups were found for caries intensity (DMF+def; t=4.468, p<0.001), the modified gingival index (MGI; t=3.048, p=0.002), and the simplified oral hygiene index (S-OHI; t=4.853, p<0.001).
Symptomatic COVID-19 infection had a significant effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis): 77.3% vs. 58.0% (n=332; OR=2.46, 95% CI: 1.318-4.617). Logistic regression showed that the severity of COVID-19 infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity (B=0.903, AOR=2.467, 95% CI: 1.318-4.617). Symptomatic COVID-19 infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth-brushing behaviors, and carbohydrate and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth-brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, symptomatic COVID-19 infection itself affected the oral microbiome, producing abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis), and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID-19 infection and microbiome changes in the post-COVID period, as well as between oral health indicators and the symptomatic course of the infection.
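The odds ratio reported in the abstract can be reproduced from the two prevalence figures alone. The following minimal Python sketch (illustrative only, not part of the study's SPSS analysis) shows the arithmetic:

```python
def odds_ratio(p_exposed, p_unexposed):
    """Odds ratio from two prevalence proportions (0 < p < 1)."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_unexposed = p_unexposed / (1 - p_unexposed)
    return odds_exposed / odds_unexposed

# Prevalences reported in the abstract: 77.3% vs. 58.0%
or_symptomatic = odds_ratio(0.773, 0.580)  # ≈ 2.47, close to the reported OR = 2.46
```

The small discrepancy with the reported 2.46 comes only from rounding the percentages; the adjusted OR of 2.467 from the logistic regression matches to three figures.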

Keywords: oral microbiome, COVID-19, population based research, oral health indicators

Procedia PDF Downloads 69
272 Disseminating Positive Psychology Resources Online: Current Research and Future Directions

Authors: Warren Jared, Bekker Jeremy, Salazar Guy, Jackman Katelyn, Linford Lauren

Abstract:

Introduction: Positive Psychology research has burgeoned in the past 20 years; however, relatively few evidence-based resources to cultivate positive psychology skills are widely available to the general public. The positive psychology resources at www.mybestself101.org were developed to assist individuals in cultivating well-being using a variety of techniques, including gratitude, purpose, mindfulness, self-compassion, savoring, personal growth, and supportive relationships. These resources are empirically based and are built to be accessible to a broad audience. Key Objectives: This presentation highlights results from two recent randomized intervention studies of specific MBS101 learning modules. A key objective of this research is to empirically assess the efficacy and usability of these online resources. Another objective of this research is to encourage the broad dissemination of online positive psychology resources; thus, recommendations for further research and dissemination will be discussed. Methods: In both interventions, we recruited adult participants using social media advertisements. The participants completed several well-being and positive psychology construct-specific measures (savoring and self-compassion measures) at baseline and post-intervention. Participants in the experimental condition were also given a feedback questionnaire to gather qualitative data on how participants viewed the modules. Participants in the self-compassion study were randomly split between an experimental group, who received the treatment, and a control group, who were placed on a waitlist. There was no control group for the savoring study. Participants were instructed to read content on the module and practice savoring or self-compassion strategies listed in the module for a minimum of twenty minutes a day for 21 days. 
The intervention was semi-structured, as participants were free to choose which module activities they would complete from a menu of research-based strategies. Participants tracked which activities they completed and how long they spent on the modules each day. Results: In the savoring study, participants increased in savoring ability as indicated by multiple measures. In addition, participants increased in well-being from pre- to post-treatment. In the self-compassion study, repeated measures mixed model analyses revealed that compared to waitlist controls, participants who used the MBS101 self-compassion module experienced significant improvements in self-compassion, well-being, and body image with effect sizes ranging from medium to large. Attrition was 10.5% for the self-compassion study and 71% for the savoring study. Overall, participants indicated that the modules were generally helpful, and they particularly appreciated the specific strategy menus. Participants requested more structured course activities, more interactive content, and more practice activities overall. Recommendations: Mybestself101.org is an applied positive psychology research program that shows promise as a model for effectively disseminating evidence-based positive psychology resources that are both engaging and easily accessible. Considerable research is still needed, both to test the efficacy and usability of the modules currently available and to improve them based on participant feedback. Feedback received from participants in the randomized controlled trial led to the development of an expanded, 30-day online course called The Gift of Self-Compassion and an online mindfulness course currently in development called Mindfulness For Humans.

Keywords: positive psychology, intervention, online resources, self-compassion, dissemination, online curriculum

Procedia PDF Downloads 204
271 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education

Authors: Clea Southall

Abstract:

Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts, such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools, delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created, comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity before focusing on individual neurons. Students were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function.
They viewed animal brains and created ‘pipe-cleaner neurons’, which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with undergraduate students can provide an alternative method for supplementing teacher knowledge, increasing teachers’ confidence in delivering future lessons on the nervous system.
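Group comparisons of the kind reported above (mean 1.5 ± 1.0 vs. 0.8 ± 0.96) can be tested with Welch's t statistic computed directly from summary statistics. This sketch is illustrative only; the group sizes of 30 are hypothetical, since the abstract does not report them:

```python
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from two groups' means, SDs, and sizes
    (no equal-variance assumption)."""
    return (m1 - m2) / sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Means and SDs from the abstract; n=30 per group is a hypothetical value
t = welch_t(1.5, 1.0, 30, 0.8, 0.96, 30)  # ≈ 2.77
```

With the true group sizes, the statistic would be compared against the t distribution with Welch-Satterthwaite degrees of freedom to obtain the reported p-value.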

Keywords: education, neuroscience, primary school, undergraduate

Procedia PDF Downloads 211
270 Chemopreventive Efficacy of Andrographolide in Rat Colon Carcinogenesis Model Using Aberrant Crypt Foci (ACF) as Endpoint Marker

Authors: Maryam Hajrezaie, Mahmood Ameen Abdulla, Nazia Abdul Majid, Hapipa Mohd Ali, Pouya Hassandarvish, Maryam Zahedi Fard

Abstract:

Background: Colon cancer is one of the most prevalent cancers in the world and is the third leading cause of death among cancers in both males and females. The incidence of colon cancer ranks fourth among all cancers but varies in different parts of the world. Cancer chemoprevention is defined as the use of natural or synthetic compounds capable of inducing biological mechanisms necessary to preserve genomic fidelity. Andrographolide is the major labdane diterpenoid constituent of the plant Andrographis paniculata (family Acanthaceae), used extensively in traditional medicine. Extracts of the plant and their constituents are reported to exhibit a wide spectrum of biological activities of therapeutic importance. Laboratory animal model studies have provided evidence that andrographolide plays a role in reducing the risk of certain cancers. Objective: Our aim was to evaluate the chemopreventive efficacy of andrographolide in the AOM-induced rat model. Methods: To evaluate the inhibitory properties of andrographolide on colonic aberrant crypt foci (ACF), five groups of 7-week-old male rats were used. Group 1 (control group) was fed 10% Tween 20 once a day, Group 2 (cancer control) rats were intraperitoneally injected with 15 mg/kg azoxymethane, Group 3 (drug control) rats were injected with 15 mg/kg azoxymethane and treated with 5-fluorouracil, and Groups 4 and 5 (experimental groups) were fed 10 and 20 mg/kg andrographolide, respectively, once a day. After 1 week, the treatment group rats received subcutaneous injections of azoxymethane, 15 mg/kg body weight, once weekly for 2 weeks. Control rats continued on Tween 20 feeding once a day, and the experimental groups continued on 10 and 20 mg/kg andrographolide feeding once a day for 8 weeks. All rats were sacrificed 8 weeks after the azoxymethane treatment. Colons were evaluated grossly and histopathologically for ACF.
Results: Administration of 10 mg/kg and 20 mg/kg andrographolide was found to be effectively chemopreventive, as evidenced microscopically and biochemically. Andrographolide suppressed total colonic ACF formation by up to 40% and 60%, respectively, when compared with the control group. Pre-treatment with andrographolide significantly reduced the impact of AOM toxicity on plasma protein and urea levels, as well as on plasma aspartate aminotransferase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH), and gamma-glutamyl transpeptidase (GGT) activities. Grossly, colorectal specimens revealed that andrographolide treatments decreased the mean number of crypts in AOM-treated rats. Importantly, rats fed andrographolide showed 75% inhibition of foci containing four or more aberrant crypts. The results also showed a significant increase in glutathione (GSH), superoxide dismutase (SOD), nitric oxide (NO), and prostaglandin E2 (PGE2) activities and a decrease in malondialdehyde (MDA) level. Histologically, all treatment groups showed a significant decrease in dysplasia compared to the control group. Immunohistochemical staining showed up-regulation of Hsp70 and down-regulation of Bax proteins. Conclusion: The current study demonstrated that andrographolide reduces the number of ACF. According to these data, andrographolide might have promising chemopreventive activity in the AOM-induced ACF model.

Keywords: chemopreventive, andrographolide, colon cancer, aberrant crypt foci (ACF)

Procedia PDF Downloads 429
269 Key Aroma Compounds as Predictors of Pineapple Sensory Quality

Authors: Jenson George, Thoa Nguyen, Garth Sanewski, Craig Hardner, Heather Eunice Smyth

Abstract:

Pineapple (Ananas comosus), with its unique sweet flavour, is one of the most popular tropical, non-climacteric fruits consumed worldwide. It is also the third most important tropical fruit in world production. In Australia, 99% of pineapple production comes from the state of Queensland due to its favourable subtropical climatic conditions. The fruit is known to contain around 500 volatile organic compounds (VOCs) at varying concentrations, which contribute greatly to its flavour quality by providing distinct aroma sensory properties described as sweet, fruity, tropical, pineapple-like, caramel-like, coconut-like, etc. The aroma of pineapple is one of the important factors attracting consumers and strengthening the marketplace. To better understand the aroma of Australian-grown pineapples, a matrix-matched method combining gas chromatography-mass spectrometry (GC-MS), headspace solid-phase microextraction (HS-SPME), and stable-isotope dilution analysis (SIDA) was developed and validated. The developed method represents a significant improvement over current methods through the incorporation of multiple external reference standards, multiple isotope-labelled internal standards, and a model system matching the pineapple fruit matrix. This method was employed to quantify 28 key aroma compounds in more than 200 genetically diverse pineapple varieties from a breeding program. The Australian pineapple cultivars varied in the content and composition of free volatile compounds, which predominantly comprised esters, followed by terpenes, alcohols, aldehydes, and ketones. Using selected commercial cultivars grown in Australia, sensory analysis yielded descriptors for appearance (colour), aroma (intensity, sweet, vinegar/tang, tropical fruits, floral, coconut, green, metallic, vegetal, fresh, peppery, fermented, eggy/sulphurous), and texture (crunchiness, fibrousness, and juiciness).
Relationships between sensory descriptors and volatiles were explored by applying multivariate analysis (principal component analysis, PCA) to the sensory and chemical data. The key aroma compounds of pineapple exhibited a positive correlation with the corresponding sensory properties. The sensory and volatile data were also used to explore genetic diversity in the breeding population. A genome-wide association study (GWAS) was employed to unravel the genetic control of the pineapple volatilome and its interplay with fruit sensory characteristics. This study enhances our understanding of pineapple aroma (flavour) compounds and their biosynthetic pathways and expands breeding options for pineapple cultivars. This research provides foundational knowledge to support breeding programs, post-harvest and target-market studies, and efforts to optimise the flavour of commercial pineapple varieties and their parent lines to produce better-tasting fruits for consumers.
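The positive compound-to-descriptor correlations described above are, at their simplest, Pearson correlations between a volatile's concentration and a panel score across samples. This sketch uses hypothetical values (the abstract does not report individual concentrations or scores):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: ester concentration vs. panel 'fruity' score for four samples
ester = [1.0, 2.0, 3.0, 4.0]
fruity = [2.1, 2.9, 4.2, 4.8]
r = pearson_r(ester, fruity)  # strongly positive, ≈ 0.99
```

PCA, as used in the study, extends this idea by projecting the full correlated set of descriptors and volatiles onto a few shared components.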

Keywords: Ananas comosus, pineapple, flavour, volatile organic compounds, aroma, GC-MS, HS-SPME, SIDA

Procedia PDF Downloads 57
268 A Proposal of a Strategic Framework for the Development of Smart Cities: The Argentinian Case

Authors: Luis Castiella, Mariano Rueda, Catalina Palacio

Abstract:

The world’s rapid urbanisation represents an excellent opportunity to implement initiatives that are oriented towards a country’s general development. However, this phenomenon has created considerable pressure on current urban models, pushing them nearer to a crisis. As a result, several factors usually associated with underdevelopment have been steadily rising. Moreover, actions taken by public authorities have not been able to keep up with the speed of urbanisation, which has impeded them from meeting the demands of society, responding with reactionary policies instead of coordinated, organised efforts. In contrast, the concept of a Smart City, which emerged around two decades ago, in principle represents a city that utilises innovative technologies to remedy the everyday issues of its citizens, empowering them with the newest available technology and information. This concept has come to adopt a wider meaning, including human and social capital, as well as productivity, economic growth, quality of life, environment, and participative governance. These developments have also disrupted the management of institutions such as academia, which have become key in generating scientific advancements that can solve pressing problems and in forming a specialised class that is able to follow up on these breakthroughs. In this light, the Ministry of Modernisation of the Argentinian Nation has created a model that is rooted in the concept of a ‘Smart City’. This effort considered all the dimensions at play in an urban environment, with careful monitoring of each sub-dimension in order to establish the government’s priorities and improve the effectiveness of its operations. In an attempt to improve the overall efficiency of the country’s economic and social development, these focused initiatives have also encouraged citizen participation and the cooperation of the private sector, replacing short-sighted policies with coherent and organised ones.
This process was developed gradually. The first stage consisted of building the model’s structure; the second, of applying the method to specific case studies and verifying that the mechanisms used respected the desired technical and social aspects. Finally, the third stage consists of repeating this experiment and comparing the results in order to measure the effects on the ‘treatment group’ over time. The first trial was conducted on 717 municipalities and evaluated the dimension of Governance. Results showed that levels of governmental maturity varied sharply with size: cities with fewer than 150,000 people had a strikingly lower level of governmental maturity than cities with more than 150,000 people. With the help of this analysis, some important trends and target populations became apparent, which enabled the public administration to focus its efforts and increase its probability of success. It also made it possible to cut costs and time and to create a dynamic framework in tune with the population’s demands, improving quality of life through sustained efforts to develop social and economic conditions within the territorial structure.

Keywords: composite index, comprehensive model, smart cities, strategic framework

Procedia PDF Downloads 176
267 (Anti)Depressant Effects of Non-Steroidal Antiinflammatory Drugs in Mice

Authors: Horia Păunescu

Abstract:

Purpose: The study aimed to assess the depressant or antidepressant effects of several nonsteroidal anti-inflammatory drugs (NSAIDs) in mice: the selective cyclooxygenase-2 (COX-2) inhibitor meloxicam, and the non-selective COX-1 and COX-2 inhibitors lornoxicam, sodium metamizole, and ketorolac. Current literature data regarding such effects of these agents are scarce. Materials and methods: The study was carried out on NMRI mice weighing 20-35 g, kept in a standard laboratory environment. The study was approved by the Ethics Committee of the University of Medicine and Pharmacy "Carol Davila", Bucharest. The study agents were injected intraperitoneally, 10 mL/kg body weight (bw), 1 hour before the assessment of locomotor activity by cage testing (n=10 mice/group) and 2 hours before the forced swimming tests (n=15). The study agents were dissolved in normal saline (meloxicam, sodium metamizole), ethanol 11.8% v/v in normal saline (ketorolac), or water (lornoxicam), respectively. Negative and positive control agents were also given (amitriptyline in the forced swimming test). The cage floor used in the locomotor activity assessment was divided into 20 equal 10 cm squares. The forced swimming test involved partial immersion of the mice in cylinders (15 cm height, 9 cm diameter) filled with water (10 cm depth, at 28°C), where they were left for 6 minutes. The cage endpoint used in the locomotor activity assessment was the number of treaded squares. Four endpoints were used in the forced swimming test (immobility latency for the entire 6 minutes, and immobility, swimming, and climbing scores for the final 4 minutes of the swimming session), recorded by an observer blinded to the experimental design.
The statistical analysis used the Levene test for variance homogeneity, followed by ANOVA and post-hoc analysis as appropriate (Tukey or Tamhane tests). Results: No statistically significant increase or decrease in the number of treaded squares was seen in the locomotor activity assessment of any mouse group. In the forced swimming test, amitriptyline showed an antidepressant effect in each experiment at the 10 mg/kg bw dosage. Sodium metamizole was depressant at 100 mg/kg bw (increased the immobility score, p=0.049, Tamhane test), but not at lower dosages (25 and 50 mg/kg bw). Ketorolac showed an antidepressant effect at the intermediate dosage of 5 mg/kg bw (increased the swimming score, p=0.012, Tamhane test), but not at the dosages of 2.5 and 10 mg/kg bw. Meloxicam and lornoxicam did not alter the forced swimming endpoints at any dosage level. Discussion: 1) Certain NSAIDs caused changes in the forced swimming patterns without interfering with locomotion. 2) Sodium metamizole showed a depressant effect, whereas ketorolac proved antidepressant. Conclusion: NSAID-induced mood changes are not class effects of these agents and apparently are independent of the type of inhibited cyclooxygenase (COX-1 or COX-2). Disclosure: This paper was co-financed from the European Social Fund, through the Sectorial Operational Programme Human Resources Development 2007-2013, project number POSDRU/159/1.5/S/138907 "Excellence in scientific interdisciplinary research, doctoral and postdoctoral, in the economic, social and medical fields - EXCELIS", coordinator The Bucharest University of Economic Studies.

Keywords: antidepressant, depressant, forced swim, NSAIDs

Procedia PDF Downloads 235
266 A Study of Interleukin-1β Genetic Polymorphisms in Gastric Carcinoma and Colorectal Carcinoma in Egyptian Patients

Authors: Mariam Khaled, Noha Farag, Ghada Mohamed Abdel Salam, Khaled Abu-Aisha, Mohamed El-Azizi

Abstract:

Gastric and colorectal cancers are among the most frequent causes of cancer-associated mortality in Africa. They have been considered a global public health concern, as nearly one million new cases are reported per year. IL-1β is a pro-inflammatory cytokine produced by activated macrophages and monocytes, and a member of the IL-1 family. The inactive IL-1β precursor is cleaved and activated by the caspase-1 enzyme, which is itself activated by the assembly of intracellular structures known as NLRP3 (NOD-like receptor protein 3) inflammasomes. Activated IL-1β stimulates the interleukin-1 receptor type 1 (IL-1R1), which is responsible for the initiation of a signal transduction pathway leading to cell proliferation. The IL-1β gene is highly polymorphic, and single nucleotide polymorphisms (SNPs) may affect its expression. It has been previously reported that SNPs involving base transitions between C and T at positions -511 (C-T; dbSNP: rs16944) and -31 (C-T; dbSNP: rs1143627) from the transcriptional start site contribute to the pathogenesis of gastric and colorectal cancers by affecting IL-1β levels. Altered production of IL-1β due to such polymorphisms is suspected to stimulate an amplified inflammatory response and promote epithelial-mesenchymal transition, leading to malignancy. The allele frequency distribution of the IL-1β-31 and -511 SNPs in different populations, and their correlation with the incidence of gastric and colorectal cancers, has been intriguing to researchers worldwide. The current study aims to investigate the allele distributions of these IL-1β SNPs among Egyptian patients with gastric and colorectal cancers. To that end, 89 biopsy and surgical specimens from the antrum and corpus mucosa of chronic gastritis subjects and gastric and colorectal carcinoma patients were collected for DNA extraction, followed by restriction fragment length polymorphism polymerase chain reaction (RFLP-PCR).
The amplified PCR products of IL-1β-31C > T and IL-1β-511T > C were digested by incubation with the restriction endonucleases AluI and AvaI. Statistical analysis was carried out to determine the allele frequency distribution in the three studied groups. In addition, the effect of the IL-1β-31 and -511 SNPs on nuclear factor binding was analyzed using a fluorescence electrophoretic mobility shift assay (EMSA), preceded by nuclear factor extraction from gastric and colorectal tissue samples and LPS-stimulated monocytes. The results of this study showed that a significantly higher percentage of Egyptian gastric cancer patients have a homozygous CC genotype at the IL-1β-31 position and a heterozygous TC genotype at the IL-1β-511 position. Moreover, a significantly higher percentage of the colorectal cancer patients have a homozygous CC genotype at the IL-1β-31 and -511 positions compared to the control group. In addition, the EMSA results showed that the IL-1β-31 C/T and IL-1β-511 T/C SNPs do not affect nuclear factor binding. The results suggest that the IL-1β-31 C/T and IL-1β-511 T/C SNPs may be correlated with the incidence of gastric cancer in Egyptian patients; however, similar findings could not be demonstrated for the IL-1β-511 T/C SNP in the colorectal cancer patient group. This is the first study to investigate the IL-1β-31 and -511 SNPs in the Egyptian population.
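Genotype frequency comparisons of the kind described above are typically tested with a Pearson chi-square statistic on a 2x2 contingency table. The sketch below is illustrative and uses hypothetical counts, not the study's data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: CC vs. non-CC genotype in patients vs. controls
stat = chi2_2x2(30, 20, 15, 35)  # ≈ 9.09; exceeds the 1-df critical value of 3.84
```

A statistic above 3.84 corresponds to p < 0.05 with one degree of freedom, i.e. a significant difference in genotype distribution between the groups.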

Keywords: colorectal cancer, Egyptian patients, gastric cancer, interleukin-1β, single nucleotide polymorphisms

Procedia PDF Downloads 141
265 Parents’ Perceptions of the Consent Arrangements for Dental Public Health Programmes in North London: A Qualitative Exploration

Authors: Charlotte Jeavons, Charitini Stavropoulous, Nicolas Drey

Abstract:

Background: Over one-third of five-year-olds and almost half of all eight-year-olds in the UK have obvious caries experience that can be detected by visual screening techniques. School-based caries prevention programs that apply fluoride varnish to young children’s teeth operate in many areas of the UK. Their aim is to reduce dental caries in children. The Department of Health guidance (2009) on consent states that information must be provided to parents to enable informed, autonomous decision-making prior to any treatment involving their young children. Fluoride varnish schemes delivered in primary schools use letters for this purpose. Parents are expected to return these indicating their consent or refusal. A large proportion of parents do not respond. In the absence of positive consent, these children are excluded from the program. Non-response is more common in deprived areas, creating inequality. The reason for this is unknown. The consent process used is underpinned by the ethical theory of deontology, which is prevalent in clinical dentistry and widely accepted in bio-ethics. Objective: To investigate parents’ views, understanding, and experience of the fluoride varnish program taking place in their child’s school, including their views about the practical consent arrangements. Method: Schools participating in the fluoride varnish scheme operating in Enfield, North London, were asked to take part. Parents with children in nursery, reception, or year one were invited to participate via semi-structured interviews and focus groups. Thematic analysis was conducted. Findings: 40 parents were recruited from eight schools. The global theme of ‘trust’ was identified as the strongest influence on parental responses.
Six themes were identified: protecting children from harm is viewed by parents as their role; parents have the capability to decide but lack confidence; sharing responsibility for their child’s oral health with the State is welcomed by parents; existing relationships within parents’ social networks strongly influence consent decisions; official dental information is not communicated effectively; and sending a letter to parents while excluding them from meeting dental practitioners is ineffective. The information delivered via the letter was not strongly identified by parents as influencing their response. Conclusions: Personal contact with the person(s) providing information and requesting consent has a greater impact on parental consent responses than written information alone. This demonstrates that traditional bio-ethical ideas about rational decision-making, in which emotions are transcended and interference is not justified unless it prevents harm to an unaware person, are outdated. Parental decision-making is relational, and the consent process should be adapted to reflect this. The current system, which has a deontological view of decision-making at its core, impoverishes parental autonomy and may ultimately increase dental inequalities as a result.

Keywords: consent, decision, ethics, fluoride, parents

Procedia PDF Downloads 171
264 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

Gaza Strip, part of Palestine (365 km2, 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to find a non-conventional water resource in treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the SAT (soil aquifer treatment) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the proposed RRS. The results of the monitored wells were assessed against PWA baseline data and put into a groundwater model to simulate the plume and propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations, and their functioning schedules (including well groupings). The proposed scenarios were simulated using Visual MODFLOW v4.2. The results of the monitored wells were assessed based on the location of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m, and 750 m away from the IBs).
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) alongside a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating an expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates, and the locations of the recovery wells. A simulation of plume expansion and path-lines was extracted from the model, showing how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells in a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, north gaza

Procedia PDF Downloads 313
263 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, caused especially by soil sealing and the gray energy consumed in construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing among empty nest households seems worthwhile to investigate for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs when no space adjustment takes place in the household lifecycle stage in which children move out from home, so that the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet households' equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed in a way that freed-up space could be rented out long-term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households which never had children.
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable living space on a binary categorical variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after children leave home, despite the market frictions impairing relocation, tend to reduce their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas of Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand for housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalent units related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
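The estimation strategy described above can be sketched with synthetic data. This is a minimal illustration, not the study's actual model: the 20 m2 effect size, the income control, and the population count are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# 1 = empty-nest household, 0 = comparison household that never had children
empty_nest = rng.integers(0, 2, n)
income = rng.normal(50, 10, n)  # illustrative control variable

# Hypothetical data-generating process: empty nests occupy ~20 m2 more on average
space = 70 + 20 * empty_nest + 0.5 * income + rng.normal(0, 8, n)

# OLS: living space ~ intercept + empty-nest dummy + controls
X = np.column_stack([np.ones(n), empty_nest, income])
beta, *_ = np.linalg.lstsq(X, space, rcond=None)
avg_diff = beta[1]  # estimated average under-utilized space per empty nest

# Extrapolate to a (hypothetical) population count of empty nests
n_empty_nests_population = 1_000_000
total_underutilized_m2 = avg_diff * n_empty_nests_population
```

The dummy coefficient recovers the group difference net of the controls, which is then scaled up to the population, mirroring the extrapolation step in the abstract.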

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 171
262 EEG and DC-Potential Level Changes in the Elderly

Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova

Abstract:

In the modern world, the number of elderly people is increasing, and preserving the functionality of the organism in old age has become very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This is expressed in a reduced rate of information processing, a loss of working memory capacity, and a decreased ability to learn and store new information. Promising directions in studying the neurophysiological parameters of aging are brain imaging methods: computer electroencephalography, neuroenergy mapping of the brain, and methods for studying neurodynamic brain processes. Research aim: to study features of brain aging in elderly people using the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who had no psychiatric disorders or chronic conditions in a decompensation stage. EEG was recorded with a 128-channel GES-300 system (USA) while the participants sat at rest with their eyes closed for 3 minutes. For quantitative assessment of the EEG, we used spectral analysis. The spectrum was analyzed in the delta (0.5-3.5 Hz), theta (3.5-7.0 Hz), alpha-1 (7.0-11.0 Hz), alpha-2 (11.0-13.0 Hz), beta-1 (13.0-16.5 Hz), and beta-2 (16.5-20.0 Hz) ranges, and spectral power was estimated in each frequency range. The 12-channel hardware-software diagnostic complex 'Neuroenergometr-KM' was applied for registration, processing, and analysis of the brain's constant (DC) potential level, which was recorded in monopolar leads. It was revealed that the EEG of elderly people shows higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges with aging, especially in frontal areas.
The comparative analysis showed that elderly people aged 60-64 have higher spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05), as well as higher beta-1 values in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain's constant potential level revealed an increase in total energy consumption across the main areas of the brain. In frontal leads we registered the lowest values of the constant potential level; this may indicate decreased energy metabolism in this area and difficulties with executive functions. Comparative analysis of the potential difference across the main leads indicates uneven lateralization of brain functions in elderly people, and the potential difference between the right and left hemispheres points to a prevalence of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people in early old age (60-64 years), which points to greater reserve capacity of the central nervous system. By age 70, age-related changes in cerebral energy exchange and in the level of brain electrogenesis appear, reflecting deterioration of the homeostatic mechanisms of self-regulation and of the ongoing processing of perceptual data.
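The band-wise spectral power analysis described above can be sketched as follows. This is an illustrative computation on a synthetic signal, not the study's pipeline; the sampling rate, Welch parameters, and the 10 Hz test signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands used in the abstract (Hz)
BANDS = {
    "delta": (0.5, 3.5), "theta": (3.5, 7.0),
    "alpha1": (7.0, 11.0), "alpha2": (11.0, 13.0),
    "beta1": (13.0, 16.5), "beta2": (16.5, 20.0),
}

def band_powers(signal, fs):
    """Estimate spectral power per band from Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 0.5 Hz resolution
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() * df  # approximate band integral
    return powers

# Synthetic 3-minute eyes-closed recording dominated by a 10 Hz alpha rhythm
fs = 250  # assumed sampling rate, Hz
t = np.arange(0, 180, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
p = band_powers(eeg, fs)
```

With a dominant 10 Hz rhythm, the alpha-1 band carries most of the power, so group comparisons of these per-band values (e.g., delta power in frontal areas) follow directly.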

Keywords: brain, DC-potential level, EEG, elderly people

Procedia PDF Downloads 485
261 Applications of Polyvagal Theory for Trauma in Clinical Practice: Auricular Acupuncture and Herbology

Authors: Aurora Sheehy, Caitlin Prince

Abstract:

Within current orthodox medical protocols, trauma and mental health issues are deemed to reside within the realm of cognitive or psychological therapists and are marginalised, in part due to the limited drug options available, most of which manipulate neurotransmitters or sedate patients to reduce symptoms. By contrast, this research presents examples from clinical practice of how trauma can be assessed and treated physiologically. Adverse Childhood Experiences (ACEs) are a tally of different types of abuse and neglect. The ACE score has been used as a measurable and reliable predictor of the likelihood of developing autoimmune disease and is a direct way to demonstrate reliably the health impact of traumatic life experiences. A second assessment tool is Allostatic Load, which refers to the cumulative effects of chronic stress on mental and physical health. It records the decline of an individual's physiological capacity to cope with their experience, using a specific grouping of serum tests and physical measures that covers the neuroendocrine, cardiovascular, immune, and metabolic systems. Allostatic load demonstrates the health impact that trauma has throughout the body. It forms part of an initial intake assessment in clinical practice and could also be used in research to evaluate treatment. Examining medicinal plants for their physiological, neurological, and somatic effects through the lens of Polyvagal theory offers new opportunities for trauma treatments. In situations where Polyvagal theory recommends activities and exercises to enable parasympathetic activation, many herbs that affect Effector Memory T (TEM) cells also enact these responses. Traditional or Indigenous European herbs show the potential to support polyvagal tone through multiple mechanisms.
As the ventral vagal nerve reaches almost every major organ, plants that act on these tissues can be understood via their polyvagal actions: monoterpenes as agents to improve respiratory vagal tone; cyanogenic glycosides to reset polyvagal tone; volatile oils rich in phenyl methyl esters to improve both sympathetic and parasympathetic tone; and bitters to activate gut function and strongly promote parasympathetic regulation. Auricular acupuncture uses a somatotopic mapping of the auricular surface overlaid with an image of an inverted foetus, with each body organ and system featured. Given that the concha of the auricle is the only place on the body where Vagus Nerve neurons reach the surface of the skin, several investigators have evaluated non-invasive, transcutaneous electrical nerve stimulation (TENS) at auricular points. Drawn from an interdisciplinary evidence base and developed through clinical practice, these assessment and treatment tools are examples of practitioners in the field innovating out of necessity for the best outcomes for patients. This paper draws on case studies to direct future research.

Keywords: polyvagal, auricular acupuncture, trauma, herbs

Procedia PDF Downloads 92
260 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology

Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab

Abstract:

Background: Bio-fabrication is a multidisciplinary research field that combines several principles, fabrication techniques, and protocols from different fields. The open-source-software movement supports the use of open-source licenses for some or all software, as part of the broader notion of open collaboration. Additive manufacturing (AM) is the concept behind 3D printing: a manufacturing method that builds objects layer by layer from computer-aided designs (CAD). AM systems can be categorized by the type of process used. One of these AM technologies is digital light processing (DLP), a 3D printing technology that rapidly cures a photopolymer resin to create hard scaffolds. DLP uses a projected light source to cure (harden or crosslink) an entire layer at once. Current applications of DLP are focused on dental and medical uses, and further developments in this field have led to the revolutionary field of 3D bioprinting. The open-source movement was started to spread the concept of open-source software and to provide software or hardware that is cheaper, more reliable, and of better quality. Objective: Modification of a desktop 3D printer into a 3D bioprinter, and integration of DLP technology with bio-fabrication to produce an antibacterial dental model. Method: A desktop 3D printer was modified into a 3D bioprinter. Gelatin and sodium alginate hydrogels were prepared at different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and the extracts were prepared at different levels (powder, aqueous extract, total oil, and essential oil) for antibacterial testing. The agar well diffusion method with E. coli was used to perform the sensitivity test for the antibacterial activity of the extracts of Zingiber officinale, Syzygium aromaticum, and Allium sativum.
Lastly, DLP printing was performed to produce several dental models with the natural extracts combined with hydrogel to represent and simulate the hard and soft tissues. Result: The desktop 3D printer was modified into a 3D bioprinter using the open-source Marlin firmware and modified, custom-made 3D-printed parts. Sodium alginate and gelatin hydrogels were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). The resin was integrated with the natural extracts of Zingiber officinale rhizome, Syzygium aromaticum flower buds, and Allium sativum bulbs at 1-3% for each extract. Finally, the antimicrobial dental model was printed, exhibited antimicrobial activity, and was then merged with sodium alginate hydrogel. Conclusion: The open-source movement was successful in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this scope. Additionally, integrating DLP technology with bioprinting is a promising step toward harnessing the antimicrobial activity of natural products.

Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, zingiber officinale, syzygium aromaticum, allium sativum, panax ginseng, dental applications

Procedia PDF Downloads 94
259 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by Ku-ring-gai Council in Sydney's northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. Since then, however, source control techniques related to stormwater quality have come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, existing policies were compared between Australian cities, in a country considered experienced in the use of this technique, and Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are all located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only the drainage of the on-site system. This approach is criticized for not evaluating impacts on urban drainage systems and may, in rare cases, cause an increase in peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models.
Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD design in the Brazilian and Australian municipalities, especially with regard to PSD values, which sit at opposite ends of the scale. However, the lack of research on the real performance of constructed OSD does not allow a determination of which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems, and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
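The relationship between PSD and SSR can be illustrated with a minimal mass-balance sketch: given a design-storm inflow hydrograph and an outflow capped at the PSD, the SSR is the peak accumulated excess volume. This is a simplified illustration, not any of the municipal methods compared in the study; the triangular storm and the numeric values are hypothetical.

```python
def site_storage_requirement(inflow_l_s, psd_l_s, dt_s):
    """Peak detention volume (m3) for an inflow hydrograph (L/s) capped at PSD (L/s)."""
    storage = 0.0  # litres currently detained
    peak = 0.0
    for q_in in inflow_l_s:
        # accumulate inflow above the permitted discharge; storage cannot go negative
        storage = max(0.0, storage + (q_in - psd_l_s) * dt_s)
        peak = max(peak, storage)
    return peak / 1000.0  # litres -> m3

# Hypothetical triangular 30-minute burst peaking at 80 L/s, with a PSD of 25 L/s
inflow = [80 * min(t, 30 - t) / 15 for t in range(0, 31)]  # ordinates at 1-min steps
ssr = site_storage_requirement(inflow, psd_l_s=25, dt_s=60)
```

Under a site-based approach, this calculation is run for the site alone; the catchment-based approach instead derives PSD and SSR from hydrograph models of the whole catchment, which is why the two approaches yield such different values.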

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 180
258 A Review of Data Visualization Best Practices: Lessons for Open Government Data Portals

Authors: Bahareh Ansari

Abstract:

Background: The Open Government Data (OGD) movement of the last decade has encouraged many government organizations around the world to make their data publicly available in order to advance democratic processes. But current open data platforms have not yet reached their full potential in supporting all interested parties. To make the data useful and understandable for everyone, scholars have suggested that opening the data should be supplemented by visualization. However, different visualizations of the same information can dramatically change an individual's cognitive and emotional experience in working with the data. This study reviews the data visualization literature to create a list of methods empirically tested to enhance users' performance and experience in working with a visualization tool. This list can be used to evaluate OGD visualization practices and to inform future open data initiatives. Methods: Previous reviews of the visualization literature categorized visualization outcomes into four categories: recall/memorability, insight/comprehension, engagement, and enjoyment. To identify papers, a search for these outcomes was conducted in the abstracts of publications in top-tier visualization venues, including IEEE Transactions on Visualization and Computer Graphics, Computer Graphics, and the proceedings of the CHI Conference on Human Factors in Computing Systems. The search results were complemented with a search in the references of the identified articles, and a search for the keywords 'open data visualization' and 'visualization evaluation' in the IEEE Xplore and ACM digital libraries. Articles are included if they provide empirical evidence through controlled user experiments, or if they review such empirical studies. The qualitative synthesis of the studies focuses on identifying and classifying the methods, and the conditions under which they are found to positively affect the visualization outcomes.
Findings: The keyword search yielded 760 studies, of which 30 were included after the title/abstract review. The classification of the included articles shows five distinct methods: interactive design, aesthetic (artistic) style, storytelling, decorative elements that do not provide extra information (including text, images, and embellishments on the graphs), and animation. Studies on decorative elements consistently find positive effects of these elements on user engagement and recall, but are less consistent in their examination of user performance. This inconsistency could be attributable to the particular data type or specific design method used in each study. The interactive design studies are consistent in finding a positive effect on the outcomes. Storytelling studies show some inconsistencies regarding the design's effect on user engagement, enjoyment, recall, and performance, which could indicate that specific conditions are required for the use of this method. The last two methods, aesthetics and animation, appear less frequently in the included articles and provide consistent positive results on some of the outcomes. Implications for e-government: The review of visualization best-practice methods shows that each of these methods is beneficial under specific conditions. By applying these methods under potentially beneficial conditions, OGD practices can encourage a wide range of individuals to engage with government data and ultimately participate in government policy-making procedures.

Keywords: best practices, data visualization, literature review, open government data

Procedia PDF Downloads 106
257 Assessing the Experiences of South African and Indian Legal Profession from the Perspective of Women Representation in Higher Judiciary: The Square Peg in a Round Hole Story

Authors: Sricheta Chowdhury

Abstract:

To require a woman to choose between her work and her personal life is the most acute form of discrimination that can be meted out against her. No woman should be made to choose between motherhood and her career at the Bar, yet that is the most detrimental discrimination that has been happening at the Indian Bar, and no one has questioned it so far. The falling number of women in practice is a reality that garners little attention, given the sharp rise in women studying law who are nevertheless unable to continue in the profession. Moving from a colonial misogynist whim to a post-colonial facade of the 'new-age construct of the Indian woman', the policymakers of the Indian Judiciary have done nothing so far to decolonize themselves from a rudimentary understanding of 'equality of gender' when it comes to the legal profession. Therefore, when Indian jurisprudence was (and is) embracing the sweeping effect of transformative constitutionalism in its understanding of equality as enshrined in the Indian Constitution, one cannot help but question why the legal profession remained untouched by the drive for substantive equality. The airline industry's discriminatory policies were not spared from criticism, nor were the policies restricting women's employment in establishments serving liquor (the Anuj Garg case), but judicial practice did not question the stereotypical bias of gender and unequal structural practices until recently. This necessitates examining the existing Bar policies and the steps taken by the regulatory bodies, assessing how they favor or work against the purpose of furthering women's issues in present-day India. From a comparative feminist point of view, South Africa's pro-women Bar policies are attractive to assess for their applicability and extent in promoting inclusivity at the Bar.
This article intends to tap these two countries' potential to carve a niche in giving women an equal platform to play a substantive role in designing governance policies through the Judiciary. The article analyses the current gender composition of the legal profession while endorsing substantive equality as a requisite in designing an appropriate process for the appointment of judges. It studies the theoretical framework on gender equality, examines the international and regional instruments, and analyses the scope of welfare policies that Indian legal and regulatory bodies can undertake as a transformative initiative to re-model the Judiciary into a more diverse and inclusive institution. The methodology employs a comparative and analytical reading of doctrinal resources. It makes quantitative use of secondary data and qualitative use of primary data collected to determine the present status of Indian women legal practitioners and judges. For the quantitative data, statistics on the representation of women as judges, chief justices, and senior advocates, taken from official websites from 2018 to the present, are utilized. For the qualitative data, the results of structured interviews conducted through open- and close-ended questions with retired women judges of the higher judiciary and senior advocates of the Supreme Court of India, contacted through snowball sampling, are utilized.

Keywords: gender, higher judiciary, legal profession, representation, substantive equality

Procedia PDF Downloads 83
256 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy

Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta

Abstract:

Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil's Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through Estratégia Saúde da Família (FHS; Family Health Strategy) interdisciplinary teams, comprised of Community Health Agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance, and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities, and job context. Global health administrators and policy makers can draw on the similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses, and 32 physicians in Brazil. We compared providers' demographic characteristics (age, race, and gender) and job context variables (caseload, work experience, work proximity to the community, length of commute, and familiarity with the community). We also compared providers' perceptions of their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods, and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence, and perseverance). Descriptive and bivariate analyses, such as Pearson chi-square tests and analysis of variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%).
The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians all believed that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, no study has compared key demographic and cognitive variables across ACS, nurses, and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service training to instill reflective skills in providers, improve the communication skills of medical providers, and improve the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.
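The bivariate comparisons named in the methods (ANOVA F-tests across the three provider groups, Pearson chi-square for categorical variables) can be sketched as follows. The perception scores and contingency counts below are synthetic placeholders, not the study's data; only the group sizes (168 ACS, 62 nurses, 32 physicians) come from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical perception scores (e.g., decision-making autonomy on a 1-5 scale)
acs = rng.normal(3.2, 0.6, 168)
nurses = rng.normal(3.4, 0.6, 62)
physicians = rng.normal(3.9, 0.6, 32)

# One-way ANOVA F-test comparing the three provider groups
f_stat, p_anova = stats.f_oneway(acs, nurses, physicians)

# Pearson chi-square on a categorical variable, e.g., gender by provider type
table = np.array([[30, 138],   # ACS: male, female (hypothetical counts)
                  [5, 57],     # nurses
                  [16, 16]])   # physicians
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
```

A significant F-statistic indicates that at least one group mean differs (e.g., physicians' higher autonomy), and the chi-square test checks whether the categorical distribution (here, gender) is independent of provider type.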

Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care

Procedia PDF Downloads 229
255 Global News Coverage of the Pandemic: Towards an Ethical Framework for Media Professionalism

Authors: Anantha S. Babbili

Abstract:

This paper analyzes the media practices currently dominant in global journalism within the framework of the world press theories of Libertarianism, Authoritarianism, Communism, and Social Responsibility, to evaluate their efficacy in the coverage of the coronavirus, also known as COVID-19. Global media flows, determinants of news coverage, international awareness, and the Western view of the world are critically analyzed within the context of the prevalent news values that underpin the free press and media coverage of the world. While evaluating the global discourse paramount to a sustained and dispassionate understanding of world events, this paper proposes an ethical framework that brings clarity, devoid of sensationalism, partisanship, and right-wing or left-wing interpretations, to the breaking and dangerous development of a pandemic. As the world struggles to contain the coronavirus pandemic, with deaths climbing close to 6,000 from late January to mid-March 2020, the populations of developed and developing nations alike are beset with news media renditions of the crisis that are contradictory and confusing and that evoke anxiety, fear, and hysteria. How are we to understand differing news standards and news values? What lessons do we, as journalism and mass media educators, researchers, and academics, learn in order to construct a better news model and a structure of media practice that addresses science, health, and media literacy among media practitioners, journalists, and news consumers? As traditional media struggle to cover the pandemic for their audiences, social media, from which an increasing number of consumers get their news, have exerted their influence both positively and negatively.
Even as the world struggles to grasp the full significance of the pandemic, the World Health Organization (WHO) has been battling an additional challenge it has termed an 'infodemic': 'an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.' There is, indeed, a need for journalism and news coverage in times of pandemics that reflect social responsibility and the ethos of public service journalism. Social media and high-tech information corporations, collectively termed GAMAF (Google, Apple, Microsoft, Amazon, and Facebook), can team up with reliable traditional media (newspapers, magazines, book publishers, and radio and television corporations) to ease public emotions and be helpful in times of a pandemic outbreak. GAMAF can, conceivably, weed out sensational and non-credible sources of coronavirus information and exotic cures offered for sale as a quick fix, and demonetize videos that exploit people's vulnerabilities at their lowest ebb. Credible news of utility, delivered in a sustained, calm, and reliable manner, serves people in a meaningful and helpful way. The world's consumers of news and information deserve a healthy and trustworthy news media, at least in the time of the COVID-19 pandemic. Towards this end, the paper proposes a practical model for news media and journalistic coverage during a pandemic.

Keywords: COVID-19, international news flow, social media, social responsibility

Procedia PDF Downloads 112
254 Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

Authors: S. C. Lin, M. S. Wu

Abstract:

Japanese scholar Manabu Sato has been advocating the Learning Community, which transformed fundamental education in Japan over the last three decades; it has also been called a 'Quiet Revolution.' Manabu Sato criticized traditional education for focusing only on individual competition, exams, teacher-centered instruction, and memorization, leaving students lacking learning motivation. He therefore proclaimed that learning should be a sustainable process of 'constantly weaving the relationship and the meanings' by having dialogues with learning materials, with peers, and with oneself. For a long time, secondary school education in Taiwan has focused on exams and emphasized reciting and memorizing, and some students have simply given up on learning. Manabu Sato's learning community program has been implemented very successfully in Japan, so it is worth exploring whether the learning community can resolve this 'escape from learning' phenomenon among secondary school students in Taiwan. This study covered the first year of a two-year project. The project applied a program theory-driven approach to evaluating the impact of teachers' professional development interventions on students' learning, using a mix of methods, qualitative inquiry, and a quasi-experimental design. The current study presents the results of using the theory-driven approach to program planning to design and evaluate a teacher professional development program (TPDP). Manabu Sato's learning community theory was applied to structure all components of a 54-hour workshop. The participants consisted of seven secondary school science teachers from two schools. The research procedure comprised: 1) defining the problem and assessing participants' needs; 2) selecting the theoretical framework; 3) determining theory-based goals and objectives; 4) designing the TPDP intervention; 5) implementing the TPDP intervention; and 6) evaluating the TPDP intervention.
Data were collected from a number of different sources, including a TPDP checklist, workshop activity responses, an LC subject-matter test, teachers' e-portfolios, course design documents, and a teacher-belief survey. The major findings indicated that the program design was suitable for the participants. More than 70% of the participants were satisfied with the program implementation. They reported that the TPDP was beneficial to their instruction and promoted their professional capacities. However, due to heavy teaching loads during the project, some participants were unable to attend all workshops. To resolve this problem, the author offered them the option of watching DVDs or reading articles provided by the research team. This study also established a communication platform for participants to share their thoughts and learning experiences. The TPDP had marked impacts on participants' teaching beliefs. They came to believe that learning should be a sustainable process of 'constantly weaving the relationship and the meanings' by having dialogues with learning materials, with peers, and with oneself. Having learned from the TPDP, they applied a 'learner-centered' approach and instructional strategies, such as learning by doing, collaborative learning, and reflective learning, to design their courses. To conclude, participants' beliefs, knowledge, and skills were promoted by the program interventions.

Keywords: program theory-driven approach, learning community, teacher professional development program, program evaluation

Procedia PDF Downloads 308
253 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, have exhibited an alarming surge in frequency in recent times, adding to what is already a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impacts on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and for mitigating the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management.
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
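The three-phase taxonomy described above can be sketched as a simple lookup structure. This is an illustrative organization only, not code from the survey itself; the task names are abridged from the abstract:

```python
# Three-phase taxonomy of wildfire-management research tasks,
# abridged from the survey's description (illustrative only).
FIRE_PHASES = {
    "pre-fire": [
        "fire risk assessment",
        "fuel property analysis",
    ],
    "during-fire": [
        "active fire detection and localization",
        "suppression method optimization",
        "fire behavior prediction",
    ],
    "post-fire": [
        "damage extent analysis",
        "forest regeneration",
        "wildlife impact",
        "economic losses",
        "health impacts of burning byproducts",
    ],
}

def phase_of(task):
    """Return the management phase a given research task belongs to."""
    return next(
        (phase for phase, tasks in FIRE_PHASES.items() if task in tasks),
        None,
    )

print(phase_of("fire behavior prediction"))  # during-fire
```

A survey reader could extend such a table with the AI/ML discipline applied to each task to obtain the two-dimensional classification the paper proposes.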

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
252 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size

Authors: Natalie Hoyer

Abstract:

Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin; since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards, so that steel materials for bridge bearings can be selected to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at a previously defined hotspot using spark erosion. A dynamic load is then applied until crack initiation occurs, so as to achieve realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen until it reaches a temperature at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until it fails in a brittle manner. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.

Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests

Procedia PDF Downloads 44
251 Exposing The Invisible

Authors: Kimberley Adamek

Abstract:

According to the Council on Tall Buildings and Urban Habitat, there has been a rapid increase in the construction of tall or 'megatall' buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s; the eastern expansion of the USA's infamous Tornado Alley is just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified in wind tunnels, and a handful of cities such as San Francisco have included wind testing in building code standards, there are still many examples where wind is considered only for basic loading. This typically results in increased structural expense and unwanted mitigation strategies proposed late in a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower, and even destroy parts of cities that people take notice. For example, towers have caused winds to blow objects into people (the Walkie-Talkie tower, London, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and created wind tunnels in streets, among many other issues. Alternatively, some towers have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Bangkok, Thailand), or to increase wind speeds in order to generate electricity (Bahrain World Trade Center, Manama). Wind and weather exist and affect all parts of the world in domains such as science, health, war, infrastructure, catastrophes, tourism, shopping, media, and materials.
Working in partnership with a leading wind engineering company, RWDI, a series of tests, images, and animations documenting discovered interactions of different building forms with wind will be collected to demonstrate the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions, and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested in wind tunnels and with computational fluid dynamics systems, which will expose, utilize, and manipulate wind flows to create new forms, technologies, and experiences. Ultimately, this thesis aims to question the degree to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.

Keywords: wind engineering, climate, visualization, architectural aerodynamics

Procedia PDF Downloads 358
250 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, a number of conditions must be fulfilled for the basic parameters of electronic memory: non-volatility, multi-bit capability, high integration density, and low power consumption. Several types of memory are available in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) stands out due to its multi-bit property, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the effect of resistive switching: a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One method for the technical implementation of neuromorphic systems is the cross-bar structure, in which ReRAM cells are interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that manufacturing nanocomposites based on these materials makes it possible to combine the advantages and mitigate the disadvantages of each material.
Therefore, the ZnₓTiᵧHfzOᵢ nanocomposite was chosen as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not require electroforming, a process that degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes; during the measurements, the TiN film was grounded. Analysis of the obtained current-voltage characteristics showed resistive switching from the HRS to the LRS at +1.87±0.12 V, and from the LRS to the HRS at -2.71±0.28 V. An endurance test showed that the HRS was 283.21±32.12 kΩ and the LRS was 1.32±0.21 kΩ over 100 measurements, and that the HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free nanocomposite ZnₓTiᵧHfzOᵢ films in the manufacturing of neuromorphic systems. This work was supported by RFBR, according to research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.
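As a quick arithmetic check, the reported memory window follows directly from the endurance statistics. This is a minimal sketch using only the mean values quoted above, not the authors' measurement code:

```python
# Reported endurance statistics (means over 100 switching cycles, in kOhm).
hrs_kohm = 283.21  # high-resistance state
lrs_kohm = 1.32    # low-resistance state

# Memory window (HRS/LRS ratio) at the 0.6 V reading voltage.
ratio = hrs_kohm / lrs_kohm
print(f"HRS/LRS ratio ~ {ratio:.2f}")  # ~ 214.55, matching the reported value
```

A window of two orders of magnitude at the reading voltage is what makes the intermediate resistance levels needed for multi-bit (synapse-like) operation distinguishable.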

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 132
249 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity, and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status, which fisheries managers use to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity, and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver, and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. The sex ratio was biased in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight.
Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww), with different proportions of phospholipids (PL), wax esters (WE), triacylglycerols (TAG), and sterols (ST). The highest TL was observed in immature ovaries (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE, and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww); the liver acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values, up to 117.3 mg g-1 (ww), were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote the sustainability of the fishery.

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 260
248 Lentiviral-Based Novel Bicistronic Therapeutic Vaccine against Chronic Hepatitis B Induces Robust Immune Response

Authors: Mohamad F. Jamiluddin, Emeline Sarry, Ana Bejanariu, Cécile Bauche

Abstract:

Introduction: Over 360 million people are chronically infected with hepatitis B virus (HBV), of whom 1 million die each year from HBV-associated liver cirrhosis or hepatocellular carcinoma. Current treatment options for chronic hepatitis B depend on interferon-α (IFNα) or nucleos(t)ide analogs, which control virus replication but rarely eliminate the virus. Treatment with PEG-IFNα leads to a sustained antiviral response in only one third of patients, and after withdrawal of the drugs, a rebound of viremia is observed in the majority of patients. Furthermore, long-term treatment is associated with the appearance of drug-resistant HBV strains, which are often the cause of therapy failure. Among the new therapeutic avenues being developed, therapeutic vaccines aimed at inducing immune responses similar to those found in resolvers are of growing interest. The high prevalence of chronic hepatitis B necessitates the design of better vaccination strategies capable of eliciting a broad spectrum of cell-mediated immunity (CMI) and humoral immune responses that can control chronic hepatitis B. Induction of HBV-specific T cells and B cells by therapeutic vaccination may be an innovative strategy to overcome virus persistence. Lentiviral vectors developed and optimized by THERAVECTYS, due to their ability to transduce non-dividing cells, including dendritic cells, and to induce CMI responses, have demonstrated their effectiveness as vaccination tools. Method: To develop an HBV therapeutic vaccine that can induce a broad but specific immune response, we generated a recombinant lentiviral vector carrying an IRES (Internal Ribosome Entry Site)-containing bicistronic construct that allows the co-expression of two vaccine products, namely an HBV T-cell epitope vaccine and an HBV virus-like particle (VLP) vaccine.
The HBV T-cell epitope vaccine consists of an immunodominant cluster of CD4 and CD8 epitopes, with spacers in between, derived from the HBV surface protein, HBV core, HBV X, and polymerase. The HBV VLP vaccine is an HBV core protein-based chimeric VLP displaying surface protein B-cell epitopes. In order to evaluate immunogenicity, mice were immunized with the lentiviral constructs by intramuscular injection. The T cell and antibody responses to the two vaccine products were analyzed using an IFN-γ ELISpot assay and ELISA, respectively, to quantify the adaptive response to the HBV antigens. Results: Following a single administration in mice, the lentiviral construct elicited robust antigen-specific IFN-γ responses to the encoded antigens. The HBV T-cell epitope vaccine demonstrated significantly higher T cell immunogenicity than the HBV VLP vaccine. Importantly, we demonstrated by ELISA that antibodies are induced against both the HBV surface protein and the HBV core protein when mice are injected with the vaccine construct (p < 0.05). Conclusion: Our results highlight that THERAVECTYS lentiviral vectors may represent a powerful platform for immunization strategies against chronic hepatitis B. Our data suggest the importance of further studying this lentiviral-vector-based bicistronic construct, in combination with drugs or as a standalone antigen, as a therapeutic lentiviral HBV vaccine. The THERAVECTYS bicistronic HBV vaccine will be further evaluated in animal efficacy studies.

Keywords: chronic hepatitis B, lentiviral vectors, therapeutic vaccine, virus-like particle

Procedia PDF Downloads 335
247 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' among cognitive tasks, while in consecutive interpreting (CI) the interpreter does not have to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, totaling 321,960 running words. The lexical features extracted are lexical density, list head coverage, hapax legomena, type-token ratio, and core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is also employed. The frequency motif, a sequential unit not bound by grammar, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may tax cognitive resources differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of working memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows the interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, and they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. Based on these results, we propose a revised effort model to better illustrate the cognitive demand during both interpreting types.
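The lexical indices named above are straightforward to compute from a tokenized corpus. The following is our illustrative sketch, not the authors' pipeline; the sample token list and function-word set are made up:

```python
from collections import Counter

def lexical_profile(tokens, function_words):
    """Compute simple lexical-simplification indices:
    type-token ratio, hapax legomena count, and lexical density."""
    counts = Counter(tokens)
    n = len(tokens)
    return {
        # type-token ratio: vocabulary variety
        "ttr": len(counts) / n,
        # hapax legomena: word types occurring exactly once
        "hapax": sum(1 for c in counts.values() if c == 1),
        # lexical density: share of content (non-function) words
        "density": sum(1 for t in tokens if t not in function_words) / n,
    }

# Hypothetical mini-sample; a real study would run this over the full corpus
# and compare SI, CI, translated, and original subcorpora.
tokens = "the interpreter kept the structure of the source text".split()
profile = lexical_profile(tokens, function_words={"the", "of", "a", "to"})
print(profile)
```

Lower TTR and hapax counts together with higher function-word density in one subcorpus relative to another would be read as lexical simplification in the sense used above.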

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 178
246 The Assessment of Infiltrated Wastewater on the Efficiency of Recovery Reuse and Irrigation Scheme: North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km2 and 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, of which only 5-10% is suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to find a non-conventional water resource in treated wastewater, to cover agricultural requirements and serve the population. A new wastewater treatment plant (WWTP) project is to replace the old, overloaded Beit Lahia WWTP. The project consists of three parts: phase A (a pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning, and nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have witnessed many delays, which forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the soil aquifer treatment (SAT) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the proposed RRS. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable solution to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations, and the functioning schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW v4.2. The results of the monitored wells were assessed based on the locations of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m, and 750 m away from the IBs).
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) accompanied by a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, which indicated an expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on this, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates, and the locations of the recovery wells. Plume expansion and path-lines were extracted from the model to monitor how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining the RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.

Keywords: soil aquifer treatment, recovery reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 204
245 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers in log files reflect usage activity, i.e., end-users' clicks and the underlying user-agents' hits. As Web Usage Mining is interested in end-users' behavior, user-agents' hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: (i) a server records requests interlaced in sequential order regardless of their source or type, and (ii) website resources may be set up so that they are requestable interchangeably by end-users and user-agents. Current methods are content-centric, based on filtering heuristics that classify items as relevant or irrelevant according to cleaning attributes such as resource filetype extensions, resources pointed to by hyperlinks/URIs, HTTP methods, and user-agents. These methods require exhaustive extra-weblog data and prior knowledge of which items should be treated as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: resources may be clickable by end-users regardless of their type, resources may be indexed by frame names without filetype extensions, and web content may be generated and rendered differently from one end-user to another. To overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes; it is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure that webpage requests impose on the log, in terms of clicks and hits.
Since a webpage consists of a single URI plus several components, each webpage request produces a single-click-to-multiple-hits ratio in terms of the requested and referring resources. The clustering-based method is therefore designed to identify two clusters by applying an appropriate distance to the frequency matrix of the requested and referring resources. As the click-to-hits ratio is one to many, the clicks cluster is the smaller of the two in number of requests. Hierarchical agglomerative clustering with a pairwise distance (Gower) and average linkage was applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, selected on the basis of average linkage and maximum inter-cluster inertia, always yields two clusters. Evaluating the smaller cluster, referred to as the clicks cluster, in terms of confusion-matrix indicators gives a true positive rate of 97%, whereas the content-centric cleaning methods, i.e., conventional and advanced cleaning, achieved a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods on dynamic and responsive web designs without needing any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
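The click/hit separation just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: SciPy has no built-in Gower coefficient, so a range-normalized Manhattan distance (to which Gower reduces on purely numeric attributes) stands in for it, and the frequency matrix here is synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def clicks_mask(freq_matrix):
    """Average-linkage hierarchical clustering into two clusters;
    per the paper's heuristic, the smaller cluster (fewer requests)
    is taken to hold the end-users' clicks."""
    lo, hi = freq_matrix.min(axis=0), freq_matrix.max(axis=0)
    ranges = np.where(hi > lo, hi - lo, 1.0)       # avoid division by zero
    normed = (freq_matrix - lo) / ranges
    # Gower on numeric attributes = range-normalized Manhattan distance
    dist = pdist(normed, metric="cityblock") / freq_matrix.shape[1]
    labels = fcluster(linkage(dist, method="average"),
                      t=2, criterion="maxclust")
    sizes = {lab: int((labels == lab).sum()) for lab in (1, 2)}
    return labels == min(sizes, key=sizes.get)

# Synthetic frequency matrix: 5 click-like rows, 45 hit-like rows
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.1, (5, 4)),      # clicks: low frequencies
               rng.normal(10.0, 0.1, (45, 4))])   # hits: high frequencies
mask = clicks_mask(X)
```

Rows flagged by `mask` would be kept as clicks; the rest would be cleaned off as hits before mining.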

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 170
244 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their ability to catalyze complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimizing enzymes and exploiting their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed, and the open environment also enables direct enzyme measurement. These properties give cell-free biotechnology excellent throughput for enzyme generation. However, current screening methods have limited capability. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the measured enzymatic activity depends on the label's binding affinity and photostability. To acquire the natural activity of an enzyme, an alternative is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. At a turnaround time of 30 minutes per sample, HPLC limits the enzyme improvement achievable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is in high demand. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a coherent Raman spectroscopy technique that can identify label-free chemical components specifically from their inherent molecular vibrations; these characteristic vibrational signals arise from the different vibrational modes of chemical bonds.
With broadband CARS, the chemicals in a sample can be identified from their signals in a single broadband CARS spectrum. Moreover, CARS can magnify signal levels by several orders of magnitude over spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) into acetaldehyde and the reduced form (NADH), was used. The NADH signal at 1660 cm⁻¹, generated by the nicotinamide moiety of NADH, was used to measure its concentration. The evaluation time for the NADH CARS signal was as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing NADH signal intensity, and this CARS result was consistent with that of a conventional UV-Vis measurement. CARS is thus expected to find application in high-throughput enzyme screening and to enable more reliable enzyme improvement within a reasonable time.
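Converting the 1660 cm⁻¹ peak intensity into an NADH concentration, and the time course into a reaction rate, is a standard linear-calibration step. The sketch below uses made-up intensity values (the abstract does not report calibration data) purely to show the arithmetic:

```python
import numpy as np

def calibrate(concentrations_mM, signals):
    """Fit signal = a * concentration + b from NADH standards.
    Assumes the 1660 cm^-1 peak intensity is linear in NADH
    concentration over the measured range."""
    a, b = np.polyfit(concentrations_mM, signals, deg=1)
    return a, b

def signal_to_conc(signal, a, b):
    """Invert the calibration to recover concentration in mM."""
    return (signal - b) / a

# Hypothetical calibration standards (mM) and peak intensities (a.u.)
cal_conc = np.array([0.0, 2.5, 5.0, 10.0])
cal_sig = np.array([0.1, 1.1, 2.1, 4.1])        # exactly 0.4*conc + 0.1
a, b = calibrate(cal_conc, cal_sig)

# Hypothetical reaction time course: one 0.33 s CARS read per point
t = np.arange(0, 10, 0.33)
trace = 0.1 + 0.4 * t + 0.05 * np.random.default_rng(1).normal(size=t.size)
nadh_mM = signal_to_conc(trace, a, b)
rate_mM_per_s = np.polyfit(t, nadh_mM, deg=1)[0]
```

At 0.33 s per read, a full time course of dozens of points takes seconds rather than the 30 minutes of one HPLC injection, which is where the throughput gain comes from.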

Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 141