Search results for: mathematics result per gender
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12756

846 A Case Study on Quantitatively and Qualitatively Increasing Student Output by Using Available Word Processing Applications to Teach Reluctant Elementary School-Age Writers

Authors: Vivienne Cameron

Abstract:

Background: Between 2010 and 2017, teachers in a suburban public school district struggled to get students to consistently produce adequate writing samples as measured by the Pennsylvania state writing rubric for measuring focus, content, organization, style, and conventions. A common thread in all of the data was the need to develop stamina in the student writers. Method: All of the teachers used the traditional writing process model (prewrite, draft, revise, edit, final copy) during writing instruction. One teacher taught the writing process using word processing and incentivizing with publication instead of the traditional pencil/paper/grading method. Students did not have instruction in typing/keyboarding. The teacher submitted resulting student work to real-life contests, magazines, and publishers. Results: Students in the test group increased both the quantity and quality of their writing over a seven month period as measured by the Pennsylvania state writing rubric. Reluctant writers, as well as students with autism spectrum disorder, benefited from this approach. This outcome was repeated consistently over a five-year period. Interpretation: Removing the burden of pencil and paper allowed students to participate in the writing process more fully. Writing with pencil and paper is physically tiring. Students are discouraged when they submit a draft and are instructed to use the Add, Remove, Move, Substitute (ARMS) method to revise their papers. Each successive version becomes shorter. Allowing students to type their papers frees them to quickly and easily make changes. The result is longer writing pieces in shorter time frames, allowing the teacher to spend more time working on individual needs. With this additional time, the teacher can concentrate on teaching focus, content, organization, style, conventions, and audience. S/he also has a larger body of works from which to work on whole group instruction such as developing effective leads. 
The teacher submitted the resulting student work to contests, magazines, and publishers. Although time-consuming, the submission process was an invaluable lesson for teaching about audience and tone. All students in the test sample had work accepted for publication. Students became highly motivated to succeed when their work was accepted for publication. This motivation applied to special needs students, regular education students, and gifted students.

Keywords: elementary-age students, reluctant writers, teaching strategies, writing process

Procedia PDF Downloads 175
845 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan

Authors: Shahid Razzaq

Abstract:

Governments around the globe have started taking knowledge management dynamics into consideration, with or without conscious realization, when formulating, implementing, and evaluating strategies for public sector organizations and public policy development. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, employee performance issues persist and remain a challenge for the government. To overcome these issues, the department has taken several steps, including HR strategies, the use of technology, and a focus on hard issues. This study, consequently, attempts to highlight the importance of a soft issue, knowledge management in its true essence, in tackling these performance issues. The public sector is a largely ignored area within knowledge management, a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the most strategically significant resource that can result in competitive advantage for an organization over competing organizations. In the context of this study, it implies that to improve employee performance, organizations have to expand their heterogeneous knowledge bases. The study uses a cross-sectional, quantitative research design. The data were collected from knowledge workers of the Health Department of Punjab, the largest province of Pakistan, yielding a sample size of 341. SmartPLS 3 Version 2.6 was used for analyzing the data. The analysis revealed that knowledge management processes have a strong impact on knowledge-worker performance, and all hypotheses were accepted. It can therefore be summed up that knowledge management activities should be implemented to increase employee performance.
The Health Department of Punjab introduced knowledge management infrastructure and systems to make knowledge effectively available to service staff. This infrastructure led to an increase in knowledge management processes in remote hospitals, basic health units, and care centers, which in turn resulted in greater service provision to the public. The study has both theoretical and practical significance. In terms of theoretical contribution, it establishes the relationship between knowledge management and performance for the first time. In terms of practical contribution, it gives public sector organizations and government insight into the role of knowledge management in employee performance. Public policymakers are therefore strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral intentions. To the best of the authors' knowledge, the originality of this study lies in its examination of the impact of knowledge management on employee performance.

Keywords: employee performance, knowledge management, public sector, soft issues

Procedia PDF Downloads 141
844 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuing challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. An alternative solution is presented in this paper, which focuses on utilizing in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in Gamma Technology (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the low number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), the NO2/NOx ratio, etc.
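As an illustration of the ensemble-learning step described above, the sketch below fits a gradient-boosted regressor to synthetic in-cylinder features. The feature ranges, the Arrhenius-style NOx surrogate, and all numbers are assumptions for the example, not the engine data or the model from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# hypothetical in-cylinder combustion features over the IVC-EVO period:
# burned-zone temperature (K), burned-zone O2 mass fraction, trapped fuel mass (mg)
T_burn = rng.uniform(1800, 2600, n)
o2 = rng.uniform(0.02, 0.12, n)
fuel = rng.uniform(10, 60, n)
X = np.column_stack([T_burn, o2, fuel])

# synthetic NOx target with an Arrhenius-like temperature dependence (invented form)
nox = 1e6 * np.exp(-9000.0 / T_burn) * o2**0.5 * fuel * (1 + rng.normal(scale=0.05, size=n))

# ensemble machine-learning model trained on "steady-state" points
X_tr, X_te, y_tr, y_te = train_test_split(X, nox, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2_test = model.score(X_te, y_te)  # held-out predictive quality
```

In the paper's workflow the features would come from the GT-Power DI-Pulse combustion model rather than random sampling, and validation would span speed/load, EGR, injection-timing, and VVT sweeps.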

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
843 Methods of Detoxification of Nuts With Aflatoxin B1 Contamination

Authors: Auteleyeva Laura, Maikanov Balgabai, Smagulova Ayana

Abstract:

In order to find and select detoxification methods, patent and information research was conducted, as a result of which 68 patents for inventions were found: 14 from the near abroad (Russia) and, from the far abroad, 27 from China, 6 from the USA, 1 from South Korea, 2 from Germany, 4 from Mexico, 7 from Yugoslavia, and 1 security document each from Austria, Taiwan, Belarus, Denmark, Italy, Japan, and Canada. Aflatoxin B₁ in various nuts was determined by two methods: the enzyme immunoassay "RIDASCREEN® FAST Aflatoxin", with optical density determined on a RIDA®ABSORPTION 96 microplate spectrophotometer with RIDASOFT® Win.NET software (Germany), and high-performance liquid chromatography (HPLC, Waters Corporation, USA) according to GOST 30711-2001. For experimental contamination of the nuts, the A. flavus KWIK-STIK strain was cultivated on Czapek medium (France), with subsequent infection of various nuts (peanuts, peanuts in shells, badam, walnuts with and without shells, pistachios). Based on our research, we selected two detoxification methods: method 1, a combined method (5% citric acid solution + microwave at 640 W for 3 min + UV for 20 min), and method 2, a chemical method using leaves of the plants Artemisia terrae-albae, Thymus vulgaris, and Calligonum aphyllum, collected in the Akmola region (Artemisia terrae-albae, Thymus vulgaris) and Western Kazakhstan (Calligonum aphyllum). The first stage was the production of ethanol extracts of Artemisia terrae-albae, Thymus vulgaris, and Calligonum aphyllum. To obtain them, 100 g of plant raw material was dissolved in 70% ethyl alcohol. Extraction was carried out for 2 hours at the boiling point of the solvent under a reflux condenser using a "Sapphire" ultrasonic bath. The obtained extracts were evaporated on an IKA RV 10 rotary evaporator. At the second stage, the three samples obtained were tested for antimicrobial and antifungal activity.
Extracts of Thymus vulgaris and Calligonum aphyllum showed high antimicrobial and antifungal activity, while the Artemisia terrae-albae extract showed high antimicrobial activity but low antifungal activity. When testing method 1, it was found that in the first and third experimental groups the concentration of aflatoxin B1 in walnut samples decreased by 63% and 65%, respectively, but these values still exceeded the maximum permissible concentrations, and the nuts in the second and third experimental groups had a tart lemon flavor. When testing method 2, a decrease in the concentration of aflatoxin B1 to a safe level (0.0038 mg/kg, a 91% reduction) was observed in nuts of the 1st and 2nd experimental groups (Artemisia terrae-albae, Thymus vulgaris), while in samples of the 2nd and 3rd experimental groups a decrease in the amount of aflatoxin B1 to a safe level was also observed.

Keywords: nuts, aflatoxin B1, mycotoxins

Procedia PDF Downloads 86
842 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data across experimental questions without testing whether differences existed between them. Indeed, different topics could show different dynamics; for example, people may be more prone to accepting someone else's opinion on less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') together with the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed each participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics, suggesting that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty.
This is in contrast with typical micro-dynamic rules, in which agents move to an average point instead of jumping directly to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing someone to 'agree' or 'disagree' pushed participants toward, respectively, higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree' despite being exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even when starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule and allows us to build models that are directly grounded in experimental results.
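The style of micro-dynamic rule suggested by these findings (a push toward the peer's stated side plus noise stronger than the influence, on a bounded continuous opinion) can be simulated directly. The sketch below is a toy model with invented parameter values, not the fitted model from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def interact(op, peer_sign, influence=0.5, noise=2.0):
    """Shift a continuous opinion in [-10, 10] toward the peer's stated side
    ('agree' = +1, 'disagree' = -1), with noise stronger than the influence.
    Both parameter values are illustrative assumptions."""
    return float(np.clip(op + influence * peer_sign + rng.normal(scale=noise), -10, 10))

# start from an unpolarized population and iterate random pairwise interactions
ops = rng.normal(0.0, 2.0, size=1000)
for _ in range(20000):
    i, j = rng.integers(0, 1000, size=2)
    peer = 1.0 if ops[j] >= 0 else -1.0  # the peer communicates only agree/disagree
    ops[i] = interact(ops[i], peer)
```

Because the noise term dominates the influence term, some interactions flip an agent against the peer's stated opinion, matching the observed 'agree'-to-'disagree' flips, while the bounds at ±10 let the noisy walk accumulate opinion mass away from the unpolarized center.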

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 162
841 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research relates to the development of a recently proposed technique for the formation of composite materials, such as optical glass-ceramics, with a predetermined structure and predetermined properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics that provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation that allow determining these parameters. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion-exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h).
The ion-exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples, and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of average grain size were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters that provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine-tuning the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
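The coupled character of the model (ion-exchange diffusion feeding grain dissolution, with β acting as a solubility threshold and γ as a rate ratio) can be illustrated with a dimensionless finite-difference toy. Everything below, including the exact form of the equations, the parameter values, and the grid, is an assumption for illustration, not the authors' actual model:

```python
import numpy as np

# dimensionless toy: diffusant concentration c(x, t) and normalized grain size r(x, t)
beta, gamma = 0.1, 0.5   # hypothetical solubility and time-ratio parameters
nx, nt = 100, 2000
dx, dt = 0.01, 2e-5      # dt / dx**2 = 0.2, below the 0.5 explicit-scheme stability limit

c = np.zeros(nx)
c[0] = 1.0               # surface held at the salt-melt diffusant concentration
r = np.ones(nx)          # grains initially of uniform normalized size

for _ in range(nt):
    # explicit finite-difference step of the ion-exchange diffusion
    c[1:-1] += dt * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    # grains dissolve wherever the diffusant exceeds the solubility threshold beta,
    # at a rate scaled by gamma (ratio of diffusion and dissolution time scales)
    r = np.maximum(r - dt * gamma * np.maximum(c - beta, 0.0), 0.0)
```

Running this yields a reduced-grain-size subsurface layer that deepens with time; fitting β and γ would then amount to matching such simulated r(x) profiles against the measured grain-size profiles at several temperatures and processing times.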

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 269
840 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of building smart value chains to improve productivity, handle increasing time and cost pressure, and meet the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools, but companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists, based on factors such as people, technology, and organization. The method for the determination of digitally pervasive value chains introduced in this paper takes the factors people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
839 Care Experience of a Female Breast Cancer Patient Undergoing Modified Radical Mastectomy

Authors: Ting-I Lin

Abstract:

Purpose: This article explores the care experience of a 34-year-old female breast cancer patient who was admitted to the intensive care unit after undergoing a modified radical mastectomy. The patient discovered a lump in her right breast during a self-examination and, after mammography and ultrasound-guided biopsy, was diagnosed with a malignant tumor in the right breast. The tumor measured 1.5 x 1.4 x 2 cm, and the patient underwent a modified radical mastectomy. Postoperatively, she exhibited feelings of inferiority due to changes in her appearance. Method: During the care period, we engaged in conversations, observations, and active listening, using Gordon's Eleven Functional Health Patterns for a comprehensive assessment. In collaboration with the critical care team, a psychologist, and an oncology case manager, we conducted an interdisciplinary discussion and reached a consensus on key nursing issues. These included pain related to postoperative tumor excision and disturbed body image due to changes in appearance after surgery. Result: During the care period, a private space was provided to encourage the patient to express her feelings about her altered body image. Communication was conducted through active listening and a non-judgmental approach. The patient's anxiety level, as measured by the depression and anxiety scale, decreased from moderate to mild, and she was able to sleep for 6-8 hours at night. The oncology case manager was invited to provide education on breast reconstruction using breast models and videos to both the patient and her husband. This helped rebuild the patient's confidence. With the patient's consent, a support group was arranged where a peer with a similar experience shared her journey, offering emotional support and encouragement. This helped alleviate the psychological stress and shock caused by the cancer diagnosis. 
Additionally, pain management was achieved through adjusting the dosage of analgesics, administering Ultracet 37.5 mg/325 mg 1# Q6H PO, along with distraction techniques and acupressure therapy. These interventions helped the patient relax and alleviate discomfort, maintaining her pain score at a manageable level of 3, indicating mild pain. Conclusion: Disturbance in body image can cause significant psychological stress for patients. Through support group discussions, encouraging patients to express their feelings, and providing appropriate education on breast reconstruction and dressing techniques, the patient's self-concept was positively reinforced, and her emotions were stabilized. This led to renewed self-worth and confidence.

Keywords: breast cancer, modified radical mastectomy, acupressure therapy, Gordon's 11 functional health patterns

Procedia PDF Downloads 28
838 Insecticidal Activity of Bacillus Thuringiensis Strain AH-2 Against Hemiptera Insect Pests: Aphis Gossypii, and Lepidoptera Insect Pests: Plutella Xylostella and Hyphantria Cunea

Authors: Ajuna B. Henry

Abstract:

In recent decades, climate change has increased the demand for biological pesticides; more Bt strains are being discovered worldwide, some containing novel insecticidal genes, while others have been modified through molecular approaches for increased yield, toxicity, and a wider host range. In this study, B. thuringiensis strain AH-2 (Bt-2) was isolated from soil and tested for insecticidal activity against Aphis gossypii (Hemiptera: Aphididae) and the Lepidoptera insect pests fall webworm (Hyphantria cunea) and diamondback moth (Plutella xylostella). A commercial strain, B. thuringiensis subsp. kurstaki (Btk), and the chemical pesticides imidacloprid (for Hemiptera) and chlorantraniliprole (for Lepidoptera) were used as positive controls, and the same media without bacterial inoculum served as negative controls. For aphicidal activity, Bt-2 caused mortality rates of 70.2%, 78.1%, and 88.4% in third instar nymphs of A. gossypii (3N) at 10%, 25%, and 50% culture concentrations, respectively. Moreover, Bt-2 was effectively produced in a cost-effective base medium (PB) supplemented with either glucose (PBG) or sucrose (PBS) and maintained high aphicidal efficacy, with 3N mortality rates of 85.9%, 82.9%, and 82.2% in TSB, PBG, and PBS media, respectively, at 50% culture concentration. Bt-2 also suppressed adult fecundity by 98.3%, compared to only 65.8% suppression by Btk at a similar concentration, though slightly lower than the chemical treatment, which caused 100% suppression. Partial purification of the 60-80% (NH4)2SO4 fraction of Bt-2 aphicidal proteins on an anion exchange (DEAE-FF) column revealed a 105 kDa aphicidal protein with LC50 = 55.0 ng/µL. For the Lepidoptera pests, the chemical pesticide, Bt-2, and Btk cultures caused mortality of 86.7%, 60%, and 60%, respectively, in 3rd instar larvae of P. xylostella, and 96.7%, 80.0%, and 93.3% in 6th instar larvae of H. cunea, after 72 h of exposure.
When the entomopathogenic strains were cultured in the cost-effective PBG or PBS media, the insecticidal activity of all strains was not significantly different from that obtained with the commercial medium (TSB). Bt-2 caused mortality rates of 60.0%, 63.3%, and 50.0% against P. xylostella larvae and 76.7%, 83.3%, and 73.3% against H. cunea when grown in TSB, PBG, and PBS media, respectively. Bt-2 grown in the cost-effective PBG medium caused dose-dependent toxicity of 26.7%, 40.0%, and 63.3% against P. xylostella and 46.7%, 53.3%, and 76.7% against H. cunea at 10%, 25%, and 50% culture concentrations, respectively. The partially purified Bt-2 insecticidal protein fractions F1, F2, F3, and F4 (extracted at different ratios of organic solvent) caused low toxicity (50.0%, 40.0%, 36.7%, and 30.0%) against P. xylostella and relatively high toxicity (56.7%, 76.7%, 66.7%, and 63.3%) against H. cunea at 100 µg/g of artificial diet. SDS-PAGE analysis revealed that a 128 kDa protein is associated with the toxicity of Bt-2. Our results demonstrate medium and strong larvicidal activity of Bt-2 against P. xylostella and H. cunea, respectively. Moreover, Bt-2 can potentially be produced using the cost-effective PBG medium, which makes it an effective alternative biocontrol strategy for reducing chemical pesticide application.
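A dose-response quantity like the LC50 reported above can be estimated by fitting a log-logistic curve to dose-mortality data. The sketch below uses invented mortality figures, not the study's raw data, and a two-parameter Hill form, which is one common choice rather than necessarily the method used by the author:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical dose-mortality observations (dose in ng/uL, mortality as a fraction)
dose = np.array([10, 25, 50, 75, 100, 150], dtype=float)
mortality = np.array([0.08, 0.22, 0.46, 0.63, 0.78, 0.92])

def hill(d, lc50, slope):
    # two-parameter log-logistic dose-response curve:
    # mortality = 1 / (1 + (LC50 / dose) ** slope)
    return 1.0 / (1.0 + (lc50 / d) ** slope)

# fit LC50 and slope by nonlinear least squares
(lc50, slope), _ = curve_fit(hill, dose, mortality, p0=[50.0, 2.0])
```

With real bioassay data one would also correct for control mortality (e.g. Abbott's formula) and report confidence intervals, typically via probit or logit analysis.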

Keywords: biocontrol, insect pests, larvae/nymph mortality, cost-effective media, aphis gossypii, plutella xylostella, hyphantria cunea, bacillus thuringiensis

Procedia PDF Downloads 19
837 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets, either as features or as coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data securely and to collaborate through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
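The logistic-regression hazard modeling described above can be sketched as follows. The predictors, coefficients, and data here are all invented for illustration; GAP's actual models use measured concentrations together with gridded soil, geological, and climate layers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000

# hypothetical environmental predictors at n well locations
ph = rng.uniform(4, 9, n)          # soil pH
aridity = rng.uniform(0, 1, n)     # aridity index
depth = rng.uniform(1, 100, n)     # depth to groundwater (m)
X = np.column_stack([ph, aridity, depth])

# synthetic binary outcome: does arsenic exceed the 10 ug/L WHO limit?
# (the coefficients below are invented, not GAP's fitted values)
logit = -6 + 0.6 * ph + 2.0 * aridity + 0.01 * depth
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# fit the exceedance model and map each location to a hazard probability
model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]  # P(concentration > limit) per location
```

Applied over a raster of predictor grids, the per-cell probabilities produced this way are what a hazard map like GAP's displays.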

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 348
836 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor the consequences of it have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history where a proxy state-level epistemological strategy existed was between 1961 and 1969 when the Kennedy Administration committed the United States to the Apollo program. 
While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 77
835 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-type analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must: run at 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill it in the title.
For a country song: 100 bpm or less for a male artist; a 14-second intro; the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation created and fulfilled within 60 seconds. This approach to commercial popular music minimizes human influence when it comes to which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste through the narrow bottleneck of the music selection described above has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of the music artists themselves. What this semiological shift means is the main focus of this research and paper presentation.
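The quoted pop-song profile amounts to a simple rule set; a toy checker makes the thresholds explicit. The field names are hypothetical, and the thresholds are taken from the abstract:

```python
# Toy encoding of the "hit song" heuristics quoted above (pop profile).
# Field names are hypothetical; thresholds come from the abstract.
def matches_pop_profile(song):
    checks = [
        song["bpm"] >= 100,                        # 100 bpm or more
        song["intro_sec"] <= 8,                    # 8-second intro
        song["first_you_sec"] <= 20,               # 'you' within 20 s
        120 <= song["bridge_start_sec"] <= 150,    # middle 8 at 2:00-2:30
        song["title_repetitions"] >= 7,            # ~7 title repetitions
    ]
    return all(checks)

song = {"bpm": 104, "intro_sec": 7, "first_you_sec": 12,
        "bridge_start_sec": 130, "title_repetitions": 8}
print(matches_pop_profile(song))  # True
```

An actual hit-prediction system would of course learn such a profile from audio features rather than hard-code it, but the rule set shows how narrow the resulting bottleneck is.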

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 393
834 Growing Pains and Organizational Development in Growing Enterprises: Conceptual Model and Its Empirical Examination

Authors: Maciej Czarnecki

Abstract:

Even though growth is one of the most important strategic objectives for many enterprises, we know relatively little about this phenomenon. This research contributes to broadening our knowledge of the managerial consequences of growth. Scales for measuring organizational development and growing pains were developed. A conceptual model of the connections among growth, organizational development, growing pains, selected development factors and financial performance was examined. The research process comprised a literature review, 20 interviews with managers, examination of 12 raters' opinions, pilot research and a 7-point Likert-scale questionnaire survey of 138 Polish enterprises employing 50-249 people which had increased their employment by at least 50% within the last three years. Factor analysis, the Pearson product-moment correlation coefficient, Student's t-test and the chi-squared test were used to develop the scales. High Cronbach's alpha coefficients were obtained. The verification of correlations among the constructs was carried out with factor correlations, multiple regressions and path analysis. When an enterprise grows, it is necessary to implement changes in its structure, management practices etc. (organizational development) to meet the challenges of growing complexity. In this paper, organizational development was defined as internal changes aiming to improve the quality of existing elements, or to introduce new ones, in the areas of processes, organizational structure and culture, and operational and management systems. Thus, H1: Growth has positive effects on organizational development. The main thesis of the research is that if organizational development does not catch up with the growing complexity of a growing enterprise, growing pains will arise (lower work comfort, conflicts, lack of control etc.). They will exert a negative influence on financial performance and may result in serious organizational crisis or even bankruptcy.
Thus, H2: Growth has positive effects on growing pains; H3: Organizational development has negative effects on growing pains; H4: Growing pains have negative effects on financial performance; H5: Organizational development has positive effects on financial performance. Scholars have considered long lists of factors with potential influence on organizational development. The development of a comprehensive model taking into account all possible variables may be beyond the capacity of any researcher or even the statistical software used. After the literature review, it was decided to increase the level of abstraction and to include the following constructs in the conceptual model: organizational learning (OL), positive organization (PO) and high performance factors (HPF). H1a/b/c: OL/PO/HPF has a positive effect on organizational development; H2a/b/c: OL/PO/HPF has a negative effect on growing pains. The results of hypothesis testing: H1: partly supported; H1a/b/c: supported/not supported/supported; H2: not supported; H2a/b/c: not supported/partly supported/not supported; H3: supported; H4: partly supported; H5: supported. The research seems to be of great value for both scholars and practitioners. It proved that OL and HPF matter for organizational development. Scales for measuring organizational development and growing pains were developed. Its main finding, though, is that organizational development is a good way of improving financial performance.
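The scale-validation step (the "high Cronbach's alpha coefficients") can be illustrated with a minimal alpha computation on synthetic Likert data; the item scores below are generated for the sketch, not the study's responses:

```python
# Minimal Cronbach's alpha for a Likert scale, as used to validate the
# growing-pains and organizational-development scales. Data are synthetic:
# 138 respondents (the study's n), 5 items driven by one shared construct.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(4, 1, 138)                   # shared construct
items = np.clip(np.round(latent[:, None] + rng.normal(0, 0.7, (138, 5))), 1, 7)
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")                    # high alpha expected here
```

Values above roughly 0.7 are the conventional threshold for an internally consistent scale.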

Keywords: organizational development, growth, growing pains, financial performance

Procedia PDF Downloads 219
833 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)

Authors: F. Pal-Fam, S. Keszthelyi

Abstract:

The phytosanitary condition of sunflower (Helianthus annuus L.) is endangered by several phytopathogenic agents, mainly microfungi such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedii and Macrophomina phaseolina. There are several agrotechnical and chemical measures against them, for instance tolerant hybrids, crop rotation and in-crop chemical treatments. Different fungicide treatment methods are used on sunflower in Hungarian agricultural practice in the quest for healthy and economical plant products, and there are many choices of usable active ingredients in Hungarian sunflower protection. This study examined the effect of five different fungicide active substances (available on the market) and three different application modes (early; late; and both early and late treatments) in a total of 9 sample plots of 0.1 ha each. Five successive vegetation periods were investigated, from 2013 to 2017. The treatments were: 1) untreated control; 2) boscalid and dimoxystrobin late treatment (July); 3) boscalid and dimoxystrobin early treatment (June); 4) picoxystrobin and cyproconazole early treatment; 5) picoxystrobin and cymoxanil and famoxadone early treatment; 6) picoxystrobin and cyproconazole early; cymoxanil and famoxadone late treatments; 7) picoxystrobin and cyproconazole early; picoxystrobin and cymoxanil and famoxadone late treatments; 8) trifloxystrobin and cyproconazole early treatment; and 9) trifloxystrobin and cyproconazole both early and late treatments. Due to the very different yearly weather conditions, different phytopathogenic fungi were dominant in the particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia and Diaporthe in 2016; and Alternaria in 2017.
As a result of the treatments, ‘infection frequency’ and ‘infestation rate’ showed a significant decrease compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost equally effective against the phytopathogenic fungi. The most dangerous infection, Sclerotinia, was practically eliminated in all of the treatments. Among the single treatments, the late treatment applied in July was the least efficient, followed by the early treatments applied in June. The most efficient were the double treatments applied in both June and July, resulting in a 70-80% decrease in infection frequency and a 75-90% decrease in infestation rate compared with the control plot in the particular years. The lowest yield was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yields (18.3-22.5% higher than the control plot in particular years). In total, according to our five-year investigation, the most effective application mode is the double in-crop treatment per vegetation period, which is reflected in the yield surplus.

Keywords: fungicides, treatments, phytopathogens, sunflower

Procedia PDF Downloads 141
832 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest match with actual measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at pH of 7.75, temperature of 34.16 ºC, and initial mono-chloramine concentration of 3.89 (mg/L) during peak water supply patterns, root mean square error (RMSE) of WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at pH of 7.71, temperature of 18.12 ºC, and initial mono-chloramine concentration of 4.60 (mg/L). The proposed methodology to predict mono-chloramine residual can have a great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
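The RSM step can be sketched as fitting a second-order model of prediction RMSE over two of the parameters and locating its minimum. The data below are synthetic, with an optimum planted near the peak-supply figures quoted above (pH 7.75, 34.2 ºC); this illustrates the method, not the study's actual surface:

```python
# Sketch of the RSM idea: fit a second-order (quadratic) response surface of
# prediction RMSE over pH and temperature, then locate the minimizing settings.
# Data are synthetic; the optimum is planted near pH 7.75, T 34.2 C.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
ph = rng.uniform(7.0, 8.5, 120)
temp = rng.uniform(15.0, 40.0, 120)
rmse = (0.19 + 0.8 * (ph - 7.75) ** 2
        + 0.002 * (temp - 34.2) ** 2
        + rng.normal(0, 0.004, 120))

# Full quadratic design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(ph), ph, temp, ph**2, temp**2, ph * temp])
beta, *_ = np.linalg.lstsq(X, rmse, rcond=None)

def surface(v):
    x1, x2 = v
    return beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# Bound the search to the sampled region, mirroring the abstract's use of
# high/low parameter limits as explicit constraints against extrapolation.
opt = minimize(surface, x0=[7.5, 25.0], bounds=[(7.0, 8.5), (15.0, 40.0)])
print(f"optimum near pH {opt.x[0]:.2f}, T {opt.x[1]:.1f} C, RMSE {opt.fun:.3f}")
```

The study's three-parameter case adds the initial mono-chloramine concentration as a third axis but follows the same fit-then-minimize pattern.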

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 225
831 Analysis of Unconditional Conservatism and Earnings Quality before and after the IFRS Adoption

Authors: Monica Santi, Evita Puspitasari

Abstract:

International Financial Reporting Standards (IFRS) are developed as principles-based accounting standards. On this basis, the IASB eliminated the conservatism concept from the accounting framework. The conservatism concept represents a prudent reaction to uncertainty, trying to ensure that the uncertainties and risks inherent in business situations are adequately considered. The conservatism concept has two ingredients: conditional conservatism, or ex-post (news-dependent) prudence, and unconditional conservatism, or ex-ante (news-independent) prudence. IFRS in substance disregard unconditional conservatism because it can cause understated assets or overstated liabilities, and eventually the financial statement would be irrelevant since the information does not represent the real facts. Therefore, the IASB eliminated the conservatism concept. However, this has not reduced the practice of unconditional conservatism in financial statement reporting. We therefore expected the earnings quality to be affected by this situation, even though the IFRS implementation was expected to increase earnings quality. The objective of this study was to provide empirical findings about unconditional conservatism and earnings quality before and after the IFRS adoption. Earnings per accrual was used as the proxy for unconditional conservatism: if earnings per accrual were negative (positive), the company was classified as conservative (not conservative). Earnings quality was defined as the ability of earnings to reflect future earnings, considering earnings persistence and stability. We used the earnings response coefficient (ERC) as the proxy for earnings quality. The ERC measures the extent of a security’s abnormal market return in response to the unexpected component of the reported earnings of the firm issuing that security. A higher ERC indicates higher earnings quality.
The manufacturing companies listed on the Indonesian Stock Exchange (IDX) were used as the sample, with 2009-2010 representing the condition before the IFRS adoption and 2011-2013 representing the condition after it. Data were analyzed using the Mann-Whitney test and regression analysis. We used firm size as the control variable, considering that firm size would affect the earnings quality of the company. This study showed that unconditional conservatism had not changed, either before or after the IFRS adoption period. However, we found different results for earnings quality, which had decreased after the IFRS adoption period. This empirical result implies that earnings quality before the IFRS adoption was higher. This study also found that unconditional conservatism positively but insignificantly influenced earnings quality. The findings imply that the implementation of the IFRS has not decreased the practice of unconditional conservatism and has not improved the earnings quality of the manufacturing companies. Further, although the empirical result shows a positive influence of unconditional conservatism on earnings quality, the influence was not significant, so we conclude that unconditional conservatism did not affect earnings quality and that the implementation of the IFRS did not increase earnings quality.
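The before/after comparison can be illustrated with a Mann-Whitney test on two groups of ERC values; the numbers below are invented for the sketch, not the IDX sample:

```python
# Illustrative Mann-Whitney comparison of ERC (earnings-quality proxy)
# before vs. after IFRS adoption. The values are made up for the sketch;
# the post-IFRS group is drawn with a lower mean, mirroring the finding.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
erc_pre = rng.normal(0.8, 0.2, 40)    # hypothetical pre-IFRS ERCs
erc_post = rng.normal(0.6, 0.2, 40)   # hypothetical post-IFRS ERCs

stat, p = mannwhitneyu(erc_pre, erc_post, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")  # small p: distributions differ
```

The Mann-Whitney test is the right choice here because ERC distributions need not be normal, so a rank-based comparison is more robust than a t-test.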

Keywords: earnings quality, earnings response coefficient, IFRS Adoption, unconditional conservatism

Procedia PDF Downloads 260
830 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie

Abstract:

Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system diseases. This investigation aims to evaluate the probable protective effects of ALA in stroke patients. Methods: This meta-analysis was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria for this meta-analysis were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). Studies excluded from the analysis consisted of in vitro, in vivo, and ex vivo studies, case reports, and quasi-experimental studies. The Scopus, PubMed, Web of Science, and EMBASE databases were searched until August 2023. Results: Of 496 records screened at the title/abstract stage, 9 studies were included in this meta-analysis. The sample sizes in the included studies varied between 28 and 90. Risk of bias in the randomized controlled trials (RCTs) was assessed with the second version of the Cochrane RoB assessment tool; 8 studies had a definitely high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in enhancing post-stroke metabolic markers, including lipid profile, oxidative stress, and inflammatory indices. It is imperative to acknowledge certain potential limitations inherent in this study. First of all, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study had restricted evidence regarding the impact of ALA supplementation on the included outcomes.
Therefore, further research is warranted to delve into the effects of ALA specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612).
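As a sketch of the pooling machinery behind such a meta-analysis, a DerSimonian-Laird random-effects estimator is shown below; the abstract does not state which estimator the authors used, and the effect sizes here are invented:

```python
# Generic random-effects pooling (DerSimonian-Laird), the standard machinery
# behind a meta-analysis of continuous outcomes. Effect sizes are invented.
import numpy as np

def dersimonian_laird(effects, variances):
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity stat
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical mean differences in an oxidative-stress marker from 5 trials
pooled, se, tau2 = dersimonian_laird([-0.4, -0.2, -0.5, -0.1, -0.3],
                                     [0.04, 0.06, 0.05, 0.08, 0.05])
print(f"pooled = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f}"
      f" to {pooled + 1.96*se:.2f}), tau^2 = {tau2:.3f}")
```

With small, heterogeneous trials such as those included here (n = 28-90), the between-study variance term tau² is what keeps the pooled confidence interval honestly wide.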

Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke

Procedia PDF Downloads 63
829 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the development mainstay of most of the world's countries. Regrettably, the fossil fuel consumption rate is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many problems of environmental pollution resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as oscillating-body systems, the pendulum gate system, the ocean wave dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a widespread overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a hollow chamber at the shore which is completely closed except at its base, where an open area gathers the moving sea waves.
The motion of the sea waves pushes the air up and down through a Wells turbine, generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings for some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper compares the theoretical and experimental results of the built experimental prototype.
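The selection step above can be sketched as a single AHP computation: derive priority weights for the four candidate systems from a pairwise comparison matrix via its principal eigenvector. The matrix entries below are invented for illustration, not the authors' actual judgments:

```python
# Minimal AHP step: priority weights for candidate wave-energy systems from a
# pairwise comparison matrix (principal eigenvector method). Entries invented.
import numpy as np

# Rows/cols: oscillating bodies, pendulum gate, wave dragon, OWC
A = np.array([
    [1,   1/2, 1/3, 1/5],
    [2,   1,   1/2, 1/4],
    [3,   2,   1,   1/3],
    [5,   4,   3,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # normalized priority vector

# Consistency check: CI = (lambda_max - n)/(n - 1); RI = 0.90 for n = 4
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), "CR:", round(ci / 0.90, 3))
```

In this invented matrix the OWC row dominates every pairwise comparison, so it receives the largest weight, matching the paper's conclusion; a consistency ratio (CR) below 0.1 indicates the judgments are acceptably consistent.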

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 162
828 Prevalence of Work-Related Musculoskeletal Disorder among Dental Personnel in Perak

Authors: Nursyafiq Ali Shibramulisi, Nor Farah Fauzi, Nur Azniza Zawin Anuar, Nurul Atikah Azmi, Janice Hew Pei Fang

Abstract:

Background: Work-related musculoskeletal disorders (WRMSD) among dental personnel have been underestimated and under-reported worldwide, and specifically in Malaysia. The problem arises and progresses slowly over time, as it results from injury accumulated throughout the period of work. Several risk factors, such as repetitive movement, static posture, vibration, and poor working postures, have been identified as contributing to WRMSD in dental practice. Dental personnel are at higher risk of this problem because of the nature of their work. It causes pain and dysfunction among them and results in absence from work and substandard services to their patients. Methodology: A cross-sectional study involving 19 government dental clinics in Perak was conducted over a period of 3 months. Those who met the criteria were selected to participate in this study. The Malay version of the Self-Reported Nordic Musculoskeletal Discomfort Form was used to identify the prevalence of WRMSD, while the intensity of pain in the respective regions was evaluated using a 10-point scale according to ‘Pain as The 5ᵗʰ Vital Sign’ by MOH Malaysia; the data were later analyzed using SPSS version 25. Descriptive statistics, including mean and SD or median and IQR, were used for numerical data. Categorical data were described by percentages. Pearson’s chi-square test and Spearman’s correlation were used to find associations between the prevalence of WRMSD and socio-demographic data. Results: 159 dentists, 73 dental therapists, 26 dental lab technicians, 81 dental surgery assistants, and 23 dental attendants participated in this study. The mean age of the participants was 34.9 ± 7.4 years, and their mean years of service was 9.97 ± 7.5. Most of them were female (78.5%), Malay (71.3%), married (69.6%) and right-handed (90.1%).
The highest prevalence of WRMSD was in the neck (58.0%), followed by the shoulder (48.1%), upper back (42.0%), lower back (40.6%), hand/wrist (31.5%), feet (21.3%), knee (12.2%), thigh (7.7%) and lastly elbow (6.9%). Most of those who reported neck pain rated their pain at 2 out of 10 (19.5%), while most of those who suffered upper-back discomfort rated their pain at 6 out of 10 (17.8%). A significant relationship was found between age and pain in the neck (p=0.007), elbow (p=0.027), lower back (p=0.032), thigh (p=0.039), knee (p=0.001) and feet (p<0.001) regions. Job position was also found to have a significant relationship with pain experienced in the lower back (p=0.018), thigh (p=0.011), and knee and feet (p<0.001). Conclusion: The prevalence of WRMSD among dental personnel in Perak was found to be high. Age and job position had a significant relationship with pain experienced in several regions. Intervention programs should be planned and conducted to prevent and reduce the occurrence of WRMSD, and harmful or unergonomic practices should be avoided.
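The association tests reported above can be illustrated with Pearson's chi-square on a contingency table; the counts below are hypothetical, chosen only to sum to the study's n of 362:

```python
# Sketch of the association test used in the survey: Pearson's chi-square on
# an age-group x neck-pain contingency table. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

#                 pain   no pain
table = np.array([[60,   40],      # age < 35
                  [150,  112]])    # age >= 35   (total n = 362)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

For a 2×2 table the test has one degree of freedom; a p-value below 0.05 would indicate that pain prevalence differs between the age groups.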

Keywords: WRMSD, ergonomic, dentistry, dental

Procedia PDF Downloads 88
827 Industrial Waste Multi-Metal Ion Exchange

Authors: Thomas S. Abia II

Abstract:

Intel's Chandler site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Over a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot-caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg/L pre-pilot to 1.1 ± 1.2 mg/L post-pilot (an 83% baseline reduction). This milestone was achieved even though the average influent manganese to the DMW increased from 1.0 ± 13.7 mg/L pre-pilot to 2.1 ± 0.2 mg/L post-pilot (a 110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to the DMW were 22.4 ± 10.2 mg/L and 32.1 ± 39.1 mg/L, respectively (a 43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg/L and 0.4 ± 1.2 mg/L, respectively (a 300% baseline uptick).
Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the single largest point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away from this work is an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real-estate restrictions, aggressive schedules, or budgetary constraints.
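The quoted baseline shifts can be checked directly from the reported mean concentrations (in mg/L):

```python
# Quick verification of the baseline-shift percentages quoted above, using
# the reported pre-/post-pilot mean concentrations (mg/L).
def pct_change(before, after):
    return (after - before) / before * 100

print(round(pct_change(6.5, 1.1)))    # Mn output:   -83 (83% reduction)
print(round(pct_change(1.0, 2.1)))    # Mn influent: +110
print(round(pct_change(22.4, 32.1)))  # Cu influent: +43
print(round(pct_change(0.1, 0.4)))    # Cu output:   +300
```

All four figures match the percentages stated in the abstract.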

Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese

Procedia PDF Downloads 143
826 Use of Curcumin in Radiochemotherapy Induced Oral Mucositis Patients: A Control Trial Study

Authors: Shivayogi Charantimath

Abstract:

Radiotherapy and chemotherapy are effective for treating malignancies but are associated with side effects such as oral mucositis. Chlorhexidine gluconate is one of the most commonly used mouthwashes for preventing the signs and symptoms of mucositis. Evidence shows that chlorhexidine gluconate has drawbacks in terms of bacterial colonization, bad breath, and poorer healing properties. Thus, it is essential to find a suitable alternative therapy that is more effective with minimal side effects. Curcumin, an extract of turmeric, is increasingly being studied for its wide-ranging therapeutic properties, including antioxidant, analgesic, anti-inflammatory, antitumor, antimicrobial, antiseptic, chemosensitizing, and radiosensitizing properties. The present study was conducted to evaluate the efficacy and safety of topical curcumin gel for the management of radiochemotherapy-induced oral mucositis in cancer patients, in comparison with chlorhexidine. The study was conducted in K.L.E. Society's Belgaum cancer hospital. Forty oral cancer patients undergoing radiochemotherapy and presenting with oral mucositis were selected and randomly divided into two groups of 20 each. Study group A (20 patients) was advised Cure next gel for 2 weeks; control group B (20 patients) was advised chlorhexidine gel for 2 weeks. The NRS, the Oral Mucositis Assessment Scale, and the WHO mucositis scale were used to determine grading. The results were analyzed using SPSS 20 software. Between-group comparisons of grading were made with the Mann-Whitney U test, and within-group comparisons with the Wilcoxon matched-pairs test. The NRS scores from baseline to the 1st- and 2nd-week follow-ups showed a significant difference in both groups.
The percentage change in erythema in group A was 63.3% at the first week and 100.0% at the second week (p = 0.0003); in group B, the change was 34.6% at the 1st week and 57.7% at the second week. The intergroup comparison was significant, with p-values of 0.0048 and 0.0006 for group A and group B, respectively. The ulcer-size score in group A changed by 35.5% (p = 0.0010) at the 1st week and showed total reduction, i.e., 103.4% (p = 0.0001), by the 2nd week; group B showed a 24.7% change from baseline at the 1st week and 53.6% at the 2nd-week follow-up. The within-group comparison with the Wilcoxon matched-pairs test was significant in group A (p = 0.0001). By the WHO mucositis score, group A showed a 29.6% (p = 0.0004) change at the first week and a 75.0% (p = 0.0180) change at the second week, which is highly significant in comparison with group B; group B showed minimal changes, i.e., 20.1% at the 1st week and 33.3% at the 2nd week. The Wilcoxon p-values in group A were 0.0025 for the 1st-week follow-up and 0.000 for the 2nd-week follow-up. Curcumin gel appears to be an effective and safer alternative to chlorhexidine gel in the treatment of oral mucositis.
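The Mann-Whitney U statistic used for the between-group comparisons is, at heart, a count of pairwise wins between the two samples. A minimal sketch on hypothetical NRS pain scores (illustrative values only, not the study's data):

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs. b: pairwise wins, with half-credit for ties."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical week-2 NRS scores for two groups of 5 patients each
group_a = [2, 3, 1, 2, 4]  # curcumin arm
group_b = [5, 6, 4, 7, 5]  # chlorhexidine arm
print(mann_whitney_u(group_a, group_b))  # 0.5 -- group A scores almost uniformly lower
```

In practice one would use a statistics package that also returns the p-value (e.g., the SPSS procedure named in the abstract); the sketch only shows what the statistic counts. A sanity check: the two U values for the two orderings always sum to the number of pairs (here 5 × 5 = 25).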

Keywords: curcumin, chemotherapy, mucositis, radiotherapy

Procedia PDF Downloads 351
825 Teacher’s Role in the Process of Identity Construction in Language Learners

Authors: Gaston Bacquet

Abstract:

The purpose of this research is to explore how language and culture shape a learner’s identity as they immerse themselves in the world of second language learning and how teachers can assist in the process of identity construction within a classroom setting. The study will be conducted as an in-classroom ethnography, using a qualitative methods approach and analyzing students’ experiences as language learners, their degree of investment, inclusion/exclusion, and attitudes, both towards themselves and their social context; the research question the study will attempt to answer is: What kind of pedagogical interventions are needed to help language learners in the process of identity construction so they can offset unequal conditions of power and gain further social inclusion? The following methods will be used for data collection: i) Questionnaires to investigate learners’ attitudes and feelings in different areas divided into four strands: themselves, their classroom, learning English and their social context. ii) Participant observations, conducted in a naturalistic manner. iii) Journals, which will be used in two different ways: on the one hand, learners will keep semi-structured, solicited diaries to record specific events as requested by the researcher (event-contingent). On the other, the researcher will keep his journal to maintain a record of events and situations as they happen to reduce the risk of inaccuracies. iv) Person-centered interviews, which will be conducted at the end of the study to unearth data that might have been occluded or be unclear from the methods above. The interviews will aim at gaining further data on experiences, behaviors, values, opinions, feelings, knowledge and sensory, background and demographic information. 
This research seeks to understand issues of socio-cultural identities and thus make a significant contribution to knowledge in this area by investigating the type of pedagogical interventions needed to assist language learners in the process of identity construction to achieve further social inclusion. It will also have applied relevance for those working with diverse student groups, especially taking our present social context into consideration: we live in a highly mobile world, with migrants relocating to wealthier, more developed countries that pose their own particular set of challenges for these communities. This point is relevant because an individual’s insight and understanding of their own identity shape their relationship with the world and their ability to continue constructing this relationship. At the same time, because a relationship is influenced by power, the goal of this study is to help learners feel and become more empowered by increasing their linguistic capital, which we hope might result in a greater ability to integrate themselves socially. Exactly how this help will be provided will vary as data is unearthed through questionnaires, focus groups and the actual participant observations being carried out.

Keywords: identity construction, second-language learning, investment, second-language culture, social inclusion

Procedia PDF Downloads 103
824 A Case of Prosthetic Vascular-Graft Infection Due to Mycobacterium fortuitum

Authors: Takaaki Nemoto

Abstract:

Case presentation: A 69-year-old Japanese man presented with a low-grade fever and fatigue that had persisted for one month. The patient had had an aortic dissection of the aortic arch 13 years prior, an abdominal aortic aneurysm seven years prior, and an aortic dissection of the distal aortic arch one year prior, all of which were treated with artificial blood-vessel replacement surgery. Laboratory tests revealed an inflammatory response (CRP 7.61 mg/dL), elevated serum creatinine (Cr 1.4 mg/dL), and elevated transaminases (AST 47 IU/L, ALT 45 IU/L). The patient was admitted to our hospital on suspicion of prosthetic vascular graft infection. Following further workup of the inflammatory response, enhanced chest computed tomography (CT) and non-enhanced chest diffusion-weighted MRI were performed. The patient was diagnosed with a pulmonary fistula and a prosthetic vascular graft infection on the distal aortic arch. After admission, the patient was administered Ceftriaxone and Vancomycin for 10 days, but his fever and inflammatory response did not improve. On day 13 of hospitalization, lung fistula repair surgery and an omental filling operation were performed, and Meropenem and Vancomycin were administered. Because the fever and inflammatory response continued, we took repeated blood cultures, and M. fortuitum was detected in a blood culture on day 16 of hospitalization. As a result, we changed the treatment regimen to Amikacin (400 mg/day), Meropenem (2 g/day), and Cefmetazole (4 g/day), and the fever and inflammatory response began to decrease gradually. Susceptibility testing of the Mycobacterium fortuitum isolate showed a low MIC for fluoroquinolone antibacterial agents. The clinical course was good, and the patient was discharged after a total of 8 weeks of intravenous drug administration.
At discharge, we changed the treatment regimen to Levofloxacin (500 mg/day) and Clarithromycin (800 mg/day), prescribing these two drugs as lifelong suppressive therapy. Discussion: There are few reported cases of prosthetic vascular graft infection caused by mycobacteria, and a standard therapy remains to be established. For prosthetic vascular graft infections, it is ideal to provide surgical and medical treatment in parallel, but in this case surgical treatment was difficult, and conservative treatment was therefore chosen. We attempted to increase the treatment success rate for this refractory disease by conducting susceptibility testing for mycobacteria and treating with different combinations of antimicrobial agents, which was ultimately effective. With our treatment approach, a good clinical course was obtained and continues at present. Conclusion: Although prosthetic vascular graft infection caused by mycobacteria is a refractory infectious disease, administering appropriate antibiotics based on susceptibility testing, in addition to surgical treatment, may be curative.

Keywords: prosthetic vascular graft infection, lung fistula, Mycobacterium fortuitum, conservative treatment

Procedia PDF Downloads 156
823 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development (NPD) involves team members coming together and working in teams to devise innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that address the problem they are targeting and meet users' needs. They also need to be very efficient in their teamwork as they work through the various stages of developing these ideas into a proof-of-concept (POC) implementation or a prototype of the product. There are thus two distinctive traits that teams need: one is ideational creativity, and the other is effective and efficient teamwork. Each of these traits causes multiple types of tension in teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same habit of challenging each other's viewpoints can make team members disruptive, resulting in interpersonal tensions, which in turn lead to less-than-efficient teamwork. Teams that foster and effectively manage these creative tensions are successful; teams that cannot manage them show poor performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index, along the lines of the degree of balance in social networks, that has the potential to distinguish successful from unsuccessful NPD teams. Team communication reflects the team dynamics among team members and is the dataset for this analysis.
The emails between members of the NPD teams are processed through latent semantic analysis (LSA) to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication among team members based on its content. This social network is subjected to traditional social network analysis methods to obtain established metrics as well as structural balance metrics. Traditional structural balance is extended with team interaction pattern metrics to arrive at a creative tension balance metric that effectively captures the creative tensions and tension balance in teams. This CTB (Creative Tension Balance) metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread over multiple semesters; the CTB metric is computed for each team and used to identify the most successful and unsuccessful teams by classifying them into low-, medium-, and high-performing groups. The results are correlated with the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics), and team performance as measured by a comprehensive team grade (for high- and low-performing team signatures).
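In classical structural balance theory, a closed triad is balanced when the product of its edge signs is positive, and the degree of balance is the fraction of balanced triads. A minimal sketch on a hypothetical four-person team (illustrative only; the paper's CTB metric extends this idea with interaction-pattern terms):

```python
from itertools import combinations

def degree_of_balance(signs):
    """Fraction of closed triads whose edge-sign product is positive.

    `signs` maps an undirected edge (a frozenset of two nodes) to +1 or -1.
    """
    nodes = set()
    for edge in signs:
        nodes |= edge
    balanced = total = 0
    for a, b, c in combinations(sorted(nodes), 3):
        e1, e2, e3 = frozenset((a, b)), frozenset((b, c)), frozenset((a, c))
        if e1 in signs and e2 in signs and e3 in signs:
            total += 1
            if signs[e1] * signs[e2] * signs[e3] > 0:
                balanced += 1
    return balanced / total if total else 1.0

# Four team members; one antagonistic tie makes two of the four triads unbalanced
team_signs = {
    frozenset(("ann", "bob")): +1,
    frozenset(("ann", "cat")): +1,
    frozenset(("bob", "cat")): -1,  # ideational conflict turned interpersonal
    frozenset(("ann", "dan")): +1,
    frozenset(("bob", "dan")): +1,
    frozenset(("cat", "dan")): +1,
}
print(degree_of_balance(team_signs))  # 0.5
```

A fully cohesive team scores 1.0; a single negative tie here drags the team to 0.5, which is the kind of signal a tension-balance metric over email-derived networks would pick up.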

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 79
822 The Ephemeral Re-Use of Cultural Heritage: The Incorporation of the Festival Phenomenon Within Monuments and Archaeological Sites in Lebanon

Authors: Joe Kallas

Abstract:

It is now widely accepted that the preservation of cultural heritage must go beyond simple restoration and renovation. While some historic monuments have been preserved for millennia, many others, less important or simply neglected for lack of money, have disappeared. The adaptation of monuments and archaeological sites to new functions therefore allows them to 'survive'. Temporary activities, or 'ephemeral' re-use, are increasingly recognized as a means of vitalizing deprived areas and enhancing historic sites that have become obsolete. They have the potential to increase economic and cultural value while making the best use of existing resources. However, there are often conservation and preservation issues related to the implementation of this type of re-use, which can threaten the integrity and authenticity of archaeological sites and monuments if not properly managed. This paper aims to develop a better understanding of the ephemeral re-use of heritage, and more specifically of the incorporation of the festival phenomenon within the monuments and archaeological sites of Lebanon, a topic that has not yet been studied sufficiently. The paper identifies the elements that compose this phenomenon in order to analyze it and trace its good practices, comparing international case studies with important national cases: the International Festival of Baalbek, the International Festival of Byblos, and the International Festival of Beiteddine. Various factors were studied and analyzed in order to respond to the paper's central questions: 'How can we preserve the integrity of sites and monuments after the integration of an ephemeral function? And what preventive conservation measures should be taken when holding festivals in archaeological sites with fragile structures?'
The impacts of the technical problems were first analyzed using various data, in particular the effects of mass tourism, the integration of temporary installations, sound vibrations, and unstudied lighting, as well as the mystification of heritage. Unfortunately, the DGA (General Direction of Antiquities in Lebanon) does not specify any frequency limit for the sound vibrations emitted by loudspeakers during music festivals. In addition, it imposes no requirements on the installation of lighting systems in historic monuments, and no monitoring is done in situ, owing to a lack of awareness of the impact such interventions can generate and a lack of the materials and tools needed for monitoring. The study and analysis of these data led to the main objective of this paper: the establishment of a list of recommendations. This list defines preventive conservation measures to be taken when festivals are held within cultural heritage sites in Lebanon. We strongly hope that this paper will serve as an awareness document, so that several previously neglected factors are taken into consideration, in order to improve conservation practices in archaeological sites and monuments during the incorporation of the festival phenomenon.

Keywords: archaeology, authenticity, conservation, cultural heritage, festival, historic sites, integrity, monuments, tourism

Procedia PDF Downloads 118
821 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The ways in which e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms for sharing information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts quickly and efficiently. Consumers can become informed through the impressions that other, experienced consumers share and spread; in other words, consumers may familiarize themselves with the contents of contracts through other consumers' experiences. Online and offline, the relationship between consumers and businesses is most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap ("the Gap") between how firms draft their contracts, on the one hand, and how firms actually treat consumers, on the other. Interestingly, the Gap frequently manifests as deviation from the written contract in favor of consumers: firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms' leniency to the growing list of suspicious firm behaviors.
At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair terms in consumer contracts; naturally, therefore, consumer law should enable, if not encourage, firms' lenient practices. Firms' willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that, apparently, should not be second-guessed. However, at times online tools, firms' behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers' understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers' perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms' leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they have deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 198
820 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains

Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran

Abstract:

Wastewater treatment plants (WWTPs) have been implicated as a leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs) worldwide. Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as a principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs of antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profiles of Enterococci recovered from treated wastewater effluent and receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar, and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination, and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods; their antibiotic resistance profiles were determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect resistance and virulence genes. A high prevalence of ARE was found at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples showed the greatest prevalence of ARE, with lower values observed in the before- and after-chlorination samples. Of the 44 vancomycin- and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis, and 4 as E. hirae, while 11 were classified as "other" Enterococcus species.
High-level gentamicin resistance (39%) and vancomycin resistance (61%) were recorded among the isolates tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45%, and 41% of the isolates, respectively. A positive correlation was observed between resistance phenotypes to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Glycopeptide resistance genes vanB (37%) and vanC-1 (25%), and macrolide resistance genes ermB (11%) and ermC (54%), were also detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococcus species and safeguard public health.
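A phenotype-genotype association of this kind is commonly quantified with the phi coefficient of the 2×2 presence/absence table. A sketch on hypothetical counts (illustrative only, not the study's raw data):

```python
from math import sqrt

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table:
        a = gene present, phenotype resistant
        b = gene present, phenotype susceptible
        c = gene absent,  phenotype resistant
        d = gene absent,  phenotype susceptible
    Ranges from -1 (perfect negative) to +1 (perfect positive association).
    """
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical counts for one resistance gene across 44 isolates
print(round(phi_coefficient(15, 3, 4, 22), 2))  # 0.67 -- strong positive association
```

A phi near +1 means the gene is detected almost exactly in the resistant isolates, which is the pattern the abstract describes for the aminoglycoside resistance genes.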

Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures

Procedia PDF Downloads 219
819 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants that use reverse osmosis (RO) is preventing RO membrane fouling caused by foulants in seawater. Optimal design of the pre-treatment process enables the reduction of foulants; a quantitative evaluation of the fouling risk in pre-treated water, which is fed to the RO stage, is therefore required for optimal design. Conventional water-quality measurements such as the silt density index (SDI) and total organic carbon (TOC) have been applied for such evaluations, but these methods are not always effective for evaluating the fouling risk of RO feed water. Furthermore, stable plant management would be possible through alerts and appropriate control of the pre-treatment process if such a method could be applied to inline monitoring of the fouling risk of RO feed water. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the weight of the sensor after a sample has passed over it for a certain length of time directly indicates the fouling risk of the sample; we call these values "FP: Fouling Potential". The method measures very small amounts of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). In laboratory-scale tests using several RO cell filtration units, the FP values showed a higher correlation with the pressure increase caused by RO fouling than either SDI or TOC.
Then, to establish this correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site on the Red Sea in Jeddah, Kingdom of Saudi Arabia. Inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, whose feed water was treated by a cartridge filter only. In contrast, no pressure increase was observed in the low-FP RO unit, whose feed water was treated by an ultrafilter, throughout the operation. The correlation was thus established in an actual-scale RO membrane for two runs with two types of feed water. These results suggest that the FP method enables evaluation of the fouling risk of RO feed water.
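A QCM converts a resonance-frequency shift into adsorbed mass through the Sauerbrey relation, Δm = -C·Δf. A minimal sketch, assuming a typical sensitivity constant for a 5 MHz AT-cut crystal (the constant and the example shift are illustrative assumptions, not values from the study):

```python
SAUERBREY_C = 17.7  # ng / (cm^2 * Hz), typical for a 5 MHz AT-cut quartz crystal

def adsorbed_mass(delta_f_hz: float) -> float:
    """Areal mass gain (ng/cm^2) from a frequency shift, via Delta_m = -C * Delta_f.

    Valid only for thin, rigid films. A fouling layer adds mass, so the
    resonance frequency drops, i.e. delta_f_hz is negative.
    """
    return -SAUERBREY_C * delta_f_hz

# A 12 Hz frequency drop over the sampling period -> ~212.4 ng/cm^2 deposited
print(adsorbed_mass(-12.0))
```

The FP value described in the abstract is, in effect, such a mass gain accumulated over a fixed sampling time on the polyamide-coated sensor; soft, viscoelastic foulant layers would violate the rigid-film assumption and require a fuller QCM-D analysis.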

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 212
818 Driver of Migration and Appropriate Policy Concern Considering the Southwest Coastal Part of Bangladesh

Authors: Aminul Haque, Quazi Zahangir Hossain, Dilshad Sharmin Chowdhury

Abstract:

Human migration is a growing concern around the world, and recurrent disasters and climate change impacts have a great influence on it. Bangladesh is one of the most disaster-prone countries and is highly susceptible to stress migration driven by recurrent disasters and climate change. This study investigated the factors that strongly influence current migration and the changing pattern of life and livelihood in the southwest coastal part of Bangladesh. It also examined the relationship between disasters and migration and the appropriate policy response. To explore this relationship, both qualitative and quantitative methods were applied: a household-level questionnaire survey using a simple random sampling technique, together with various secondary data sources for understanding policy and practice. The study explores the most influential drivers of migration and their relationship with social, economic, and environmental factors. It finds that the environmental driver has the greatest effect on the intention of permanent migration (t = 1.481, p-value = 0.000) at the 1 percent significance level. A significant number of respondents identified abrupt patterns of cyclones, floods, salinity intrusion, and rainfall as the most significant environmental drivers of decisions on permanent migration. The study also found that temporary migration has increased 2-fold compared with ten years ago. Environmental factors also have great implications for the changing pattern of occupation in the study area: about 76% of respondents reported changed modes of livelihood compared with their traditional practices. The study reveals that migration has its foremost impact on children and women, increasing hardship and creating critical social insecurity.
The route of permanent migration is far from smooth; these migrations are creating urban pressure and conflict in the Chittagong Hill Tracts of Bangladesh. The study notes that existing policy offers no safeguard for stress migrants and no measures for safe migration and resettlement beyond emergency response and shelter. The majority (98%) of people believe that migration is not an adaptation strategy; contrary to this, a younger group of respondents believes that safe migration could be an adaptation strategy that would bring a more positive result than other resilience strategies. On the other hand, a significant number of respondents stated that appropriate policy measures could themselves serve as an adaptation strategy, helping to form a resilient community and reduce migration by providing meaningful livelihood options with appropriate protection measures.

Keywords: environmental driver, livelihood, migration, resilience

Procedia PDF Downloads 264
817 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada

Authors: S. Chowdhury, A. Corlett

Abstract:

Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake, with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude may happen in this region at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities of Vancouver Island. Since many of these communities are in remote locations, they are especially vulnerable, as post-earthquake relief efforts would be hampered by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model was developed using MIKE-21 software. We considered a 500-year probabilistic earthquake design criterion, including subsidence, in this model. Bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna 172 aircraft, and the information was converted into a topographic digital elevation map. Both datasets were incorporated into the model, whose domain was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. Water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output showed satisfactory agreement. For this study, the design water level was taken as the high water level plus the projected sea level rise for the year 2100.
Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set for a 12-hour simulation period, which takes more than 16 hours to complete using a dual Xeon E7 CPU computer plus a K80 GPU. Boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate coastal flooding for the communities. This study shows that many communities would be affected by a Cascadia tsunami; inundation maps were developed for the communities, and the infrastructure inside the coastal inundation areas was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
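MIKE-21 solves the full shallow-water equations numerically, but the propagation speed it resolves reduces, for long waves such as tsunamis, to c = √(g·d). A minimal sketch (depths are illustrative, not values from the study) showing why the wave slows dramatically as it shoals toward the coast:

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m: float) -> float:
    """Long-wave (shallow-water) phase speed c = sqrt(g * d), in m/s."""
    return sqrt(G * depth_m)

# Speed drops sharply as the wave moves from the open Pacific onto the shelf
for depth in (4000, 200, 10):  # open ocean, continental shelf, nearshore (m)
    print(f"{depth:>5} m depth -> {tsunami_speed(depth):6.1f} m/s")
```

At 4000 m depth the wave travels at roughly 200 m/s (over 700 km/h), which is why warning lead times for remote Vancouver Island communities are short; the slowdown in shallow water is also what drives the wave-height amplification the inundation maps capture.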

Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation

Procedia PDF Downloads 131