Search results for: distribution function
907 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
Damage reported at oil and gas industrial facilities has revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may have devastating and long-lasting consequences for the built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks are likely to experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past-earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yielding stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported.
The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks and that data-driven surrogates represent a viable alternative to computationally expensive numerical simulation models. Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
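As an illustration of how a surrogate classifier maps tank features to a damage class, here is a minimal sketch of one of the algorithms investigated (k-nearest neighbors); the feature values and damage labels below are hypothetical, not the paper's data set:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict damage classes for query tanks by majority vote
    among the k nearest training tanks (Euclidean distance)."""
    preds = []
    for x in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - x, axis=1)       # distance to every training tank
        nearest = y_train[np.argsort(d)[:k]]          # labels of the k closest tanks
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Hypothetical features: [diameter (m), height (m), fill ratio, PGA (g)]
X = np.array([[10, 12, 0.9, 0.6],
              [10, 12, 0.9, 0.1],
              [30, 15, 0.5, 0.7],
              [30, 15, 0.5, 0.1]])
y = np.array([2, 0, 2, 0])   # damage-state labels (0 = none, 2 = severe)

print(knn_predict(X, y, [[12, 12, 0.8, 0.55]], k=3))  # -> [2]
```

In the paper's workflow, this classifier would be benchmarked against naïve Bayes, decision tree, and random forest on held-out accuracy before one model is selected for prediction.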
Procedia PDF Downloads 143
906 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production
Authors: G. Bravo, D. Lopez, A. Iriarte
Abstract:
Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is concentrated in the center-south area, between the regions of Maule and O’Higgins; together these regions represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. The LCA identifies the material, energy, and waste flows of a product or service, and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results support planning and promoting low-impact procedures among fruit companies, policymakers, and other stakeholders. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or the evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may also contribute environmental information to other countries with similar sweet cherry production. Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production
Procedia PDF Downloads 271
905 Women Writing Group as a Means for Personal and Social Change
Authors: Michal Almagor, Rivka Tuval-Mashiach
Abstract:
This presentation explores the main processes identified in a women's writing group, an interdisciplinary field with personal and social effects. It is based on the initial findings of Ph.D. research focusing on the intersection of group processes with the element of writing, in the context of gender. Writing as a therapeutic tool has been recognized and found to be highly effective. Additionally, a substantial amount of research reveals the psychological impact of group processes. However, the combination of writing and groups as a therapeutic tool has hardly been investigated; this is the contribution of this research. In this qualitative-phenomenological study, the experiences of eight women participating in a 10-session structured writing group were investigated. We used the meeting transcripts, semi-structured interviews, and the texts to analyze and understand the experience of participating in the group. The two significant findings revealed were spiral intersubjectivity and an archaic level of semiotic language. We realized that content and process are interwoven; participants write, read, and discuss their texts in a group setting that enhances self-dialogue between the participants and their own narratives and texts, as well as dialogue with others. This process includes working through otherness within and between while discovering and creating a multiplicity of narratives. A movement of increasing shared circles from the personal to the group and to the social-cultural environment was identified, forming what we termed spiral intersubjectivity. An additional layer of findings was revealed as we listened to the resonance of the group texts and discourse; during this process, we could trace the semiotic level in addition to the symbolic one. We witnessed the dominant presence of the body and primal sensuality, expressed by rhythm, sound, and movements, signs of pre-verbal language.
Those findings led us to a new understanding of the semiotic function as a way to express the fullness of women's experience and the enabling role of writing in reviving what was repressed. Poetic language serves as a bridge between the symbolic and the semiotic. Re-reading the group materials exposed another layer of expression, an old-new language. This approach suggests a feminine expression of subjective experience with personal and social importance. It is a subversive move, encouraging women to write themselves, as a craft that every woman can use, giving voice to the silent and hidden, and experiencing the power of performing 'my story'. We suggest that a women's writing group is an efficient, powerful yet welcoming way to raise the awareness of researchers and clinicians, and more importantly of the participants, of the uniqueness of the feminine experience and of gender-sensitive curative approaches. Keywords: group, intersubjectivity, semiotic, writing
Procedia PDF Downloads 219
904 CSPG4 Molecular Target in Canine Melanoma, Osteosarcoma and Mammary Tumors for Novel Therapeutic Strategies
Authors: Paola Modesto, Floriana Fruscione, Isabella Martini, Simona Perga, Federica Riccardo, Mariateresa Camerino, Davide Giacobino, Cecilia Gola, Luca Licenziato, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari
Abstract:
Canine and human melanoma, osteosarcoma (OSA), and mammary carcinomas are aggressive tumors with common characteristics, making dogs a good model for comparative oncology. Novel therapeutic strategies against these tumors could be useful to both species. In humans, chondroitin sulphate proteoglycan 4 (CSPG4) is a marker involved in tumor progression and could be a candidate target for immunotherapy. Anti-CSPG4 DNA electrovaccination has been shown to be an effective approach for canine malignant melanoma (CMM) [1]. An immunohistochemistry evaluation of CSPG4 expression in tumor tissue is generally performed prior to electrovaccination. To assess the feasibility of a rapid molecular evaluation, and to validate these spontaneous canine tumors as models for human studies, we investigated CSPG4 gene expression by RT-qPCR in CMM, OSA, and canine mammary tumors (CMT). Total RNA was extracted from RNAlater-stored tissue samples (CMM n=16; OSA n=13; CMT n=6; five paired normal tissues for CMM, five for OSA, and one for CMT), reverse-transcribed, and then analyzed by duplex RT-qPCR using two different TaqMan assays for the target gene CSPG4 and the internal reference gene (RG) Ribosomal Protein S19 (RPS19). RPS19 was selected from a panel of 9 candidate RGs according to NormFinder analysis, following the protocol already described [2]. Relative expression was analyzed by CFX Maestro™ Software. Student's t-test and ANOVA were performed (significance set at P<0.05). Results showed that gene expression of CSPG4 in OSA tissues is significantly increased, by 3-4 folds, compared to controls. In CMT, gene expression of the target was increased from 1.5 to 19.9 folds. In melanoma, although an increasing trend was observed, no significant differences between the two groups were highlighted.
Immunohistochemistry analysis of the two cancer types showed that CSPG4 expression within CMM is concentrated in islands of cells, whereas in OSA the distribution of positive cells is homogeneous. This could explain the differences in the gene expression results. CSPG4 immunohistochemistry evaluation in mammary carcinoma is in progress. The evidence of CSPG4 expression in different types of canine tumors opens the way to extending CSPG4-targeted immunotherapy to CMM, OSA, and CMT, and may help translate this strategy to human oncology. Keywords: canine melanoma, canine mammary carcinomas, canine osteosarcoma, CSPG4, gene expression, immunotherapy
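Relative expression from duplex RT-qPCR with a reference gene, as described above, is conventionally computed with the 2^-ΔΔCt (Livak) method; the abstract does not state the exact formula applied by CFX Maestro, so the following is a generic sketch with hypothetical Ct values:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method:
    normalize target Ct to the reference gene in each tissue,
    then compare tumor to paired normal tissue."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # e.g. CSPG4 vs RPS19, tumor
    d_ct_control = ct_target_control - ct_ref_control   # e.g. CSPG4 vs RPS19, normal
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a target vs reference gene in tumor and paired normal tissue
print(fold_change(24.0, 20.0, 26.0, 20.0))  # -> 4.0 (4-fold up-regulation)
```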
Procedia PDF Downloads 174
903 Comparison of the Toxicity of Silver and Gold Nanoparticles in Murine Fibroblasts
Authors: Šárka Hradilová, Aleš Panáček, Radek Zbořil
Abstract:
Nanotechnology is considered one of the most promising fields with high added value, bringing new possibilities in various sectors from industry to medicine. With growing interest in nanomaterials and their applications, increasing nanoparticle production leads to increased exposure of people and the environment to 'human-made' nanoparticles. Nanoparticles (NPs) are clusters of atoms in the size range of 1–100 nm. Metal nanoparticles represent one of the most important and frequently used types of NPs due to their unique physical, chemical, and biological properties, which significantly differ from those of the bulk material. Biological properties, including the toxicity of metal nanoparticles, are generally determined by their size, size distribution, shape, surface area, surface charge, surface chemistry, stability in the environment, and ability to release metal ions. Therefore, the biological behavior of NPs and their possible adverse effects cannot be derived from the bulk form of the material, because nanoparticles show unique properties and interactions with biological systems precisely due to their nanodimensions. Silver and gold NPs are intensively studied and used. Both can be used, for instance, in surface-enhanced Raman spectroscopy; a considerable number of applications of silver NPs are associated with antibacterial effects, while gold NPs are associated with cancer treatment and bioimaging. The antibacterial effects of silver ions have been known for centuries. Silver ions and silver-based compounds are highly toxic to microorganisms. The toxic properties of silver NPs are intensively studied, but the mechanism of cytotoxicity is not fully understood. While silver NPs are considered toxic, gold NPs are referred to both as toxic and as innocuous for eukaryotic cells; they are used in various biological applications without apparent risk of cell damage, even when the aim is to suppress the growth of cancer cells. Whether gold NPs are toxic or harmless thus remains an open question.
Because most studies compare particles of various sizes prepared in various ways, and testing is performed on different cell lines, it is very difficult to generalize. The novelty and significance of our research lie in examining the complex biological effects of silver and gold NPs prepared by the same method, with the same parameters and the same stabilizer. This allows us to compare the biological effects of the pure nanometals themselves, based on their chemical nature, without the influence of other variables. The aim of our study is therefore to compare the cytotoxic effects of the two types of noble metal NPs, focusing on the mechanisms that contribute to cytotoxicity. The study was conducted on murine fibroblasts using selected commonly used tests. Each of these tests monitors a selected area related to toxicity, and together they provide a comprehensive view of the interactions of nanoparticles and living cells. Keywords: cytotoxicity, gold nanoparticles, mechanism of cytotoxicity, silver nanoparticles
Procedia PDF Downloads 254
902 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables for the disease's progression. The objective of the study is to determine how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and their effects on kidney disease progression through advanced statistical methods. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate with each other, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the order-level response variable (progression of kidney function) by considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetic cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components. These are principal component 1, with negative loading of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loading of total cholesterol and low-density lipoprotein; and principal component 3, with negative loading of high-density lipoprotein and a positive load of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. 
For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease. Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
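The dimensionality-reduction step described above, collapsing correlated biochemical variables into uncorrelated components, can be sketched as a generic correlation-matrix PCA in NumPy. The simulated variables below (two correlated pairs) are hypothetical stand-ins, not the study's clinical data:

```python
import numpy as np

def pca_components(X):
    """Principal components from the correlation matrix:
    standardize each column, eigendecompose, sort by explained variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]             # reorder: largest variance first
    return eigvals[order], eigvecs[:, order], Z

rng = np.random.default_rng(0)
# Hypothetical biochemical panel: two correlated pairs of variables
hba1c = rng.normal(7, 1, 200)
glycemia = hba1c * 15 + rng.normal(0, 5, 200)     # strongly correlated with HbA1c
ldl = rng.normal(110, 25, 200)
tot_chol = ldl + rng.normal(60, 10, 200)          # strongly correlated with LDL
X = np.column_stack([hba1c, glycemia, ldl, tot_chol])

eigvals, eigvecs, Z = pca_components(X)
scores = Z @ eigvecs                               # component scores per subject
# The scores are mutually uncorrelated by construction:
print(np.max(np.abs(np.corrcoef(scores, rowvar=False) - np.eye(4))) < 1e-8)
```

In the study, the retained component scores (rather than the raw correlated variables) would then enter the ordered logit models as predictors.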
Procedia PDF Downloads 40
901 Epigenetics and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing gaps in our current understanding of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It has paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications for other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetics alone explains only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical).
Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role in model systems used to investigate and reconstruct various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will enable a better comprehension of bone biology and highlight its essentiality as an interdisciplinary vector and a key material in archeological research. Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 108
900 The Influence of Nutritional and Immunological Status on the Prognosis of Head and Neck Cancer
Authors: Ching-Yi Yiu, Hui-Chen Hsu
Abstract:
Objectives: Head and neck cancer (HNC) is a major global health problem. Despite advances in diagnosis and treatment, the overall survival of HNC remains low. Growing recognition of the interaction between the host immune system and cancer cells has clarified the processes of tumor initiation, progression, and metastasis. Many systemic inflammatory responses have been shown to play a crucial role in cancer progression. The pre- and post-treatment nutritional and immunological status of HNC patients is a reliable prognostic indicator of tumor outcomes and survival. Methods: Between July 2020 and June 2022, we enrolled 60 HNC patients, 59 males and 1 female, at Chi Mei Medical Center, Liouying, Taiwan. The age distribution was 37 to 81 years old (y/o), with a mean age of 57.6 y/o. We evaluated the pre- and post-treatment nutritional and immunological status of these HNC patients using body weight, body weight loss, body mass index (BMI), whole blood count including hemoglobin (Hb), lymphocyte, neutrophil, and platelet counts, and biochemistry including prealbumin, albumin, and C-reactive protein (CRP), measured before treatment and at 3 and 6 months post-treatment. We calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) to assess how these biomarkers influence the outcomes of HNC patients. Results: There were 21 cases (35%) of carcinoma of the hypopharynx, 9 cases of carcinoma of the larynx, 6 cases each of carcinoma of the tonsil and tongue, 5 cases each of carcinoma of the soft palate and tongue base, 2 cases each of carcinoma of the buccal mucosa, retromolar trigone, and mouth floor, and 1 case each of carcinoma of the hard palate and lower lip. There were 15 stage I, 13 stage II, 6 stage III, 10 stage IVA, and 16 stage IVB cases. All patients received surgery, chemoradiation therapy, or combined therapy.
There were 6 cases of wound infection, 2 cases of pharyngocutaneous (PC) fistula, 2 cases of flap necrosis, and 6 deaths. In the wound infection group, the average BMI was 20.4 kg/m2, the average Hb 12.9 g/dL, the average albumin 3.5 g/dL, the average NLR 6.78, and the average PLR 243.5. In the PC fistula and flap necrosis group, the average BMI was 21.65 kg/m2, the average Hb 11.7 g/dL, the average albumin 3.15 g/dL, the average NLR 13.28, and the average PLR 418.84. In the mortality group, the average BMI was 22.3 kg/m2, the average Hb 13.58 g/dL, the average albumin 3.77 g/dL, the average NLR 6.06, and the average PLR 275.5. Conclusion: HNC is a challenging public health problem worldwide, especially in Taiwan, an area with a high prevalence of betel nut consumption. Besides the established risk factors of smoking, drinking, and betel nut use, other biomarkers may serve as significant prognosticators of HNC outcomes. We concluded that when the BMI is less than 22 kg/m2, the Hb lower than 12.0 g/dL, the albumin lower than 3.3 g/dL, the NLR higher than 3, and the PLR more than 170, surgical complications and mortality increase, and the prognosis of HNC patients is poor. Keywords: nutritional, immunological, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio
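The two inflammatory indices used above are simple ratios of absolute blood counts; a minimal sketch, with hypothetical counts chosen to fall in the same range as the reported averages:

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from absolute counts (cells/uL)."""
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    """Platelet-to-lymphocyte ratio from absolute counts (cells/uL)."""
    return platelets / lymphocytes

# Hypothetical counts for a single patient's complete blood count
print(round(nlr(6100, 900), 2))    # -> 6.78
print(round(plr(219000, 900), 1))  # -> 243.3
```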
Procedia PDF Downloads 79
899 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases
Authors: Hsiang-Ru Lin
Abstract:
Cholesterol plays an essential role in the progression of numerous important diseases, including atherosclerosis and Alzheimer disease, so suitable cholesterol-lowering agents are urgently needed. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols, and glucose. Once activated, LXR can transactivate various genes, including CYP7A1, ABCA1, and SREBP1c, involved in lipid metabolism, glucose metabolism, and the inflammatory pathway. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of beta-amyloid (ABeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering agents and preventative treatments for Alzheimer disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan; however, its chemical constituents have to date been reported only for antitubercular and anti-inflammatory effects. In this study, four compounds, engelheptanoxides A and C and engelhardiols A and B, isolated from the root of Engelhardia roxburghiana, were evaluated for LXR agonistic activity by transient transfection reporter assays in HepG2 cells. Furthermore, their binding modes in the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based biological assays, engelheptanoxides A and C and engelhardiols A and B, which showed no cytotoxic effect on the proliferation of HepG2 cells, exerted clear LXR agonistic effects with activity similar to that of T0901317, a novel synthetic LXR agonist. Further modeling studies, including docking and SAR (structure-activity relationship) analyses, showed that these compounds can occupy the LXR ligand-binding pocket in a manner similar to T0901317. Thus, LXR is one of the nuclear receptors targeted by the pharmaceutical industry for developing treatments for Alzheimer disease and atherosclerosis.
Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A and C and engelhardiols A and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana, or subsequent analogs, may show important therapeutic effects through selective modulation of the LXR pathway. Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells
Procedia PDF Downloads 420
898 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement on scores for individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication in Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation levels, and serve as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia. Keywords: The Scenario Test-GR, functional communication assessment, people with aphasia (PWA), tool validation
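Of the reliability statistics reported above, internal consistency (Cronbach's α) has a simple closed form: α = k/(k-1) · (1 - Σ item variances / variance of the total score). A sketch with hypothetical item scores (the formula is standard; the data are not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects-by-items score matrix.
    Rows are subjects (test-takers), columns are test items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical item scores for 5 subjects on 4 items
X = [[3, 4, 3, 4],
     [2, 2, 3, 2],
     [5, 5, 4, 5],
     [1, 2, 1, 1],
     [4, 4, 5, 4]]
print(round(cronbach_alpha(X), 3))  # -> 0.964
```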
Procedia PDF Downloads 128
897 Engineers 'Write' Job Description: Development of English for Specific Purposes (ESP)-Based Instructional Materials for Engineering Students
Authors: Marjorie Miguel
Abstract:
Globalization offers better career opportunities and hence demands more competent professionals. With the transformation of world industry from competition to collaboration, coupled with rapid development in science and technology, engineers need to be not only technically proficient but also multilingual-skilled: two characteristics that a global engineer possesses. English often serves as the global language between people from different cultures, being the medium most used in international business. Ironically, most universities worldwide adopt engineering curricula heavily built around the language of mathematics, not realizing that the goal of an engineer is not only to create and design but, more importantly, to promote those creations and designs to the general public through effective communication. This premise has led to developments in the teaching of English subjects at the tertiary level, including the integration of technical knowledge from the students' areas of specialization into the English subjects they take, an approach known as English for Specific Purposes. This study focused on the development of English for Specific Purposes-based instructional materials for engineering students of Bulacan State University (BulSU). The materials were tailor-made: their contents and structure were designed to meet the specific needs of the students as well as the industry. The needs of the students and the industry were determined through a needs analysis, making the study descriptive in nature. The major respondents included fifty engineering students and ten professional engineers from selected institutions. The needs analysis revealed the common writing difficulties of the students and the writing skills needed by engineers in the industry. The topics in the instructional materials were established after the needs analysis was conducted.
Simple statistical treatments, including frequency distribution, percentages, mean, standard deviation, and weighted mean, were used. The findings showed that most of the respondents had an average proficiency rating in writing, and that the skills most needed by the engineers are directly related to the preparation and presentation of technical reports about their projects, as well as to the different communications they transmit to their colleagues and superiors. The researcher undertook the following phases in the development of the instructional materials: a design phase, a development phase, and an evaluation phase. Evaluations given by college instructors confirmed the usefulness and significance of the instructional materials, making the study beneficial not only as a career enhancer for BulSU engineering students, but also in positioning the university as one of the educational institutions ready for the new millennium.
Keywords: English for specific purposes, instructional materials, needs analysis, write (right) job description
Procedia PDF Downloads 239
896 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation
Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans
Abstract:
The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs, and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalization has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the course of the first 24 hours of an average individual patient's stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has been counted only once to account for multiple items being shipped as one package. Travel distances from origin were summed to give the total distance combined for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all types of product was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg. 
The CO2 produced by shipping these items by air freight would equate to 30.1 kg, whereas doing the same by sea would produce 0.2 kg of CO2. Extrapolating these results to the 211,932 UK annual ICU admissions (2018-2019), even with the underestimates of distance and weight in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service, and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This information must be made available to purchasers in order to give a fuller picture of life cycle impact and allow for informed decision making in this regard.
Keywords: CO2, intensive care, plastic, transport
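The study's distance arithmetic (straight-line, country of origin to the UK) and the air-versus-sea comparison can be sketched in a few lines. This is a minimal illustration only: the haversine formula mirrors the straight-line assumption, but the emission factors and the example item below are invented for demonstration and are not the figures used in the study.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

# Assumed illustrative emission factors in kg CO2 per tonne-km;
# the abstract does not state which factors were actually used.
AIR_FACTOR = 0.60
SEA_FACTOR = 0.01

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (shortest straight-line) distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def freight_co2_kg(mass_kg, distance_km, factor):
    """CO2 in kg for moving mass_kg over distance_km at factor kg CO2/t-km."""
    return (mass_kg / 1000.0) * distance_km * factor

# Hypothetical item: an 80 g consumable made near Shanghai, used in London.
d = haversine_km(31.2, 121.5, 51.5, -0.1)
air = freight_co2_kg(0.08, d, AIR_FACTOR)
sea = freight_co2_kg(0.08, d, SEA_FACTOR)
```

Summing `freight_co2_kg` over every item's mass and distance reproduces the shape of the extrapolation in the abstract, with the air and sea totals differing by the ratio of the two factors.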
Procedia PDF Downloads 178
895 Providing Health Promotion Information by Digital Animation to International Visitors in Japan: A Factorial Design View of Nurses
Authors: Mariko Nishikawa, Masaaki Yamanaka, Ayami Kondo
Abstract:
Background: International visitors to Japan are at risk of travel-related illnesses or injuries that could result in hospitalization in a country where the language and customs are unique. Over twelve million international visitors came to Japan in 2015, and more are expected leading up to the Tokyo Olympics. One aspect of this is the potentially greater demand on healthcare services by foreign visitors. Nurses who take care of them have anxieties and concerns about their knowledge of the Japanese health system. Objectives: An effective distribution of travel-health information is vital for facilitating care for international visitors. Our research investigates whether a four-minute digital animation (Mari Info Japan), designed and developed by the authors and applied to a survey of 513 nurses who take care of foreigners daily, could clarify travel-health procedures and reduce anxieties while making learning enjoyable. Methodology: Respondents to the survey were divided into two groups. The intervention group watched Mari Info Japan. The control group read a standard guidebook. The participants were requested to complete a two-page questionnaire, called Mari Meter-X, and the STAI-Y in English, and to mark a face scale, before and after the interventions. The questions dealt with knowledge of health promotion, the Japanese healthcare system, cultural concerns, anxieties, and attitudes in Japan. Data were collected from an intervention group (n=83) and a control group (n=83) of nurses at a hospital for foreigners in Japan from February to March 2016. We analyzed the data using Text Mining Studio for open-ended questions and JMP for statistical significance. Results: We found that the intervention group displayed more confidence and less anxiety about taking care of foreign patients compared to the control group. The intervention group reported greater comfort after watching the animation. 
However, both groups were most likely to be concerned about language, the cost of medical expenses, informed consent, and choice of hospital. Conclusions: From the viewpoint of nurses, the provision of travel-health information by digital animation to international visitors to Japan was more effective than traditional methods, as it helped them be better prepared to treat travel-related diseases and injuries among international visitors. This study was registered under number UMIN000020867. Funding: Grant-in-Aid for Challenging Exploratory Research 2010-2012 & 2014-16, Japanese Government.
Keywords: digital animation, health promotion, international visitor, Japan, nurse
Procedia PDF Downloads 307
894 Biodegradation Ability of Polycyclic Aromatic Hydrocarbon (PAHs) Degrading Bacillus cereus Strain JMG-01 Isolated from PAHs Contaminated Soil
Authors: Momita Das, Sofia Banu, Jibon Kotoky
Abstract:
Environmental contamination of natural resources with persistent organic pollutants is of great worldwide concern. Polycyclic aromatic hydrocarbons (PAHs) are among the organic pollutants released by various anthropogenic activities. Due to their toxic, carcinogenic, and mutagenic properties, PAHs are of environmental and human concern. Presently, bioremediation has evolved as the most promising biotechnology for the cleanup of such contaminants because it is economical and cost-effective. In the present study, the distribution of the 16 USEPA priority PAHs was determined in soil samples collected from fifteen different sites of Guwahati City, the gateway of the North East Region of India. The total concentrations of the 16 PAHs (Σ16 PAHs) ranged from 42.7-742.3 µg/g. Higher concentrations of total PAHs were found in the industrial areas than at the other sites (742.3 µg/g and 628 µg/g). Among all the PAHs, naphthalene, acenaphthylene, anthracene, fluoranthene, chrysene, and benzo(a)pyrene were the most abundant and present at the highest concentrations. Since microbial activity has been deemed the most influential and significant cause of PAH removal, twenty-three bacteria were further isolated from the most contaminated sites using the enrichment process. These strains were acclimatized to utilize naphthalene and anthracene, each at 100 µg/g concentration, as the sole carbon source. Among them, one Gram-positive strain (JMG-01) was selected, and its biodegradation ability and the initial catabolic genes of PAH degradation were investigated. Based on 16S rDNA analysis, the isolate was identified as Bacillus cereus strain JMG-01. Topographic images obtained using a Scanning Electron Microscope (SEM) and an Atomic Force Microscope (AFM) at scheduled time intervals of 7, 14, and 21 days determined the variation in cell morphology during the period of degradation. 
AFM and SEM micrographs of the biomass showed highly filamentous growth leading to aggregation of cells in the form of a biofilm over the incubation period. The percentage degradation analysis using gas chromatography-mass spectrometry (GC-MS) suggested that more than 95% of the PAHs degraded when the concentration was at 500 µg/g. Naphthalene, 2-methylnaphthalene, 4-propylbenzaldehyde, 1,2-benzene dicarboxylic acid, and benzene acetic acid were the major metabolites produced after degradation. Moreover, PCR experiments with specific primers for the catabolic genes ndoB and catA suggested that JMG-01 possesses genes for PAH degradation. Thus, the study concludes that Bacillus cereus strain JMG-01 has efficient biodegrading ability and can trigger the clean-up of PAH-contaminated soil.
Keywords: AFM, Bacillus cereus strain JMG-01, degradation, polycyclic aromatic hydrocarbon, SEM
Procedia PDF Downloads 277
893 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk from multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as its set of edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria threat in the area. On the other hand, enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. 
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for either species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix
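As a minimal sketch of the graph view described above, a binary neighborhood matrix can be built from areal units (vertices) and neighbor relations (edges). The district count and edge list below are hypothetical; the paper's optimization algorithm, which estimates the edge set itself, is not reproduced here.

```python
import numpy as np

def neighborhood_matrix(n_areas, edges):
    """Binary neighborhood matrix W: W[i, j] = 1 when areal units i and j
    are neighbors, treating units as graph vertices and neighbor
    relations as edges."""
    W = np.zeros((n_areas, n_areas), dtype=int)
    for i, j in edges:
        W[i, j] = 1
        W[j, i] = 1  # neighbor relations are symmetric
    return W

# Hypothetical four districts in which 0-1, 1-2, and 2-3 share borders.
W = neighborhood_matrix(4, [(0, 1), (1, 2), (2, 3)])
```

Under the border-sharing rule the edge list is fixed by the map; the graph-based approach instead searches over candidate edge sets and rebuilds `W` for each candidate.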
Procedia PDF Downloads 79
892 Mental Balance, Emotional Balance, and Stress Management: The Role of Ancient Vedic Philosophy from India
Authors: Emily Schulz
Abstract:
The ancient Vedic culture of India had traditions that supported all aspects of health, including psychological health, and these traditions remain relevant in the current era. They have been compiled by Professor Dr. Purna, a rare Himalayan Master, into the Purna Health Management System (PHMS). The PHMS is a unique, holistic, and integrated approach to health management. It comprises four key factors: Health, Fitness, and Nutrition (HF&N); Life Balance (Stress Management) (LB-SM); Spiritual Growth and Development (SG&D); and Living in Harmony with the Natural Environment (LHWNE). The purpose of the PHMS is to give people the tools to take responsibility for managing their own holistic health and wellbeing. A study using a cross-sectional, mixed-methods, anonymous online survey was conducted during 2017-2018. Adult students of Professor Dr. Purna were invited to participate through announcements made at various events he held throughout the globe. Follow-up emails with consent language were sent to interested parties, providing them with a link to the survey. Participation in the study was completely voluntary, and no incentives were offered for responding to the survey. The overall aim of the study was to investigate the effectiveness of implementation of the PHMS on practitioners' emotional balance. However, given the holistic nature of the PHMS, survey questions also inquired about participants' physical health, stress level, ability to manage stress, and wellbeing using Likert scales. The survey also included some open-ended questions to gain an understanding of the participants' experiences with the PHMS relative to their emotional balance. In total, 52 people out of 253 potential respondents participated in the study. Data were analyzed using the nonparametric Spearman's rho correlation coefficient (rs), since the data were not normally distributed. Statistical significance was set at p < .05. 
Results of the study suggested that there are moderate to strong statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and self-reported mental/emotional health (HF&N rs = 0.42; LB-SM rs = 0.54; SG&D rs = 0.49; LHWNE rs = 0.45). Results also demonstrated statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and their self-reported ability to manage stress (HF&N rs = 0.44; LB-SM rs = 0.55; SG&D rs = 0.39; LHWNE rs = 0.55). Additionally, those who reported better physical health also reported better mental/emotional health (rs = 0.49, p < .001) and a better ability to manage stress (rs = 0.46, p < .001). The findings of this study suggest that wisdom from the ancient Vedic culture may be useful for those working in psychology and related fields who would like to assist clients in calming their minds and emotions and managing their stress levels.
Keywords: balanced emotions, balanced mind, stress management, Vedic philosophy
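Spearman's rho, used throughout the analysis above, is simply the Pearson correlation of the ranks, which suits Likert-scale data that are not normally distributed. A small sketch; the responses below are invented for demonstration and are not the study's data.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Tied values receive their average rank."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks over ties
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical Likert responses: PHMS practice frequency vs. wellbeing.
practice = [1, 2, 2, 3, 4, 5, 5, 3]
wellbeing = [2, 2, 3, 3, 4, 5, 4, 3]
rho = spearman_rho(practice, wellbeing)
```

A perfectly monotone relationship gives rho = 1 regardless of the raw values, which is why the statistic is preferred over Pearson's r for ordinal scales.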
Procedia PDF Downloads 122
891 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. 
Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. The hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
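The training of a binary restricted Boltzmann machine of the kind compared here can be sketched with one-step contrastive divergence (CD-1). The layer sizes, learning rate, and synthetic baskets below are illustrative assumptions, not the paper's estimation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1.
    One visible layer (purchases), one hidden layer, no within-layer links."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible (purchase) biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden
        v1 = self.visible_probs(h0)                        # reconstruction
        ph1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Hypothetical binary market baskets over 6 product categories.
baskets = (rng.random((200, 6)) < 0.3).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
for _ in range(50):
    rbm.cd1_step(baskets)
```

Stacking such machines, feeding `hidden_probs` of one layer as the visible data of the next, yields the greedy layer-wise construction of a deep belief net as described in the abstract.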
Procedia PDF Downloads 199
890 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)
Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte
Abstract:
The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in the gas chemistry and in the output rate of fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the single gas species has been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, the need for a cross-comparison of different methods for the in situ measurement of the output rate of different gas species is evident. In 2015, two field campaigns were carried out, aimed at: 1. the mapping of the concentration of CO₂, H₂S, and SO₂ in the fumarolic plume at 1 m from the surface, using specific open-path tunable diode lasers (GasFinder Boreal Europe Ltd.) and an active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind speed and meteorological data, were elaborated to obtain the dispersion map and the output rate of each species over the whole fumarolic field; 2. the mapping of the concentrations of CO₂, H₂S, SO₂, and H₂O in the fumarolic plume at 0.5 m from the soil, using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. the in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1,000 ppmv along the rim and 1,800 ppmv on the inner slopes. 
As observed, the largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than that measured at the rim fumaroles. Indeed, the fumaroles on the inner slopes are among those emitting the largest amount of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2,000 t/d. The various direct and remote methods (described in points 1-3) have produced coherent results, which encourage the use of daily, automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
Keywords: DOAS, fumaroles, plume, tunable laser
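The recomputation of one species' mass flux from another, as done for the steam output, amounts to scaling the measured flux by the in-plume molar ratio and the molar-mass ratio. A small sketch; the H2O/CO2 molar ratio of 12 below is a hypothetical value, chosen only so that the example is consistent with the reported 400 t/d of CO2 and roughly 2,000 t/d of steam.

```python
# Molar masses in g/mol.
M = {"H2O": 18.02, "CO2": 44.01, "H2S": 34.08, "SO2": 64.07}

def flux_from_ratio(ref_flux_t_d, molar_ratio, species, ref_species):
    """Mass flux (t/d) of `species`, scaled from a reference species' mass
    flux using the in-plume molar ratio species/ref_species."""
    return ref_flux_t_d * molar_ratio * M[species] / M[ref_species]

# Hypothetical in-plume H2O/CO2 molar ratio of ~12, combined with the
# measured CO2 flux of 400 t/d, gives a steam output near 2,000 t/d.
steam = flux_from_ratio(400.0, 12.0, "H2O", "CO2")
```

The same scaling is what lets the daily automatic DOAS SO₂ flux, combined with periodically measured concentration ratios, yield total mass rates for the other species.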
Procedia PDF Downloads 399
889 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah’s Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last 5 years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone are significant for Libya due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and the available response options that would provide an integrated perspective on mitigation. To investigate this, this paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats as a first step in understanding how they may be damaged by an oil spill. This paper also investigates coastal resources, providing a better understanding of the resources and factors that impact the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of coastal resources, and creates spatial maps of coastal resources and their vulnerability to oil spills along the coast. This study proposes an analysis of coastal resource conditions at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Oil spill contamination analysis, and the impact on coastal resources, depends on (1) the oil spill sequence, (2) the oil spill location, and (3) the oil spill movement near the coastal area. The resulting maps show the natural and socio-economic activity and environmental resources along the coast, and the oil spill locations. Moreover, the study provides a significant geodatabase, which is required for coastal sensitivity index mapping and coastal management studies. 
The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism, and the salt industry, and ecosystem components, such as sea turtle nesting areas, sabkha habitats, and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.
Keywords: coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill
Procedia PDF Downloads 362
888 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transfers are essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. 
The area of applicability of each network strategy is not constant; it depends on the characteristics of the demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
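The concentration dimension mentioned above can be quantified with the Gini coefficient; a compact sketch follows, where the per-zone trip totals are invented for illustration.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative distribution:
    0 = perfectly even, approaching 1 = fully concentrated.
    Uses the closed form based on sorted values."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    total = v.sum()
    # 2*sum(i*v_i)/(n*sum(v)) - (n+1)/n for 1-indexed sorted values
    return (2.0 * np.sum(np.arange(1, n + 1) * v) / (n * total)) - (n + 1) / n

# Hypothetical trip totals per zone: uniform vs. concentrated demand.
even = gini([10, 10, 10, 10])        # → 0.0
concentrated = gini([0, 0, 0, 40])   # → 0.75
```

A city whose trip attractions yield a Gini near 0 falls in the dispersed region of the applicability map, while a value near 1 corresponds to the highly concentrated mobility pattern served best by the radial scheme.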
Procedia PDF Downloads 230
887 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
This article describes the regulatory impact analysis (RIA) of the efficiency of contracting for usage of the Brazilian transmission system. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, the IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, the IIOC, is applied when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC intends to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of contracting transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. 
For this, the following indicators were used: the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that had been occurring even before the normative resolution of 2015. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI notably decreased, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracted values are not adequate to sustain the provision of service to their users. The IIOC is also relevant, in that it shows the distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
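The abstract does not reproduce the NR 666/2015 formulas, so the following is only a hypothetical sketch of how the two inefficiency installments might be triggered; the function names, tolerance, and tariff are invented for illustration.

```python
def excess_inefficiency(contracted_mw, limit_mw, tariff):
    """Hypothetical sketch of the inefficiency installment for excess (IIE):
    a charge on the contracted amount above a regulatory limit.
    Not the actual NR 666/2015 formula."""
    return max(contracted_mw - limit_mw, 0.0) * tariff

def overcontracting_inefficiency(contracted_mw, verified_mw, tolerance, tariff):
    """Hypothetical sketch of the inefficiency installment for
    over-contracting (IIOC): a charge when contracted demand exceeds the
    verified demand by more than a tolerance. Again, illustrative only."""
    threshold = verified_mw * (1.0 + tolerance)
    return max(contracted_mw - threshold, 0.0) * tariff

iie = excess_inefficiency(120.0, 100.0, 5.0)               # 20 MW above the limit
iioc = overcontracting_inefficiency(120.0, 100.0, 0.05, 5.0)
```

The point of both charges is the same: the penalty is zero inside the permitted band, so agents minimize cost by contracting close to their real demand.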
Procedia PDF Downloads 121
886 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, since the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (head and neck intensity-modulated radiation therapy) using a novel comparison method, organ-of-interest gamma analysis, which provides more sensitive detection of changes in specific organs. Five randomly selected replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were selected and their transit dosimetry evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria; moreover, the gamma pass-rate is calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and with the radiation oncologists' rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes, whereas transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability. Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
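The organ-of-interest idea above can be sketched in code: compute the standard gamma pass-rate (here with the abstract's 3%, 3 mm criteria under a global dose normalization) but count only pixels inside a projected structure mask. This is a minimal, brute-force 2D sketch for illustration, not the study's implementation; the function name and argument layout are assumptions.

```python
import math

def gamma_pass_rate(reference, evaluated, mask, dd=0.03, dta_mm=3.0, pixel_mm=1.0):
    """Simplified 2D gamma pass-rate (%), evaluated only inside an organ mask.

    dd: dose-difference criterion as a fraction of the global max reference dose.
    dta_mm: distance-to-agreement criterion in millimetres.
    """
    ref_max = max(max(row) for row in reference)
    search = math.ceil(dta_mm / pixel_mm)          # search radius in pixels
    rows, cols = len(reference), len(reference[0])
    passed = total = 0
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:                     # organ-of-interest restriction
                continue
            total += 1
            best = math.inf
            # brute-force search for the minimum gamma within the DTA radius
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    dist = pixel_mm * math.hypot(di, dj)
                    if dist > dta_mm:
                        continue
                    dose_term = (evaluated[ii][jj] - reference[i][j]) / (dd * ref_max)
                    best = min(best, dose_term ** 2 + (dist / dta_mm) ** 2)
            if best <= 1.0:                        # gamma <= 1 counts as a pass
                passed += 1
    return 100.0 * passed / total if total else float("nan")
```

Restricting `total` to the mask is what makes the pass-rate sensitive to a single organ: a large discrepancy confined to the parotid region cannot be averaged away by agreement elsewhere in the field.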
Procedia PDF Downloads 216
885 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows running several processes on the same machine in an isolated manner, resolving the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance. Keywords: aggregation, deployment, embedding, resource allocation
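The a1-b1 / a2-b2 pairing described in the example can be sketched as a tiny deployment planner that co-locates the i-th instance of each communicating service on the same machine. This is an illustrative helper under assumed names, not part of i2kit's actual API.

```python
from itertools import zip_longest

def embed(instances_a, instances_b):
    """Pair up instances of two communicating microservices so that each pair
    shares a machine and talks over localhost, removing the need for a load
    balancer between them (hypothetical helper, not i2kit's interface)."""
    machines = {}
    for idx, (a, b) in enumerate(zip_longest(instances_a, instances_b), start=1):
        # zip_longest tolerates services with unequal instance counts;
        # a machine then simply hosts the leftover instance alone.
        machines[f"m{idx}"] = [svc for svc in (a, b) if svc is not None]
    return machines

plan = embed(["a1", "a2"], ["b1", "b2"])
# → {"m1": ["a1", "b1"], "m2": ["a2", "b2"]}
```

A real planner would additionally check runtime-dependency compatibility and scalability behavior before proposing such a configuration, as the paper's formal method does.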
Procedia PDF Downloads 203
884 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's most important resource for the survival of all living things, yet freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km) will harvest about 91110 ML of rainwater by 2051 (assuming constant present annual rainfall). However, only 17.71 ML of rainwater was harvested in 2021, from just 53 buildings in the Gomti Nagar area. Such groundwater recharge would raise the water level in Gomti Nagar by 13 cm. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML in 2021. Due to hydrogeological constraints and low annual rainfall, groundwater recharge is less than groundwater abstraction. At present, only 0.07% of rainwater recharges groundwater through RTRWHs in Gomti Nagar; if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table. Gomti Nagar is situated in 'Zone A' (a water distribution area), and groundwater is the primary source of freshwater supply. Over 30 years, the difference between groundwater abstraction and recharge in Gomti Nagar will reach 735570 ML. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater annually through RTRWHs, whereas the most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and for residential kitchens and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installation is required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and households harvesting rainwater through RTRWHs can be considered water self-sufficient.
Rooftop rainwater harvesting systems (RTRWHs) are an inexpensive, eco-friendly, and sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and harvest rainwater for groundwater recharge. As a result, this study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km). It suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households, so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks. Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system (RTRWHs), water table, water level
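The abstract's long-term projections follow from simple arithmetic on its reported annual figures; the sketch below reproduces them as a consistency check (not the study's model — variable names are illustrative).

```python
# Reported annual figures from the abstract
annual_harvest_ml = 3037       # ML/yr harvestable if all buildings install RTRWHs
annual_abstraction_ml = 35332  # ML/yr groundwater abstraction in 2021
annual_recharge_ml = 10813     # ML/yr most recent monsoonal recharge
years = 30                     # 2021 -> 2051 projection horizon

# 30-year harvest projection: 3037 ML/yr x 30 yr
projected_harvest_ml = annual_harvest_ml * years       # 91110 ML, as stated

# 30-year abstraction-recharge gap: (35332 - 10813) ML/yr x 30 yr
deficit_30yr_ml = (annual_abstraction_ml - annual_recharge_ml) * years  # 735570 ML

print(projected_harvest_ml)  # → 91110
print(deficit_30yr_ml)       # → 735570
```

Both figures match the abstract exactly, confirming that the 2051 projections assume the 2021 annual rates stay constant.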
Procedia PDF Downloads 97
883 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution and experiences a change in momentum, which manifests differently in spinning and non-spinning satellites. This problem can cause orbital decay in satellites which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that side acts as a radiator, dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside, radiator fins inside). Each insulated side panel can be opened and closed using actuators, depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the onboard computer calculates where the Sun is relative to the satellite. Based on the data obtained from the sensors, the computer decides which panel to open and by how many degrees.
For example, if a panel opens 180 degrees, its solar cells directly face the Sun, in turn increasing the current generated by that particular panel. Another case is when a corner of the CubeSat faces the Sun, or when more than one side receives a considerable amount of incident sunlight; the code then computes the optimum opening angle for each panel and adjusts accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive approach is to make the whole chassis act as a heat sink. For this, the entire chassis can be made of heat pipes, with the heat source connected to the chassis by a thermal strap that transfers the heat to it. Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
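The panel-opening decision described above (compute the Sun's direction, then open each panel by an angle that depends on how directly that face is illuminated) can be sketched with a simple geometric rule. This is a hypothetical control rule for illustration only; the actual flight code's logic and thresholds are not given in the abstract.

```python
import math

def panel_opening_deg(sun_vector, panel_normal):
    """Hypothetical rule: open a hybrid insulator/radiator panel in proportion
    to the cosine of the Sun's incidence on that face.
    0 deg = closed (face in shadow), 180 deg = fully open, cells facing the Sun."""
    dot = sum(s * n for s, n in zip(sun_vector, panel_normal))
    norm = (math.sqrt(sum(s * s for s in sun_vector))
            * math.sqrt(sum(n * n for n in panel_normal)))
    cos_incidence = dot / norm
    if cos_incidence <= 0:
        return 0.0                     # face away from the Sun: keep panel closed
    return 180.0 * cos_incidence       # fully open when the face points at the Sun
```

With this rule, a face squarely toward the Sun opens to 180 degrees (maximum solar-cell current), while a corner-on attitude distributes partial openings across the illuminated faces, consistent with the behaviour the abstract describes.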
Procedia PDF Downloads 100
882 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan
Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva
Abstract:
Planned preventive vaccination, carried out in the Republic of Kazakhstan, has promoted a steady decrease in the incidence of measles and viral hepatitis B (VHB). Among VHB patients, people of young, working age prevail. Monitoring of infectious incidence, monitoring of immunization coverage, and random serological control of immunity enable timely identification of pathogen circulation, assessment of the effectiveness of the measures taken, and forecasting. A serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan to identify antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked assay (ELA) with "VektoKor" IgG test systems ('Vektor-Best' JSC). Antibodies to the HBs-antigen of the hepatitis B virus in blood serum were identified by enzyme-linked assay (ELA) with "VektoHBsAg – antibodies" test systems ('Vektor-Best' JSC). A result is considered positive if the concentration of IgG to the measles virus in the studied sample is 0.18 IU/ml or more; the protective concentration of anti-HBsAg is 10 mIU/ml. The study of postvaccinal measles immunity showed that seropositive people made up 87.7% of those surveyed. The level of postvaccinal immunity to measles differs between age groups: among people older than 56, the percentage of seropositive individuals was 95.2%; among those aged 15-25 it was 87.0%, and at 36-45 it was 86.6%. In the 25-35 and 36-45 age groups, the share of seropositive people was approximately the same, 88.5% and 88.8% respectively. People seronegative to the measles virus made up 12.3%, with the largest shares of seronegative people found among those aged 36-45 (13.4%) and 15-25 (13.0%).
The analysis of postvaccinal immunity to viral hepatitis B showed that only 33.5% of those surveyed have the protective anti-HBsAg concentration of 10 mIU/ml or more. The largest share of people protected from the VHB virus is observed in the 36-45 age group, at 60%. In the indicator group above 56, seropositive people made up only 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group least protected from contracting VHB is people above 56 (95.2% seronegative). The probability of contracting VHB is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of this research testify to the need for serological monitoring of postvaccinal immunity, for operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger. Keywords: antibodies, blood serum, immunity, immunoglobulin
Procedia PDF Downloads 255
881 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok
Authors: Pratima Pokharel
Abstract:
When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which such infections pose a risk to public health remains unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok the probability of infection with diarrheal diseases in the wet season is higher for the 15-44 age group; the probability of infection is highest for children under 5 years, but they are not influenced by wet weather. Further, this study introduced vulnerability factors that contribute to health risks from urban flooding. For the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the type of house they live in, the study maps the spatial distribution of economic status in vulnerability maps. The vulnerability maps show that people living in Sukhumvit have low vulnerability to health risks with respect to the types of houses they live in. In addition, the probability of diarrheal infection was analyzed by age. Moreover, a field survey carried out to validate the vulnerability of the population showed that health vulnerability depends on economic status, income level, and education; people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with 2-year rainfall events to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow.
The 1D results show higher concentrations for dry weather flows and a large dilution at the commencement of a rainfall event, the concentration dropping due to the runoff generated after rainfall. The model produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study ran 5-year and 10-year rainfall simulations to show the variation in health hazards and risks. It was found that, even though hazard coverage is highest with the 10-year rainfall event among the three events, the risk was observed to be the same for the 5-year and 10-year rainfall events. Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework
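The 1D advection-dispersion behaviour described above (a coliform pulse transported downstream and diluted after a rainfall event) can be illustrated with a toy explicit finite-difference scheme. This is a generic sketch of the advection-dispersion equation, not the study's hydrodynamic model; all names and parameter values are illustrative.

```python
def advect_disperse(conc, u, D, dx, dt, steps):
    """March a 1D concentration profile forward in time with upwind advection
    (velocity u > 0) and central-difference dispersion (coefficient D).
    Stability requires u*dt/dx <= 1 and D*dt/dx**2 <= 0.5."""
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):                       # fixed boundaries
            adv = -u * (c[i] - c[i - 1]) / dx                # upwind advection
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2  # dispersion
            new[i] = c[i] + dt * (adv + disp)
        c = new
    return c

# A coliform pulse injected at one node of a 20-node reach
profile = [0.0] * 20
profile[5] = 100.0
result = advect_disperse(profile, u=1.0, D=0.1, dx=1.0, dt=0.5, steps=10)
```

After a few steps the peak has moved downstream and flattened, mirroring the dilution the 1D model predicts once runoff enters the drainage network.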
Procedia PDF Downloads 75
880 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid urbanization of China, environmental protection is under severe pressure, and analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as its research object and studies the evolution of its landscape pattern and a quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied using Fragstats. Then five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost-distance analysis of ArcGIS was applied to simulate wildlife migration, indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11% respectively, mainly through conversion into construction land. At the landscape level, all landscape indices showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2% respectively, indicating that the NP, the degree of aggregation, and the landscape connectivity all declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the distribution statistics core area (CORE_AM) decreased. For farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI decreased, yet patch fragmentation and CORE_AM increased.
At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline; the three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is evident that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the habitat quality index continuously declined, by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization. Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
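Among the landscape-level metrics reported above, Shannon's diversity index (SHDI) has a particularly simple definition, SHDI = -Σ pᵢ ln pᵢ over the area proportions pᵢ of the landscape classes. A minimal sketch of the calculation (an illustration of the formula, not Fragstats itself):

```python
import math

def shdi(class_areas):
    """Shannon's diversity index over landscape class area proportions.

    Returns 0 for a single-class landscape and grows as area is spread
    more evenly across more classes, which is why class conversion into
    construction land drives SHDI down."""
    total = sum(class_areas)
    return -sum((a / total) * math.log(a / total)
                for a in class_areas if a > 0)

# Two equal classes give the maximum two-class diversity, ln(2) ≈ 0.693
print(shdi([50, 50]))   # → 0.6931471805599453
```

As one class (here, construction land) absorbs area from the others, the proportions become skewed and SHDI falls, matching the reported 0.6 decrease.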
Procedia PDF Downloads 166
879 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an evolutionary algorithm (EA) in place of the traditional gradient-based optimization process. QAOAs were first introduced in 2014, when the algorithm performed better than the then best known classical algorithm for Max-Cut graphs. While classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmark and as a foundation for exploring QAOA variants. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points to the prospect of a real quantum advantage which, if the hardware continues to improve, could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients they should suffer less from barren plateaus. Secondly, given that this algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and the optimization problem.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even find a better Max-Cut. While the final objective of the work is to create an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024. Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
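The gradient-free optimization loop described above can be sketched as a minimal (mu+lambda)-style evolutionary search over the QAOA angle parameters. In this sketch, `cost` stands in for the measured expectation value of the Max-Cut objective; in a real hybrid run it would come from sampling the QAOA circuit. This is an illustrative skeleton, not the authors' implementation.

```python
import random

def evolve(cost, dim, pop_size=20, generations=50, sigma=0.3, seed=0):
    """Gradient-free evolutionary minimization of a black-box cost over
    `dim` real parameters (e.g. QAOA's gamma/beta angles).
    Truncation selection keeps the best half (elitism), and children are
    Gaussian mutations of surviving parents; since each cost evaluation is
    independent, a population evaluates naturally in parallel."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-3.14, 3.14) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                       # cheapest first
        parents = pop[: pop_size // 2]           # truncation selection
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                 # elitist replacement
    return min(pop, key=cost)
```

Because no gradient of `cost` is ever taken, a flat (barren-plateau) region slows the search but cannot zero out the update signal the way it does for gradient descent, which is the first motivation the abstract gives for the EA.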
Procedia PDF Downloads 60
878 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The t-test results for statistics knowledge were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the t-test results for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning a supposedly hard course. The future directions of the present study are promising as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come. Keywords: pretest-posttest within-subjects experimental design, achievement, statistics-related anxiety
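The paired-samples t-test used above has a compact form: t = d̄ / (s_d/√n) with df = n - 1, where d are the pretest-posttest difference scores. A minimal stdlib sketch with illustrative scores (hypothetical data, not the study's):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for matched scores."""
    diffs = [b - a for a, b in zip(pre, post)]   # posttest minus pretest
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # stdev = sample SD (n - 1)
    return t, n - 1

# Hypothetical achievement scores for four students, before and after the game
t, df = paired_t([10, 12, 9, 11], [13, 14, 10, 15])
print(round(t, 3), df)  # → 3.873 3
```

A positive t with a small p-value (looked up against the t distribution with df degrees of freedom) indicates a significant mean gain, the pattern the study reports for statistical knowledge; for anxiety, a significant negative shift indicates the reported decrease.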
Procedia PDF Downloads 58