Search results for: John B. Wood
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1163

143 Natural Mexican Zeolite Modified with Iron to Remove Arsenic Ions from Water Sources

Authors: Maritza Estela Garay-Rodriguez, Mirella Gutierrez-Arzaluz, Miguel Torres-Rodriguez, Violeta Mugica-Alvarez

Abstract:

Arsenic is an element present in the earth's crust and is dispersed in the environment through natural processes and some anthropogenic activities. It is released naturally through the weathering and erosion of sulphide minerals, while activities such as mining and the use of pesticides or wood preservatives can increase the concentration of arsenic in air, water, and soil. The natural release of arsenic from geological material is a threat to the world's drinking water sources. In the aqueous phase it is found in inorganic forms, mainly as arsenate and arsenite; the contamination of groundwater by salts of this element gives rise to what is known as endemic regional hydroarsenicism. The International Agency for Research on Cancer (IARC) categorizes inorganic As in Group 1, as a substance with proven carcinogenic action in humans. The presence of As in groundwater has been reported in several countries, such as Argentina, Mexico, Bangladesh, Canada, and the United States. Both the World Health Organization (WHO) and the Environmental Protection Agency (EPA) establish a maximum arsenic concentration in drinking water of 10 μg L⁻¹. In Mexico, in some states such as Hidalgo, Morelos, and Michoacán, arsenic concentrations around 1000 μg L⁻¹ have been found in bodies of water, well above the limit of 25 μg L⁻¹ established by the Mexican regulation NOM-127-SSA1-1994. Given this problem, this research proposes the use of a natural Mexican zeolite (clinoptilolite type), native to the district of Etla in the central valley region of Oaxaca, as an adsorbent for the removal of arsenic. The zeolite was conditioned with iron oxide by the precipitation-impregnation method using a 0.5 M iron nitrate solution, in order to increase the natural adsorption capacity of the material. 
The removal of arsenic was carried out in a column with a fixed bed of conditioned zeolite, which combines the advantages of a conventional filter with those of a natural adsorbent medium, providing a continuous, low-cost, and relatively easy-to-operate treatment suitable for implementation in marginalized areas. The zeolite was characterized by XRD, SEM/EDS, and FTIR before and after the arsenic adsorption tests. The results showed that the modification method is adequate for preparing adsorbent materials, since it does not alter the zeolite structure, and that with a particle size of 1.18 mm, an initial As(V) concentration of 1 ppm, a pH of 7, and room temperature, a removal of 98.7% was obtained, with an adsorption capacity of 260 μg As g⁻¹ zeolite. These results indicate that the conditioned zeolite is effective for the elimination of arsenate in water containing up to 1000 μg As L⁻¹ and could be suitable for removing arsenate from well water.
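The removal percentage and adsorption capacity reported above follow from standard mass-balance relations; a minimal illustrative sketch (the treated volume and zeolite mass below are hypothetical, not the study's values):

```python
# Sketch of the mass-balance relations behind the reported figures:
#   removal %             = (C0 - Ce) / C0 * 100
#   adsorption capacity q = (C0 - Ce) * V / m   [ug As per g of zeolite]

def removal_percent(c0_ug_per_l, ce_ug_per_l):
    """Percentage of As(V) removed from solution."""
    return (c0_ug_per_l - ce_ug_per_l) / c0_ug_per_l * 100.0

def adsorption_capacity(c0_ug_per_l, ce_ug_per_l, volume_l, mass_g):
    """Adsorption capacity q in ug As per g of adsorbent."""
    return (c0_ug_per_l - ce_ug_per_l) * volume_l / mass_g

# From the abstract: C0 = 1 ppm = 1000 ug/L and 98.7% removal,
# so the effluent concentration Ce is about 13 ug/L.
c0 = 1000.0
ce = c0 * (1.0 - 0.987)
print(round(removal_percent(c0, ce), 1))  # 98.7
```

The same relations recover the reported capacity once the bed mass and treated volume are known.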

Keywords: adsorption, arsenic, iron conditioning, natural zeolite

Procedia PDF Downloads 154
142 Evaluation of Electrophoretic and Electrospray Deposition Methods for Preparing Graphene and Activated Carbon Modified Nano-Fibre Electrodes for Hydrogen/Vanadium Flow Batteries and Supercapacitors

Authors: Barun Chakrabarti, Evangelos Kalamaras, Vladimir Yufit, Xinhua Liu, Billy Wu, Nigel Brandon, C. T. John Low

Abstract:

In this work, we perform electrophoretic deposition (EPD) of activated carbon on a number of substrates to prepare symmetrical coin cells for supercapacitor applications. From several recipes evaluating solvents such as isopropyl alcohol, N-Methyl-2-pyrrolidone (NMP), and acetone, binders such as polyvinylidene fluoride (PVDF), and charging agents such as magnesium chloride, we identify a working means of achieving supercapacitors that consistently reach 100 F/g. We then adapt this EPD method to deposit reduced graphene oxide on SGL 10AA carbon paper to obtain cathodic materials for testing in a hydrogen/vanadium flow battery. In addition, a self-supported hierarchical carbon nano-fibre is prepared by electrospray deposition of an iron phthalocyanine solution onto a temporary substrate, followed by carbonisation to remove heteroatoms. This process also induces a degree of nitrogen doping in the carbon nano-fibres (CNFs), which improves their catalytic performance significantly, as detailed in other publications. The CNFs are then used as catalysts by attaching them to graphite felt electrodes facing the membrane inside an all-vanadium flow battery (Scribner cell using serpentine flow distribution channels), and efficiencies as high as 60% are noted at high current densities of 150 mA/cm². About 20 charge/discharge cycles show that the CNF catalysts consistently perform better than pristine graphite felt electrodes. Following this, we also test the CNFs as an electro-catalyst in the hydrogen/vanadium flow battery (on the cathodic side, as mentioned briefly in the first paragraph) facing the membrane, based upon past studies from our group. Once again, we note consistently good efficiencies of 85% and above for CNF-modified graphite felt electrodes, in comparison to 60% for pristine felts, at a low current density of 50 mA/cm² (over 20 charge and discharge cycles of the battery). 
From this preliminary investigation, we conclude that the CNFs may be used as catalysts for other systems, such as vanadium/manganese, manganese/manganese, and manganese/hydrogen flow batteries, in the future. We are generating data for such systems at present, and further publications are expected.
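The 100 F/g figure quoted above is typically derived from galvanostatic charge/discharge data; a minimal sketch, assuming the common symmetric-cell convention (all numbers are illustrative, not taken from this work):

```python
# Gravimetric capacitance of a symmetric two-electrode cell from a
# galvanostatic discharge: C_cell = I * dt / (m_total * dV).  Under the
# common convention, the single-electrode value is four times C_cell.

def cell_capacitance_f_per_g(current_a, discharge_time_s, total_mass_g, voltage_window_v):
    """Cell capacitance in F/g, normalised to total electrode mass."""
    return current_a * discharge_time_s / (total_mass_g * voltage_window_v)

def single_electrode_capacitance_f_per_g(cell_c_f_per_g):
    """Per-electrode capacitance for a symmetric cell (factor of 4)."""
    return 4.0 * cell_c_f_per_g

# Illustrative discharge: 1 mA for 25 s over a 1 V window, 1 mg total mass.
c_cell = cell_capacitance_f_per_g(0.001, 25.0, 0.001, 1.0)
print(single_electrode_capacitance_f_per_g(c_cell))  # ~100 F/g
```

Conventions for the mass normalisation and the factor of 4 vary between groups, so reported F/g values should always state which was used.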

Keywords: electrospinning, carbon nano-fibres, all-vanadium redox flow battery, hydrogen-vanadium fuel cell, electrocatalysis

Procedia PDF Downloads 279
141 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework

Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark

Abstract:

Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult, and screening questionnaires may aid in their early detection. Our team’s prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED). This aim was achieved. We then aimed to improve treatment plan initiation for patients ≥12 years of age who screen positive for binge eating disorder (BED) within our weight management clinic (WMC) from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children’s hospital. A3, an improvement framework, was used, and a multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation for those who screened positive (goal 70%). The process measure was the ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on the current literature and treatment plan initiation based upon ADO-BED responses. PDSA 2 involved increasing documentation of treatment plans and retraining providers on the process. Pre-defined treatment plans were: 1) repeat screening in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in the WMC. The process measure showed sustained performance above goal (mean 98%). The outcome measure showed special cause improvement from a mean of 33% to 100% (n=31). Of the treatment plans provided, 45% were Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance and, ahead of our 12-month timeline, achieved our project aim. Our efforts may serve as a model for other multidisciplinary WMCs. 
Next steps may include expanding project scope to other WM programs.

Keywords: obesity, pediatrics, clinic, eating disorder

Procedia PDF Downloads 40
140 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis

Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon

Abstract:

Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence of, and risk factors associated with, the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. 
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as being associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified as being associated with the development of clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed.
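The univariate tests of association described above can be reproduced with a standard two-sided Fisher exact test; a self-contained sketch (implemented from the hypergeometric definition rather than a statistics library), applied to the reported atrial arrhythmia proportions (37.5% of 8 vs approximately 8.8% of 400):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(top-left cell = x) under the null hypothesis
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1.0 + 1e-9))

# Atrial arrhythmia: 3/8 (37.5%) in the effusion group vs roughly
# 35/400 (8.8%) in the control group, as reported above.
print(round(fisher_exact_two_sided(3, 5, 35, 365), 3))  # ~0.03
```

The cell counts for the control group are inferred from the reported percentages, so the p-value is only approximately that of the paper.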

Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage

Procedia PDF Downloads 151
139 Differences in Preschool Educators' and Parents' Interactive Behavior during a Cooperative Task with Children

Authors: Marina Fuertes

Abstract:

Introduction: In everyday life, children are asked to cooperate with others. They often perform cooperative tasks with their parents (e.g., setting the table for dinner) or at school. These tasks are significant because children may learn turn-taking in interactions, to participate as well as to accept others' participation, to trust, to respect, to negotiate, and to self-regulate their emotions. Indeed, cooperative tasks contribute to children's social, motor, cognitive, and linguistic development. Therefore, it is important to study what learning, social, and affective experiences children are offered during these tasks. In this study, we included parents and preschool educators. Parents and educators are both significant educative, interactive, and affective figures, yet their behavior has rarely been compared in studies of cooperative tasks, although they have different but complementary styles of interaction and communication. Aims: This study therefore aims to compare parents' and educators' (of both genders) interactive behavior (cooperativity, empathy, ability to challenge the child, reciprocity, elaboration) during a play/individualized situation involving a cooperative task, and to compare parents' and educators' behavior with girls and boys. Method: A quasi-experimental study with 45 educator-child dyads and 45 parent-child dyads. Children between 3 and 5 years old and with age-appropriate development participated. Adults and children were videotaped using a variety of materials (e.g., pencils, wood, wool) and tools (e.g., scissors, hammer) to produce together something of their choice during 20 minutes. Each dyad (one adult and one child) was observed and videotaped independently. Adults and children agreed and consented to participate. Experimental conditions were suitable, pleasant, and age-appropriate. 
Results: Findings indicate that parents and educators offer different learning experiences. Educators were more likely to challenge children to explore new concepts and to accept children's ideas. In turn, parents gave more support to children's actions and were more likely to use their own example to teach. Multiple regression analysis indicates that parent versus educator status predicts interactive behavior. The gender of both children and adults affected the results: adults acted differently with girls and boys (e.g., they worked more cooperatively with girls than with boys), male adults supported girls' participation more than boys', and female adults allowed boys to make more decisions than girls. Discussion: Taking our results together with past studies, we learn that parents and educators offer qualitatively different interactions and learning experiences according to adult and child gender. Thus, the same child needs to learn different cooperative strategies according to each partner's interactive patterns and the specific context. Cooperative play and individualized activities with children nevertheless generate learning opportunities and benefit children's participation and involvement.

Keywords: early childhood education, parenting, gender, cooperative tasks, adult-child interaction

Procedia PDF Downloads 311
138 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy

Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi

Abstract:

The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropogenic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the concentration of the ionic composition of the particulate and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, since one of the most important anthropogenic sources in the area, both in winter and, surprisingly, in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in an agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurement of the particle number concentration and aerosol size distribution (6-600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, along with many intense peaks attributable to episodic wood fires. 
In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the total absence of a sea salt contribution in the finer particulate (PM2.5), while in PM10 the contribution becomes appreciable only under particular wind conditions (strong wind from the north and north-east). Finally, it is interesting to note that in a small town like Rimini, in summer, the traffic source seems to be even more relevant than that measured in a much larger city (Bologna), due to tourism.

Keywords: aerosol, biomass burning, seacoast, urban area

Procedia PDF Downloads 112
137 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), that arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the removal vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, in which the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually described as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour resembles a continuous process. Using this approximation, a generic continuous model can be described for both processes. 
The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
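The linear dwell-time model that the paragraph above contrasts with the adjoint approach can be sketched on a 1-D raster; the Gaussian footprint, step target profile, and projected-gradient solver below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# 1-D sketch of the linear dwell-time model: the etched depth is the beam
# footprint (here Gaussian) weighted by the dwell time at each raster
# position, d = A @ t, and the inverse problem solves for nonnegative t.

def footprint_matrix(n_pixels, sigma=2.0):
    """A[i, j]: removal at pixel i per unit dwell at beam position j."""
    idx = np.arange(n_pixels)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)

def solve_dwell_times(target_depth, A, iters=5000):
    """Projected gradient descent on ||A t - d||^2 subject to t >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    t = np.zeros_like(target_depth)
    for _ in range(iters):
        t = np.maximum(0.0, t - step * A.T @ (A @ t - target_depth))
    return t

n = 40
A = footprint_matrix(n)
pixels = np.arange(n)
d = np.where((pixels > 10) & (pixels < 30), 1.0, 0.2)  # step depth profile
t = solve_dwell_times(d, A)
print(float(np.linalg.norm(A @ t - d)))  # residual of the achieved profile
```

The residual never vanishes at the step edges, which is exactly the limitation of the linear/shallow-etch assumption that motivates the adjoint-based formulation.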

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 265
136 From Talk to Action: Tackling Africa’s Pollution and Climate Change Problem

Authors: Ngabirano Levis

Abstract:

One of Africa’s major environmental challenges remains air pollution. In 2017, UNICEF estimated that over 400,000 children in Africa died as a result of indoor pollution, while 350 million children remain exposed to the risks of indoor pollution due to the use of biomass and the burning of wood for cooking. Indeed, over time the major causes of mortality across Africa are shifting from unsafe water, poor sanitation, and malnutrition to ambient and household indoor pollution, and greenhouse gas (GHG) emissions remain a key factor in this. In addition, OECD studies estimated that the economic cost of premature deaths due to Ambient Particulate Matter Pollution (APMP) and Household Air Pollution across Africa in 2013 was about 215 billion and 232 billion US dollars, respectively. This is not only a huge cost for a continent where over 41% of the Sub-Saharan population lives on less than 1.9 US dollars a day, but it also makes people extremely vulnerable to the negative effects of climate change and environmental degradation. Such impacts have led to extended droughts, flooding, health complications, and reduced crop yields, and hence food insecurity. Climate change therefore poses a threat to global targets on poverty reduction, health, and famine. Despite mitigation efforts, emissions such as carbon dioxide are on a generally upward trajectory across Africa; in Egypt, for instance, emission levels in 2010 had increased by over 141% from the 1990 baseline. Efforts like climate change adaptation and mitigation financing have also hit obstacles on the continent, with the international community and developed nations stressing that Africa still faces challenges of limited human, institutional, and financial systems capable of attracting climate funding from developed economies. 
By using a qualitative multi-case study method, supplemented by interviews with key actors and a comprehensive textual analysis of the relevant literature, this paper dissects the key emission and air pollutant sources and their impact on the well-being of the African people, and puts forward suggestions as well as remedial mechanisms for these challenges. The findings reveal that whereas climate change mitigation plans appear comprehensive and good on paper for many African countries, like Uganda, lingering political interference, limited research-guided planning, a lack of population engagement, irrational resource allocation, and limited system and personnel capacity have largely impeded the realization of the set targets. Recommendations are put forward to address the above climate change impacts, which threaten the food security, health, and livelihoods of the people on the continent.

Keywords: Africa, air pollution, climate change, mitigation, emissions, effective planning, institutional strengthening

Procedia PDF Downloads 61
135 Prenatal Genetic Screening and Counselling Competency Challenges of Nurse-Midwife

Authors: Girija Madhavanprabhakaran, Frincy Franacis, Sheeba Elizabeth John

Abstract:

Introduction: A wide range of prenatal genetic screening has been introduced as incidences of congenital anomalies increase even in low-risk pregnancies, and such screening is an emerging standard of care. As frontline caretakers, nurses and midwives have critical roles and responsibilities, as they work alongside couples to provide evidence-based, supportive, educative care. Increasing genetic disorders and advances in prenatal genetic screening, combined with limited genetic counselling facilities, make it urgent that nurses and midwives have the essential competencies to help couples make informed decisions. Objective: This integrative literature review aimed to explore nurse-midwives’ knowledge and role in prenatal screening and genetic counselling competency, and the challenges they face in serving all pregnant women, empowering their autonomy in decision making, and ensuring psychological comfort. Method: An electronic search using the keywords prenatal screening, genetic counselling, prenatal counselling, nurse midwife, nursing education, genetics, and genomics was done in PubMed, Scopus, Medline, and Google Scholar. Based on the inclusion criteria, 8 relevant articles were finally included. Results: The main review results suggest that nurses and midwives lack the essential support, knowledge, and confidence to provide genetic counselling and to help couples ethically in ways that ensure client autonomy and decision making. The majority of nurses and midwives reported inadequate knowledge of genetic screening and of their roles in obtaining family histories and pedigrees and providing genetic information to affected clients or high-risk families. A deficiency of well-recognized and influential clinical academic midwives in midwifery practice is also reported. The evidence recommends sound, up-to-date educational training to improve nurse-midwife competence and confidence. 
Conclusion: Overcoming the challenges to achieving informed choices about fetal anomaly screening globally is a major concern. Lack of adequate knowledge and counselling competency, communication insufficiency, and the need for education and policy are the major areas to address. Prenatal nurses’ and midwives’ knowledge of prenatal genetic screening, together with essential counselling competencies, can help the majority of pregnant women around the globe become better-informed decision-makers, enhance their autonomy, and reduce ethical dilemmas.

Keywords: challenges, genetic counselling, prenatal screening, prenatal counselling

Procedia PDF Downloads 178
134 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material that is heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of the domestic pig (Sus-scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters by gas chromatography, and their levels were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period: levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onwards, reaching 19.3 and 18.3 mg/mL, respectively, at week 6. 
Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively) and a sharp increase in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to these fluctuations in concentration, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are thus several promising ways to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids, and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a viable semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could complement the techniques already available for estimating the PMI of a corpse.
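Quantifying marker fatty acids against their respective standards, as described above, is ordinarily done by external-standard calibration of peak area against concentration; a minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# External-standard calibration: fit peak area against standard
# concentration for one marker fatty acid (e.g. C16:0), then read unknown
# concentrations off the fitted line.  All numbers are illustrative.

def calibration(concs_mg_per_ml, peak_areas):
    """Least-squares calibration line: area = slope * conc + intercept."""
    slope, intercept = np.polyfit(concs_mg_per_ml, peak_areas, 1)
    return slope, intercept

def quantify(peak_area, slope, intercept):
    """Concentration of an unknown sample from its peak area."""
    return (peak_area - intercept) / slope

standards = np.array([10.0, 25.0, 50.0, 100.0])      # mg/mL
areas = np.array([1.02e5, 2.49e5, 5.05e5, 9.98e5])   # detector counts
slope, intercept = calibration(standards, areas)
print(round(quantify(6.9e5, slope, intercept), 1))   # ~69 mg/mL
```

In practice an internal standard is often added as well to correct for injection-volume variability.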

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 112
133 Furniko Flour: An Emblematic Traditional Food of Greek Pontic Cuisine

Authors: A. Keramaris, T. Sawidis, E. Kasapidou, P. Mitlianga

Abstract:

Although the gastronomy of the Greeks of Pontus is highly prominent, it has not received the same level of scientific analysis as other local cuisines of Greece, such as that of Crete. We therefore focused our research on Greek Pontic cuisine, to shed light on its unique recipes, food products, and, ultimately, its features. The Greeks of Pontus, who lived for a long time in the northern part (Black Sea region) of contemporary Turkey and now widely inhabit northern Greece, have one of Greece's most distinguished local cuisines. Although their gastronomy is simple, it features several inspiring delicacies. It has been a century since they immigrated to Greece, yet their gastronomic culture remains a critical component of their collective identity. As a first step toward understanding Greek Pontic cuisine, we investigated the production of one of its most renowned traditional products, furniko flour. In this project, we targeted residents of Western Macedonia, a province in northern Greece with a large population of descendants of the Greeks of Pontus who are primarily engaged in agricultural activities. In this quest, we approached a descendant of the Greeks of Pontus who is involved in the production of furniko flour and who consented to show us the entire production process as we participated in it. Furniko flour is made from non-hybrid heirloom corn, harvested by hand when the moisture content of the seeds is low enough to make them suitable for roasting. Manual harvesting entails removing the cob from the plant and detaching the husks. The harvested cobs are then roasted for 24 hours in a traditional wood oven, after which they are collected and stored in sacks. The next step is to extract the seeds, which is accomplished by rubbing the cobs. The seeds should ideally be ground in a traditional stone hand mill. The result is an aromatic, dark golden furniko flour, which is used to cook havitz. 
Alongside the preparation of the furniko flour, we also recorded the cooking process of havitz (a porridge-like cornflour dish), a savory delicacy that is simple to prepare and one of the most delightful dishes in Greek Pontic cuisine. According to the research participant, havitz is a highly nutritious dish due to the ingredients of furniko flour. In addition, he argues that preparing havitz is a great way to bring families together, share stories, and revisit fond memories. In conclusion, this study illustrates the traditional preparation of furniko flour and its use in various traditional recipes as an initial effort to highlight the elements of Pontic Greek cuisine. A continuation of the current study could be the analysis of the chemical components of furniko flour to evaluate its nutritional content.

Keywords: furniko flour, Greek Pontic cuisine, havitz, traditional foods

Procedia PDF Downloads 121
132 Applications of Multi-Path Futures Analyses for Homeland Security Assessments

Authors: John Hardy

Abstract:

A range of future-oriented intelligence techniques is commonly used by states to assess their national security and develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations which inform their planning, there have been relatively few studies of the methods available or their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. Within this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby generate insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found that there are five key applications of MPFA which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second application is identifying homeland security vulnerabilities and relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third application is selecting appropriate resources and methods of action to influence individual drivers. The fourth application is prioritizing and optimizing path selection preferences and decisions.
The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective of a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts’ ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.
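One of the MPFA building blocks named above, multi-criteria decision analysis, can be illustrated with a simple weighted-sum ranking of candidate pathways. The pathway names, criteria, and weights below are hypothetical, a sketch of the general technique rather than the author's actual model:

```python
# Hypothetical multi-criteria scoring of future pathways (weighted sum).
# Criteria, weights, and scores are illustrative only.
weights = {"effectiveness": 0.5, "cost": 0.3, "risk": 0.2}

# Scores per pathway on each criterion, normalised to [0, 1]
# (higher is better, so cost/risk scores are already inverted).
pathways = {
    "harden_infrastructure": {"effectiveness": 0.8, "cost": 0.4, "risk": 0.7},
    "expand_surveillance":   {"effectiveness": 0.6, "cost": 0.7, "risk": 0.5},
    "capability_building":   {"effectiveness": 0.6, "cost": 0.5, "risk": 0.9},
}

def score(criteria):
    """Weighted-sum score of one pathway."""
    return sum(weights[c] * v for c, v in criteria.items())

# Rank pathways from most to least preferred under these weights.
ranked = sorted(pathways, key=lambda p: score(pathways[p]), reverse=True)
print(ranked[0])
```

Varying the weights and re-ranking is one way to probe how sensitive pathway preferences are to an analyst's priorities, which is the kind of prioritization the fourth application describes.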

Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning

Procedia PDF Downloads 123
131 Technology Changing Senior Care

Authors: John Kosmeh

Abstract:

Introduction – For years, senior health care and skilled nursing facilities have been plagued with the dilemma of not having the necessary tools and equipment to adequately care for senior residents in their communities. This has led to high transport rates to emergency departments and high 30-day readmission rates, costing billions of unnecessary dollars each year, as well as quality assurance issues. Our senior care telemedicine program is designed to solve this issue. Methods – We conducted a 1-year pilot program using our technology coupled with our 24/7 telemedicine program with skilled nursing facilities in different parts of the United States. We then compared transport rates and 30-day readmission rates to those of previous years before the use of our program, as well as to transport rates of other communities of similar size not using our program. This data gave us a clear and concise look at the success rate of reducing unnecessary transports and readmissions, as well as the cost savings. Results – A 94% reduction nationally in unnecessary out-of-facility transports and, to date, complete elimination of 30-day readmissions. Our virtual platform allowed us to instruct facility staff on the utilization of our tools and system as well as deliver treatment by our ER-trained providers. Delays waiting for PCP callbacks were eliminated. We were able to obtain lung, heart, and abdominal ultrasound imaging, 12-lead EKG, blood labs, auscultated lung and heart sounds, and other diagnostic tests at the bedside within minutes, providing immediate care and allowing us to treat residents within the SNF. Our virtual capabilities allowed loved ones, family members, and others holding medical power of attorney to connect with us virtually at the time of the visit and speak directly with the medical provider, providing increased confidence in the decision to treat the resident in-house.
The decline in transports and readmissions will greatly reduce governmental cost burdens, as well as fines imposed on SNFs for high 30-day readmissions, reduce the cost of Medicare A readmissions, and significantly impact the number of patients visiting overcrowded ERs. Discussion – By utilizing our program, SNFs can effectively reduce the number of unnecessary transports of residents, as well as create significant savings from loss of day rates, transportation costs, and high CMS fines. The cost savings are in the thousands monthly, but more importantly, these facilities can create a higher quality of life and medical care for residents by providing definitive care instantly with ER-trained personnel.

Keywords: senior care, long term care, telemedicine, technology, senior care communities

Procedia PDF Downloads 81
130 Interrelationship between Quadriceps' Activation and Inhibition as a Function of Knee-Joint Angle and Muscle Length: A Torque and Electro and Mechanomyographic Investigation

Authors: Ronald Croce, Timothy Quinn, John Miller

Abstract:

Incomplete activation, or activation failure, of motor units during maximal voluntary contractions is often referred to as muscle inhibition (MI), and is defined as the inability of the central nervous system to maximally drive a muscle during a voluntary contraction. The purpose of the present study was to assess the interrelationship amongst peak torque (PT), muscle inhibition (MI; incomplete activation of motor units), and voluntary muscle activation (VMA) of the quadriceps’ muscle group as a function of knee angle and muscle length during maximal voluntary isometric contractions (MVICs). Nine young adult males (mean ± standard deviation: age: 21.58 ± 1.30 years; height: 180.07 ± 4.99 cm; weight: 89.07 ± 7.55 kg) performed MVICs in random order with the knee at 15, 55, and 95° flexion. MI was assessed using the interpolated twitch technique and was estimated by the amount of additional knee extensor PT evoked by the superimposed twitch during MVICs. Voluntary muscle activation was estimated by root mean square amplitude electromyography (EMGrms) and mechanomyography (MMGrms) of agonist (vastus medialis [VM], vastus lateralis [VL], and rectus femoris [RF]) and antagonist (biceps femoris [BF]) muscles during MVICs. Data were analyzed using separate repeated measures analyses of variance. Results revealed a strong dependency of quadriceps’ PT (p < 0.001), MI (p < 0.001), and VMA (p < 0.01) on knee joint position: PT was smallest at the most shortened muscle position (15°) and greatest at mid-position (55°); MI and VMA were smallest at the most shortened muscle position (15°) and greatest at the most lengthened position (95°), with the RF showing the greatest change in VMA.
It is hypothesized that the ability to more fully activate the quadriceps at short compared to longer muscle lengths (96% activation at 15°; 91% at 55°; 90% at 95°) might partly compensate for the unfavorable force-length mechanics at the more extended position and the consequent declines in VMA (decreases in EMGrms and MMGrms muscle amplitude during MVICs) and force production (PT = 111 Nm at 15°, 217 Nm at 55°, 199 Nm at 95°). Biceps femoris EMG and MMG data showed no statistical differences (p = 0.11 and 0.12, respectively) at the joint angles tested, although values were greater at the extended position. Increased BF muscle amplitude at this position could be a mechanism by which the anterior shear and tibial rotation induced by high quadriceps’ activity are countered. Measuring and understanding the degree of MI and VMA in the quadriceps femoris (QF) muscle has particular clinical relevance because different knee-joint disorders, such as ligament injuries or osteoarthritis, increase the levels of MI observed and markedly reduce the capability for full VMA.
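The interpolated twitch estimate of voluntary activation used in this study is conventionally computed as %VA = (1 − superimposed twitch / resting twitch) × 100. A minimal sketch, with illustrative torque values rather than the study's data:

```python
def voluntary_activation(superimposed_twitch, resting_twitch):
    """Percent voluntary activation via the interpolated twitch technique.

    superimposed_twitch: extra torque (Nm) evoked by a stimulus during an MVIC
    resting_twitch: torque (Nm) evoked by the same stimulus in the relaxed muscle
    """
    return (1 - superimposed_twitch / resting_twitch) * 100

# Illustrative values only: a 2 Nm superimposed twitch against a
# 50 Nm resting twitch implies 96% voluntary activation (4% MI).
print(voluntary_activation(2.0, 50.0))
```

The complement (100 − %VA) is the muscle inhibition figure the abstract discusses.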

Keywords: electromyography, interpolated twitch technique, mechanomyography, muscle activation, muscle inhibition

Procedia PDF Downloads 328
129 Notes on Matter: Ibn Arabi, Bernard Silvestris, and Other Ghosts

Authors: Brad Fox

Abstract:

Between something and nothing, a bit of both, neither/nor, a figment of the imagination, the womb of the universe - questions of what matter is, where it exists and what it means continue to surge up from the bottom of our concepts and theories. This paper looks at divergences and convergences, intimations and mistranslations, in a lineage of thought that begins with Plato’s Timaeus, travels through Arabic Spain and Syria, finally to end up in the language of science. Up to the 13th century, philosophers in Christian France based such inquiries on a questionable and fragmented translation of the Timaeus by Calcidius, with a commentary that conflated the Platonic concept of khora (‘space’ or ‘void’) with Aristotle’s hyle (‘primal matter’ as derived from ‘wood’ as a building material). Both terms were translated by Calcidius as silva. For 700 years, this was the only source for philosophers of matter in the Latin-speaking world. Bernard Silvestris, in his Cosmographia, exemplifies the concepts developed before new translations from Arabic began to pour into the Latin world from such centers as the court of Toledo. Unlike their counterparts across the Pyrenees, 13th-century philosophers in Muslim Spain had access to a broad vocabulary for notions of primal matter. The prolific and visionary theologian, philosopher, and poet Muhyiddin Ibn Arabi could draw on the Ikhwan Al-Safa’s 10th-century renderings of Aristotle, which translated the Greek hyle as the everyday Arabic word maddah, still used for building materials today. He also often used the simple transliteration of hyle as hayula, probably taken from Ibn Sina. The Prophet’s son-in-law Ali talked of dust in the air, invisible until it is struck by sunlight. Ibn Arabi adopted this dust - haba - as an expression for an original metaphysical substance, nonexistent but susceptible to manifesting forms.
Ibn Arabi compares the dust to a phoenix, because we have heard about it and can conceive of it, but it has no existence unto itself and can be described only in similes. Elsewhere he refers to it as quwwa wa salahiyya - pure potentiality and readiness. The final portion of the paper will compare Bernard and Ibn Arabi’s notions of matter to the recent ontology developed by theoretical physicist and philosopher Karen Barad. Reading Barad’s work alongside that of Niels Bohr, it will argue that there is a rich resonance between Ibn Arabi’s paradoxical conceptions of matter and the quantum vacuum fluctuations verified by recent lab experiments. The inseparability of matter and meaning in Barad recalls Ibn Arabi’s original response to Ibn Rushd’s question: Does revelation offer the same knowledge as rationality? ‘Yes and No,’ Ibn Arabi said, ‘and between the yes and no spirit is divided from matter and heads are separated from bodies.’ Ibn Arabi’s double affirmation continues to offer insight into our relationship to momentary experience at its most fundamental level.

Keywords: Karen Barad, Muhyiddin Ibn Arabi, primal matter, Bernard Silvestris

Procedia PDF Downloads 411
128 If the Architecture Is in Harmony With Its Surrounding, It Reconnects People With Nature

Authors: Aboubakr Mashali

Abstract:

Context: The paper focuses on the relationship between architecture and nature, emphasizing the importance of incorporating natural elements in design to reconnect individuals with the natural environment. It highlights the positive impact of harmonious architecture on people's well-being and the environment, as well as the concept of sustainable architecture. Research aim: The aim of this research is to showcase how nature can be integrated into architectural designs, ultimately reestablishing a connection between humans and the natural world. Methodology: The research employs an in-depth approach, delving into the subject matter through extensive research and the analysis of case studies. These case studies provide practical examples and insights into successful architectural designs that have effectively incorporated nature. Findings: The findings suggest that when architecture and nature coexist harmoniously, it creates a positive atmosphere and enhances people's well-being. The use of materials obtained from nature in their raw or minimally refined form, such as wood, clay, stone, and bamboo, contributes to a natural atmosphere within the built environment. Additionally, a color palette inspired by nature, consisting of earthy tones, green, brown, and rusty shades, further enhances the harmonious relationship between individuals and their surroundings. The paper also discusses the concept of sustainable architecture, where materials used are renewable, and energy consumption is minimal. It acknowledges the efforts of organizations such as the US Green Building Council in promoting sustainable design practices. Theoretical importance: This research contributes to the understanding of the relationship between architecture and nature and highlights the importance of incorporating natural elements into design. It emphasizes the potential of nature-friendly architecture to create greener, more resilient, and more sustainable cities.
Data collection and analysis procedures: The researcher gathered data through comprehensive research, examining existing literature, and studying relevant case studies. The analysis involved studying the successful implementation of nature in architectural design and its impact on individuals and the environment. Question addressed: The research addresses the question of how nature can be incorporated into architectural designs to reconnect humans with nature. Conclusion: In conclusion, this research highlights the significance of architecture being in harmony with its surroundings, which in turn should be in harmony with nature. By incorporating nature in architectural designs, individuals can rediscover their connection with nature and experience its positive impact on their well-being. The use of natural materials and a color palette inspired by nature further enhances this relationship. Additionally, embracing sustainable design practices contributes to the creation of greener and more resilient cities. This research underscores the importance of integrating nature-friendly architecture to foster a healthier and more sustainable future.

Keywords: nature, architecture, reconnecting, green cities, sustainable, open spaces, landscape

Procedia PDF Downloads 52
127 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need to supply cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own home. Over the last decade, a variety of devices for self-monitoring of sleep became available in the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination.
Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch-by-epoch to calculate the agreement between the two devices using Cohen’s Kappa. All analysis was performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M= 448 minutes for sleep logs compared to M= 406 and M= 345 minutes for the DREEM and Z-Machine, respectively; both ps<0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p<0.001) and awakenings (p<0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were for detecting N2 and REM sleep, while N3 had a high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
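The epoch-by-epoch agreement reported in this study is quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation on two short hypothetical hypnogram label sequences (not the study's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of epochs scored identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical 30-second-epoch hypnograms
# (W = wake, N2 = light sleep, N3 = deep sleep, R = REM):
dev1 = ["W", "N2", "N2", "N3", "N3", "R", "R", "W"]
dev2 = ["W", "N2", "N2", "N3", "N2", "R", "W", "W"]
print(round(cohens_kappa(dev1, dev2), 3))
```

In practice the two device hypnograms must first be aligned to a common epoch grid, as the abstract notes, before kappa is meaningful.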

Keywords: DREEM, EEG, sleep monitoring, Z-Machine

Procedia PDF Downloads 91
126 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid network supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid co-incident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates the greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires evaluation of many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, and space allocation. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that impact upon performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, and also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m², compared to 22 W/m² for more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully.
Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that could, aggregated across multiple dwellings, provide a significant and valuable reduction in demand for grid-generated electricity during expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
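The storage capacities compared above follow from the sensible-heat relation E = ρ·V·cp·ΔT for a concrete layer. A minimal sketch with typical textbook values for dense concrete (the density, specific heat, geometry, and temperature rise below are illustrative, not the paper's data):

```python
# Sensible heat stored in a concrete fabric layer: E = rho * V * cp * dT.
# Material properties are typical textbook values for dense concrete,
# not the values used in the paper.
def stored_energy_kwh(area_m2, thickness_m, delta_t_k,
                      density=2400.0, specific_heat=880.0):
    """Energy (kWh) stored in a concrete slab heated by delta_t_k kelvin.

    density in kg/m^3, specific_heat in J/(kg K).
    """
    volume = area_m2 * thickness_m
    energy_j = density * volume * specific_heat * delta_t_k
    return energy_j / 3.6e6  # J -> kWh

# Illustrative case: 10 m^2 of 50 mm concrete charged by 20 K.
print(round(stored_energy_kwh(10.0, 0.05, 20.0), 2))
```

Halving the layer thickness halves the capacity but, as the abstract notes, greatly shortens the charging time, which is the trade-off driving the 25-50 mm recommendation.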

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 157
125 Upgrading of Bio-Oil by Bio-Pd Catalyst

Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood

Abstract:

This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate the emissions of greenhouse gases and depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes, as the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Bacteria-supported catalyst manufacture is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalysts after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C with a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying, and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%) as well as a gas phase (<5%) and coke (<2%). Study of the effects of temperature and time upon the process showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures in the region of 350 °C and longer residence times up to 5 h.
However, the minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but it did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation, and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature. However, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bi-metallic catalysts.
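The degree of deoxygenation quoted above is conventionally defined as DOD = (1 − O_product/O_feed) × 100, with the oxygen contents taken from elemental analysis of the feed and the upgraded organic phase. A minimal sketch with illustrative oxygen contents (not the paper's measured values):

```python
def degree_of_deoxygenation(o_feed_wt_pct, o_product_wt_pct):
    """Degree of deoxygenation (%) from oxygen content of feed and product.

    Inputs are oxygen mass fractions (wt%) from elemental analysis.
    """
    return (1 - o_product_wt_pct / o_feed_wt_pct) * 100

# Illustrative values: a bio-oil with 40 wt% oxygen upgraded
# to a product with 16 wt% oxygen gives 60% deoxygenation,
# matching the upper end of the range reported in the abstract.
print(round(degree_of_deoxygenation(40.0, 16.0)))
```

A mass-based variant (using total grams of oxygen in feed and product) is also common; the percentage form above assumes comparable feed and product masses.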

Keywords: bio-oil, catalyst, palladium, upgrading

Procedia PDF Downloads 159
124 Valorisation of Food Waste Residue into Sustainable Bioproducts

Authors: Krishmali N. Ekanayake, Brendan J. Holland, Colin J. Barrow, Rick Wood

Abstract:

Globally, more than one-third of all food produced is lost or wasted, equating to 1.3 billion tonnes per year. Around 31.2 million tonnes of food waste are generated across the production, supply, and consumption chain in Australia. Generally, food waste management processes adopt environmentally friendly and more sustainable approaches such as composting, anaerobic digestion, and energy-recovery technologies. However, unavoidable and non-recyclable food waste ends up in landfill or incineration, which involve many undesirable impacts on and challenges for the environment. A biorefinery approach contributes to a waste-minimising circular economy by converting food and other organic biomass waste into valuable outputs, including feeds, nutrition, fertilisers, and biomaterials. As a solution, Green Eco Technologies has developed a food waste treatment process using the WasteMaster system. The system uses charged oxygen and moderate temperatures to convert food waste, without bacteria, additives, or water, into a virtually odour-free, much-reduced quantity of reusable residual material. In the context of a biorefinery, the WasteMaster dries and mills food waste into a form suitable for storage or downstream extraction/separation/concentration to create products. The focus of the study is to determine the nutritional composition of WasteMaster-processed residue with a view to developing aquafeed ingredients. The global aquafeed industry is projected to reach a high-value market in the future, with strong demand for aquafeed products. Therefore, food waste can be utilized for aquaculture feed development, reducing landfill. This framework will lessen the requirement for raw crop cultivation for aquafeed development and reduce the aquaculture footprint. In the present study, the nutritional elements of the processed residue are consistent with the input food waste type, showing that the WasteMaster does not affect the expected nutritional distribution.
The macronutrient retention values for protein, lipid, and nitrogen-free extract (NFE) were >85%, >80%, and >95%, respectively. Sensitive food components, including omega-3 and omega-6 fatty acids, amino acids, and phenolic compounds, were found intact in each residue material. Preliminary analysis suggests price comparability with current aquafeed ingredient costs, supporting economic feasibility. The results suggest high potential for aquafeed development, with the residue used as 5 to 10% of the ingredients to replace or partially substitute other less sustainable ingredients in a biorefinery setting. Our aim is to improve the sustainability of aquaculture and reduce the environmental impacts of food waste.
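The retention figures above express, for each macronutrient, the fraction of the input mass that survives drying and milling. A minimal sketch of that calculation with hypothetical masses (the values below are illustrative, not the study's measurements):

```python
def retention_pct(mass_in_g, mass_out_g):
    """Nutrient retention (%) through a drying/milling process."""
    return mass_out_g / mass_in_g * 100

# Hypothetical example: grams of protein per kg of input food waste
# before and after processing.
protein_in, protein_out = 120.0, 105.0
print(retention_pct(protein_in, protein_out))
```

Computing this per nutrient (protein, lipid, NFE) against the composition of the raw input is how retention values like >85% are established.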

Keywords: biorefinery, food waste residue, input, WasteMaster

Procedia PDF Downloads 44
123 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads

Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon

Abstract:

The Philippines, composed of many islands, is connected by approximately 8030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries. Vibration analysis is one such technique. The natural frequency and vibration of a bridge are design criteria in ensuring the stability, safety, and economy of the structure. Its natural frequency must not be so low as to cause discomfort, nor so high that the structure is overly stiff, making it both costly and heavy. It is well known that the stiffer the member is, the more load it attracts. The frequency must also not match the vibration caused by traffic loads. If it does, resonance occurs. Vibration that matches a system's frequency will generate excitation, and when this exceeds the member's limit, structural failure will occur. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways. The model was verified and adjusted based on the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions.
The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. From these dynamic characteristics of the system, threshold criteria can be established and fragility curves can be estimated. This study, conducted under the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and the structural plans for the bridge are readily available.
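Dynamic fragility, as defined above, is the probability that a monitored response exceeds a limit state. Its simplest empirical estimate is the fraction of observed response peaks above the threshold criterion. A minimal sketch with hypothetical sensor readings (the peak values and threshold below are illustrative, not the bridge's data):

```python
# Empirical fragility: fraction of monitored response peaks exceeding a
# limit-state threshold. All numbers here are hypothetical.
def empirical_fragility(peak_responses, threshold):
    """Estimate P(response > limit state) from monitored peaks."""
    exceedances = sum(r > threshold for r in peak_responses)
    return exceedances / len(peak_responses)

# Hypothetical peak accelerations (m/s^2) recorded under traffic loads:
peaks = [0.8, 1.2, 0.9, 1.5, 1.1, 0.7, 1.4, 1.0]
print(empirical_fragility(peaks, threshold=1.3))
```

Full fragility curves extend this idea by fitting the exceedance probability (often with a lognormal model) across a range of load intensities rather than a single threshold.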

Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads

Procedia PDF Downloads 252
122 Structural Analysis of a Composite Wind Turbine Blade

Authors: C. Amer, M. Sahin

Abstract:

The design of an optimised horizontal-axis 5-meter-long wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project aiming to fulfil the requirements of high torque efficiency from wind production and to optimise the structural components in the lightest and strongest way possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade using CATIA software to obtain the two surfaces, namely the suction and pressure sides of the blade, in which there is a hat-shaped fibre-reinforced plastic spar beam (the so-called chassis) starting at 0.5 m from the root of the blade, extending up to 4 m, and filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, which has twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabric, a polymeric foam core material, and a steel/balsa-wood combination for the root connection parts. The glass fabrics are applied using hand wet lay-up lamination with epoxy resin: METYX L600E10C-0, with unidirectional continuous fibres, and METYX XL800E10F, with a tri-axial architecture of fibres in the 0°, +45°, and -45° orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software, with various meshes created on each structural part using shell elements for all surface geometries; lumped masses were added to simulate extra adhesive locations.
For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are used. Taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis determines the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal-mode analysis performed to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The results of this study are then used as a benchmark prior to modal testing, in which experiments on the produced wind turbine rotor blade confirmed the analytical calculations.
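As a rough sanity check on finite element results of this kind, a blade can be idealised as a uniform Euler-Bernoulli cantilever, for which the tip deflection under self-weight and the first bending frequency have closed forms. The sketch below uses hypothetical placeholder properties, not the paper's blade data:

```python
import math

def cantilever_tip_deflection(q, L, E, I):
    """Tip deflection (m) of a uniform cantilever under its own
    distributed weight q (N/m): delta = q*L^4 / (8*E*I)."""
    return q * L**4 / (8 * E * I)

def cantilever_first_frequency(E, I, rho_A, L):
    """First bending natural frequency (Hz) of a uniform cantilever:
    f1 = (1.875^2 / (2*pi*L^2)) * sqrt(E*I / rho_A)."""
    return (1.875**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / rho_A)

# Hypothetical blade-like properties (placeholders, not measured values).
L = 5.0           # blade length, m
E = 20e9          # effective flexural modulus, Pa
I = 2e-5          # second moment of area, m^4
rho_A = 6.0       # mass per unit length, kg/m
q = rho_A * 9.81  # self-weight per unit length, N/m

print(f"tip deflection ~ {cantilever_tip_deflection(q, L, E, I)*1000:.1f} mm")
print(f"first bending frequency ~ {cantilever_first_frequency(E, I, rho_A, L):.1f} Hz")
```

A real blade is tapered, twisted, and layered, so these closed forms only bound the order of magnitude that the FE model should reproduce.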

Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing

Procedia PDF Downloads 413
121 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event

Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani

Abstract:

A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures, making it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design of both operator enclosures resistant to projectile penetration and saw chain resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine’s enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning. Standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data were collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). Saw chains used were 89-drive-link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph. Both rotational and translational kinetic energy were calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links.
Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination having a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g, corresponding to a drive link only, to a maximum of 18.91 g, corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s, with a rotational speed of approximately 14000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
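The two energy terms above follow the standard expressions KE_t = ½mv² and KE_r = ½Iω². The sketch below applies them to the reported mass and speeds; note that the moment of inertia is a thin-rod guess with an assumed link length, and that the abstract's "14000 r/s" is read here as rad/s, so the rotational figure is illustrative only and is not claimed to reproduce the paper's ~600 J averages:

```python
def kinetic_energies(m, v, omega, I):
    """Translational and rotational kinetic energy (J) of a projectile:
    KE_t = 1/2 m v^2, KE_r = 1/2 I omega^2 (SI units)."""
    return 0.5 * m * v**2, 0.5 * I * omega**2

# Illustrative inputs loosely based on the abstract. The inertia uses a
# thin-rod approximation (I = m*l^2/12) with an assumed link length, and
# 'r/s' is interpreted as rad/s -- both are assumptions, not source data.
m = 10.33e-3   # kg, drive-link / tie-strap / drive-link projectile
v = 270.0      # m/s, measured translational speed
omega = 14000  # rad/s (the unit 'r/s' in the abstract is ambiguous)
l = 0.05       # m, assumed projectile length
I = m * l**2 / 12

ke_t, ke_r = kinetic_energies(m, v, omega, I)
print(f"KE_trans ~ {ke_t:.0f} J, KE_rot ~ {ke_r:.0f} J")
```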

Keywords: chain shot, timber harvesters, safety, testing

Procedia PDF Downloads 134
120 Changing the Landscape of Fungal Genomics: New Trends

Authors: Igor V. Grigoriev

Abstract:

Understanding of biological processes encoded in fungi is instrumental in addressing future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool to understand these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes,’ aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of ‘macro’ fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from early-diverging fungi and has resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. A genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics.
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.

Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics

Procedia PDF Downloads 187
119 Civilian and Military Responses to Domestic Security Threats: A Cross-Case Analysis of Belgium, France, and the United Kingdom

Authors: John Hardy

Abstract:

The domestic security environment in Europe has changed dramatically in recent years. Since January 2015, a significant number of domestic security threats that emerged in Europe were located in Belgium, France and the United Kingdom. While some threats were detected in the planning phase, many also resulted in terrorist attacks. Authorities in all three countries instituted special or emergency measures to provide additional security to their populations. Each country combined an additional policing presence with a specific military operation to contribute to a comprehensive security response to domestic threats. This study presents a cross-case analysis of three countries’ civilian and military responses to domestic security threats in Europe. Each case study features a unique approach to combining civilian and military capabilities in similar domestic security operations during the same time period and threat environment. The research design focuses on five variables relevant to the relationship between civilian and military roles in each security response. These are the distinction between policing and military roles, the legal framework for the domestic deployment of military forces, prior experience in civil-military coordination, the institutional framework for threat assessments, and the level of public support for the domestic use of military forces. These variables examine the influence of domestic social, political, and legal factors on the design of combined civil-military operations in response to domestic security threats. Each case study focuses on a specific operation: Operation Vigilant Guard in Belgium, Operation Sentinel in France, and Operation Temperer in the United Kingdom. 
The results demonstrate that the level of distinction between policing and military roles and the existence of a clear and robust legal framework for the domestic use of force by military personnel significantly influence the design and implementation of civilian and military roles in domestic security operations. The findings of this study indicate that Belgium, France and the United Kingdom experienced different design and implementation challenges for their domestic security operations. Belgium and France initially had less-developed legal frameworks for deploying the military in domestic security operations than the United Kingdom. This was offset by public support for enacting emergency measures and the strength of existing civil-military coordination mechanisms. The United Kingdom had a well-developed legal framework for integrating civilian and military capabilities in domestic security operations. However, its experiences in Ireland also made the government more sensitive to public perceptions regarding the domestic deployment of military forces.

Keywords: counter-terrorism, democracy, homeland security, intelligence, militarization, policing

Procedia PDF Downloads 118
118 Distinguishing Substance from Spectacle in Violent Extremist Propaganda through Frame Analysis

Authors: John Hardy

Abstract:

Over the last decade, the world has witnessed an unprecedented rise in the quality and availability of violent extremist propaganda. This phenomenon has been fueled primarily by three interrelated trends: rapid adoption of online content mediums by creators of violent extremist propaganda, increasing sophistication of violent extremist content production, and greater coordination of content and action across violent extremist organizations. In particular, the self-styled ‘Islamic State’ attracted widespread attention from its supporters and detractors alike by mixing shocking video and imagery with substantive ideological and political content. Although this practice was widely condemned for its brutality, it proved to be effective at engaging with a variety of international audiences and encouraging potential supporters to seek further information. The reasons for the noteworthy success of this kind of shock-value propaganda content remain unclear, despite many governments’ attempts to produce counterpropaganda. This study examines violent extremist propaganda distributed by five terrorist organizations between 2010 and 2016, using material released by the Al Hayat Media Center of the Islamic State, Boko Haram, Al Qaeda, Al Qaeda in the Arabian Peninsula, and Al Qaeda in the Islamic Maghreb. The time period covers all issues of the infamous publications Inspire and Dabiq, as well as the most shocking video content released by the Islamic State and its affiliates. The study uses frame analysis to distinguish thematic from symbolic content in violent extremist propaganda by contrasting the ways that substantive ideological issues were framed against the use of symbols and violence to garner attention and to stylize propaganda.
The results demonstrate that thematic content focuses significantly on diagnostic frames, which explain violent extremist groups’ causes, and prognostic frames, which propose solutions for addressing or rectifying the cause shared by groups and their sympathizers. Conversely, symbolic violence is primarily stylistic and rarely linked to thematic issues or motivational framing. Frame analysis provides a useful preliminary tool for disentangling substantive ideological and political content from stylistic brutality in violent extremist propaganda. This provides governments and researchers with a method for better understanding the framing and content used to design narratives and propaganda materials that promote violent extremism around the world. An increased capacity to process and understand violent extremist narratives will further enable governments and non-governmental organizations to develop effective counternarratives that promote non-violent solutions to extremists’ grievances.

Keywords: countering violent extremism, counternarratives, frame analysis, propaganda, terrorism, violent extremism

Procedia PDF Downloads 163
117 Comparative Analysis of Biodegradation on Polythene and Plastics Buried in Fadama Soil Amended With Organic and Inorganic Fertilizer

Authors: Baba John, Abdullahi Mohammed

Abstract:

The aim of this research is to comparatively analyse the biodegradation of polythene and plastics buried in fadama soil amended with organic and inorganic fertilizer. Physico-chemical properties of the samples were determined, and the bacteria and fungi implicated in the biodegradation were identified and enumerated. Physico-chemical properties before the analysis indicated a pH range of 4.28–5.80, while the percentages of organic carbon and organic matter were highest in the cow dung samples, at 3.89% and 6.69% respectively. The total nitrogen percentage was highest in chicken droppings (0.68), while the availability of phosphorus (P), sodium (Na), potassium (K), and magnesium (Mg) was highest in the F-soil (control), with values of 37 ppm, 1.63 cmol kg⁻¹, 0.35 cmol kg⁻¹, and 1.18 cmol kg⁻¹ respectively, except for calcium, which was highest in cow dung (5.80 cmol kg⁻¹). Physico-chemical properties of the samples after the analysis indicated a pH range of 4.6–5.80; the percentages of organic carbon and organic matter were highest in fadama soil mixed with fertilizer, at 0.7% and 1.2% respectively. The total nitrogen percentage was highest (0.56) in fadama soil mixed with poultry droppings. The availability of sodium (Na), potassium (K), and calcium (Ca) was highest in fadama soil mixed with cow dung, with values of 0.64 cmol kg⁻¹, 2.07 cmol kg⁻¹, and 3.36 cmol kg⁻¹ respectively. The percentage weight loss of polythene and plastic bags after nine months in fadama soil mixed with poultry droppings was 11.9% for polythene and 6.0% for plastics. Weight loss in fadama soil mixed with cow dung was 18.1% for polythene and 4.7% for plastics. Weight loss in fadama soil mixed with fertilizer (NPK) was 7.4% for polythene and 3.3% for plastics, while the percentage weight loss of polythene and plastics after nine months of burial in fadama soil (control) was 3.5% and 0.0% respectively.
The bacteria species isolated from the fadama soil and the organic and inorganic fertilizers before amendment include S. aureus, Micrococcus sp., Streptococcus pyogenes, Pseudomonas aeruginosa, Bacillus subtilis, and Bacillus cereus. The fungi species include Aspergillus niger, Aspergillus fumigatus, Aspergillus flavus, Fusarium sp., Mucor sp., Penicillium sp., and Candida sp. The bacteria species isolated and characterized after nine months of seeding include S. aureus, Micrococcus sp., S. pyogenes, P. aeruginosa, and B. subtilis. The fungi species are A. niger, A. flavus, A. fumigatus, Mucor sp., Penicillium sp., and Fusarium sp. The results of this study indicate that plastic materials can be degraded in fadama soil irrespective of whether the soil is amended or not. The period of composting also has a significant impact on the rate at which polythene and plastics are degraded.
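The degradation figures quoted above are simple gravimetric percentages. A minimal sketch of the calculation, using hypothetical specimen masses (not the study's raw data) chosen to reproduce the reported 18.1% cow-dung figure:

```python
def percent_weight_loss(initial_g, final_g):
    """Gravimetric degradation: percentage of mass lost during burial."""
    return 100.0 * (initial_g - final_g) / initial_g

# Hypothetical masses for a polythene strip buried nine months in
# cow-dung-amended soil; only the 18.1% result comes from the abstract.
print(round(percent_weight_loss(10.00, 8.19), 1))
```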

Keywords: Fadama, fertilizer, plastic and polythene, biodegradation

Procedia PDF Downloads 511
116 Anthelmintic Property of Pomegranate Peel Aqueous Extraction Against Ascaris Suum: An In-vitro Analysis

Authors: Edison Ramos, John Peter V. Dacanay, Milwida Josefa Villanueva

Abstract:

Soil-transmitted helminth (STH) infections are among the most prevalent neglected tropical diseases (NTDs). They are commonly found in warm, humid regions and developing countries, particularly in rural areas with poor hygiene. Occasionally, human hosts exposed to pig manure may harbor Ascaris suum parasites without experiencing any symptoms. To address the significant issue of helminth infections, an effective anthelmintic is necessary. However, the effectiveness of various medications as anthelmintics can be reduced due to mutations. In recent years, there has been a growing interest in using plants as a source of medicine due to their natural origin, accessibility, affordability, and potential lack of complications. Herbal medicine has been advocated as an alternative treatment for helminth infections, especially in underdeveloped countries, considering the numerous adverse effects and drug resistance associated with commercially available anthelmintics. Medicinal plants are considered suitable replacements for current anthelmintics due to their historical usage in treating helminth infections. The objective of this research was to investigate the effects of an aqueous extraction of pomegranate peel (Punica granatum L.) as an anthelmintic on female Ascaris suum in vitro. The in vitro assay involved observing the motility of Ascaris suum in different concentrations (25%, 50%, 75%, and 100%) of pomegranate peel aqueous extraction, along with mebendazole as a positive control. The results indicated that as the concentration of the extract increased, the time required to paralyze the worms decreased. At 25% concentration, the average time for paralysis was 362.0 minutes, which decreased to 181.0 minutes at 50% concentration, 122.7 minutes at 75% concentration, and 90.0 minutes at 100% concentration. The time of death for the worms was directly proportional to the concentration of the pomegranate peel extract.
Death was observed at an average time of 240.7 minutes at 75% concentration and 147.7 minutes at 100% concentration. The findings suggest that as the concentration of pomegranate peel extract increases, the time required for paralysis and death of Ascaris suum decreases. This indicates a concentration-dependent relationship, where higher concentrations of the extract exhibit greater effectiveness in inducing paralysis and causing the death of the worms. These results emphasize the potential anthelmintic properties of pomegranate peel extract and its ability to effectively combat Ascaris suum infestations. There was no significant difference in the anthelmintic effectiveness between the pomegranate peel extract and Mebendazole. These findings highlight the potential of pomegranate peel extract as an alternative anthelmintic treatment for Ascaris suum infections. The researchers recommend determining the optimal dose and administration route to maximize the effectiveness of pomegranate peel as an anthelmintic therapeutic against Ascaris suum.
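The reported paralysis times fall off roughly in inverse proportion to concentration. The sketch below fits that simple model t ≈ k/c to the four data points from the abstract; the inverse-proportional model and the least-squares fit are our own illustration, not the authors' analysis:

```python
# Paralysis times (minutes) at each extract concentration (%),
# taken from the abstract.
conc = [25, 50, 75, 100]
times = [362.0, 181.0, 122.7, 90.0]

# Least-squares estimate of k for the model t = k / c:
# minimizing sum((t_i - k/c_i)^2) gives k = sum(t_i/c_i) / sum(1/c_i^2).
k = sum(t / c for t, c in zip(times, conc)) / sum(1 / c**2 for c in conc)
for c, t in zip(conc, times):
    print(f"{c:3d}%: observed {t:6.1f} min, model {k / c:6.1f} min")
```

The close agreement (within a few minutes at every concentration) suggests the paralysis response is near-perfectly inverse to dose over the tested range, consistent with the concentration-dependence the authors describe.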

Keywords: pomegranate peel, aqueous extract, anthelmintic, in vitro

Procedia PDF Downloads 84
115 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines’ economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factors in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially over the central part of the country, which results in a high positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was carried out to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. They are, however, more representative of regional temperature than a substitute for station-observed air temperature.
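The Mann-Kendall test used above is straightforward to implement: it counts concordant minus discordant pairs and compares the result against its variance under the no-trend null. A minimal sketch (without the tie correction, for clarity) on a synthetic warming series:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and the
    normal-approximation Z score (no tie correction, for clarity)."""
    n = len(x)
    # S = number of increasing pairs minus number of decreasing pairs.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under H0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A synthetic monotonically warming monthly series (0.01 degC/month)
# yields a large positive Z, i.e. a highly significant upward trend.
series = [26.0 + 0.01 * t for t in range(120)]
s, z = mann_kendall(series)
print(s, round(z, 2))
```

In practice a tie correction and seasonal (monthly) variants are used for real station records; this sketch only illustrates the core statistic behind the reported 0.0105 °C/year trend.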

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 295
114 Strength Performance and Microstructure Characteristics of Natural Bonded Fiber Composites from Malaysian Bamboo

Authors: Shahril Anuar Bahari, Mohd Azrie Mohd Kepli, Mohd Ariff Jamaludin, Kamarulzaman Nordin, Mohamad Jani Saad

Abstract:

Formaldehyde release from wood-based panel composites can be highly toxic and may increase risks to human health as well as environmental problems. A new bio-composite product without synthetic adhesive or resin could be developed to reduce these problems. Apart from formaldehyde release, adhesive is also considered expensive, especially in the manufacturing of composite products. A natural bonded composite can be defined as a panel product composed of any type of cellulosic material without the addition of synthetic resins; bonding is achieved instead through the activation of chemical constituents within the cellulosic materials. The pulp- and paper-making method (chemical pulping) was used as a general guide in the composite manufacturing. This method also generally reduces the manufacturing cost and the risk of formaldehyde emission, and it has the potential to be used as an alternative technology in the fiber composites industry. In this study, a natural bonded bamboo fiber composite was produced from virgin Malaysian bamboo fiber (Bambusa vulgaris). The bamboo culms were chipped and digested into fiber using this pulping method. The black liquor collected from the pulping process was used as a natural binding agent in the composition: the fibers were mixed and blended with black liquor without any resin addition. The amount of black liquor used per composite board was 20%, with approximately 37% solid content. The composites were fabricated using a hot press machine at two different board densities, 850 and 950 kg/m³, with two hot-pressing times, 25 and 35 minutes. Samples of the composites from the different densities and hot-pressing times were tested in flexure and internal bonding (IB) for strength performance according to British Standards. The modulus of elasticity (MOE) and modulus of rupture (MOR) were determined in the flexural test, while the tensile force perpendicular to the surface was recorded in the IB test.
Results show that the strength performance of the composites with 850 kg/m³ density was significantly higher than that of the 950 kg/m³ density, especially for samples from the 25-minute hot-pressing time. The strength performance of composites from the 25-minute hot-pressing time was generally greater than that from 35 minutes. The maximum mean values of strength performance were recorded for composites with 850 kg/m³ density and a 25-minute pressing time: the maximum mean values for MOE, MOR, and IB were 3251.84, 16.88, and 0.27 MPa, respectively. Only the MOE result conformed to the high-density fiberboard (HDF) requirement (2700 MPa) in the British Standard for fiberboard specifications, BS EN 622-5:2006. The microstructure characteristics of the composites can also be related to their strength performance: the fiber damage observed in composites of 950 kg/m³ density and overheating of the black liquor led to low strength properties, especially in the IB test.
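The MOE and MOR figures above come from standard centre-point bending formulas: MOR = 3FL/(2bd²) and MOE = L³m/(4bd³), where F is the peak load, m the slope of the load-deflection curve, L the span, b the width, and d the depth. The specimen numbers below are hypothetical, chosen only to show the arithmetic, not the paper's measurements:

```python
def flexural_properties(F_max, slope, span, width, depth):
    """Three-point (centre-point) bending results in MPa.
    F_max: peak load (N); slope: load-deflection slope (N/mm);
    span, width, depth: specimen dimensions (mm)."""
    mor = 3 * F_max * span / (2 * width * depth**2)
    moe = span**3 * slope / (4 * width * depth**3)
    return mor, moe

# Hypothetical specimen: 150 mm span, 50 mm wide, 6 mm deep board.
mor, moe = flexural_properties(F_max=135.0, slope=40.0,
                               span=150.0, width=50.0, depth=6.0)
print(f"MOR ~ {mor:.2f} MPa, MOE ~ {moe:.0f} MPa")
```

Because depth enters as d² and d³, small thickness errors dominate the uncertainty in both moduli, which is one reason panel test standards specify specimen dimensions tightly.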

Keywords: bamboo fiber, natural bonded, black liquor, mechanical tests, microstructure observations

Procedia PDF Downloads 241