Search results for: urban project stakeholders
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9569

1109 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was used mostly in the context of NPP safety. With the rising interest in "green" and, in particular, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial hydrogen production is planned to be carried out by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden outburst of combustible gas into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture among numerous pipelines, reactor vessels, and fitting frames. Even the ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures far below the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes of channel diameter along the flame propagation path, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters obtained from experiments at specific experimental facilities are not general, and their application to other facilities can only be conventional and qualitative. Yet conducting experimental deflagration and detonation investigations for each specific industrial facility project, in order to determine safe placement of infrastructure units, is not feasible due to the high cost and hazard, whereas numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that can describe reacting flows in domains with complex geometry is promising.
The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work presents the results of the development process, together with a comparison of numerical simulation results against experimental series on flame propagation in shock tubes with orifice plates.

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 82
1108 Management of Third Stage Labour in a Rural Ugandan Hospital

Authors: Brid Dinnee, Jessica Taylor, Joseph Hartland, Michael Natarajan

Abstract:

Background: The third stage of labour (TSL) can be complicated by post-partum haemorrhage (PPH), which has a significant impact on maternal mortality and morbidity. In Africa, 33.9% of maternal deaths are attributable to PPH. To minimise this figure, current recommendations for the developing world are that all women receive active management of the third stage of labour (AMTSL). The aim of this project was to examine TSL practice in a rural Ugandan hospital, highlight any deviation from best practice, and identify barriers to change in resource-limited settings, as part of a 4th-year medical student External Student Selected Component field trip. Method: Five key elements from the current World Health Organisation (WHO) guidelines on AMTSL were used to develop an audit tool. All daytime vaginal deliveries over a two-week period in July 2016 were audited. In addition, a retrospective comparison of PPH rates between 2006 (when ubiquitous use of intramuscular oxytocin for management of TSL was introduced) and 2015 was performed. Results: Eight vaginal deliveries were observed; at all of them, intramuscular oxytocin was administered and controlled cord traction used. Against WHO recommendation, all umbilical cords were clamped within one minute, and no infants received early skin-to-skin contact. In only one case was uterine massage performed after placental delivery. The retrospective comparison identified a 40% reduction in the total number of PPHs from November 2006 to November 2015. Maternal deaths per delivery fell from 2% to 0.5%. Discussion: Maternal mortality and PPH are still major issues in developing countries. Maternal mortality due to PPH can be reduced by good TSL practice, but not all of its elements are used in low-resource settings, and there is a notable difference in outcomes between the developed and developing world.
At Kitovu Hospital, there has been a reduction in maternal mortality and in the number of PPHs following the introduction of IM oxytocin. To improve these rates further, staff education and further government funding are key.

Keywords: post-partum haemorrhage, PPH, third stage labour, Uganda

Procedia PDF Downloads 199
1107 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies

Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs

Abstract:

Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow cost-effective and more precise analyses of relevant sequences, although many such techniques require a known genome, and purchasing probes from a company can be costly. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts across a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon can be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, and compared to previously generated phylogenies across this group of gobies. This will provide an assessment of the technique's utility in cross-species studies and in analyzing low genetic variation within species, as is the case for E. kristinae. This method could provide an economical way to expand population genetic and evolutionary biology studies of non-model organisms.

Keywords: coastal lagoons, endangered species, non-model organism, target capture method

Procedia PDF Downloads 180
1106 Sediment Transport Monitoring in the Port of Veracruz Expansion Project

Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando

Abstract:

The construction of most coastal infrastructure developments around the world is planned considering wave height, current velocities, and river discharges; however, little effort has been devoted to surveying sediment transport during dredging or to the modification of currents outside ports or marinas during and after construction. This study presents a complete survey during the construction of one of the largest ports in the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP, and a combination of model outputs were used at the Veracruz port construction site to describe the hourly sediment transport and current modifications in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low-wave-energy system with a tidal range of up to 0.40 m. The results show a two-gyre current pattern within the bay: the north side of the bay has an anticyclonic gyre, while the southern part shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model, and the weekly data obtained with the towed ADCP across the entire bay. The trajectories were carefully tracked because the bay is surrounded by coral reef structures, which are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging and the placement of rock for the breakwater, sediments were added locally (< 2500 m²) and local currents dispersed them in less than 4 h, whereas the river input in the middle of the bay and the sewage treatment plant may add more than 10 times this amount during a rainy day or during the tourist season. Finally, the coastline mapped seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port in the northern part, owing to the two-subsystem division of the bay.

Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring

Procedia PDF Downloads 221
1105 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of an energy amplifier, or accelerator driven system (ADS), involves coupling a particle accelerator with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and, consequently, safe exploitation. The harder neutron spectrum ensures better incineration of the actinides. The almost universal opinion is that the optimal beam for an ADS is protons with energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and for ion beams from deuteron to neon with energies between 0.25 and 2 AGeV (GeV per nucleon). The target is an assembly of metallic U-Pu-Zr fuel rods, 150 cm long, in a bath of lead-bismuth eutectic coolant. A beryllium converter 110 cm long is used to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam, values planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent accelerating the beam. The energy released is obtained through simulation with the code Geant4; the energy spent is calculated by scaling from the accelerator-efficiency data for the reference particle (proton). The analysis covers the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV; at this energy, G reaches a plateau around 8, with a net power production of 120 MW (megawatt). From alpha particles upward, ion beams achieve a higher G than 1.5 GeV protons.
A beam of 0.25 AGeV ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for an ADS. Beams of ¹⁶O or ²⁰Ne at 0.75 AGeV, accelerated in an accelerator of the same length as that for 1.5 GeV protons, produce approximately 900 MW net power, with a gain of 23-25. The study of the evolution of the isotopic composition during irradiation shows that increasing the power production shortens the period between refuelings. For a net power of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
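The gain and net-power figures above can be reproduced in outline. A minimal sketch, using the beam parameters quoted in the abstract; the per-particle energy release passed in is a hypothetical stand-in for a value that would come from a Geant4 simulation:

```python
E_BEAM_GEV = 1.5            # energy per proton at the abstract's optimum
INTENSITY = 1.25e16         # protons per second (ESS-like value from the abstract)
EFFICIENCY = 0.18           # total accelerator (wall-plug to beam) efficiency
GEV_TO_J = 1.602176634e-10  # joules per GeV

def energy_gain(e_released_gev):
    """G = energy released in the target / energy spent to accelerate the beam,
    both expressed per particle."""
    e_spent_gev = E_BEAM_GEV / EFFICIENCY  # wall-plug energy per particle
    return e_released_gev / e_spent_gev

def net_power_mw(e_released_gev):
    """Net thermal power: energy released in the target minus the accelerator's
    wall-plug power draw."""
    released_w = e_released_gev * INTENSITY * GEV_TO_J
    spent_w = (E_BEAM_GEV / EFFICIENCY) * INTENSITY * GEV_TO_J
    return (released_w - spent_w) / 1e6
```

With these parameters, a per-particle release corresponding to G = 8 yields a net power on the order of the 120 MW reported for 1.5 GeV protons.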

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 132
1104 Management as a Proxy for Firm Quality

Authors: Petar Dobrev

Abstract:

There is no agreed-upon definition of firm quality. While profitability and stock performance often serve as popular proxies for quality, in this project we aim to identify quality without relying on a firm's financial statements or stock returns as selection criteria. Instead, we use firm-level data on management practices across small to medium-sized U.S. manufacturing firms from the World Management Survey (WMS) to measure firm quality. Each firm in the WMS dataset is assigned a mean management score from 0 to 5, with higher scores identifying better-managed firms. This management score serves as our proxy for firm quality and is the sole criterion we use to separate firms into high-quality and low-quality portfolios. We define high-quality (low-quality) firms as those with a management score one standard deviation above (below) the mean. To study whether this proxy for firm quality can identify better-performing firms, we link this data to Compustat and the Center for Research in Security Prices (CRSP) to obtain firm-level data on financial performance and monthly stock returns, respectively. We find that from 1999 to 2019 (our sample period), firms in the high-quality portfolio are consistently more profitable, with higher operating profitability and return on equity than low-quality firms. High-quality firms also exhibit a lower risk of bankruptcy, reflected in a higher Altman Z-score. Next, we test whether the stocks of firms in the high-quality portfolio earn superior risk-adjusted excess returns. We regress the monthly excess returns of each portfolio on the Fama-French 3-factor, 4-factor, and 5-factor models, the betting-against-beta factor, and the quality-minus-junk factor. We find no statistically significant differences in excess returns between the portfolios, suggesting that stocks of high-quality (well-managed) firms do not earn superior risk-adjusted returns compared to low-quality (poorly managed) firms.
In short, our proxy for firm quality, the WMS management score, can identify firms with superior financial performance (higher profitability and reduced risk of bankruptcy). However, our management proxy cannot identify stocks that earn superior risk-adjusted returns, suggesting no statistically significant relationship between managerial quality and stock performance.
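The portfolio construction described above can be sketched in a few lines. The firm records below are hypothetical illustrations, not WMS data:

```python
import statistics

# Hypothetical records: (firm_id, mean management score 0-5, monthly stock return)
firms = [
    ("A", 4.1, 0.012), ("B", 2.2, 0.004), ("C", 3.0, 0.007),
    ("D", 4.4, 0.015), ("E", 1.8, -0.002), ("F", 3.1, 0.006),
]

scores = [score for _, score, _ in firms]
mean, sd = statistics.mean(scores), statistics.stdev(scores)

# Sole selection criterion: a management score one standard deviation
# above (below) the mean assigns a firm to the high- (low-) quality portfolio.
high_quality = [f for f in firms if f[1] >= mean + sd]
low_quality = [f for f in firms if f[1] <= mean - sd]

def mean_return(portfolio):
    """Equal-weighted average monthly return of a portfolio."""
    return statistics.mean(r for _, _, r in portfolio)
```

The resulting portfolio return series would then be regressed on the factor models named above.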

Keywords: excess stock returns, management, profitability, quality

Procedia PDF Downloads 87
1103 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in understanding energy markets. Nevertheless, it is rarely approached systematically from a research perspective. A precise understanding of what policies exist, together with their intended outcomes, geographical extent, duration, evolution, etc., would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking: there may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two limitations of energy policy research. Our approach is to represent policies as a quantitative library of the specific policy measures contained within a set of legal documents. Each measure is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China; however, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of the project, we are exploring semantic processing using automated computer algorithms: automated coding can provide more convenient input data for human coders and serve as a quality-control option. Our initial findings suggest that the methodology utilized in the KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document.
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to a larger scope of policy documents across geographies and energy sectors.
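The entry structure described above, one measure per record with mixed qualitative and quantitative attributes, might be sketched as follows. The field names are illustrative assumptions, not the actual KEPD schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PolicyMeasure:
    """A single database entry: one specific measure from a legal document."""
    document: str                         # source legal document
    measure: str                          # the provision itself
    sector: str                           # e.g. "coal"
    geography: str                        # e.g. "national"
    year_enacted: int
    year_repealed: Optional[int] = None   # None = still in force
    target_value: Optional[float] = None  # quantitative target, if any
    target_unit: Optional[str] = None
    stakeholders: List[str] = field(default_factory=list)

def in_force(m: PolicyMeasure, year: int) -> bool:
    """Supports tracing policy evolution: is the measure active in a given year?"""
    ended = m.year_repealed is not None and year >= m.year_repealed
    return m.year_enacted <= year and not ended
```

Queries over such records would support the structural comparisons and stakeholder analyses mentioned above.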

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 317
1102 Assessing the Applicability of Kevin Lynch’s Framework of ‘the Image of the City’ in the Case of a Walled City of Jaipur

Authors: Jay Patel

Abstract:

This research investigates the 'image' of the city and asks whether this 'image' holds significance that can be changed. In 'The Image of the City', Kevin Lynch develops a framework that breaks the city's image down into five physical elements. These elements (paths, edges, nodes, districts, and landmarks), according to Lynch, assess the legibility of urbanscapes; they emerged from his perception-based study of three US cities (Jersey City, Los Angeles, and Boston). The aim of this research is to investigate whether Lynch's framework can be applied in an Indian context and, if so, whether the imageability of Indian cities can be depicted through Lynch's physical elements or whether the framework demands an extension by adding or subtracting a physical attribute. The walled city of Jaipur was selected for this project, as it is considered one of the most visionary planned cities in India. The other significant reason for choosing Jaipur is that it is a historically planned city with solid historical, touristic, and local importance, offering an opportunity to understand the application of Lynch's elements to the city's image. In other words, it allows examining how the disadvantages of a city's implicit programme (its relics of bygone eras) can be converted into assets by improving the imageability of the city. To obtain data, a structured, semi-open-ended interview method was chosen, explicitly to gain qualitative data from users rather than quantitative data from closed-ended questions. This allowed an in-depth assessment of the applicability of Kevin Lynch's framework and of what needs to be added.
The interviews conducted in Jaipur yielded varied inferences that differed from the expected outcomes, highlighting the need to extend Lynch's physical elements to capture the city's image. While analyzing the data, several attributes were found to define the image of Jaipur. These were categorized into two groups: physical aspects (streets and arcade entities, natural features, temples, and temporary/informal activities) and associational aspects (history, culture and tradition, aids to wayfinding, and intangible aspects).

Keywords: imageability, Kevin Lynch, people’s perception, assessment, associational aspects, physical aspects

Procedia PDF Downloads 192
1101 Numerical Investigation of the Operating Parameters of the Vertical Axis Wind Turbine

Authors: Zdzislaw Kaminski, Zbigniew Czyz, Tytus Tulwin

Abstract:

This paper describes the geometrical model, algorithm, and CFD simulation of airflow around a Vertical Axis Wind Turbine rotor. The solver ANSYS Fluent was applied for the numerical simulation. Numerical simulation, unlike experiments, enables design assumptions to be validated without the costly preparation of a model or prototype for a bench test. This research focuses on the rotor designed according to patent no. PL 219985, whose blades are capable of modifying their working surfaces, i.e., the surfaces absorbing wind kinetic energy. The operation of this rotor is based on regulating the blade angle α between the top and bottom parts of blades mounted on an axis: if angle α increases, the working surface that absorbs wind kinetic energy also increases. CFD calculations enable comparison of the aerodynamic characteristics of forces acting on the rotor working surfaces and specification of rotor operating parameters such as torque or turbine assembly power output. This paper is part of research to improve the efficiency of the rotor assembly, and it investigates the impact of the blade angle of the working blades on power output as a function of rotor torque, specific rotational speed, and wind speed. The simulation was made for wind speeds ranging from 3.4 m/s to 6.2 m/s and blade angles of 30°, 60°, and 90°. The simulation enables the creation of a mathematical model describing how the aerodynamic forces acting on each blade of the studied rotor are generated, and the simulation results are compared with wind tunnel ones. This investigation makes it possible to estimate the growth in turbine power output as the blade angle changes. The regulation of blade angle α enables a smooth change in turbine rotor power, which serves as a safety measure in strong winds: decreasing blade angle α reduces the risk of damaging or destroying a turbine still in operation, without the complete rotor braking used in Horizontal Axis Wind Turbines.
This work has been financed by the Polish Ministry of Science and Higher Education.
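The link between rotor torque, rotational speed, and power output used in the characteristics above is the standard mechanical relation P = Tω. A minimal sketch; the rotor radius and operating values in the usage note are illustrative, not taken from the paper:

```python
import math

def rotor_power_w(torque_nm, rpm):
    """Mechanical power output from rotor torque and rotational speed: P = T * omega."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed in rad/s
    return torque_nm * omega

def tip_speed_ratio(rpm, radius_m, wind_speed_ms):
    """Ratio of blade tip speed to free-stream wind speed, a standard VAWT
    operating parameter."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega * radius_m / wind_speed_ms
```

For example, a hypothetical rotor of 1 m radius turning at 60 rpm in the abstract's top wind speed of 6.2 m/s operates at a tip speed ratio just above 1.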

Keywords: computational fluid dynamics, mathematical model, numerical analysis, power, renewable energy, wind turbine

Procedia PDF Downloads 332
1100 Immobilizing Quorum Sensing Inhibitors on Biomaterial Surfaces

Authors: Aditi Taunk, George Iskander, Kitty Ka Kit Ho, Mark Willcox, Naresh Kumar

Abstract:

Bacterial infections of biomaterial implants and medical devices account for 60-70% of all hospital-acquired infections (HAIs). Treatment or removal of infected devices results in high patient mortality and morbidity, along with increased hospital expenses. Moreover, the absence of effective strategies and the rapid development of antibacterial resistance have made device-related infections extremely difficult to treat. Therefore, in this project we have developed biomaterial surfaces bearing antibacterial compounds that inhibit biofilm formation by interfering with the bacterial communication mechanism known as quorum sensing (QS). This study focuses on the covalent attachment of potent QS-inhibiting compounds, halogenated furanones (FUs) and dihydropyrrol-2-ones (DHPs), onto glass surfaces. The FUs were attached by photoactivating azide groups on the surface, and the acid-functionalized DHPs were immobilized on amine surfaces via EDC/NHS coupling. The modified surfaces were tested in vitro against pathogenic organisms such as Staphylococcus aureus and Pseudomonas aeruginosa using confocal laser scanning microscopy (CLSM). Successful attachment of the compounds to the substrates was confirmed by X-ray photoelectron spectroscopy (XPS) and contact angle measurements. The antibacterial efficacy was assessed, and a significant reduction in bacterial adhesion and biofilm formation was observed on the FU- and DHP-coated surfaces. The activity of the coating depended on the substituent present on the phenyl group of the DHP compound: for example, the ortho-fluorophenyl DHP (DHP-2) exhibited a 79% reduction in bacterial adhesion against S. aureus, and the para-fluorophenyl DHP (DHP-3) a 70% reduction against P. aeruginosa. The results were comparable to those for DHP-coated surfaces prepared in an earlier study via the Michael addition reaction.
The FUs and DHPs thus retained their in vitro antibacterial efficacy after covalent attachment via azide chemistry. This approach is a promising strategy for developing efficient antibacterial biomaterials to reduce device-related infections.

Keywords: antibacterial biomaterials, biomedical device-related infections, quorum sensing, surface functionalization

Procedia PDF Downloads 260
1099 Health Monitoring of Composite Pile Construction Using Fiber Bragg Gratings Sensor Arrays

Authors: B. Atli-Veltin, A. Vosteen, D. Megan, A. Jedynska, L. K. Cheng

Abstract:

Composite materials combine the advantages of being lightweight and possessing high strength, which is of particular interest for the development of large constructions, e.g., aircraft, space applications, wind turbines, etc. One shortcoming of composite materials is the complex nature of their failure mechanisms, which makes it difficult to predict the remaining lifetime. Therefore, condition and health monitoring are essential when composite material is used for critical parts of a construction. Different types of sensors are used or being developed to monitor composite structures, including ultrasonic, thermography, shearography, and fiber-optic sensors. The first three technologies are complex and mostly used for measurements in the laboratory or during maintenance of the construction. Optical fiber sensors can be surface-mounted or embedded in the composite construction, providing the unique advantage of in-operation measurement of mechanical strain and other parameters of interest. This is identified as a promising technology for Structural Health Monitoring (SHM) or Prognostic Health Monitoring (PHM) of composite constructions. Among the different fiber-optic sensing technologies, the Fiber Bragg Grating (FBG) sensor is the most mature and widely used. FBG sensors can be realized in an array configuration, with many FBGs in a single optical fiber. In the current project, different aspects of using embedded FBGs for composite wind turbine monitoring are investigated. The activities are divided into two parts. Firstly, an FBG-embedded carbon composite laminate is subjected to tensile and bending loading to investigate the response of FBGs placed in different orientations with respect to the fiber. Secondly, the use of an FBG sensor array for temperature and strain sensing and monitoring is demonstrated on a 5 m long scale model of a glass fiber mono-pile. Two different FBG types are used: special in-house fibers and off-the-shelf ones.
The results from the first part of the study show that the FBG sensors survive the conditions during production of the laminate. The test results from the tensile and bending experiments indicate that the sensors successfully respond to changes in strain. The measurements from the sensors will be correlated with strain gauges placed on the surface of the laminates.
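The strain response being tested rests on the standard FBG relation Δλ_B = λ_B (1 − p_e) ε at constant temperature. A minimal sketch, assuming a typical photo-elastic coefficient for silica fiber rather than a value measured in this project:

```python
P_E = 0.22  # typical effective photo-elastic coefficient of silica fiber (assumed)

def bragg_shift_pm(lambda_b_nm, microstrain):
    """Wavelength shift (in pm) of an FBG under axial strain, temperature held
    constant: delta_lambda = lambda_B * (1 - p_e) * strain."""
    return lambda_b_nm * (1.0 - P_E) * (microstrain * 1e-6) * 1e3  # nm -> pm

def strain_from_shift_ue(shift_pm, lambda_b_nm):
    """Invert the relation: recover strain (in microstrain) from a measured shift."""
    return (shift_pm * 1e-3) / (lambda_b_nm * (1.0 - P_E)) * 1e6
```

For a grating at 1550 nm this gives the familiar sensitivity of roughly 1.2 pm per microstrain, which is how shifts measured during the tensile and bending tests map back to strain.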

Keywords: Fiber Bragg Gratings, embedded sensors, health monitoring, wind turbine towers

Procedia PDF Downloads 241
1098 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays, digital transformation is a hot topic in both service and production businesses. Companies that want to stay alive in the following years must change how they do business. Industry leaders have started to extend backbone technologies such as ERP (Enterprise Resource Planning) with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence), and more. Selecting the appropriate technology for the related business problem is also a hot topic. Beyond this, operating in the modern environment and fulfilling rapidly changing customer expectations require a digital transformation of the business, one that changes the way the business runs and affects how companies do their business. Although the term digital transformation is trendy, the literature is limited and covers the philosophy rather than a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell us how to do so, and the huge investments, together with blurry definitions and concepts, scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap is developed from insights gained through a literature review, semi-structured interviews, and expert views to explore and identify the crucial steps. We introduce the roadmap in the form of eight main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; and Business Transformation, comprising a total of 11 sub-steps with examples. This study also emphasizes four dimensions of digital transformation: readiness assessment, building organizational infrastructure, building technical infrastructure, and maturity assessment. Finally, the roadmap relates the steps to the three main terms used in digital transformation literacy: Digitization, Digitalization, and Digital Transformation.
The resulting model shows that business processes and organizational issues should be resolved before technology decisions and digitization. Companies can start their journey with these solid steps, using the proposed roadmap to increase the success of their project implementation. The roadmap is also adaptable to relevant Industry 4.0 and enterprise application projects and will be useful for companies in persuading top management to invest. Our results can serve as a baseline for further research on readiness assessment and maturity assessment.

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 157
1097 Sorghum Resilience and Sustainability under Limiting and Non-limiting Conditions of Water and Nitrogen

Authors: Muhammad Tanveer Altaf, Mehmet Bedir, Waqas Liaqat, Gönül Cömertpay, Volkan Çatalkaya, Celaluddin Barutçular, Nergiz Çoban, Ibrahim Cerit, Muhammad Azhar Nadeem, Tolga Karaköy, Faheem Shehzad Baloch

Abstract:

Food production needs to almost double by 2050 in order to feed around 9 billion people around the globe. Plant production relies largely on fertilizers, which also play a major role in environmental pollution. In addition, climatic conditions are unpredictable, and the earth is expected to face severe drought conditions in the future. Therefore, water and fertilizers, especially nitrogen, are considered the main constraints on future food security. To face these challenges, developing integrative approaches for germplasm characterization and selecting resilient genotypes that perform under limiting conditions is crucial for effective breeding to meet food requirements under climate change scenarios. This study is part of a European Research Area Network (ERANET) project for the characterization of a diversity panel of 172 sorghum accessions and six hybrids as control cultivars under limiting (+N/-H2O, -N/+H2O) and non-limiting (+N/+H2O) conditions. The study was planned to characterize sorghum diversity in relation to Resource Use Efficiency (RUE), with special attention to harnessing the genotype-by-environment (GxE) interaction from physiological and agronomic perspectives. Experiments were conducted at Adana, in a Mediterranean climate, using an augmented design, and data on various agronomic and physiological parameters were recorded. Plentiful diversity was observed in the sorghum panel, and significant variations were seen under the limiting water and nitrogen conditions in comparison with the control experiment. Potential genotypes with the best performance under limiting conditions were identified. Whole-genome resequencing was performed on the entire germplasm under investigation for diversity analysis. GWAS analysis will be performed using the genotypic and phenotypic data, and linked markers will be identified.
The results of this study will inform the adaptation and improvement of sorghum under climate change conditions for future food security.

Keywords: germplasm, sorghum, drought, nitrogen, resources use efficiency, sequencing

Procedia PDF Downloads 70
1096 The Impact of COVID-19 Waste on Aquatic Organisms: Nano/Microplastics and Molnupiravir in Salmo trutta Embryos and Larvae

Authors: Živilė Jurgelėnė, Vitalijus Karabanovas, Augustas Morkvėnas, Reda Dzingelevičienė, Nerijus Dzingelevičius, Saulius Raugelė, Boguslaw Buszewski

Abstract:

The short- and long-term effects of the COVID-19 antiviral drug molnupiravir and micro/nanoplastics on the early development of Salmo trutta were investigated using accumulation and exposure studies. Salmo trutta were used as standardized test organisms in toxicity studies of COVID-19 waste contaminants. 2D/3D imaging was performed using confocal fluorescence spectral imaging microscopy to assess the uptake, bioaccumulation, and distribution of molnupiravir and micro/nanoplastic complexes in live fish. Our results demonstrated that molnupiravir may interact with micro/nanoplastics and modify their spectroscopic parameters and their toxicity to S. trutta embryos and larvae. The 0.2 µm microplastics at a concentration of 10 mg/L were found to be more stable in aqueous media than the 0.02 µm and 2 µm polymeric particles. This study demonstrated that polymeric particles can adsorb molnupiravir present in mixtures and modify the accumulation of molnupiravir in Salmo trutta embryos and larvae. In addition, 2D/3D confocal fluorescence imaging showed that polymeric particles alone hardly accumulated and could not penetrate the outer tissues of the test organisms. However, co-exposure to micro/nanoplastics and molnupiravir significantly enhanced the ability of the polymeric particles to accumulate on and penetrate the surface tissues of fish in early development. Exposure to molnupiravir at a concentration of 2 g/L, and co-exposure to micro/nanoplastics and molnupiravir, did not change survival in the early stages of Salmo trutta development, but a reduction in heart rate and a decrease in gill ventilation were observed. Statistical analysis confirmed that micro/nanoplastics used in combination with molnupiravir enhance the toxicity of the latter to embryos and larvae.
This research has received funding from the European Regional Development Fund (project No 13.1.1-LMT-K-718-05-0014) under a grant agreement with the Research Council of Lithuania (LMTLT), and it was funded as part of the European Union’s measure in response to the COVID-19 pandemic.

Keywords: fish, micro/nanoplastics, molnupiravir, toxicity

Procedia PDF Downloads 86
1095 Low Energy Technology for Leachate Valorisation

Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo

Abstract:

Landfills present long-term threats to soil, air, groundwater, and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and it changes over time (from weeks to years) because the landfilled waste remains biologically highly active. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction, and the age of the landfill. Therefore, leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since each leachate has a distinct composition, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always exhibit high organic concentration and conductivity and contain heavy metals and ammonia nitrogen. Leachate can affect the current and future quality of water bodies through uncontrolled infiltration. Therefore, the control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in-situ" using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass, and residual heat), which minimizes the carbon footprint of the process. The quality of the final effluent is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes.
A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.

Keywords: forward osmosis, landfills, leachate valorization, solar evaporation

Procedia PDF Downloads 198
1094 A Post-Occupancy Evaluation of LEED-Certified Residential Communities Using Structural Equation Modeling

Authors: Mohsen Goodarzi, George Berghorn

Abstract:

Despite the rapid growth in the number of green building and community development projects, the long-term performance of these projects has not yet been sufficiently evaluated from the users' point of view. This is partially due to the lack of post-occupancy evaluation tools available for this type of project. In this study, a post-construction evaluation model is developed to evaluate the relationship between the perceived performance and the satisfaction of residents in LEED-certified residential buildings and communities. To develop this evaluation model, a preliminary five-factor model was developed based on existing models and residential satisfaction theories. Each factor of the model included several measures adopted from LEED certification systems such as LEED-BD+C New Construction, LEED-BD+C Multifamily Midrise, and LEED-ND, as well as from the UC Berkeley Center for the Built Environment survey tool. The model included four predictor variables (factors): perceived building performance (8 measures), perceived infrastructure performance (9 measures), perceived neighborhood design (6 measures), and perceived economic performance (4 measures), and one dependent variable (factor), residential satisfaction (6 measures). An online survey was then conducted to collect data from residents of LEED-certified residential communities (n=192), and the validity of the model was tested through confirmatory factor analysis (CFA). After modifying the CFA model, 26 of the initial 33 measures were retained and entered into a structural equation model (SEM) to find the relationships between perceived building performance, infrastructure performance, neighborhood design, economic performance, and residential satisfaction.
The results of the SEM showed that the perceived building performance was the most influential factor in determining residential satisfaction in LEED-certified communities, followed by the perceived neighborhood design. On the other hand, perceived infrastructure performance and perceived economic performance did not show any significant relationship with residential satisfaction in these communities. This study can benefit green building researchers by providing a model for the evaluation of the long-term performance of these projects. It can also provide opportunities for green building practitioners to determine priorities for future residential development projects.
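The structural part of such a model estimates path coefficients from the predictor factors to satisfaction. Setting aside the latent-variable measurement step, the flavor of that estimation can be sketched as an ordinary least-squares regression of a satisfaction score on composite factor scores. The data and factor names below are made up for illustration and are not taken from the study; a real SEM estimates measurement and structural parameters simultaneously.

```python
# Hedged sketch: regress a satisfaction score on composite factor scores by
# solving the normal equations (X^T X) b = X^T y with Gauss-Jordan elimination.

def solve(A, b):
    """Solve the linear system A x = b by Gauss-Jordan elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Least-squares coefficients for design matrix X (first column = intercept)."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Made-up factor scores: [intercept, building performance, neighborhood design]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [3, 5, 2, 4, 6]  # satisfaction constructed here as 3 + 2*building - 1*neighborhood
coeffs = ols(X, y)   # recovers the path-coefficient idea: [3, 2, -1]
```

The recovered coefficients play the role of the SEM path coefficients linking each perceived-performance factor to residential satisfaction.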

Keywords: green building, residential satisfaction, perceived performance, confirmatory factor analysis, structural equation modeling

Procedia PDF Downloads 230
1093 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against flooding. If river water levels rise abruptly, especially in winter, one can expect massive extra load on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This provides the motivation for the numerical analysis of a fluid-structure interaction application comprising a flexible slender-shaped membrane and free-surface water flow, in which an accelerated heavy piece of flotsam approaches the membrane. In this context, the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the explicit and implicit finite element solvers of Abaqus, available as products of the SIMULIA software suite. On the other hand, how the free-surface water flow behaves in response to moving structures has been investigated using the finite volume solver of Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be interconnected powerfully. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain, where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.
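An implicit partitioned coupling of the kind described is, at its core, a fixed-point iteration over the interface data within each time step: the fluid and structure solvers are called in turn, and the exchanged values are under-relaxed until the interface residual drops below a tolerance. A minimal sketch with two one-line stand-in "solvers" (the real solvers here are Star-CCM+ and Abaqus coupled through the CSE; the toy responses below are assumptions for illustration only):

```python
# Toy stand-ins for the partitioned solvers: the fluid returns an interface
# force given the wall displacement, the structure returns a displacement
# given the interface force.

def fluid_solver(displacement):
    """Illustrative fluid response: interface force depends on wall position."""
    return 10.0 - 2.0 * displacement

def structure_solver(force):
    """Illustrative structural response: displacement under the interface force."""
    return force / 5.0

def implicit_coupling_step(omega=0.5, tol=1e-10, max_iter=100):
    """One time step of implicit coupling: iterate the solver pair with an
    under-relaxed update until the interface residual falls below tol."""
    d = 0.0  # initial guess for the interface displacement
    for i in range(max_iter):
        f = fluid_solver(d)          # fluid solve with the current wall shape
        d_new = structure_solver(f)  # structure solve under the new load
        if abs(d_new - d) < tol:     # interface residual check
            return d_new, i + 1
        d = (1 - omega) * d + omega * d_new  # under-relaxed update
    return d, max_iter

d_star, iters = implicit_coupling_step()
# converges to the coupled fixed point d = (10 - 2*d) / 5, i.e. d = 10/7
```

The under-relaxation factor `omega` is what stabilizes the exchange when the two subproblems are strongly coupled; explicit (single-pass) coupling corresponds to stopping after the first iteration.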

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 384
1092 Building Information Modeling Acting as Protagonist and Link between the Virtual Environment and the Real-World for Efficiency in Building Production

Authors: Cristiane R. Magalhaes

Abstract:

Advances in Information and Communication Technologies (ICT) have led to changes in different sectors, particularly in the architecture, engineering, construction, and operation (AECO) industry. In this context, the advent of BIM (Building Information Modeling) has brought a number of opportunities to the digital architectural design process, introducing integrated design concepts that impact the development, elaboration, coordination, and management of ventures. The project scope has begun to contemplate, from its original stage, the third dimension by means of virtual environments (VEs) composed of models containing different specialties, substituting two-dimensional products. The possibility of simulating the construction process of a venture in a VE starts at the beginning of the design process, offering, through new technologies, many possibilities beyond geometric digital modeling. This is a significant change and relates not only to form but also to how information is appropriated in architectural and engineering models and exchanged among professionals. In order to achieve the main objective of this work, the Design Science Research method will be adopted to elaborate an artifact containing strategies for the application and use of ICTs in BIM flows, from pre-construction to the execution of the building. This article discusses and investigates how BIM can be extended to the construction site, acting as a protagonist and link between the virtual environment and the real world, as well as its contribution to the integration of the value chain and the consequent increase in efficiency in the production of the building. The virtualization of the design process has reached high levels of development through the use of BIM. Therefore, it is essential that the lessons learned from the virtual models be transposed to actual building production, increasing precision and efficiency.
Thus, this paper discusses how the Fourth Industrial Revolution has impacted property development and how BIM could act as the propellant and link between the virtual environment and real production for the structuring of flows, information management, and efficiency in this process. The results obtained are partial and not definitive as of the date of this publication. This research is part of the development of a doctoral thesis, which focuses on the impact of digital transformation on the construction of residential buildings in Brazil.

Keywords: building information modeling, building production, digital transformation, ICT

Procedia PDF Downloads 118
1091 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication due to human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, the accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images capturing the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), to build the database, exploit the optical characteristics of algae, and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
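The feature-extraction step of a CNN boils down to sliding small kernels over an image and summing element-wise products at each position. A pure-Python sketch of one such "valid" 2D convolution, applied to a toy radiance grid with an edge-detecting kernel (the actual network, its learned kernels, and the GOCI inputs are of course far larger; the values here are illustrative):

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation): slide the kernel over the
    image and sum element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# Toy "radiance" grid with a bright 2x2 patch in the middle
radiance = [[0, 0, 0, 0],
            [0, 9, 9, 0],
            [0, 9, 9, 0],
            [0, 0, 0, 0]]

# Prewitt-style vertical-edge kernel: responds where brightness changes left-to-right
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

features = conv2d(radiance, edge_kernel)
# nonzero responses mark the vertical edges of the bright patch
```

In a trained CNN, many such kernels are learned from data and stacked in layers; the resulting feature maps are what the downstream ANN regressor consumes.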

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 181
1090 Biophysical Assessment of the Ecological Condition of Wetlands in the Parkland and Grassland Natural Regions of Alberta, Canada

Authors: Marie-Claude Roy, David Locky, Ermias Azeria, Jim Schieck

Abstract:

It is estimated that up to 70% of the wetlands in the Parkland and Grassland natural regions of Alberta have been lost to various land-use activities, along with the ecosystem functions and services they once provided. The wetlands that remain are often embedded in a matrix of human-modified habitats, and despite efforts to protect them, the effects of land use on wetland condition and function remain largely unknown. We used biophysical field data and remotely sensed human footprint data collected at 322 open-water wetlands by the Alberta Biodiversity Monitoring Institute (ABMI) to evaluate the impact of surrounding land use on the physicochemical characteristics and plant functional traits of wetlands. Eight physicochemical parameters were assessed: wetland water depth, water temperature, pH, salinity, dissolved oxygen, total phosphorus, total nitrogen, and dissolved organic carbon. Three plant functional traits were evaluated: 1) origin (native and non-native), 2) life history (annual, biennial, and perennial), and 3) habitat requirements (obligate-wetland and obligate-upland). Land-use intensity was quantified within a 250-meter buffer around each wetland. Ninety-nine percent of the wetlands in the Grassland and Parkland regions of Alberta have land-use activities in their surroundings, most of them agriculture-related. Total phosphorus in wetlands increased with the cover of surrounding agriculture, while salinity, total nitrogen, and dissolved organic carbon were positively associated with the degree of soft-linear (e.g., pipelines, trails) land uses. The abundance of non-native and annual/biennial plants increased with the amount of agriculture, while urban-industrial land use lowered the abundance of native, perennial, and obligate-wetland plants. Our study suggests that the land-use types surrounding wetlands affect the physicochemical and biological conditions of wetlands.
This research suggests that reducing human disturbances through reclamation of wetland buffers may enhance the condition and function of wetlands in agricultural landscapes.

Keywords: wetlands, biophysical assessment, land use, grassland and parkland natural regions

Procedia PDF Downloads 326
1089 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China

Authors: Weikang Peng

Abstract:

The innovation corridor is an important spatial carrier for promoting regional collaborative innovation, and its development is a process of spatial re-organization of regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which spans seven municipal districts in Jiangsu Province, as empirical evidence. Based on multi-source spatial big data from 2010, 2016, and 2022, it applies triangulated irregular networks (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to examine the spatial organization and functional-structural evolution of the corridor and their correlation with the structural evolution of its collaborative innovation network. The results show, first, that the development of innovation patches in the corridor has fractal characteristics in time and space and tends toward a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution of niche fitness across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the collaborative innovation network in the corridor is expanding rapidly. The core of the network is shifting from the main urban area to the periphery of the city along the highway, exhibiting small-world and hierarchical properties, and a core-edge network structure is becoming prominent. As the corridor develops, the main collaborative mode is changing from collaboration within innovation patches to collaboration between innovation patches, and patches with high ecological suitability tend to be the active areas of collaborative innovation.
Overall, polycentric spatial layout, graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and the stable expansion of the scale of collaborative innovation within the innovation corridor.
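Of the methods named above, head/tail breaks is simple enough to sketch directly: for heavy-tailed data, split the values at the mean and recurse on the "head" (the values above the mean) for as long as the head remains a small minority. A minimal illustrative implementation; the 40% head limit follows the common formulation of the method, and the input values are made up:

```python
def head_tail_breaks(values, head_limit=0.4):
    """Classify heavy-tailed data by repeatedly splitting at the mean and
    recursing on the head (values above the mean) while the head stays a
    minority (at most head_limit of the current data)."""
    breaks = []
    data = list(values)
    while len(data) > 1:
        m = sum(data) / len(data)
        head = [v for v in data if v > m]
        if not head or len(head) / len(data) > head_limit:
            break  # distribution no longer heavy-tailed enough to split
        breaks.append(m)
        data = head
    return breaks

# Toy heavy-tailed data (e.g. patch sizes): many small values, few large ones
breaks = head_tail_breaks([1, 1, 1, 1, 1, 1, 2, 8, 10, 40])
# two class boundaries emerge: one at the overall mean, one at the head's mean
```

Each recorded mean becomes a class boundary, so the number of classes adapts to how heavy-tailed the data are, which is why the method suits fractal, scale-free patterns such as innovation-patch sizes.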

Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network

Procedia PDF Downloads 8
1088 Potential of High Performance Ring Spinning Based on Superconducting Magnetic Bearing

Authors: M. Hossain, A. Abdkader, C. Cherif, A. Berger, M. Sparing, R. Hühne, L. Schultz, K. Nielsch

Abstract:

Due to the high quality of the yarn and the flexibility of the machine, ring spinning is the most widely used spinning method for short-staple yarn production. However, the productivity of these machines is still much lower than that of other spinning systems, such as the rotor or air-jet spinning process. The main reason for this limitation lies in the twisting mechanism of the ring spinning process. In the ring/traveler twisting system, each rotation of the traveler along the ring inserts twist into the yarn. The rotation of the traveler at higher speeds induces strong frictional forces, which in turn generate heat. Various ring/traveler systems with different geometries, material combinations, and coatings have already been implemented to address the friction problem. However, such developments can neither completely solve the friction problem nor increase productivity. The friction-free superconducting magnetic bearing (SMB) system is a suitable alternative to the existing ring/traveler system. The unique feature of SMB bearings is their self-stabilizing behavior, i.e., they remain fully passive without any need for expensive position sensing and control. Within the framework of a research project funded by the German Research Foundation (DFG), suitable concepts for the SMB system have been designed, developed, and integrated as a twisting device for ring spinning, replacing the existing ring/traveler system. With the help of the developed mathematical model and experimental investigations, the physical limitations of this innovative twisting device in the spinning process have been determined. The interaction among the parameters of the spinning process and the superconducting twisting element has been further evaluated, yielding concrete information about the new spinning process.
Moreover, the influence of the implemented SMB twisting system on yarn quality has been analyzed with respect to different process parameters. The presented work reveals the enormous potential of the innovative twisting mechanism: the productivity of the ring spinning process, especially for thermoplastic materials, can be at least doubled for the first time in a hundred years. The SMB ring spinning tester was also presented at the international trade fair of the International Textile Machinery Association (ITMA) in 2015.

Keywords: ring spinning, superconducting magnetic bearing, yarn properties, productivity

Procedia PDF Downloads 229
1087 A Multiple Perspectives Approach on the Well-Being of Students with Autism Spectrum Disorder

Authors: Joanne Danker, Iva Strnadová, Therese Cumming

Abstract:

As a consequence of increasing evidence of the bi-directional relationship between student well-being and positive educational outcomes, there has been a surge in the number of research studies dedicated to understanding the notion of student well-being and ways to enhance it. In spite of these efforts, the concept of student well-being remains elusive. Additionally, studies on student well-being have mainly consulted adults' perspectives and failed to take into account students' views, which, if considered, could contribute to a clearer understanding of this complex concept. Furthermore, there is a lack of studies focusing on the well-being of students with autism spectrum disorder (ASD), and these students continue to fare worse in post-school outcomes than students without disabilities, indicating a significant gap in the current research literature. Findings from research conducted with students without disabilities may not be applicable to students with ASD, as their educational experiences may differ due to the characteristics associated with ASD. Thus, the purpose of this study was to explore how students with ASD, their parents, and their teachers conceptualise student well-being, and to identify the barriers to and assets for the well-being of these students. To collect data, 19 teachers and 11 parents participated in interviews, while 16 high school students with ASD were involved in a photovoice project regarding their well-being in school. Grounded theory approaches such as open and axial coding, memo-writing, diagramming, and making constant comparisons were adopted to analyse the data. All three groups of participants conceptualised student well-being as a multidimensional construct consisting of several domains: relationships, engagement, positive/negative emotions, and accomplishment. Three categories of barriers were identified.
These were environmental factors, the attitudes and behaviours of others, and the impact of characteristics associated with ASD. The internal assets identified as contributing to student well-being were acceptance, resilience, self-regulation, and the ability to work with others. External assets were a knowledgeable and inclusive school community and access to various school programs and resources. It is crucial that schools and policymakers provide ample resources and programs to adequately support the development of each identified domain of student well-being. This could in turn enhance student well-being and lead to more successful educational outcomes for students with ASD.

Keywords: autism spectrum disorder, grounded theory approach, school experiences, student well-being

Procedia PDF Downloads 282
1086 Spatial Distribution and Source Identification of Trace Elements in Surface Soil from Izmir Metropolitan Area

Authors: Melik Kara, Gulsah Tulger Kara

Abstract:

Soil is a crucial component of the ecosystem, and in industrial and urban areas it receives large amounts of trace elements from several sources. Accumulated pollutants in surface soils can therefore be transported to other environmental compartments, such as deep soil, water, plants, and dust particles. While elemental contamination of soils is caused mainly by atmospheric deposition, soil also affects air quality, since enriched trace element contents in atmospheric particulate matter originate from the resuspension of polluted soils. The objectives of this study were to determine the total and leachate concentrations of trace elements in soils of the city of Izmir, characterize their spatial distribution, and identify the possible sources of trace elements in surface soils. Surface soil samples were collected from 20 sites and analyzed for total and leachate element concentrations. Analyses of trace elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Er, Eu, Fe, Ga, Gd, Hf, Ho, K, La, Li, Lu, Mg, Mn, Mo, Na, Nd, Ni, P, Pb, Pr, Rb, Sb, Sc, Se, Si, Sm, Sn, Sr, Tb, Th, Ti, Tl, Tm, U, V, W, Y, Yb, Zn, and Zr) were carried out using ICP-MS (inductively coupled plasma mass spectrometry). The elemental concentrations were summarized along with overall median, kurtosis, and skewness statistics. The elemental composition indicated that the soil samples were dominated by crustal elements such as Si, Al, Fe, Ca, K, and Mg and the sea-salt element Na, which is typical of the Aegean region. These elements were followed by Ti, P, Mn, Ba, and Sr. On the other hand, Zn, Cr, V, Pb, Cu, and Ni (predominantly anthropogenic elements) were measured at 61.6, 39.4, 37.9, 26.9, 22.4, and 19.4 mg/kg dw, respectively. The leachate element concentrations showed a similar ranking, although they were much lower than the total concentrations.
In the study area, the spatial distribution patterns of elemental concentrations varied among sampling sites. The highest concentrations were measured in the vicinity of industrial areas and main roads. To determine the relationships among elements and to identify their possible sources, PCA (principal component analysis) was applied to the data. The analysis resulted in six factors. The first factor exhibited high loadings of Co, K, Mn, Rb, V, Al, Fe, Ni, Ga, Se, and Cr; because of Co, K, Rb, and Se, this factor could be interpreted as residential heating. The second factor was positively associated with V, Al, Fe, Na, Ba, Ga, Sr, Ti, Se, and Si and therefore represents mixed city dust. The third factor showed high loadings of Fe, Ni, Sb, As, and Cr and could be associated with industrial facilities. The fourth factor was associated with Cu, Mo, Zn, and Sn, which are marker elements of traffic. The fifth factor represents crustal dust, given its high correlation with Si, Ca, and Mg. The last factor is loaded with Pb and Cd emitted from industrial activities.
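The core of a PCA-based source apportionment is extracting the dominant directions of the covariance matrix of the element concentrations: elements that co-vary load on the same factor and are attributed to a common source. A pure-Python sketch of the first step, finding the leading principal component by power iteration on a made-up two-element data set (the study's actual analysis covered more than 50 elements and six rotated factors):

```python
def first_principal_component(rows, iters=200):
    """Leading eigenvector of the sample covariance matrix, via power iteration.
    rows: list of samples, each a list of element concentrations."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    X = [[r[j] - means[j] for j in range(p)] for r in rows]  # center the data
    # sample covariance matrix
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):                      # power iteration
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Made-up concentrations of two perfectly co-varying "elements" at 4 sites:
# both loadings fall on one component, mimicking a shared emission source.
v = first_principal_component([[1, 2], [2, 4], [3, 6], [4, 8]])
```

Here the second element always varies twice as strongly as the first, so the leading component has loadings in a 1:2 ratio; in the real analysis, groups of elements with high loadings on the same rotated component are what get interpreted as heating, traffic, or crustal sources.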

Keywords: trace elements, surface soil, source apportionment, Izmir

Procedia PDF Downloads 135
1085 Open Source Cloud Managed Enterprise WiFi

Authors: James Skon, Irina Beshentseva, Michelle Polak

Abstract:

WiFi solutions come in two major classes. Small Office/Home Office (SOHO) WiFi is characterized by inexpensive WiFi routers with one or two service set identifiers (SSIDs) and a single shared passphrase. These access points provide no significant user management or monitoring, and no aggregation of monitoring and control across multiple routers. The other class is managed enterprise WiFi solutions, which involve expensive access points (APs) along with (also costly) local or cloud-based management components. These solutions typically provide portal-based login, per-user virtual local area networks (VLANs), and sophisticated monitoring and control across a large group of APs. The cost of deploying and managing such enterprise solutions is typically about tenfold that of inexpensive consumer APs. Low-revenue organizations, such as schools, non-profits, non-governmental organizations (NGOs), small businesses, and even homes, cannot easily afford quality enterprise WiFi solutions, though they may need to provide quality WiFi access to their population. Relying on the available lower-cost WiFi solutions can significantly reduce their ability to provide reliable, secure network access. This project explored and created a new approach to providing secure managed enterprise WiFi, based on low-cost hardware combined with both new and existing (but modified) open source software. The solution provides a cloud-based management interface that allows organizations to aggregate the configuration and management of small, medium, and large WiFi deployments. It uses a novel approach to user management, giving each user a unique passphrase. It provides unlimited SSIDs across an unlimited number of WiFi zones, and the ability to place each user (and all their devices) on their own VLAN. With proper configuration, it can even provide user-local services.
It also allows users' usage and quality of service to be monitored, and users to be added, enabled, and disabled at will. The ultimate goal is to free organizations with limited resources from the expense of commercial enterprise WiFi, while providing them with most of the qualities of such a more expensive managed solution at a fraction of the cost.
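The per-user passphrase idea can be sketched in a few lines: generate a cryptographically random passphrase per account and guarantee uniqueness, since the passphrase is what identifies the user (and their devices) for VLAN placement. The function names, passphrase length, and character set below are illustrative assumptions, not the project's actual API:

```python
import secrets
import string

def make_passphrase(length=16):
    """Cryptographically random passphrase drawn from letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

def provision_users(usernames):
    """Assign each user a unique passphrase. Uniqueness matters because the
    passphrase is the key that maps a connecting device to its user/VLAN."""
    creds = {}
    used = set()
    for name in usernames:
        phrase = make_passphrase()
        while phrase in used:  # collisions are astronomically unlikely, but be safe
            phrase = make_passphrase()
        used.add(phrase)
        creds[name] = phrase
    return creds

creds = provision_users(["alice", "bob", "carol"])
```

Disabling a user then amounts to revoking their single passphrase, with no effect on anyone else on the same SSID, which is the key operational difference from the SOHO shared-passphrase model.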

Keywords: wifi, enterprise, cloud, managed

Procedia PDF Downloads 88
1084 Facilitating Career Development of Women in Science, Technology, Engineering, Mathematics and Medicine: Towards Increasing Understanding, Participation, Progression and Retention through an Intersectionality Perspective

Authors: Maria Tsouroufli, Andrea Mondokova, Subashini Suresh

Abstract:

Background: The under-representation of women, and the consequent failure to fulfil their potential contribution to Science, Technology, Engineering, Maths, and Medicine (STEMM) subjects in the UK, is an issue that the Higher Education sector is being encouraged to address. Focus: The aim of this research is to investigate the barriers, facilitators, and incentives that influence diverse groups of women who have embarked upon a career in STEMM subjects. The project will address a number of interconnected research questions: 1. How do participants perceive the barriers, facilitators, and incentives for women in terms of research, teaching, and management/leadership at each stage of their development towards forging a career in STEMM? 2. How might gender intersect with ethnicity, pregnancy/maternity, and academic grade in the career experiences of women in STEMM? 3. How do participants perceive female role models, and to what extent do they seek to emulate them? 4. How do successful women in STEMM see themselves as role models, and what strategies do they employ to promote their careers? 5. How does institutional culture manifest itself as a barrier or facilitator for women in STEMM subjects in the institution? Methodology and Theoretical framework: A mixed methodology will be employed in a case study of one university. The study will draw on extant quantitative data for context and involve a qualitative inquiry into the perceptions of staff and students around the key concepts under study (career progression, sense of belonging and tenure, role models, personal satisfaction, perceived gender in/equality, institutional culture). The analysis will be informed by an intersectionality framework, feminist and gender theory, and organisational psychology and human resource management perspectives. Implications: Preliminary findings will be collected in 2017.
Conclusions will be drawn and used to inform recruitment and retention, and the development and implementation of initiatives to enhance the experiences and outcomes of women working and studying in STEMM subjects in Higher Education.

Keywords: under-representation, women, STEMM subjects, intersectionality

Procedia PDF Downloads 279
1083 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on data quality from national health management information systems (HMIS) in sub-Saharan Africa. Poor quality limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement of the Neno district HMIS over a 4-year period (2018 - 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth inquiry using Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data in the Neno district captured from 2018 to 2021. The metrics assessed were the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports; timeliness was measured as the percentage of reports submitted by their expected deadlines. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we drew healthcare workers' perceptions, the interventions implemented, and suggestions for improvement. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% (before) and 98.1% (after), while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004).
The trend for timeliness declined until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90%+ in completeness, but only 24% did so in timeliness; 32% of reports met the national standard overall. Only 9% improved on both completeness and timeliness; these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to standard in timeliness, while only one failed to do so in completeness. Factors associated with improvement included better communication and reminders through internal channels, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested as a way to improve timeliness. Conclusion: Findings suggest that data quality in the district HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about effective interventions and guide initiatives for improving data quality.
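The completeness metric and the before/after comparison described in the methods can be sketched as follows. The yearly figures are taken from the abstract, but the use of Welch's unequal-variance t statistic (rather than whatever exact test the authors ran in R) is an assumption for illustration.

```python
from math import sqrt
from statistics import mean, variance

def completeness(expected, received):
    """Completeness as the percentage of expected reports actually received."""
    return 100 * received / expected

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances), one way to compare
    reporting rates before vs. after the quarterly data reviews began."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# yearly completeness rates quoted in the abstract
before = [78.8, 88.0]   # 2018, 2019: before the quarterly reviews
after = [96.3, 99.9]    # 2020, 2021: after the reviews began
print(f"t = {welch_t(after, before):.2f}")
```

With only two yearly values per group this is purely illustrative; the study's p-values presumably rest on the full monthly series.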

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 78
1082 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines the approaches taken to assess multi-hazard risk. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to undertake a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes, it is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards, for example, natural hazards triggering technological disasters (Natech) such as fire, explosion, or toxic release. Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, for example causing landslide dams and debris flows that lead to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards do occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods in articles published to date and categorizes the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of multi-hazard risk assessment.
To assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to overestimates or oversights in resilience and recovery management.
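One common way to formalise hazard interactions, sketched below with entirely hypothetical numbers, is a triggering matrix: each entry gives an assumed probability that one hazard, once it occurs, sets off another (e.g., an earthquake triggering a landslide-dam flood). This is not the method the paper settles on; it merely illustrates why the interaction assumptions dominate multi-hazard risk estimates.

```python
def combined_probabilities(p, trigger):
    """Annual probability of each hazard occurring either directly or as a
    cascade triggered by another hazard. p[i] is the direct annual
    probability of hazard i; trigger[i][j] is the (assumed) probability
    that hazard i, once it occurs, triggers hazard j. Independence between
    pathways is assumed -- exactly the kind of simplification the cited
    drawbacks (uncertainty, lack of data) call into question."""
    n = len(p)
    out = []
    for j in range(n):
        miss = 1 - p[j]                            # hazard j does not occur directly
        for i in range(n):
            if i != j:
                miss *= 1 - p[i] * trigger[i][j]   # ...and is not triggered by hazard i
        out.append(1 - miss)
    return out

# hypothetical numbers: [earthquake, landslide-dam flood]
p = [0.05, 0.02]
trigger = [[0, 0.3],   # an earthquake triggers the flood with probability 0.3
           [0, 0]]
print(combined_probabilities(p, trigger))
```

Even this toy example shows the single-hazard figure for the flood (2%) understating the combined probability once the cascade path is included.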

Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment

Procedia PDF Downloads 106
1081 Communication in the Sciences: A Discourse Analysis of Biology Research Articles and Magazine Articles

Authors: Gayani Ranawake

Abstract:

Effective communication is widely regarded as an important aspect of any discipline. This particular study deals with written communication in science. Writing conventions and linguistic choices play a key role in conveying the message effectively to a target audience. Scientists are responsible for conveying their findings or research results not only to their discourse community but also to the general public. Recognizing appropriate linguistic choices is crucial since they vary depending on the target audience. The majority of scientists can communicate effectively with their discourse community, but public engagement seems more challenging to them. There is a lack of research into the language use of scientists, and in particular how it varies by discipline and audience (genre). A better understanding of the different linguistic conventions used in effective science writing by scientists for scientists and by scientists for the public will help to guide scientists who are familiar with their discourse community norms to write effectively for the public. This study investigates the differences and similarities of linguistic choices in biology articles written by scientists for their discourse community and biology magazine articles written by scientists and science communicators for the general public. This study is a part of a larger project investigating linguistic differences in different genres of science academic writing. The sample for this particular study is composed of 20 research articles from the journal Biological Reviews and 20 magazine articles from the magazine Australian Popular Science. Differences in the linguistic devices were analyzed using Hyland’s metadiscourse model for academic writing proposed in 2005. 
The frequency of use of interactive resources (transitions, frame markers, endophoric markers, evidentials, and code glosses) and interactional resources (hedges, boosters, attitude markers, self-mentions, and engagement markers) was compared and contrasted using the NVivo textual analysis tool. The results clearly show differences in the frequency of use of interactional and interactive resources in the two genres under investigation. The findings of this study provide a reference guide for scientists and science writers to understand the differences in linguistic choices between the two genres. This will be particularly helpful for scientists who are proficient at writing for their discourse community, but not for the public.
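A minimal version of the frequency comparison can be sketched as below. The word lists are tiny illustrative subsets of Hyland's (2005) hedge and booster categories, not the full inventories used in the study, and the study itself used NVivo rather than ad hoc scripting; the sample sentences are invented.

```python
import re

# tiny illustrative subsets of Hyland's (2005) interactional categories
HEDGES = {"may", "might", "perhaps", "possible", "suggest", "appear"}
BOOSTERS = {"clearly", "demonstrate", "certainly", "show", "establish"}

def rate_per_1000(text, wordlist):
    """Occurrences of wordlist items per 1,000 running words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for w in words if w in wordlist)
    return 1000 * hits / len(words)

journal = "These results suggest that the effect may appear under stress."
magazine = "Scientists clearly show that stress certainly changes behaviour."
print(rate_per_1000(journal, HEDGES), rate_per_1000(magazine, BOOSTERS))
```

Normalising per 1,000 words is what makes rates comparable across articles of different lengths, which is the core of the genre comparison.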

Keywords: discourse analysis, linguistic choices, metadiscourse, science writing

Procedia PDF Downloads 137
1080 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in the late 19th century to the present. Early designs of tall buildings recognized the effectiveness of diagonal bracing members in resisting lateral forces, and most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations such as X, K, and eccentric. Through this historical research, a filtering concept of original and remedial technology is developed, through which one can clearly understand the interrelationship between technical evolution, architectural aesthetics, and the subsequent stylistic transition of buildings. Diagonalized grid structures, or "diagrids", have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to tall buildings; diagrid construction is also found in a range of innovative mid-rise steel projects. Contemporary design practice for tall buildings is reviewed, and design guidelines are provided for new design trends. The behavioral characteristics and design methodology of diagrid structures, which have emerged as a new direction in the design of tall buildings with their powerful structural rationale and symbolic architectural expression, are investigated in depth. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potential is explored. Building on this background, the analysis and design of 40-100 storey diagrid steel buildings is carried out using ETABS software, examining various diagrid angles for the entire building in order to reduce the steel requirement of the structure.
The project undertakes wind and seismic analysis for the lateral loads acting on the structure due to wind and earthquake, together with gravity loads. All structural members are designed as per IS 800-2007, considering all load combinations. Results are compared in terms of time period, top-storey displacement, and inter-storey drift. Secondary effects such as temperature variation are not considered in the design, assuming only small variations.
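The inter-storey drift comparison mentioned above can be illustrated with a short sketch. The drift limit of 0.004 times the storey height is the common IS 1893 criterion (an assumption here, since the abstract cites only IS 800-2007, which governs member design), and the displacement values are hypothetical rather than ETABS output.

```python
def interstorey_drifts(displacements, storey_height):
    """Inter-storey drift ratios from cumulative lateral storey
    displacements (same length unit as storey_height)."""
    d = [0.0] + list(displacements)
    return [(d[i + 1] - d[i]) / storey_height for i in range(len(displacements))]

def drifts_ok(displacements, storey_height, limit=0.004):
    """Check every storey against a drift limit; 0.004h is the usual
    IS 1893 cap (assumed here for illustration)."""
    return all(abs(r) <= limit for r in interstorey_drifts(displacements, storey_height))

storey_disp = [10.0, 21.0, 30.0]   # hypothetical cumulative displacements, mm
print(drifts_ok(storey_disp, storey_height=3000))
```

A check of this kind is what makes the comparison across diagrid angles meaningful: a lighter structure is only acceptable if every storey still satisfies the drift limit.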

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 379