Search results for: acceptable
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 967

127 CertifHy: Developing a European Framework for the Generation of Guarantees of Origin for Green Hydrogen

Authors: Frederic Barth, Wouter Vanhoudt, Marc Londo, Jaap C. Jansen, Karine Veum, Javier Castro, Klaus Nürnberger, Matthias Altmann

Abstract:

Hydrogen is expected to play a key role in the transition towards a low-carbon economy, especially within the transport sector, the energy sector and the (petro)chemical industry sector. However, the production and use of hydrogen only make sense if production and transportation are carried out with minimal impact on natural resources, and if greenhouse gas emissions are reduced in comparison to conventional hydrogen or conventional fuels. The CertifHy project, supported by a wide range of key European industry leaders (gas companies, the chemical industry, energy utilities, green hydrogen technology developers and automobile manufacturers, as well as other leading industrial players), therefore aims to: 1. Define a widely acceptable definition of green hydrogen. 2. Determine how a robust Guarantee of Origin (GoO) scheme for green hydrogen should be designed and implemented throughout the EU. It is divided into the following work packages (WPs): 1. Generic market outlook for green hydrogen: evidence of existing industrial markets and the potential development of new energy-related markets for green hydrogen in the EU, overview of the segments and their future trends, drivers and market outlook (WP1). 2. Definition of “green” hydrogen: step-by-step consultation approach leading to a consensus on the definition of green hydrogen within the EU (WP2). 3. Review of existing platforms and interactions between existing GoO schemes and green hydrogen: lessons learnt and mapping of interactions (WP3). 4. Definition of a framework of guarantees of origin for “green” hydrogen: technical specifications, rules and obligations for the GoO, impact analysis (WP4). 5. Roadmap for the implementation of an EU-wide GoO scheme for green hydrogen: the project implementation plan will be presented to the FCH JU and the European Commission as the key outcome of the project and shared with stakeholders before finalisation (WP5 and WP6).
Definition of Green Hydrogen: CertifHy Green hydrogen is hydrogen from renewable sources that is also CertifHy Low-GHG-emissions hydrogen. Hydrogen from renewable sources is hydrogen belonging to the share of production equal to the share of renewable energy sources (as defined in the EU RES directive) in energy consumption for hydrogen production, excluding ancillary functions. CertifHy Low-GHG hydrogen is hydrogen with emissions lower than the defined CertifHy Low-GHG-emissions threshold, i.e. 36.4 gCO2eq/MJ, produced in a plant where the average emissions intensity of the non-CertifHy Low-GHG hydrogen production (based on an LCA approach), since sign-up or in the past 12 months, does not exceed the emissions intensity of the benchmark process (SMR of natural gas), i.e. 91.0 gCO2eq/MJ.
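The two emission thresholds above can be read as a simple classification rule. A minimal sketch follows; the function and parameter names are illustrative, not part of the CertifHy specification, and a real scheme would rest on audited LCA figures rather than two scalar inputs:

```python
LOW_GHG_THRESHOLD = 36.4  # gCO2eq/MJ, CertifHy Low-GHG-emissions threshold
BENCHMARK_SMR = 91.0      # gCO2eq/MJ, benchmark process (SMR of natural gas)

def is_certifhy_low_ghg(batch_intensity, plant_avg_non_low_ghg_intensity):
    """A batch qualifies as CertifHy Low-GHG hydrogen if its own LCA
    emissions intensity is below the Low-GHG threshold AND the plant's
    average intensity for non-Low-GHG output does not exceed the SMR
    benchmark (an illustrative reading of the definition above)."""
    return (batch_intensity < LOW_GHG_THRESHOLD
            and plant_avg_non_low_ghg_intensity <= BENCHMARK_SMR)
```

For example, a batch at 30 gCO2eq/MJ from a plant whose remaining production averages 85 gCO2eq/MJ would qualify, while the same batch from a plant averaging 95 gCO2eq/MJ would not.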

Keywords: green hydrogen, cross-cutting, guarantee of origin, certificate, DG energy, bankability

Procedia PDF Downloads 462
126 Dealing with the Spaces: Ultra Conservative Approach from Childhood to Adulthood

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Common reasons for early tooth loss are trauma, extraction due to caries or periodontal disease, and congenitally missing teeth. The remaining space after tooth loss may cause functional and esthetic problems. Therefore, restorative dentists should attempt to manage these spaces using conservative methods. The goal is to restore the lost esthetics and function and to prevent phonetic, self-esteem and personality problems and tongue habits. Preserving alveolar bone is also of great importance during the growth stage. Purpose: When deciding how to manage the missing tooth space, implants are contraindicated until the completion of dentoalveolar development. Even in adulthood, the implant might not be indicated due to systemic or periodontal problems or biological and economic issues. In this article, alternative conservative restorative methods of space maintenance are discussed. Essix retainers are made chair-side as easily as forming a custom bleaching tray, with some modifications. They are esthetically acceptable and inexpensive. These temporaries provide support for the lips but cannot be used during function. Mini-screw-supported temporaries are another option for maintaining the space, especially after orthodontic treatment, when there is a time lag between the termination of orthodontic treatment and the definitive restoration. Two techniques are presented for this kind of restoration: a denture tooth pontic or a composite crown. The benefits are alveolar bone preservation and physiologic pressure on the alveolar ridge that increases its density; the restoration can even be retained until the completion of the definitive treatment. Bonded fixed partial dentures include the Maryland bridge, fiber-reinforced composite bridge, resin-bonded bridge, and ceramic bonded bridge. These types of bridges are recommended for use after the pubertal growth spurt, and a recent meta-analysis considered their clinical success similar to that of conventional FDPs and implant-supported crowns.
However, they have several advantages, which are discussed by presenting clinical examples. Practical instruction on how to construct an FRC bridge and a novel chair-side Maryland bridge is given by means of clinical cases. Clinical relevance: minimally invasive options should always be considered, and destruction of healthy enamel and dentin during the preparation phase should be avoided as much as possible.

Keywords: tooth missing, fiber-reinforced composite, Maryland, Essix retainers, screw-retained restoration

Procedia PDF Downloads 174
125 Phytochemistry and Alpha-Amylase Inhibitory Activities of Rauvolfia vomitoria (Afzel) Leaves and Picralima nitida (Stapf) Seeds

Authors: Oseyemi Omowunmi Olubomehin, Olufemi Michael Denton

Abstract:

Diabetes mellitus is a disease related to the digestion of carbohydrates, proteins and fats and how this affects blood glucose levels. Various synthetic drugs employed in the management of the disease work through different mechanisms. Keeping postprandial blood glucose levels within an acceptable range is a major factor in the management of type 2 diabetes and its complications. Thus, the inhibition of carbohydrate-hydrolyzing enzymes such as α-amylase is an important strategy for lowering postprandial blood glucose levels, but synthetic inhibitors have undesirable side effects such as flatulence, diarrhea and gastrointestinal disorders, to mention a few. It is therefore necessary to identify and explore α-amylase inhibitors from plants due to their availability, safety and low cost. In the present study, extracts from the leaves of Rauvolfia vomitoria and seeds of Picralima nitida, which are used in the Nigerian traditional system of medicine to treat diabetes, were tested for their α-amylase inhibitory effect. The powdered plant samples were subjected to phytochemical screening using standard procedures. The leaves and seeds were macerated successively using n-hexane, ethyl acetate and methanol, yielding crude extracts which, at different concentrations (0.1, 0.5 and 1 mg/mL) alongside the standard drug acarbose, were subjected to an α-amylase inhibitory assay using the Bernfeld and Miller methods with slight modification. Statistical analysis was done using ANOVA in SPSS version 2.0. The phytochemical screening results for the leaves of Rauvolfia vomitoria and the seeds of Picralima nitida showed the presence of alkaloids, tannins, saponins and cardiac glycosides; in addition, Rauvolfia vomitoria had phenols and Picralima nitida had terpenoids.
The α-amylase assay results revealed that at 1 mg/mL the methanol, hexane and ethyl acetate extracts of the leaves of Rauvolfia vomitoria gave 15.74%, 23.13% and 26.36% α-amylase inhibition, respectively, while the corresponding extracts of the seeds of Picralima nitida gave 15.50%, 30.68% and 36.72% inhibition; these values were not significantly different from the control at p < 0.05, while acarbose gave a significant 56% inhibition at p < 0.05. The presence of alkaloids, phenols, tannins, steroids, saponins, cardiac glycosides and terpenoids in these plants is responsible for the observed anti-diabetic activity. However, the low percentages of α-amylase inhibition by these plant samples show that α-amylase inhibition is not the major way in which both plants exert their anti-diabetic effect.

Keywords: alpha-amylase, Picralima nitida, postprandial hyperglycemia, Rauvolfia vomitoria

Procedia PDF Downloads 160
124 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of a Tight Mature Oilfield with the Reveal Simulator

Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah

Abstract:

The main characteristics of unconventional reservoirs include low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate and to recover commercial quantities of hydrocarbons. Permeability for unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbon held in these formations will not naturally move towards producing wells at economic rates without aid from hydraulic fracturing, which is the only technique for accessing production from these tight reservoirs. A horizontal well with multi-stage fracking is the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield. This includes multistage hydraulic fracturing and spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. To simulate potential options, reservoir models were built in Reveal. An existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bars was assumed to take into account pump operating limits and to maintain the reservoir pressure above the bubble point. 300 m, 600 m and 900 m lateral length wells were modelled, in conjunction with 4, 6 and 8 stages of fracs. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths and also with more fracs and wider spacing. For a 25-year forecast, the ultimate recovery ranged from 0.4% to 2.56% for the 300 m and 1000 m laterals, respectively.
The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m3/day, with the 600 m and 300 m cases giving initial peak rates of 110 m3/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. Therefore, the study suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.

Keywords: unconventional resource, hydraulic fracturing

Procedia PDF Downloads 277
123 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is further complicated by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplification of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: one from the Reynolds-averaged Navier-Stokes (RANS) equations and one from equilibrium between turbulent kinetic energy (TKE) production and dissipation. Then different analytic models for eddy viscosity, TKE, and mixing length were assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result.
For the log layer and outer region, a mixing length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient, which is valid under different flow conditions. This work will continue with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
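As a sketch of the near-wall closure mentioned above, the Van Driest-damped Prandtl mixing length is l_m = κy(1 − exp(−y⁺/A⁺)) with ν_t = l_m²|du/dy|. The following minimal Python version assumes the standard constants κ = 0.41 and A⁺ = 26, which are common defaults rather than values taken from the paper:

```python
import numpy as np

KAPPA = 0.41    # von Karman constant (standard value, assumed)
A_PLUS = 26.0   # Van Driest damping constant (standard value, assumed)

def van_driest_mixing_length(y_plus, nu, u_tau):
    """Damped Prandtl mixing length l_m = kappa*y*(1 - exp(-y+/A+))."""
    y = y_plus * nu / u_tau                      # wall distance from y+
    return KAPPA * y * (1.0 - np.exp(-y_plus / A_PLUS))

def eddy_viscosity(y_plus, dudy, nu, u_tau):
    """Eddy viscosity nu_t = l_m**2 * |du/dy| (Prandtl closure)."""
    lm = van_driest_mixing_length(y_plus, nu, u_tau)
    return lm**2 * np.abs(dudy)
```

The damping factor drives l_m to zero at the wall and recovers the undamped log-layer value κy for large y⁺, which is exactly the behaviour the viscous sublayer and buffer layer require.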

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 83
122 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy. Actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar and 2.1 GW is from small hydro projects, with the rest compensated by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. India, according to IEC 61400-1, belongs to class IV wind conditions; it is not possible to set up large-scale wind turbines at every location. So, the best choice is a small-scale wind turbine at a lower height, which will have good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine was designed. Various airfoil data were reviewed for selection of the airfoil for the blade profile. An airfoil suited to low wind conditions, i.e., low Reynolds number, was selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory was implemented. Performance of the blade was estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance was estimated for the designed blade specifically for low wind conditions. Power production of the rotor was determined at different wind speeds for a particular pitch angle of the blade. A pitch of 15° and a velocity of 5 m/s give a good cut-in speed of 2 m/s, and the power produced is around 350 W.
The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 were calculated and checked against the partial safety factor of the wind turbine blade.
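The iterative optimization of the axial and angular induction factors mentioned above follows the standard BEM fixed-point scheme. A minimal sketch for a single blade element is given below; it assumes constant lift and drag coefficients and omits the tip-loss correction, whereas a real design would interpolate Cl and Cd from the airfoil polar at each angle of attack:

```python
import math

def bem_element(lambda_r, solidity, cl, cd, tol=1e-8, max_iter=500):
    """Iterate the axial (a) and angular (a_p) induction factors for one
    blade element using the standard BEM fixed-point scheme.
    lambda_r: local tip speed ratio; solidity: local solidity B*c/(2*pi*r).
    No tip-loss or high-induction correction is applied in this sketch."""
    a, a_p = 0.0, 0.0
    for _ in range(max_iter):
        # inflow angle from the velocity triangle
        phi = math.atan2(1.0 - a, (1.0 + a_p) * lambda_r)
        cn = cl * math.cos(phi) + cd * math.sin(phi)   # normal force coeff.
        ct = cl * math.sin(phi) - cd * math.cos(phi)   # tangential coeff.
        # momentum / blade-element balance
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (solidity * cn) + 1.0)
        a_p_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi)
                         / (solidity * ct) - 1.0)
        if abs(a_new - a) < tol and abs(a_p_new - a_p) < tol:
            return a_new, a_p_new
        a, a_p = a_new, a_p_new
    return a, a_p
```

For example, `bem_element(lambda_r=4.0, solidity=0.05, cl=1.0, cd=0.01)` converges in a few dozen iterations to an axial induction factor of roughly 0.3, in the range expected below the Betz optimum of 1/3.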

Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil

Procedia PDF Downloads 316
121 Development and Psychometric Validation of the Hospitalised Older Adults Dignity Scale for Measuring Dignity during Acute Hospital Admissions

Authors: Abdul-Ganiyu Fuseini, Bernice Redley, Helen Rawson, Lenore Lay, Debra Kerr

Abstract:

Aim: The study aimed to develop and validate a culturally appropriate patient-reported outcome measure for measuring dignity for older adults during acute hospital admissions. Design: A three-phased mixed-method sequential exploratory design was used. Methods: Concept elicitation and generation of items for the scale were informed by older adults’ perspectives about dignity during acute hospitalization and a literature review. Content validity evaluation and pre-testing were undertaken using standard instrument development techniques. A cross-sectional survey of 270 hospitalized older adults was conducted to evaluate the construct and convergent validity, internal consistency reliability, and test–retest reliability of the scale. Analysis was performed using the Statistical Package for the Social Sciences, version 25. Reporting of the study was guided by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Results: We established the 15-item Hospitalized Older Adults’ Dignity Scale, which has a 5-factor structure: Shared Decision-Making (3 items); Healthcare Professional-Patient Communication (3 items); Patient Autonomy (4 items); Patient Privacy (2 items); and Respectful Care (3 items). Excellent content validity, adequate construct and convergent validity, acceptable internal consistency reliability, and good test-retest reliability were demonstrated. Conclusion: We established the Hospitalized Older Adults Dignity Scale as a valid and reliable scale for measuring dignity for older adults during acute hospital admissions. Future studies using confirmatory factor analysis are needed to corroborate the dimensionality of the factor structure and the external validity of the scale.
Impact: The development and validation of the Hospitalized Older Adults Dignity Scale will provide healthcare professionals with a feasible and reliable scale for measuring older adults’ dignity during acute hospitalization. Routine use of the scale may enable the capturing and incorporation of older patients’ perspectives about their healthcare experience and provide information that informs the development of strategies to improve dignity-related care in the future.

Keywords: dignity, older adults, hospitalisation, scale, patients, dignified care, acute care

Procedia PDF Downloads 70
120 Ground Improvement Using Deep Vibro Techniques at Madhepura E-Loco Project

Authors: A. Sekhar, N. Ramakrishna Raju

Abstract:

This paper presents the results of ground improvement using deep vibro techniques with a combination of sand and stone columns, performed on a highly liquefaction-susceptible site (70 to 80% sand strata, the balance silt) with low bearing capacities due to high settlements, located at Madhepura, Bihar state, in the northern part of India (earthquake zone V as per the IS code). Initially, bored cast in-situ/precast piles and stone/sand columns were envisaged. However, after detailed analysis to address both liquefaction and bearing capacity simultaneously, it was found that deep vibro techniques with a combination of sand and stone columns are an excellent solution for the given site conditions, possibly for the first time in India. First, after detailed soil investigation, a pre-treatment eCPT test was conducted to evaluate the potential depth of liquefaction and the densification of the silty sandy soils required to improve the factor of safety against liquefaction. Then trial tests were carried out at the site using the deep vibro compaction technique with the sand and stone column combination, with different column spacings in a triangular pattern and different holding times during each lift of the vibro probe up to ground level. Different spacings and timings were tested to obtain the most effective combination for maximum, uniform densification of the saturated loose silty sandy soils across the complete treated area. Post-treatment eCPT tests and plate load tests were then conducted at all trial locations with the different spacings and timings of sand and stone columns to identify the best results for obtaining the required factor of safety against liquefaction and the desired bearing capacities with reduced settlements for the construction of industrial structures. On reviewing these results, it was noticed that the ground layers were densified more than expected, with an improved factor of safety against liquefaction and good bearing capacities for the given settlements as per IS codal provisions.
The cost-effectiveness of using the deep vibro technique with sand columns alone, avoiding stone, was also worked out for lightly loaded single-storied structures. The results were observed to be satisfactory for the lightly loaded foundations. The most important outcome of this technique is mitigating liquefaction while simultaneously improving bearing capacities and reducing settlements to acceptable limits as per IS: 1904-1986, up to a depth of 19 m. To the best of our knowledge, this was executed for the first time in India.

Keywords: ground improvement, deep vibro techniques, liquefaction, bearing capacity, settlement

Procedia PDF Downloads 174
119 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders

Authors: Brad Stappenbelt

Abstract:

The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The utility of this mechanism in harvesting energy from marine current and tidal flows is, however, arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device augurs low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. For optimal system performance, it is desirable to design for a very low mass or mass moment of inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point, where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation, utilizing the University of Wollongong fluid mechanics laboratory towing tank, are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments revealed acceptable energy conversion efficiencies of around 16% and desirable low flow-speed operating ranges when compared to traditional turbine technology.
The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally as no decrease in amplitude response and comparable energy conversion efficiencies to the single cylinder arrangement were observed. In addition to the spatial design benefits related to the horizontal device orientation, the main advantage demonstrated by the current symmetrical horizontal configuration is to allow large velocity range resonant response conditions without the excessive buoyancy. The novel configuration proposed shows clear promise in overcoming many of the practical implementation issues related to flow-induced vibration marine current energy harvesting.

Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy

Procedia PDF Downloads 126
118 Comparison Between Two Techniques (Extended Source to Surface Distance and Field Alignment) of Craniospinal Irradiation (CSI) in the Eclipse Treatment Planning System

Authors: Naima Jannat, Ariful Islam, Sharafat Hossain

Abstract:

Due to the involvement of a large target volume, craniospinal irradiation makes it challenging to achieve a uniform dose, and it requires different isocenters. The isocentric junction needs to shift after every five fractions to overcome the possibility of hot and cold spots. This study aims to evaluate Planning Target Volume coverage and Organ at Risk sparing between the two techniques, and shows that the Field Alignment technique does not need replanning and resetting. Planning methods for craniospinal irradiation in the Eclipse treatment planning system were developed for the Field Alignment and Extended Source to Surface Distance techniques, with 36 Gy prescribed in 20 fractions at 1.8 Gy per fraction. The patient was immobilized in the prone position. In the Field Alignment technique, the plan consists of half-beam-blocked parallel opposed cranium fields and a single posterior cervicospine field developed by sharing the same isocenter, which obviates divergence matching. Further, a single field was created to treat the remaining lumbosacral spine. To match the inferior diverging edge of the cervicospine field and the superior diverging edge of the lumbosacral field, the field alignment option was used, which automatically matches the field edge divergence as per the field alignment rule in the Eclipse treatment planning system, with the couch set to 270°. In the Extended Source to Surface Distance technique, two parallel opposed fields were created for the cranium, and a single posterior cervicospine field was created with a Source to Surface Distance of 120-140 cm. Dose Volume Histograms were obtained for each organ contoured and for each technique used. In all patients, the maximum dose to the Planning Target Volume was higher for the Extended Source to Surface Distance technique than for the Field Alignment technique.
The dose to all surrounding structures increased with the use of a single Extended Source to Surface Distance field when compared to the Field Alignment technique. The average mean doses to the eye, brainstem, kidney, oesophagus, heart, liver, lung, and ovaries were, respectively, (58% & 60%), (103% & 98%), (13% & 15%), (10% & 63%), (12% & 16%), (33% & 30%), (14% & 18%), and (69% & 61%) for the Field Alignment and Extended Source to Surface Distance techniques. However, the clinical target volume at the spine junction site received a less homogeneous dose with the Field Alignment technique than with Extended Source to Surface Distance. We conclude that, although the single-field Extended Source to Surface Distance technique delivered a more homogeneous dose, its maximum dose is higher than that of the Field Alignment technique. A further major advantage of the Field Alignment technique for craniospinal irradiation is that it does not need replanning and re-setup of patients after every five fractions, and more than 95% of the Planning Target Volume received 95% of the prescribed dose in all plans, with acceptable hot spots.

Keywords: craniospinal irradiation, cranium, cervicospine, immobilize, lumbosacral spine

Procedia PDF Downloads 76
117 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers

Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello

Abstract:

Recent technological advances allow for applying innovative, multimedia screen-based assessment tools to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages with respect to paper-based tools. Firstly, computer-based tools which provide games, videos, and audio may be more motivating and engaging for children, especially those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and provides clinicians and researchers, but also teachers, with the opportunity to test children multiple times over the same school year and, thus, to monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools sometimes provide automatically calculated scores, producing less subjective evaluations of the assessed skills, and give immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (i.e., the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool through paper-based standardised tools on their lexical (Peabody Picture Vocabulary Test), morpho-syntactical (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested through the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores obtained in the paper-based assessment and scores obtained in each task of the computer-based assessment were correlated. Significant and positive correlations emerged between all the tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences in the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to achieve a comprehensive and multi-dimensional approach to assessing children. However, such a tool represents a good opportunity for teachers to identify language-related problems early, even in the school environment.
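For reference, the Cronbach's alpha statistic reported above is computed from a respondents-by-items score matrix as k/(k−1)·(1 − Σ item variances / variance of total scores). A minimal sketch (not the software actually used in the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix,
    using sample variances (ddof=1)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly consistent items (each respondent scoring identically on every item) yield alpha = 1, and values around .80, as found here, are conventionally read as acceptable internal consistency.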

Keywords: assessment, computer-based, early identification, language-related skills

Procedia PDF Downloads 152
116 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic and thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To efficiently manage the plant, an energy dispatch strategy was created to control the flow of energy between the power sources and the thermal and electric storage. In this article, we elaborate models of the equipment and, from these models, extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that, under reasonable assumptions, the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat.
The combination of PVT and electrical storage to support the prioritized load in an emergency scenario is evaluated on two different days of the year, those with the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads; only on the day with very low irradiation does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
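The dump-load rule described in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the genset rating, the 30% minimum-load threshold and the 1 kW resistor steps are assumed values, not figures from the paper.

```python
# Illustrative dump-load dispatch: keep a genset at or above its minimum
# acceptable loading by switching small resistive dump loads in.
# All numbers are assumed for demonstration.

GENSET_RATED_KW = 50.0      # assumed rated genset output
MIN_LOAD_FRACTION = 0.30    # assumed minimum acceptable load threshold
DUMP_STEP_KW = 1.0          # assumed size of each dump resistor

def dump_load_needed(electric_demand_kw: float) -> float:
    """Return the dump load (kW) to add so the genset runs at or
    above its minimum acceptable loading."""
    min_load_kw = GENSET_RATED_KW * MIN_LOAD_FRACTION
    if electric_demand_kw >= min_load_kw:
        return 0.0
    deficit = min_load_kw - electric_demand_kw
    # Round up to whole resistor steps (ceiling division on floats).
    steps = -(-deficit // DUMP_STEP_KW)
    return steps * DUMP_STEP_KW

# Example: 10 kW of demand against a 15 kW minimum load -> 5 kW dumped,
# which becomes useful heat, e.g. for the thermal storage.
print(dump_load_needed(10.0))
```

The dumped power is what the abstract describes as excess electric energy transformed into useful heat.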

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 153
115 Solar Power Generation in a Mining Town: A Case Study for Australia

Authors: Ryan Chalk, G. M. Shafiullah

Abstract:

Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy which can be used to supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and to produce results for the integration of PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters. 
Results show that the use of inverters with harmonic filtering reduces the level of harmonic injections to an acceptable level according to Australian standards. Furthermore, the configuration of inverters to supply active and reactive power assists in mitigating low power factor problems. The use of FACTS devices (SVC and STATCOM) also reduces the harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
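Checking harmonic injections against a standard's limit comes down to a total harmonic distortion (THD) calculation. The following sketch is illustrative only; the voltage magnitudes are invented and the applicable limit depends on the relevant standard, not on anything stated here.

```python
import math

def thd_percent(harmonic_rms: list, fundamental_rms: float) -> float:
    """Total harmonic distortion: RMS sum of harmonics (orders 2..n)
    relative to the fundamental, expressed as a percentage."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Invented example: a 230 V fundamental with 3rd/5th/7th harmonic
# components of 6, 4 and 2 V RMS.
thd = thd_percent([6.0, 4.0, 2.0], 230.0)
print(round(thd, 2), "% THD")  # compare against the applicable standard's limit
```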

Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality

Procedia PDF Downloads 145
114 The Role of Citizen Journalism on the Rising of Public Awareness in the Kurdistan Region Government-Iraq

Authors: Abdulsamad Qadir Hussien

Abstract:

The development of new technology in recent years has offered ordinary people various online digital platform tools and internet access to provide news stories, information, and subjects of public interest in the Kurdistan Region Government-Iraq (KRI). This shift has offered more chances for ordinary people to engage with other individuals in order to discuss and argue matters relating to their everyday lives. The key purpose of this research project is to examine the role of citizen journalism in the increase of public awareness in the Kurdish community in the KRI; in particular, citizen journalism provides a new opportunity for ordinary people to raise their voices about problems and public matters in the KRI. The sample of this research project encompasses ordinary people who use social media platforms as sources of information and news concerning KRI government policy, with a focus on those who interact with the blogs, posts, and footage produced by citizen journalism. The questionnaire was sent to more than 1,000 participants in the Kurdish community, a statistically acceptable sample for obtaining significant results. The sampling process is mainly based on the survey method. The online questionnaire form is divided into four key sections; the first contains socio-demographic questions, including gender, age, and level of education. The research project applied the survey method in order to gather data and information surrounding the role of citizen journalism in increasing awareness of individuals in the Kurdish community. For this purpose, the researcher designed a questionnaire as the primary tool for the data collection process from ordinary people who use social media as a source of news and information. 
During the research project, online questionnaires were sent in two ways – via Facebook and email – to participants in the Kurdish community, and the questionnaire sought answers to questions such as to what extent citizen journalism helps users to obtain information and news about public affairs and government policy. The research project found that citizen journalism has an essential role in increasing awareness of the Kurdish community, especially as it has helped ordinary people to raise their voices in the KRI. Furthermore, citizen journalism carries additional advantages as a digital source of news, footage, and information related to public affairs. This study also shows how citizen journalism brings to the fore news stories that are unreachable to professional journalists in the KRI.

Keywords: citizen journalism, public awareness, demonstration and democracy, social media news

Procedia PDF Downloads 37
113 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. 
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
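The two reliability indices described above can be computed from the minute-by-minute curves with a simple running energy balance. The sketch below assumes an ideal, lossless battery and uses invented toy data, not site measurements.

```python
# Sketch of the loss-of-energy-probability (LOEP) and expected unserved
# energy (EUE) indices from one-minute production and consumption series,
# with a simple ideal-battery balance. All values are illustrative.

def reliability_indices(production_kwh, consumption_kwh, battery_kwh):
    """Walk the minute-by-minute balance; count the fraction of minutes
    with unserved load (LOEP) and sum the unserved energy (EUE, kWh)."""
    soc = battery_kwh            # start with a full battery
    short_minutes = 0
    unserved = 0.0
    for p, c in zip(production_kwh, consumption_kwh):
        soc += p - c
        if soc < 0.0:
            short_minutes += 1
            unserved += -soc     # energy that could not be served
            soc = 0.0
        soc = min(soc, battery_kwh)   # storage cannot exceed capacity
    return short_minutes / len(production_kwh), unserved

# Toy example: flat 1 kWh/min load, solar only in minutes 2-5, 1 kWh battery.
prod = [0, 0, 2, 2, 2, 2, 0, 0]
cons = [1] * 8
loep, eue = reliability_indices(prod, cons, battery_kwh=1.0)
# loep = 0.25 (2 of 8 minutes short), eue = 2.0 kWh
```

Enlarging the battery or the production profile in this toy model shows the incremental benefit the paper quantifies: each added increment reduces LOEP and EUE by a measurable amount.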

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 211
112 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory

Authors: Diom Loreen Ndum, Omarine Njimanted

Abstract:

With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible in many developing countries because of non-availability or the high cost of the materials. Therefore, the preparation and use of in-house quality control serum can be a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study: serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, which had been screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B surface antigen (HBsAg), and pooled together in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution, and all aliquots were stored in a deep freezer at −20 °C until required for analysis. The study ran from 9 June to 12 August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer and allowed to thaw before being analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart was established. The mean and standard deviation for the commercially acquired control sample were provided by the manufacturer. 
The following results were observed: pooled sera had a smaller standard deviation for creatinine, urea and AST than the commercially acquired control samples. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea and AST for the in-house quality control when compared with the commercial control. The coefficients of variation for the parameters of both the commercial control and the in-house control samples were less than 30%, which is acceptable. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
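The control statistics behind a Levey-Jennings chart (mean, SD, CV and the ±1/2/3 SD limits, plus a Westgard 1-3s check) can be sketched in a few lines. The creatinine values below are invented for illustration and are not the study's data.

```python
import statistics

def lj_limits(values):
    """Mean, SD and the ±1/2/3 SD control limits for a Levey-Jennings chart."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean, sd, {k: (mean - k * sd, mean + k * sd) for k in (1, 2, 3)}

def violates_13s(value, mean, sd):
    """Westgard 1-3s rule: a single point beyond mean +/- 3 SD (rejection)."""
    return abs(value - mean) > 3 * sd

# Illustrative stand-in for the first 20 creatinine results (umol/L).
runs = [88, 90, 87, 91, 89, 90, 88, 92, 89, 90,
        87, 91, 90, 88, 89, 91, 90, 89, 88, 90]
mean, sd, limits = lj_limits(runs)
cv_percent = 100 * sd / mean   # coefficient of variation, compared to 30%
```

New daily results are then plotted against `limits` and checked with the rule function before the run is accepted.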

Keywords: internal quality control, levey-jennings chart, pooled sera, shifts, trends, westgard rules

Procedia PDF Downloads 48
111 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because the well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, which means that only the main drainage features were represented. Additionally, this model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. So the first goal was to complete the model by updating all stormwater network components. Then, available GIS data were used to calculate different catchment properties such as slope, length and imperviousness. 
In order to calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used, along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested, because it is an approximate representation of the catchment shape. Surface roughness coefficients were calibrated as well. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and −1.63 respectively, were all found to be within acceptable ranges.
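The fit statistics quoted above (correlation coefficient, peak error, volume error) can be reproduced with a few lines of code. The observed and simulated flow series below are invented toy values, not the Edmonton monitoring data.

```python
import math

def calibration_stats(observed, simulated):
    """Goodness-of-fit measures for a calibrated flow model: Pearson
    correlation coefficient, peak error (%) and volume error (%)."""
    n = len(observed)
    mo, ms = sum(observed) / n, sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_s = sum((s - ms) ** 2 for s in simulated)
    r = cov / math.sqrt(var_o * var_s)
    peak_err = 100 * (max(simulated) - max(observed)) / max(observed)
    vol_err = 100 * (sum(simulated) - sum(observed)) / sum(observed)
    return r, peak_err, vol_err

# Invented hydrograph pair (m3/s) standing in for observed and computed flows.
obs = [0.2, 0.8, 2.5, 4.0, 3.1, 1.5, 0.6]
sim = [0.3, 0.9, 2.4, 4.1, 2.9, 1.6, 0.7]
r, peak_err, vol_err = calibration_stats(obs, sim)
```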

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 271
110 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model

Authors: Kang Lin Peng

Abstract:

Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury-product pricing; it is a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million, depending on the height reached in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model on the earth's surface. We apply general relativity theory to deduce the mathematical formula for the space tourism pricing model, which covers the traditional time-independent model. In the future, the price of space travel will differ from current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns over time is acceptable when commodities are on the earth's surface, called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services, flying at heights of 6, 12, and 19 kilometers, charge with a pricing model that measures the time coordinate independently. However, space tourism flies at heights above 100 to 550 kilometers, which enlarges the spacetime curvature effect, meaning that tourists escape from zero curvature on the earth's surface to the large curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel, to echo general relativity theory. Intuitively, this spacetime commodity needs to consider the change of spacetime curvature from the earth to space. 
We can assume that each unit of spacetime curvature has a value corresponding to the gradient change of the Ricci or energy-momentum tensor. We then know how much to charge by integrating over the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. On the earth's surface, the space travel pricing model degenerates into a time-independent model, which is the model of traditional commodity pricing. The contribution is that deriving the space tourism pricing model will be a breakthrough in philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent, flat-spacetime model. A pricing model that embeds spacetime, as in general relativity theory, can better reflect the rationality and accuracy of pricing space travel on the universal scale. Moving from the independent-time scale to the spacetime scale will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
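One way to write the idea down is to augment the flat-spacetime price with a curvature term integrated from the surface to the travel altitude. This is a hedged sketch of a possible functional form, not the authors' published formula; the symbols P_0, kappa and R are assumptions introduced here for illustration.

```latex
% Hedged sketch: flat-spacetime price P_0(t) plus a curvature term
% integrated from the earth's surface r_E to the travel altitude r_E + h;
% R(r) is a scalar curvature measure and \kappa converts curvature to price.
P(h, t) \;=\; P_0(t) \;+\; \kappa \int_{r_E}^{\,r_E + h} R(r)\,\mathrm{d}r
% At h = 0 the integral vanishes and the model degenerates to the
% time-independent flat-spacetime price P_0(t), as the abstract states.
```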

Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature

Procedia PDF Downloads 91
109 Biorefinery as Extension to Sugar Mills: Sustainability and Social Upliftment in the Green Economy

Authors: Asfaw Gezae Daful, Mohsen Alimandagari, Kathleen Haigh, Somayeh Farzad, Eugene Van Rensburg, Johann F. Görgens

Abstract:

The sugar industry has to 're-invent' itself to ensure long-term economic survival and opportunities for job creation and enhanced community-level impacts, given increasing pressure from fluctuating and low global sugar prices, increasing energy prices and sustainability demands. We propose biorefineries for re-vitalisation of the sugar industry, using low-value lignocellulosic biomass (sugarcane bagasse, leaves, and tops) annexed to existing sugar mills and producing a spectrum of high-value platform chemicals along with biofuel, bioenergy, and electricity. This presents an opportunity for greener products, to mitigate climate change and overcome economic challenges. Xylose from labile hemicellulose remains largely underutilized, and its conversion to value-added products is a major challenge. Insight is required on pretreatment and/or extraction to optimize production of cellulosic ethanol together with lactic acid, furfural or biopolymers from sugarcane bagasse, leaves, and tops. Experimental conditions for alkaline and pressurized hot water extraction, dilute acid and steam explosion pretreatment of sugarcane bagasse and harvest residues were investigated to serve as a basis for developing various process scenarios under a sugarcane biorefinery scheme. Dilute acid and steam explosion pretreatment were optimized for maximum hemicellulose recovery, combined sugar yield and solids digestibility. An optimal range of conditions was established for alkaline and liquid hot water extraction of hemicellulosic biopolymers, as well as conditions for acceptable enzymatic digestibility of the solid residue after such extraction. Using data from the above, a series of energy-efficient biorefinery scenarios is under development, modelled using Aspen Plus® software to simulate potential factories, to better understand the biorefinery processes and estimate the CAPEX and OPEX, environmental impacts, and overall viability. 
A rigorous and detailed sustainability assessment methodology was formulated to address all pillars of sustainability. This work is ongoing; to date, models have been developed for some of the processes, which can ultimately be combined into biorefinery scenarios. This will allow systematic comparison of a series of biorefinery scenarios to assess their potential to reduce negative impacts and maximize benefits across social, economic, and environmental factors on a lifecycle basis.

Keywords: biomass, biorefinery, green economy, sustainability

Procedia PDF Downloads 486
108 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures over the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies based on available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans that avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. 
The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
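The Markov chain and Monte Carlo steps can be sketched as follows. The five-grade scheme mirrors a discrete condition grading convention, but the transition matrix, the annual time step and the no-maintenance assumption are invented for illustration, not estimated from inspection data.

```python
import random

# Monte Carlo simulation of condition-grade deterioration as a Markov chain.
# Grades run 1 (good) to 5 (failed); the matrix below is an assumed
# no-maintenance upper-triangular transition matrix, one row per grade.
P = [
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # failed state is absorbing
]

def prob_failure(years: int, trials: int = 5000, seed: int = 1) -> float:
    """Estimate the probability that a defence starting in grade 1
    reaches grade 5 within `years` annual transitions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        state = 0                         # index 0 == grade 1
        for _ in range(years):
            state = rng.choices(range(5), weights=P[state])[0]
        if state == 4:                    # index 4 == grade 5 (failed)
            failures += 1
    return failures / trials
```

In the paper's workflow the matrix itself would come from the regression-fitted deterioration curves; here it is fixed by hand so the simulation step can be shown in isolation.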

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 211
107 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot

Authors: Felix Liedel

Abstract:

The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, creative development processes or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study, therefore, aims to reformulate the concept of symbolic interaction in the tradition of George Herbert Mead as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialog with the specific conditions of digital media, the special dispositive situation of chatbots and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: in face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function both as projection surfaces for their communication partners and as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content we have learned in advance. 
Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with the virtual chatbot, we enter into a dialog with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we, therefore, interpret socially influenced signs and behave towards them in an individual way according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: In conversation with digital chatbots, we bring our own impulses, which are brought into permanent negotiation with a generalized social attitude by the chatbot. This also leads to numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films. In this dissertation, I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.

Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity

Procedia PDF Downloads 18
106 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are more concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and then to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability). 
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution and outliers have been checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, radical operational changes, in both process and product, are the most significant contributors to firm efficiency, while their influence on sustainability is low, albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embedding changes into a firm's product and process operational practices has direct and positive consequences for both its efficiency and its sustainability.

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 107
105 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution

Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi

Abstract:

Lead (Pb²⁺), a cation, is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based painting and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosive manufacturing, storage batteries, and alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the acceptable 'safe' level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can lead to health problems such as kidney failure, neurological disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are hydrogels capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shapes of ordinary hydrogels do not change extensively during swelling, the tremendous swelling capacity of superabsorbents causes their shape to change broadly. Because of their superb response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest in numerous industrial applications, for instance for their water retention properties. Natural-based superabsorbent hydrogels have attracted much attention in medicine and pharmaceuticals, baby diapers, agriculture, and horticulture because of their non-toxicity, biocompatibility, and biodegradability. 
Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) as the crosslinking agent and potassium persulfate as the initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was examined. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effect of the clay content of the hydrogel on its metal ion uptake behavior was studied. The NC hydrogels may be considered good candidates for environmental applications to retain more water and to remove heavy metals.
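The equilibrium swelling determined gravimetrically is typically expressed as grams of absorbed water per gram of dry gel; a minimal sketch of that calculation, with illustrative weights rather than measured values from the study:

```python
def equilibrium_swelling(dry_weight_g: float, swollen_weight_g: float) -> float:
    """Equilibrium swelling ratio: grams of absorbed water per gram of dry gel."""
    return (swollen_weight_g - dry_weight_g) / dry_weight_g

# Example: a 0.5 g dry sample that swells to 75.5 g has absorbed
# 150 g of water per gram of dry gel.
print(equilibrium_swelling(0.5, 75.5))  # 150.0
```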

Keywords: adsorption, hydrogel, nanocomposite, super adsorbent

Procedia PDF Downloads 165
104 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study

Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle

Abstract:

Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution has become an important social and environmental issue. Environmental noise pollution puts health and wellbeing at risk. Because of urbanisation, population growth, and the consequent rise in the use of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only worsen as long as air, rail, and highway traffic, the main contributors to noise pollution, continue to increase. The current study was conducted in two zones of a class I city of central India (population range: 1 million–4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates noise pollution in different urban habitats, classified as formal and informal settlements, and compares noise pollution between the settlement types using a t-test. The second objective assesses noise pollution in silent zones (as designated by the Central Pollution Control Board) in a hierarchical way. It also assesses noise pollution in the settlements and compares it with the prescribed permissible limits, using class I sound level equipment. As appropriate indices, the equivalent noise level on the A-frequency weighting network and the minimum and maximum sound pressure levels were computed. The survey was conducted over a period of one week. ArcGIS was used to plot and map the temporal and spatial variability in urban settings. Noise levels at most stations, particularly at heavily trafficked crossroads, squares, and subway stations, were found to be significantly higher than the acceptable limits. The study highlights vulnerable areas that should be considered in city planning and demands area-level planning while preparing a development plan.
It also demands attention to noise pollution from the perspective of residential and silent zones. City planning in urban areas neglects noise pollution assessment at the city level; as a result, irrespective of noise pollution guidelines, the ground reality is far from compliant, producing incompatible land use at the neighbourhood scale with respect to noise pollution. The study's final results will be useful to policymakers, architects, and administrators in developing countries, supporting the governance of noise pollution in urban habitats through efficient decision-making and policy formulation.
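The settlement comparison described above rests on a two-sample t-test; a minimal sketch of a Welch t-statistic on hypothetical Leq(A) readings (the dB(A) values below are illustrative, not the study's measurements):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic (unequal variances), as used to
    compare equivalent noise levels between settlement types."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical Leq(A) readings in dB(A) at formal vs. informal points.
formal = [62.1, 64.5, 61.8, 63.0, 65.2]
informal = [70.3, 72.8, 69.5, 71.1, 73.4]
print(round(welch_t(formal, informal), 2))  # large |t| -> groups differ
```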

Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area

Procedia PDF Downloads 91
103 Quality Assessment of Instant Breakfast Cereals from Yellow Maize (Zea mays), Sesame (Sesamum indicum), and Mushroom (Pleurotus ostreatus) Flour Blends

Authors: Mbaeyi-Nwaoha, Ifeoma Elizabeth, Orngu, Africa Orngu

Abstract:

Composite flours were processed from blends of yellow maize (Zea mays), sesame seed (Sesamum indicum) and oyster mushroom (Pleurotus ostreatus) powder in the ratios 80:20:0, 75:20:5, 70:20:10, 65:20:15 and 60:20:20, respectively, to produce the breakfast cereals coded YSB, SMB, TMB, PMB and OMB, with YSB as the control. The breakfast cereals were produced by hydrating and toasting the yellow maize and sesame at 160 °C for 25 minutes and blending them with oven-dried, packaged oyster mushroom. The developed products (flours and breakfast cereals) were analyzed for proximate composition, vitamins, minerals, anti-nutrients, phytochemicals, and functional, microbial and sensory properties. Results for the flours showed: proximate composition (%): moisture (2.59-7.27), ash (1.29-7.57), crude fat (0.98-14.91), fibre (1.03-16.02), protein (10.13-35.29), carbohydrate (75.48-38.18) and energy (295.18-410.75 kcal). Vitamins ranged as follows: vitamin A (0.14-9.03 µg/100g), vitamin B1 (0.14-0.38), vitamin B2 (0.07-0.15), vitamin B3 (0.89-4.88) and vitamin C (0.03-4.24). Minerals (mg/100g) were reported thus: calcium (8.01-372.02), potassium (1.40-1.85), magnesium (12.09-13.15), iron (1.23-5.25) and zinc (0.85-2.20). The anti-nutrients and phytochemicals ranged from: tannin (1.50-1.61 mg/g), phytate (0.40-0.71 mg/g), oxalate (1.81-2.02 mg/g), flavonoid (0.21-1.27%) and phenolic (1.12-2.01%). Functional properties showed: bulk density (0.51-0.77 g/ml), water absorption capacity (266.0-301.5%), swelling capacity (136.0-354.0%), least gelation (0.55-1.45 g/g) and reconstitution index (35.20-69.60%). The total viable count ranged from 6.4×10² to 1.0×10³ cfu/g, while the total mold count ranged from 1.0×10 to 3.0×10 cfu/g.
For the breakfast cereals, proximate composition (%) ranged thus: moisture (4.07-7.08), ash (3.09-2.28), crude fat (16.04-12.83), crude fibre (4.30-8.22), protein (16.14-22.54), carbohydrate (56.34-47.04) and energy (434.34-393.83 kcal). Vitamin A (7.99-5.98 µg/100g), vitamin B1 (0.08-0.42 mg/100g), vitamin B2 (0.06-0.15 mg/100g), vitamin B3 (1.91-4.52 mg/100g) and vitamin C (3.55-3.32 mg/100g) were reported, while minerals (mg/100g) were: calcium (75.31-58.02), potassium (0.65-4.01), magnesium (12.25-12.62), iron (1.21-4.15) and zinc (0.40-1.32). The anti-nutrients and phytochemicals ranged (mg/g) as: tannin (1.12-1.21), phytate (0.69-0.53), oxalate (1.21-0.43), flavonoid (0.23-1.22%) and phenolic (0.23-1.23%). Bulk density (0.77-0.63 g/ml), water absorption capacity (156.5-126.0%), swelling capacity (309.5-249.5%), least gelation (1.10-0.75 g/g) and reconstitution index (49.95-39.95%) were recorded. The total viable count ranged from 3.3×10² to 4.2×10² cfu/g, and no mold growth was detected. Sensory scores revealed that the breakfast cereals were acceptable to the panelists with oyster mushroom supplementation up to 10%.
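The reported energy values are consistent with the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat) applied to the proximate composition; the abstract does not state the calculation method, so this is an assumption:

```python
def atwater_energy(protein_pct: float, fat_pct: float, carb_pct: float) -> float:
    """Energy (kcal/100 g) from proximate composition using Atwater factors."""
    return 4 * protein_pct + 9 * fat_pct + 4 * carb_pct

# First and last breakfast-cereal blends from the abstract:
print(atwater_energy(16.14, 16.04, 56.34))  # close to the reported 434.34 kcal
print(atwater_energy(22.54, 12.83, 47.04))  # close to the reported 393.83 kcal
```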

Keywords: oyster mushroom (Pleurotus ostreatus), sesame seed (Sesamum indicum), yellow maize (Zea mays), instant breakfast cereals

Procedia PDF Downloads 173
102 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries

Authors: Ismatilla Mardanov

Abstract:

There are three types of countries. The first is willing to attract foreign direct investment (FDI) in enormous amounts and will do whatever it takes to make this happen; therefore, FDI pours into such countries. In the second cluster of countries, even if a country suffers tremendously from a shortage of investment, its government is hesitant to attract investment because it is in the hands of local oligarchs/cartels; therefore, FDI inflows are moderate to low. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into these clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries, but it has not classified countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutional, economic, and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investment; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources.
The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), whose endogeneity or exogeneity is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high because governments are hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, local economic factors are unfavorable for domestic investment even if the institutions are acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
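A minimal sketch of the instrumental-variable idea on synthetic data: with one instrument z and one endogenous regressor x, the just-identified IV estimator is cov(z, y) / cov(z, x). The variable names mirror the paper's setup (political rights/civil liberties instrumenting an institutions index in an FDI regression), but all numbers below are illustrative, not the study's data:

```python
import random

def iv_estimate(z, x, y):
    """Just-identified IV estimator: beta_IV = cov(z, y) / cov(z, x)."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx

random.seed(42)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]  # instrument (e.g., political rights)
u = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder
# Endogenous regressor (institutions index) driven by instrument and confounder:
x = [0.8 * zi + 0.5 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
# Outcome (FDI inflows) with true causal effect 2.0 plus confounding:
y = [2.0 * xi + 1.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

print(iv_estimate(z, x, y))  # consistent for the true effect of 2.0
```

Plain OLS would be biased upward here because x is correlated with the confounder u; instrumenting with z removes that bias, which is the same logic the paper applies with political rights and civil liberties.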

Keywords: foreign direct investment, economy, institutions, instrumental variable estimation

Procedia PDF Downloads 138
101 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, potentially unattended environment, together with resource constraints in processing, storage, and power, subjects such networks to stringent limitations in lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solutions in the form of software-based network Intrusion Detection Systems (IDSs). However, the developed IDSs have proven neither secure enough nor accurate enough to detect all malicious attack behaviours. The problem is thus the lack of coverage of all malicious behaviours in the proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs impose on WSNs. In other words, not all requirements are implemented and then traced.
Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks of current IDSs stem from researchers and developers not following structured software development processes when developing IDSs, resulting in inadequate requirement management, processing, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering in developing IDSs. It also studies a set of existing IDSs and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, at an acceptable level of performance, accuracy, and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 295
100 A New Index for the Differential Diagnosis of Morbid Obese Children with and without Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Metabolic syndrome (MetS) is a severe health problem which is common among obese individuals. The components of MetS are rather stable in adults compared to the components discussed for children. Due to the ambiguity in this group of the population, how to diagnose MetS in morbid obese (MO) children still constitutes a matter of discussion. For this purpose, a formula that facilitates the diagnosis of MetS in MO children was investigated. The aim of this study was to develop a formula capable of discriminating MO children with and without MetS findings. The study population comprised MO children whose age- and sex-dependent body mass index (BMI) percentiles were above 99. Metabolic syndrome components were also determined. Elevated systolic and diastolic blood pressures (SBP and DBP), elevated fasting blood glucose (FBG), elevated triglycerides (TRG), and/or depressed high density lipoprotein cholesterol (HDL-C), in addition to central obesity, were listed as MetS components for each child. The presence of at least two of these components confirmed a MetS case. Two groups were constituted: the first comprised forty-two MO children without MetS components; the second comprised forty-four MO children with at least two MetS components. Anthropometric measurements, including weight, height, and waist and hip circumferences, were performed following physical examination. Body mass index and homeostatic model assessment of insulin resistance values were calculated. Informed consent forms were obtained from the parents of the children. The Institutional Non-Interventional Ethics Committee approved the study design. Blood pressure values were recorded. Routine biochemical analyses, including FBG, insulin (INS), TRG, and HDL-C, were performed. The performance and the clinical utility of the Diagnostic Obesity Notation Model Assessment Metabolic Syndrome Index (DONMA MetS index) [(INS/FBG)/(HDL-C/TRG)*100] were tested.
Appropriate statistical tests were applied to the study data; a p value smaller than 0.05 was defined as significant. Metabolic syndrome index values were 41.6±5.1 in the MO group and 104.4±12.8 in the MetS group. The corresponding HDL-C values were 54.5±13.2 mg/dl and 44.2±11.5 mg/dl. There were statistically significant differences between the groups (p<0.001). Upon evaluation of the correlations between MetS index and HDL-C values, a much stronger negative correlation was found in the MetS group (r=-0.515; p=0.001) than in the MO group (r=-0.371; p=0.016). From these findings, it was concluded that the difference between the MO and MetS groups was, as expected, highly significant for this recently introduced MetS index, because all of the biochemically defined MetS components are incorporated into the index. This is particularly important because each of the four parameters used in the formula is a cardiac risk factor. Aside from discriminating MO children with and without MetS findings, the MetS index introduced in this study is important from the cardiovascular risk point of view in the MetS group of children.
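A minimal sketch of the DONMA MetS index calculation from the formula given in the abstract; the input values below are hypothetical, not patient data from the study:

```python
def mets_index(ins: float, fbg: float, hdl_c: float, trg: float) -> float:
    """DONMA MetS index: [(INS/FBG)/(HDL-C/TRG)] * 100.

    ins: insulin (uIU/ml); fbg, hdl_c, trg in mg/dl, as reported.
    """
    return (ins / fbg) / (hdl_c / trg) * 100

# Illustrative case: INS=20, FBG=90, HDL-C=44, TRG=150.
# Higher INS and TRG or lower HDL-C raise the index, consistent with the
# higher values reported in the MetS group.
print(round(mets_index(20, 90, 44, 150), 1))
```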

Keywords: children, fasting blood glucose, high density lipoprotein cholesterol, index, insulin, metabolic syndrome, morbid obesity, triglycerides

Procedia PDF Downloads 62
99 Investigation of the Usability of Biochars Obtained from Olive Pomace and Smashed Olive Seeds as Additives for Bituminous Binders

Authors: Muhammed Ertugrul Celoglu, Beyza Furtana, Mehmet Yilmaz, Baha Vural Kok

Abstract:

Biomass, considered one of the largest renewable energy sources in the world, has the potential to be used as a bitumen additive after being processed by any of a wide variety of thermochemical methods. Furthermore, biomasses are renewable within short periods of time and possess a hydrocarbon structure; these characteristics promote their usability as additives. One of the most common ways to create materials of significant economic value from biomass is pyrolysis, defined as the thermochemical degradation (carbonization) of organic matter at high temperature in an anaerobic environment. The liquid substance resulting from pyrolysis is defined as bio-oil, whereas the solid substance is defined as biochar. Olive pomace is the mildly oily pulp, containing seeds, that remains after olives are pressed and their oil is extracted; as the waste of olive oil factories, it is a significant source of biomass. Because olive pomace is a waste material, it can create problems, just as other wastes do, unless appropriate and acceptable areas of utilization exist. The waste material, generated in large amounts, is generally used as fuel and fertilizer. Additive materials, which are usually expensive and chemically produced, are generally used to improve the properties of bituminous binders. The aim of this study is to investigate the usability of biochars, obtained by subjecting olive pomace and smashed olive seeds to pyrolysis, as additives in bitumen modification. In this way, new uses are provided for waste material, offering both economic and environmental benefits. In this study, olive pomace and smashed olive seeds were used as sources of biomass. Initially, both materials were ground and passed through a No. 50 sieve.
Both sieved materials were subjected to pyrolysis (carbonization) at 400 °C, yielding bio-oil and biochar. The obtained biochars were added to B160/220-grade pure bitumen at rates of 10% and 15%, and modified bitumens were obtained by mixing in a high-shear mixer at 180 °C for 1 hour at 2000 rpm. The pure bitumen and the four modified binders were tested with penetration, softening point, rotational viscometer, and dynamic shear rheometer tests to evaluate the effects of the additives and their ratios. According to the test results, both biochar modifications at both ratios improved the performance of the pure bitumen. Comparison of the binders modified with olive pomace biochar and smashed olive seed biochar revealed no notable difference in their performance.

Keywords: bituminous binders, biochar, biomass, olive pomace, pomace, pyrolysis

Procedia PDF Downloads 104
98 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running

Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp

Abstract:

Introduction: Running is one of the most popular sports activities. Improper sagittal-plane ankle, knee, and hip kinematics are considered to be associated with increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics both in the field and in laboratory settings, as they are cheaper, more portable, more accessible, and easier to use than a 3D motion analysis system. The aims of this study were 1) to compare the results of a 3D gait analysis system and the Coach's Eye (CE) app, and 2) to evaluate the test-retest and intra-rater reliability of the CE app for the sagittal-plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to each subject's body. Subjects were asked to run at a self-selected speed on a treadmill, and twenty-five seconds of running were collected for analysis of the kinematics of interest. To measure the sagittal-plane hip, knee, and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note 5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal-plane hip, knee, and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest and intra-rater reliability. To analyze the agreement between the 3D and 2D outcomes, Bland-Altman plots were used.
The ICC values were: ankle at TD (TRR=0.8, IRR=0.94), ankle at TO (TRR=0.9, IRR=0.97), knee at TD (TRR=0.78, IRR=0.98), knee at TO (TRR=0.9, IRR=0.96), hip at TD (TRR=0.75, IRR=0.97), and hip at TO (TRR=0.87, IRR=0.98). The Bland-Altman plots, displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) between the 3D and 2D outcomes, gave: ankle at TD (MD=3.71, +2SDMD=8.19, -2SDMD=-0.77), ankle at TO (MD=-1.27, +2SDMD=6.22, -2SDMD=-8.76), knee at TD (MD=1.48, +2SDMD=8.21, -2SDMD=-5.25), knee at TO (MD=-6.63, +2SDMD=3.94, -2SDMD=-17.19), hip at TD (MD=1.51, +2SDMD=9.05, -2SDMD=-6.03), and hip at TO (MD=-0.18, +2SDMD=12.22, -2SDMD=-12.59). Discussion: The ability to reproduce measurements accurately is valuable in the performance and clinical assessment of joint-angle outcomes. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all measured kinematics is excellent (ICC ≥ 0.75). The Bland-Altman plots show large differences in values for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement: since ankle motion at TD occurs mostly outside the sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect TO during the stance phase of running.
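The Bland-Altman statistics reported here (MD and ±2SDMD) can be computed directly from paired measurements; a sketch on hypothetical paired 3D/2D angle values, not the study's data:

```python
import math

def bland_altman(a, b):
    """Bland-Altman agreement: mean difference and the +/-2 SD limits."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    md = sum(diffs) / n
    sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))
    return md, md + 2 * sd, md - 2 * sd  # MD, +2SDMD, -2SDMD

# Illustrative paired knee angles (degrees) from a 3D system and a 2D app.
angles_3d = [10.2, 12.5, 9.8, 11.4, 13.0]
angles_2d = [8.1, 10.9, 7.5, 9.8, 11.2]
md, upper, lower = bland_altman(angles_3d, angles_2d)
print(round(md, 2), round(upper, 2), round(lower, 2))
```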

Keywords: reliability, running, sagittal plane, two dimensional

Procedia PDF Downloads 170