948 Exercise in Extreme Conditions: Leg Cooling and Fat/Carbohydrate Utilization
Authors: Anastasios Rodis
Abstract:
Background: Case studies of walkers, climbers, and campers exposed to cold and wet conditions without limb water/windproof protection revealed experiences of muscle weakness and fatigue. It is reasonable to assume that part of this fatigue occurs due to an alteration in substrate utilization, since the reduction of performance in extreme cold conditions may be partially explained by higher anaerobic glycolysis, reflected in higher carbohydrate oxidation and an increased rate of blood lactate accumulation. The aim of this study was to assess the effects of pre-exercise lower limb cooling on substrate utilization rate during sub-maximal exercise. Method: Six male university students (mean (SD): age, 21.3 (1.0) yr; maximal oxygen uptake (VO₂ max), 49.6 (3.6) ml.kg⁻¹.min⁻¹; and percentage of body fat, 13.6 (2.5) %) were examined in random order after either 30 min of cold water (12°C) immersion up to the gluteal fold, utilized as the cooling strategy, or under control conditions (no precooling), with tests separated by a minimum of 7 days. Exercise consisted of 60 min of cycling at 50% VO₂ max in a thermoneutral environment of 20°C. Subjects were also required to record a diet diary over the 24 hours prior to each trial. Means (SD) for the three macronutrients during the day prior to each trial (expressed as a percentage of total energy) were 52 (3) % carbohydrate, 31 (4) % fat, and 17 (2) % protein. Results: The responses to lower limb cooling relative to the control trial during exercise were: 1) carbohydrate (CHO) oxidation and blood lactate (Bₗₐc) concentration were significantly higher (P < 0.05); 2) rectal temperature (Tᵣₑc) was significantly higher (P < 0.05), but skin temperature was significantly lower (P < 0.05); no significant differences were found in blood glucose (Bg), heart rate (HR) or oxygen consumption (VO₂). Discussion: These data suggest that lower limb cooling prior to submaximal exercise shifts metabolism from fat oxidation to CHO oxidation. This shift probably has important implications in a survival scenario, since people facing accidental localized cooling of their limbs, whether through wading or falling into cold water or snow, will have to rely on CHO availability even if they do not perform high-intensity activity.
Keywords: exercise in wet conditions, leg cooling, outdoors exercise, substrate utilization
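The abstract does not state how the oxidation rates were derived; a common approach is indirect calorimetry with stoichiometric (Frayn-type) equations, and the sketch below illustrates that approach. The coefficients and the example VO₂/VCO₂ values are assumptions for illustration, not data from this study.

```python
# Minimal sketch: estimating whole-body fat and carbohydrate (CHO) oxidation
# from indirect calorimetry using widely cited Frayn-type equations.
# Coefficients and example gas-exchange values are assumptions, not taken from
# the abstract; protein oxidation is neglected.

def substrate_oxidation(vo2_l_min: float, vco2_l_min: float) -> dict:
    """Return approximate oxidation rates in g/min from VO2 and VCO2 (L/min)."""
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate oxidation
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min   # fat oxidation
    rer = vco2_l_min / vo2_l_min                       # respiratory exchange ratio
    return {"CHO_g_per_min": cho_g_min, "fat_g_per_min": fat_g_min, "RER": rer}

# Illustrative values only, roughly in the range of moderate cycling:
print(substrate_oxidation(vo2_l_min=1.8, vco2_l_min=1.6))
```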
Procedia PDF Downloads 440
947 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls
Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little
Abstract:
Fasting has been practiced for centuries and is incorporated into the practices of different religions including Islam, whose followers intermittently fast throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for a cardio-metabolic and endocrine benefit of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c and lipids, as well as body composition (with DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min-1, p=0.045). Despite this increase in orexigenic ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan, but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min-1, p=0.003). In conclusion, Ramadan, as a model for IF, appears to have more favourable effects on body composition in T2DM, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial in T2DM as a nutritional intervention. Larger studies are warranted.
Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones
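As an illustration of how a postprandial response such as the ghrelin result above is summarized as an area under the curve, the sketch below applies the standard trapezoidal rule. The sampling times and concentrations are invented placeholders, not study data.

```python
# Minimal sketch of a trapezoidal AUC for a post-meal hormone profile.
# Time points and concentrations are hypothetical, not the study's data.
import numpy as np

time_min = np.array([0, 30, 60, 90, 120, 180])          # minutes after the mixed meal
ghrelin_pg_ml = np.array([55, 40, 38, 42, 48, 52])       # hypothetical plasma ghrelin (pg/ml)

auc_total = np.trapz(ghrelin_pg_ml, time_min)            # total AUC, pg/ml * min
auc_incremental = np.trapz(ghrelin_pg_ml - ghrelin_pg_ml[0], time_min)  # baseline-corrected

print(f"total AUC = {auc_total:.0f} pg/ml*min, incremental AUC = {auc_incremental:.0f}")
```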
Procedia PDF Downloads 312
946 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming
Authors: David Muyise
Abstract:
Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste plastic to fuel oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing the waste plastic into a cleaner fuel that can be used to power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to help close the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location and number of plants, and downstream fuel applications were concurrently made based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model comprises qualitative data gathered from waste plastic pickers at landfills and potential investors, and quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be the ideal locations for the deployment of distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations to initially utilize the decentralized pyrolysis technology system. We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste plastic to fuel processing in Uganda and elsewhere in developing economies.
Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing
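To make the modelling approach concrete, the sketch below sets up a toy plant-deployment MIP of the kind described in the abstract, using the open-source PuLP package. The town names come from the abstract, but all costs, capacities, supply figures and the budget are hypothetical placeholders rather than the authors' formulation or data.

```python
# Toy MIP sketch for siting waste-plastic pyrolysis plants (not the study's model).
# Requires the open-source PuLP package; all numbers are hypothetical.
import pulp

towns = ["Kalagi", "Mukono", "Ishaka", "Jinja"]
capex = {"Kalagi": 120, "Mukono": 150, "Ishaka": 110, "Jinja": 140}   # capital cost (k$)
capacity = {"Kalagi": 30, "Mukono": 45, "Ishaka": 25, "Jinja": 40}    # tonnes plastic/day
unit_profit = 4.0     # net value of processing one tonne/day of plastic (k$, hypothetical)
plastic_supply = 90   # total collectable waste plastic (tonnes/day, hypothetical)
budget = 400          # available capital (k$, hypothetical)

m = pulp.LpProblem("plant_deployment", pulp.LpMaximize)
build = pulp.LpVariable.dicts("build", towns, cat="Binary")   # 1 if a plant is built
x = pulp.LpVariable.dicts("tonnes", towns, lowBound=0)        # tonnes processed per day

# Toy objective: processing value minus capital cost of the plants that are built
m += pulp.lpSum(unit_profit * x[t] for t in towns) - pulp.lpSum(capex[t] * build[t] for t in towns)
for t in towns:
    m += x[t] <= capacity[t] * build[t]                       # only built plants can process
m += pulp.lpSum(x[t] for t in towns) <= plastic_supply        # limited feedstock
m += pulp.lpSum(capex[t] * build[t] for t in towns) <= budget # capital budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: (int(build[t].value()), x[t].value()) for t in towns})
```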
Procedia PDF Downloads 129
945 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia
Authors: Zeleke Ayalew Alemu
Abstract:
This study aims to investigate tax administration constraints in Addis Ababa with a focus on small and medium-sized enterprises by identifying issues and constraints in tax administration and assessment. The study identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative research designs and employed questionnaires, focus group discussions and key informant interviews for primary data collection, and also used secondary data from different sources. The study identified many constraints that taxpayers are facing. Among others, tax administration offices' inefficiency, reluctance to respond to taxpayers' questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Besides, the tax laws and regulations are complex and not enforced equally and fully on all taxpayers, causing a prevalence of business entities not paying taxes. This apparently results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent and increases compliance costs. In the case of a dispute, the appeal process is excessively long and the tax authority's decision is irreversible. The Value Added Tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. Cash register machine administration and the reporting system are big headaches for taxpayers. With regard to taxpayers, there is a lack of awareness of tax laws and documentation. Based on the above and other findings, the study forwarded recommendations such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities through the use of modern technologies and upgraded human resources, conducting extensive awareness creation programs, and enforcing tax laws in a fair and equitable manner. The objective of this study is to assess the problems, weaknesses and limitations of small and medium-sized enterprise taxpayers, tax authority administrations, and laws as sources of inefficiency and dissatisfaction, in order to forward recommendations that bring about efficient, fair and transparent tax administration. The entire study has been conducted in a participatory and process-oriented manner by involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods in generating both secondary and primary data as well as both qualitative and quantitative data in the field. The research team held FGDs with 21 people from the Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also conducted 10 KIIs with informants selected from the various segments of stakeholders. The lead researcher, along with research assistants, conducted the KIIs using a predesigned semi-structured questionnaire.
Keywords: taxation, tax system, tax administration, small and medium enterprises
Procedia PDF Downloads 73
944 The Efficacy of Class IV Diode Laser in the Treatment of Patients with Chronic Neck Pain: A Randomized Controlled Trial
Authors: Mohamed Salaheldien Mohamed Alayat, Ahmed Mohamed Elsoudany, Roaa Abdulghani Sroge, Bayan Muteb Aldhahwani
Abstract:
Background: Neck pain is a common illness that can affect an individual's daily activities. Class IV lasers, with their longer wavelength, can stimulate tissues and penetrate more deeply than classic low-level laser therapy. Objectives: The aim of the study was to investigate the efficacy of a class IV diode laser in the treatment of patients with chronic neck pain (CNP). Methods: Fifty-two patients participated and completed the study. Their mean (SD) age was 50.7 (6.2) years. Patients were randomized into two groups, treated with laser plus exercise (laser + EX) or placebo laser plus exercise (PL + EX). Treatment was performed with a Class IV laser in two phases: a scanning phase and a trigger point phase. Scanning was applied to the posterior neck and shoulder girdle region at 4 J/cm², with a total energy of 300 J applied to 75 cm² in 4 minutes and 16 seconds. Eight trigger points on the posterior neck area were treated at 4 J/cm², with an application time of 30 seconds. Both groups received exercise two times per week for 4 weeks. Exercises included range of motion, isometric, stretching, and isotonic resisted exercises for the cervical extensors, lateral bending and rotator muscles, with postural correction exercises. The measured variables were pain level, using a visual analogue scale (VAS), and neck functional activity, using the neck disability index (NDI) score. Measurements were taken at baseline and after 4 weeks of treatment. The level of statistical significance was set at p < 0.05. Results: There were significant decreases in post-treatment VAS and NDI in both groups as compared to baseline values. Laser + EX effectively decreased VAS (mean difference -6.5, p = 0.01) and NDI scores (mean difference -41.3, p = 0.01) after 4 weeks of treatment compared to PL + EX. Conclusion: Class IV laser combined with exercise is an effective treatment for patients with CNP as compared to PL + EX therapy. The combination of laser + EX effectively increased functional activity and reduced pain after 4 weeks of treatment.
Keywords: chronic neck pain, class IV laser, exercises, neck disability index, visual analogue scale
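A small arithmetic check of the scanning-phase dose reported above is shown below: the stated 4 J/cm² over 75 cm² reproduces the quoted 300 J, and the implied mean output power over 4 min 16 s is derived from those figures (the power value itself is not stated in the abstract).

```python
# Dose parameters from the abstract; the mean power is derived, not reported.
dose_j_per_cm2 = 4.0
area_cm2 = 75.0
scan_time_s = 4 * 60 + 16                      # 4 min 16 s = 256 s

total_energy_j = dose_j_per_cm2 * area_cm2     # 300 J, matching the abstract
mean_power_w = total_energy_j / scan_time_s    # ~1.17 W average output during scanning

print(f"total energy = {total_energy_j:.0f} J, mean power = {mean_power_w:.2f} W")
```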
Procedia PDF Downloads 314
943 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures
Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto
Abstract:
HFC and PFC gases have been commonly and widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing processes because of their high heat of vaporization and chemical stability. On the other hand, HFC and PFC gases have a high global warming effect on the earth. Therefore, these gases emitted from chemical apparatus such as refrigerators have to be decomposed. Until now, disposal of these gases has mainly been carried out by combustion methods such as rotary kiln treatment. However, this treatment requires extremely high temperatures, over 1000 °C. In recent years, in order to reduce energy consumption, hydrolytic decomposition methods using catalysts and plasma decomposition treatment have attracted much attention as new disposal treatments. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas and deteriorates catalysts in the decomposition process. Moreover, an additional process for the neutralization of the hydrofluoric acid is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was confirmed by using H-Y, H-Beta, H-MOR and H-ZSM-5. The formation of CaF2 in the zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone as the reactant showed closely similar behavior regardless of the type of zeolite (MOR, Y, ZSM-5, Beta type). There was no difference in the XRD patterns of each zeolite before and after the reaction. On the other hand, differences in C2F6 decomposition were observed among the zeolite/CaO mixtures. These results suggest that the rate-determining process for C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive sites. In other words, C2F6 decomposition for the zeolite/CaO mixtures improved compared with that for the zeolite alone because of the removal of fluoride from the reactive sites. H-MOR/CaO showed 100% decomposition for 3.5 h, a significant improvement over the zeolite alone. On the other hand, Y-type zeolite showed no improvement, that is, almost the same value as Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta and Y-type zeolite. This order is similar to the acid strength characterized by NH3-TPD. Hence, it is considered that C-F bond cleavage is closely related to acid strength.
Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition
Procedia PDF Downloads 481
942 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
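To illustrate the encoder / BiLSTM / Mamba / multi-decoder layout described above, the following PyTorch sketch wires those pieces together for three-component waveforms. It is not the authors' implementation: layer sizes are guesses, the residual CNN blocks are omitted, and the Mamba layer is represented by a simple placeholder module (in practice one would plug in a real selective state-space block, e.g. from the mamba-ssm package).

```python
# Schematic sketch of an EQMamba-style network (assumed sizes, placeholder Mamba).
import torch
import torch.nn as nn

class PlaceholderMamba(nn.Module):
    """Stand-in for a Mamba block: a gated depthwise-conv mixer over time."""
    def __init__(self, d_model: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=5, padding=2, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)
    def forward(self, x):                        # x: (batch, time, d_model)
        mixed = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return x + mixed * torch.sigmoid(self.gate(x))

class EQMambaSketch(nn.Module):
    def __init__(self, in_ch: int = 3, d_model: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(            # CNN encoder with max pooling
            nn.Conv1d(in_ch, 32, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, d_model, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
        self.mamba = PlaceholderMamba(d_model)
        # three per-time-step decoders: detection, P-wave pick, S-wave pick
        self.heads = nn.ModuleDict({k: nn.Linear(d_model, 1) for k in ("det", "p", "s")})

    def forward(self, wave):                     # wave: (batch, 3, samples)
        z = self.encoder(wave).transpose(1, 2)   # -> (batch, time, d_model)
        z, _ = self.bilstm(z)
        z = self.mamba(z)
        return {k: torch.sigmoid(h(z)).squeeze(-1) for k, h in self.heads.items()}

model = EQMambaSketch()
out = model(torch.randn(2, 3, 6000))             # e.g. 60 s of 100 Hz 3-component data
print({k: v.shape for k, v in out.items()})
```

Training such a sketch with a weighted sum of binary cross-entropy losses per head and the Adam optimizer mirrors the procedure described in the abstract.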
Procedia PDF Downloads 52
941 Evaluation of the Efficacy of Surface Hydrophobisation and Properties of Composite Based on Lime Binder with Flax Fillers
Authors: Stanisław Fic, Danuta Barnat-Hunek, Przemysław Brzyski
Abstract:
The aim of the study was to evaluate the possibility of applying a modified lime binder together with natural flax fibers and straw to the production of wall blocks for use in the energy-efficient construction industry, and to develop proposals for technological solutions. The following laboratory tests were performed: analysis of the physical characteristics of the tested materials (bulk density, total porosity, and thermal conductivity), compressive strength, a water droplet absorption test, water absorption of samples, diffusion of water vapor, and analysis of the structure using SEM. In addition, the process of surface hydrophobisation was analyzed. The paper examines the effectiveness of two formulations differing in the degree of hydrolytic polycondensation, viscosity and concentration, as these are the factors that determine the final impregnation effect. Four composites, differing in composition, were produced. The composites, as a result of the presence of flax straw and fibers, showed low bulk density in the range from 0.44 to 1.29 kg/m3 and thermal conductivity between 0.13 W/mK and 0.22 W/mK. Compressive strength ranged from 0.45 MPa to 0.65 MPa. The analysis of the results allowed the relationship between the formulas and the physical properties of the composites to be observed. The results on the effectiveness of hydrophobisation of the composites after 2 days showed a decrease in water absorption. Depending on the formulation, after 2 days the water absorption ratio WH of the composites was from 15 to 92% (the effectiveness of hydrophobisation was correspondingly from 8 to 85%). In practice, preparations based on organic solvents often seal the surface, hindering the diffusion of water vapor from the materials, but the studies showed good water vapor permeability through the hydrophobic silicone coating. The conducted pilot study demonstrated the possibility of applying flax composites. The article shows that the CO2 produced in the building process can be reduced by using natural materials for building components whose quality is not inferior to that of the materials which are commonly used.
Keywords: ecological construction, flax fibers, hydrophobisation, lime
Procedia PDF Downloads 334
940 A Case Study on an Integrated Analysis of Well Control and Blow out Accident
Authors: Yasir Memon
Abstract:
The complexity and challenges in the offshore industry are increasing more than in the past, and the oil and gas industry is expanding every day by meeting these challenges. More challenging wells, longer and deeper, are being drilled in today's environment. Blowout prevention is of considerable importance in the oil and gas world. In many past years, when the oil and gas industry was growing, drilling operations were extremely dangerous: there was no technology to determine reservoir pressure, and drilling was hence a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. A potential blowout in the oil industry is a danger to both the environment and human life, and results in losses through environmental damage, the response of state/country regulators, and lost capital investment. There are many cases of blowouts in the oil and gas industry that have caused damage to both humans and the environment. Huge capital investment is being used all over the world to stop blowouts from happening and to keep damage to the lowest possible level. The objective of this study is to promote safety and good resources to assure safety and environmental integrity in all operations during drilling. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods and controlling techniques, can reduce the probability of a blowout to a minimum level. It also discusses the basic procedures, concepts and equipment involved in well control methods and the various steps used under various conditions. Furthermore, another aim of this study is to highlight the role of management in oil and gas operations. Moreover, this study analyzes the causes of the blowout of the Macondo well that occurred in the Gulf of Mexico on April 20, 2010, delivers recommendations and analysis of various aspects of well control methods, and provides a list of the mistakes and compromises that British Petroleum and its partners made during drilling and well completion, as well as the safety and development rule violations that led to the Macondo well disaster. This case study concludes that the Macondo well blowout disaster could have been avoided with proper management of personnel and communication between them, and that by following safety rules/laws, environmental damage could have been brought to a minimum.
Keywords: energy, environment, oil and gas industry, Macondo well accident
Procedia PDF Downloads 187
939 Sorption Properties of Hemp Cellulosic Byproducts for Petroleum Spills and Water
Authors: M. Soleimani, D. Cree, C. Chafe, L. Bates
Abstract:
The accidental release of petroleum products into the environment could have harmful consequences for our ecosystem. Different techniques such as mechanical separation, membrane filtration, incineration, treatment processes using enzymes and dispersants, bioremediation, and sorption processes using sorbents have been applied for oil spill remediation. Most of the techniques investigated are too costly or do not have high enough efficiency. This study was conducted to determine the sorption performance of hemp byproducts (cellulosic materials) in terms of sorption capacity and kinetics for hydrophobic and hydrophilic fluids. In this study, heavy oil, light oil, diesel fuel, and water/water vapor were used as sorbate fluids. Hemp stalk in different forms, including loose material (hammer milled (HM) and shredded (Sh), with low bulk densities) and densified forms (pellets (P) and crumbled pellets (CP), with high bulk densities), was used as the sorbent. The sorption/retention tests were conducted according to the ASTM 726 standard. For a quick-response application of the sorbents, the sorption tests were conducted for 15 min, and for the ideal sorption capacity of the materials, the tests were carried out for 24 h. During the test, the sorbent material was exposed to the fluid by immersion, followed by filtration through a stainless-steel wire screen. Water vapor adsorption was carried out in a controlled environment chamber with the capability of controlling relative humidity (RH) and temperature. To determine the kinetics of sorption for each fluid and sorbent, the retention capacity was also determined at intervals for up to 24 h. To analyze the kinetics of sorption, pseudo-first-order, pseudo-second-order and intraparticle diffusion models were employed with the objective of minimal deviation of the experimental results from the models. The results indicated that the HM and Sh materials had the highest sorption capacity for the hydrophobic fluids, approximately 6 times that of the P and CP materials. For example, the average retention values of heavy oil on HM and Sh were 560% and 470% of the mass of the sorbents, respectively, whereas the retention of heavy oil on P and CP was up to 85% of the mass of the sorbents. This lower sorption capacity of P and CP can be due to the smaller exposed surface area of these materials and the compacted voids or capillary tubes in their structures. For the water uptake application, HM and Sh resulted in at least 40% higher sorption capacity compared to that obtained for P and CP. On average, the performance of sorbate uptake from high to low was as follows: water, heavy oil, light oil, diesel fuel. The kinetic analysis indicated that the pseudo-second-order model describes the sorption of the oils and diesel better than the other models. However, the kinetics of water absorption was better described by the pseudo-first-order model. Acetylation of the HM materials could improve their oil and diesel sorption to some extent. Water vapor adsorption of hemp fiber was a function of temperature and RH, and among the models studied, the modified Oswin model was the best at describing this phenomenon.
Keywords: environment, fiber, petroleum, sorption
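As an illustration of the kinetic modelling mentioned above, the sketch below fits the pseudo-second-order model, q(t) = k2·qe²·t / (1 + k2·qe·t), which the study found to describe oil and diesel uptake best. The time/uptake pairs are invented placeholders, not the measured hemp retention values.

```python
# Minimal pseudo-second-order kinetics fit; data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

t_min = np.array([1, 5, 15, 30, 60, 240, 1440], dtype=float)   # contact time (min)
q_obs = np.array([2.1, 3.6, 4.8, 5.2, 5.5, 5.6, 5.6])          # uptake, g fluid / g sorbent

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t_min, q_obs, p0=[q_obs.max(), 0.01])
print(f"qe = {qe_fit:.2f} g/g, k2 = {k2_fit:.4f} g/(g*min)")
```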
Procedia PDF Downloads 124
938 Examining the Relationship Between Green Procurement Practices and Firm's Performance in Ghana
Authors: Alexander Otchere Fianko, Clement Yeboah, Evans Oteng
Abstract:
Prior research concludes that Green Procurement Practices positively drive Organisational Performance. Nonetheless, the nexus and conditions under which Green Procurement Practices contribute to a Firm's Performance are less understood. The purpose of this quantitative relational study was to examine the relationship between Green Procurement Practices and the performance of 500 firms in Ghana. The researchers further seek to draw insights from the resource-based view to conceptualize Green Procurement Practices and Environmental Commitment as resource capabilities to enhance Firm Performance. The researchers used insights from the contingent resource-based view to examine the Green Leadership Orientation conditions under which Green Procurement Practices contribute to Firm Performance through Environmental Commitment Capabilities. The study's conceptual framework was tested on primary data from firms in the Ghanaian market. PROCESS Macro was used to test the study's hypotheses. Beyond that, Environmental Commitment Capabilities mediated the association between Green Procurement Practices and the Firm's Performance. The study further seeks to find out whether Green Leadership Orientation positively moderates the indirect relationship between Green Procurement Practices and Firm Performance through Environmental Commitment Capabilities. While conventional wisdom suggests that improved Green Procurement Practices help improve a Firm's Performance, this study tested this presumed relationship between Green Procurement Practices and Firm Performance and provides theoretical arguments and empirical evidence to justify how Environmental Commitment Capabilities uniquely, and in synergy with Green Leadership Orientation, transform this relationship. The study results indicated a positive correlation between Green Procurement Practices and Firm Performance. This result suggests that firms that prioritize environmental sustainability and demonstrate a strong commitment to environmentally responsible practices tend to experience better overall performance, including financial gains, operational efficiency, enhanced reputation, and improved relationships with stakeholders. The study's findings inform policy formulation in Ghana related to environmental regulations, incentives, and support mechanisms. Policymakers can use the insights to design policies that encourage and reward firms for their Green Procurement Practices, thereby fostering a more sustainable and environmentally responsible business environment. The findings from such research can also influence the design and development of educational programs in Ghana, specifically in fields related to sustainability, environmental management, and corporate social responsibility (CSR). Institutions may consider integrating environmental and sustainability topics into their business and management courses to create awareness and promote responsible practices among future business professionals. The study results can also promote the adoption of environmental accounting practices in Ghana. By recognizing and measuring the environmental impacts and costs associated with business activities, firms can better understand the financial implications of their Green Procurement Practices and develop strategies for improved performance.
Keywords: environmental commitment, firm's performance, green procurement practice, green leadership orientation
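To illustrate the mediation logic referred to above (Green Procurement Practices acting on Firm Performance through Environmental Commitment), the sketch below estimates a simple indirect effect with two regressions. PROCESS Macro additionally bootstraps this effect and adds the Green Leadership Orientation moderator; the data here are simulated placeholders, not the survey responses, and the variable names are assumptions for illustration.

```python
# Bare-bones simple-mediation sketch (a-path and b-path) on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
gpp = rng.normal(size=n)                           # green procurement practices (X)
ecc = 0.5 * gpp + rng.normal(size=n)               # environmental commitment (mediator M)
perf = 0.3 * gpp + 0.4 * ecc + rng.normal(size=n)  # firm performance (Y)

m_model = sm.OLS(ecc, sm.add_constant(gpp)).fit()                            # a-path: X -> M
y_model = sm.OLS(perf, sm.add_constant(np.column_stack([gpp, ecc]))).fit()   # c'- and b-paths

a, b = m_model.params[1], y_model.params[2]
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {y_model.params[1]:.3f}")
```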
Procedia PDF Downloads 80
937 Valorization of Banana Peels for Mercury Removal in Environmental Realist Conditions
Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva
Abstract:
Introduction: Mercury is one of the most troublesome toxic metals responsible for the contamination of aquatic systems due to its accumulation and bioamplification along the food chain. The 2030 Agenda for Sustainable Development of the United Nations promotes improving water quality by reducing water pollution and fosters enhanced wastewater treatment, encouraging recycling and safe water reuse globally. Sorption processes are widely used in wastewater treatment due to their many advantages, such as high efficiency and low operational costs. In these processes, the target contaminant is removed from the solution by a solid sorbent. The more selective and low-cost the biosorbent, the more attractive the process becomes. Agricultural wastes are especially attractive for sorption: they are widely available, have no commercial value and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. In order to investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Studies of kinetics and equilibrium were also performed using the best-known models to evaluate the viability of the process. In line with the concept of circular economy, this study adds value to this by-product as well as contributing to liquid waste management. Experimental: The solutions were prepared with a Hg(II) initial concentration of 50 µg L-1 in natural waters, at 22 ± 1 ºC, pH 6, magnetically stirred at 650 rpm, with a biosorbent mass of 0.5 g L-1. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L-1. Banana peels were previously freeze-dried, milled and sieved, and the particles < 1 mm were used. Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied. In these cases, the increase in the complexity of the water type promoted higher mercury removal. In salt waters, the biosorbent showed removals of 96%, 95% and 98% for 3, 15 and 30 g L-1 of NaCl, respectively. The residual concentration of Hg(II) in solution achieved the level of the drinking water regulation (1 µg L-1). For the real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by the competition between the Hg(II) ions and the other elements present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are better described by the Freundlich isotherm (R² = 0.991). The Elovich equation provided the best fit to the kinetic points. Conclusions: The results exhibited the great ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes.
Keywords: banana peels, mercury removal, sorption, water treatment
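As an illustration of the equilibrium modelling mentioned above, the sketch below fits the Freundlich isotherm, qe = KF·Ce^(1/n), which the study reports as the best-performing equilibrium model (R² = 0.991). The Ce/qe pairs are invented placeholders, not the measured Hg(II) data.

```python
# Minimal Freundlich isotherm fit; data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

ce_ug_l = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])      # equilibrium Hg(II) conc. (ug/L)
qe_ug_g = np.array([12.0, 22.0, 35.0, 52.0, 75.0, 105.0])  # uptake (ug Hg / g banana peel)

(kf, n), _ = curve_fit(freundlich, ce_ug_l, qe_ug_g, p0=[10.0, 2.0])
print(f"KF = {kf:.1f} (ug/g)(L/ug)^(1/n), n = {n:.2f}")
```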
Procedia PDF Downloads 155
936 Characterization of a Three-Electrodes Bioelectrochemical System from Mangrove Water and Sediments for the Reduction of Chlordecone in Martinique
Authors: Malory Jonata
Abstract:
Chlordecone (CLD) is an organochlorine pesticide used between 1971 and 1993 in both Guadeloupe and Martinique for the control of the banana black weevil. The bishomocubane structure which characterizes this chemical compound leads to high stability in organic matter and high persistence in the environment. Recently, researchers found that CLD can be degraded by isolated bacterial consortiums and, particularly, by bacteria such as Citrobacter sp 86 and Delsulfovibrio sp 86. To date, six transformation product families of CLD are known. Moreover, the latest discovery showed that CLD was disappearing faster than first predicted in highly contaminated soil in Guadeloupe. However, the toxicity of the transformation products is still unknown, and knowledge has to be deepened on the degradation pathways and chemical characteristics of chlordecone and its transformation products. Microbial fuel cells (MFC) are electrochemical systems that can convert organic matter into electricity thanks to electroactive bacteria. These bacteria can exchange electrons through their membranes with solid surfaces or molecules. MFC have proven their efficiency as bioremediation systems in water and soils. They are already used for the bioremediation of several chlorinated compounds such as perchlorate, trichlorophenol or hexachlorobenzene. In this study, a three-electrode system, inspired by MFC, is used to try to degrade chlordecone using bacteria from a mangrove swamp in Martinique. As we know, some mangrove bacteria are electroactive. Furthermore, the CLD rate seems to decline in mangrove swamp sediments. This study aims to show that electroactive bacteria from a mangrove swamp in Martinique can degrade CLD thanks to a three-electrode bioelectrochemical system. To achieve this goal, the three-electrode assembly has been connected to a potentiostat. The substrate used is mangrove water and sediments sampled in the mangrove swamp of La Trinité, a coastal city in Martinique, where CLD contamination has already been studied. Electroactive biofilms are formed by imposing a potential relative to a Saturated Calomel Electrode using chronoamperometry. Moreover, their behavior has been studied using cyclic voltammetry. Biofilms have been studied under different imposed potentials, several substrate conditions, and with or without CLD. In order to quantify the evolution of CLD rates in the substrate system, gas chromatography coupled with mass spectrometry (GC-MS) was performed on pre-treated samples of water and sediments after short-, medium- and long-term contact with the electroactive biofilms. Results showed that between -0.8 V and -0.2 V, the three-electrode system was able to reduce the chemical in the substrate solution. The first GC-MS analysis results of samples spiked with CLD seem to reveal decreased CLD concentration over time. In conclusion, the designed bioelectrochemical system can provide the necessary conditions for chlordecone degradation. However, it is necessary to improve the three-electrode control settings in order to increase degradation rates. The biological pathways are yet to be elucidated by biological analysis of the electroactive biofilms formed in this system. Moreover, the electrochemical study of the mangrove substrate gives new information on the potential use of this substrate for bioremediation, but further studies are needed for a better understanding of the electrochemical potential of this environment.
Keywords: bioelectrochemistry, bioremediation, chlordecone, mangrove swamp
Procedia PDF Downloads 83
935 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries
Authors: M. J. Eichner, Y. S. Sarhan
Abstract:
The overall purpose of the study was to evaluate 'Vertical Villages' as a new sustainable building typology that significantly reduces the negative impacts of rapid urbanization processes in third-world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure do not accommodate rapid population growth, exposing people to noise- and emission-polluted living environments with low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt's capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecologic urbanization models, with severe effects not only for inhabitants but also for the local environment and the global climate. The authors show that 'Vertical Village' buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. By inserting them within high-density urban fabrics, the ecological and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all the needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo's «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen according to benchmarks for sustainable residential buildings, identifying major problems and deficits. In 3 case study design projects, the sustainable transformation potential of 'Vertical Village' buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility. The main result of the paper is that the disadvantages of density and overpopulation in developing countries can be converted with 'Vertical Village' buildings into advantages, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to the sustainable principles of low-rise housing, can serve as an alternative for converting «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments.
Keywords: sustainable, architecture, urbanization, urban transformation, vertical village
Procedia PDF Downloads 124
934 Guests' Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach
Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So
Abstract:
Smart hotels can be defined as hotels with an intelligent system that, through digitalization and networking, integrates hotel management and service information. In addition, smart hotels include high-end designs that integrate information and communication technology with hotel management, fulfilling guests' needs and improving the quality, efficiency and satisfaction of hotel management. The purpose of this study is to identify appropriate factors that may influence guests' satisfaction and intention to revisit smart hotels, based on the service quality measurement of the lodging quality index and extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory could serve as the theoretical background to examine guests' acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT modifies the definitions of seven constructs for consideration: the four previously cited constructs of the UTAUT model together with three additional constructs, namely hedonic motivation, price value and habit. Thus, the seven constructs from the extended UTAUT theory could be adopted to understand guests' intention to revisit smart hotels. The service quality model will also be adopted and integrated into the framework to understand guests' intention to revisit smart hotels. Few studies have examined the effect of service quality on guests' satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) will be adopted to measure service quality in smart hotels. An integrated UTAUT theory and service quality model is used because technological applications and services require more than one model to understand the complicated situation of customers' acceptance of new technology. Moreover, an integrated model could provide more perspectives and insights to explain the relationships of constructs that could not be obtained from only one model. For this research, ten in-depth interviews are planned. In order to confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels from the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of theoretical contribution, it is predicted that the integrated model from UTAUT theory and service quality will provide new insights to understand the factors that influence guests' satisfaction and intention to revisit smart hotels. After this study identifies influential factors, smart hotel practitioners could understand which factors may significantly influence smart hotel guests' satisfaction and intention to revisit. In addition, smart hotel practitioners could also provide an outstanding guest experience by improving their service quality based on the identified dimensions from the service quality measurement. Thus, it will be beneficial to the sustainability of the smart hotel business.
Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels
Procedia PDF Downloads 208
933 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones
Authors: Dalia Hanna, Alexander Ferworn
Abstract:
This research provides detailed proposed training modules for public safety teams and, specifically, SAR teams responsible for search and rescue operations related to finding lost persons with dementia. Finding a lost person alive is the goal of this training. Time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small contribution to SAR operations could contribute to saving a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their time, save lives, and reduce costs by providing practical training that is based on real-life scenarios. The content of the proposed training is based on the research work done by the researcher in this area. This research has demonstrated that, based on utilizing drones, an algorithmic approach could support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the context of the operation. Therefore, training in this area plays a significant role in preparing the crisis manager to manage the emotional aspects that lead to decision-making in these critical situations. Since it is crucial to gain high-level strategic choices and the ability to apply crisis management procedures, simulation exercises become central in training crisis managers to gain the skills needed to respond critically to these events. The training will enhance the responders' ability to make decisions and anticipate the possible consequences of their actions through flexible and revolutionary reasoning in responding to the crisis efficiently and quickly. As adult learners, search and rescue teams will approach training and learning by taking responsibility for the learning process, appreciating flexible learning, and acting as contributors to the teaching and learning happening during that training. These are all characteristics of adult learning theories. The learner self-reflects, gathers information, collaborates with others and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners to remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must design reflective activities to improve the student's intrapersonal awareness.
Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design
Procedia PDF Downloads 108
932 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta method SEs; and marginal standardisation with permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) will cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
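The sketch below illustrates one of the candidate estimators listed above, the modified-Poisson approach (a Poisson GLM with robust sandwich standard errors) applied to one simulated trial dataset. The data-generating values are arbitrary illustrations, not the scenarios of the actual simulation study.

```python
# One simulated trial and a modified-Poisson relative-risk estimate with robust SEs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
treat = rng.integers(0, 2, n)                      # randomized treatment indicator
covar = rng.normal(size=n)                         # one prognostic covariate
log_rr_true = 0.4
p = np.clip(0.2 * np.exp(log_rr_true * treat + 0.3 * covar), 0, 1)
y = rng.binomial(1, p)                             # binary outcome

X = sm.add_constant(np.column_stack([treat, covar]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")  # robust sandwich SEs
print(f"estimated RR = {np.exp(fit.params[1]):.2f}, true RR = {np.exp(log_rr_true):.2f}")
```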
Procedia PDF Downloads 114
931 Experimental and Theoretical Methods to Increase Core Damping for Sandwich Cantilever Beam
Authors: Iyd Eqqab Maree, Moouyad Ibrahim Abbood
Abstract:
The purpose of this study is to predict the damping effect for a steel cantilever beam by using two methods of passive viscoelastic constrained layer damping. The first method is a MATLAB program; this method depends on the Ross, Kerwin and Unger (RKU) model for passive viscoelastic damping. The second method is experimental (a frequency domain method), in which the half-power bandwidth method is used to determine the system loss factors for the damped steel cantilever beam. The RKU method has been applied to a cantilever beam because the beam is a major part of a structure, and this prediction may further be utilized for different kinds of structural applications according to design requirements in many industries. In this method of damping, a simple cantilever beam is treated by building a sandwich structure to make the beam damp, and this is usually done by using a viscoelastic material as a core to ensure the damping effect. The use of viscoelastic layers constrained between elastic layers is known to be effective for damping of flexural vibrations of structures over a wide range of frequencies. The energy dissipated in these arrangements is due to shear deformation in the viscoelastic layers, which occurs due to flexural vibration of the structures. The theory of dynamic stability of elastic systems deals with the study of vibrations induced by pulsating loads that are parametric with respect to certain forms of deformation. There is very good agreement of the experimental results with the theoretical findings. The main aim of this thesis is to find the transition region for the damped steel cantilever beams (4 mm and 8 mm thickness) from experimental measurements and theoretical prediction (MATLAB R2011a). It was proved experimentally and theoretically that the transition region for the two specimens occurs at a modal frequency between mode 1 and mode 2, which gives the best damping, maximum loss factor and maximum damping ratio; thus this type of viscoelastic material core (3M468) is very appropriate for use in the automotive industry and in any mechanical application whose modal frequency falls between mode 1 and mode 2.
Keywords: 3M-468 material core, loss factor and frequency, domain method, bioinformatics, biomedicine, MATLAB
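The half-power bandwidth estimate used in the experimental method above follows the standard relation eta ≈ (f2 - f1) / fn, where f1 and f2 bracket the resonance peak at 1/sqrt(2) of its amplitude. The sketch below applies it; the frequency values are placeholders, not the measured beam response.

```python
# Half-power (-3 dB) bandwidth estimate of the loss factor; numbers are hypothetical.
fn = 48.0             # resonance (modal) frequency, Hz
f1, f2 = 46.9, 49.3   # half-power frequencies on either side of the peak

loss_factor = (f2 - f1) / fn
damping_ratio = loss_factor / 2.0   # for light damping, zeta is approximately eta/2
print(f"loss factor = {loss_factor:.3f}, damping ratio = {damping_ratio:.3f}")
```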
Procedia PDF Downloads 271
930 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology starts with a first phase of review of the technical-scientific literature concerning the indicators currently used for the performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is to consider an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows considering the interconnection between the two EMS processes, the pre-hospital and hospital ones, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies between decisions, which in current EMS management models tend to be neglected or underestimated. In particular, the integration of the two processes enables the evaluation of the advantage of making ED selection decisions with visibility of EDs' saturation status, therefore considering the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the design of the dashboard was carried out: the high number of analyzed KPIs was reduced by eliminating first the ones not in line with the aim of the study and then the ones supporting a similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' awareness of EDs' accessibility in real time. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, only the initial phases, related to the interconnection between the ambulance service and patient care, are covered by traditional KPIs, compared to the subsequent phases taking place in the hospital ED. This could be taken into consideration for the potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs to measure the basic performance of the EMS system at an aggregate level, and further levels with KPIs that can bring additional and more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
Procedia PDF Downloads 113
929 Quantitative Analysis of Three Sustainability Pillars for Water Tradeoff Projects in Amazon
Authors: Taha Anjamrooz, Sareh Rajabi, Hasan Mahmmud, Ghassan Abulebdeh
Abstract:
Water availability, as well as water demand, is not uniformly distributed in time and space. Numerous extra-large water diversion projects have been launched in the Amazon to alleviate water scarcities. This research utilizes statistical analysis to examine the temporal and spatial features of 40 extra-large water diversion projects in the Amazon. Using a network analysis method, the correlation between seven major basins is measured, while an impact analysis method is employed to explore the associated economic, environmental, and social impacts. The study finds that the development of water diversion in the Amazon has gone through four stages, from a preliminary or initial period to a phase of rapid development. It is observed that the length of water diversion channels and the quantity of water transferred have increased significantly in the past five decades. As of 2015, more than 75 billion m³ of water was transferred through 12,000 km of channels. These projects extend over half of the Amazon area. River Basin E is currently the most significant source of transferred water. Through inter-basin water diversions, the Amazon gains the opportunity to enhance Gross Domestic Product (GDP) by 5%. Nevertheless, the construction costs exceed 70 billion US dollars, which is higher than in any other country. The average cost of transferred water per unit has increased with time and scale but decreases from western to eastern Amazon. Additionally, annual total energy consumption for pumping exceeded 40 billion kilowatt-hours, while the associated greenhouse gas emissions are assessed to be 35 million tons. It is noteworthy that ecological problems initiated by water diversion influence River Basin B and River Basin D. Due to water diversion, more than 350 thousand individuals have been relocated away from their homes. In order to enhance water diversion sustainability, four categories of innovative measures are provided for decision-makers: development of water tradeoff project strategies, improvement of integrated water resource management, formation of water-saving incentives and pricing approaches, and application of ex-post assessment.
Keywords: sustainability, water trade-off projects, environment, Amazon
Procedia PDF Downloads 129
928 Unveiling the Potential of MoSe₂ for Toxic Gas Sensing: Insights from Density Functional Theory and Non-equilibrium Green’s Function Calculations
Authors: Si-Jie Ji, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
With the rapid development of industrialization and urbanization, air pollution poses significant global environmental challenges, contributing to acid rain, global warming, and adverse health effects. Therefore, it is necessary to monitor the concentration of toxic gases in the atmospheric environment in real-time and to deploy cost-effective gas sensors capable of detecting their emissions. In this study, we systematically investigated the sensing capabilities of two-dimensional MoSe₂ for seven key environmental gases (NO, NO₂, CO, CO₂, SO₂, SO₃, and O₂) using density functional theory (DFT) and non-equilibrium Green’s function (NEGF) calculations. We also investigated the impact of H₂O as an interfering gas. Our results indicate that the MoSe₂ monolayer is thermodynamically stable and exhibits strong gas-sensing capabilities. The calculated adsorption energies indicate that these gases can stably adsorb on MoSe₂, with SO₃ exhibiting the strongest adsorption energy (-0.63 eV). Electronic structure analysis, including projected density of states (PDOS) and Bader charge analysis, demonstrates significant changes in the electronic properties of MoSe₂ upon gas adsorption, affecting its conductivity and sensing performance. We find that oxygen (O₂) adsorption notably influences the deformation of MoSe₂. To comprehensively understand the potential of MoSe₂ as a gas sensor, we used the NEGF method to assess the electronic transport properties of MoSe₂ under gas adsorption, evaluating current-voltage (I-V) and resistance-voltage (R-V) characteristics as well as transmission spectra to determine sensitivity, selectivity, and recovery time relative to pristine MoSe₂. Analyzed at a bias voltage of 1.7 V, these metrics show excellent performance of MoSe₂ in detecting SO₃ among the studied gases. The pronounced changes in electronic transport behavior induced by SO₃ adsorption confirm MoSe₂’s strong potential as a high-performance gas-sensing material. Overall, this theoretical study provides new insights into the development of high-performance gas sensors, demonstrating the potential of MoSe₂ as a gas-sensing material, particularly for gases like SO₃. Keywords: density functional theory, gas sensing, MoSe₂, non-equilibrium Green’s function, SO₃
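The two headline quantities of such a workflow, the adsorption energy and the resistance-based sensitivity at a fixed bias, reduce to simple arithmetic once the DFT total energies and the NEGF resistances are available. The sketch below encodes the standard definitions; apart from the quoted -0.63 eV for SO₃, all numbers are placeholders, not values from the paper.

```python
# Back-of-envelope sketch of the two quantities the DFT/NEGF workflow reports:
# the adsorption energy of a gas on the monolayer and the resistance-based
# sensitivity at a fixed bias. All numbers except the quoted SO3 adsorption
# energy (-0.63 eV) are placeholders.

def adsorption_energy(e_slab_gas: float, e_slab: float, e_gas: float) -> float:
    """E_ads = E(MoSe2 + gas) - E(MoSe2) - E(gas); more negative = stronger binding."""
    return e_slab_gas - e_slab - e_gas

def sensitivity(r_gas: float, r_pristine: float) -> float:
    """Relative resistance change (%) at a given bias, e.g. 1.7 V."""
    return abs(r_gas - r_pristine) / r_pristine * 100.0

# Hypothetical total energies (eV) chosen so that E_ads reproduces -0.63 eV for SO3.
print(adsorption_energy(e_slab_gas=-612.45, e_slab=-585.10, e_gas=-26.72))   # -> -0.63
# Hypothetical resistances (kOhm) extracted from I-V curves at a 1.7 V bias.
print(f"SO3 sensitivity: {sensitivity(r_gas=148.0, r_pristine=95.0):.1f} %")
```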
Procedia PDF Downloads 21
927 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology in which repeatable business processes are performed by computer programs, robots that simulate the work of a human being. This approach assumes replacing an existing employee with dedicated software (software robots) to support activities that are primarily repetitive, uncomplicated, and characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of Finance, Accounting and Human Resources Management. By utilizing this technology, the effectiveness of operations increases while reducing workload, minimizing possible errors in the process and, as a result, bringing a measurable decrease in the cost of providing services. Regardless of how the use of modern information technology is assessed, there are also some doubts as to whether human activities should be replaced when implementing automation in business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided in this research, during which the respondents’ view of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes was verified. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e. Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory and their customers. Answers were provided by representatives holding the following positions in their organizations: Members of the Board, Directors, Managers and Experts/Specialists. The structure of the survey allowed the respondents to supplement the survey with additional comments and observations. The results formed the basis for the creation of a business case calculating tangible benefits associated with the implementation of automation in selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was not confirmed. Based on the survey results, the authors performed a simulation of a business case for the implementation of RPA in selected Finance and Accounting processes. The calculated payback period differed widely, ranging from 2 months for the Accounts Payable process, with 75% savings, to the extreme case of the Taxes process, where implementation and maintenance costs exceed the savings resulting from the use of the robot. Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
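The payback-period figure that anchors the business case boils down to dividing the one-off implementation cost by the net monthly saving. The sketch below captures that logic with invented figures; it is not the authors' calculation model, and the "no payback" branch mirrors the reported Taxes case, where running costs exceed savings.

```python
# Simple sketch of payback-period logic: months until cumulative net savings
# cover the implementation cost. All figures are illustrative assumptions.
import math

def payback_months(implementation_cost: float,
                   monthly_savings: float,
                   monthly_maintenance: float):
    """Return months to break even, or None if the robot never pays for itself."""
    net_monthly = monthly_savings - monthly_maintenance
    if net_monthly <= 0:
        return None                       # e.g. a Taxes-like process in the study
    return math.ceil(implementation_cost / net_monthly)

# Accounts-Payable-like case: high savings, quick payback (the study reports ~2 months).
print(payback_months(implementation_cost=30_000, monthly_savings=18_000, monthly_maintenance=2_500))
# Taxes-like case: maintenance exceeds savings, so there is no payback.
print(payback_months(implementation_cost=30_000, monthly_savings=1_500, monthly_maintenance=2_000))
```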
Procedia PDF Downloads 137
926 Structural Analysis of a Composite Wind Turbine Blade
Abstract:
The design of an optimised horizontal axis 5-meter-long wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project aimed at fulfilling the requirements of high torque efficiency from wind production and at optimising the structural components in the lightest and strongest way possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade by using CATIA software to obtain the two surfaces, namely the suction and the pressure side of the blade, in which there is a hat-shaped fibre reinforced plastic spar beam, the so-called chassis, starting at 0.5 m from the root of the blade, extending up to 4 m and filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, which has twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabrics, a polymeric foam core material and a steel-balsa wood combination for the root connection parts. The glass fabrics, METYX L600E10C-0 with unidirectional continuous fibres and METYX XL800E10F with a tri-axial architecture with fibres in the 0/+45/-45 degree orientations in a ratio of 2:1:1, are applied using hand wet lay-up lamination with epoxy resin. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software with various meshes created on each structural part, considering shell elements for all surface geometries, and lumped masses are added to simulate extra adhesive locations. For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are applied. Also taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis targets the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal mode analysis performed in order to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The results of these analyses are then used as a benchmark prior to modal testing, where the experiments on the produced wind turbine rotor blade have confirmed the analytical calculations. Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing
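Before trusting the finite element output, a hand calculation for an idealised uniform cantilever is sometimes used as an order-of-magnitude check on the tip deflection under self-weight and the first bending frequency. The sketch below does exactly that; the flexural rigidity and mass per unit length are crude placeholder assumptions, since the actual blade is tapered, hollow and composite.

```python
# Hand-calculation sanity check for a uniform cantilever of the blade's length:
# tip deflection under self-weight and first bending natural frequency.
# EI and mass per unit length are placeholder assumptions, not blade data.
import math

L = 5.0             # blade length, m
EI = 2.0e5          # assumed flexural rigidity, N*m^2
mass_per_len = 8.0  # assumed mass per unit length, kg/m
g = 9.81

q = mass_per_len * g                  # distributed self-weight load, N/m
tip_deflection = q * L**4 / (8 * EI)  # uniform cantilever under uniform load

lam1 = 1.875104                       # first eigenvalue of a cantilever beam
f1 = (lam1**2 / (2 * math.pi)) * math.sqrt(EI / (mass_per_len * L**4))

print(f"Tip deflection under self-weight: {tip_deflection * 1000:.1f} mm")
print(f"First bending natural frequency:  {f1:.1f} Hz")
```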
Procedia PDF Downloads 426
925 Effects of Environmental and Genetic Factors on Growth Performance, Fertility Traits and Milk Yield/Composition in Saanen Goats
Authors: Deniz Dincel, Sena Ardicli, Hale Samli, Mustafa Ogan, Faruk Balci
Abstract:
The aim of the study was to determine the effects of some environmental and genetic factors on growth, fertility traits, and milk yield and composition in Saanen goats. For this purpose, a total of 173 Saanen goats and kids were investigated for growth, fertility and milk traits in the Marmara Region of Turkey. Fertility parameters (n=70) were evaluated over two years. Milk samples were collected during lactation, and the milk yield/components (n=59) of each goat were calculated. For the CSN3 and AGPAT6 genes, the genotypes were determined by PCR-RFLP. Saanen kids (n=86-112) were measured from birth to 6 months of life. Average live weights at birth, weaning, and on the 60ᵗʰ, 90ᵗʰ, 120ᵗʰ and 180ᵗʰ days were calculated. The effects of maternal age on pregnancy rate (p < 0.05), birth rate (p < 0.05), infertility rate (p < 0.05), single born kidding (p < 0.001), twinning rate (p < 0.05), triplet rate (p < 0.05), survival rate of kids until weaning (p < 0.05), number of kids per parturition (p < 0.01) and number of kids per mating (p < 0.01) were found significant. The impacts of year on birth rate (p < 0.05), abortion rate (p < 0.001), single born kidding (p < 0.01), survival rate of kids until weaning (p < 0.01) and number of kids per mating (p < 0.01) were found significant for fertility traits. The impacts of lactation length on all milk yield parameters (lactation milk, protein, fat, total solid, solid not fat, casein and lactose yield) (p < 0.001) were found significant. The effects of age on all milk yield parameters (lactation milk, protein, fat, total solid, solid not fat, casein and lactose yield) (p < 0.001), protein rate (p < 0.05), fat rate (p < 0.05), total solid rate (p < 0.01), solid not fat rate (p < 0.05), casein rate (p < 0.05) and lactation length (p < 0.01) were found significant too. However, the effect of the AGPAT6 gene on milk yield and composition was not found significant in Saanen goats. The herd was found monomorphic (FF) for the CSN3 gene. The effects of sex on live weights up to the 90ᵗʰ day of life (average weights at birth, weaning and the 60ᵗʰ day) were found statistically significant (p < 0.001). Maternal age affected only birth weight (p < 0.001). The effects of month of birth on all of the investigated days [birth, 120ᵗʰ and 180ᵗʰ days (p < 0.05); weaning, 60ᵗʰ and 90ᵗʰ days (p < 0.001)] were found significant. Birth type had a significant effect on average live weights at birth (p < 0.001), weaning (p < 0.01), and on the 60ᵗʰ (p < 0.01) and 90ᵗʰ (p < 0.01) days. As a result, screening other regions of the CSN3 and AGPAT6 genes and also investigating their phenotypic associations should be useful to clarify the efficiency of the target genes. Environmental factors such as maternal age, year, sex and birth type had significant effects on some growth, fertility and milk traits in Saanen goats, so these factors could be considered as selection criteria in dairy goat breeding. Keywords: fertility, growth, milk yield, Saanen goats
Procedia PDF Downloads 166
924 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement of fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a far less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, it needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to produce high-quality reconstructions. Keywords: micro-CT, neural networks, reconstruction, speckle-based x-ray phase contrast
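To make the reconstruction idea concrete, the sketch below trains a small residual convolutional network to map degraded (few-projection) slices to clean ones, with random tensors standing in for the McXtrace-simulated training pairs. The architecture, tensor shapes and training loop are illustrative assumptions, not the network actually used in the project.

```python
# Minimal sketch of a learned reconstruction step: a small residual CNN that
# predicts a correction to a degraded slice. Random tensors stand in for the
# simulated (degraded, ground-truth) training pairs; all choices are assumptions.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)   # residual learning: predict the correction to the input

model = DenoiseCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(4, 1, 128, 128)   # stand-in for few-projection reconstructions
clean = torch.randn(4, 1, 128, 128)   # stand-in for ground-truth phantom slices

for step in range(5):                 # a few dummy optimisation steps
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```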
Procedia PDF Downloads 257
923 Studies on Biojetfuel Obtained from Vegetable Oil: Process Characteristics, Engine Performance and Their Comparison with Mineral Jetfuel
Authors: F. Murilo T. Luna, Vanessa F. Oliveira, Alysson Rocha, Expedito J. S. Parente, Andre V. Bueno, Matheus C. M. Farias, Celio L. Cavalcante Jr.
Abstract:
Aviation jetfuel used in aircraft gas-turbine engines is customarily obtained from the kerosene distillation fraction of petroleum (150-275°C). Mineral jetfuel consists of a hydrocarbon mixture containing paraffins, naphthenes and aromatics, with low olefins content. In order to ensure safety, several stringent requirements must be met by jetfuels, such as high energy density, low risk of explosion, physicochemical stability and low pour point. In this context, aviation fuels obtained from biofeedstocks (coined 'biojetfuels') must be usable as 'drop-in' fuels, since adaptations to aircraft engines are not desirable, in order to avoid problems with their operational reliability. Thus, potential aviation biofuels must present the same composition and physicochemical properties as conventional jetfuel. Among the potential feedstocks for aviation biofuel, babaçu oil, extracted from a palm tree extensively found in some regions of Brazil, contains substantial quantities of short-chain saturated fatty acids and may be an interesting choice for biojetfuel production. In this study, biojetfuel was synthesized through homogeneous transesterification of babaçu oil with methanol, and its properties were compared with petroleum-based jetfuel through measurements of oxidative stability, physicochemical properties and low-temperature properties. The transesterification reactions were carried out using methanol and, after decantation/washing procedures, the methyl esters were purified by molecular distillation under high vacuum at different temperatures. The results indicate significant improvement in the oxidative stability and pour point of the products when compared to the fresh oil. After optimization of the operational conditions, potential biojetfuel samples were obtained, consisting mainly of C8 esters and showing a low pour point and high oxidative stability. Jet engine tests are being conducted in an automated test bed equipped with pollutant emissions analysers to study the operational performance of the biojetfuel that was obtained and to compare it with a commercial mineral jetfuel. Keywords: biojetfuel, babaçu oil, oxidative stability, engine tests
Procedia PDF Downloads 259
922 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise, given the similarity between those methods' pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes that were matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events. This makes the estimator sensitive to data heterogeneity, where heterogeneity here means different capture rates between individuals. In those examples, homogeneity of the data seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering the balance between processing time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations. Keywords: algorithms, genetics, matching, population
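For readers unfamiliar with the estimation step, the sketch below illustrates the underlying capture-recapture logic with a simple Chapman-corrected Lincoln-Petersen estimate over two sampling sessions of made-up genotype identifiers; it is only a didactic stand-in and does not reproduce Capwire or BayesN.

```python
# Illustrative capture-recapture sketch using made-up genotype identifiers.
# This is NOT Capwire or BayesN; it is the simplest two-session estimator,
# shown only to make the recapture idea concrete.
session_1 = {"G01", "G02", "G03", "G04", "G05", "G06"}
session_2 = {"G04", "G05", "G07", "G08", "G09"}

n1, n2 = len(session_1), len(session_2)
m = len(session_1 & session_2)                 # individuals recaptured in session 2

# Chapman's bias-corrected form of the Lincoln-Petersen estimator.
n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"captures: {n1} and {n2}, recaptures: {m}, estimated population: {n_hat:.1f}")
```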
Procedia PDF Downloads 143
921 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intra-uterine death is a common problem in obstetrical practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol is a synthetic prostaglandin E1 analogue, inexpensive, and considered suitable thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: faster induction of labor, smaller doses, and fewer (dose-dependent) side effects. The regimen of 50 μg every 4 hours has mostly been used, with a high percentage of success and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, and to compare it with use that does not follow a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for induction of labor in the third trimester for pregnancy termination. A total of 37 patients were randomly assigned to labor induction: 26 according to the protocol (10 oral vs. 16 vaginal), while a control group (11) not subject to the protocol was created. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the participants of the control group were treated individually by the members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, weight of the women, gestational age, Bishop's score, the size of the uterus and the weight of the fetus between the four groups in the study. The Fisher exact test was used to compare the length of stay and causes in the four groups. The Mann-Whitney test was used to compare the time to expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to maternal age and weight, parity, abortion indication, Bishop's score, fetal weight and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol and not according to the protocol) was 10.43 h, 21.10 h, 15.77 h and 21.57 h, respectively. There was no difference in maternal complications between the groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not following previously approved protocols, where complications are greater and unjustified. Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
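The statistical comparisons named in the Methods map directly onto standard SciPy calls. The sketch below runs them on invented placeholder induction-to-birth times and an invented 2x2 delivery-within-24-hours table; none of these numbers come from the study.

```python
# Sketch of the tests described in the Methods, run on invented placeholder data
# (induction-to-birth times in hours); the actual patient data are not reproduced.
from scipy import stats

vaginal  = [8.5, 9.7, 10.4, 11.2, 12.0]
oral     = [18.9, 20.2, 21.1, 22.4, 23.0]
protocol = [14.1, 15.0, 15.8, 16.6]
control  = [19.5, 21.0, 22.1, 23.8]

# Kruskal-Wallis across the four groups, as used for the baseline characteristics.
h, p_kw = stats.kruskal(vaginal, oral, protocol, control)

# Mann-Whitney for induction-to-birth time, oral vs. vaginal route.
u, p_mw = stats.mannwhitneyu(vaginal, oral, alternative="two-sided")

# Fisher's exact test on a 2x2 table, e.g. delivery within 24 h (yes/no) by route.
odds, p_fisher = stats.fisher_exact([[5, 0], [3, 2]])

print(f"Kruskal-Wallis p = {p_kw:.3f}, Mann-Whitney p = {p_mw:.3f}, Fisher p = {p_fisher:.3f}")
```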
Procedia PDF Downloads 185
920 The Knowledge and Experiences of Pregnant Women Regarding Physical Activity during Pregnancy
Authors: Katarzyna Kwiatkowska, Izabela Walasik, Katarzyna Kosińska-Kaczyńska, Olga Płaza, Kinga Żebrowska
Abstract:
Introduction: Adequate physical activity of a pregnant woman has been proven to decrease the risk of pregnancy complications. The knowledge of women regarding physical exercise in pregnancy is a part of conscious motherhood, while a lack of it may lead to not taking up any form of physical activity during pregnancy. Aim: The aim of the study was to assess the knowledge and experience of women regarding physical activity during their latest pregnancy. Material and methodology: An anonymous questionnaire, consisting of 57 questions, was completed electronically in 2018 by women who had given birth at least once. The respondents were qualified as 'physically active during pregnancy' if they performed physical exercises such as regular walks, marching, jogging, working out at a gym, swimming, yoga, pilates, fitness, exercise-ball workouts or home gymnastics. Results: The study group consisted of 9345 women. 52% of them performed exercises during pregnancy. The main reasons for the lack of physical activity were: lack of interest in physical activity (45%), lack of energy (40%), lack of knowledge regarding proper exercise during pregnancy (34%), lack of time (27%) and medical contraindications (25%). Non-active respondents suffered from gestational hypertension (6.7% vs 9.2%; p < 0.01) and gave birth prematurely (11% vs 15%; p < 0.01) to newborns with a lower birth weight ( < 2500g vs > 2500g; p < 0.001) significantly more often. Physically active women reported suffering from pregnancy-related ailments such as fatigue, back pain or constipation significantly less often. 22% of all respondents were unable to identify reliable sources of information regarding exercise during pregnancy. A majority of the exercising women used the Internet to obtain information on physical activity during pregnancy (69.1%). 4% of women thought that exercising during pregnancy is forbidden, while 20% thought it is not allowed in the 3rd trimester. Physically active women had a vaginal delivery more often (61% vs 55%; p < 0.05). Episiotomy was performed most often on non-active primiparous respondents (77.5% vs 71% in active primiparous women, p < 0.001). 13% of women felt discriminated against due to their physical activity during pregnancy. 22% of respondents’ physical activity was not accepted by their environment. 39.1% of the women were told by others to stop physical exercise because it was bad for the baby’s health. Conclusion: The knowledge of Polish women regarding proper physical activity during pregnancy is insufficient, which may contribute to a lack of willingness to initiate such activity among pregnant women. Physical activity of a pregnant woman may have an impact on the course of pregnancy and birth. Keywords: childbirth, discrimination, physical activity, pregnancy
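A group comparison such as the vaginal-delivery rates above can be checked with a chi-squared test on a 2x2 table. In the sketch below, group sizes are approximated from the stated 52% activity rate among 9345 respondents, so the counts are rough reconstructions for illustration rather than the study's raw data.

```python
# Sketch of one reported comparison (vaginal delivery: 61% active vs 55% non-active)
# as a chi-squared test. Group sizes are approximated from the 52% activity rate
# among 9345 respondents; the counts are illustrative reconstructions only.
from scipy.stats import chi2_contingency

n_active, n_inactive = round(0.52 * 9345), round(0.48 * 9345)
vaginal_active = round(0.61 * n_active)
vaginal_inactive = round(0.55 * n_inactive)

table = [
    [vaginal_active, n_active - vaginal_active],
    [vaginal_inactive, n_inactive - vaginal_inactive],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```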
Procedia PDF Downloads 162
919 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion
Authors: Esam Jassim
Abstract:
Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly affect atmospheric pollution levels. Fine and ultrafine particulates have a tendency to escape the flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, resulting in increased corrosion, reduced heat transfer in the thermal unit, and severe impacts on human health. These adverse effects manifest particularly in the regions of the world where coal is the dominant source of energy for consumption. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the existing type of calcium on the coarse, fine and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three mixed blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight. The calcium percentage in the original coal increases going from Blend 1 to 3. A mathematical model and a new approach to describing constituent distribution are proposed. The experimental calcium distribution in ash is also modeled using the Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the original coal as mineral grains has an index of 17 in the ash, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process. The mechanism of particle size distribution and the mineral species of the ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler. Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics
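Since the maximum-likelihood estimate of a Poisson parameter is simply the sample mean, an elemental index of this kind can be read directly off per-particle element counts. The sketch below simulates such counts for the two calcium populations and recovers indices near the reported 17 and 7; the paper's exact definition of the index may differ in detail, and the simulated counts are placeholders.

```python
# Sketch of the Poisson treatment of element occurrence in ash particles: the MLE
# of the Poisson parameter is the sample mean, which plays the role of the
# elemental index lambda. Simulated per-particle calcium counts are placeholders.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
mineral_grain_counts = rng.poisson(lam=17, size=500)      # calcium present as mineral grains
organically_bound_counts = rng.poisson(lam=7, size=500)   # organically associated calcium

for label, counts in [("mineral grains", mineral_grain_counts),
                      ("organically associated", organically_bound_counts)]:
    lam_hat = counts.mean()                  # MLE of the Poisson parameter
    p_zero = poisson.pmf(0, lam_hat)         # e.g. chance a particle carries no calcium
    print(f"{label}: elemental index lambda ~ {lam_hat:.1f}, P(count=0) = {p_zero:.2e}")
```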
Procedia PDF Downloads 335