Search results for: step leaching
789 Contributions of Women to the Development of Hausa Literature as an Effective Means of Public Enlightenment: The Case of a 19th Century Female Scholar Maryam Bint Uthman Ibn Foduye
Authors: Balbasatu Ibrahim
Abstract:
In the 19th century, an Islamic revolution known as the Sokoto Jihad took place in Hausaland, leading to the establishment of the Sokoto Caliphate in 1804 under the leadership of the famous Sheikh Uthman Bn Fodiye. Before the Jihad movement in Hausaland (now Northern Nigeria), women were left in ignorance and were used and discarded like old kitchen utensils. The Sheikh and his followers did their best to actualise women’s right to education by using their female family members, who were highly educated and renowned scholars, as role models. After the Jihad, with the establishment of an Islamic state, the women scholars initiated different strategies to teach the generality of women. The most efficient strategy was the ‘Yantaru Movement, founded around 1840 by Nana Asma’u, the daughter of Sheikh Uthman Bn Fodiye, in collaboration with her sisters. The ‘Yantaru Movement is a women’s educational movement aimed at enlightening women in rural and urban areas, and it helped to mobilize women for education on a massive scale. In addition to town pupils, women from villages and from the nooks and crannies of metropolitan Sokoto participated in the movement in the search for knowledge; thus was born the ‘Yantaru system of women’s education. The ‘Yantaru operates a three-tier system at the village, town, and metropolitan-capital levels of Sokoto. Its functions include imparting knowledge to elderly women and young girls and running step-down enlightenment programmes on their return home. The most effective medium of communication in the ‘Yantaru Movement was poetry: scholars composed educational poems that were memorized by the ‘Yantaru, who recited them to fellow women on returning home. Through this system, many women were educated. This paper translates and examines one such educative poem, written by the second leader of the ‘Yantaru Movement, Maryam Bn Uthman Bn Fodiye, in 1855.
Keywords: English, Hausa language, public enlightenment, Maryam Bint Uthman Ibn Foduye
Procedia PDF Downloads 366
788 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased and non-diseased. There is an infinite number of possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for a given true positive rate and vice versa. Moreover, because the estimated curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, creating a probability distribution by fitting a specified distribution to the data, and using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimator based on a boundary-corrected kernel function and to compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes.
We performed a simulation study to compare the performances of the different methods for different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
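As a minimal illustration of the kernel idea discussed above (not the authors' boundary-corrected estimator), the sketch below smooths the distribution functions of simulated diseased and non-diseased test results with a Gaussian kernel and traces the resulting smooth ROC curve; the sample parameters and bandwidth rule are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)    # non-diseased test results (assumed)
diseased = rng.normal(1.5, 1.0, 50)   # diseased test results (assumed)

def phi(x):
    """Standard normal CDF, i.e. the integrated Gaussian kernel."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kernel_cdf(sample, thresholds, h):
    """Kernel-smoothed estimate of P(X <= t) at each threshold t."""
    return np.array([np.mean([phi((t - x) / h) for x in sample])
                     for t in thresholds])

def bandwidth(sample):
    """Silverman's rule-of-thumb bandwidth (an illustrative choice)."""
    return 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)

thresholds = np.linspace(-5.0, 7.0, 400)
fpr = 1.0 - kernel_cdf(healthy, thresholds, bandwidth(healthy))    # 1 - specificity
tpr = 1.0 - kernel_cdf(diseased, thresholds, bandwidth(diseased))  # sensitivity

# area under the smoothed ROC curve by the trapezoidal rule
# (reversing makes fpr ascend from ~0 to ~1)
fpr_r, tpr_r = fpr[::-1], tpr[::-1]
auc = float(np.sum(np.diff(fpr_r) * (tpr_r[1:] + tpr_r[:-1]) / 2.0))
```

Unlike the empirical step function, this (fpr, tpr) curve is continuous in the threshold, so each true positive rate maps to a single false positive rate.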
Procedia PDF Downloads 152
787 Evaluation and Control of Cracking for Bending Reinforced One-Way Concrete Voided Slab with Plastic Hollow Inserts
Authors: Mindaugas Zavalis
Abstract:
Analysis of experimental test data on one-way bending reinforced concrete slabs from various scientific articles revealed that voided slabs with a grid of hollow plastic inserts inside have smaller mechanical and physical parameters compared to continuous cross-section (solid) slabs. The negative influence on a reinforced concrete slab comes from the hollow plastic inserts, which form a grid of voids in the middle of the cross-sectional area of the slab. This grid of voids reduces the slab’s stiffness, which affects the slab’s serviceability parameters, such as deflection and cracking. A primary investigation of the experimental data shows that cracks occur sooner in the tensile surface of a voided slab under bending than in a solid slab. This means that the cracking bending moment of a voided slab is smaller than that of a solid slab, with the reduction varying in the range of 14–40%. The reduction in cracking resistance can be controlled by changing many factors: the shape of the plastic hollow insert, the insert height, the spacing between inserts, the use of prestressed reinforcement, the reinforcement bar diameter, the slab's effective depth, the bottom concrete cover thickness, the effective cross-section of the concrete area around the reinforcement, etc. These parameters are used to evaluate crack width and crack spacing, but the existing analytical calculation methods for the cracking evaluation of voided slabs with plastic inserts are not very exact, and the cracking evaluation results in this paper are higher than the results of the analyzed experiments.
Therefore, analytical calculations were made against experimental bending tests of voided reinforced concrete slabs with hollow plastic inserts in order to find and propose corrections for the cracking evaluation of reinforced concrete voided slabs with hollow plastic inserts.
Keywords: voided slab, cracking, hollow plastic insert, bending, one-way reinforced concrete, serviceability
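A rough sense of why voids reduce the cracking moment can be sketched from elastic section properties; the geometry and tensile strength below are illustrative assumptions, and this simplified model (circular voids centred at mid-depth, no parallel-axis term) understates the 14–40% reductions reported from the experiments, which is consistent with the paper's point that existing analytical methods need correction.

```python
import math

def cracking_moment(b, h, f_ctm, void_d=0.0, n_voids=0):
    """Elastic cracking moment M_cr = f_ctm * W for a slab strip of width b,
    depth h. Voids centred at mid-depth subtract only their own second
    moment of area, pi*d^4/64 (no parallel-axis contribution)."""
    i_solid = b * h**3 / 12.0
    i = i_solid - n_voids * math.pi * void_d**4 / 64.0
    w = i / (h / 2.0)          # section modulus to the tension face
    return f_ctm * w           # kN*m when f_ctm is in kPa and b, h in m

f_ctm = 2.9e3                  # assumed mean tensile strength, kPa
m_solid = cracking_moment(b=1.0, h=0.30, f_ctm=f_ctm)
m_voided = cracking_moment(b=1.0, h=0.30, f_ctm=f_ctm, void_d=0.18, n_voids=4)
reduction = 1.0 - m_voided / m_solid   # fraction lost to the voids
```

For this assumed geometry the purely elastic reduction is under 10%, well below the observed 14–40%, illustrating why empirical correction factors are proposed.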
Procedia PDF Downloads 68
786 Sulfate Reducing Bacteria Based Bio-Electrochemical System: Towards Sustainable Landfill Leachate and Solid Waste Treatment
Authors: K. Sushma Varma, Rajesh Singh
Abstract:
Non-engineered landfills cause serious environmental damage due to toxic emissions and the mobilization of persistent pollutants, organic and inorganic contaminants, as well as soluble metal ions. The available treatment technologies for landfill leachate and solid waste are not effective from an economic, environmental, and social standpoint. The present study assesses the potential of a bioelectrochemical system (BES) integrated with sulfate-reducing bacteria (SRB) for the sustainable treatment and decontamination of landfill wastes. For this purpose, solid waste and landfill leachate collected from different landfill sites were evaluated for long-term treatment using the integrated SRB-BES anaerobic designed bioreactors after pre-treatment. Based on periodic gas composition analysis, physicochemical characterization of the leachate and solid waste, and metal concentration determination, the present system demonstrated significant improvement in volumetric hydrogen production by suppressing methanogenesis. High removal percentages of Be, Cr, Pb, Cd, Sb, Ni, COD, and sTOC were observed. This mineralization can be attributed to the synergistic effect of ammonia-assisted pre-treatment complexation and microbial sulphide formation. Despite being amended with 0.1N ammonia, the level of NO³⁻ in the treated leachate was found to be reduced along with SO₄²⁻. This integrated SRB-BES system can be recommended as an eco-friendly solution for landfill reclamation. The BES-treated solid waste was evidently more stabilized, as shown by a five-fold increase in surface area, and potentially useful for leachate immobilization and bio-fortification of agricultural fields. The vector arrangement and magnitude showed similar treatment behavior, with differences in magnitude, for both leachate and solid waste. These findings support the efficacy of SRB-BES in treating landfill leachate and solid waste sustainably, inching a step closer to our sustainable development goals.
The approach relies on low-cost treatment and on anaerobic SRB already adapted to landfill sites. This technology may prove to be a sustainable treatment strategy upon scaling up, as its outcomes are two-pronged: landfill waste treatment and energy recovery.
Keywords: bio-electrochemical system, leachate/solid waste treatment, landfill leachate, sulfate-reducing bacteria
Procedia PDF Downloads 102
785 Training a Neural Network to Segment, Detect and Recognize Numbers
Authors: Abhisek Dash
Abstract:
This study used three neural networks, one for number segmentation, one for number detection, and one for number recognition, all coupled to one another. All networks were convolutional and were trained on the MNIST dataset. It was assumed that the images had a lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight neighborhoods of the focus were checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; to this end, its sixteen outputs were arranged in pairs as “go east”, “don’t go east”, “go south-east”, “don’t go south-east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window was resized into a 28x28 image, and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods containing dark pixels were pushed into a queue in a particular order, then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when each new partial image was presented. This process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans for the first dark pixel; from there on, it predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network, which also took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of a number's bounds was known during training, the detection network was trained to output "number not found" until the bounds were met, and "number found" thereafter.
The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognizing the digits 0 to 9. This network was activated only when the detection network voted in favor of a detected number. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was invoked only when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can be extended to other characters as well.
Keywords: convolutional neural networks, OCR, text detection, text segmentation
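The sixteen-output "go / don't go" target encoding described above can be sketched as follows; the window size, direction ordering, and darkness threshold are assumptions for illustration, not the study's exact implementation.

```python
import numpy as np

# eight compass neighbors of the focus window: N, NE, E, SE, S, SW, W, NW
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def direction_targets(image, r, c, win=7):
    """Build the 16-dim training target for a win x win focus window
    centred at (r, c): for each of the 8 neighbor windows, set the
    ("go", "don't go") pair depending on whether it contains dark pixels.
    Dark is assumed to mean pixel value < 0.5 (light background)."""
    half = win // 2
    target = np.zeros(16)
    for k, (dr, dc) in enumerate(DIRS):
        rr, cc = r + dr * win, c + dc * win        # neighbor window centre
        patch = image[max(rr - half, 0):rr + half + 1,
                      max(cc - half, 0):cc + half + 1]
        has_dark = patch.size > 0 and patch.min() < 0.5
        target[2 * k] = 1.0 if has_dark else 0.0       # "go" that way
        target[2 * k + 1] = 0.0 if has_dark else 1.0   # "don't go"
    return target

img = np.ones((28, 28))            # all light: no direction to follow
t_blank = direction_targets(img, 14, 14)
img[14, 21] = 0.0                  # a dark pixel due east of the focus
t_east = direction_targets(img, 14, 14)
```

During training, a vector like this would serve as the label for the resized focus window; a sigmoid output layer with binary cross-entropy would be one natural fit.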
Procedia PDF Downloads 162
784 Phenolic Composition and Antioxidant Activity of Sorbus L. Fruits and Leaves
Authors: Raudone Lina, Raudonis Raimondas, Gaivelyte Kristina, Pukalskas Audrius, Janulis Valdimaras, Viskelis Pranas
Abstract:
Sorbus L. species are widely distributed in the Northern Hemisphere and have been used for medicinal purposes in various traditional medicine systems and as food ingredients. Various Sorbus L. raw materials, fruits, leaves, inflorescences, and barks possess diuretic, anti-inflammatory, hypoglycemic, anti-diarrheal, and vasoprotective activities. Phenolics, to which the main pharmacological activities are attributed, are compounds of interest due to their notable antioxidant activity. The aim of this study was to determine the antioxidant profiles of fruits and leaves of selected Sorbus L. species (S. anglica, S. aria f. latifolia, S. arranensis, S. aucuparia, S. austriaca, S. caucasica, S. commixta, S. discolor, S. gracilis, S. hostii, S. semi-incisa, S. tianschanica) and to identify the phenolic compounds with a potent contribution to antioxidant activity. Twenty-two constituents were identified in the Sorbus L. species using ultra-high-performance liquid chromatography coupled to quadrupole and time-of-flight mass spectrometers (UPLC–QTOF–MS). The reducing activity of individual constituents was determined using high-performance liquid chromatography (HPLC) coupled to a post-column FRAP assay. The significantly greatest trolox equivalent values, corresponding to up to 45% of the contribution to antioxidant activity, were assessed for neochlorogenic and chlorogenic acids, which were determined to be markers of antioxidant activity in samples of leaves and fruits. The characteristic patterns of the antioxidant profiles obtained using the HPLC post-column FRAP assay depend significantly on the specific Sorbus L. species and raw material and are suitable for equivalency research on Sorbus L. fruits and leaves. Selecting species and target plant organs with the richest phenolic composition and strongly expressed antioxidant power is the first step in further research on standardized extracts.
Keywords: FRAP, antioxidant, phenolic, Sorbus L., chlorogenic acid, neochlorogenic acid
Procedia PDF Downloads 458
783 A Simulation-Based Method for Evaluation of Energy System Cooperation between Pulp and Paper Mills and a District Heating System: A Case Study
Authors: Alexander Hedlund, Anna-Karin Stengard, Olof Björkqvist
Abstract:
A step towards reducing greenhouse gases and energy consumption is collaboration within the energy system between several industries. This work is based on a case study on the integration of pulp and paper mills with a district heating system in Sundsvall, Sweden. Present research shows that it is possible to make a significant reduction in the electricity demand of the mechanical pulping process. However, the profitability of the efficiency measures could be an issue, as the excess steam recovered from the refiners decreases with the electricity consumption. A consequence is that the fuel demand for steam production will increase; if the fuel price is similar to the electricity price, this would reduce the profit of such a project. If the paper mill can be integrated with a district heating system, it is possible to upgrade excess heat from a nearby kraft pulp mill to process steam via the district heating system in order to avoid the additional fuel need. The concept is investigated using a simulation model describing both the mass and energy balance as well as the operating margin. Three scenarios were analyzed: Reference, Electricity reduction, and Energy substitution. The simulations show that the total input to the system is lowest in the Energy substitution scenario. Additionally, in this scenario the steam from the incineration boiler covers not only the steam shortage but also a part of the steam otherwise produced using the biofuel boiler; the cooling tower connected to the incineration boiler is no longer needed, and the excess heat can cover the whole district heating load throughout the year. The study shows a substantial economic advantage if all stakeholders act together as one system. However, costs and benefits are unequally shared between the actors, which means that new business models are needed in order to share the system costs and benefits.
Keywords: energy system, cooperation, simulation method, excess heat, district heating
Procedia PDF Downloads 226
782 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms, including linear and mixed-integer linear programming, to calculate the optimal operation planning of decentralized power and heat generators and storage systems. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installation types into one aggregated unit, which is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a higher or lower temporal resolution of the optimization. Both approaches are examined with regard to the resulting calculation time as well as optimality.
Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both approaches with regard to calculation duration and optimality.
Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
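The aggregation idea can be sketched without a full MILP solver: n identical units are replaced by one unit with n-fold bounds, the aggregate is dispatched once, and individual set-points are recovered in a second step. The plant parameters below are hypothetical placeholders.

```python
def aggregate_dispatch(demand, n, p_min, p_max):
    """Dispatch n identical units as one aggregated unit with bounds
    [n*p_min, n*p_max] (one decision variable instead of n), then
    disaggregate: split the aggregate evenly among the n plants,
    which trivially respects each plant's own [p_min, p_max] bounds."""
    agg = min(max(demand, n * p_min), n * p_max)   # clamp to aggregated bounds
    per_plant = agg / n                            # second-step disaggregation
    return agg, [per_plant] * n

# hypothetical example: 7 identical CHP units, 2-10 MW each, 42 MW demand
agg, plants = aggregate_dispatch(demand=42.0, n=7, p_min=2.0, p_max=10.0)
```

In a real MILP the aggregated unit would carry the same constraint and objective terms as a single plant, as the paper describes; the even split here stands in for the second optimization that recovers the individual operating modes.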
Procedia PDF Downloads 178
781 The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of College Freshman Students
Authors: Lota Largavista
Abstract:
This study, entitled ‘The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of Freshman College Writers’, examined essays written by college students. It seeks to examine the organizational structure and development features of the essays and to describe their defining characteristics, the linguistic elements at both the macrostructural and microstructural discourse levels, and the types of textual and interpersonal metadiscourse markers employed to negotiate meanings with prospective readers. The frameworks used to analyze the essays include Toulmin's (1984) model of argument structure; Olson's (2003) three-part essay structure; the notions of thematic structure of Halliday and Matthiesen (2004), as cited in Herriman (2011); Danes's (1974) thematic progression or method of development; Halliday's (2004) concept of grammatical and lexical cohesion; Hyland's (2005) metadiscourse strategies; and Chung and Nation's (2003) four-step scale for technical vocabulary. This descriptive study analyzes qualitatively and quantitatively how freshman students generally express themselves in written composition. Coding of units was done to determine which linguistic features are present in the essays. Findings revealed that the students' expository essays observe a three-part structure with all three moves: the Introduction, the Body, and the Conclusion. Stance assertion, stance support, and emerging moves/strategies were found to be employed in the essays. Students used more marked themes in the essays and preferred constant theme progression as their method of development. The analysis of salient linguistic elements reveals frequently used cohesive devices and metadiscoursal strategies. Based on the findings, an instructional learning plan is proposed.
This plan is characterized by a genre approach that focuses on expository and linguistic conventions.
Keywords: metadiscourse, organization, theme progression, structure
Procedia PDF Downloads 242
780 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process
Authors: Chenhao Zhu
Abstract:
Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the improvement in scientific rigor and accuracy that quantitative mapping could bring. Therefore, this paper makes an effort to provide a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of a mapping. The research then transforms the radiant mapping from polygons into a data-point matrix, because polygons are difficult to engage directly in design formation. Next, another script selects the main green spaces from the road network based on the criterion of radiant intensity and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green spaces and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis.
The parametric link between mapping and planning will bring about more accurate, objective, and scientific planning.
Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper
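The polygon-to-matrix step and the intensity-based selection can be sketched outside Grasshopper; the random field below stands in for the Ladybug radiant mapping, and the threshold value is an assumption playing the role of the control parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for the radiant mapping, sampled onto a regular grid of cells
radiant = rng.uniform(0.0, 1.0, size=(20, 20))

# control parameter: cells below this radiant intensity become
# green-space candidates (shaded areas suit planting better here)
threshold = 0.25
green_mask = radiant < threshold

# the "data-point matrix": (row, col) coordinates instead of polygons,
# which a script can filter, sort, and connect directly
cells = np.argwhere(green_mask)
centroid = cells.mean(axis=0)     # an anchor point for the green corridor
```

Because the mapping is now a matrix of points, adjusting the green system reduces to re-running the selection with a different threshold, which is exactly the parametric control the paper argues for.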
Procedia PDF Downloads 142
779 Humins: From Industrial By-Product to High Value Polymers
Authors: Pierluigi Tosi, Ed de Jong, Gerard van Klink, Luc Vincent, Alice Mija
Abstract:
During the last decades, renewable and low-cost resources have attracted increasing interest. Carbohydrates can be derived from lignocellulosic biomass, which is an attractive option since it represents the most abundant carbon source available in nature. Carbohydrates can be converted into a plethora of industrially relevant compounds, such as 5-hydroxymethylfurfural (HMF) and levulinic acid (LA), through the acid-catalyzed dehydration of sugars with mineral acids. Unfortunately, these acid-catalyzed conversions suffer from the unavoidable formation of highly viscous, heterogeneous, polydisperse carbon-based materials known as humins. This black-colored, low-value by-product is a complex mixture of macromolecules built by random covalent condensations of the several compounds present during the acid-catalyzed conversion. The molecular structure of humins is still under investigation but seems to be based on a network of furanic rings linked by aliphatic chains and decorated with several reactive moieties (ketones, aldehydes, hydroxyls, ...). Despite decades of research, there is currently no way to avoid humins formation. The key to enhancing the economic viability of carbohydrate conversion processes is therefore to increase the economic value of the humins by-product. Herein, new humins-based polymeric materials are presented that can be prepared starting from the raw by-product by thermal treatment, without any purification or pretreatment step. Humins foams can be produced by controlling key reaction parameters, yielding polymeric porous materials with designed porosity, density, thermal and electrical conductivity, chemical and electrical stability, carbon content, and mechanical properties. Physico-chemical properties can be enhanced by modifying the starting raw material or by adding different species during the polymerization. A comparison of the properties of different compositions will be presented, along with tested applications.
The authors gratefully acknowledge the European Community for financial support through the Marie-Curie H2020-MSCA-ITN-2015 "HUGS" Project.
Keywords: by-product, humins, polymers, valorization
Procedia PDF Downloads 143
778 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation
Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst
Abstract:
It is increasingly important to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on the protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed in particulate form into a system dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up on the particle surface. A subsequent step takes the charged particles to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions which differ in their protein content: a fiber-rich, low-protein fraction and a high-protein, low-fiber fraction. Prior to testing, materials undergo a milling process, and some samples are stored under controlled humidity conditions; in this way, the influence of both particle size and humidity content was established. We used two oilseed meals: lupine and rapeseed. In addition to a lab-scale separator, the triboelectric separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/sec to kg/hour. Triboelectrostatic separation technology opens huge potential for the exploitation of so-far underutilized alternative protein sources.
Agricultural side-streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.
Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation
Procedia PDF Downloads 190
777 Topography Effects on Wind Turbines Wake Flow
Authors: H. Daaou Nedjari, O. Guerri, M. Saighi
Abstract:
A numerical study was conducted to optimize the positioning of wind turbines over complex terrain. A two-dimensional disc model was used to calculate the flow velocity deficit in wind farms for both flat and complex configurations. The wind turbine wake was assessed using hybrid methods that combine CFD (Computational Fluid Dynamics) with the actuator disc model. The wind turbine rotor was defined by a thrust force coupled with the Navier-Stokes equations, which were resolved by an open-source computational code (Code_Saturne V3.0, developed by EDF). The simulations were conducted under atmospheric boundary layer conditions, considering a two-dimensional region located in the north of Algeria at 36.74°N latitude, 02.97°E longitude. The topography elevation values were collected along a longitudinal direction of 1 km downwind. The wind turbine sited over topography was simulated for different elevation variations. The main aim of this study is to determine the effect of topography on the behavior of wind farm wake flow. For this, the wake model applied in complex terrain first needs to isolate the singularity effects of topography on the vertical wind flow without the rotor disc. This step makes it possible to identify the mixing scales and friction-force zones near the ground. Depending on the ground relief, the wind flow was disturbed by turbulence and significant speed variation. The singularities of the velocity field were thoroughly collected, and the thrust coefficient Ct was calculated using the specific speed. In addition, to evaluate the effect of the terrain on the wake shape, the flow field was also simulated considering different rotor hub heights. The distance between the ground and the turbine hub (Hhub) was tested on flat terrain for Hhub = 1.125D, Hhub = 1.5D, and Hhub = 2D (D being the rotor diameter), considering a roughness value of z0 = 0.01 m.
This study demonstrated that farm topography induces a significant effect on wind turbine wakes compared to flat terrain.
Keywords: CFD, wind turbine wake, k-epsilon model, turbulence, complex topography
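The actuator disc representation of the rotor can be illustrated with classical 1-D momentum theory, which links the thrust coefficient Ct to the axial induction factor and the wake velocity deficit; the wind speed and Ct values below are illustrative assumptions, not values from the study.

```python
import math

def axial_induction(ct):
    """1-D momentum theory (valid for moderate Ct, below ~0.96):
    Ct = 4a(1 - a)  =>  a = (1 - sqrt(1 - Ct)) / 2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

u_inf = 8.0                     # assumed free-stream wind speed, m/s
ct = 0.8                        # assumed thrust coefficient
a = axial_induction(ct)
u_disc = u_inf * (1.0 - a)      # velocity at the rotor disc
u_wake = u_inf * (1.0 - 2 * a)  # fully developed far-wake velocity
```

In the hybrid CFD approach, this thrust is imposed as a momentum sink on the disc cells of the Navier-Stokes solver, and terrain-induced speed-up or slow-down shifts the local velocity from which Ct is evaluated.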
Procedia PDF Downloads 563
776 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation
Authors: Anshar Ajatasatru
Abstract:
The decline of coal prices over the past years has significantly increased awareness of effective mining operation. A viable step must be undertaken to become more cost-competitive while striving for best mining practice, especially at the Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented when it is controlled by a contract rate on a unit basis of USD ($) per bcm. The method emerges from the daily dozer push activity that continually shifts the overburden until the final target design set by mine planning. Volume calculation is then performed by computing the volume of overburden removed within a determined distance each time, using the cut-and-fill method with a high-precision GNSS system applied to the dozer as guidance to ensure the optimum result of overburden removal. The accumulated daily-to-weekly dozer push volume is found to be 95 bcm, which, multiplied by an average sell rate of $0.95, yields a monthly revenue of $90.25. The payment mechanism is based on push distance and push grade: the push distance interval determines rates that vary from $0.90 to $2.69 per bcm, influenced by the push slope grade, from -25% to +25%. The payable rates for dozer push operation specifically follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The results indicate that dozer push measurement can be a surface mining alternative, since it has enabled refinements in the method of work, operating cost, and productivity, apart from reducing exposure to the risk of low rented-equipment performance.
In addition, the payment mechanism of contract rates tied to dozer push operation scheduling will ultimately deliver clients almost 45% cost reduction in the form of low and consistent costs.
Keywords: contract rate, cut-fill method, dozer push, overburden volume
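The volume-times-rate arithmetic described above can be sketched in a few lines. Only the 95 bcm volume, the $0.95 average rate, the $0.90 to $2.69 rate range, and the -25% to +25% grade range come from the abstract; the distance bands and the grade adjustment in `push_rate` are hypothetical placeholders, not the contract's actual schedule.

```python
# Hypothetical sketch of the dozer push payment mechanism described above.
# The distance bands and grade adjustment are illustrative only; actual
# contract rates range from $0.90 to $2.69 per bcm.

def push_rate(distance_m, grade_pct):
    """Return an assumed $/bcm rate for a push distance band and slope grade."""
    if not -25 <= grade_pct <= 25:
        raise ValueError("grade outside the -25% to +25% contract range")
    base = 0.90 if distance_m <= 50 else 1.50 if distance_m <= 100 else 2.69
    # Assumed adjustment: uphill pushes (positive grade) earn a higher rate.
    return round(base * (1 + grade_pct / 100), 2)

def monthly_revenue(volume_bcm, avg_rate):
    """Overburden volume (bcm) times the average sell rate ($/bcm)."""
    return volume_bcm * avg_rate

# The figures reported in the abstract: 95 bcm at an average $0.95/bcm.
print(monthly_revenue(95, 0.95))  # 90.25
```

In practice the JISDOR currency adjustment would be applied on top of the rate before the monthly claim is settled.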
Procedia PDF Downloads 316
775 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations
Authors: Sadia H. Syeda, Zain H. Malik
Abstract:
As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of the systems in a digital form becomes ever more critical. Digital Engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables developing, tracking, and sharing system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The primary objective of the DE activities is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S. Department of Defense's (DoD) Office of the Secretary of Defense (OSD) Digital Engineering Strategy and its implementation, together with examples of DE implementation by industry and technical organizations, this paper provides descriptions of current DE processes and best practices for implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with an organization's business strategy. A capability maturity matrix is provided to assess an organization's DE maturity, emphasizing how all the SELC elements interlink to form a cohesive ecosystem. 
If implemented, DE can increase efficiency and improve the quality and outcomes of systems engineering processes.
Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle
Procedia PDF Downloads 93
774 Investigate the Side Effects of Patients With Severe COVID-19 and Choose the Appropriate Medication Regimens to Deal With Them
Authors: Rasha Ahmadi
Abstract:
In December 2019, a coronavirus, currently identified as SARS-CoV-2, produced a series of acute atypical respiratory illnesses in Wuhan, Hubei Province, China. The sickness induced by this virus was named COVID-19. The virus is transmissible between humans and has caused a pandemic worldwide. The death toll continues to climb, and a huge number of countries have been obliged to impose social isolation and lockdowns. The lack of a focused therapy continues to be a problem. Epidemiological research showed that senior patients are more susceptible to severe disease, whereas children tend to have milder symptoms. In this study, we focus on other possible side effects of COVID-19 and more detailed treatment strategies. Using bioinformatics analysis, we first isolated the gene expression profiles of patients with severe COVID-19 from the GEO database. Patients' blood samples were used in the GSE183071 dataset. We then categorized the genes into high and low expression. In the next step, we uploaded the genes separately to the Enrichr database and evaluated our data for signs and symptoms as well as related medication regimens. The results showed that 138 genes with high expression and 108 genes with low expression were differentially observed in the severe COVID-19 vs. control group. Symptoms and diseases such as embolism and thrombosis of the abdominal aorta, ankylosing spondylitis, suicidal ideation or attempt, and regional enteritis were associated with the high-expression genes, while acute and subacute forms of ischemic heart disease, CNS infection and poliomyelitis, and synovitis and tenosynovitis were associated with the low-expression genes. Following the detection of diseases and possible signs and symptoms, Carmustine, Bithionol, and Leflunomide were evaluated as more significant for the high-expression genes, and Chlorambucil, Ifosfamide, Hydroxyurea, and Bisphenol for the low-expression genes. 
In general, examining the different and less visible aspects of COVID-19 and identifying possible treatments can help us significantly in the emergency care and hospitalization of patients.
Keywords: phenotypes, drug regimens, gene expression profiles, bioinformatics analysis, severe COVID-19
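The step of categorizing genes into high and low expression before uploading them to Enrichr can be sketched as a simple partition by log2 fold change. The gene symbols, fold changes, and the |logFC| >= 1 cutoff below are all invented for illustration; the real values would come from the GSE183071 differential expression analysis.

```python
# Illustrative sketch: partition differentially expressed genes into high- and
# low-expression sets by log2 fold change before pathway/drug enrichment.
# Gene symbols, fold changes, and the cutoff are assumptions, not study data.

def partition_degs(logfc_by_gene, cutoff=1.0):
    up = sorted(g for g, fc in logfc_by_gene.items() if fc >= cutoff)
    down = sorted(g for g, fc in logfc_by_gene.items() if fc <= -cutoff)
    return up, down

demo = {"IFI27": 3.2, "IFITM3": 1.8, "CD4": -1.4, "CCR7": -2.1, "ACTB": 0.1}
up, down = partition_degs(demo)
print(up, down)  # ['IFI27', 'IFITM3'] ['CCR7', 'CD4']
```

Each list would then be submitted separately to an enrichment tool to retrieve associated phenotypes and candidate drugs.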
Procedia PDF Downloads 142
773 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting
Authors: Andres F. Ramirez, Carlos F. Valencia
Abstract:
The increasing interest in renewable energy strategies and the path toward diminishing the use of carbon-related energy sources have encouraged the development of novel strategies for the integration of solar energy into the electricity network. A correct inclusion of the fluctuating energy output of a photovoltaic (PV) energy system into an electric grid requires improvements in the forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but also of the associated underlying stochastic process. We present a methodology for the synthetic generation of solar irradiance (shortwave flux) and air temperature bivariate time series based on copula functions to represent the cross-dependence and temporal structure of the data. We explore the advantages of using this nonlinear time series method over traditional approaches that use a transformation of the data to normal distributions as an intermediate step. The use of copulas gives flexibility to represent the serial variability of the real data in the simulation and allows more control over the desired properties of the data. We use discrete zero-mass density distributions to capture the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate series. We found that the copula autoregressive methodology used, including the zero-mass characteristics of the solar irradiance time series, generates a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy forecasting and the underlying stochastic process, and to quantify the potential of integrating a photovoltaic (PV) energy generating system into a country's electricity network. 
Experimental analysis and real-data applications substantiate the usefulness and convenience of the proposed methodology for forecasting solar irradiance time series and solar energy across northern-hemisphere, southern-hemisphere, and equatorial zones.
Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation
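The core idea, a copula coupling two marginals while each series keeps its own temporal persistence, can be illustrated with a much-simplified Gaussian copula sketch. All parameters and marginals below are toy assumptions: the paper itself fits vector generalized linear models and a zero-mass irradiance marginal, which this sketch omits.

```python
import math
import random

# Much-simplified sketch of copula-based joint simulation: a Gaussian copula
# couples solar irradiance and air temperature, and each series carries lag-1
# autocorrelation in the latent Gaussian space. Correlation, persistence,
# and the marginal transforms are illustrative assumptions only.

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate(n, rho_cross=0.6, rho_lag=0.8, seed=42):
    rng = random.Random(seed)
    z1 = z2 = 0.0
    series = []
    for _ in range(n):
        e1 = rng.gauss(0, 1)
        # cross-dependence: e2 correlated with e1 via the copula parameter
        e2 = rho_cross * e1 + math.sqrt(1 - rho_cross**2) * rng.gauss(0, 1)
        # temporal structure: AR(1) in the latent Gaussian space
        z1 = rho_lag * z1 + math.sqrt(1 - rho_lag**2) * e1
        z2 = rho_lag * z2 + math.sqrt(1 - rho_lag**2) * e2
        # map through the copula to uniforms, then to assumed marginals
        irradiance = 1000.0 * phi(z1)        # W/m^2, toy marginal
        temperature = 15.0 + 10.0 * phi(z2)  # deg C, toy marginal
        series.append((irradiance, temperature))
    return series

sim = simulate(1000)
```

Replacing the closing uniform-to-marginal step with empirically fitted, possibly zero-inflated distributions is what gives the copula approach its flexibility over Gaussian-transformation methods.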
Procedia PDF Downloads 323
772 Tropical Squall Lines in Brazil: A Methodology for Identification and Analysis Based on ISCCP Tracking Database
Authors: W. A. Gonçalves, E. P. Souza, C. R. Alcântara
Abstract:
The ISCCP-Tracking database offers an opportunity to study the physical and morphological characteristics of convective systems based on geostationary meteorological satellites. This database contains 26 years of tracking of convective systems for the entire globe; the Tropical Squall Lines which occur in Brazil are therefore certainly within the database. In this study, we propose a methodology for the identification of these systems based on the ISCCP-Tracking database, and a physical and morphological characterization of these systems is also shown. The proposed methodology is first based on the year 2007. The Squall Lines were subjectively identified by visually analyzing infrared images from GOES-12. Based on this identification, the same systems were located within the ISCCP-Tracking database. It is known, and it was also observed here, that the Squall Lines which occur on the north coast of Brazil develop parallel to the coast, influenced by the sea breeze. In addition, it was observed that the eccentricity of the identified systems was greater than 0.7. A methodology based on the inclination (relative to the coast) and eccentricity (greater than 0.7) of the convective systems was therefore applied in order to identify and characterize Tropical Squall Lines in Brazil. These thresholds were applied back to the ISCCP-Tracking database for 2007. It was observed that other systems, which were not Squall Lines, were also identified. We therefore decided to call all systems identified by the inclination and eccentricity thresholds Linear Convective Systems, rather than Squall Lines. After this step, the Linear Convective Systems were identified and characterized for the entire database, from 1983 to 2008. The physical and morphological characteristics of these systems were compared to those of systems which did not have the required inclination and eccentricity to be called Linear Convective Systems. 
The results showed that the convection associated with the Linear Convective Systems is more intense and organized than in the other systems; this statement is supported by all the ISCCP-Tracking variables analyzed. This type of methodology, which explores 26 years of satellite data through an objective analysis, had not previously been explored in the literature. The physical and morphological characterization of the Linear Convective Systems based on 26 years of data is of great importance and should be useful in many branches of the atmospheric sciences.
Keywords: squall lines, convective systems, linear convective systems, ISCCP-Tracking
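The two-threshold filter described above can be sketched as a simple predicate over tracked systems. The eccentricity threshold (0.7) is from the abstract; the coast orientation and the 15-degree tolerance on inclination are invented for illustration, since the paper does not state them.

```python
# Sketch of the two-threshold filter: a tracked convective system is flagged
# as a Linear Convective System when its eccentricity exceeds 0.7 and its
# orientation is roughly parallel to the coast. The coast angle and the
# tolerance are assumed values, not taken from the paper.

COAST_ANGLE_DEG = 45.0   # hypothetical coast orientation
ANGLE_TOL_DEG = 15.0     # assumed tolerance on inclination

def is_linear_convective_system(eccentricity, inclination_deg):
    parallel = abs(inclination_deg - COAST_ANGLE_DEG) <= ANGLE_TOL_DEG
    return eccentricity > 0.7 and parallel

systems = [
    {"id": 1, "ecc": 0.85, "incl": 50.0},   # elongated, near-parallel -> LCS
    {"id": 2, "ecc": 0.60, "incl": 45.0},   # too circular
    {"id": 3, "ecc": 0.90, "incl": 100.0},  # wrong orientation
]
lcs = [s["id"] for s in systems if is_linear_convective_system(s["ecc"], s["incl"])]
print(lcs)  # [1]
```

Running such a predicate over the full 1983-2008 tracking archive is what turns the subjective GOES-12 identification of 2007 into an objective 26-year climatology.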
Procedia PDF Downloads 301
771 Exposure to Ionizing Radiation Resulting from the Chernobyl Fallout and Childhood Cardiac Arrhythmia: A Population Based Study
Authors: Geraldine Landon, Enora Clero, Jean-Rene Jourdain
Abstract:
In 2005, the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) launched a research program named EPICE (acronym for 'Evaluation of Pathologies potentially Induced by CaEsium') to collect scientific information on non-cancer effects possibly induced by chronic exposure to low doses of ionizing radiation, with the view of addressing a question raised by several French NGOs about the health consequences of the Chernobyl nuclear accident in children. The implementation of the program was preceded by a pilot phase to ensure that the project would be feasible and to determine the conditions for implementing an epidemiological study on a population of several thousand children. The EPICE program focused on childhood cardiac arrhythmias started in May 2009 and ran for 4 years, in partnership with the Russian Bryansk Diagnostic Center. The purpose of this cross-sectional study was to determine the prevalence of cardiac arrhythmias in the Bryansk oblast (depending on the contamination of the territory and the caesium-137 whole-body burden) and to assess whether caesium-137 was a factor associated with the onset of cardiac arrhythmias. To address these questions, a study bringing together 18,152 children aged 2 to 18 years was initiated; each child received three medical examinations (ECG, echocardiography, and caesium-137 whole-body activity measurement), and some of them were also given 24-hour Holter monitoring and blood tests. The findings of the study, currently submitted to an international journal (which is why no results can be given at this step), allow us to answer clearly the question of radiation-induced childhood arrhythmia, a subject that has been debated for many years. 
Our results will certainly be helpful for health professionals responsible for monitoring populations exposed to the releases from the Fukushima Dai-ichi nuclear power plant, and also useful for future comparative studies of children exposed to ionizing radiation in other contexts, such as cancer radiation therapy.
Keywords: Caesium-137, cardiac arrhythmia, Chernobyl, children
Procedia PDF Downloads 245
770 Forest Policy and Its Implications on Private Forestry Development: A Case Study in Rautahat District, Nepal
Authors: Dammar Bahadur Adhikari
Abstract:
Community forestry in Nepal has received a disproportionately high level of support from the government and other actors in the forestry sector. Even though the Master Plan for the Forestry Sector (1989) highlighted community and private forestry as one component, government policies and other interventions have deliberately left private forestry out of their structures and programs. The study aimed at providing a pathway for formulating appropriate policies to address the needs of different kinds of forest management regimes in Rautahat district, Nepal. The key areas the research focused on were: the current status of private forestry; community forest users' understanding of private forestry; the criteria for choosing species for private forestry; and the factors affecting the establishment of private forestry in the area. Qualitative and quantitative data were collected employing a questionnaire survey, rapid forest assessment, and key informant interviews. The study found that forest policies are imposed due to intense pressure from exogenous forces rather than endogenous demand. Most of the local people opine that their traditional knowledge and skills are not sufficient for private forestry and hence need training on the matter. Likewise, local use, market value, and rotation dictate the choice of species for plantation in private forests. Currently, the district forest office is the only government institution working in the area of private forestry; all other governmental and non-governmental organizations have abandoned private forestry. Similarly, only permanent settlers in the area were found to establish private forests; other forest users, such as migrants and forest encroachers, follow opportunistic behavior to meet their forest product needs from community and national forests. 
In this regard, the study recommends taking appropriate steps to support other forest management systems, including private forestry; to give community forestry the benefits of competition, as suggested by Darwin in the 19th century, over a century and a half ago; and to help alleviate poverty by channelling benefits to the household level.
Keywords: community forest, forest management, poverty, private forest, users' group
Procedia PDF Downloads 341
769 Constructing Practices for Lifestyle Journalism Education
Authors: Lucia Vodanovic, Bryan Pirolli
Abstract:
The London College of Communication is one of the only universities in the world to offer a lifestyle journalism master's degree. A hybrid originally constructed largely out of a generic journalism program crossed with numerous cultural studies approaches, the degree has developed into a leading lifestyle journalism education attracting students worldwide. This research project seeks to present a framework for structuring the degree as well as to understand how students in this emerging field of study value the program. While some researchers have addressed questions about journalism and higher education, none have looked specifically at the increasingly important genre of lifestyle journalism, which Folker Hanusch defines as including notions of consumerism and critique, among other identifying traits. Lifestyle journalism, itself poorly researched by scholars, can relate to topics including travel, fitness, and entertainment, and as such, a lifestyle journalism degree should arguably prepare students to engage with these topics. This research uses the existing Master of Arts in Lifestyle Journalism at the London College of Communication as a case study to examine the school's approach. Furthering Hanusch's original definition, this master's program attempts to characterize lifestyle journalism by a specific voice or approach, as reflected in the diversity of students' final projects. This framework echoes the ethos and ideas of the university, which focuses on creativity, design, and experimentation. By analyzing the current degree as well as student feedback, this research aims to assist future educators in pursuing the often neglected field of lifestyle journalism. 
Through a discovery of the unique mix of practical coursework, theoretical lessons, and the broad scope of student work presented in this degree program, the researchers strive to develop a framework for lifestyle journalism education, referring to Mark Deuze's ten questions for journalism education development. While Hanusch began the discussion to legitimize the study of lifestyle journalism, this project strives to go one step further and open up a discussion about the teaching of lifestyle journalism at the university level.
Keywords: education, journalism, lifestyle, university
Procedia PDF Downloads 307
768 ICAM1 Expression is Enhanced by TNFa through Histone Methylation in Human Brain Microvessel Cells
Authors: Ji-Young Choi, Jungjin Kim, Sang-Sun Yun, Sangmee Ahn Jo
Abstract:
Intercellular adhesion molecule 1 (ICAM1) is a mediator of inflammation involved in the adhesion and transmigration of leukocytes to endothelial cells, resulting in the enhancement of brain inflammation. We hypothesized that an increase of ICAM1 expression in endothelial cells is an early step in the pathogenesis of brain diseases such as Alzheimer's disease. Here, we report that ICAM1 expression is regulated by the pro-inflammatory cytokine TNFa in human brain microvascular endothelial cells (HBMVEC). TNFa significantly increased ICAM1 mRNA and protein levels at concentrations showing no cell toxicity. This increase was also shown in microvessels of mouse brain 24 hours after treatment with TNFa (8 mg/kg, i.v.). We then investigated the epigenetic mechanism involved in the induction of ICAM1 expression. Chromatin immunoprecipitation assays revealed that TNFa reduced methylation of histone 3 K9 (H3K9-2me) and histone 3 K27 (H3K27-3me), modifications well known to mark gene suppression, within the ICAM1 promoter region. However, acetylation of H3K9 and H3K14, modifications well known to mark gene activation, was not changed by TNFa. Treatment with BIX01294, a specific inhibitor of the histone methyltransferase G9a responsible for H3K9-2me, dramatically increased ICAM1 mRNA and protein levels, and overexpression of the G9a gene suppressed TNFa-induced ICAM1 expression. In contrast, GSK126, an inhibitor of the histone methyltransferase EZH2 responsible for H3K27-3me, and valproic acid, an inhibitor of histone deacetylases (HDAC), did not affect ICAM1 expression. These results suggest that histone 3 methylation is involved in ICAM1 repression. Moreover, TNFa- or BIX01294-induced ICAM1 induction enhanced both adhesion and transmigration of leukocytes on endothelial cells. This study demonstrates that TNFa upregulates ICAM1 expression through H3K9-2me and H3K27-3me within the ICAM1 promoter region, in which G9a is likely to play a pivotal role in ICAM1 transcription. 
Our study provides a novel mechanism for the regulation of ICAM1 transcription in HBMVEC.
Keywords: ICAM1, TNFa, HBMVEC, H3K9-2me
Procedia PDF Downloads 329
767 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background:- Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study:- The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies:- 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, due to differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings:- The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach is more accurate for predictions, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than with the center of pressure dataset. 
Conclusion:- Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
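The per-disorder yes/no SVM classification described above can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. Everything here is a toy stand-in: the two features loosely mimic plantar-pressure summaries, and the data, feature names, and hyperparameters are all invented, not taken from the clinical dataset.

```python
import random

# Minimal linear SVM (hinge loss, sub-gradient descent) as a stand-in for the
# per-disorder binary classifier. Data and hyperparameters are invented.

def train_svm(data, labels, lr=0.05, lam=0.001, epochs=500, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    b = 0.0
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = data[i], labels[i]           # y in {-1, +1}
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:                      # misclassified or inside margin
                w = [w[j] + lr * (y * x[j] - lam * w[j]) for j in range(2)]
                b += lr * y
            else:                               # only regularization shrinks w
                w = [w[j] - lr * lam * w[j] for j in range(2)]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy, linearly separable data: (normalized peak pressure, CoP excursion)
X = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_svm(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
print(acc)  # training accuracy on this toy set
```

A real study would of course use a full-featured library classifier with cross-validation on held-out patients rather than training accuracy.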
Procedia PDF Downloads 358
766 Organisational Change: The Impact on Employees and Organisational Development
Authors: Maureen Royce, Joshi Jariwala, Sally Kah
Abstract:
Change is inevitable, but the change process is progressive. Organisational change is the process in which an organisation changes strategies, operational methods, systems, culture, and structure to effect something different in the organisation. This process can be continuous or developed over a period, and driven by internal and external factors. Organisational change is essential if organisations are to survive in dynamic and uncertain environments. However, evidence from research shows that many change initiatives fail, leading to severe consequences for organisations and their resources. The complex models of third sector organisations, i.e., social enterprises, compound the levels of change in these organisations. Interestingly, innovation is associated with change in social enterprises due to the hybridity of product and service development. Furthermore, the creation of social interventions has offered a new process and outcomes to the lifecycle of change. Therefore, different forms of organisational innovation are developed, i.e., total, evolutionary, expansionary, and developmental, which affect the interventions of social enterprises. This raises both theoretical and business concerns about how the competing hybrid nature of social enterprises changes, how change is managed, and the impact on these organisations. These perspectives present critical questions for further investigation. In this study, we investigate the impact of organisational change on employees and organisational development at DaDaFest, a disability arts organisation with a social focus based in Liverpool. The three main objectives are: to explore the drivers of change and the implementation process; to examine the impact of organisational change on employees; and to identify barriers to organisational change and development. To address the preceding research objectives, a qualitative research design is adopted using semi-structured interviews. 
Data is analysed using a six-step thematic analysis framework, which enables the study to develop themes depicting the impact of change on employees and organisational development. This study presents theoretical and practical contributions for academics and practitioners. The theoretical contributions encapsulate the evolution of change and the change cycle in a social enterprise, while the practical implications provide critical insights into the change management process and the impact of change on employees and organisational development.
Keywords: organisational change, change management, organisational change system, social enterprise
Procedia PDF Downloads 126
765 Determination of Rare Earth Element Patterns in Uranium Matrix for Nuclear Forensics Application: Method Development for Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Measurements
Authors: Bernadett Henn, Katalin Tálos, Éva Kováss Széles
Abstract:
During the last 50 years, the worldwide spread of nuclear techniques has induced several new problems for the environment and for human life. Nowadays, due to the increasing risk of terrorism worldwide, the potential occurrence of terrorist attacks using weapons of mass destruction containing radioactive or nuclear materials, e.g., dirty bombs, is a real threat. For instance, uranium pellets are one of the potential nuclear materials suitable for making special weapons. Nuclear forensics mainly focuses on determining the origin of confiscated or found nuclear and other radioactive materials which could be used for making any radioactive dispersal device. One of the most important signatures in nuclear forensics for finding the origin of the material is the determination of the rare earth element (REE) pattern in the seized or found radioactive or nuclear samples. The concentration and the normalized pattern of the REE can be used as evidence of uranium origin. The REE are the fourteen lanthanides, together with scandium and yttrium, which are mostly found together and at very low concentrations in uranium pellets. The problems of REE determination using the ICP-MS technique are the uranium matrix (high concentration of uranium) and the interferences among the lanthanides. In this work, our aim was to develop an effective chemical sample preparation process using extraction chromatography to separate the uranium matrix and the rare earth elements from each other, following and modifying some procedures found in the literature. Secondly, our purpose was the optimization of the ICP-MS measurement process for REE concentrations. During method development, in the first step, a REE model solution was tested on two different types of extraction chromatography resins (LN® and TRU®) in different acidic media to test the lanthanide separation. 
The uranium matrix was then added to the model solution, which was tested under the same conditions. The methods were tested and validated using REE UOC (uranium ore concentrate) reference materials. Samples were analyzed by sector field mass spectrometry (ICP-SFMS).
Keywords: extraction chromatography, nuclear forensics, rare earth elements, uranium
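The normalization step behind comparing REE "patterns" is a simple element-wise division of measured concentrations by reference values. The reference concentrations below are placeholders for illustration; a real application would use an established chondrite or crustal reference set.

```python
# Sketch of REE pattern normalization: each measured concentration is divided
# by a reference value so patterns from different uranium samples can be
# compared on a common scale. Reference values here are assumed placeholders.

REFERENCE_PPM = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Yb": 0.161}  # assumed

def normalized_pattern(measured_ppm):
    return {el: measured_ppm[el] / REFERENCE_PPM[el]
            for el in measured_ppm if el in REFERENCE_PPM}

sample = {"La": 2.37, "Ce": 6.13, "Nd": 0.914, "Yb": 0.322}
pattern = normalized_pattern(sample)
print(pattern)
```

Plotted against atomic number, such normalized values form the characteristic pattern used as an origin signature.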
Procedia PDF Downloads 309
764 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS), or the Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectrum, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimation, the results are compared with those obtained by the CMS-based scaling methodology. 
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
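One common building block of such scaling stages can be sketched directly: a single amplitude factor applied to a record so its spectrum best matches the target over a period interval, minimizing squared error in log space. This closed form is a standard result, not necessarily the exact objective of the proposed procedure, and the spectra below are invented.

```python
import math

# Sketch of a single-record amplitude scaling step: find the factor s that
# minimizes sum over periods of (ln(s * Sa_record) - ln(Sa_target))^2.
# Setting the derivative w.r.t. ln(s) to zero gives
#   ln(s) = mean(ln(Sa_target) - ln(Sa_record)).
# The spectral values below are invented for illustration.

def log_scale_factor(record_sa, target_sa):
    logs = [math.log(t) - math.log(r) for r, t in zip(record_sa, target_sa)]
    return math.exp(sum(logs) / len(logs))

record = [0.8, 0.6, 0.4, 0.2]   # spectral accelerations (g) at four periods
target = [1.6, 1.2, 0.8, 0.4]   # target spectrum, exactly 2x the record here
s = log_scale_factor(record, target)
print(round(s, 6))  # 2.0
```

Scaling the median of an ensemble while leaving the record-to-record dispersion untouched, as the abstract describes, amounts to applying such a factor uniformly across the selected set.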
Procedia PDF Downloads 583
763 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications, such as the manufacturing of plastics, precise positioning of equipment, and operating computer-controlled systems where feed speed control, position holding, and a constantly desired output are very critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady-state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady and transient motor response. The current investigation is conducted experimentally on a servo trainer CE 110 using an analog PI controller CE 120, and theoretically using Simulink in MATLAB. Both the experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable. In addition, it also speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable, with induced oscillations and a slow response while eliminating the error. 
From the current work, it is desired to achieve a stable response of the servo motor, in terms of its angular velocity, subjected to steady-state and transient input signals by utilizing the strengths of both the P and I controllers.
Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
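The P-versus-PI behavior described above can be reproduced with a toy discrete-time simulation of a first-order motor model under a PI control law. The plant parameters and gains below are illustrative assumptions, not those of the CE 110 trainer or the Simulink model used in the study.

```python
# Toy discrete-time simulation of a PI speed controller on a first-order DC
# motor model (time constant tau, gain K). All parameters are illustrative.

def simulate_pi(kp, ki, setpoint=100.0, tau=0.5, K=1.0, dt=0.01, steps=2000):
    speed, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - speed
        integral += error * dt
        u = kp * error + ki * integral        # PI control law
        # first-order plant: tau * dspeed/dt + speed = K * u  (Euler step)
        speed += dt * (K * u - speed) / tau
    return speed

p_only = simulate_pi(kp=2.0, ki=0.0)   # P alone leaves a steady-state offset
pi_ctrl = simulate_pi(kp=2.0, ki=5.0)  # integral action removes the error
print(round(p_only, 2), round(pi_ctrl, 2))
```

With proportional control alone the loop settles at K*kp*setpoint/(1 + K*kp), below the setpoint, whereas the added integral term drives the steady-state error to zero, matching the qualitative findings reported above.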
Procedia PDF Downloads 110
762 A Step Towards Circular Economy: Assessing the Efficacy of Ion Exchange Resins in the Recycling of Automotive Engine Coolants
Authors: George Madalin Danila, Mihaiella Cretu, Cristian Puscasu
Abstract:
The recycling of used antifreeze/coolant is a widely discussed and intricate issue. Complying with government regulations for the proper disposal of hazardous waste poses a significant challenge for today's automotive and industrial sectors. In recent years, global focus has shifted toward Earth's fragile ecology, emphasizing the need to restore and preserve the natural environment. The business and industrial sectors have undergone substantial changes to adapt and offer products tailored to these evolving markets. The global antifreeze market was valued at USD 5.4 billion in 2020 and is projected to reach USD 5.9 billion by 2025, driven by the growing number of vehicles worldwide as well as by the growth of HVAC systems. This study presents the evaluation of an ion exchange resin-based installation designed for the recycling of engine coolants, specifically ethylene glycol (EG) and propylene glycol (PG). The recycling process aims to restore the coolant to meet the stringent ASTM standards for both new and recycled coolants. A combination of physical-chemical methods, gas chromatography-mass spectrometry (GC-MS), and inductively coupled plasma mass spectrometry (ICP-MS) was employed to analyze and validate the purity and performance of the recycled product. The experimental setup included performance tests, namely glassware corrosion and the foaming tendency of the coolant, to assess the efficacy of the recycled coolants against new-coolant standards. The results demonstrate that the recycled EG coolants exhibit quality comparable to new coolants, with all critical parameters falling within the acceptable ASTM limits. This indicates that the ion exchange resin method is a viable and efficient solution for the recycling of engine coolants, offering an environmentally friendly alternative to the disposal of used coolants while ensuring compliance with industry standards.
Keywords: engine coolant, glycols, recycling, ion exchange resin, circular economy
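The pass/fail screening step, comparing measured properties of a recycled batch against specification limits, can be sketched as follows. The parameter names and limit values here are illustrative placeholders, not actual ASTM D3306 figures, and the batch measurements are synthetic.

```python
# Hypothetical (min, max) limits per parameter; None means no bound on that
# side. These numbers are placeholders for illustration only, NOT real ASTM
# specification values.
HYPOTHETICAL_LIMITS = {
    "ph":                     (7.5, 11.0),
    "foam_volume_ml":         (None, 150.0),
    "glassware_corrosion_mg": (None, 10.0),   # max mass loss per coupon
}

def check_batch(measurements, limits=HYPOTHETICAL_LIMITS):
    """Return a list of (parameter, value, verdict) tuples for one batch."""
    report = []
    for name, (lo, hi) in limits.items():
        value = measurements[name]
        ok = (lo is None or value >= lo) and (hi is None or value <= hi)
        report.append((name, value, "pass" if ok else "fail"))
    return report

# A synthetic recycled-coolant batch that sits inside every placeholder limit.
batch = {"ph": 8.2, "foam_volume_ml": 90.0, "glassware_corrosion_mg": 4.5}
report = check_batch(batch)
```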
761 Magnetic Nanoparticles Coated with Modified Polysaccharides for the Immobilization of Glycoproteins
Authors: Kinga Mylkie, Pawel Nowak, Marta Z. Borowska
Abstract:
The most important proteins in human serum responsible for drug binding are human serum albumin (HSA) and α1-acid glycoprotein (AGP). The AGP molecule is a glycoconjugate containing a single polypeptide chain of 183 amino acids (the protein core) and five branched glycan chains (the sugar part) covalently linked by N-glycosidic bonds to aspartyl residues (Asp(N)-15, -38, -54, -75, -85) of the polypeptide chain. This protein plays an important role in binding alkaline drugs (a large group of drugs used in psychiatry), some acidic drugs (e.g., coumarin anticoagulants), and neutral drugs (steroid hormones). The main goal of the research was to obtain magnetic nanoparticles coated with chemically modified biopolymers bearing highly reactive functional groups able to effectively immobilize the glycoprotein (α1-acid glycoprotein) without losing the ability to bind active substances. The first phase of the project involved the chemical modification of the biopolymer starch. The modification was carried out by organic synthesis, yielding a polymer enriched on its surface with aldehyde groups, which in the next step was coupled with 3-aminophenylboronic acid. Magnetite nanoparticles coated with starch were prepared by in situ co-precipitation and then oxidized with a 1 M sodium periodate solution to form a dialdehyde starch coating. Afterward, the reaction between the magnetite nanoparticles coated with dialdehyde starch and 3-aminophenylboronic acid was carried out. The obtained materials consist of a magnetite core surrounded by a layer of modified polymer that carries on its surface the dihydroxyboryl groups of boronic acids, which are capable of binding glycoproteins. The magnetic nanoparticles obtained as carriers for plasma protein immobilization were fully characterized by ATR-FTIR, TEM, SEM, and DLS. The glycoprotein was immobilized on the obtained nanoparticles.
The amount of immobilized protein was determined by the Bradford method.
Keywords: glycoproteins, immobilization, magnetic nanoparticles, polysaccharides
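The Bradford quantification step mentioned above works by reading absorbance at 595 nm against a standard curve. A minimal sketch of that calculation, with illustrative synthetic standards (not data from this study):

```python
# Bradford-style quantification sketch: fit a linear standard curve of
# absorbance (A595) vs. protein concentration, then invert it for unknowns.
# The BSA standard concentrations and absorbances below are synthetic,
# perfectly linear values chosen for illustration.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ≈ slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

standards_ug_ml = [0.0, 25.0, 50.0, 100.0, 200.0]   # BSA standards (µg/mL)
absorbance      = [0.00, 0.12, 0.24, 0.48, 0.96]    # A595 per standard

slope, intercept = fit_line(standards_ug_ml, absorbance)

def protein_conc(a595):
    """Invert the calibration line: concentration (µg/mL) from absorbance."""
    return (a595 - intercept) / slope

unknown = protein_conc(0.36)   # 75.0 µg/mL on this synthetic curve
```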
760 Climate Change and Health in Policies
Authors: Corinne Kowalski, Lea de Jong, Rainer Sauerborn, Niamh Herlihy, Anneliese Depoux, Jale Tosun
Abstract:
Climate change is considered one of the biggest threats to human health of the 21st century. The link between climate change and health has received relatively little attention in the media, in research, and in policy-making. A long-term, broad overview of how health is represented in climate change legislation is missing from the literature. It is unknown whether, or in what terms, health is referenced in legal clauses addressing climate change in national and European legislation. Integrating science-based evidence on the health impacts of climate change into policies could be a key step toward inciting the political and societal changes necessary to decelerate global warming. It may also drive the implementation of new strategies to mitigate the consequences for health systems. To provide an overview of this issue, we are analyzing the Global Climate Legislation Database provided by the Grantham Research Institute on Climate Change and the Environment, established in 2008 at the London School of Economics and Political Science. The database (updated as of 1 January 2015) covers climate change legislation in 99 countries around the world and offers relevant information about the state of climate-related policies. We will use it to systematically analyze the 829 identified pieces of legislation and determine how health is represented in climate change legislation. We are conducting exploratory research on national and supranational legislation and anticipate that health will be addressed in various forms. The goal is to highlight how often, in what specific terms, and which aspects of health or health risks of climate change are mentioned in the legislation. The position and recurrence of mentions of health are also of importance.
Data will be extracted as complete quotations of the sentences that mention health, which will allow a second, qualitative stage analyzing which aspects of health are represented and in what context. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research on the scientific, political, and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.
Keywords: climate change, exploratory research, health, policies
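The extraction step described above, counting health-related terms in legislation texts and keeping the full sentence around each hit for later qualitative coding, can be sketched as follows. The term list and the sample text are illustrative assumptions, not the project's actual codebook or database contents.

```python
import re

# Illustrative search terms; the real study would use a curated vocabulary.
HEALTH_TERMS = ["health", "disease", "mortality", "epidemic"]

def extract_mentions(text, terms=HEALTH_TERMS):
    """Return (per-term occurrence counts, sentences containing any term)."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    counts = {t: 0 for t in terms}
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        matched = False
        for t in terms:
            k = lowered.count(t)
            if k:
                counts[t] += k
                matched = True
        if matched:
            hits.append(sentence.strip())   # keep the full quotation
    return counts, hits

law = ("This Act establishes a national adaptation plan. "
       "The plan shall assess risks to public health and reduce "
       "heat-related mortality.")
counts, hits = extract_mentions(law)   # one sentence mentions two terms
```

Keeping the whole matched sentence, rather than just a count, is what enables the second qualitative stage the abstract describes.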