Search results for: AR linear estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4958


788 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa

Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji

Abstract:

For many resource-rich developing countries, fiscal policy has become a key tool used for short-run fiscal management since it is considered as playing a critical role in injecting part of resource rents into the economies. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations among a sample of commodity-exporting countries in sub-Saharan Africa (SSA) is investigated. The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is considered as the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of fiscal deficit to GDP and the ratio of government spending to GDP, the output gap is measured as a Hodrick-Prescott filter of output growth for each country, while commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis. The panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and also uses the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide consistent estimates of the dynamic effects of commodity prices on fiscal policy. The data used cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample. This implies that fiscal management in these countries follows the trend of economic performance. Moreover, it is found that fiscal policy has not performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to the unstable revenue inflows due to the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, therefore, the study suggests that fiscal planning should be largely decoupled from commodity revenues, that domestic revenue bases be improved, and that longer-term perspectives be adopted in fiscal policy management.
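
As a rough illustration of the cyclicality test described in this abstract, the sketch below builds a Hodrick-Prescott output gap from synthetic country series and regresses a fiscal-deficit ratio on it with country effects; a positive, significant coefficient corresponds to the procyclicality the study reports. The data, country labels and smoothing parameter are assumptions for illustration, not the study's panel or its ECM/FMOLS estimates.

```python
# Minimal sketch: HP-filtered output gap and a fiscal cyclicality regression
# on synthetic panel data (illustrative only; not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
years = np.arange(1992, 2017)
frames = []
for country in ["A", "B", "C"]:
    growth = 3 + np.cumsum(rng.normal(0, 1, len(years))) * 0.1     # output growth, %
    gap, _trend = hpfilter(pd.Series(growth, index=years), lamb=100)  # annual data
    deficit = 0.4 * gap + rng.normal(0, 0.5, len(years))           # fiscal deficit/GDP, %
    frames.append(pd.DataFrame({"country": country, "year": years,
                                "output_gap": gap.values, "deficit_gdp": deficit}))
panel = pd.concat(frames, ignore_index=True)

# A positive, significant coefficient on the output gap => procyclical fiscal policy
model = smf.ols("deficit_gdp ~ output_gap + C(country)", data=panel).fit()
print(model.params["output_gap"], model.pvalues["output_gap"])
```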

Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, sub-Saharan Africa

Procedia PDF Downloads 143
787 Applied Canonical Correlation Analysis to Explore the Relationship between Resourcefulness and Quality of Life in Cancer Population

Authors: Chiou-Fang Liou

Abstract:

Cancer has been one of the most life-threatening diseases worldwide for more than 30 years. The influences of cancer include symptoms from the illness itself along with its treatments. The quality of life among patients diagnosed with cancer during cancer treatments has been conceptualized within four domains: Functional Well-Being, Social Well-Being, Physical Well-Being, and Emotional Well-Being. Patients with cancer often need to make adjustments to face all the challenges. The middle-range theory of Resourcefulness and Quality of Life has been applied to explore factors contributing to cancer patients’ needs. Resourcefulness is defined as a set of skills that can be learned and consists of Personal and Social Resourcefulness. Empirical evidence also supports a possible relationship between Resourcefulness and Quality of Life. However, little is known about the extent to which the two concepts are related to each other. This study, therefore, applied a multivariate technique, Canonical Correlation Analysis, to identify the relationship between the two sets of multi-dimensional measures, Resourcefulness and Quality of Life, in cancer patients receiving treatments. After IRB approval, this multi-centered study took place at two medical centers in the Central Region of Taiwan. A total of 186 patients with various cancer diagnoses, receiving either radiation therapy or chemotherapy, consented and answered questionnaires. The important findings of the generalized F test identified two canonical variate pairs with several linear relations, explaining a total of 79.1% of the total variance. The first pair showed Personal Resourcefulness negatively related to Social Well-being, Functional Well-being, Emotional Well-being, and Physical Well-being, in that order. The second pair showed Social Resourcefulness negatively related to Functional Well-being and Physical Well-being yet positively related to Social Well-being and Emotional Well-being. In conclusion, the results of the present study support a statistically significant relationship between the two sets of variables that is consistent with the theory. In addition, the results are considerably important for cancer patients receiving cancer treatments.
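
The sketch below is a minimal canonical correlation analysis of the kind applied in this abstract, run on simulated scores for a two-variable resourcefulness set and a four-domain quality-of-life set; the sample size matches the abstract, but all values and variable names are invented.

```python
# Minimal sketch of a canonical correlation analysis between a resourcefulness
# set (personal, social) and a quality-of-life set (four well-being domains),
# on simulated scores; not the study's questionnaire data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 186
resourcefulness = rng.normal(size=(n, 2))                  # personal, social
noise = rng.normal(scale=0.8, size=(n, 4))
qol = resourcefulness @ rng.normal(size=(2, 4)) + noise    # FWB, SWB, PWB, EWB

cca = CCA(n_components=2)
cca.fit(resourcefulness, qol)
Xc, Yc = cca.transform(resourcefulness, qol)

# Canonical correlations for the two variate pairs
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical pair {k + 1}: r = {r:.2f}")
```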

Keywords: cancer, canonical correlation analysis, quality of life, resourcefulness

Procedia PDF Downloads 65
786 Comparative Analysis of in vitro Release profile for Escitalopram and Escitalopram Loaded Nanoparticles

Authors: Rashi Rajput, Manisha Singh

Abstract:

Escitalopram oxalate (ETP) is an FDA-approved antidepressant drug from the category of SSRIs (selective serotonin reuptake inhibitors) and is used in the treatment of generalized anxiety disorder (GAD) and major depressive disorder (MDD). When taken orally, it is metabolized to S-demethylcitalopram (S-DCT) and S-didemethylcitalopram (S-DDCT) in the liver with the help of the enzymes CYP2C19, CYP3A4 and CYP2D6, causing side effects such as dizziness, fast or irregular heartbeat, headache, nausea, etc. Therefore, targeted and sustained drug delivery will be a helpful tool for increasing its efficacy and reducing side effects. The present study is designed to formulate a mucoadhesive nanoparticle formulation for the same. Escitalopram-loaded polymeric nanoparticles were prepared by the ionic gelation method, and characterization of the optimised formulation was done by zeta average particle size (93.63 nm) and zeta potential (-1.89 mV); TEM analysis (range of 60 nm to 115 nm) also confirms the nanometric size range of the drug-loaded nanoparticles, along with a polydispersity index of 0.117. In this research, we have studied the in vitro drug release profile of ETP nanoparticles through a semi-permeable dialysis membrane. The three important characteristics affecting the drug release behaviour were particle size, ionic strength and morphology of the optimised nanoparticles. The data showed that on increasing the particle size of the drug-loaded nanoparticles, the initial burst was reduced, which was comparatively higher for the free drug. The formulation with 1 mg/ml chitosan in 1.5 mg/ml tripolyphosphate solution showed steady release over the entire period of drug release. This data was then further validated through mathematical modelling to establish the mechanism of drug release kinetics, which showed a typical linear diffusion profile in the optimised ETP-loaded nanoparticles.
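
To illustrate the release-kinetics modelling step mentioned above, the sketch below fits a cumulative release profile to two commonly used models (Higuchi and Korsmeyer-Peppas) by nonlinear least squares. The time points and release percentages are placeholders, not the measured ETP data, and the models named here are generic choices rather than the exact equations used in the study.

```python
# Minimal sketch of fitting cumulative in vitro release data to two common
# release-kinetics models; the data points below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24])            # h
release = np.array([8, 12, 18, 26, 32, 37, 45, 63])   # cumulative % released

def higuchi(t, kH):                                    # Q = kH * sqrt(t)
    return kH * np.sqrt(t)

def korsmeyer_peppas(t, k, n):                         # Mt/Minf = k * t^n
    return k * t**n

for name, model, p0 in [("Higuchi", higuchi, (10,)),
                        ("Korsmeyer-Peppas", korsmeyer_peppas, (10, 0.5))]:
    popt, _ = curve_fit(model, t, release, p0=p0)
    pred = model(t, *popt)
    r2 = 1 - np.sum((release - pred) ** 2) / np.sum((release - release.mean()) ** 2)
    print(name, np.round(popt, 3), f"R2 = {r2:.3f}")
```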

Keywords: ionic gelation, mucoadhesive nanoparticle, semi-permeable dialysis membrane, zeta potential

Procedia PDF Downloads 275
785 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique that is based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients who had a high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. Acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated with dynamic PET. Results: The range of territorial MBF in LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for the angiographically abnormal than for the normal (Normal Mean MBF = 2.49 ± 0.61, Abnormal Mean MBF = 1.43 ± 0.62, P < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
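
The sketch below shows the flow-reserve arithmetic described in the Methods: CFR as the ratio of stress to rest MBF, with SPECT-derived values compared against a PET reference by Pearson correlation. All numbers are illustrative placeholders, not patient data.

```python
# Minimal sketch: CFR = stress MBF / rest MBF, then SPECT vs PET correlation.
# The values below are invented placeholders, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr

rest_mbf_spect   = np.array([0.55, 0.70, 0.62, 0.80, 0.75, 0.66, 0.90])  # ml/min/g
stress_mbf_spect = np.array([1.30, 2.10, 1.20, 2.60, 2.40, 1.10, 2.90])
cfr_spect = stress_mbf_spect / rest_mbf_spect

cfr_pet = np.array([2.5, 3.1, 1.8, 3.4, 3.0, 1.6, 3.3])                  # reference

r, p = pearsonr(cfr_spect, cfr_pet)
print("CFR (SPECT):", np.round(cfr_spect, 2))
print(f"SPECT vs PET: r = {r:.2f}, p = {p:.3f}")
```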

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin

Procedia PDF Downloads 137
784 Some Quality Parameters of Selected Maize Hybrids from Serbia for the Production of Starch, Bioethanol and Animal Feed

Authors: Marija Milašinović-Šeremešić, Valentina Semenčenko, Milica Radosavljević, Dušanka Terzić, Ljiljana Mojović, Ljubica Dokić

Abstract:

Maize (Zea mays L.) is one of the most important cereal crops and, as such, one of the most significant naturally renewable carbohydrate raw materials for the production of energy and a multitude of different products. The main goal of the present study was to investigate the suitability of selected maize hybrids of different genetic backgrounds, produced at the Maize Research Institute ‘Zemun Polje’, Belgrade, Serbia, for starch, bioethanol and animal feed production. All the hybrids are commercial, and their detailed characterization is important for the expansion of their different uses. The starches were isolated by using a 100-g laboratory maize wet-milling procedure. Hydrolysis experiments were done in two steps (liquefaction with Termamyl SC, and saccharification with SAN Extra L). Starch hydrolysates obtained by the two-step hydrolysis of the corn flour starch were subjected to fermentation by S. cerevisiae var. ellipsoideus under semi-anaerobic conditions. The digestibility based on enzymatic solubility was determined by the Aufréré method. All investigated ZP maize hybrids had very different physical characteristics and chemical compositions, which could allow various possibilities for their use. The amount of hard (vitreous) and soft (floury) endosperm in the kernel is considered one of the most important parameters that can influence the starch and bioethanol yields. Hybrids with a lower test weight and density and a greater proportion of the soft endosperm fraction had a higher yield, recovery and purity of starch. Among the chemical composition parameters, only the starch content significantly affected the starch yield. Starch yields of the studied maize hybrids ranged from 58.8% in ZP 633 to 69.0% in ZP 808. The lowest bioethanol yield of 7.25% w/w was obtained for hybrid ZP 611k and the highest for hybrid ZP 434 (8.96% w/w). A very significant correlation was determined between kernel starch content and the bioethanol yield, as well as volumetric productivity (48 h) (r=0.66). The obtained results showed that the NDF, ADF and ADL contents in the whole maize plant of the observed ZP maize hybrids varied from 40.0% to 60.1%, 18.6% to 32.1%, and 1.4% to 3.1%, respectively. The difference in the digestibility of the dry matter of the whole plant among hybrids (ZP 735 and ZP 560) amounted to 18.1%. Moreover, the differences in the contents of the lignocellulose fractions affected the differences in dry matter digestibility. From the results, it can be concluded that the genetic background of the selected maize hybrids plays an important part in the estimation of the technological value of maize hybrids for various purposes. The obtained results are of exceptional importance for breeding programs and the selection of potentially the most suitable maize hybrids for starch, bioethanol and animal feed production.

Keywords: bioethanol, biomass quality, maize, starch

Procedia PDF Downloads 205
783 The Response of Soil Biodiversity to Agriculture Practice in Rhizosphere

Authors: Yan Wang, Guowei Chen, Gang Wang

Abstract:

Soil microbial diversity is one of the important parameters used to assess soil fertility, soil health, and even the stability of the ecosystem. In this paper, we aim to reveal the differences in the soil microbiota between the rhizosphere and the root zone, and to identify specific biomarkers influenced by long-term tillage practices, which included four treatments: no-tillage, ridge tillage, continuous cropping with corn, and crop rotation with corn and soybean. Here, high-throughput sequencing was performed to investigate the differences in bacteria between the rhizosphere and root zone. The results showed a very significant difference in species richness between rhizosphere and root zone soil under the same crop rotation system (p < 0.01); a significant difference in species richness was also found between continuous cropping with corn and the corn-soybean rotation treatment in the rhizosphere, and between no-tillage and ridge tillage in root zone soils. Further beta diversity analysis implied that both tillage methods and crop rotation systems influence the soil microbial diversity and community structure to varying degrees. The composition and community structure of microbes in rhizosphere and root zone soils were clustered distinctly by the beta diversity analysis (p < 0.05). Linear discriminant analysis coupled with effect size (LEfSe) analysis of total taxa in the rhizosphere picked out more than 100 bacterial taxa that were significantly more abundant than those in root zone soils, whereas the number of biomarkers was lower between the continuous cropping with corn and the crop rotation treatment; the same pattern was found for the no-tillage and ridge tillage treatments. Bacterial communities were greatly influenced by the main environmental factors at a large scale, which is the result of biological adaptation and acclimation; hence, such knowledge is beneficial for optimizing agricultural practices.

Keywords: tillage methods, biomarker, biodiversity, rhizosphere

Procedia PDF Downloads 145
782 Absorption Behavior of Some Acids During Chemical Aging of HDPE-100 Polyethylene

Authors: Berkas Khaoula

Abstract:

Based on selection characteristics, high-density polyethylene (HDPE) extruded pipes are among the most economical and durable materials and are well-designed solutions for water and gas transmission systems. The main reasons for such a choice are the high quality-performance ratio and the long-term service durability under aggressive conditions. Due to inevitable interactions with soils of different chemical compositions and with transported fluids, aggressiveness becomes a key factor in studying resilient strength and life prediction limits. This phenomenon is known as environmental stress cracking resistance (ESCR). In this work, the effect of three acidic environments (5% acetic, 20% hydrochloric and 20% sulfuric acid) on HDPE-100 samples (~10x11x24 mm³) was studied. The results, presented in the form (Δm/m0, %) as a function of √t, indicate that the absorption, in the case of strong acids (HCl and H2SO4), evolves towards negative values involving material losses such as antioxidants and some additives. On the other hand, acetic acid and deionized water (DW) give linear Fickian (LF) and type B sorption curves, respectively. In general, the acids cause a slow but irreversible alteration of the chemical structure, composition and physical integrity of the polymer. The DW absorption is not significant (~0.02%) for an immersion duration of 69 days. Such results are well accepted in actual applications, while changes caused by acidic environments are serious and must be subjected to particular monitoring of the OIT factor (Oxidation Induction Time). After 55 days of aging, the H2SO4 and HCl media showed particular behaviour, with a mass loss (%) in the interval [0.025-0.038] associated with irreversible chemical reactions as well as physical degradation. This state is usually explained by hydrolysis of the polymer, causing the loss of functional groups and chain scissions. These results are useful for designing and estimating the lifetime of the pipe in service and in contact with adverse environments.
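
A minimal sketch of the gravimetric analysis implied by the (Δm/m0, %) versus √t presentation is given below: the relative mass change is regressed on √t over the initial uptake region, where Fickian behaviour would appear as a straight line. The immersion times and mass-change values are invented for illustration.

```python
# Minimal sketch: relative mass change vs sqrt(time) with a linear fit over the
# early (Fickian) region; the data points are illustrative only.
import numpy as np
from scipy.stats import linregress

days = np.array([1, 4, 9, 16, 25, 36, 49, 55])
dm_over_m0 = np.array([0.004, 0.008, 0.011, 0.014, 0.010,
                       0.002, -0.015, -0.030])            # %, loss at long times

sqrt_t = np.sqrt(days)
early = slice(0, 4)   # initial uptake region, before additive leaching dominates
fit = linregress(sqrt_t[early], dm_over_m0[early])
print(f"initial slope = {fit.slope:.4f} %/day^0.5, R^2 = {fit.rvalue**2:.3f}")
```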

Keywords: HDPE, environmental stress cracking, absorption, acid media, chemical aging

Procedia PDF Downloads 68
781 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 49
780 Modelling, Assessment, and Optimisation of Rules for Selected Umgeni Water Distribution Systems

Authors: Khanyisile Mnguni, Muthukrishnavellaisamy Kumarasamy, Jeff C. Smithers

Abstract:

Umgeni Water is a water board that supplies most parts of KwaZulu-Natal with bulk potable water. Currently, Umgeni Water runs its distribution system based on required reservoir levels and demands and does not consider the energy cost at different times of the day, the number of pump switches, or background leakages. Including these constraints can reduce operational cost, energy usage and leakages, and increase performance. Optimising pump schedules can reduce energy usage and costs while adhering to hydraulic and operational constraints. Umgeni Water has installed online hydraulic software, WaterNet Advisor, that allows running different operational scenarios prior to implementation in order to optimise the distribution system. This study will investigate operational scenarios using optimisation techniques and WaterNet Advisor for a local water distribution system. Based on studies reported in the literature, introducing pump scheduling optimisation can reduce energy usage by approximately 30% without any change in infrastructure. Including tariff structures in an optimisation problem can reduce pumping costs by 15%, while including leakages decreases cost by 10%, and the pressure drop in the system can be up to 12 m. Genetic optimisation algorithms are widely used due to their ability to solve nonlinear, non-convex, and mixed-integer problems. Other methods such as branch-and-bound linear programming have also been successfully used. A suitable optimisation method will be chosen based on its efficiency. The objective of the study is to reduce energy usage, operational cost, and leakages, and the feasibility of the optimal solution will be checked using WaterNet Advisor. This study will provide an overview of the optimisation of hydraulic networks and progress made to date in multi-objective optimisation for a selected sub-system operated by Umgeni Water.
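
As a sketch of the kind of pump-scheduling optimisation discussed above, the snippet below sets up a small mixed-integer model (one pump, hourly on/off decisions, a time-of-use tariff and reservoir level limits) with the PuLP library. The tariff, demand and pump figures are hypothetical and are not Umgeni Water data or WaterNet Advisor output.

```python
# Minimal sketch of a time-of-use pump-scheduling MILP: binary on/off decisions
# per hour, energy cost minimised subject to reservoir level limits.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpStatus

hours = range(24)
tariff = [0.6 if 6 <= h < 22 else 0.3 for h in hours]   # R/kWh, peak vs off-peak
demand = [80 if 6 <= h < 20 else 40 for h in hours]     # m3/h drawn from reservoir
pump_rate, pump_kw = 120.0, 90.0                        # m3/h, kW
level0, level_min, level_max = 500.0, 300.0, 1000.0     # m3

prob = LpProblem("pump_schedule", LpMinimize)
on = LpVariable.dicts("on", hours, cat=LpBinary)
prob += lpSum(tariff[h] * pump_kw * on[h] for h in hours)   # energy cost objective

for h in hours:   # reservoir mass balance kept within level limits
    level = level0 + lpSum(pump_rate * on[k] - demand[k] for k in range(h + 1))
    prob += level >= level_min
    prob += level <= level_max

prob.solve()
print(LpStatus[prob.status], [int(on[h].value()) for h in hours])
```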

Keywords: energy usage, pump scheduling, WaterNet Advisor, leakages

Procedia PDF Downloads 79
779 3D Numerical Simulation of Undoweled and Uncracked Joints in Short Paneled Concrete Pavements

Authors: K. Sridhar Reddy, M. Amaranatha Reddy, Nilanjan Mitra

Abstract:

Short paneled concrete pavement (SPCP) with a shorter panel size can be an alternative to conventional jointed plain concrete pavements (JPCP) at the same cost as asphalt pavements, with all the advantages of concrete pavement along with reduced thickness and less chance of the mid-slab cracking and/or dowel bar locking so common in JPCP. Cast-in-situ short concrete panels (short slabs) laid on a strong foundation consisting of a dry lean concrete base (DLC) and a cement treated subbase (CTSB) will reduce the thickness of the concrete slab to the order of 180 mm to 220 mm, whereas JPCP required 280 mm for the same traffic. During the construction of SPCP test sections on two Indian National Highways (NH), it was observed that the joints remained uncracked after a year of traffic. The load transfer variability and behavior of the undoweled and uncracked joints are of interest in anticipating the long-term performance of the SPCP. To investigate the effects of undoweled and uncracked joints on short slabs, the present study was conducted. A multilayer linear elastic analysis using a 3D finite element package was developed for different panel sizes with different thicknesses resting on different types of solid elastic foundation, with and without a temperature gradient. Surface deflections were obtained from the 3D FE model and validated with field deflections measured by the falling weight deflectometer (FWD) test. Stress analysis indicates that flexural stresses in short slabs decrease with a decrease in panel size and an increase in thickness. Detailed evaluation of the stress analysis, including the effects of curling behavior, the stiffness of the base layer and a variable degree of load transfer, is underway.

Keywords: joint behavior, short slabs, uncracked joints, undoweled joints, 3D numerical simulation

Procedia PDF Downloads 161
778 Statistical Correlation between Ply Mechanical Properties of Composite and Its Effect on Structure Reliability

Authors: S. Zhang, L. Zhang, X. Chen

Abstract:

Due to the large uncertainty in the mechanical properties of FRP (fibre reinforced plastic), the reliability evaluation of FRP structures is currently receiving much attention in industry. However, possible statistical correlation between ply mechanical properties has so far been overlooked, and they are mostly assumed to be independent random variables. In this study, the statistical correlation between the ply mechanical properties of uni-directional and plain weave composites is firstly analyzed by a combination of Monte-Carlo simulation and finite element modeling of the FRP unit cell. Large linear correlation coefficients between the in-plane mechanical properties are observed, and the correlation coefficients are heavily dependent on the uncertainty of the fibre volume ratio. It is also observed that the correlation coefficients related to Poisson’s ratio are negative while the others are positive. To experimentally obtain the statistical correlation coefficients between the in-plane mechanical properties of FRP, all concerned in-plane mechanical properties of the same specimen need to be known. The in-plane shear modulus of FRP is experimentally derived by the approach suggested in the ASTM standard D5379M. Tensile tests are conducted using the same specimens used for the shear test, and, due to non-uniform tensile deformation, a modification factor is derived by finite element modeling. Digital image correlation is adopted to characterize the specimen's non-uniform deformation. The preliminary experimental results show a good agreement with the numerical analysis of the statistical correlation. Then, the failure probability of laminate plates is calculated for cases considering and not considering the statistical correlation, using the Monte-Carlo and Markov Chain Monte-Carlo methods, respectively. The results highlight the importance of accounting for the statistical correlation between ply mechanical properties to achieve an accurate failure probability of laminate plates. Furthermore, it is found that for a multi-layer laminate plate, the statistical correlation between the ply elastic properties significantly affects the laminate reliability, while the effect of statistical correlation between the ply strengths is minimal.
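
The sketch below illustrates the central point of the abstract with a toy limit state: ply modulus and strength are sampled as correlated lognormal variables and the failure probability under a fixed strain is estimated by Monte Carlo, with and without the correlation. The distribution parameters and the limit state are invented for illustration only.

```python
# Minimal sketch: Monte Carlo failure probability P(E1*eps > Xt) with ply
# modulus E1 and strength Xt sampled as correlated lognormal variables.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
mean = np.array([np.log(130e3), np.log(2000.0)])   # ln E1 [MPa], ln Xt [MPa]
cov_E1, cov_Xt, rho = 0.06, 0.08, 0.8              # coefficients of variation, correlation

def failure_prob(rho):
    cov = np.array([[cov_E1**2, rho * cov_E1 * cov_Xt],
                    [rho * cov_E1 * cov_Xt, cov_Xt**2]])
    ln_E1, ln_Xt = rng.multivariate_normal(mean, cov, size=n).T
    E1, Xt = np.exp(ln_E1), np.exp(ln_Xt)
    eps = 0.013                                    # applied strain
    return np.mean(E1 * eps > Xt)                  # fraction of failed samples

print("P_f with correlation     :", failure_prob(rho))
print("P_f assuming independence:", failure_prob(0.0))
```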

Keywords: failure probability, FRP, reliability, statistical correlation

Procedia PDF Downloads 146
777 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa. This is because extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to access to financing, for instance, physical distance, minimum balance requirements and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used as proxies for financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the results for the net effects suggest that digitalization, overall, improves economic growth in Africa. We, therefore, conclude that digitalization in Africa does not only develop the financial sector but also unconditionally contributes to the growth of the continent’s economies.
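
A minimal sketch of the panel estimation described above is given below: a fixed-effects (within) regression of growth on a digitalization proxy and a financial-development proxy, run on simulated country-year data. The variable names mirror the abstract's proxies, but the data and coefficients are synthetic, and the random-effects and Hausman-Taylor steps are omitted.

```python
# Minimal sketch of a fixed-effects (within) growth regression on synthetic
# country-year data; not the study's panel or its estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
countries, years = [f"c{i}" for i in range(20)], range(2000, 2020)
rows = []
for c in countries:
    alpha = rng.normal(0, 1)                        # unobserved country effect
    for t in years:
        digital = rng.normal(5, 1)                  # ICT-equipment imports (log)
        findev = 0.5 * digital + rng.normal(0, 1)   # domestic credit / GDP proxy
        growth = 2 + 0.3 * findev + 0.2 * digital + alpha + rng.normal(0, 1)
        rows.append((c, t, growth, digital, findev))
df = pd.DataFrame(rows, columns=["country", "year", "growth", "digital", "findev"])

# Within transformation: demean every variable by country before OLS
demeaned = df.groupby("country")[["growth", "digital", "findev"]].transform(lambda x: x - x.mean())
X = sm.add_constant(demeaned[["digital", "findev"]])
fe = sm.OLS(demeaned["growth"], X).fit()
print(fe.params)
```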

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 85
776 Long-Term Otitis Media with Effusion and Related Hearing Loss and Its Impact on Developmental Outcomes

Authors: Aleema Rahman

Abstract:

Introduction: This study aims to estimate the prevalence of long-term otitis media with effusion (OME) and hearing loss in a prospective longitudinal cohort study and to examine the relationship between the condition and educational and psychosocial outcomes. Methods: Analysis of data from the Avon Longitudinal Study of Parents and Children (ALSPAC) will be undertaken. ALSPAC is a longitudinal birth cohort study carried out in the UK, which has collected detailed measures of hearing on ~7000 children from the age of seven. A descriptive analysis of the data will be undertaken to estimate the prevalence of OME and hearing loss (defined as having average hearing levels > 20 dB and a type B tympanogram) at 7, 9, 11, and 15 years, as well as that of long-term OME and hearing loss. Logistic and linear regression analyses will be conducted to examine associations between long-term OME and hearing loss and educational outcomes (grades obtained from standardised national attainment tests) and psychosocial outcomes such as anxiety, social fears, and depression at ages 10-11 and 15-16 years. Results: Results will be presented in terms of the prevalence of OME and hearing loss in the population at each age. The prevalence of long-term OME and hearing loss, defined as having OME and hearing loss at two or more time points, will also be reported. Furthermore, any associations between long-term OME and hearing loss and the educational and psychosocial outcomes will be presented. Analyses will take into account demographic factors such as sex and social deprivation and relevant confounders, including socioeconomic status, ethnicity, and IQ. Discussion: Findings from this study will provide new epidemiological information on the prevalence of long-term OME and hearing loss. The research will provide new knowledge on the impact of OME for the small group of children who do not grow out of the condition by age 7 but continue to have hearing loss and need clinical care through later childhood. The study could have clinical implications and may influence service delivery for this group of children.

Keywords: educational attainment, hearing loss, otitis media with effusion, psychosocial development

Procedia PDF Downloads 119
775 Vibration-Based Structural Health Monitoring of a 21-Story Building with Tuned Mass Damper in Seismic Zone

Authors: David Ugalde, Arturo Castillo, Leopoldo Breschi

Abstract:

Tuned Mass Dampers (TMDs) are an effective system for mitigating vibrations in building structures. These dampers have traditionally focused on the protection of high-rise buildings against earthquakes and wind loads. The Cámara Chilena de la Construcción (CChC) building, built in 2018 in Santiago, Chile, is a 21-story RC wall building equipped with a 150-ton TMD and instrumented with six permanent accelerometers, offering an opportunity to monitor the dynamic response of this damped structure. This paper presents the system identification of the CChC building using power spectral density plots of ambient vibration and two seismic events (5.5 Mw and 6.7 Mw). Linear models of the building with and without the TMD are used to compute the theoretical natural periods through modal analysis and to simulate the response of the building through response history analysis. Results show that the natural periods obtained from both ambient vibrations and earthquake records are quite similar to the theoretical periods given by the modal analysis of the building model. Some of the experimental periods are noticeable by simple inspection of the earthquake records. The accelerometers in the first story better captured the modes related to the building podium, while the upper accelerometers clearly captured the modes related to the tower. The earthquake simulation showed smaller accelerations in the model with the TMD, similar to those measured by the accelerometers. It is concluded that the system identification through power spectral density shows consistency with the expected dynamic properties. The structural health monitoring of the CChC building confirms the advantages of seismic protection technologies such as TMDs in seismic-prone areas.
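
The sketch below illustrates the frequency-domain identification used in this paper: a Welch power spectral density of an acceleration record, with spectral peaks read off as natural frequencies and periods. The signal is synthetic (two modes plus noise) rather than the CChC records.

```python
# Minimal sketch: estimate the PSD of a roof acceleration record with Welch's
# method and pick the dominant peaks as natural frequencies/periods.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 300, 1 / fs)
acc = (np.sin(2 * np.pi * 0.55 * t)          # synthetic fundamental mode (~1.8 s)
       + 0.4 * np.sin(2 * np.pi * 2.1 * t)   # synthetic higher mode
       + 0.3 * np.random.default_rng(3).normal(size=t.size))

f, pxx = welch(acc, fs=fs, nperseg=8192)
peaks, _ = find_peaks(pxx, height=0.05 * pxx.max())
print("identified frequencies (Hz):", np.round(f[peaks], 2))
print("identified periods (s):     ", np.round(1 / f[peaks], 2))
```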

Keywords: system identification, tuned mass damper, wall buildings, seismic protection

Procedia PDF Downloads 108
774 The Art of Resilience in the Case of Skopje

Authors: Kristina Nikolovska

Abstract:

Social movements have become common in post-Yugoslav cities, and the wave of activism has been considerably present in Skopje. Starting from 2009, the activist wave in Skopje emerged around the notion of the city. A diversity of initiatives appeared in the city in order to defend places contested by the urban development project SK2014. The activist wave diffused into many different initiatives and a diversity of issues. The result was unification into one massive movement in 2016, called 'The Colourful Revolution'. The paper explores the scope of activism in Skopje, taking into consideration the influence of the spatial transformation brought by the project SK2014. Moreover, it examines the role of spatial processes in shaping the contention in Skopje, focusing on interdisciplinary and comprehensive approaches. Besides a diverse theoretical framework, mainly founded on contentious politics theory and elaborations of space from different perspectives, the study is grounded in fieldwork based on conducted interviews. Using an interdisciplinary approach and focusing on three main dimensions, the research contributes to understanding the dynamics of the activist wave and the importance of spatial processes in the creation of the contention in Skopje. Moreover, it elaborates the characteristics, possible effects, and reflections of the cycles of protest in Skopje. The main results of the research showed that the dynamics of space are important in the creation of the activist wave in Skopje; moreover, the spatial context can help explain how opportunities diffuse and how transformative power is created. The study contributes to a deeper understanding of the importance of spatiality in contentious politics, showing that contentious politics in general can benefit from deeper analyses of place specificity. Finally, the thesis opposes the traditional linear understanding of social movements and proposes a more dynamic, comprehensive, and sensitive elaboration.

Keywords: contentious politics, place, Skopje, SK2014, social movements, space

Procedia PDF Downloads 212
773 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS MOD17A3HGF product; however, the results are only available yearly. To overcome this limitation in data availability, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites where surface mining takes place in the Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered by using two Matlab filters: a first-order filter and a discrete transfer function filter. After filtering the NPP time series, comparing the resulting curves with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the Greenhouse Gas Equivalencies Calculator by the EPA was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks, finding the percentage that the value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years had lower CO2 emissions than limestone, with an average per year of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron. Limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO2 eq emissions per year, a value very similar to that of iron. The importance of gathering such data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
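
As an illustration of the filtering-plus-regression workflow described above, the sketch below smooths a daily band-radiance proxy with a first-order IIR filter and regresses its annual means against annual NPP; both series are synthetic stand-ins for the MODIS products, and the filter coefficient is an assumption.

```python
# Minimal sketch: first-order IIR smoothing of a daily proxy series, then a
# linear regression of its annual means against annual NPP (synthetic data).
import numpy as np
from scipy.signal import lfilter
from scipy.stats import linregress

rng = np.random.default_rng(5)
years = np.arange(2000, 2022)
npp = 0.8 + 0.01 * (years - 2000) + rng.normal(0, 0.03, years.size)     # kgC/m2/yr

daily_band = np.repeat(npp, 365) + rng.normal(0, 0.2, years.size * 365)  # noisy daily proxy

alpha = 0.05                                    # first-order (exponential) filter
smoothed = lfilter([alpha], [1, -(1 - alpha)], daily_band)
annual_band = smoothed.reshape(years.size, 365).mean(axis=1)

fit = linregress(annual_band, npp)
print(f"R^2 = {fit.rvalue**2:.2f}, slope = {fit.slope:.3f}")
```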

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 79
772 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically acquired from regulated experimental trials for solid particles entrained in single-phase or multi-phase flows. Those equations are later employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in streamflow. It is also well known that the particle impact angle and velocity do not change drastically in gas-sand flow erosion prediction; hence an accurate prediction of erosion can be projected. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles’ trajectories/tracks and consequently impact the erosion rate predictions. Particle tracking models and erosion equations are frequently applied simultaneously as a method to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) behavior was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), having a 5% flow concentration, was tracked with the help of the discrete phase model (DPM). To accomplish a successful erosion modeling study, three erosion equations from the literature were utilized and introduced to the ANSYS Fluent software to predict the screen wire-slot velocity surge and estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, the total pressure on the screen, screen wall shear stress, and flow velocity vectors were presented and discussed. Moreover, the particle tracks and path-lines were also demonstrated based on their residence time, velocity magnitude, and flow turbulence. On the one hand, results from the utilized erosion equations showed similarities in screen erosion patterns, locations, and DPM concentrations. On the other hand, the model equations estimated slightly different values of the maximum erosion rates of the wire-wrapped screen. This is solely based on the fact that the utilized erosion equations were developed with some assumptions that are controlled by the experimental lab conditions.

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 145
771 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming

Authors: Rui Li, Min Wen, Kim Bang Salling

Abstract:

For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize the predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function is to minimize the actual costs of the tamping machine. The approach of the research is to use a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating the open track from the station sections. The proposed maintenance model is applied to a Danish railway track between Odense and Fredericia with a length of 42.6 km, for time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with a previous model based on optimizing the number of tamping operations. The different maintenance strategies are discussed in the paper. The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
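
A minimal sketch of the condition-based tamping logic behind the model is shown below: the standard deviation of the longitudinal level degrades over time, tamping is triggered at a speed-class threshold, and the recovery depends on the quality before tamping. All rates, thresholds and costs are illustrative assumptions, not the parameters of the Odense-Fredericia case, and the full 0-1 optimization is replaced here by a simple threshold rule.

```python
# Minimal sketch: linear track degradation, threshold-triggered tamping, and a
# quality-dependent recovery, simulated over a 4-year horizon.
months = range(48)                   # planning horizon in months
sigma = 1.0                          # mm, std of the longitudinal level
degradation_rate = 0.06              # mm per month (assumed)
threshold = 2.2                      # mm, quality limit for the speed class (assumed)
tamping_cost, schedule, total_cost = 5000.0, [], 0.0

for m in months:
    sigma += degradation_rate
    if sigma > threshold:
        recovery = 0.7 * (sigma - 0.8)   # recovery depends on pre-tamping quality
        sigma -= recovery
        schedule.append(m)
        total_cost += tamping_cost

print("tamping months:", schedule)
print("total cost:", total_cost)
```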

Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance

Procedia PDF Downloads 421
770 Investigating the Relationship Between Alexithymia and Mobile Phone Addiction Along with the Mediating Role of Anxiety, Stress and Depression: A Path Analysis Study and Structural Model Testing

Authors: Pouriya Darabiyan, Hadis Nazari, Kourosh Zarea, Saeed Ghanbari, Zeinab Raiesifar, Morteza Khafaie, Hanna Tuvesson

Abstract:

Introduction: Since the emergence of mobile phone addiction, alexithymia, depression, anxiety and stress have been identified as risk factors for internet addiction, so this study was conducted with the aim of investigating the relationship between alexithymia and mobile phone addiction along with the mediating role of anxiety, stress and depression. Materials and methods: In this descriptive-analytical and cross-sectional study in 2022, 412 students of the School of Nursing & Midwifery of Ahvaz Jundishapur University of Medical Sciences were included using the available (convenience) sampling method. The data collection tools were the Demographic Information Questionnaire, the Toronto Alexithymia Scale (TAS-20), the Depression, Anxiety, Stress Scale (DASS-21) and the Mobile Phone Addiction Index (MPAI). Frequencies, the Pearson correlation coefficient test and linear regression were used to describe and analyze the data. Also, structural equation models and the path analysis method were used to investigate the direct and indirect effects, as well as the total effect, of each dimension of alexithymia on mobile phone addiction with the mediating role of stress, depression and anxiety. Statistical analysis was done with SPSS version 22 and Amos version 16 software. Results: Alexithymia was a predictive factor for mobile phone addiction. Also, alexithymia had a positive and significant effect on depression, anxiety and stress. Depression, anxiety and stress had a positive and significant effect on mobile phone addiction. The depression, anxiety and stress variables played the role of partial mediating variables between alexithymia and mobile phone addiction. Alexithymia, through depression, anxiety and stress, also has an indirect effect on mobile phone addiction. Conclusion: Alexithymia is a predictive factor for mobile phone addiction, and the variables of depression, anxiety and stress play the role of partial mediating variables between alexithymia and mobile phone addiction.
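
The sketch below shows the mediation logic tested in this study in its simplest regression form: an indirect effect computed as the product of the alexithymia→anxiety and anxiety→addiction coefficients, alongside the direct effect. The data are simulated, a single mediator is used, and this is not the Amos path model itself.

```python
# Minimal sketch of a single-mediator path analysis (product-of-coefficients
# indirect effect) on simulated data; not the study's survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 412
alexithymia = rng.normal(50, 10, n)
anxiety = 0.4 * alexithymia + rng.normal(0, 8, n)                 # mediator
addiction = 0.2 * alexithymia + 0.5 * anxiety + rng.normal(0, 10, n)

a = sm.OLS(anxiety, sm.add_constant(alexithymia)).fit().params[1]        # X -> M
mm = sm.OLS(addiction, sm.add_constant(np.column_stack([alexithymia, anxiety]))).fit()
c_prime, b = mm.params[1], mm.params[2]                                  # direct, M -> Y

print(f"direct effect c' = {c_prime:.2f}")
print(f"indirect effect a*b = {a * b:.2f}")
```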

Keywords: alexithymia, mobile phone, depression, anxiety, stress

Procedia PDF Downloads 75
769 Quantification of Global Cerebrovascular Reactivity in the Principal Feeding Arteries of the Human Brain

Authors: Ravinder Kaur

Abstract:

Introduction: Global cerebrovascular reactivity (CVR) mapping is a promising clinical assessment for stress-testing the brain using physiological challenges, such as CO₂, to elicit changes in perfusion. It enables real-time assessment of cerebrovascular integrity and health. Conventional imaging approaches solely use steady-state parameters, like cerebral blood flow (CBF), to evaluate the integrity of the resting parenchyma and can erroneously show a healthy brain at rest, despite underlying pathogenesis in the presence of cerebrovascular disease. Conversely, coupling CO₂ inhalation with phase-contrast MRI neuroimaging interrogates the capacity of the vasculature to respond to changes under stress. It shows promise in providing prognostic value as a novel health marker to measure neurovascular function in disease and to detect early brain vasculature dysfunction. Objective: This exploratory study was established to (a) quantify the CBF response to CO₂ in hypocapnia and hypercapnia, (b) evaluate disparities in CVR between the internal carotid (ICA) and vertebral artery (VA), and (c) assess sex-specific variation in CVR. Methodology: Phase-contrast MRI was employed to measure the cerebrovascular reactivity to CO₂ (±10 mmHg). The respiratory interventions were delivered using the prospective end-tidal targeting RespirAct™ Gen3 system. Post-processing and statistical analysis were conducted. Results: In 9 young, healthy subjects, the CBF increased from hypocapnia to hypercapnia in all vessels (4.21±0.76 to 7.20±1.83 mL/sec in the ICA, 1.36±0.55 to 2.33±1.31 mL/sec in the VA, p < 0.05). The CVR was quantitatively higher in the ICA than in the VA (slope of linear regression: 0.23 vs. 0.07 mL/sec/mmHg, p < 0.05). No statistically significant difference in CVR was observed between males and females (0.25 vs 0.20 mL/sec/mmHg in the ICA, 0.09 vs 0.11 mL/sec/mmHg in the VA, p > 0.05). Conclusions: The principal finding of this investigation validated the modulation of CBF by CO₂. Moreover, it indicated that regional heterogeneity in the hemodynamic response exists in the brain. This study provides scope to standardize the quantification of CVR prior to its clinical translation.

Keywords: cerebrovascular disease, neuroimaging, phase contrast MRI, cerebrovascular reactivity, carbon dioxide

Procedia PDF Downloads 127
768 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards, such as earthquakes and tropical storms, are very frequent and highly destructive in Japan. Japan experiences, on average every year, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that address the repair need for local buildings exposed to the hazard; and 4) a financial module addressing policy conditions that estimates the losses incurred as a result. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities. They are based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, then, a set of stochastic events is developed that results in events with intensities corresponding to annual occurrence probabilities that are of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a superstratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel (SRC, Steel-Reinforced Concrete) and high-rise construction.
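
To make the four-module chain concrete, the sketch below strings together toy versions of the modules (event occurrence, attenuation to a local intensity, a damage function, and a simple financial treatment) for a single site and reports an average annual loss and a 1-in-100-year loss. Every function and parameter is invented for illustration and does not reproduce the KCC Japan models.

```python
# Minimal sketch of a stochastic catastrophe loss chain:
# events -> hazard -> vulnerability -> financial, for one site.
import numpy as np

rng = np.random.default_rng(2024)
n_years = 50_000
site_value, deductible = 1.0e8, 1.0e6          # yen, illustrative

# Events module: at most one event per simulated year, magnitude-like intensity
occurs = rng.random(n_years) < 0.08
magnitude = np.where(occurs, rng.gumbel(6.0, 0.5, n_years), 0.0)

# Hazard module: toy attenuation to a local intensity at a fixed distance
distance_km = 40.0
local_intensity = np.where(occurs, magnitude - 2.0 * np.log10(distance_km), 0.0)

# Vulnerability module: damage ratio as a smooth function of local intensity
damage_ratio = np.clip(1.0 / (1.0 + np.exp(-(local_intensity - 3.5) * 2.0)), 0, 1)
damage_ratio[~occurs] = 0.0

# Financial module: ground-up loss minus a deductible, then annual statistics
loss = np.maximum(damage_ratio * site_value - deductible, 0.0)
print("average annual loss:", loss.mean())
print("1-in-100-year loss :", np.quantile(loss, 0.99))
```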

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 245
767 Viscoelastic Characterization of Gelatin/Cellulose Nanocrystals Aqueous Bionanocomposites

Authors: Liliane Samara Ferreira Leite, Francys Kley Vieira Moreira, Luiz Henrique Capparelli Mattoso

Abstract:

The increasing environmental concern regarding plastic pollution worldwide has stimulated the development of low-cost biodegradable materials. Proteins are renewable feedstocks that could be used to produce biodegradable plastics. Gelatin, for example, is a cheap film-forming protein extracted from animal skin and connective tissues from Brazilian livestock residues; thus it has good potential for low-cost biodegradable plastic production. However, gelatin plastics are limited in terms of mechanical and barrier properties. Cellulose nanocrystals (CNC) are efficient nanofillers that have been used to extend the physical properties of polymers. This work was aimed at evaluating the reinforcing efficiency of CNC in gelatin films. Specifically, we have employed continuous casting as the processing method for obtaining the gelatin/CNC bionanocomposites. This required a first rheological study to assess the effect of gelatin-CNC and CNC-CNC interactions on the colloidal state of the aqueous bionanocomposite formulations. CNC were isolated from eucalyptus pulp by sulfuric acid hydrolysis (65 wt%) at 55 °C for 30 min. Gelatin was solubilized in ultra-pure water at 85 °C for 20 min and then mixed with glycerol at 20 wt% and CNC at 0.5 wt%, 1.0 wt% and 2.5 wt%. Rotational measurements were performed to determine the viscosity (η) of the bionanocomposite solutions, which increased with increasing CNC content. At 2.5 wt% CNC, η increased by 118% with respect to the neat gelatin solution, which was ascribed to the formation of a percolating CNC network. The storage modulus (G’) and loss modulus (G″) further determined by oscillatory tests revealed that a gel-like behavior was dominant in the bionanocomposite solutions (G’ > G″) over a broad range of temperature (20 – 85 °C), particularly at 2.5 wt% CNC. These results confirm effective interactions in the aqueous gelatin-CNC bionanocomposites that could substantially increase the physical properties of the gelatin plastics. Tensile tests are underway to confirm this hypothesis. The authors would like to thank FAPESP (process no. 2016/03080-3) for support.

Keywords: bionanocomposites, cellulose nanocrystals, gelatin, viscoelastic characterization

Procedia PDF Downloads 139
766 Response Surface Methodology to Supercritical Carbon Dioxide Extraction of Microalgal Lipids

Authors: Yen-Hui Chen, Terry Walker

Abstract:

As the world experiences an energy crisis, investing in sustainable energy resources is a pressing mission for many countries. Microalgae-derived biodiesel has attracted intensive attention as an important biofuel, and the lipid of the microalga Chlorella protothecoides is recognized as a renewable source for microalgae-derived biodiesel production. Supercritical carbon dioxide (SC-CO₂) is a promising green solvent that may potentially substitute for organic solvents in lipid extraction; however, the efficiency of SC-CO₂ extraction may be affected by many variables, including temperature, pressure and extraction time, individually or in combination. In this study, response surface methodology (RSM) was used to optimize the process parameters, including temperature, pressure and extraction time, for C. protothecoides lipid yield by SC-CO₂ extraction. A second-order polynomial model provided a good fit (R-square value of 0.94) for the C. protothecoides lipid yield. The linear and quadratic terms of temperature, pressure and extraction time, as well as the interaction between temperature and pressure, showed significant effects on lipid yield during extraction. The optimal lipid yield from the model was predicted at a temperature of 59 °C, a pressure of 350.7 bar and an extraction time of 2.8 hours. Under these conditions, the experimental lipid yield (25%) was close to the predicted value. The principal fatty acid methyl esters (FAME) of the C. protothecoides lipid-derived biodiesel were oleic acid methyl ester (60.1%), linoleic acid methyl ester (18.6%) and palmitic acid methyl ester (11.4%), which made up more than 90% of the total FAMEs. In summary, this study indicated that RSM was useful for characterizing and optimizing the SC-CO₂ extraction of C. protothecoides lipids, and the second-order polynomial model could predict and describe the lipid yield very well. In addition, C. protothecoides lipid extracted by SC-CO₂ is suggested as a potential candidate for microalgae-derived biodiesel production.
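
A minimal sketch of the RSM step is given below: a second-order polynomial in temperature, pressure and time is fit by ordinary least squares and its optimum is located numerically within the design bounds. The yield data are simulated around the reported optimum (~59 °C, ~351 bar, ~2.8 h) purely for illustration and are not the experimental measurements.

```python
# Minimal sketch: second-order (quadratic + interaction) response surface fit
# and numerical search for the optimum within the design bounds.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.optimize import minimize

rng = np.random.default_rng(9)
temp = rng.uniform(40, 80, 60); press = rng.uniform(200, 450, 60); time_h = rng.uniform(1, 5, 60)
yield_pct = (25 - 0.01 * (temp - 59)**2 - 0.0002 * (press - 351)**2
             - 1.5 * (time_h - 2.8)**2 + rng.normal(0, 0.5, 60))
df = pd.DataFrame({"temp": temp, "press": press, "time_h": time_h, "y": yield_pct})

rsm = smf.ols("y ~ temp + press + time_h + I(temp**2) + I(press**2) + I(time_h**2)"
              " + temp:press + temp:time_h + press:time_h", df).fit()
print("R-squared:", round(rsm.rsquared, 3))

def neg_yield(x):
    row = pd.DataFrame({"temp": [x[0]], "press": [x[1]], "time_h": [x[2]]})
    return -rsm.predict(row).iloc[0]

opt = minimize(neg_yield, x0=[60, 300, 3], bounds=[(40, 80), (200, 450), (1, 5)])
print("predicted optimum (T, P, t):", np.round(opt.x, 1), "yield:", round(-opt.fun, 1))
```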

Keywords: Chlorella protothecoides, microalgal lipids, response surface methodology, supercritical carbon dioxide extraction

Procedia PDF Downloads 427
765 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. A second question concerns any notable variation in the spatial/sectoral structure of exports and the extent to which it has been induced by the Agadir agreement itself, particularly after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The empirical dataset, covering the period 2000-2009, was designed to account for sector-specific export and intermediate flows; a bilateral structured gravity model was tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo-Maximum Likelihood (PPML) estimator was used to estimate the gravity equation. The methodological approach is threefold. It starts with a hierarchical cluster analysis to classify final export flows showing a certain degree of linkage with each other; the analysis resulted in three main sectoral clusters of exports between the Agadir_4 and the E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare parts. In the second step, export flows from the three clusters that were treated with diagonal rules of origin are compared with an equally comparable untreated control group through a double-differences (difference-in-differences) approach. The third step verifies the results through a propensity score matching robustness check, confirming that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Across this analysis, the interaction term combining the treatment effect and time was significant, at least partially, for 13 of the 17 covered sectors, indicating that treatment with diagonal rules of origin increased Agadir_4 final and intermediate exports to the E.U_26 by 335% on average and changed the structure and composition of Agadir_4 exports to the E.U_26 countries.
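
A minimal sketch of a PPML gravity estimation with a treatment-by-time interaction, in the spirit of the approach described above. The simulated data, variable names and specification (GDP and distance controls rather than full fixed effects) are illustrative assumptions, not the study's dataset.

```python
# Hedged sketch: Poisson pseudo-maximum likelihood gravity regression with a
# treated x post interaction, estimated on simulated bilateral sectoral flows.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "log_gdp_o": rng.normal(26, 1, n),     # log GDP of origin
    "log_gdp_d": rng.normal(24, 1, n),     # log GDP of destination
    "log_dist":  rng.normal(8, 0.3, n),    # log bilateral distance
    "treated":   rng.integers(0, 2, n),    # sector treated with diagonal RoO
    "post":      rng.integers(0, 2, n),    # after the cumulation scheme applies
})
# Simulated export flows with a positive treatment-by-time effect
mu = np.exp(0.8 * df.log_gdp_o + 0.7 * df.log_gdp_d - 1.2 * df.log_dist
            - 26 + 0.5 * df.treated * df.post)
df["exports"] = rng.poisson(mu)

# PPML: Poisson quasi-likelihood on export levels with robust standard errors
model = smf.glm("exports ~ log_gdp_o + log_gdp_d + log_dist + treated * post",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC1")
print(result.summary())
print("Implied % effect of treatment over time:",
      100 * (np.exp(result.params["treated:post"]) - 1))
```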

Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 304
764 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent

Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar

Abstract:

Cr(VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel manufacturing and chromium-based catalyst production, are the major sources of Cr(VI) contamination in the aquatic environment. Cr(VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr(VI) contamination in drinking water causes various hazardous health effects, such as cancer, skin and stomach irritation or ulceration, dermatitis, and damage to the liver, kidneys, circulatory system and nerve tissue. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr(VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol-functionalized graphene oxide (GO/PVA) was prepared and characterized by FTIR, XRD, SEM and Raman spectroscopy. The as-prepared GO/PVA nanosorbent was used for Cr(VI) removal in batch-mode experiments. The process variables, namely contact time, initial Cr(VI) concentration, pH and temperature, were optimized. A maximum Cr(VI) removal of 99.8% was achieved at an initial Cr(VI) concentration of 60 mg/L, pH 2 and a temperature of 35 °C, with equilibrium reached within 50 min. Two widely used isotherm models, Langmuir and Freundlich, were compared using the linear correlation coefficient (R²); the Langmuir model gave the best fit, with a high R² for the present adsorption data, indicating monolayer adsorption of Cr(VI) on the GO/PVA. Kinetic studies using the pseudo-first-order and pseudo-second-order models showed that the chemisorption-based pseudo-second-order model described the kinetics of the present adsorption system better, with a higher correlation coefficient. Thermodynamic studies showed that the adsorption was spontaneous and endothermic in nature.
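
The isotherm comparison described above can be sketched with the standard linearized forms: Ce/qe versus Ce for Langmuir and ln qe versus ln Ce for Freundlich, compared via R². The equilibrium data below are hypothetical placeholders, not the study's measurements.

```python
# Hedged sketch: linearized Langmuir and Freundlich isotherm fits.
import numpy as np

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])      # mg/L at equilibrium
qe = np.array([15.0, 30.0, 48.0, 70.0, 92.0, 105.0])   # mg/g adsorbed

def r_squared(x, y, slope, intercept):
    resid = y - (slope * x + intercept)
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope_L, inter_L = np.polyfit(Ce, Ce / qe, 1)
qmax, KL = 1 / slope_L, slope_L / inter_L
r2_L = r_squared(Ce, Ce / qe, slope_L, inter_L)

# Freundlich: ln qe = (1/n) ln Ce + ln KF
slope_F, inter_F = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(inter_F), 1 / slope_F
r2_F = r_squared(np.log(Ce), np.log(qe), slope_F, inter_F)

print(f"Langmuir   qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg, R²={r2_L:.3f}")
print(f"Freundlich KF={KF:.2f}, n={n:.2f}, R²={r2_F:.3f}")
```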

Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics

Procedia PDF Downloads 379
763 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process

Authors: Shan-Hong Ying, Ying-Fang Wang

Abstract:

A surrogate approach was deployed for assessing exposures to multiple chemicals in a selected working area of coating processes and was applied to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. In the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure total VOC concentrations (CT-VOC) over 6 randomly selected work shifts. Simultaneously, one sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating CT-VOC to the CVOCi of each individual compound via simple regression analysis. The established predictive models were then employed to predict each CVOCi based on the CT-VOC measured with the same type of portable PID in each similar working area. Results show that the predictive models obtained from simple linear regression analyses had R² = 0.83-0.99, indicating that CT-VOC was adequate for predicting CVOCi. To verify the validity of the exposure prediction models, further sampling and analysis of the above chemical substances was carried out, and the correlation between the measured values (Cm) and the predicted values (Cp) was analyzed. A good correlation was found between the predicted and measured values for each chemical substance (R² = 0.83-0.98). Therefore, the surrogate approach can be used to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. However, it is recommended that prediction models be established between the chemical substances used by each coater and the direct-reading PID, which would be more representative of the actual exposure situation and allow more accurate estimation of operators' long-term exposure concentrations.
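
The prediction step described above amounts to a set of simple linear regressions of each CVOCi on CT-VOC. The short sketch below fits one such model and applies it to new PID readings; all concentration values are hypothetical placeholders, not measurements from this study.

```python
# Hedged sketch: simple linear regression of one individual VOC on total VOCs.
import numpy as np
from scipy import stats

ct_voc = np.array([3.1, 5.4, 7.8, 9.2, 12.5, 15.0])    # total VOCs (ppm), PID
c_toluene = np.array([0.5, 0.9, 1.3, 1.6, 2.1, 2.6])   # toluene (ppm), lab analysis

fit = stats.linregress(ct_voc, c_toluene)
print(f"C_toluene ≈ {fit.slope:.3f} * CT_VOC + {fit.intercept:.3f}, "
      f"R² = {fit.rvalue ** 2:.3f}")

# Predict toluene exposure in a similar exposure group from its PID readings
new_ct = np.array([4.0, 8.0, 11.0])
predicted = fit.slope * new_ct + fit.intercept
print("Predicted toluene concentrations (ppm):", np.round(predicted, 2))
```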

Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC

Procedia PDF Downloads 130
762 Balancing a Rotary Inverted Pendulum System Using Robust Generalized Dynamic Inverse: Design and Experiment

Authors: Ibrahim M. Mehedi, Uzair Ansari, Ubaid M. Al-Saggaf, Abdulrahman H. Bajodah

Abstract:

This paper presents a methodology for balancing a rotary inverted pendulum system using Robust Generalized Dynamic Inversion (RGDI) under the influence of parametric variations and external disturbances. In GDI control, dynamic constraints are formulated as an asymptotically stable differential equation that encapsulates the control objectives. The constraint differential equations are based on the deviations of the angular position and its rates from their reference values. The constraint dynamics are inverted using the Moore-Penrose Generalized Inverse (MPGI) to realize the control expression. The GDI singularity problem is addressed by augmenting a dynamic scale factor in the interpretation of the MPGI, which guarantees asymptotically stable position tracking. An additional term based on Sliding Mode Control is appended to the GDI control to make it robust against parametric variations, disturbances, and the tracking performance deterioration caused by generalized inversion scaling. The stability of the closed-loop system is ensured by using a positive-definite Lyapunov energy function that guarantees semi-globally practically stable position tracking. Numerical simulations are conducted on the dynamic model of the rotary inverted pendulum system to analyze the efficiency of the proposed RGDI control law. A comparative study is also presented in which the performance of RGDI control is compared with a Linear Quadratic Regulator (LQR) and verified through experiments. Numerical simulations and real-time experiments demonstrate the better tracking performance and robustness features of RGDI control in the presence of parametric uncertainties and disturbances.
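
The core inversion step can be illustrated with a short sketch: the constraint dynamics are written as A(x) u = b(x) and inverted through a Moore-Penrose pseudoinverse whose computation is augmented with a scale factor so the control stays bounded near singular configurations. The matrices, gains and scale value below are illustrative assumptions standing in for the paper's pendulum model; the actual dynamic scaling law used in RGDI may differ.

```python
# Hedged sketch: a dynamically scaled (regularized) pseudoinverse used to
# solve the constraint relation A u = b, as a stand-in for the MPGI step.
import numpy as np

def dynamically_scaled_pinv(A, scale):
    # A has shape (m, n) with m <= n; the scale factor bounds the inverse
    # near singular configurations (a damped pseudoinverse approximation).
    return A.T @ np.linalg.inv(A @ A.T + scale * np.eye(A.shape[0]))

def gdi_control(A, b, scale=1e-3):
    # Particular solution of A u = b via the scaled generalized inverse
    return dynamically_scaled_pinv(A, scale) @ b

# Example: one scalar constraint acting on a two-input channel (hypothetical)
A = np.array([[0.8, -0.3]])   # constraint row from the linearized dynamics
b = np.array([0.05])          # value enforcing the stable constraint dynamics
u = gdi_control(A, b)
print("control input:", u.ravel())
```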

Keywords: generalized dynamic inversion, lyapunov stability, rotary inverted pendulum system, sliding mode control

Procedia PDF Downloads 156
761 The Structural Behavior of Fiber Reinforced Lightweight Concrete Beams: An Analytical Approach

Authors: Jubee Varghese, Pouria Hafiz

Abstract:

Increased use of lightweight concrete in the construction industry is mainly due to the reduction in the weight of structural elements, which in turn reduces production and transportation costs and the overall project cost. However, the structural application of lightweight concrete is limited by its reduced density. Hence, further investigations are in progress to study the effect of fiber inclusion on improving its mechanical properties. Incorporating structural steel fibers generally enhances the performance of concrete and increases its durability by minimizing its potential for cracking and providing a crack-arresting mechanism. In this research, Geometrically and Materially Non-linear Analysis (GMNA) was conducted for finite element modelling in the software ABAQUS to investigate the structural behavior of lightweight concrete with and without the addition of steel fibers and shear reinforcement. Twenty-one finite element models of beams were created to study the effect of steel fibers based on three main parameters: fiber volume fraction (Vf = 0, 0.5 and 0.75%), shear span-to-depth ratio (a/d of 2, 3 and 4) and the ratio of shear stirrup area to spacing (As/s of 0.7, 1 and 1.6). The models were validated against the experiments conducted by H. K. Kang et al. in 2011. It was seen that fiber-reinforced lightweight concrete can replace fiber-reinforced normal-weight concrete in structural elements. The effect of an increase in steel fiber volume fraction is more dominant for beams with higher shear span-to-depth ratios than for lower ratios. The effect of stirrups in the presence of fibers was negligible; however, they provided extra confinement by reducing crack propagation and added shear resistance compared with beams without stirrups.
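
As a purely illustrative piece of bookkeeping for the parametric study above, the sketch below enumerates the candidate combinations of the three parameters from which the 21 beam models were selected; the full grid has 27 combinations, and the published subset of 21 is not reproduced here.

```python
# Hedged sketch: enumerate the candidate parameter combinations of the study.
from itertools import product

fiber_volume_fraction = [0.0, 0.5, 0.75]   # Vf (%)
shear_span_to_depth = [2, 3, 4]            # a/d
stirrup_area_to_spacing = [0.7, 1.0, 1.6]  # As/s

candidate_models = [
    {"Vf (%)": vf, "a/d": ad, "As/s": as_s}
    for vf, ad, as_s in product(fiber_volume_fraction,
                                shear_span_to_depth,
                                stirrup_area_to_spacing)
]
print(len(candidate_models), "candidate parameter combinations")
for model in candidate_models[:3]:
    print(model)
```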

Keywords: ABAQUS, beams, fiber-reinforced concrete, finite element, light weight, shear span-depth ratio, steel fibers, steel-fiber volume fraction

Procedia PDF Downloads 90
760 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network

Authors: Gloria Patricia Manurung

Abstract:

The Jakarta Bus Rapid Transit (BRT) system, established in 2009 to reduce private vehicle usage and ease rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With private vehicle ownership gradually increasing and road space reduced by BRT lane construction, private vehicle users invade the exclusive BRT lanes, creating local traffic along the BRT network. The cost of travel on invaded BRT lanes becomes the same as on the general road network, making the BRT, which is supposed to be the city's main public transportation, unreliable. Efforts have been made to guard critical lanes against invasion by allocating traffic agents at several intersections, which has improved congestion levels along those lanes. Given a fixed number of traffic agents, this study uses an analytical approach to find the agent deployment strategy on a simplified Jakarta road network that minimizes BRT link cost, which is expected to improve the time reliability of the BRT system. A user-equilibrium traffic assignment model is used to reproduce origin-destination demand flows on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. The main limitation of this method is that traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed heuristic and metaheuristic algorithms show only a linear increase in simulation time and yield BRT costs approaching the brute-force optimum. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.
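
In the spirit of the greedy algorithm listed in the keywords, a deployment heuristic can be sketched as follows: at each step, place one agent at the intersection whose guarding gives the largest reduction in total BRT link cost, re-evaluating the cost after every placement. The cost function below is a hypothetical stand-in; in the study it would correspond to re-running the user-equilibrium traffic assignment with the chosen intersections guarded.

```python
# Hedged sketch: greedy allocation of traffic agents to intersections.
from typing import Callable, FrozenSet, List

def brt_cost(guarded: FrozenSet[str]) -> float:
    # Hypothetical link costs; guarding an intersection is assumed to cut the
    # cost of its adjacent BRT link by 40%. A real evaluation would re-run the
    # user-equilibrium assignment instead.
    base = {"A": 12.0, "B": 9.0, "C": 15.0, "D": 7.0}
    return sum(c * (0.6 if i in guarded else 1.0) for i, c in base.items())

def greedy_allocation(intersections: List[str], n_agents: int,
                      cost_fn: Callable[[FrozenSet[str]], float]) -> FrozenSet[str]:
    guarded: FrozenSet[str] = frozenset()
    for _ in range(n_agents):
        best = min((i for i in intersections if i not in guarded),
                   key=lambda i: cost_fn(guarded | {i}))
        guarded = guarded | {best}
    return guarded

chosen = greedy_allocation(["A", "B", "C", "D"], n_agents=2, cost_fn=brt_cost)
print("Guard intersections:", sorted(chosen), "-> BRT cost:", brt_cost(chosen))
```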

Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization

Procedia PDF Downloads 215
759 Numerical Solution of Portfolio Selecting Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

Semi-infinite programming (SIP) problems belong to non-classical optimization: they are problems in which the number of variables is finite but the number of constraints is infinite. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific goals of investors. The risk of the entire portfolio may be less than the risk of its individual investments. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period on each unit of money invested in stock i (i = 1, ..., N). The goal is to determine the amount xi to be invested in stock i, i = 1, ..., N, such that the end-of-period value yᵀx is maximized, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the one with the best risk-return trade-off that meets the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve the semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying a multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on optimality criteria for quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known in advance. Finally, we apply this algorithm to a specific case of a Colombian bank.
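
A minimal sketch of an outer-approximation loop for a worst-case portfolio formulation, assuming the SIP is posed as maximizing a guaranteed return t subject to t ≤ y(s)·x for every scenario s in an infinite parameter set: a finite LP over the currently included scenarios is solved, the most violated scenario is found by a one-dimensional search and added, and the loop repeats until no constraint is violated. The yield model y(s), the scenario interval and the tolerances are illustrative assumptions; the paper's stochastic Eaves-Zangwill variant with multi-start search is not reproduced here.

```python
# Hedged sketch: outer approximation for a robust (worst-case) portfolio LP.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

N = 3  # number of assets

def yields(s):
    # Hypothetical yield of each asset as a function of scenario s in [0, 1]
    return np.array([1.05 + 0.02 * np.sin(3 * s), 1.08 - 0.05 * s, 1.03 + 0.04 * s])

def solve_finite_lp(scenarios):
    # Variables: [x_1..x_N, t]; maximize t  <=>  minimize -t
    c = np.zeros(N + 1); c[-1] = -1.0
    A_ub = [np.append(-yields(s), 1.0) for s in scenarios]  # t - y(s)·x <= 0
    b_ub = np.zeros(len(scenarios))
    A_eq = [np.append(np.ones(N), 0.0)]                     # budget: sum x = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * N + [(None, None)])
    return res.x[:N], res.x[-1]

scenarios = [0.0, 1.0]           # initial finite subset of the infinite set
for _ in range(20):
    x, t = solve_finite_lp(scenarios)
    # Most violated constraint: scenario minimizing the portfolio yield
    worst = minimize_scalar(lambda s: yields(s) @ x, bounds=(0, 1), method="bounded")
    if yields(worst.x) @ x >= t - 1e-8:
        break                    # no violated constraint: solution is feasible
    scenarios.append(worst.x)

print("weights:", np.round(x, 3), "guaranteed return:", round(t, 4))
```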

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 290