Search results for: offline estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2065

175 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism

Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn

Abstract:

Introduction: Rehabilitation of children with autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with Autistic Spectrum Disorders (ASD). Existing rehabilitation approaches in the treatment of children with autism improve their medico-social and socio-psychological adjustment. Experience with the treatment of different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the need for a complex intensive approach to this condition and for wider implementation of the Kozyavkin method in the treatment of children with ASD. Methods: 19 children aged from 3 to 14 years were examined. They were diagnosed with autism (F84.0) with comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent a two-week rehabilitation course at ICR based on the INRS approach. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games and other methods to improve the motivation and social integration of the child. Efficacy was estimated by questioning the parents twice: at the onset of the INRS rehabilitation course and two weeks afterward. A standardized tool, the Autism Treatment Evaluation Checklist (ATEC), was used for the efficacy assessment because it allows any rehabilitation approach for a child with autism to be assessed. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, indicating severe communication, speech, socialization and behavioral impairments in the examined children. After the end of the rehabilitation course, the mean score was 56.5±6.7, indicating positive dynamics in comparison with the onset of rehabilitation. Overall, improvement of the psychoemotional state occurred in 90% of cases. The most significant changes occurred in speech (16.5 before and 14.5 after treatment), socialization (15.1 before and 12.5 after) and behavior (20.1 before and 17.4 after). Conclusion: A reduction of autistic symptoms was noted as a result of the INRS rehabilitation course. In particular, improvements in speech were observed (children began to pronounce new syllables and words), signs of destructiveness decreased, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect of this study is a further, deeper examination of INRS according to evidence-based medicine standards and an assessment of its usefulness in the treatment of autism and ASD.

Keywords: intensive neurophysiological rehabilitation system (INRS), international clinic of rehabilitation, ASD, rehabilitation

Procedia PDF Downloads 169
174 Digital Image Correlation: Metrological Characterization in Mechanical Analysis

Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano

Abstract:

Digital Image Correlation (DIC) is a recently developed optical technique that is spreading across all engineering sectors because it allows the non-destructive estimation of the deformation of an entire surface without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state is to be known without using strain gauges, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, and it provides high-definition maps of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states are acquired through the cameras. A specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets in both static and dynamic loading conditions by comparison between DIC and strain gauge measurements. In the static test, interesting results have been obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by the DIC is compliant with the result of an FEM simulation. In the dynamic test, the DIC was able to follow with good accuracy the periodic deformation of the specimen, giving results coherent with those given by the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace industry.
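
As a rough illustration of the mutual-correlation step described above (not the software used in the paper), the following Python sketch tracks one speckle subset between a reference and a deformed image by maximizing the zero-normalized cross-correlation over integer pixel shifts; the subset size and search range are arbitrary choices.

```python
# Minimal sketch of the subset-matching step behind 2D DIC (illustrative only):
# track one square subset between a reference and a deformed image by maximizing
# the zero-normalized cross-correlation (ZNCC) over integer shifts.
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, deformed, top_left, size=21, search=10):
    """Return the integer (dy, dx) displacement of the subset starting at top_left."""
    y0, x0 = top_left
    template = ref[y0:y0 + size, x0:x0 + size]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ys, xs = y0 + dy, x0 + dx
            if ys < 0 or xs < 0 or ys + size > deformed.shape[0] or xs + size > deformed.shape[1]:
                continue
            score = zncc(template, deformed[ys:ys + size, xs:xs + size])
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

# Synthetic check: shift a random speckle pattern by (3, -2) pixels and recover it.
rng = np.random.default_rng(0)
ref = rng.random((200, 200))
deformed = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(track_subset(ref, deformed, top_left=(80, 80)))   # expected ((3, -2), ~1.0)
```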

Keywords: accuracy, deformation, image correlation, mechanical analysis

Procedia PDF Downloads 311
173 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day many people die or are disabled or injured on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score which is calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outcomes of the iRAP methodology are the treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied separately as a single countermeasure or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is subject to consistent losses when it is used in combination with other countermeasures; that is, crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP model therefore makes use of multiple-countermeasure adjustment factors to predict reductions in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. A multiple-countermeasure correction factor is calculated for every 100-meter segment and for every accident type. However, limitations of this methodology include a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models have been used to establish relationships between crash frequencies and the factors that affect their rates. Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using The R Project for Statistical Computing showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of road safety multiple countermeasures than the results from the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues.
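
For readers unfamiliar with the regression techniques compared above, the sketch below fits a negative binomial model of fatality counts on simulated data; it is a generic Python illustration with hypothetical column names ('fatalities', 'n_countermeasures', 'aadt'), not the authors' R models.

```python
# Illustrative negative binomial regression of fatality counts on the number of
# countermeasures applied to a road segment plus a traffic-exposure term.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "n_countermeasures": rng.integers(0, 5, n),   # countermeasures per segment (hypothetical)
    "aadt": rng.uniform(1e3, 2e4, n),             # traffic volume as exposure (hypothetical)
})
mu = np.exp(-4 + 0.8 * np.log(df["aadt"]) - 0.25 * df["n_countermeasures"])
df["fatalities"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed counts

model = smf.glm("fatalities ~ np.log(aadt) + n_countermeasures",
                data=df, family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.summary())
# exp(coefficient of n_countermeasures) approximates the multiplicative effect of
# adding one more countermeasure on the expected number of fatalities.
```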

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 240
172 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important to ensure the energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content in biogas, e.g. chiller/heat-exchanger-based cooling, adsorption-based methods such as PSA, or combinations of such approaches. A quite different method of biogas dehumidification is proposed and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in both channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysical model development, numerical calculations and result visualization. An experimental installation on a biogas plant's vertical wall, with two additional layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results. Gas flow at the inlet/outlet, temperatures between the layers and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used to estimate the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling of a defined system configuration with known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
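
A back-of-the-envelope sketch of the condensation effect exploited by the shell layer is given below (Python, Magnus saturation-pressure approximation and air-like gas properties assumed); it is not the authors' MATLAB conjugate heat-transfer model, only an order-of-magnitude illustration of the water removed when saturated biogas is cooled from ~38°C to the shell outlet temperature.

```python
# Rough estimate of the water condensed when saturated biogas cools in the shell.
import numpy as np

def p_sat(t_c):
    """Saturation vapour pressure of water in Pa (Magnus approximation)."""
    return 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))

def absolute_humidity(t_c, p_total=101325.0):
    """Water content of saturated gas, kg water per kg dry gas (air-like gas assumed)."""
    pw = p_sat(t_c)
    return 0.622 * pw / (p_total - pw)   # 0.622 = Mw/Mair; only a rough value for biogas

t_in = 38.0
for t_out in (20.0, 0.0):                # summer / winter shell outlet temperatures
    removed = absolute_humidity(t_in) - absolute_humidity(t_out)
    print(f"cooling {t_in:.0f}->{t_out:.0f} degC removes ~{removed*1000:.1f} g water / kg gas")
```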

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 549
171 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements

Authors: Faustina Masocha, Vusani Moyo

Abstract:

For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies. The same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they may contain important information that can result in changes in share prices, and consequently in the accumulation of abnormal returns, their degree of informativeness is lower than that of other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect of each of these events on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, with data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative. For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around announcement time. The event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, in comparison to dividend announcements, whose effect lasted only until day +3. This supports empirical arguments that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined compared to the previous period, there was an increase in trading volume, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those obtained around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
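
The sketch below illustrates the event-study mechanics described above, a market model fitted over a 20-day estimation window and ARs/CARs computed over a 21-day event window, using synthetic return series; it is an illustration of the general methodology, not the authors' dataset or code.

```python
# Minimal event-study sketch: abnormal returns from a market model.
import numpy as np

def event_study(stock_ret, market_ret, event_idx, est_len=20, half_window=10):
    est = slice(event_idx - half_window - est_len, event_idx - half_window)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)       # market model
    win = slice(event_idx - half_window, event_idx + half_window + 1)  # 21-day window
    expected = alpha + beta * market_ret[win]
    ar = stock_ret[win] - expected                                     # abnormal returns
    return ar, np.cumsum(ar)                                           # ARs and CARs

rng = np.random.default_rng(2)
market = rng.normal(0.0005, 0.01, 250)
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 250)
stock[120] += 0.03                      # pretend an announcement moved the price on day 120
ar, car = event_study(stock, market, event_idx=120)
print("AR on event day:", ar[10], "CAR over window:", car[-1])
```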

Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory

Procedia PDF Downloads 172
170 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. Protein regression is the problem to solve in the first dataset, while variety classification is the problem to solve in the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
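
As an illustration of two of the chemometric preprocessing techniques listed above, the following sketch applies standard normal variate scaling and a Savitzky-Golay derivative filter along the wavelength axis of a toy hyperspectral cube; the array shapes and filter settings are assumptions, not the authors' pipeline.

```python
# SNV and Savitzky-Golay preprocessing applied per pixel spectrum of a toy NIR-HSI crop.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=-1, keepdims=True)
    std = spectra.std(axis=-1, keepdims=True)
    return (spectra - mean) / std

def preprocess(cube, window=11, polyorder=2, deriv=1):
    """SNV followed by an SG first-derivative filter along the wavelength axis."""
    return savgol_filter(snv(cube), window_length=window, polyorder=polyorder,
                         deriv=deriv, axis=-1)

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 224))        # (rows, cols, wavelengths), hypothetical crop size
print(preprocess(cube).shape)           # (64, 64, 224)
```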

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
169 Estimating Understory Species Diversity of West Timor Tropical Savanna, Indonesia: The Basis for Planning an Integrated Management of Agricultural and Environmental Weeds and Invasive Species

Authors: M. L. Gaol, I. W. Mudita

Abstract:

Indonesia is well known as a country covered by lush tropical rain forests, but in fact, in the southeastern part of the country, within the area geologically known as the Lesser Sunda, the dominant vegetation is tropical savanna. The Lesser Sunda is a chain of islands located closer to Australia than to islands in other parts of the country. Among the islands in the chain, the one closest to Australia, and thereby most strongly affected by the hot and dry Australian climate, is the island of Timor, the western part of which belongs to Indonesia while the eastern part is the sovereign state of East Timor. Despite being the dominant vegetation cover, the tropical savanna in West Timor, especially its understory, is rarely investigated. This research was therefore carried out to investigate the structure, composition and diversity of the understory of this tropical savanna as the basis for examining the possibility of introducing other species for various purposes. For this research, 14 terrestrial communities representing the major types of existing savannas in West Timor were selected with the aid of the most recently available satellite imagery. In each community, one stand of 50 m x 50 m most likely representing the community was used as the site of observation for the type of savanna under investigation. At each of the 14 communities, 20 plots of 1 m x 1 m were placed at random to identify understory species, count the total number of individuals and estimate the cover of each species. Based on these counts and estimates, the importance value of each species was calculated. The results indicated that the understory of the savanna in West Timor consisted of 73 species, of which 18 are grasses and 55 are non-grasses. Although fewer in number of species than non-grasses, grasses dominated the savanna as indicated by their number of individuals (65.33% vs 34.67%), species cover (57.80% vs 42.20%), and importance value (123.15 vs 76.85). Across the 14 communities, the lowest grass density was 13.50/m2 and the highest was 417.50/m2. Of the 18 grass species found, all are commonly encountered as agricultural weeds, whereas of the 55 non-grass species, 10 are commonly encountered as agricultural weeds, environmental weeds, or invasive species. For better management of the savanna in the region, these findings provide the basis for planning a more integrated approach to managing such agricultural and environmental weeds as well as invasive species by considering the structure, composition, and species diversity of the understory species existing at each site. These findings also provide the basis for better understanding the flora of the region as a whole and for developing a flora database of West Timor in the future.
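
A minimal sketch of the importance value calculation implied above is given below, combining relative density and relative cover per species; the plot records are invented purely for illustration.

```python
# Importance value per species = relative density (%) + relative cover (%),
# aggregated over the sampled 1 m x 1 m plots (hypothetical data).
import pandas as pd

plots = pd.DataFrame({
    "species": ["grass_A", "grass_A", "forb_B", "forb_B", "grass_C"],
    "individuals": [120, 80, 15, 25, 60],
    "cover_pct": [30.0, 25.0, 5.0, 10.0, 20.0],
})

totals = plots.groupby("species")[["individuals", "cover_pct"]].sum()
rel_density = 100 * totals["individuals"] / totals["individuals"].sum()
rel_cover = 100 * totals["cover_pct"] / totals["cover_pct"].sum()
importance_value = rel_density + rel_cover      # maximum 200 with two components
print(importance_value.sort_values(ascending=False))
```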

Keywords: tropical savanna, understory species, integrated management, weedy and invasive species

Procedia PDF Downloads 135
168 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution and a low sidelobe level in order to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and that its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, and the resulting covariance estimation error causes beam distortion, so that the output pattern cannot form a dot-shaped beam; it also exhibits main-lobe deviation and high sidelobe levels. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, mitigating their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better performance.
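
The sketch below illustrates the general idea of minimum-variance weights computed from a snapshot-limited covariance matrix whose small noise-subspace eigenvalues are flattened before inversion; the exponential correction form (lam**q * mean**(1-q)), the array geometry and all parameters are assumptions for illustration, not the paper's exact correction index.

```python
# Sketch: LCMV/MVDR-style weights with an assumed exponential flattening of the
# small noise-subspace eigenvalues estimated from few snapshots.
import numpy as np

def lcmv_weights(R, a):
    """Minimum-variance weights subject to w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj().T @ Ri_a)

def corrected_covariance(R, n_sources, q=0.3):
    lam, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    noise = lam[:-n_sources]                        # small (noise-subspace) eigenvalues
    noise_corr = noise**q * noise.mean()**(1 - q)   # assumed flattening of their spread
    lam_corr = np.concatenate([noise_corr, lam[-n_sources:]])
    return (V * lam_corr) @ V.conj().T

# Toy example: 8-element array, one source, only 12 snapshots.
rng = np.random.default_rng(3)
n_el, n_snap = 8, 12
a = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.deg2rad(20)))   # steering vector
X = np.outer(a, rng.normal(size=n_snap)) + 0.3 * (rng.normal(size=(n_el, n_snap))
                                                  + 1j * rng.normal(size=(n_el, n_snap)))
R = X @ X.conj().T / n_snap
w = lcmv_weights(corrected_covariance(R, n_sources=1), a)
print(np.abs(w.conj().T @ a))       # unity response towards the look direction
```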

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 130
167 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States

Authors: Tamanna Rimi

Abstract:

A large body of literature has discussed the determinants of the migration decision. However, the potential role of individual risk attitudes in the migration decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. However, migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. Labor market outcomes and the overall economy are largely influenced by migration in many countries; therefore, attitudes towards risk as a determinant of migration should receive more attention in empirical studies. To the author's best knowledge, this is the first study that examines the relationship between relative risk aversion and the migration decision in the US market. This paper considers movement across the United States as a means of migration. In addition, it explores the network effect of the increasing size of one's own ethnic group in a location on the migration decision, and how attitudes towards risk vary with the network effect. Two ethnic groups (Asian and Hispanic) are considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS), and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the 'Two Sample Two-Stage Instrumental Variable (TS2SIV)' technique, similar to Angrist's (1990) and Angrist and Krueger's (1992) 'Two Sample Instrumental Variable (TSIV)' technique. Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics such as age and sex; (iii) people living in places with a higher concentration of households of the same ethnicity are expected to migrate less from their current place; (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings of this paper relating risk attitude, the migration decision and the network effect make a significant contribution to addressing the gap between migration theory and empirical study in the migration literature.
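
For illustration, the sketch below estimates a probit model of a migration indicator on a risk-aversion measure and controls using simulated data; the variable names are hypothetical placeholders for the IPUMS/HRS variables and the TS2SIV-imputed risk-aversion measure used in the study.

```python
# Illustrative probit estimation of a migration decision (hypothetical variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "risk_aversion": rng.normal(0, 1, n),      # imputed relative risk aversion (placeholder)
    "age": rng.integers(25, 65, n),
    "female": rng.integers(0, 2, n),
    "ethnic_share": rng.uniform(0, 0.4, n),    # own-group share at current location
})
latent = (-0.4 * df["risk_aversion"] - 0.02 * df["age"]
          - 1.5 * df["ethnic_share"] + rng.normal(0, 1, n))
df["migrated"] = (latent > latent.mean()).astype(int)

probit = smf.probit("migrated ~ risk_aversion + age + female + ethnic_share", data=df).fit()
print(probit.summary())
print(probit.get_margeff().summary())          # average marginal effects
```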

Keywords: migration, network effect, risk attitude, U.S. market

Procedia PDF Downloads 162
166 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

Industrial application of deep machine learning to classify gamma rays and neutron events is investigated in this study. Identification using convolutional neural networks and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in the feature space improves the possibility of enhancing classification accuracy. Neural networks extract features through nonlinear transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components over time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
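
The following sketch shows the kind of windowed short-time Fourier (spectrogram) preprocessing described above, applied to a synthetic pulse train with added readout noise; the sampling rate, window and pulse shape are assumptions, not the Geant4-simulated CdTe data.

```python
# Spectrogram preprocessing of a detector signal; the log-scaled time-frequency map
# could then be fed to a CNN/RNN classifier.
import numpy as np
from scipy.signal import spectrogram

fs = 1e6                                    # sampling rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(5)
pulses = np.zeros_like(t)
pulses[rng.integers(0, t.size, 200)] = rng.exponential(1.0, 200)   # random pulse heights
signal = np.convolve(pulses, np.exp(-np.arange(300) / 50.0), mode="same")
signal += rng.normal(0, 0.05, t.size)       # readout electronic noise

f, tau, Sxx = spectrogram(signal, fs=fs, window="hann", nperseg=256, noverlap=192)
features = np.log1p(Sxx)                    # log-scaled time-frequency training features
print(features.shape)                       # (frequency bins, time frames)
```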

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
165 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
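
As a highly simplified illustration of imputing missing counts with a Poisson model containing site and year effects (the published method is the penalized low-rank model in the R package 'lori', which this sketch does not reproduce), a minimal Python example might look as follows.

```python
# Toy imputation of missing monitoring counts with a Poisson GLM (site + year effects).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
sites, years = [f"s{i}" for i in range(15)], list(range(2000, 2020))
df = pd.DataFrame([(s, y) for s in sites for y in years], columns=["site", "year"])
df["count"] = rng.poisson(np.exp(3 + 0.03 * (df["year"] - 2000) + rng.normal(0, 0.3, len(df))))
df.loc[rng.random(len(df)) < 0.2, "count"] = np.nan        # 20% missing survey counts

observed = df.dropna()
fit = smf.glm("count ~ C(site) + year", data=observed, family=sm.families.Poisson()).fit()
df["imputed"] = df["count"].fillna(pd.Series(np.asarray(fit.predict(df)), index=df.index))
print(df.head())
```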

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 156
164 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Analysis of stresses plays an important role in the optimization of structures, and prior stress estimation helps in the better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, the usage of composites is greater due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so proper analysis should be carried out on composite joints before they are used in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on the joint strength: the deflection and load sharing of the pin increase, while other parameters such as overall stress, pin stress and contact pressure decrease due to the smaller load carried by the plate material. A material study shows that a material with a higher Young's modulus gives less deflection, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress and contact stress along with the pin bearing load; this increase should be understood properly when increasing the load-carrying capacity of the joint. Generally, every structure is preloaded to increase the compressive stress in the joint and thereby its load-carrying capacity, but for composites the stress increase should be analysed carefully because of delamination and debonding caused by failure of the bond materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All the results are presented with the necessary pictorial plots.

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 334
163 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows the Nonlinear Frequency Response Functions (NLFRFs) to be obtained through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
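
To make the starting point of the procedure concrete, the sketch below computes the linear receptance FRF of a generic two-DOF spring-mass-damper system, the kind of LFRF that the NL-SDMM then updates; the matrix values are illustrative, not those of the paper's examples.

```python
# Linear FRF H(w) = (K - w^2 M + i w C)^-1 of a two-DOF spring-mass-damper system.
import numpy as np

M = np.diag([1.0, 1.5])                                   # masses (kg), illustrative
K = np.array([[2.0e4, -1.0e4], [-1.0e4, 1.0e4]])          # stiffness (N/m), illustrative
C = 1e-4 * K                                              # proportional damping

freqs = np.linspace(1, 50, 500)                           # Hz
H11 = np.array([abs(np.linalg.inv(K - (2 * np.pi * f)**2 * M
                                  + 1j * 2 * np.pi * f * C)[0, 0])
                for f in freqs])                          # driving-point receptance |H11|

# Report the resonance peaks of |H11| as a quick check of the linear baseline.
peaks = freqs[1:-1][(H11[1:-1] > H11[:-2]) & (H11[1:-1] > H11[2:])]
print("resonance frequencies (Hz):", peaks)
```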

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
162 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa

Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji

Abstract:

For many resource-rich developing countries, fiscal policy has become a key tool for short-run fiscal management, since it is considered to play a critical role in injecting part of the resource rents into the economy. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations among a sample of commodity-exporting countries in sub-Saharan Africa (SSA) is investigated. The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is considered as the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of the fiscal deficit to GDP and the ratio of government spending to GDP, the output gap is measured using a Hodrick-Prescott filter of output growth for each country, and commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis. The panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and uses the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide sufficient estimation of the dynamic effects of commodity prices on fiscal policy. The data cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample. This implies that fiscal management in these countries follows the trend of economic performance. Moreover, it is found that fiscal policy has not performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to unstable revenue inflows arising from the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, therefore, the critical suggestions of this study are that fiscal planning should be largely decoupled from commodity revenues, domestic revenue bases must be improved, and longer-term perspectives should be adopted in fiscal policy management.

Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, sub-Saharan Africa

Procedia PDF Downloads 163
161 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information on the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, P < .001). Conclusions: Visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-Tetrofosmin

Procedia PDF Downloads 150
160 Some Quality Parameters of Selected Maize Hybrids from Serbia for the Production of Starch, Bioethanol and Animal Feed

Authors: Marija Milašinović-Šeremešić, Valentina Semenčenko, Milica Radosavljević, Dušanka Terzić, Ljiljana Mojović, Ljubica Dokić

Abstract:

Maize (Zea mays L.) is one of the most important cereal crops and, as such, one of the most significant naturally renewable carbohydrate raw materials for the production of energy and a multitude of different products. The main goal of the present study was to investigate the suitability of selected maize hybrids of different genetic backgrounds, produced at the Maize Research Institute 'Zemun Polje', Belgrade, Serbia, for starch, bioethanol and animal feed production. All the hybrids are commercial, and their detailed characterization is important for the expansion of their different uses. The starches were isolated using a 100-g laboratory maize wet-milling procedure. Hydrolysis experiments were done in two steps (liquefaction with Termamyl SC and saccharification with SAN Extra L). Starch hydrolysates obtained by the two-step hydrolysis of the corn flour starch were subjected to fermentation by S. cerevisiae var. ellipsoideus under semi-anaerobic conditions. Digestibility based on enzymatic solubility was determined by the Aufréré method. All investigated ZP maize hybrids had very different physical characteristics and chemical compositions, which allows various possibilities for their use. The amounts of hard (vitreous) and soft (floury) endosperm in the kernel are considered among the most important parameters that can influence the starch and bioethanol yields. Hybrids with a lower test weight and density and a greater proportion of the soft endosperm fraction had a higher yield, recovery and purity of starch. Among the chemical composition parameters, only the starch content significantly affected the starch yield. Starch yields of the studied maize hybrids ranged from 58.8% in ZP 633 to 69.0% in ZP 808. The lowest bioethanol yield of 7.25% w/w was obtained for hybrid ZP 611k and the highest for hybrid ZP 434 (8.96% w/w). A very significant correlation was determined between kernel starch content and the bioethanol yield, as well as volumetric productivity (48 h) (r=0.66). The results showed that the NDF, ADF and ADL contents in the whole maize plant of the observed ZP maize hybrids varied from 40.0% to 60.1%, 18.6% to 32.1%, and 1.4% to 3.1%, respectively. The difference in the digestibility of the dry matter of the whole plant between hybrids (ZP 735 and ZP 560) amounted to 18.1%. Moreover, the differences in the contents of the lignocellulose fractions affected the differences in dry matter digestibility. From the results it can be concluded that the genetic background of the selected maize hybrids plays an important part in the estimation of the technological value of maize hybrids for various purposes. The obtained results are of exceptional importance for breeding programs and the selection of the maize hybrids potentially most suitable for starch, bioethanol and animal feed production.

Keywords: bioethanol, biomass quality, maize, starch

Procedia PDF Downloads 222
159 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals reported methods based on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples which can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all 3 QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with a low analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 222
158 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to accessing financing, for instance physical distance, minimum balance requirements and low income flows, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 103
157 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically derived from controlled experimental trials on solid particles entrained in single-phase or multi-phase flows. These equations are then employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in a flowing stream. It is well known that the particle impact angle and velocity do not change drastically in gas-sand flows, hence erosion can be predicted accurately in that case. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles' trajectories and consequently impact the erosion rate predictions. Particle tracking models and erosion equations are frequently applied simultaneously as a method to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). To accomplish the erosion modeling, three erosion equations from the literature were utilized and introduced into the ANSYS Fluent software to predict the velocity surge in the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for the turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Moreover, the particle tracks and path lines are also demonstrated based on their residence time, velocity magnitude, and flow turbulence. On the one hand, results from the utilized erosion equations show similarities in screen erosion patterns, locations, and DPM concentrations. On the other hand, the model equations estimated slightly different values of the maximum erosion rates of the wire-wrapped screen. This is because the utilized erosion equations were developed under assumptions governed by the particular experimental laboratory conditions.
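
For orientation, the sketch below evaluates a generic erosion-ratio equation of the form ER = C * f(alpha) * Vp^n often used in such studies; the constants and the impact-angle function are assumed for illustration and are not the three literature equations applied in the paper.

```python
# Generic erosion-ratio equation: constant * angle function * velocity^n (illustrative).
import numpy as np

def erosion_rate(v_impact, angle_deg, C=2.0e-9, n=2.6):
    """Erosion ratio (kg eroded per kg of impacting sand); constants are illustrative."""
    a = np.deg2rad(angle_deg)
    f_alpha = np.sin(a) * (2.0 - np.sin(a))   # assumed impact-angle dependence
    return C * f_alpha * v_impact**n

for angle in (15, 30, 45, 90):
    print(angle, "deg:", erosion_rate(v_impact=10.0, angle_deg=angle))
```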

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 163
156 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Every year, on average, Japan experiences more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair needs of local buildings exposed to the hazard, and 4) a financial module addressing policy conditions to estimate the losses incurred. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a superstratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-lintel Shin and Okabe wood construction, as well as concrete confined in steel (SRC, Steel-Reinforced Concrete) and high-rise buildings.
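
The sketch below shows, with hypothetical numbers, how the four modules combine into an expected (average annual) loss: each stochastic event's annual rate is multiplied by the mean damage ratio from a vulnerability curve and by the exposed replacement value.

```python
# Toy average annual loss (AAL) from a stochastic event set and a vulnerability curve.
from scipy.stats import lognorm

events = [(0.01, 0.6), (0.004, 0.9), (0.05, 0.3), (0.002, 1.2)]   # (annual rate, intensity)

def damage_ratio(intensity, theta=0.8, beta=0.5):
    """Vulnerability curve: lognormal CDF mapping intensity to mean damage ratio."""
    return lognorm.cdf(intensity, s=beta, scale=theta)

replacement_value = 50e6            # exposed replacement value at the site (hypothetical)
aal = sum(rate * damage_ratio(x) * replacement_value for rate, x in events)
print(f"average annual loss: {aal:,.0f}")
```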

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 269
155 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. A further question concerns any notable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir agreement itself, particularly after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The paper's empirical dataset, covering the period 2000-2009, was designed to account for sector-specific final export and intermediate flows; a bilateral structured gravity model was tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo Maximum Likelihood estimator was used to estimate the gravity equation. The methodological approach is threefold. First, a hierarchical cluster analysis classifies final export flows showing a certain degree of linkage with each other; the analysis yielded three main sectoral clusters of exports between the Agadir_4 and E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare parts. Second, the export flows from these three clusters are treated with diagonal rules of origin through a double-differences approach, against an equally comparable untreated control group. Third, the results are verified through a robustness check using propensity score matching, to validate that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. The interaction term combining the treatment effect and time was at least partially significant for 13 of the 17 covered sectors, indicating that treatment with diagonal rules of origin increased Agadir_4 final and intermediate exports to the E.U_26 by 335% on average and changed the structure and composition of Agadir_4 exports to the E.U._26 countries.
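
The double-differences effect reported above can be read off a PPML gravity regression with a treatment-by-period interaction; the sketch below shows one way to set it up in Python, with the column names and input file being hypothetical stand-ins for the study's dataset.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: exports (levels), log_gdp_o, log_gdp_d, log_dist,
# treated (1 = sector flow subject to diagonal rules of origin),
# post (1 = years after the cumulation scheme entered into force).
df = pd.read_csv("agadir_eu_sector_flows.csv")  # hypothetical file

model = smf.glm(
    "exports ~ log_gdp_o + log_gdp_d + log_dist + treated + post + treated:post",
    data=df,
    family=sm.families.Poisson(),   # Poisson pseudo-maximum likelihood
).fit(cov_type="HC0")               # robust standard errors

print(model.summary())
# exp(beta) - 1 on treated:post gives the percentage change in exports
# attributable to treatment with the relaxed (diagonal) rules of origin.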

Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 318
154 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-bus: Feasibility to Promote E-bus Change in High Lands Cities

Authors: Riofrio Jonathan, Fernandez Guillermo

Abstract:

Renewable energy represented around 80% of Ecuador's power generation matrix in 2020, so current public policy focuses on taking advantage of this high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio submitted to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones; in fact, it states that new vehicles for urban and interurban use must be e-buses from 2025 onwards. For a successful implementation of this technological change in the national context, it is important to carry out field surveys covering technical and geographical aspects so as to maintain the quality of service in both the electricity and transport sectors. This research therefore presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the study-case city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of a model for the energy estimation of both technologies, applied specifically to a highland city such as Ambato: the altimetry of the city's most important bus routes varies from 2,557 m a.s.l. at the lowest point to 3,200 m a.s.l. at the highest. These operating conditions lend novelty to this work. The technical specifications of the diesel buses follow the common features of buses registered in Ambato, whereas the specifications of the e-buses are taken from the units most commonly introduced in Latin America, because there is not yet enough evidence from similar cities. The results provide useful input for decision-makers, since electricity demand forecasts, energy savings, costs, and greenhouse gas emissions are computed. GHG accounting is particularly important because it supports reporting under the transparency framework of the Paris Agreement. Finally, the presented results correspond to stage I of the project “Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030”, supported by Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
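
A minimal sketch of the kind of route-level energy model described here appears below; the vehicle parameters, route segments, and efficiencies are illustrative assumptions rather than the specifications used for Ambato's fleet.

import numpy as np

G, RHO_AIR = 9.81, 0.90        # gravity; approximate air density near 2,800 m a.s.l.

def segment_energy_kwh(dist_m, elev_gain_m, speed_m_s,
                       mass_kg=18000.0, crr=0.008, cd_a=6.5,
                       eta_drive=0.90, eta_regen=0.60, electric=True):
    """Tractive energy for one route segment from rolling, aerodynamic and grade
    resistance; an e-bus recovers part of any negative (downhill) wheel work."""
    f_roll = crr * mass_kg * G
    f_aero = 0.5 * RHO_AIR * cd_a * speed_m_s ** 2
    e_grade = mass_kg * G * elev_gain_m            # may be negative downhill
    e_wheel = (f_roll + f_aero) * dist_m + e_grade
    if e_wheel >= 0:
        e_batt = e_wheel / eta_drive
    else:
        e_batt = e_wheel * eta_regen if electric else 0.0
    return e_batt / 3.6e6                          # J -> kWh

# Hypothetical route: segment lengths (m), elevation gains (m), mean speeds (m/s)
dist = np.array([800, 1200, 600, 1500.0])
gain = np.array([25, -40, 60, -10.0])
speed = np.array([6.0, 8.0, 5.0, 7.0])

e_bus = sum(segment_energy_kwh(d, g, v) for d, g, v in zip(dist, gain, speed))
print(f"e-bus battery energy for the route ~ {e_bus:.2f} kWh")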

Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility

Procedia PDF Downloads 151
153 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India

Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma

Abstract:

Mizoram state has the highest lung cancer incidence rate in India, attributed to its high consumption of tobacco and tobacco products and compounded by local food habits. While smoking is mainly responsible for this incidence, the effect of inhaling indoor radon gas cannot be discounted: the hazardous nature of this radioactive gas and its progeny for human populations is well established worldwide, and radiation damage to bronchial cells can make radon the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it can still carry a risk of cancer incidence. Hence, estimation of indoor radon concentration is important to provide a useful reference for radiation effects, to establish safety measures, and to create a baseline for further case-control studies. Indoor radon/thoron concentrations in Mizoram were measured during 2015-2016 in 41 dwellings selected on the basis of spot gamma background radiation and house construction type. The dwellings were monitored for one year in 4-month cycles, to capture seasonal variation, for indoor radon and its progeny as well as outdoor and indoor gamma dose. A time-integrated method using Solid State Nuclear Track Detector (SSNTD)-based single-entry pin-hole dosimeters was used to measure indoor radon/thoron concentration. Indoor and outdoor gamma dose measurements were carried out using Geiger-Muller survey meters, and the seasonal variation of indoor radon/thoron concentration was monitored. The results show that annual average radon concentrations varied from 54.07 to 144.72 Bq/m³, with an overall average of 90.20 Bq/m³, and annual average thoron concentrations varied from 17.39 to 54.19 Bq/m³, with an average of 35.91 Bq/m³, both below the permissible limit. The spot survey of gamma background radiation gave values between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, all within acceptable limits. From these results there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. Finding epidemiological evidence linking natural radiation to the high cancer incidence would require a case-control study, which is beyond the scope of this work; however, the measurements provide baseline data for further studies.

Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation

Procedia PDF Downloads 143
152 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because the extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to accessing finance, for instance physical distance, minimum balance requirements, and low income flows, can be circumvented; savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa with a focus on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of financial sector development, proxied by domestic credit to the private sector and by stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth proxies economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, digitalization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net effects suggest that digitalization, overall, improves economic growth. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
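
To make the estimation step concrete, the sketch below implements a plain fixed-effects (within) estimator on a country-year panel; the column names and data file are assumptions, and the Hausman-Taylor variant used in the paper is not reproduced.

import numpy as np
import pandas as pd

# Hypothetical panel: columns country, year, gdp_growth, digitization,
# private_credit_gdp, mktcap_gdp
df = pd.read_csv("africa_panel.csv")

y_col = "gdp_growth"
x_cols = ["digitization", "private_credit_gdp", "mktcap_gdp"]

# Within transformation: demean every variable by country to sweep out
# time-invariant country effects before running least squares.
cols = [y_col] + x_cols
demeaned = df[cols] - df.groupby("country")[cols].transform("mean")

X = demeaned[x_cols].to_numpy()
y = demeaned[y_col].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(x_cols, beta):
    print(f"{name:>20s}: {b: .4f}")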

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 140
151 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV, and on the Internet contain a great deal of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how information read in text messages is interpreted and understood, and there is no reliable method to determine the degree of understanding of the information contained in a text. Merely viewing a text does not mean that the consumer has perceived and understood its meaning, while marketing-analysis tools allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method for recording objective indicators in real time to assess both the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis of such an objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye tracking to record eye movement parameters and estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin galvanic reaction, SGR) reflecting the emotional state of the respondent during reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters while reading the text in commercials. Eye movement parameters reflected the difficulties respondents experienced while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters across types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, the results indicate the possibility of designing an integral model to estimate comprehension of a commercial's text on a percentage scale based on all the observed markers.
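
The correlation analysis can be illustrated with a short sketch relating per-respondent markers to comprehension scores; the numbers below are invented for illustration and are not the study's recordings.

import numpy as np
from scipy.stats import pearsonr

# One value per respondent (hypothetical): mean fixation duration (ms),
# relative alpha-band EEG power, SGR peak count, comprehension score (0-100).
fixation_ms = np.array([210, 245, 198, 260, 230, 275, 205, 250])
alpha_power = np.array([0.32, 0.27, 0.35, 0.24, 0.30, 0.22, 0.34, 0.26])
sgr_peaks = np.array([3, 6, 2, 7, 4, 8, 3, 5])
comprehension = np.array([82, 64, 88, 55, 74, 49, 85, 61])

for name, marker in [("fixation duration", fixation_ms),
                     ("alpha power", alpha_power),
                     ("SGR peaks", sgr_peaks)]:
    r, p = pearsonr(marker, comprehension)
    print(f"{name:>18s}: r = {r: .2f}, p = {p:.3f}")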

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
150 Anaerobic Co-Digestion of Pressmud with Bagasse and Animal Waste for Biogas Production Potential

Authors: Samita Sondhi, Sachin Kumar, Chirag Chopra

Abstract:

The increase in population has resulted in excessive feedstock production, which has in turn led to the accumulation of large amounts of waste from different sources such as crop residues, industrial waste, and municipal solid waste; this has made waste disposal a pressing problem. A parallel problem, the depletion of fossil fuel resources, has motivated the generation of alternative energy from the waste of different industries so as to address both issues concurrently. Biogas is a carbon-neutral fuel with applications in transportation, heating, and power generation. India has an agriculture-based economy, and agro-residues are a significant source of organic waste. The sugarcane industry, the country's second largest agro-based industry, produces large quantities of sugar along with waste byproducts such as bagasse, press mud, vinasse, and wastewater, for which no efficient disposal methods are currently adopted at large scale. From a sustainability standpoint, anaerobic digestion can be considered as a method to treat these organic wastes. Press mud is a lignocellulosic biomass and cannot simply be accumulated for mono-digestion because of its complexity: prior investigations indicated that it has potential for biogas production, but because of its biological and elemental complexity, mono-digestion was not successful. Owing to its imbalanced C/N ratio and its wax content, press mud should be co-digested with another fibrous material so that it can be digested properly under suitable conditions. In the first batch experiment, mono-digestion of press mud yielded low biogas production. Co-digestion of press mud with bagasse, which has the desired C/N ratio, will now be performed to optimize the mixing ratio for maximum biogas from press mud, as sketched after this abstract. In addition, with respect to sustainability, the main considerations are the monetary value of the product and environmental concerns. The work is designed so that the waste from the sugar industry will be digested for maximum biogas generation, and the digestate remaining after digestion will be characterized for use as a bio-fertilizer for soil conditioning. Given the effectiveness demonstrated by the studied mono-digestion and co-digestion setups, this approach can be considered a viable alternative for lignocellulosic waste disposal and for agricultural applications. Biogas produced from press mud can be used either for power generation or for transportation. In addition, the work initiated towards developing waste disposal for energy production will help demonstrate the economic sustainability of the process.
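
The C/N balancing behind co-digestion design amounts to a simple mixing calculation; the sketch below solves for the press mud to bagasse mass ratio that reaches a target C/N, using placeholder carbon and nitrogen contents rather than measured compositions from this study.

def blend_fraction(target_cn, c1, n1, c2, n2):
    """Mass fraction x of substrate 1 (press mud) so that the mixture's
    C/N = (x*c1 + (1-x)*c2) / (x*n1 + (1-x)*n2) equals target_cn."""
    num = target_cn * n2 - c2
    den = (c1 - c2) - target_cn * (n1 - n2)
    return num / den

# Hypothetical dry-basis contents (% C, % N); not measured values
press_mud_c, press_mud_n = 35.0, 2.0   # C/N ~ 17.5 (assumed)
bagasse_c, bagasse_n = 45.0, 0.4       # C/N ~ 112 (assumed)

x = blend_fraction(target_cn=28.0, c1=press_mud_c, n1=press_mud_n,
                   c2=bagasse_c, n2=bagasse_n)
print(f"press mud fraction ~ {x:.2f}, bagasse fraction ~ {1 - x:.2f}")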

Keywords: anaerobic digestion, carbon neutral fuel, press mud, lignocellulosic biomass

Procedia PDF Downloads 169
149 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, driven by the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and inputs for flood modelling and prediction; in one study, even with lumped models for flood forecasting, a proper gauge network significantly improved the results. The existing rainfall network in Johor must therefore be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used in this study as the optimization algorithm to obtain the optimal number and locations of rain gauges. Rain-gauge network structure does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data for the monsoon season (November-February) over the period 1975-2008. Three semivariogram models (spherical, Gaussian, and exponential) were used and their performances compared; cross-validation was applied to compute the errors, and the exponential model proved to be the best semivariogram. It was found that the requirements were satisfied by a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Two cases were therefore considered. In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the estimated variance; in the second case, the possibility of relocating all 84 existing stations to new locations was explored to determine the optimal configuration. In both cases, the new optimal locations reduced the estimated variance, confirming that station location plays an important role in determining the optimal network.
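
The sketch below illustrates the combination of an exponential semivariogram with a simulated-annealing search over candidate gauge subsets; it uses a simplified proxy for the estimation variance and synthetic coordinates, not the actual Johor network or the full variance-reduction formulation.

import math
import random

def exp_semivariogram(h, nugget=0.05, sill=1.0, range_km=40.0):
    # Exponential model: gamma(h) = nugget + (sill - nugget) * (1 - exp(-h / range))
    return nugget + (sill - nugget) * (1.0 - math.exp(-h / range_km))

def objective(selected, grid, stations):
    # Proxy for estimation variance: mean semivariance from each grid point
    # to its nearest selected gauge (smaller is better).
    total = 0.0
    for gx, gy in grid:
        h = min(math.hypot(gx - stations[i][0], gy - stations[i][1]) for i in selected)
        total += exp_semivariogram(h)
    return total / len(grid)

random.seed(1)
stations = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(84)]  # candidate sites (km)
grid = [(x, y) for x in range(5, 100, 10) for y in range(5, 100, 10)]             # estimation grid
n_keep = 64

current = random.sample(range(len(stations)), n_keep)
current_val = objective(current, grid, stations)
best_val = current_val
temperature = 0.5
for _ in range(500):
    candidate = list(current)
    swap_in = random.choice([i for i in range(len(stations)) if i not in candidate])
    candidate[random.randrange(n_keep)] = swap_in
    val = objective(candidate, grid, stations)
    # Accept improvements always, worse moves with a temperature-dependent probability
    if val < current_val or random.random() < math.exp(-(val - current_val) / temperature):
        current, current_val = candidate, val
        best_val = min(best_val, current_val)
    temperature *= 0.99
print(f"mean semivariance proxy with {n_keep} gauges: {best_val:.3f}")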

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 302
148 In situ Stabilization of Arsenic in Soils with Birnessite and Goethite

Authors: Saeed Bagherifam, Trevor Brown, Chris Fellows, Ravi Naidu

Abstract:

Over the last century, rapid urbanization, industrial emissions, and mining activities have resulted in widespread contamination of the environment by heavy metal(loid)s. Arsenic (As) is a toxic metalloid belonging to group 15 of the periodic table; it occurs naturally at low concentrations in soils and the earth's crust, although concentrations can be significantly elevated in natural systems as a result of dispersion from anthropogenic sources such as mining activities. Bioavailability is the fraction of a contaminant in soil that is available for uptake by plants, food chains, and humans, and therefore presents the greatest risk to terrestrial ecosystems. Numerous attempts have been made to establish in situ and ex situ technologies for the remediation of arsenic-contaminated soils. In situ stabilization techniques rely on deactivation or chemical immobilization of metalloid(s) in soil by means of soil amendments, which reduce the bioavailability (for biota) and bioaccessibility (for humans) of metalloids through the formation of low-solubility products or precipitates. This study investigated the effectiveness of two synthetic manganese and iron oxides (birnessite and goethite) for the stabilization of As in a soil spiked with 1000 mg kg⁻¹ of As and treated with 10% dosages of the amendments. Birnessite was prepared using HCl and KMnO₄, and goethite was synthesized by the dropwise addition of KOH into Fe(NO₃)₃ solution. The resulting contaminated soils were subjected to a series of chemical extraction studies, including sequential extraction (BCR method), single-step extractions with distilled (DI) water and 2 M HNO₃, and the simplified bioaccessibility extraction test (SBET), to estimate the bioaccessible fractions of As in two soil size fractions (< 250 µm and < 2 mm). Concentrations of As in the extracts were measured using inductively coupled plasma mass spectrometry (ICP-MS). The results showed that the birnessite amendment reduced the bioaccessibility of As by up to 92% in both size fractions. Furthermore, the single-step extractions showed that the application of birnessite and goethite reduced the DI-water- and HNO₃-extractable amounts of arsenic by 75, 75, 91, and 57%, respectively. The sequential extraction studies showed that both birnessite and goethite dramatically reduced the exchangeable fraction of As in the soils, while the recalcitrant fractions were higher in the birnessite- and goethite-amended soils. Overall, the application of both birnessite and goethite significantly reduced the bioavailability and the exchangeable fraction of As in the contaminated soils, so these amendments may be considered promising adsorbents for the stabilization and remediation of As-contaminated soils.
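
The bioaccessibility comparison reduces to simple arithmetic on the SBET-extractable and total As concentrations; the sketch below shows that calculation with illustrative values chosen to reproduce a reduction of about 92%, not the measured concentrations from this study.

def bioaccessible_fraction(sbet_mg_kg, total_mg_kg):
    # Bioaccessible fraction = SBET-extractable As / total As in the soil
    return sbet_mg_kg / total_mg_kg

control = bioaccessible_fraction(sbet_mg_kg=420.0, total_mg_kg=1000.0)  # unamended soil (assumed)
amended = bioaccessible_fraction(sbet_mg_kg=34.0, total_mg_kg=1000.0)   # 10% birnessite (assumed)

reduction = 100.0 * (control - amended) / control
print(f"bioaccessible As: control {control:.0%}, amended {amended:.1%}; reduction {reduction:.0f}%")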

Keywords: arsenic, bioavailability, in situ stabilisation, metalloid(s) contaminated soils

Procedia PDF Downloads 135
147 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on estimates of rock strength at several stages of a mine's life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design, which can yield significant dividends, often requires a reliable estimate of the rock strength. Common laboratory tests such as the rod mill, ball mill, and uniaxial compressive strength tests share shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the amount of sample required to ensure reliable estimates. In this paper, the authors present a methodology for deriving an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work builds on the phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity). It also introduces the intrinsic specific energy, i.e., the energy required to drill a unit volume of rock with an ideally sharp drilling tool (ideally sharp diamonds and no contact between the bit matrix and rock debris), a quantity found to correlate well with the rock uniaxial compressive strength for PDC and roller-cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology used to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate rock strength from field drilling data, taking into account the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study using drilling data recorded while drilling an exploration well in Australia.
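
As a stand-in for the intrinsic specific energy, the sketch below computes Teale's classic mechanical specific energy from thrust, torque, rotary speed, and rate of penetration over the kerf area of a core head; the Franca et al. (2015) model and its sharpness corrections are not reproduced, and the drilling record is hypothetical.

import math

def mechanical_specific_energy(thrust_n, torque_nm, rpm, rop_m_per_h, od_m, id_m):
    """Energy per unit volume of rock drilled (Pa = J/m^3), Teale's form:
    MSE = W/A + 2*pi*N*T / (A*V), over the annular (kerf) area of the crown."""
    area = math.pi / 4.0 * (od_m ** 2 - id_m ** 2)   # annulus cut by the diamond crown
    n_rev_s = rpm / 60.0
    v_m_s = rop_m_per_h / 3600.0
    return thrust_n / area + (2.0 * math.pi * n_rev_s * torque_nm) / (area * v_m_s)

# Hypothetical coring record: 8 kN thrust, 45 N.m torque, 600 rpm, 6 m/h ROP,
# 75.7 mm hole and 47.6 mm core (roughly NQ-size geometry).
mse = mechanical_specific_energy(8000.0, 45.0, 600.0, 6.0, 0.0757, 0.0476)
print(f"specific energy ~ {mse / 1e6:.0f} MPa")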

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 137
146 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco

Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi

Abstract:

In Morocco, as in most Mediterranean countries, water scarcity is common because of low and unevenly distributed rainfall. The expansion of irrigated land, together with the growth of urban and industrial areas and tourist resorts, contributes to an increasing water demand. In the 1960s, Morocco therefore embarked on an ambitious program of dam construction to boost water retention capacity. However, the loss of reservoir capacity caused by sedimentation is a major problem, estimated at 75 million m³/year: dams and reservoirs become unusable for their intended purposes owing to sedimentation in large rivers resulting from soil erosion. Soil erosion is an important driving force shaping the landscape and has become one of the most serious environmental problems worldwide. Monitoring soil erosion risk is an important part of soil conservation practice, and the estimation of soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate the soil loss risk and its spatial distribution across the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) applied with remote sensing and GIS, the most widely used empirically based model for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly in large basins, because of the simplicity of its formulation and the easy availability of the required datasets. The spatial distribution of annual soil loss was derived by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, indicating that topography is the main driver of soil erosion within these catchments. These results could be helpful for planning natural resource management and for implementing the sustainable long-term management strategies needed for soil conservation and for extending the projected economic life of the dams.
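
The RUSLE overlay itself is a cell-by-cell product of factor rasters, A = R x K x LS x C x P; the sketch below applies it to small arrays standing in for the GIS layers, with all values illustrative rather than taken from the Moroccan datasets.

import numpy as np

R = np.array([[60.0, 75.0], [90.0, 120.0]])   # rainfall erosivity (MJ.mm/ha.h.yr)
K = np.array([[0.30, 0.28], [0.35, 0.32]])    # soil erodibility (t.ha.h/ha.MJ.mm)
LS = np.array([[0.8, 1.5], [3.2, 6.0]])       # slope length and steepness factor
C = np.array([[0.25, 0.15], [0.40, 0.20]])    # land-cover management factor
P = np.ones_like(R)                           # support-practice factor (none assumed)

A = R * K * LS * C * P                        # annual soil loss per cell (t/ha/yr)
print("annual soil loss (t/ha/yr):")
print(np.round(A, 1))
print(f"max cell loss: {A.max():.1f} t/ha/yr")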

Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco

Procedia PDF Downloads 461