Search results for: filtering and estimation
156 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)
Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar
Abstract:
Erosion modeling equations are typically derived from controlled experimental trials with solid particles entrained in single-phase or multi-phase flows. These equations are later employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in a stream. It is well known that the particle impact angle and velocity do not change drastically in gas-sand flows, so erosion there can be predicted with good accuracy. On the contrary, high-density fluid flows such as water, moving through complex geometries such as sand screens, greatly affect the sand particles’ trajectories and consequently the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied together to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated with the realizable k-epsilon turbulence model, and the secondary phase (solid particles, at 5% flow concentration) was tracked with the discrete phase model (DPM). Three erosion equations from the literature were implemented in ANSYS Fluent to predict the velocity surge through the screen wire slots and to estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. The particle tracks and path-lines are also shown in terms of residence time, velocity magnitude, and flow turbulence. On one hand, the three erosion equations produced similar screen erosion patterns, locations, and DPM concentrations.
On the other hand, the equations estimated slightly different maximum erosion rates for the wire-wrapped screen, because each erosion equation was developed under assumptions tied to the particular conditions of the experiments used to calibrate it.
Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow
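The erosion-equation approach described in this abstract can be sketched as follows. Since the three literature equations are not named here, this illustration uses a generic Finnie-type impact-angle dependence with hypothetical material constants `c` and `n`; it is a sketch of the general technique, not the paper's specific models:

```python
import math

def finnie_angle_function(theta_rad):
    """Two-branch impact-angle dependence of a Finnie-type erosion model:
    cutting dominates at shallow angles, deformation at steep angles."""
    limit = math.atan(1.0 / 3.0)  # branch point, ~18.4 degrees
    if theta_rad <= limit:
        return math.sin(2.0 * theta_rad) - 3.0 * math.sin(theta_rad) ** 2
    return (math.cos(theta_rad) ** 2) / 3.0

def erosion_rate(mass_flux, velocity, theta_rad, c=2e-9, n=2.6):
    """Erosion rate (mass of target removed per unit area and time) for
    particles striking at `velocity` (m/s) and angle `theta_rad`.
    `c` and `n` are material constants fitted in lab tests; the values
    here are hypothetical placeholders."""
    return mass_flux * c * velocity ** n * finnie_angle_function(theta_rad)
```

Under this form a shallow 15° impact on a ductile target erodes far more than a normal 90° impact, and the rate grows faster than quadratically with particle velocity, which is why the wire-slot velocity surge matters.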
Procedia PDF Downloads 162
155 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones per year that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance and reinsurance companies, NGOs, and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models consist of four modular segments: 1) a stochastic event set that reproduces the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that relate the hazard to the repair need of exposed buildings, and 4) a financial module that applies policy conditions to estimate the resulting losses. The event module comprises events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions relating hazard intensity to repair need as a percentage of replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions of similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields intensities corresponding to the annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc.
The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions, followed by vulnerability models, lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (Steel-Reinforced Concrete), and high-rise buildings.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
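The event-set logic of such a catastrophe model can be sketched in a few lines: each stochastic event carries an annual occurrence rate and a modelled loss, from which the expected annual loss and the loss at a given exceedance probability (e.g. 0.01, the "100-year" loss) follow. The event set below is invented purely for illustration:

```python
def expected_annual_loss(event_set):
    """Expected annual loss from a stochastic event set, where each
    event is a (annual_rate, modelled_loss) pair."""
    return sum(rate * loss for rate, loss in event_set)

def loss_at_probability(event_set, annual_prob):
    """Smallest modelled loss whose annual exceedance probability is at
    least `annual_prob` (e.g. 0.01 -> '100-year' loss), assuming rare,
    independent events so that rates approximate probabilities."""
    cumulative = 0.0
    for rate, loss in sorted(event_set, key=lambda e: -e[1]):
        cumulative += rate
        if cumulative >= annual_prob:
            return loss
    return 0.0
```

The financial module would then apply deductibles, limits, and policy terms to these ground-up losses.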
Procedia PDF Downloads 268
154 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed
Authors: Esmat Kamel
Abstract:
This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries’ exports to the rest of the world. The next question concerns any remarkable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir Agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific final export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin; and the Poisson Pseudo-Maximum Likelihood estimator was used to estimate the gravity equation. The methodological approach is threefold. It starts with a hierarchical cluster analysis to classify final export flows showing a certain degree of linkage between each other; the analysis resulted in three main sectoral clusters of exports between the Agadir_4 and E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare-parts sectors. Second, the export flows from the three clusters are treated with diagonal rules of origin through a double-differences approach, against an equally comparable untreated control group. Third, the results are verified through a robustness check using propensity score matching, to validate that the same sectoral final export and intermediate flows increased when rules of origin were relaxed.
Across this analysis, the interaction term combining the treatment effect and time was significant for 13 of the 17 covered sectors, indicating that treatment with diagonal rules of origin increased Agadir_4 final and intermediate exports to the E.U._26 by 335% on average and changed the structure and composition of Agadir_4 exports to the E.U._26 countries.
Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin
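The PPML estimation at the core of this gravity specification can be sketched with a hand-rolled Newton-Raphson fit of the Poisson first-order conditions. The regressors and coefficients below are synthetic stand-ins, not the paper's trade data:

```python
import numpy as np

def ppml(X, y, iters=50):
    """Poisson pseudo-maximum-likelihood (PPML) estimator for a gravity
    equation y_i = exp(x_i . beta), fitted by Newton-Raphson on the
    Poisson score. X must include a column of ones for the constant.
    PPML only requires the conditional mean to be correctly specified,
    so y may be continuous trade flows, including zeros."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)           # gradient of the log-likelihood
        info = X.T @ (X * mu[:, None])   # Fisher information
        beta = beta + np.linalg.solve(info, score)
    return beta
```

In practice the design matrix would hold log distance, treatment, time, and the treatment-by-time interaction whose coefficient carries the double-differences effect.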
Procedia PDF Downloads 315
153 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-bus: Feasibility to Promote E-bus Change in High Lands Cities
Authors: Riofrio Jonathan, Fernandez Guillermo
Abstract:
Renewable energy represented around 80% of Ecuador’s power-generation matrix during 2020, so current public policy focuses on taking advantage of the high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones; in fact, it states that new vehicles for urban and interurban use must be e-buses from 2025 onward. For a successful implementation of this technological change in the national context, it is important to carry out surveys of the technical and geographical conditions so as to maintain the quality of service in both the electricity and transport sectors. This research therefore presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes a model for the energy estimation of both technologies applied to a highland city such as Ambato: the altimetry of the most important bus routes in the city varies from 2557 to 3200 m a.s.l. between the lowest and highest points, and these operating conditions lend a degree of novelty to this paper. The technical specifications of the diesel buses follow the common features of buses registered in Ambato, while the specifications for the e-buses come from the units most commonly introduced in Latin America, because there is not yet enough evidence from similar cities.
The results will be useful input for decision-makers, since electricity demand forecasts, energy savings, costs, and greenhouse gas emissions are computed. GHG estimates are important because they support the transparency-framework reporting that is part of the Paris Agreement. Finally, the presented results correspond to stage I of the project “Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030”, supported by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility
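A minimal longitudinal-dynamics sketch of the kind of segment-level energy estimate such a model rests on is shown below. All parameter values (bus mass, rolling resistance, drag area, drivetrain efficiency, and the air density reduced to roughly 0.9 kg/m³ for Ambato's altitude) are illustrative assumptions, not the paper's calibrated inputs:

```python
import math

def traction_energy_kwh(mass_kg, distance_m, grade, speed_ms,
                        crr=0.008, cd_a=6.0, rho=0.9, eff=0.85):
    """Battery energy (kWh) to cover one road segment at steady speed.
    `grade` is rise/run; `rho` is air density, lowered here for a
    high-altitude city (assumption). Regenerative braking on descents
    is deliberately ignored in this crude sketch."""
    g = 9.81
    theta = math.atan(grade)
    rolling = crr * mass_kg * g * math.cos(theta)   # rolling resistance
    climbing = mass_kg * g * math.sin(theta)        # grade force
    drag = 0.5 * rho * cd_a * speed_ms ** 2         # aerodynamic drag
    force = max(rolling + climbing + drag, 0.0)
    return force * distance_m / eff / 3.6e6         # J -> kWh
```

Summing this over the altimetry profile of a route (2557 to 3200 m a.s.l.) yields the per-trip consumption that feeds the demand forecast.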
Procedia PDF Downloads 151
152 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India
Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma
Abstract:
Mizoram state has the highest lung cancer incidence rate in India, attributable to its high consumption of tobacco and tobacco products, compounded by local food habits. While smoking is mainly responsible for this incidence, the effect of inhaled indoor radon gas cannot be discarded: the hazardous nature of this radioactive gas and its progeny for human populations is well established worldwide, and the radiation damage it causes to bronchial cells makes it the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it carries a risk of cancer incidence. Hence, estimation of indoor radon concentration is important to provide a useful reference for radiation effects, to establish safety measures, and to create a baseline for further case-control studies. Indoor radon/thoron concentrations in Mizoram were measured in 41 dwellings, selected on the basis of spot gamma background radiation and house construction type, during 2015-2016. The dwellings were monitored for one year, in 4-month cycles to capture seasonal variations, for the indoor concentration of radon gas and its progeny, outdoor gamma dose, and indoor gamma dose. A time-integrated method using Solid State Nuclear Track Detector (SSNTD)-based single-entry pin-hole dosimeters was used to measure the indoor radon/thoron concentrations, and gamma doses indoors and outdoors were measured with Geiger-Muller survey meters. Seasonal variation of the indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 to 144.72 Bq/m³ with a mean of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 to 54.19 Bq/m³ with a mean of 35.91 Bq/m³, both below the permissible limit.
The spot survey of the gamma background radiation level showed values between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, all within acceptable limits. From these results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. Finding epidemiological evidence linking natural radiation to the high cancer incidence would require a case-control study, which is beyond the scope of this work; the measured data will, however, provide a baseline for further studies.
Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation
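The annual-average bookkeeping behind such a survey is simple: the three 4-month SSNTD exposure cycles are averaged and compared against a reference level. In this sketch the third seasonal value is invented so that the example reproduces the reported 90.2 Bq/m³ mean, and the 300 Bq/m³ reference is the ICRP upper value for dwellings, which may differ from the limit the survey itself used:

```python
def annual_average(seasonal_bq_m3):
    """Annual mean indoor radon concentration from the three 4-month
    SSNTD exposure cycles used in this kind of survey."""
    assert len(seasonal_bq_m3) == 3
    return sum(seasonal_bq_m3) / 3.0

def within_reference(avg_bq_m3, reference_bq_m3=300.0):
    """Compare a dwelling's annual mean against a reference level
    (300 Bq/m^3 is ICRP's upper reference value; assumption here)."""
    return avg_bq_m3 <= reference_bq_m3
```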
Procedia PDF Downloads 140
151 Digitization and Economic Growth in Africa: The Role of Financial Sector Development
Authors: Abdul Ganiyu Iddrisu, Bei Chen
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because the extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements, and low income flows, can be circumvented; savings have increased; micro-savers have opened bank accounts; and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, digitalization conditioned on financial sector development tends to reduce economic growth in Africa; the net-effect results nevertheless suggest that digitalization, overall, improves economic growth. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent’s economies.
Keywords: digitalization, financial sector development, Africa, economic growth
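The fixed-effects (within) estimator used in this panel setting can be sketched by demeaning within each country before running OLS, which sweeps out time-invariant country effects. The two-country panel below is synthetic, chosen so that pooled OLS would be biased by country intercepts while the within estimator recovers the true slope:

```python
import numpy as np

def fixed_effects(y, x, group):
    """Within (fixed-effects) estimator for a one-regressor panel:
    demean y and x within each group (country), then run OLS on the
    demeaned data. Time-invariant group effects drop out."""
    y = np.asarray(y, dtype=float).copy()
    x = np.asarray(x, dtype=float).copy()
    group = np.asarray(group)
    for g in np.unique(group):
        m = group == g
        y[m] -= y[m].mean()
        x[m] -= x[m].mean()
    return float(x @ y / (x @ x))
```

Random-effects and Hausman-Taylor estimators additionally exploit between-country variation, at the cost of stronger exogeneity assumptions.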
Procedia PDF Downloads 138
150 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials
Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov
Abstract:
Modern commercials presented on billboards, on TV, and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how the information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis for such an objective method. We studied the relationship between multimodal psychophysiological parameters and text comprehension during reading using correlation analysis. We used eye-tracking to record eye movement parameters and estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perception of the information and the dynamics of psychophysiological parameters during reading of the text in commercials. Eye movement parameters reflected the difficulties arising in respondents when perceiving ambiguous parts of the text, EEG dynamics in the alpha band were related to the cumulative effect of cognitive load, and SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters across types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). In conclusion, our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of a commercial’s text on a percentage scale based on all the observed markers.
Keywords: reading, commercials, eye movements, EEG, polygraphic indicators
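The correlation analysis at the heart of this method reduces, per parameter, to a Pearson coefficient between a psychophysiological signal and a comprehension score. A minimal sketch on synthetic vectors (the data here are invented, not the study's recordings):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between a psychophysiological
    parameter (e.g. mean fixation duration) and a text-comprehension
    score across respondents."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

An integral comprehension model would combine several such markers (eye movements, alpha-band EEG, SGR) into one regression on a percentage scale.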
Procedia PDF Downloads 163
149 Anaerobic Co-Digestion of Pressmud with Bagasse and Animal Waste for Biogas Production Potential
Authors: Samita Sondhi, Sachin Kumar, Chirag Chopra
Abstract:
The increase in population has resulted in excessive feedstock production, which has in turn led to the accumulation of large amounts of waste from different sources, such as crop residues, industrial waste, and municipal solid waste. This situation has raised the problem of waste disposal. A parallel problem, the depletion of natural fossil fuel resources, has motivated alternative sources of energy derived from the waste of different industries, potentially resolving the two issues concurrently. Biogas is a carbon-neutral fuel with applications in transportation, heating, and power generation. India has an agriculture-based economy, and agro-residues are a significant source of organic waste. The sugarcane industry, the second largest agro-based industry, produces a large quantity of sugar along with waste byproducts such as bagasse, press mud, vinasse, and wastewater, and no efficient large-scale disposal methods are currently adopted for them. In line with sustainability objectives, anaerobic digestion can be considered as a method to treat these organic wastes. Press mud is a lignocellulosic biomass and is not well suited to mono-digestion because of its complexity: prior investigations indicated its potential for biogas production, but because of its biological and elemental complexity, mono-digestion was not successful. Due to the imbalance in its C/N ratio and the presence of wax, it should instead be digested together with another fibrous material under suitable conditions. In the first batch experiment, mono-digestion of press mud gave low biogas production. Co-digestion of press mud with bagasse, which has the desired C/N ratio, will now be performed to optimize the mixing ratio for maximum biogas from press mud. With respect to sustainability, the main considerations are the monetary value of the product and environmental concerns.
The work is designed so that the waste from the sugar industry is digested for maximum biogas generation, and the digestate remaining after digestion is characterized for use as a bio-fertilizer for soil conditioning. Given the effectiveness demonstrated by the studied mono-digestion and co-digestion setups, this approach can be considered a viable alternative for lignocellulosic waste disposal and for agricultural applications. The biogas produced from the press mud can be used either for power generation or for transportation. In addition, the work initiated towards waste disposal for energy production will demonstrate the balanced economic sustainability of the process.
Keywords: anaerobic digestion, carbon neutral fuel, press mud, lignocellulosic biomass
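The C/N balancing that motivates the co-digestion can be sketched as a linear mixing calculation. The compositions below are illustrative assumptions only (press mud C/N around 12, bagasse around 100, and a target of 30, a value commonly cited as near-optimal for anaerobic digestion), not measurements from this work:

```python
def blend_cn(parts):
    """C/N ratio of a feedstock blend; `parts` is a list of
    (mass, carbon_fraction, nitrogen_fraction) tuples on a dry basis,
    assuming C and N mix linearly."""
    carbon = sum(m * c for m, c, n in parts)
    nitrogen = sum(m * n for m, c, n in parts)
    return carbon / nitrogen

def bagasse_mass_for_target(m_pm, c_pm, n_pm, c_bg, n_bg, target):
    """Bagasse mass to co-digest with `m_pm` of press mud so the blend
    reaches `target` C/N; solves blend_cn == target for the mass."""
    return m_pm * (target * n_pm - c_pm) / (c_bg - target * n_bg)
```

This gives a first guess at the mixing ratio; the actual optimum is then found experimentally from biogas yield.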
Procedia PDF Downloads 168
148 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, driven by the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and inputs for flood modelling and prediction; one study showed that, even with lumped models for flood forecasting, a proper gauge network can significantly improve the results. The existing rainfall network in Johor must therefore be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data during the monsoon season (November - February) for the period 1975 - 2008. Three different semivariogram models (spherical, Gaussian, and exponential) were used, and their performances were compared. Cross-validation was applied to compute the errors, and the exponential model proved to be the best semivariogram. The proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data; therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimation variance; in the second case, the possibility of relocating all 84 existing stations to new locations was explored to determine the optimal configuration. In both cases, the new optimal locations reduced the estimated variance, confirming that station location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
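The simulated-annealing search over gauge subsets can be sketched as follows. As a simplification, the kriging estimation variance built from the fitted semivariogram is replaced here by a nearest-gauge squared-distance proxy (variance grows with distance to the nearest retained gauge under any of the three semivariogram models); the station grid is synthetic:

```python
import math
import random

def network_variance(stations, kept):
    """Proxy for the network's estimation variance: mean squared
    distance from each candidate point to its nearest retained gauge.
    A fitted semivariogram would replace this in the real method."""
    total = 0.0
    for sx, sy in stations:
        total += min((sx - kx) ** 2 + (sy - ky) ** 2 for kx, ky in kept)
    return total / len(stations)

def anneal_network(stations, n_keep, steps=2000, t0=1.0, seed=1):
    """Variance-reduction network design: simulated annealing over
    which `n_keep` of the candidate sites to retain."""
    rng = random.Random(seed)
    kept, pool = list(stations[:n_keep]), list(stations[n_keep:])
    cur = network_variance(stations, kept)
    best, best_kept = cur, list(kept)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9      # linear cooling
        i, j = rng.randrange(len(kept)), rng.randrange(len(pool))
        kept[i], pool[j] = pool[j], kept[i]        # propose a swap
        cand = network_variance(stations, kept)
        if cand < cur or rng.random() < math.exp((cur - cand) / t):
            cur = cand                             # accept the move
            if cur < best:
                best, best_kept = cur, list(kept)
        else:
            kept[i], pool[j] = pool[j], kept[i]    # undo the swap
    return best_kept, best
```

Accepting occasional uphill swaps early in the cooling schedule lets the search escape locally optimal gauge layouts.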
Procedia PDF Downloads 301
147 In situ Stabilization of Arsenic in Soils with Birnessite and Goethite
Authors: Saeed Bagherifam, Trevor Brown, Chris Fellows, Ravi Naidu
Abstract:
Over the last century, rapid urbanization, industrial emissions, and mining activities have resulted in widespread contamination of the environment by heavy metal(loid)s. Arsenic (As) is a toxic metalloid of group 15 of the periodic table that occurs naturally at low concentrations in soils and the earth’s crust, although concentrations can be significantly elevated in natural systems as a result of dispersion from anthropogenic sources, e.g., mining activities. Bioavailability is the fraction of a contaminant in soil that is available for uptake by plants, food chains, and humans, and it therefore presents the greatest risk to terrestrial ecosystems. Numerous in situ and ex situ technologies have been developed for the remediation of arsenic-contaminated soils. In situ stabilization techniques are based on the deactivation or chemical immobilization of metalloid(s) in soil by means of soil amendments, which reduce the bioavailability (for biota) and bioaccessibility (for humans) of the metalloids through the formation of low-solubility products or precipitates. This study investigated the effectiveness of two synthetic manganese and iron oxides (birnessite and goethite) for the stabilization of As in a soil spiked with 1000 mg kg⁻¹ of As and treated with 10% dosages of the amendments. Birnessite was made using HCl and KMnO₄, and goethite was synthesized by the dropwise addition of KOH into Fe(NO₃)₃ solution. The resulting contaminated soils were subjected to a series of chemical extraction studies, including sequential extraction (BCR method), single-step extractions with distilled (DI) water and 2M HNO₃, and the simplified bioaccessibility extraction test (SBET) for estimation of the bioaccessible fractions of As in two soil fractions (< 250 µm and < 2 mm). Concentrations of As in the samples were measured using inductively coupled plasma mass spectrometry (ICP-MS).
The results showed that the birnessite-amended soil reduced the bioaccessibility of As by up to 92% in both soil fractions. Furthermore, the single-step extractions revealed that the application of birnessite and goethite reduced the DI water- and HNO₃-extractable amounts of arsenic by 75%, 75%, 91%, and 57%, respectively. Moreover, the sequential extraction studies showed that both birnessite and goethite dramatically reduced the exchangeable fraction of As in the soils, while the amounts of the recalcitrant fractions were higher in the birnessite- and goethite-amended soils. Overall, the application of both birnessite and goethite significantly reduced the bioavailability and the exchangeable fraction of As in the contaminated soils, and these amendments may therefore be considered promising adsorbents for the stabilization and remediation of As-contaminated soils.
Keywords: arsenic, bioavailability, in situ stabilisation, metalloid(s) contaminated soils
Procedia PDF Downloads 134
146 Estimation of Rock Strength from Diamond Drilling
Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi
Abstract:
The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design, which can yield significant dividends, often requires a reliable estimate of the material's rock strength. Common laboratory tests such as the rod mill, ball mill, and uniaxial compressive strength tests share shortcomings in terms of time, sample preparation, bias in plug selection, cost, repeatability, and the sample amount needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work builds on the phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head. The model relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, the energy required to drill a unit volume of rock with an ideally sharp drilling tool (ideally sharp diamonds and no contact between the bit matrix and rock debris), a quantity found to correlate well with the rock uniaxial compressive strength for PDC and roller-cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data.
The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous). The last section discusses best drilling practices and a method to estimate rock strength from field drilling data, accounting for the compliance of the drill string and frictional losses along the borehole; the approach is illustrated with a case study of drilling data recorded while drilling an exploration well in Australia.
Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength
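The energy-per-volume quantity that this methodology is built around can be sketched with Teale's classical mechanical specific energy, which combines the thrust and rotary contributions; the paper's intrinsic specific energy is the sharp-bit (no matrix contact) limit of this kind of quantity, extracted from the bit-rock interface model rather than computed directly as below:

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_ms, bit_area_m2):
    """Teale-style mechanical specific energy (J/m^3, i.e. Pa): energy
    spent per unit volume of rock drilled, from weight-on-bit (N),
    torque (N.m), rotary speed (rpm), rate of penetration (m/s), and
    the cross-sectional cutting area of the core head (m^2)."""
    omega = 2.0 * math.pi * rpm / 60.0      # angular velocity, rad/s
    power = wob_n * rop_ms + torque_nm * omega   # thrust + rotary power
    volume_rate = bit_area_m2 * rop_ms           # rock removed, m^3/s
    return power / volume_rate
```

For diamond coring the rotary term dominates; plotting this energy against drilling parameters and extrapolating to the sharp-bit limit gives the strength-correlated intrinsic value.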
Procedia PDF Downloads 135
145 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco
Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi
Abstract:
In Morocco, as in most Mediterranean countries, water scarcity is common because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increasing water demand. Therefore, in the 1960s, Morocco embarked on an ambitious program to increase the number of dams and boost water retention capacity. However, the loss of reservoir capacity caused by sedimentation is a major problem, estimated at 75 million m³/year: dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers, which results from soil erosion. Soil erosion is an important driving force shaping the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and estimating soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate the soil loss risk and its spatial distribution across the different regions of Morocco, and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE), implemented with remote sensing and GIS; RUSLE is the most popular empirically based model used globally for erosion prediction and control, and it has been tested in many agricultural watersheds around the world, particularly for large-scale basins, thanks to the simplicity of its formulation and the easy availability of the required dataset. The spatial distribution of the annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several basin watersheds of Morocco varies from 0 to 50 t/ha/year.
Watersheds characterized by high erosion vulnerability are located in the North (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that topography is the main agent of soil erosion within these catchments. These results could be helpful for planning natural resources management and for implementing the sustainable long-term management strategies that are necessary for soil conservation and for extending the projected economic life of existing dams. Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco
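The RUSLE approach described in this abstract multiplies the erosion factors into an annual soil loss estimate. A minimal sketch of the calculation follows; the factor values used here are illustrative placeholders, not the study's mapped data:

```python
# RUSLE: A = R * K * LS * C * P (annual soil loss, t/ha/year).
# All factor values below are hypothetical, for illustration only.
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/year).
    R: rainfall erosivity, K: soil erodibility,
    LS: slope length-steepness factor, C: cover-management factor,
    P: support practice factor (LS, C, P dimensionless)."""
    return R * K * LS * C * P

A = rusle_soil_loss(R=120.0, K=0.30, LS=2.5, C=0.25, P=1.0)
print(round(A, 2))  # 22.5 t/ha/year for these illustrative factors
```

In a GIS workflow, each factor would be a raster layer and the product would be computed cell by cell to obtain the spatial distribution of soil loss.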
Procedia PDF Downloads 460
144 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai has traditionally been the epicenter of India's trade and commerce, and the existing major ports situated in Thane estuary, Mumbai and Jawaharlal Nehru (JN) Ports, are developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water due to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; instead, artificial intelligence was applied to predict water levels by training a network on tide data measured over one lunar tidal cycle. Two-layer feed-forward Artificial Neural Networks (ANN) with back-propagation training algorithms, Gradient Descent (GD) and Levenberg-Marquardt (LM), were used to predict yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. The trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal that: the measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide by about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels to plan the operation of pumping at Pir-Pau and improve the ship schedule at Ulwe. Keywords: artificial neural network, back-propagation, tide data, training algorithm
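The two-layer feed-forward network with gradient-descent back-propagation described above can be sketched in miniature. The "tide" below is a synthetic single-harmonic signal, not the Apollo Bunder, Ulwe, or Vashi records, and the layer size, learning rate, and epoch count are illustrative choices:

```python
import numpy as np

# Toy two-layer feed-forward ANN trained by gradient-descent back-propagation
# on a synthetic tide-like signal (one sine cycle). Illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # normalized time
tide = np.sin(2.0 * np.pi * t)                  # synthetic "tide level"

W1 = rng.normal(0.0, 0.5, (1, 10)); b1 = np.zeros(10)  # hidden layer (tanh)
W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)   # linear output
lr, mse = 0.2, []
for epoch in range(3000):
    h = np.tanh(t @ W1 + b1)                    # forward pass
    pred = h @ W2 + b2
    err = pred - tide
    mse.append(float((err ** 2).mean()))
    dW2 = h.T @ err / len(t); db2 = err.mean(0) # back-propagation of error
    dh = (err @ W2.T) * (1.0 - h ** 2)          # tanh derivative
    dW1 = t.T @ dh / len(t); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1              # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2

print(round(mse[0], 4), round(mse[-1], 4))      # training error decreases
```

The LM algorithm favored in the study replaces the plain gradient step with a damped Gauss-Newton update, which is why it converges in far fewer epochs.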
Procedia PDF Downloads 482
143 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
Stream metabolism is an indicator of ecosystem disturbance due to the influence of the catchment on the structure of water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are estimated in situ with a spherical densiometer, and stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration (R) based on the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016); the results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 C (Oct) and 20.7 +/- 0.5 C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 uS/cm; BR: 5.5 +/- 0.7 uS/cm; CA: 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA).
The mean concentrations of total phosphorus were 8.0 ug/L (BR), 66.6 ug/L (ES), and 51.5 ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 ug/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR), and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). The GPP varied from 0-0.4 g/m2.d (in Oct and Dec), and R varied from 0.9-22.7 g/m2.d (Oct) and from 0.9-7 g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism of pristine streams can help define natural reference conditions of trophic state. Keywords: low-order streams, metabolism, net primary production, trophic state
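The one-station diel oxygen model underlying these GPP and R estimates balances photosynthesis, respiration, and reaeration: dC/dt = GPP(t) - ER + k2(Cs - C). A minimal forward-integration sketch follows; all parameter values are illustrative, not the Cerrado measurements:

```python
import math

# One-station diel DO model, integrated with a simple Euler step.
# dC/dt = GPP(t) - ER + k2*(Cs - C); parameters are hypothetical.
def simulate_do(gpp_daily=0.3, er_daily=5.0, k2=20.0, cs=8.0, c0=7.0,
                dt_min=10, days=1):
    """Return DO (mg/L) every dt_min minutes. gpp_daily, er_daily in
    g O2/m2/d (depth-normalized), k2 in 1/day, cs = saturation DO."""
    n = int(days * 24 * 60 / dt_min)
    dt = dt_min / (24 * 60)                 # time step in days
    c, out = c0, []
    for i in range(n):
        hour = (i * dt_min / 60) % 24
        # light-driven GPP: half-sine between 06:00 and 18:00, scaled by pi
        # so its daily integral equals gpp_daily
        light = max(0.0, math.sin(math.pi * (hour - 6) / 12))
        dcdt = gpp_daily * light * math.pi - er_daily + k2 * (cs - c)
        c += dcdt * dt
        out.append(c)
    return out

do = simulate_do()
print(round(min(do), 2), round(max(do), 2))
```

Fitting the modeled diel DO curve to the logged ten-minute DO series is what yields the reported GPP and R; here, with R exceeding GPP, DO stays below saturation, matching the heterotrophic pattern described above.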
Procedia PDF Downloads 256
142 Examining the Effects of National Disaster on the Performance of Hospitality Industry in Korea
Authors: Kim Sang Hyuck, Y. Park Sung
Abstract:
The outbreak of a national disaster reduces both inbound and domestic tourism demand, adversely affecting the hospitality industry. Effective and efficient risk management regarding national disasters is therefore increasingly required of hospitality industry practitioners and tourism policymakers. To establish an effective and efficient risk management strategy for national disasters, the essential prerequisite is a correct estimation of the size and duration of the damage that national disasters inflict on the hospitality industry. More specifically, national disasters are of two types, natural disasters and social disasters, and the hospitality industry consists of several types of business, such as hotels, restaurants, and travel agencies. For these reasons, it is important to consider how each type of national disaster differently influences the performance of each type of hospitality business. The purpose of this study is thus to examine the effects of national disasters on the hospitality industry in Korea according to both the type of national disaster and the type of hospitality business. Monthly data were collected from Jan. 2000 to Dec. 2016. The indexes of industrial production for each hospitality industry in Korea were used as proxy variables for the performance of each hospitality industry. Two national disaster variables (natural disaster and social disaster) were treated as dummy variables. In addition, the exchange rate, industrial production index, and consumer price index were used as control variables in the research model. Impulse response analysis was used to examine the size and duration of the damage caused by each type of national disaster to each type of hospitality industry.
The results of this study show that natural disasters and social disasters influenced each type of hospitality industry differently. More specifically, the performance of the airline industry was negatively influenced by natural disasters 3 months after their incidence, whereas the negative impact of social disasters on the airline industry was not significant over the periods examined. For the hotel industry, natural disasters and social disasters negatively influenced performance 5 months and 6 months after incidence, respectively. The negative impact of natural disasters on the performance of the restaurant industry occurred 5 months after incidence, and that of social disasters both 3 months and 6 months after. Finally, natural disasters and social disasters negatively influenced the performance of travel agencies 3 months and 4 months after incidence, respectively. In conclusion, the types of national disasters differently influence the performance of each type of hospitality industry in Korea. These results provide important information for establishing effective and efficient risk management strategies for national disasters. Keywords: impulse response analysis, Korea, national disaster, performance of hospitality industry
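The impulse response idea can be shown in miniature: for a univariate AR(1) model of a performance index with a disaster dummy, y_t = phi*y_{t-1} + beta*D_t + e_t, the response h months after a one-off shock is beta*phi^h. This is a simplification of the multivariate analysis above, and the coefficients are illustrative, not estimates from the Korean data:

```python
# Impulse response of an AR(1) with a one-off disaster dummy shock.
# phi: persistence of the index, beta: immediate impact of the disaster.
def impulse_response(phi, beta, horizon):
    """Response of y at months 0..horizon after a unit dummy shock."""
    return [beta * phi ** h for h in range(horizon + 1)]

irf = impulse_response(phi=0.6, beta=-4.0, horizon=6)
print([round(v, 2) for v in irf])  # negative impact decaying geometrically
```

In the study's setting, the full impulse responses come from the estimated multivariate system with the control variables, so the impact can peak with a delay (e.g., 3 to 6 months) rather than at the shock month as in this toy AR(1).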
Procedia PDF Downloads 183
141 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition
Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed
Abstract:
The rapid development experienced by India requires a huge amount of energy, and actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th Five-Year Plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar, 2.1 GW is from small hydro projects, and the rest is met by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. According to IEC 61400-1, India belongs to Class IV wind conditions, so it is not possible to set up large-scale wind turbines everywhere. The best choice is therefore a small-scale wind turbine at lower hub height that still gives good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the airfoil in the blade profile; an airfoil suited to low wind conditions, i.e., low Reynolds numbers, is selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions, and the power production of the rotor is determined at different wind speeds for a particular pitch angle of the blade. At a pitch of 15° and a wind speed of 5 m/sec, the rotor gives a good cut-in speed of 2 m/sec and produces around 350 W.
The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factors for the wind turbine blade. Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil
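The iterative BEM step mentioned above, solving for the axial (a) and angular (a') induction factors at one blade element, can be sketched as follows. The lift and drag coefficients, local speed ratio, and solidity are fixed illustrative constants; a real design would interpolate Cl and Cd from the selected airfoil polar at each angle of attack:

```python
import math

# Fixed-point iteration for BEM induction factors at one blade element.
# All aerodynamic inputs are illustrative constants, not the MANIT design.
def bem_induction(tsr_local=5.0, solidity=0.034, cl=1.0, cd=0.01,
                  tol=1e-8, max_iter=500):
    """Iterate axial (a) and angular (ap) induction factors to convergence."""
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        phi = math.atan2(1.0 - a, (1.0 + ap) * tsr_local)  # inflow angle
        s, c = math.sin(phi), math.cos(phi)
        cn = cl * c + cd * s            # normal force coefficient
        ct = cl * s - cd * c            # tangential force coefficient
        a_new = 1.0 / (4.0 * s * s / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * s * c / (solidity * ct) - 1.0)
        # under-relaxation stabilizes the fixed-point iteration
        a_new = 0.5 * (a + a_new)
        ap_new = 0.5 * (ap + ap_new)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new, True
        a, ap = a_new, ap_new
    return a, ap, False

a, ap, converged = bem_induction()
print(converged, round(a, 3), round(ap, 4))
```

Repeating this element by element along the blade span, then integrating the tangential force contributions, yields the rotor torque and hence the power curve and coefficient of performance reported above.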
Procedia PDF Downloads 335
140 The Potential Fresh Water Resources of Georgia and Sustainable Water Management
Authors: Nana Bolashvili, Vakhtang Geladze, Tamazi Karalashvili, Nino Machavariani, George Geladze, Davit Kartvelishvili, Ana Karalashvili
Abstract:
Fresh water is the major natural resource of Georgia. The average perennial sum of river runoff in Georgia is 52.77 km³, of which 9.30 km³ inflows from abroad; the major volume of transit river runoff is ascribed to the Chorokhi river. Average perennial runoff is 41.52 km³ in Western Georgia and 11.25 km³ in Eastern Georgia. The indices of Eastern and Western Georgia were calculated with 50% and 90% river runoff, respectively, while the same index calculation for other countries is based on a 50% river runoff. Out of the total volume of resources, 133.2 m³/sec (4.21 km³) has been geologically prospected by the State Commission on Reserves and acknowledged as reserves available for exploitation, 48% (2.02 km³) of which is in Western Georgia and 2.19 km³ in Eastern Georgia. Considering the acknowledged water reserves of all categories, per capita water resources amount to 2.2 m³/day, of which 0.88 m³/day is fresh drinking water of high industrial category. According to accepted norms, the possibility of using underground water reserves is 2.5 times higher than the long-term requirements of the country. The volume of abundant fresh-water reserves in Georgia is about 150 m³/sec (4.74 km³). Water in Georgia is consumed mostly in agriculture for irrigation purposes: 66.4% across Georgia as a whole, 72.4% in Eastern Georgia, and 38% in Western Georgia. According to the long-term forecast, the provision of the population and territory of Eastern Georgia with water resources will be quite normal; the situation is somewhat different in the lower reaches of the Khrami and Iori rivers, but this could easily be overcome with corresponding financing. The present-day irrigation system in Georgia does not meet modern technical requirements; the overall efficiency of most systems varies between 0.4 and 0.6. The situation is similar for fresh water and public service water consumption.
Reorganization of the mentioned systems, installation of water meters, and introduction of new irrigation methods without water loss will substantially increase the efficiency of water use. Besides, new irrigation norms developed from agro-climatic, geographical, and hydrological angles will significantly reduce water waste. Taking all this into account, we estimate that irrigating agricultural lands in Georgia requires 6.0 km³ of water, 5.5 km³ of which goes to irrigated arable areas in Eastern Georgia. Increasing the water supply of Eastern Georgia and its population is possible by means of new water reservoirs, as the runoff of every river considerably exceeds the consumption volume. In conclusion, the fresh water resources in which Georgia is so rich could be a significant source of barter exchange and investment attraction. A certain volume of fresh water can be exported from Western Georgia quite trouble-free, without any damage to the population or hydroecosystems. The precise volume of exported water per region and time, and the method and place of water consumption, should be defined after the assessment of the different hydroecosystems and detailed analyses of the water balance of the corresponding territories. Keywords: GIS, management, rivers, water resources
Procedia PDF Downloads 369
139 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogeneous Landscape: A Case Study in Hong Kong
Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong
Abstract:
Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data all the more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study aims at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help in understanding the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 data will be used as the representative of the polar satellites due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellites because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area due to the heterogeneity of the landscape of the region. LST data will be retrieved from both satellites using the split window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR.
The LST data from the satellites will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organization version 3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated for each land-use class and over the different seasons of the year, in order to account for seasonal variation in the relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR. Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island
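One common generalized split-window formulation from the literature combines the brightness temperatures of two adjacent thermal bands with emissivity and water-vapour corrections. A sketch follows; the coefficients below are placeholders of a plausible order of magnitude, not the calibrated values the study would fit for the Landsat-8 TIRS or Himawari-8 band pairs:

```python
# Generic split-window LST estimate from two thermal-band brightness
# temperatures. Coefficients c0..c6 are illustrative placeholders.
def split_window_lst(t_i, t_j, emis_mean, emis_diff, w,
                     c=(-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)):
    """LST (K) from brightness temperatures t_i, t_j (K), mean band
    emissivity, band emissivity difference, and total column water
    vapour w (g/cm^2)."""
    c0, c1, c2, c3, c4, c5, c6 = c
    dt = t_i - t_j
    return (t_i + c1 * dt + c2 * dt ** 2 + c0
            + (c3 + c4 * w) * (1.0 - emis_mean)
            + (c5 + c6 * w) * emis_diff)

lst = split_window_lst(t_i=295.0, t_j=293.5, emis_mean=0.975,
                       emis_diff=0.005, w=2.0)
print(round(lst, 2))
```

In the study's workflow, the per-pixel emissivities would come from the land-cover classification, which is also why the Landsat-8 versus Himawari-8 comparison is stratified by GLCNMO class.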
Procedia PDF Downloads 72
138 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
Health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care; a better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources to services and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected under a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156), which suggests that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can then be derived accordingly; this estimate is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with P-value less than 0.001, i.e., the zero-inflation effect is very significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant; moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information. Keywords: count data, measurement error, score test, zero inflation
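The no-measurement-error baseline the proposed test reduces to can be sketched in its simplest (intercept-only) form, in the style of the classical zero-inflation score test: compare the observed number of zeros with the number a fitted Poisson model implies. This is a toy version; the study's test additionally handles covariates and their measurement error via the AMLE:

```python
import math

# Intercept-only score-type test for zero inflation against a Poisson null.
# Data below are synthetic, constructed to have excess zeros.
def zip_score_test(y):
    """Score-type statistic, asymptotically chi-square(1) under H0."""
    n = len(y)
    lam = sum(y) / n               # Poisson MLE of the mean
    p0 = math.exp(-lam)            # model-implied P(Y = 0)
    n0 = sum(1 for v in y if v == 0)
    num = (n0 - n * p0) / p0       # excess zeros, standardized
    den = n * (1 - p0) / p0 - n * lam
    return num * num / den

y = [0] * 60 + [1] * 20 + [2] * 15 + [3] * 5   # 60 zeros in 100 counts
s = zip_score_test(y)
print(round(s, 2), s > 3.84)      # 3.84 = chi-square(1) 5% critical value
```

With the observed 206 zeros against an expected 156 in the consultation data, a statistic of this kind is what drives the strong rejection of H0 reported above.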
Procedia PDF Downloads 286
137 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of non-structural components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., the design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) were conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering 4 different damping ratios of NSCs (2, 5, 10, and 20% viscous damping). Several parameters that affect NSC response are evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings, which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs. Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
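The way a floor response spectrum is assembled can be sketched as follows: for each candidate NSC period, a damped SDOF oscillator is driven by the floor acceleration history and the peak pseudo-acceleration is recorded. The floor motion below is a synthetic 2 Hz harmonic, not one of the study's 20 records, so the spectrum simply illustrates the tuning effect discussed above:

```python
import math

# Pseudo-acceleration response spectrum of a damped SDOF oscillator,
# integrated with the Newmark average-acceleration method (unit mass).
def sdof_spectrum(ag, dt, periods, zeta=0.05):
    sa = []
    for T in periods:
        w = 2.0 * math.pi / T
        c, k = 2.0 * zeta * w, w * w        # damping, stiffness per unit mass
        u = v = 0.0
        a = -ag[0] - c * v - k * u
        umax = 0.0
        for g in ag[1:]:
            denom = 1.0 + c * dt / 2.0 + k * dt * dt / 4.0
            a1 = (-g - c * (v + dt / 2.0 * a)
                  - k * (u + dt * v + dt * dt / 4.0 * a)) / denom
            u += dt * v + dt * dt / 4.0 * (a + a1)
            v += dt / 2.0 * (a + a1)
            a = a1
            umax = max(umax, abs(u))
        sa.append(w * w * umax)             # pseudo-acceleration Sa = w^2*umax
    return sa

dt = 0.005
ag = [0.5 * math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(800)]
periods = [0.1, 0.5, 2.0]                   # 0.5 s is tuned to the 2 Hz motion
sa = sdof_spectrum(ag, dt, periods)
print([round(x, 2) for x in sa])
```

The sharp peak at the tuned period is exactly the amplification that the empirical code equations miss, and it is why the proposed FDS procedure treats the fundamental-period region separately from the short- and long-period regions.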
Procedia PDF Downloads 235
136 Evaluation of Antidiabetic Activity of a Combination Extract of Nigella Sativa & Cinnamomum Cassia in Streptozotocin Induced Type-I Diabetic Rats
Authors: Ginpreet Kaur, Mohammad Yasir Usmani, Mohammed Kamil Khan
Abstract:
Diabetes mellitus is a disease with a high global burden that results in significant morbidity and mortality. In India, the number of people suffering from diabetes is expected to rise from 19 to 57 million by 2025. At present, interest in herbal remedies is growing as a way to reduce the side effects associated with conventional dosage forms, such as oral hypoglycemic agents and insulin, in the treatment of diabetes mellitus. Our aim was to investigate the antidiabetic activity of a combination extract of N. sativa & C. cassia in streptozotocin (STZ)-induced type-I diabetic rats. Thus, the present study was undertaken to screen postprandial glucose excursion potential through α-glucosidase inhibitory activity (in vitro) and the effect of the combined extract of N. sativa & C. cassia in STZ-induced type-I diabetic rats (in vivo). In addition, changes in body weight, plasma glucose, lipid profile, and kidney profile were determined. The IC50 values for the extracts and acarbose were calculated by the extrapolation method. The combined extract of N. sativa & C. cassia at different dosages (100 and 200 mg/kg orally), with metformin (50 mg/kg orally) as the standard drug, was administered for 28 days, and then biochemical estimation, body weights, and an oral glucose tolerance test (OGTT) were determined. Histopathological studies were also performed on kidney and pancreatic tissue. In vitro, the combined extract showed a much stronger inhibitory effect than the individual extracts. The results reveal that the combined extract of N. sativa & C. cassia produced a significant decrease in plasma glucose (p<0.0001), total cholesterol, and LDL levels when compared with the STZ group. The decreasing levels of BUN and creatinine revealed the protection conferred by N. sativa & C. cassia extracts against nephropathy associated with diabetes. The combination of N. sativa & C.
cassia significantly improved glucose tolerance to exogenously administered glucose (2 g/kg) at the 60-, 90-, and 120-min intervals of the OGTT in high-dose streptozotocin-induced diabetic rats compared with the untreated control group. Histopathological studies showed that treatment with N. sativa & C. cassia extract, alone and in combination, restored pancreatic tissue integrity and was able to regenerate the STZ-damaged pancreatic β cells. Thus, the present study reveals that the combination of N. sativa & C. cassia extract has significant α-glucosidase inhibitory activity and thus great potential as a new source for diabetes treatment. Keywords: lipid levels, OGTT, diabetes, herbs, glucosidase
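The IC50 extrapolation mentioned above can be sketched as a simple interpolation of the dose-response curve: find the two tested concentrations whose percent inhibition brackets 50% and interpolate between them. The dose-response points below are illustrative, not the study's assay data, and interpolation on log-concentration is also common in practice:

```python
# IC50 by linear interpolation between the two concentrations that
# bracket 50% inhibition. Data points are hypothetical.
def ic50(concs, inhibition):
    """concs ascending; inhibition in % at each concentration."""
    for idx in range(1, len(concs)):
        c0, c1 = concs[idx - 1], concs[idx]
        i0, i1 = inhibition[idx - 1], inhibition[idx]
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

# Illustrative alpha-glucosidase assay points (ug/mL, % inhibition):
value = ic50([25, 50, 100, 200], [22.0, 41.0, 63.0, 80.0])
print(round(value, 2))
```

A lower IC50 indicates a stronger inhibitor, which is how the combined extract's advantage over the individual extracts and its comparison with acarbose would be quantified.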
Procedia PDF Downloads 428
135 Approach to Freight Trip Attraction Areas Classification, in Developing Countries
Authors: Adrián Esteban Ortiz-Valera, Angélica Lozano
Abstract:
In developing countries, informal trade is relevant, but it has been little studied in the urban freight transport (UFT) context, although it is a challenge due to the unaccounted demand it produces and the operational limitations it imposes. Hence, UFT operational improvements (initiatives) and freight attraction models for developing countries must consider informal trade. A four-phase approach for characterizing commercial areas in developing countries (considering both formal and informal establishments) is proposed and applied to ten areas in Mexico City. This characterization is required to calculate real freight trip attraction and then select and/or adapt suitable initiatives. Phase 1 delimits the study area; the following information is obtained for each establishment of a potential area: location (geographic coordinates), industrial sector, industrial subsector, and number of employees. Phase 2 characterizes the study area and proposes a set of indicators, allowing a broad view of the operations and constraints of UFT in the study area. Phase 3 classifies the study area according to seven indicators; each indicator represents a level of conflict in the area due to the presence of formal (registered) and informal establishments on the sidewalks and streets, affecting urban freight transport (and other activities). Phase 4 determines preliminary initiatives which could be implemented in the study area to improve the operation of UFT; the relation between indicators and initiatives allows a preliminary selection of initiatives.
This relation requires knowing: a) the problems in the area (congested streets, lack of parking space for freight vehicles, etc.); b) the factors which limit initiatives due to informal establishments (streets with reduced space for freight vehicles, the inability to move or park during certain periods, among others); c) the problems in the area due to its physical characteristics; and d) the factors which limit initiatives due to the regulations of the area. Several differences among the study areas were observed. As the indicators increase, the areas tend to be less ordered, and the limitations on initiatives become greater, leaving a smaller number of applicable initiatives. In ordered areas (similar to the commercial areas of developed countries), the current techniques for estimating freight trip attraction (FTA) can be directly applied; however, in areas where the level of order is lower due to the presence of informal trade, this is not recommended because the real FTA would not be estimated. Therefore, a technique that considers the characteristics of the areas in developing countries, both to obtain data and to estimate FTA, is required. This estimation can be the basis for proposing feasible initiatives for such zones. The proposed approach provides a wide view of the needs of the commercial areas of developing countries. Knowledge of these needs would allow UFT operations to be improved and their negative impacts to be minimized. Keywords: freight initiatives, freight trip attraction, informal trade, urban freight transport
Procedia PDF Downloads 139
134 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads
Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon
Abstract:
The Philippines, composed of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for the periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries. Vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety, and economy of the structure. The natural frequency must be neither so low as to cause discomfort nor so high that the structure becomes so stiff as to be both costly and heavy. It is well known that the stiffer a member is, the more load it attracts. The natural frequency must also not match the vibration caused by traffic loads; if this happens, resonance occurs. Vibration that matches a system's frequency generates excitation, and when this exceeds the member's limit, structural failure will happen. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways. The model was verified and adjusted based on the actual condition of the bridge. The bridge design specifications were also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions.
The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. From these dynamic characteristics of the system, threshold criteria can be established and fragility curves can be estimated. This study was conducted in connection with the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, which deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and its structural plans are readily available. Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads
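A fragility curve of the kind estimated above is commonly modeled as a lognormal cumulative distribution of a response measure over load intensity. The sketch below illustrates only that generic form; the median and dispersion values are assumptions for demonstration, not quantities from the Zamora Bridge study.

```python
# A minimal sketch of a lognormal fragility curve: P(demand exceeds a limit
# state | load intensity). Median and dispersion (beta) are illustrative.
import math

def fragility(im, median, beta):
    """Lognormal CDF: probability of exceeding the limit state at intensity im."""
    z = (math.log(im) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# probability of exceedance at increasing normalized load intensities
for im in (0.5, 1.0, 2.0):
    print(f"IM={im}: P(exceed) = {fragility(im, median=1.0, beta=0.4):.3f}")
```

By construction, the probability of exceedance is 0.5 at the median intensity and grows toward 1 as the intensity increases, which is the shape threshold criteria are calibrated against.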
Procedia PDF Downloads 270
133 Intersection of Racial and Gender Microaggressions: Social Support as a Coping Strategy among Indigenous LGBTQ People in Taiwan
Authors: Ciwang Teyra, A. H. Y. Lai
Abstract:
Introduction: Indigenous LGBTQ individuals face significant life stressors, such as racial and gender discrimination and microaggressions, which may negatively affect their mental health. Although studies on Taiwanese indigenous LGBTQ people are gradually increasing, most are primarily conceptual or qualitative in nature. This research aims to fill the gap by offering empirical quantitative evidence, investigating the impact of racial and gender microaggressions on mental health among Taiwanese indigenous LGBTQ individuals from an intersectional perspective, and examining whether social support can help them cope with microaggressions. Methods: Participants (n=200; mean age=29.51; female=31%, male=61%, others=8%) took part in a cross-sectional quantitative design using data collected in 2020. Standardised measurements were used, including the Racial Microaggression Scale (10 items), the Gender Microaggression Scale (9 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and perceived economic hardship. Structural equation modelling (SEM) was employed using Mplus 8.0 with the latent variables of depression and anxiety as outcomes. A main-effect SEM model was first established (Model 1). To test the moderation effects of perceived social support, an interaction-effect model (Model 2) was created by entering interaction terms into Model 1. Numerical integration with maximum likelihood estimation was used to estimate the interaction model. Results: Model fit statistics of Model 1: χ2(df) = 1308.1 (795), p<.05; CFI/TLI = 0.92/0.91; RMSEA = 0.06; SRMR = 0.06. The AIC and BIC values of Model 2 changed only slightly compared to Model 1 (AIC = 15631 for Model 1 vs. 15629 for Model 2; BIC = 16098 for Model 1 vs. 16103 for Model 2). Model 2 was adopted as the final model.
In the main-effect Model 1, racial microaggression and perceived social support were associated with depression and anxiety, but sexual orientation microaggression was not (indigenous microaggression: b = 0.27 for depression, b = 0.38 for anxiety; social support: b = -0.37 for depression, b = -0.34 for anxiety). Thus, an interaction term between social support and indigenous microaggression was added in Model 2. In the final Model 2, indigenous microaggression and perceived social support remained statistically significant predictors of both depression and anxiety. Social support moderated the effect of indigenous microaggression on depression (b = -0.22), but not on anxiety. No covariates were statistically significant. Implications: The results indicate that racial microaggressions have a significant impact on indigenous LGBTQ people's mental health, and that social support plays a crucial role in buffering the negative impact of racial microaggression. To promote indigenous LGBTQ people's wellbeing, it is important to consider how to support them in developing social support networks. Keywords: microaggressions, intersectionality, indigenous population, mental health, social support
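The moderation logic tested in Model 2 can be sketched with a plain regression on simulated data: the interaction coefficient captures whether social support weakens the microaggression-depression link. This is a simplified stand-in for the Mplus latent-variable model; all data and coefficients below are simulated for illustration only.

```python
# A hedged sketch of a moderation (buffering) analysis via ordinary least
# squares: outcome ~ microaggression + support + microaggression * support.
import numpy as np

rng = np.random.default_rng(0)
n = 200
micro = rng.normal(size=n)      # microaggression score (standardized)
support = rng.normal(size=n)    # perceived social support (standardized)
# buffering effect: support weakens the microaggression -> depression slope
dep = (0.3 * micro - 0.35 * support - 0.2 * micro * support
       + rng.normal(scale=0.5, size=n))

X = np.column_stack([np.ones(n), micro, support, micro * support])
beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
names = ["intercept", "micro", "support", "interaction"]
print({k: round(v, 2) for k, v in zip(names, beta)})
```

A negative interaction coefficient, as in the reported b = -0.22, is the signature of buffering: the adverse slope of microaggression on depression shrinks as support rises.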
Procedia PDF Downloads 146
132 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies
Authors: Hussain Bohra, Pradyuman Sharma
Abstract:
This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators reflect the actual financial health of a company; they help external investors choose the right company for their investment and help external financing agencies extend finance to manufacturing companies. The study period is 2000 to 2014, covering 813 manufacturing companies for which continuous data are available throughout. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods, namely fixed-effect and random-effect models, are used for the analysis. The likelihood ratio test, Lagrange multiplier test, and Hausman test results confirm the suitability of the fixed-effect model for the estimation. Return on assets (ROA) is used as the proxy for corporate performance; it is the proxy most widely used in the corporate performance literature and reflects the return on firms' long-term investment projects. Ratios such as the current ratio, debt-equity ratio, receivables turnover ratio, and solvency ratio are used as proxies for the balance sheet indicators, with firm-specific variables such as firm size and sales serving as controls in the model. The empirical analysis finds that all selected financial ratios have a significant and positive impact on corporate performance, as do firm sales and firm size.
To check the robustness of the results, the sample was split along each ratio: firms with high versus low debt-equity ratios, high versus low current ratios, high versus low receivables turnover, and high versus low solvency ratios. The results are robust across all these subsamples, and the results for the other variables are in line with those for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings have implications for corporate managers, who should monitor these ratios to maintain the minimum expected level of performance; apart from that, they should also maintain adequate sales and total assets to improve corporate performance. Keywords: balance sheet, corporate performance, current ratio, panel data method
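The fixed-effect estimation preferred by the Hausman test can be sketched with the within transformation: demeaning each firm's observations over time removes the unobserved firm effect before OLS. The panel below is simulated; firms, years, and the coefficient of 0.5 are assumptions for illustration, not the study's data or estimates.

```python
# A minimal sketch of the within (fixed-effects) estimator on a simulated
# balanced panel where the regressor is correlated with the firm effect.
import numpy as np

rng = np.random.default_rng(1)
firms, years = 50, 10
firm_effect = rng.normal(scale=2.0, size=firms)             # unobserved heterogeneity
x = rng.normal(size=(firms, years)) + firm_effect[:, None]  # regressor correlated with it
roa = 0.5 * x + firm_effect[:, None] + rng.normal(scale=0.3, size=(firms, years))

# within transformation: subtract each firm's time mean
x_w = x - x.mean(axis=1, keepdims=True)
y_w = roa - roa.mean(axis=1, keepdims=True)
beta_fe = (x_w * y_w).sum() / (x_w ** 2).sum()
print(f"fixed-effects estimate: {beta_fe:.3f}")  # recovers the true 0.5
```

Pooled OLS on the raw data would be biased upward here because the regressor co-moves with the firm effect; demeaning eliminates that bias, which is why the Hausman test favors fixed effects in such settings.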
Procedia PDF Downloads 263
131 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. An earthquake that occurs near a fault line can be categorized as a near-field earthquake; in contrast, a far-field earthquake occurs when the region is farther from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may cause serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, using the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unlike other earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions. Keywords: near-field, pulse, pushover, time-history
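The pulse-dominated response discussed above can be illustrated by integrating a single-degree-of-freedom oscillator through a one-cycle velocity-type pulse. The sketch below uses the central difference method with a unit mass; the period, damping, and pulse amplitude are illustrative assumptions, not properties of the studied building or records.

```python
# A sketch of SDOF time-history response to a pulse-like ground acceleration,
# integrated with the (conditionally stable) central difference method.
import math

def sdof_peak_disp(ag, dt, T=1.0, zeta=0.05):
    """Peak relative displacement of a unit-mass SDOF oscillator."""
    wn = 2 * math.pi / T
    k, c = wn ** 2, 2 * zeta * wn
    u_prev = u = peak = 0.0
    for a_g in ag:
        # m*u'' + c*u' + k*u = -m*ag, discretized with central differences
        u_next = (-a_g - (k - 2 / dt ** 2) * u
                  - (1 / dt ** 2 - c / (2 * dt)) * u_prev) / (1 / dt ** 2 + c / (2 * dt))
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

dt, Tp, A = 0.005, 1.0, 3.0  # time step (s), pulse period (s), amplitude (m/s^2)
t = [i * dt for i in range(int(10 / dt))]
pulse = [A * math.sin(2 * math.pi * ti / Tp) if ti < Tp else 0.0 for ti in t]
print(f"peak displacement: {sdof_peak_disp(pulse, dt):.3f} m")
```

When the oscillator period matches the pulse period, as here, the response greatly exceeds the static displacement, which is the mechanism behind the elevated near-field drift demands reported above.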
Procedia PDF Downloads 146
130 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model, which is better than latent Dirichlet allocation, is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. Hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories. Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
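The restricted Boltzmann machine described above can be sketched in a few lines with one-step contrastive divergence (CD-1) on toy binary "baskets". The layer sizes, learning rate, and data are illustrative assumptions; the paper's actual training procedure and hyperparameters are not reported here.

```python
# A compact RBM trained with CD-1 on toy binary basket data: visible units
# are category purchases, hidden units capture co-purchase patterns.
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

# toy baskets: two correlated purchase patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 1, 0]], dtype=float)

for _ in range(3000):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                        # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
    v1 = sigmoid(h0 @ W.T + a)                        # mean-field reconstruction
    ph1 = sigmoid(v1 @ W + b)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)   # CD-1 gradient step
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
recon_error = float(((data - recon) ** 2).mean())
print(f"mean reconstruction error: {recon_error:.3f}")
```

A deep belief net as described in the abstract would stack a second RBM on the hidden activations produced here; holdout log likelihood, rather than reconstruction error, is the evaluation criterion the paper uses.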
Procedia PDF Downloads 199
129 Scenario of Some Minerals and Impact of Promoter Hypermethylation of DAP-K Gene in Gastric Carcinoma Patients of Kashmir Valley
Authors: Showkat Ahmad Bhat, Iqra Reyaz, Falaque ul Afshan, Ahmad Arif Reshi, Muneeb U. Rehman, Manzoor R. Mir, Sabhiya Majid, Sonallah, Sheikh Bilal, Ishraq Hussain
Abstract:
Background: Gastric cancer is the fourth most common cancer and the second leading cause of cancer-related deaths worldwide, with a wide variation in incidence rates across geographical areas. The current view of cancer is that a malignancy arises from a transformation of the genetic material of a normal cell, followed by successive mutations and a chain of alterations in genes such as DNA repair genes, oncogenes, and tumor suppressor genes. Minerals are necessary for the functioning of several transcription factors, proteins that recognize certain DNA sequences, and have been found to play a role in gastric cancer. Materials and Methods: The present work was a case-control study aiming to ascertain the role of minerals and of promoter hypermethylation of the CpG islands of the DAP-K gene in gastric cancer patients in the Kashmiri population. Serum was extracted from all samples, and mineral estimation was done from serum by atomic absorption spectroscopy (AAS); DNA was also extracted and modified using a bisulphite modification kit. Methylation-specific PCR was used to analyze the promoter hypermethylation status of the DAP-K gene. The epigenetic analysis revealed that, unlike other high-risk regions, the Kashmiri population has a different promoter hypermethylation profile of the DAP-K gene and a different mineral profile. Results: In our study, mean serum copper levels differed significantly between the two genders (p<0.05), while no significant differences were observed for iron and zinc levels. In methylation-specific PCR, 67.50% (27/40) of the gastric cancer tissues showed a methylated DAP-K promoter, whereas 32.50% (13/40) of the cases showed an unmethylated DAP-K promoter. Most (85%, 17/20) of the histopathologically confirmed normal tissues showed an unmethylated DAP-K promoter, the DAP-K promoter being methylated in only 3 cases.
The association of promoter hypermethylation with gastric cancer was evaluated by the χ2 (chi-square) test and was found to be significant (P=0.0006). DAP-K methylation was unequally distributed between males and females, with a higher frequency in males, but the difference was not statistically significant (P=0.7635, odds ratio=1.368, 95% C.I.=0.4197 to 4.456). When the frequency of DAP-K promoter methylation was compared with the clinical staging of the disease, it was higher in stage III/IV (85.71%) than in stage I/II (57.69%), but the difference was not statistically significant (P=0.0673). These results suggest that aberrant DAP-K promoter hypermethylation contributes to the process of carcinogenesis in gastric cancer in the Kashmiri population and is reportedly one of the commonest epigenetic changes in the development of gastric cancer. Keywords: gastric cancer, minerals, AAS, hypermethylation, CpG islands, DAP-K gene
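The association test above can be reproduced as a sketch from the counts reported in the abstract (27/13 methylated/unmethylated in tumors, 3/17 in normals). The helper below computes the Pearson chi-square statistic without continuity correction; the abstract does not state which variant was used, so the reported P=0.0006 may reflect a corrected test.

```python
# Pearson chi-square on the 2x2 methylation table from the abstract.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table (no correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    expected = [[row1 * col1 / n, row1 * col2 / n],
                [row2 * col1 / n, row2 * col2 / n]]
    return sum((obs - exp) ** 2 / exp
               for row, erow in zip(table, expected)
               for obs, exp in zip(row, erow))

chi2 = chi_square_2x2([[27, 13], [3, 17]])  # tumor vs. normal, methylated vs. not
# the 1-df critical value at alpha = 0.001 is 10.83, so the association is significant
print(f"chi-square = {chi2:.2f}")
```

The statistic of 14.7 comfortably exceeds the 1-df critical value at the 0.001 level, consistent with the significant association reported.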
Procedia PDF Downloads 513
128 Principal Well-Being at Hong Kong: A Quantitative Investigation
Authors: Junjun Chen, Yingxiu Li
Abstract:
The occupational well-being of school principals plays a vital role in the pursuit of individual and school wellness and success. However, principals' well-being worldwide is under increasing threat because of the challenging and complex nature of their work and growing demands for school standardisation and accountability. Pressure is particularly acute in the post-pandemic future, as principals attempt to deal with the impact of the pandemic on top of more regular demands. This is particularly true in Hong Kong, as school principals are increasingly wedged between unparalleled political, social, and academic responsibilities. Recognizing the semantic breadth of well-being, scholars have not settled on a single, mutually agreeable definition but agree that the concept has multiple dimensions across various disciplines. The multidimensional approach promises more precise assessments of the relationships between well-being and other concepts than the 'affect-only' approach or other single domains, and this multi-dimensional concept of well-being is adopted in this study. This study aimed to understand the state of principal well-being and its influential drivers with a sample of 670 principals from Hong Kong and Mainland China. An online survey was sent to the participants by the researchers after the outbreak of COVID-19. All participants were well informed about the purposes and procedure of the project and the confidentiality of the data prior to filling in the questionnaire. Confirmatory factor analysis and structural equation modelling performed with Mplus were employed to analyze the dataset. The data analysis involved three steps. First, descriptive statistics (e.g., means and standard deviations) were calculated.
Second, confirmatory factor analysis (CFA) with maximum likelihood estimation was used to trim the principal well-being measurement. Third, structural equation modelling (SEM) was employed to test the influential factors of principal well-being. The results indicated that overall principal well-being was above the scale mean. The highest rating given by the principals was to their psychological and social well-being (M = 5.21), followed by spiritual (M = 5.14; SD = .77), cognitive (M = 5.14; SD = .77), emotional (M = 4.96; SD = .79), and physical well-being (M = 3.15; SD = .73); participants ranked their physical well-being the lowest. Moreover, professional autonomy, supervisor and collegial support, school physical conditions, professional networking, and social media showed a significant impact on principal well-being. The findings of this study can potentially enhance not only principal well-being but also the functioning of individual principals and schools, without sacrificing principal well-being for quality education in the process. This would eventually move one step forward to a new future - a wellness society, as advocated by the OECD. Importantly, well-being is an inside job that begins with choosing wellness, while supports for becoming a wellness principal are also imperative. Keywords: well-being, school principals, quantitative, influential factors
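As a small illustration of the descriptive first step, the reported dimension means can be re-entered and ranked. The values below are the means quoted in the abstract; the raw item-level data are not public, so this is a presentation aid rather than a reanalysis.

```python
# Ranking the reported well-being dimension means (values from the abstract).
dims = {"psychological/social": 5.21, "spiritual": 5.14, "cognitive": 5.14,
        "emotional": 4.96, "physical": 3.15}
ranked = sorted(dims.items(), key=lambda kv: kv[1], reverse=True)
for name, mean in ranked:
    print(f"{name}: M = {mean:.2f}")
```

The ranking makes the headline finding immediate: physical well-being trails the other dimensions by roughly two scale points.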
Procedia PDF Downloads 82
127 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exit branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from each single entrance by the amount of flow circulating in front of the entrance itself; such models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method in order to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance so that the most common Highway Capacity Manual (HCM) criteria can be applied. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the recent HCM 7th Edition modern roundabout capacity model.
Then, the two models are combined through a purpose-built iteration-based algorithm linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern that leads to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating under the priority-to-circle rule has already begun, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general setting (urban or rural), and its main geometric patterns. Finally, concluding remarks are drawn, and a discussion of further research developments is opened. Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
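The combined algorithm described above can be illustrated with a fixed-point sketch: iterate HCM-style entry capacities of the form c = A·exp(-B·v_c) until entry flows and conflicting circulating flows are mutually consistent, then compute an HCM-style average control delay per entrance. The parameters A and B are single-lane values of the HCM exponential form used for illustration (not the calibrated HCM 7th Edition parameters), and the four-leg demand pattern and circulating-flow shares are invented.

```python
# Illustrative fixed-point combination of an exponential entry-capacity model
# with an HCM-style control-delay formula for a 4-leg rotary.
import math

def entry_capacity(v_conflict, A=1380.0, B=1.02e-3):
    """Entry capacity (pcu/h) as a function of conflicting circulating flow."""
    return A * math.exp(-B * v_conflict)

def control_delay(v, c, T=0.25):
    """HCM-style average control delay (s/veh) for demand v, capacity c, period T h."""
    x = v / c
    return (3600.0 / c
            + 900.0 * T * ((x - 1) + math.sqrt((x - 1) ** 2
                                               + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

demand = [450.0, 300.0, 500.0, 350.0]  # entry demands (pcu/h), illustrative
circ_share = 0.5  # assumed share of the two upstream entries' flow that conflicts

served = demand[:]
for _ in range(50):  # fixed point: served flows <-> circulating flows <-> capacities
    served = [min(demand[i],
                  entry_capacity(circ_share * (served[i - 1] + served[i - 2])))
              for i in range(4)]

for i in range(4):
    cap = entry_capacity(circ_share * (served[i - 1] + served[i - 2]))
    print(f"entry {i}: capacity = {cap:.0f} pcu/h, "
          f"delay = {control_delay(demand[i], cap):.1f} s/veh")
```

Scaling the demand pattern upward until every entry's demand reaches its capacity would reproduce the total-capacity condition of simultaneous congestion on which the paper's algorithm is anchored; the per-entrance delays then map directly to HCM level-of-service thresholds.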
Procedia PDF Downloads 81