Search results for: infinite slope
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 789

129 A Strategy to Oil Production Placement Zones Based on Maximum Closeness

Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes

Abstract:

Increasing the oil recovery factor of an oil reservoir has long been a concern of the oil industry. Usually, the production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) represents an important step in the process of reservoir characterization, since it may identify specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, support techniques for the placement of production zones that favour the tracing of directional wells. A HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from other reservoir rocks. Technically, a HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified and, using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Taking the MaxC cells as production zones, an extensive analysis, based on several oil recovery factor and cumulative oil production simulations, was carried out for the SPE Model 2 and UNISIM-I-D synthetic fields, where the latter was built from public data of the actual Namorado Field, Campos Basin, Brazil. The results show that the MaxC approach is technically feasible and very reliable for identifying high-performance production placement zones.
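As a rough illustration of the quantities behind HFU identification, the definitions commonly used in the flow-unit literature (the abstract does not spell them out) are RQI = 0.0314*sqrt(k/phi_e), phi_z = phi_e/(1 - phi_e) and FZI = RQI/phi_z, together with one frequent DRT labelling convention. The constant 10.6 and the core-plug input values below are illustrative assumptions, not figures from the paper:

```python
import math

def rqi(k_md, phi_e):
    # Reservoir Quality Index (microns): RQI = 0.0314 * sqrt(k / phi_e)
    return 0.0314 * math.sqrt(k_md / phi_e)

def fzi(k_md, phi_e):
    # Flow Zone Indicator: FZI = RQI / phi_z, with phi_z = phi_e / (1 - phi_e)
    phi_z = phi_e / (1.0 - phi_e)
    return rqi(k_md, phi_e) / phi_z

def drt(fzi_value):
    # One common Discrete Rock Type labelling: DRT = round(2*ln(FZI) + 10.6)
    return round(2.0 * math.log(fzi_value) + 10.6)

# illustrative core plug: k = 100 mD, effective porosity 20 %
print(drt(fzi(100.0, 0.20)))  # DRT class 13
```

Samples sharing a DRT label lie near the same unit-slope line on a log-log RQI versus phi_z plot, which is how HFUs are delineated in practice.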

Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone

Procedia PDF Downloads 323
128 Corrosion of Steel in Relation with Hydrogen Activity of Concentrated HClO4 Media: Realisation of a Sensor and Reference Electrode

Authors: B. Hammouti, H. Oudda, A. Benabdellah, A. Benayada, A. Aouniti

Abstract:

The corrosion behaviour of carbon steel was studied in various concentrated HClO4 solutions. To explain the acid attack in relation to H+ activity, a new sensor was realised: two carbon paste electrodes (CPE) were constructed by incorporating ferrocene (Fc) and an orthoquinone into the carbon paste matrix and crossed by a weak current to stabilize the potential difference. The potentiometric method at imposed weak current between these two electrodes permits the in situ determination of both the concentration and the acidity level of various concentrated HClO4 solutions. The different factors affecting the potential at imposed current, such as current intensity, temperature and H+ ion concentration, are studied. The potentials measured between the ferrocene and chloranil electrodes are directly linked to the acid concentration. The acidity function Ri(H) so defined represents a determination of H+ activity and constitutes an extension of pH to concentrated acid solutions. Ri(H) has been determined and compared to the Strehlow Ro(H), Janata HGF and Hammett Ho functions. The collected data establish a scale of strength for concentrated mineral acids at a given concentration. Ri(H) is numerically equal to the thermodynamic Ro(H), but deviates from the Hammett function, which is based on indicator determination. The CPE with inserted ferrocene, in the presence of ferricinium (Fc+) ion in concentrated HClO4 at various concentrations, is realized without a junction potential and may play the role of a practical reference electrode (FRE) in concentrated acids. Fc+ was easily prepared in a biphasic HClO4 acid medium by the quantitative oxidation of ferrocene by ortho-chloranil (oQ). The potential of the FRE is stable with time. The variation of the equilibrium potential of the Fc/Fc+ interface at various concentrations of Fc+ (10-4 - 2 10-2 M) obeyed the Nernst equation with a slope of 0.059 V per decade. Corrosion rates obtained by weight loss and electrochemical techniques were then readily linked to the acidity level.
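The quoted slope of 0.059 V per decade is simply the room-temperature Nernst slope for a one-electron couple such as Fc/Fc+. A quick check using standard physical constants (not data from the paper):

```python
import math

# Nernst slope at 25 degC for a one-electron couple such as Fc/Fc+:
# E = E0 + (R*T / (n*F)) * ln(a_ox / a_red)  ->  2.303*R*T/F volts per decade
R = 8.314        # gas constant, J/(mol K)
F = 96485.0      # Faraday constant, C/mol
T = 298.15       # 25 degC in kelvin
n = 1            # electrons transferred
slope = math.log(10.0) * R * T / (n * F)
print(round(slope, 4))  # 0.0592 V per decade, i.e. the ~0.059 V quoted
```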

Keywords: ferrocene, Strehlow, concentrated acid, corrosion, generalised pH, sensor, carbon paste electrode

Procedia PDF Downloads 352
127 Woody Carbon Stock Potentials and Factors Affecting Their Storage in Munessa Forest, Southern Ethiopia

Authors: Mojo Mengistu Gelasso

Abstract:

Tropical forests are considered the most important forest ecosystems for mitigating climate change, since they sequester large amounts of carbon. The potential carbon stock of a forest can be influenced by many factors, so studying these factors is crucial for understanding the determinants of woody carbon storage. This study evaluated the woody carbon stock potential of the Munessa dry Afromontane forest and how it varies with plant community type and along altitudinal, slope, and aspect gradients. Vegetation data were collected using systematic sampling: five line transects were established along the altitudinal gradient, with 100 m between consecutive transect lines, and on each transect ten quadrats (20 m x 20 m), separated by 200 m, were established. Woody carbon was estimated using an allometric equation formulated for tropical forests, and the data were analyzed using one-way ANOVA in R. The results showed that the total woody carbon stock of the Munessa forest was 210.43 t/ha. The analysis of variance revealed that woody carbon density varied significantly with environmental factors, while community type had no significant effect. The highest mean carbon stock was found at middle altitudes (2367-2533 m a.s.l.), on lower slopes (0-13%), and on west-facing aspects. The Podocarpus falcatus-Croton macrostachyus community type also contributed a higher woody carbon stock, as it was dominated by larger tree size classes and older trees. Overall, the woody carbon sequestration potential in this study was strongly associated with environmental variables. Additionally, the uneven distribution of species with larger diameter at breast height (DBH) in the study area might be linked to anthropogenic factors, as the current forest growth shows characteristics of a secondary forest. Our study therefore suggests that developing and implementing a sustainable forest management plan is necessary to increase the carbon sequestration potential of this forest and mitigate climate change.
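The abstract does not name the allometric equation used. As a hedged sketch, one widely used pantropical model (Chave et al., 2014), combined with the common assumption that about 47% of dry biomass is carbon, looks like this; the sample tree inputs are hypothetical:

```python
def chave_agb_kg(rho_g_cm3, dbh_cm, height_m):
    # Chave et al. (2014) pantropical allometry:
    # AGB [kg] = 0.0673 * (rho * D^2 * H) ** 0.976
    return 0.0673 * (rho_g_cm3 * dbh_cm ** 2 * height_m) ** 0.976

def carbon_kg(agb, carbon_fraction=0.47):
    # ~47 % of woody dry biomass is carbon (a common default, IPCC)
    return carbon_fraction * agb

# hypothetical tree: wood density 0.6 g/cm3, DBH 30 cm, height 20 m
tree_agb = chave_agb_kg(0.6, 30.0, 20.0)
print(round(carbon_kg(tree_agb), 1))
```

Per-plot carbon stocks (t/ha) are then the summed tree carbon divided by plot area, here 0.04 ha per 20 m x 20 m quadrat.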

Keywords: munessa forest, woody carbon stock, environmental factors, climate mitigation

Procedia PDF Downloads 74
125 Determining Optimum Locations for Runoff Water Harvesting in W. Watir, South Sinai, Using RS, GIS, and WMS Techniques

Authors: H. H. Elewa, E. M. Ramadan, A. M. Nosair

Abstract:

Rainfall water harvesting is considered an important tool for overcoming water scarcity in arid and semi-arid regions. Wadi Watir, in the southeastern Sinai Peninsula, is one of the main active basins in the Gulf of Aqaba drainage system. It is characterized by steep hills consisting mainly of impermeable rocks, whereas the streambeds are covered by a highly permeable mixture of gravel and sand. A comprehensive approach integrating geographic information systems, remote sensing and watershed modeling was followed to identify the runoff water harvesting (RWH) capability of this area. Eight thematic layers, viz. volume of annual flood, overland flow distance, maximum flow distance, rock or soil infiltration, drainage frequency density, basin area, basin slope and basin length, were used as a multi-parametric decision support system for conducting weighted spatial probability models (WSPMs) to determine the potential areas for RWH. The WSPM maps classified the area into five RWH potentiality classes, ranging from very low to very high. The three WSPM scenarios performed for W. Watir gave identical results for the high and very high RWH potentiality classes, which are the most suitable for applying surface water harvesting techniques. There is also a reasonable match among the three scenarios for the areas of moderate, low and very low runoff harvesting potentiality. The WSPM results showed that the high and very high classes, which are the most suitable for RWH, represent approximately 40.23% of the total basin area. Accordingly, several locations were selected for the establishment of water harvesting dams and cisterns to improve the water conditions and living environment in the study area.
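The WSPM step can be sketched as a weighted overlay: each thematic layer contributes a normalized rank multiplied by a weight, and the summed score is binned into the five potentiality classes. The layer weights and cell values below are hypothetical, for illustration only:

```python
def wspm_score(layer_ranks, weights):
    # weighted overlay: sum of (weight * normalized rank) over thematic layers
    return sum(weights[name] * rank for name, rank in layer_ranks.items())

def potentiality_class(score):
    # five equal-interval RWH potentiality classes on a 0-1 score
    labels = ["very low", "low", "moderate", "high", "very high"]
    return labels[min(int(score * 5.0), 4)]

# hypothetical cell with two of the eight layers, weights summing to 1
cell = {"basin slope": 0.9, "volume of annual flood": 0.8}
weights = {"basin slope": 0.4, "volume of annual flood": 0.6}
score = wspm_score(cell, weights)
print(potentiality_class(score))  # very high
```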

Keywords: Sinai, Wadi Watir, remote sensing, geographic information systems, watershed modeling, runoff water harvesting

Procedia PDF Downloads 355
124 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline of coal prices over the past years has significantly increased the need for effective mining operation. Viable steps must be undertaken to become more cost competitive while striving for best mining practice, especially at the Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented when it is controlled by a contract rate on a unit basis of USD ($) per bcm. The method emerges from the daily dozer push activity that continually shifts the overburden towards the final target design set by mine planning. Volume is then calculated, each time overburden is removed within the determined distance, by the cut and fill method using a high-precision GNSS system fitted to the dozer as guidance, to ensure the optimum result of overburden removal. The accumulated daily-to-weekly dozer push volume was found to be 95 bcm, which, multiplied by an average sell rate of $0.95, gives a monthly revenue of $90.25. Furthermore, the payment mechanism is based on push distance and push grade. The push distance interval determines the rates, which vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, from -25% to +25%. The payable rates for the dozer push operation follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time Jakarta Interbank Spot Dollar Rate (JISDOR). The results indicate that dozer push measurement can be a viable surface mining alternative, since it enables refinement of work methods, operating cost and productivity, apart from reducing the exposure to poor rented equipment performance. In addition, a payment mechanism based on contract-rate dozer push scheduling can ultimately deliver clients almost 45% cost reduction in the form of low and consistent costs.
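A minimal sketch of the volume and claim arithmetic described above. Only the quoted 95 bcm at $0.95/bcm figure is reproduced; the cut-and-fill helper is an illustrative simplification of surface differencing, not the authors' GNSS workflow:

```python
def cut_volume_bcm(cell_area_m2, z_before_m, z_after_m):
    # cut-and-fill: sum positive surface drops times the cell footprint area
    return sum(cell_area_m2 * max(zb - za, 0.0)
               for zb, za in zip(z_before_m, z_after_m))

def monthly_claim_usd(volume_bcm, rate_usd_per_bcm):
    # claim = pushed overburden volume x contracted unit rate
    return round(volume_bcm * rate_usd_per_bcm, 2)

# the figures quoted in the abstract: 95 bcm at an average $0.95/bcm
print(monthly_claim_usd(95.0, 0.95))  # 90.25
```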

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 312
123 Study of Lanthanoide Organic Frameworks Properties and Synthesis: Multicomponent Ligands

Authors: Ayla Roberta Galaco, Juliana Fonseca De Lima, Osvaldo Antonio Serra

Abstract:

Coordination polymers, also known as metal-organic frameworks (MOFs) or, with lanthanoide centres, lanthanoide organic frameworks (LOFs), have been widely reported because of their promising applications in gas storage, separation, catalysis, luminescence, magnetism, drug delivery, and so on. As a type of organic-inorganic hybrid material, the properties of coordination polymers can be tuned by deliberately selecting the organic and inorganic components. LOFs have received considerable attention because of properties such as porosity, luminescence, and magnetism. Methods such as solvothermal synthesis are important as a strategy to control the structural and morphological properties as well as the composition of the target compounds. In this work, solvothermal synthesis was first employed to obtain the compound [Y0.4,Yb0.4,Er0.2(dmf)(for)(H2O)(tft)], using terephthalic acid (tft) and oxalic acid, decomposed to formate (for), as ligands; yttrium, ytterbium and erbium as metal centres; in DMF and water, for 4 days at 160 °C. The semi-rigid terephthalic acid (dicarboxylic) coordinates with Ln3+ ions and can also form a polyfunctional bridge. On the other hand, the oxalate anion has no high-energy vibrational groups, which benefits the excitation of Yb3+ in the upconversion process. It was observed that compounds with water molecules in the coordination sphere of the lanthanoide ions show lower crystallinity and a changed LOF structure (1D, 2D, 3D). In the FTIR spectrum, the bands at 1589 and 1500 cm-1 correspond to the asymmetric stretching vibration of -COO, and the band at 1383 cm-1 is assigned to its symmetric stretching vibration. A single-crystal X-ray diffraction study reveals an infinite 3D coordination framework that crystallizes in space group P21/c. The other three products, [TR(chel)(ofd)0.5(H2O)2], where TR = Eu3+, Y3+, and Yb3+/Er3+, were obtained by using 1,2-phenylenedioxydiacetic acid (ofd) and chelidonic acid (chel) as organic ligands. Thermal analysis shows that the lanthanoide organic frameworks do not collapse at temperatures below 250 °C. From the polycrystalline X-ray diffraction (PXRD) patterns it was observed that the compounds with Eu3+, Y3+, and Yb3+/Er3+ ions are isostructural, and high crystallinity can be noticed for the complexes. The final products were characterized by single-crystal X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), energy dispersive spectroscopy (EDS) and thermogravimetric analysis (TGA). XRD is an effective method to investigate the crystalline properties of synthesized materials; the solid crystals obtained in the synthesis show peaks at 2θ < 10°, indicating MOF formation. The chemical composition of the LOFs was also confirmed by EDS.
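The significance of peaks at 2θ < 10° can be checked with Bragg's law: assuming Cu Kα radiation (the abstract does not state the source), such angles correspond to d-spacings of roughly 8.8 Å and above, i.e. the large repeat distances expected of a porous framework:

```python
import math

def bragg_d_angstrom(two_theta_deg, wavelength=1.5406):
    # Bragg's law with n = 1: lambda = 2 * d * sin(theta)
    # default wavelength is Cu K-alpha (an assumption, not from the paper)
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# a reflection at 2-theta = 10 deg with Cu K-alpha radiation
print(round(bragg_d_angstrom(10.0), 2))  # 8.84 angstrom
```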

Keywords: isostructural, lanthanoids, lanthanoide organic frameworks (LOFs), metal organic frameworks (MOFs), thermogravimetry, X-ray diffraction

Procedia PDF Downloads 255
122 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed

Authors: M. P. Tripathi, Priti Tiwari

Abstract:

The Soil and Water Assessment Tool (SWAT) is a distributed-parameter, continuous-time model, and was tested on daily and fortnightly bases for a small agricultural watershed (Dhangaon) of Chhattisgarh state in India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the fraction of field capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using observed daily rainfall and temperature data. Calibration and validation results revealed that the model predicted daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas annual runoff and sediment yields were directly proportional to the FFC. The model was also calibrated and validated for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated fortnightly runoff and sediment yields for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model can be used for identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating surface runoff and sediment yield using generated rainfall and temperature data before applying it to develop management scenarios for the critical or priority sub-watersheds.
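The abstract does not name the goodness-of-fit statistic behind "satisfactorily"; the Nash-Sutcliffe efficiency (NSE) is the customary one for SWAT runoff and sediment calibration, and a minimal sketch (with hypothetical daily values) is:

```python
def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean_obs)^2)
    # NSE = 1 is a perfect fit; NSE <= 0 is no better than the observed mean.
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

daily_obs = [0.0, 2.1, 5.4, 3.2, 1.0]   # hypothetical runoff, mm
daily_sim = [0.1, 1.8, 5.0, 3.5, 0.9]
print(round(nse(daily_obs, daily_sim), 3))
```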

Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield

Procedia PDF Downloads 376
121 Advanced Study on Hydrogen Evolution Reaction Based on Nickel Sulfide Catalyst

Authors: Kishor Kumar Sadasivuni, Mizaj Shabil Sha, Assim Alajali, Godlaveeti Sreenivasa Kumar, Aboubakr M. Abdullah, Bijandra Kumar, Mithra Geetha

Abstract:

A potential pathway to efficient hydrogen production is water-splitting electrolysis, in which catalysis or electrocatalysis plays a crucial role in energy conversion and storage. Hydrogen generation by electrocatalytic water splitting requires active, stable, and low-cost catalysts or electrocatalysts to be developed for practical applications. In this study, we evaluated a combination of 2D materials with NiS nanoparticle catalysts for the hydrogen evolution reaction. The photocatalytic H₂ production rate of this nanoparticle system is high and exceeds that obtained with the components alone. The nanoparticles serve as electron collectors and transporters, which explains this improvement. Moreover, the current was higher by 0.393 mA at a reduced working potential. Density functional theory calculations indicate that the nanoparticles' catalytic activity for the hydrogen evolution reaction arises from strong interaction between the components at the interface. The samples were analyzed structurally by XPS and morphologically by FESEM. This nanocomposite demonstrated higher electrocatalytic activity and a low Tafel slope of 60 mV/dec. Additionally, after 1000 cycles of a durability test, the electrocatalyst still displays excellent stability with minimal current loss. The produced catalyst shows considerable potential for use in hydrogen evolution owing to its robust synthesis. According to these findings, the combination of 2D materials with nickel sulfide functions as a good electrocatalyst for H₂ evolution, and ongoing research in this fascinating field will push nickel sulfide-based technology closer to industrial reality, helping to resolve existing energy issues in a sustainable and clean manner.
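The reported 60 mV/dec Tafel slope relates overpotential to current density via the Tafel relation, eta = b * log10(j/j0). A small sketch of what that slope implies (j0 here is an illustrative placeholder, not a measured exchange current):

```python
import math

def tafel_overpotential_v(j, j0, b=0.060):
    # Tafel relation: eta = b * log10(j / j0), b = Tafel slope in V/decade
    return b * math.log10(j / j0)

# with the reported 60 mV/dec slope, each tenfold increase in current
# density costs 60 mV of additional overpotential
print(round(1000.0 * tafel_overpotential_v(10.0, 1.0), 1))  # 60.0 mV
```

A lower Tafel slope therefore means the catalyst reaches higher currents with smaller overpotential penalties.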

Keywords: electrochemical hydrogenation, nickel sulfide, electrocatalysts, energy conversion, catalyst

Procedia PDF Downloads 117
120 Urban Imperviousness and Its Impact on Storm Water Drainage Systems

Authors: Ratul Das, Udit Narayan Das

Abstract:

Surface imperviousness in urban areas brings significant changes to storm water drainage systems, and some recent studies reveal that the impervious surface that passes storm water runoff directly to the drainage system through storm water collection systems, called directly connected impervious area (DCIA), is a more effective parameter than total impervious area (TIA) for computation of surface runoff. In the present study, the extents of DCIA and TIA were computed for a small sub-urban area of Agartala, the capital of the state of Tripura. The total impervious surfaces covering the study area were identified on the existing storm water drainage map from the land use map of the study area, in association with field assessments. Also, the DCIA assessed through field survey was compared to the DCIA computed by empirical relationships provided by other investigators. Two methods were adopted for the assessment of DCIA in the study area. In the first, the study area was partitioned into four drainage sub-zones based on average basin slope and the layout of the existing storm water drainage systems. In the second, the entire study area was divided into small grids, each parcel comprising a 20 m x 20 m area. Total impervious surfaces were delineated from the land use map in association with on-site assessments for efficient determination of DCIA within each sub-area and grid. There was a wide variation in the percent connectivity of TIA across the sub-drainage zones and grids. In the present study, total impervious area comprises 36.23% of the study area, of which 21.85% of the total study area is connected to storm water collection systems. Total pervious area (TPA) and other surfaces comprise 53.20% and 10.56% of the total area, respectively. The TIA recorded by field assessment (36.23%) was considerably higher than that calculated from the available land use map (22%). From the analysis of the recorded data, the average percentage of connectivity (% DCIA with respect to TIA) is 60.31%. The analysis also reveals that the observed DCIA lies below the line of optimal impervious surface connectivity for a sub-urban area provided by other investigators, which indicates a probable reason for the waterlogging observed in many parts of the study area during the monsoon period.
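The connectivity figure can be reproduced directly from the percentages quoted above:

```python
# percentages quoted in the abstract (% of the whole study area)
tia_pct = 36.23    # total impervious area
dcia_pct = 21.85   # directly connected impervious area
tpa_pct = 53.20    # total pervious area
other_pct = 10.56  # remaining surfaces

# connectivity = DCIA as a share of TIA
connectivity_pct = 100.0 * dcia_pct / tia_pct
print(round(connectivity_pct, 2))  # 60.31, the value reported above
```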

Keywords: drainage, imperviousness, runoff, storm water

Procedia PDF Downloads 345
119 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. A solution for preparedness and mitigation is required to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and the Hidden Markov Model in landslide prediction, and the possibility of applying these technologies to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. The models acquire the relationship between the landslide factors and the hazard index during the training session. With the landslide-related factors as inputs, the models are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur' and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rate and false rejection rate. This research indicates that the Artificial Neural Network can be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
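The false acceptance and false rejection rates used for the comparison can be computed from predictions as in this sketch; the class labels come from the abstract, but the evaluation data are hypothetical:

```python
def far_frr(y_true, y_pred, positive="landslide occurs"):
    # FAR: share of non-events wrongly predicted as the positive class
    # FRR: share of true events the model fails to predict
    fa = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fr = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    negatives = sum(1 for t in y_true if t != positive)
    positives = len(y_true) - negatives
    return fa / negatives, fr / positives

# hypothetical evaluation data using the class labels from the abstract
truth = ["landslide occurs", "landslide does not occur",
         "landslide does not occur", "landslide occurs"]
preds = ["landslide occurs", "landslide occurs",
         "landslide does not occur", "landslide does not occur"]
print(far_frr(truth, preds))  # (0.5, 0.5)
```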

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 382
118 New Coating Materials Based on Mixtures of Shellac and Pectin for Pharmaceutical Products

Authors: M. Kumpugdee-Vollrath, M. Tabatabaeifar, M. Helmis

Abstract:

Shellac is a natural polyester resin secreted by insects. Pectins are natural, non-toxic and water-soluble polysaccharides extracted from the peels of citrus fruits or the leftovers of apples. Both polymers are approved for use in the pharmaceutical industry and as food additives. SSB Aquagold® is an aqueous solution of shellac and can be used in a coating process as an enteric or controlled-release polymer. In this study, tablets containing 10 mg methylene blue as a model drug were prepared with a rotary press. The tablets were coated with mixtures of shellac and one of several pectin types (i.e. CU 201, CU 501, CU 701 and CU 020), mostly in a 2:1 ratio, or with pure shellac, in a small-scale fluidized bed apparatus. A stable, simple and reproducible three-stage coating process was successfully developed. The drug content of the coated tablets was determined using a UV-VIS spectrophotometer. The characterization of the surface and the film thickness was performed with scanning electron microscopy (SEM) and light microscopy. Release studies were performed in a dissolution apparatus with a basket. Most of the formulations were enteric coated. The dissolution profiles showed a delayed or sustained release with a lag time of at least 4 h. Dissolution profiles of tablets coated with pure shellac had a very long lag time, ranging from 13 to 17.5 h, and quite high slopes. The duration of the lag time and the slope of the dissolution profiles could be adjusted by adding the proper type of pectin to the shellac formulation and by varying the coating amount. For a coating formulation to serve as a colon delivery system, the prepared film should be resistant to gastric fluid for at least 2 h and to intestinal fluid for 4-6 h. The required delay time was achieved with most of the shellac-pectin polymer mixtures. The release profiles were fitted with the modified Korsmeyer-Peppas equation and the Hixson-Crowell model. A correlation coefficient (R²) > 0.99 was obtained with the Korsmeyer-Peppas equation.
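The Korsmeyer-Peppas model writes the released fraction as Mt/Minf = k * t^n, so k and n are typically recovered by a log-log least-squares fit. A self-contained sketch (not the authors' fitting code; the synthetic profile is generated exactly on the model):

```python
import math

def korsmeyer_peppas_fit(times, fractions):
    # fit log10(Mt/Minf) = log10(k) + n*log10(t) by ordinary least squares;
    # the exponent n characterizes the drug transport mechanism
    xs = [math.log10(t) for t in times]
    ys = [math.log10(f) for f in fractions]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = 10.0 ** (my - n * mx)
    return k, n

# synthetic release profile from Mt/Minf = 0.1 * t^0.5
times = [1.0, 2.0, 4.0, 8.0]
fractions = [0.1 * t ** 0.5 for t in times]
k, n = korsmeyer_peppas_fit(times, fractions)
print(round(k, 3), round(n, 3))  # 0.1 0.5
```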

Keywords: shellac, pectin, coating, fluidized bed, release, colon delivery system, kinetic, SEM, methylene blue

Procedia PDF Downloads 404
117 The Effectiveness of Energy-related Tax in Curbing Transport-related Carbon Emissions: The Role of Green Finance and Technology in OECD Economies

Authors: Hassan Taimoor, Piotr Krajewski, Piotr Gabrielzcak

Abstract:

As the largest source of energy-related emissions, the transportation sector accounts for more than half of global oil demand and a large share of total energy consumption, making it a crucial factor in tackling climate change and environmental degradation. The present study empirically tests the effectiveness of energy-related tax (TXEN) in curbing transport-related carbon emissions (CO2TRANSP) in Organisation for Economic Co-operation and Development (OECD) economies over the period 1990-2020. Moreover, green finance (GF), technology (TECH), and gross domestic product (GDP) are added as explanatory factors that might affect CO2TRANSP emissions. The study employs the Method of Moments Quantile Regression (MMQR), an advanced econometric technique, to observe the variations along each quantile. Based on the results of the preliminary tests, we confirm the presence of cross-sectional dependence and slope heterogeneity, whereas the panel unit root tests report a mixed order of integration of the variables. The findings reveal that a rise in income level increases CO2TRANSP, confirming the first stage of the Environmental Kuznets hypothesis. Surprisingly, the present TXEN policies of OECD member states are not mature enough to tackle CO2TRANSP emissions. However, the findings confirm that GF and TECH are responsible for the reduction in CO2TRANSP. The outcomes of Bootstrap Quantile Regression (BSQR) further validate and support the earlier findings of the MMQR. Based on these findings, the current TXEN policies appear too moderate, and an incremental and progressive rise in TXEN may help the transition toward a cleaner and more sustainable transportation sector in the study region.
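Quantile-regression methods such as MMQR and BSQR are built on the pinball (check) loss, rho_tau(u) = u * (tau - 1[u < 0]); minimizing its mean yields the tau-th conditional quantile. A minimal illustration of that loss (not an implementation of MMQR itself):

```python
def pinball_loss(y_true, y_pred, tau):
    # quantile (pinball) loss: rho_tau(u) = u * (tau - 1[u < 0]), u = y - yhat
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        u = yt - yp
        total += u * (tau - (1.0 if u < 0.0 else 0.0))
    return total / len(y_true)

# at the upper quantiles (tau = 0.9), under-prediction is penalized
# nine times more heavily than over-prediction
print(pinball_loss([1.0], [0.0], 0.9))            # 0.9
print(round(pinball_loss([0.0], [1.0], 0.9), 2))  # 0.1
```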

Keywords: transport-related CO2 emissions, energy-related tax, green finance, technological development, OECD member states

Procedia PDF Downloads 72
116 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium in the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the value of the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loading makes it difficult to estimate an accurate value for the computation of foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use the appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. A MATLAB code has been written for these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soil of different plasticity (herein, PI=13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. 
The static stiffness, dynamic stiffness modifiers and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions have been calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It is easy to see from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that when the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
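Because the static stiffness of a surface footing scales linearly with the operative shear modulus G, the effect of a strain-dependent modulus reduction can be sketched directly. The snippet below is a minimal Python illustration (not the study's MATLAB code), using a Gazetas-type fit for vertical static stiffness; the small-strain modulus, reduction factor and Poisson's ratio are assumed values.

```python
import numpy as np

def static_stiffness_vertical(G, nu, B, L):
    """Gazetas-type vertical static stiffness of a rigid rectangular
    footing (half-width B, half-length L) on an elastic halfspace.
    Coefficients are illustrative, not calibrated to the study."""
    return (2.0 * G * L / (1.0 - nu)) * (0.73 + 1.54 * (B / L) ** 0.75)

def dynamic_impedance(K_static, a0, k_dyn=1.0, c_dyn=0.8):
    """Complex impedance K(a0) = K_static * (k + i*a0*c); the dynamic
    modifiers k, c are taken as constants here for simplicity."""
    return K_static * (k_dyn + 1j * a0 * c_dyn)

G_max = 60e6               # small-strain shear modulus of the clay, Pa (assumed)
reduction = 0.35           # G/G_max at the induced strain level (assumed)
nu = 0.45                  # Poisson's ratio (assumed)
B, L = 2.0 / 2, 17.0 / 2   # half-dimensions of the 2 m x 17 m footing

K_full = static_stiffness_vertical(G_max, nu, B, L)
K_red = static_stiffness_vertical(reduction * G_max, nu, B, L)
K_dyn = dynamic_impedance(K_red, a0=1.0)
print(K_red / K_full)      # stiffness scales linearly with G -> 0.35
```

Because stiffness is linear in G, a reduction factor of 0.35 cuts the foundation stiffness to 35% of its small-strain value, which is the degradation effect discussed above.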

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 263
115 Developing Confidence of Visual Literacy through Using MIRO during Online Learning

Authors: Rachel S. E. Lim, Winnie L. C. Tan

Abstract:

Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through the critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study therefore investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb’s Experiential Learning Cycle, the two lecturers developed students’ engagement with visual literacy concepts through different activities that facilitated concrete experience, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO with its infinite canvas, smart frameworks, robust set of widgets (i.e., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and powerful platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students’ perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) measured against their application of visual literacy to the creation of their motion design project. 
While students’ learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students’ open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of these conditions set in place, the prerequisites then lead to the next level of personal action: self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and compensate for the potential pitfalls of student isolation, communication, and engagement during online learning. How lecturers could use MIRO to orientate students for learning in visual literacy and studio-based projects is discussed for future development.

Keywords: design education, graphic communication, online learning, visual literacy

Procedia PDF Downloads 110
114 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under the Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating a 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. 
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
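The frequency-ratio and weighted-integration steps described above can be sketched in a few lines; the rasters, classes and AHP weights below are hypothetical stand-ins for the study's nine factors.

```python
import numpy as np

def frequency_ratio(class_map, sinkhole_mask):
    """FR per attribute class: (% of sinkholes in the class) divided by
    (% of study area in the class); FR > 1 marks a sinkhole-prone class."""
    fr = {}
    for c in np.unique(class_map):
        in_class = class_map == c
        pct_sink = sinkhole_mask[in_class].sum() / max(sinkhole_mask.sum(), 1)
        pct_area = in_class.sum() / class_map.size
        fr[c] = pct_sink / pct_area
    return fr

# hypothetical 4x4 slope-class raster and sinkhole inventory
slope_cls = np.array([[1, 1, 2, 2],
                      [1, 1, 2, 2],
                      [1, 2, 2, 2],
                      [1, 1, 1, 2]])
sinks = np.zeros_like(slope_cls, dtype=bool)
sinks[0, 2] = sinks[1, 3] = True           # both sinkholes lie in class 2

fr = frequency_ratio(slope_cls, sinks)     # {1: 0.0, 2: 2.0}
fr_raster = np.vectorize(fr.get)(slope_cls)
weights = {"slope": 0.6, "geology": 0.4}   # AHP-style weights (assumed)
ssi = weights["slope"] * fr_raster         # + weighted FR rasters of other factors
```

The full SSI is the weighted sum of one FR raster per influencing factor, which is then sliced into the five susceptibility zones.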

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 70
113 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model

Authors: S. I. Mukhin, S. Seidov, A. Mukherjee

Abstract:

The Dicke model is a key tool for the description of correlated states of quantum atomic systems that are excited by resonant photon absorption and subsequently emit spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson junction (JJ) array in a resonant cavity under an applied current. In this work, we have investigated a generalized model, described by the DH with a frustrating interaction term. This frustrating term is an infinitely coordinated interaction between all the spin-1/2 degrees of freedom in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction and operated in the charge qubit / Cooper pair box (CPB) regime. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Using Heisenberg’s dynamical equations for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales. The analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles connecting the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we have then extended the analysis to the DH with the frustrating interaction term. 
The inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using a perturbation technique for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous-symmetry-breaking term in the DH, we have derived solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results agree with existing results in this field.
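Schematically, the extended Hamiltonian described above has the structure below; the notation (cavity frequency ω, level splitting ε, coupling λ, frustration strength g) is assumed here for illustration and need not match the authors' exact parametrization:

```latex
H = \hbar\omega\, a^{\dagger}a + \varepsilon\,\hat{S}_{z}
  + \frac{\lambda}{\sqrt{N}}\left(a^{\dagger}+a\right)\hat{S}_{x}
  + \frac{g}{N}\,\hat{S}_{x}^{2},
\qquad
\hat{S}_{\alpha} = \frac{1}{2}\sum_{i=1}^{N}\sigma_{\alpha}^{(i)},
```

where the last term is the infinitely coordinated (all-to-all) frustrating interaction among the N pseudospins; setting g = 0 recovers the ordinary Dicke Hamiltonian solved first.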

Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity

Procedia PDF Downloads 130
112 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Landslides can seriously harm both the environment and society, and methods such as the frequency ratio (FR) and the analytical hierarchy process (AHP) have been developed on the basis of past landslide failure points to produce landslide susceptibility maps. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for a particular region. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consisted of areas with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns of landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. 
It also suggests that different areas should adopt different safeguards to reduce or prevent serious damage from landslide events.
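The AUC values quoted above measure how well each model's susceptibility scores rank landslide cells above stable cells. A minimal, library-free sketch of that metric, with hypothetical scores:

```python
def auc_score(labels, scores):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scores for 4 landslide (1) and 4 stable (0) inventory cells
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(auc_score(labels, scores))  # 0.9375
```

An AUC of 0.5 corresponds to random ranking, so the RF value of 0.94 reported above indicates near-perfect separation of landslide and stable cells.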

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 73
111 Seamounts and Submarine Landslides: Study Case of Island Arcs Area in North of Sulawesi

Authors: Muhammad Arif Rahman, Gamma Abdul Jabbar, Enggar Handra Pangestu, Alfi Syahrin Qadri, Iryan Anugrah Putra, Rizqi Ramadhandi

Abstract:

Indonesia lies above three major tectonic plates: the Indo-Australian, Eurasian, and Pacific plates. Interactions between these plates have resulted in high tectonic and volcanic activity, which correlates with a high risk of geological hazards in adjacent areas; one of these areas is north of Sulawesi’s islands. This raises a problem for infrastructure, both in mitigating risks to existing infrastructure and in planning future infrastructure. One essential piece of telecommunication infrastructure is the submarine fiber optic cable, which acts as a backbone of telecommunication yet is exposed to geological hazards. Damaged fiber optic cables can pose serious problems, causing loss of signal with negative impacts on people’s social and economic well-being and degraded performance of various governmental services. Submarine cables face challenges from geological hazards, for instance seamount activity. Previous studies show that, up to 2023, five seamounts have been identified north of Sulawesi. Seamounts can damage cables directly and can trigger processes that put submarine cables at risk; one example is the submarine landslide. The main focuses of this study are to identify possible new seamounts and submarine landslide paths in the area north of the Sulawesi islands, to help minimize the risks posed by those hazards to existing and planned submarine cables. Using bathymetry data, this study conducted slope analysis and used distinctive morphological features to interpret possible seamounts. We then mapped the valleys between seamounts, determined where sediments might flow in the event of a landslide, and finally assessed how this affects submarine cables in the area.
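The slope analysis used to pick out seamount flanks and landslide-prone gradients can be sketched with central differences on a gridded bathymetric surface; the grid below is a hypothetical planar ramp, not the study's data.

```python
import numpy as np

def slope_deg(depth, cell=1.0):
    """Slope magnitude (degrees) of a gridded surface via central
    differences; `cell` is the grid spacing in the same units as depth."""
    dzdy, dzdx = np.gradient(depth, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# hypothetical 1 m grid: a planar ramp deepening 1 m per 1 m in x
ramp = np.tile(np.arange(5.0), (5, 1))
print(slope_deg(ramp)[2, 2])   # interior cells dip at 45 degrees
```

Thresholding such a slope raster, together with the conical morphology of candidate highs, is one simple way to flag possible seamounts and the valleys that would channel landslide sediment.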

Keywords: bathymetry, geological hazard, mitigation, seamount, submarine cable, submarine landslide, volcanic activity

Procedia PDF Downloads 63
110 Optimisation of Energy Harvesting for a Composite Aircraft Wing Structure Bonded with Discrete Macro Fibre Composite Sensors

Authors: Ali H. Daraji, Ye Jianqiao

Abstract:

The microelectronic devices of wireless sensor networks are continuously being developed and have become very small and compact, with low electric power requirements, yet they rely on conventional batteries of limited life. The low power requirement of these devices and the cost of conventional batteries and their replacement have encouraged researchers to find an alternative power supply in the form of an energy harvesting system, providing an electric power supply of effectively unlimited lifetime. In the last few years, the investigation of energy harvesting for structural health monitoring has increased, powering wireless sensor networks by converting waste mechanical vibration into electricity using piezoelectric sensors. Optimisation of energy harvesting is an important research topic to ensure an efficient flow of electric power from structural vibration. The harvested power depends mainly on the properties of the piezoelectric material, the dimensions of the piezoelectric sensor, its position on the structure, and the value of the external electric load connected between the sensor electrodes. A larger sensor surface area does not guarantee larger harvested power when the sensor area covers positive and negative mechanical strain at the same time, as this leads to reduction or cancellation of the piezoelectric output power. Optimisation of energy harvesting is therefore achieved by locating these sensors precisely and efficiently on the structure. Limited published work has investigated energy harvesting for aircraft wings, and most of the published studies have simplified the aircraft wing structure to a cantilever flat plate or beam. In these studies, the optimisation of energy harvesting was investigated by determining the optimal value of an external electric load connected between the sensor electrode terminals, by an external electric circuit, or by randomly splitting the piezoelectric sensor into two segments. 
However, aircraft wing structures are more complex than a beam or flat plate, being mostly constructed from flat and curved skins stiffened by stringers and ribs, with more complex mechanical strain induced on the wing surfaces. Here, an aircraft wing structure bonded with discrete macro fibre composite sensors was modelled using multiphysics finite element analysis to optimise the energy harvesting by determining the optimal number of sensors, their locations and the output resistance load. The optimal number and locations of macro fibre sensors were determined by maximizing the open- and closed-loop sensor output voltage using frequency response analysis. Different optimal distributions, locations and numbers of sensors were found for the top and bottom surfaces of the aircraft wing.
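The role of the external resistive load can be illustrated with a simple lumped model in which the piezoelectric sensor acts as an AC voltage source behind its internal capacitance, so the average power delivered to a load R peaks near R = 1/(ωC). All numbers below are assumed for illustration and are not taken from the study.

```python
import numpy as np

V = 5.0                       # open-circuit voltage amplitude, V (assumed)
C = 50e-9                     # internal capacitance of the patch, F (assumed)
w = 2 * np.pi * 100           # vibration frequency, rad/s (100 Hz assumed)

R = np.logspace(3, 7, 2000)   # candidate loads, 1 kOhm .. 10 MOhm
Z_c = 1.0 / (w * C)           # magnitude of the capacitive source impedance
P = 0.5 * V**2 * R / (R**2 + Z_c**2)   # average power into the load
R_best = R[np.argmax(P)]
print(R_best, Z_c)            # both near 31.8 kOhm
```

Sweeping the load and picking the maximum reproduces the matched-load rule R_opt = 1/(ωC), a common first estimate in harvester design.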

Keywords: energy harvesting, optimisation, sensor, wing

Procedia PDF Downloads 296
109 Quantitative and Qualitative Analysis of Randomized Controlled Trials in Physiotherapy from India

Authors: K. Hariohm, V. Prakash, J. Saravana Kumar

Abstract:

Introduction and Rationale: The increased scope of physiotherapy (PT) practice has also contributed to research in the field of PT. It is essential to determine the production and quality of clinical trials from India, since they may reflect the scientific growth of the profession. These trends can be taken as a baseline against which to measure our performance and can also be used as a guideline for future trials. Objective: To quantify and qualitatively analyze the RCTs from India for the period 2000 to May 2013, and to classify the data for the information they provide. Methods: Studies were searched in the Medline database using the key terms “India”, “Indian”, and “Physiotherapy”. Only clinical trials with PT authors were included. Trials outside the scope of PT practice and trials on animals were excluded. The retrieved valid articles were analyzed for publication year, type of participants, area of study, PEDro score, outcome measure domains of impairment, activity, and participation, a priori sample size calculation, region, and explanation of the intervention. Results: 45 valid articles were retrieved for the period 2000 to May 2013. The majority of articles studied symptomatic participants (81%). The most frequently recurring conditions were low back pain (n=7) and diabetes (n=4). PEDro scores had a mode of 5, an upper limit of 8 and a lower limit of 4. 97.2% of studies measured outcomes at the impairment level, 34% at the activity level, and 27.8% at the participation level. 29.7% of studies performed an a priori sample size calculation. The correlation between year trend and PEDro score was not significant (p>.05). Analysis of individual PEDro items showed: randomization (100%), concealment (33%), baseline comparability (76%), blinding of subject, therapist and assessor (9.1%, 0%, 10%), follow-up (89%), intention-to-treat analysis (15%), between-group statistics (100%), and measures of variance (88%). Conclusion: The trend shows an upward slope in the number of RCTs published from India, which is a good indicator. 
The qualitative analysis showed some gaps in clinical trial design, which future researchers can be expected to fill.

Keywords: RCT, PEDro, physical therapy, rehabilitation

Procedia PDF Downloads 338
108 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost in computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily-coherent pixels that are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened because there is no need to wait for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of the discontinuous data and minimise the loss of coherence. 
The temporally-averaged images are then processed by a dedicated interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and the selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
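The unit-by-unit idea of the first chain can be sketched as a windowed generator that never holds more than one unit of images in memory; the displacement step below is a placeholder, not the package's actual interferometric processing.

```python
def estimate_displacement(unit):
    # placeholder: the real chain forms interferograms within the unit,
    # keeps temporarily-coherent pixels, and inverts for displacement
    return sum(unit) / len(unit)

def process_in_units(image_stream, window=20):
    """Consume a (potentially endless) GBSAR image stream unit by unit:
    at most `window` images are held in RAM, each full unit yields a
    displacement estimate, and units overlap by half for continuity."""
    unit = []
    for img in image_stream:
        unit.append(img)
        if len(unit) == window:
            yield estimate_displacement(unit)
            unit = unit[window // 2:]   # keep half the unit as overlap

# toy stream of scalar "images": maps appear while data keep arriving
maps = list(process_in_units(iter(range(100)), window=10))
print(len(maps))
```

Because results are yielded as each unit completes, displacement maps become available during acquisition rather than after the entire dataset is collected.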

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 157
107 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture

Authors: Joelle Penniston, E. B. Gueguim-Kana

Abstract:

Biohydrogen has been identified as a promising alternative to the use of non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered a key parameter in fermentative biohydrogen production processes, due to its effect on hydrogenase activity, metabolic activity and substrate hydrolysis. The present study assesses the influence of pH regulation on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated. Two were implemented using suspended cells, under pH-regulated growth conditions (Sus_R) and under non-regulated pH (Sus_N). The other two consisted of alginate-immobilized cells, under pH-regulated growth conditions (Imm_R) and under non-regulated pH (Imm_N). All experiments were carried out at 37.5°C with glucose as the sole carbon source. Sus_R showed a lag time of 5 hours, a peak hydrogen fraction of 36% and a glucose degradation of 37%, compared to Sus_N, which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended culture systems showed a higher peak biohydrogen fraction than the immobilized cell systems. Imm_R showed a lag phase of 8 hours and a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours and a peak biohydrogen fraction of 22%. 100% glucose degradation was observed in both the pH-regulated and non-regulated immobilized processes. This study showed that biohydrogen production in batch mode with suspended cells in a pH-regulated environment results in partial degradation of the substrate, with lower yield. This scheme has been the culture mode of choice for most reported studies in biohydrogen research. 
The relatively lower slope in the pH trend of the non-regulated experiment with immobilized cells (Imm_N) compared to Sus_N revealed that immobilized systems have a better buffering capacity than suspended systems, which allows for the extended production of biohydrogen even under non-regulated pH conditions. However, alginate-immobilized cultures in flask systems showed some drawbacks associated with the high rate of gas production, which increases the buoyancy of the immobilization beads and ultimately impedes the release of gas from the flask.

Keywords: biohydrogen, sustainability, suspended, immobilized

Procedia PDF Downloads 336
106 Satellite Photogrammetry for DEM Generation Using Stereo Pair and Automatic Extraction of Terrain Parameters

Authors: Tridipa Biswas, Kamal Pandey

Abstract:

A Digital Elevation Model (DEM) is a representation of a surface in three-dimensional space, with elevation as the third dimension alongside the X and Y horizontal coordinates in a rectangular coordinate system. DEMs have wide applications in various fields such as disaster management, hydrology and watershed management, geomorphology, urban development, map creation and resource management. Cartosat-1, or IRS P5 (Indian Remote Sensing Satellite), is a state-of-the-art remote sensing satellite launched by ISRO on May 5, 2005, intended mainly for cartographic applications. Cartosat-1 is equipped with two panchromatic cameras capable of simultaneously acquiring images at 2.5 m spatial resolution. One camera looks +26 degrees forward while the other looks -5 degrees backward, acquiring stereoscopic imagery with a base-to-height ratio of 0.62. The time difference between the acquisitions of the stereo pair images is approximately 52 seconds. The high-resolution stereo data have great potential to produce high-quality DEMs, and the Cartosat-1 stereo image data are expected to have a significant impact on topographic mapping and watershed applications. The objective of the present study is the generation of a high-resolution DEM, its quality evaluation in different elevation strata, the generation of an ortho-rectified image, and an associated accuracy assessment based on Ground Control Points (GCPs), for the Aglar watershed (Tehri-Garhwal and Dehradun districts, Uttarakhand, India) using CARTOSAT-1 data. The present study reveals that the generated DEMs (10 m and 30 m) derived from the CARTOSAT-1 stereo pair are more accurate than the existing ASTER and CARTO DEMs. Terrain parameters such as slope, aspect, drainage and watershed boundaries derived from the generated DEMs likewise show better accuracy than those derived from the other two DEMs.
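The GCP-based accuracy assessment mentioned above typically reduces to a vertical root-mean-square error between DEM elevations and control point elevations; a minimal sketch with hypothetical values:

```python
import numpy as np

def dem_rmse(dem_elev, gcp_elev):
    """Vertical accuracy of a DEM against ground control points:
    root-mean-square of the (DEM - GCP) elevation differences."""
    d = np.asarray(dem_elev) - np.asarray(gcp_elev)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical elevations (m) at 5 GCPs, sampled from a generated DEM
gcp = [812.0, 905.5, 1010.2, 1150.8, 1298.4]
dem = [813.1, 904.0, 1011.0, 1149.5, 1299.9]
print(dem_rmse(dem, gcp))   # about 1.27 m
```

Comparing this statistic across the generated, ASTER and CARTO DEMs (and across elevation strata) is the kind of evaluation the study describes.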

Keywords: ASTER-DEM, CARTO-DEM, CARTOSAT-1, digital elevation model (DEM), ortho-rectified image, photogrammetry, RPC, stereo pair, terrain parameters

Procedia PDF Downloads 302
105 DeepNIC a Method to Transform Each Tabular Variable into an Independant Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data. For tabular data, however, it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? Current solutions consist in repositioning the variables in a 2D matrix according to their correlation proximity, thereby obtaining a single image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. 
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
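The mapping from NIC probabilities to pixel intensities can be sketched as a plain grey-level quantization; the 2x2 probability matrix below is hypothetical.

```python
import numpy as np

def nic_to_grey(nic_prob):
    """Map a matrix of NIC probabilities in [0, 1] to 8-bit grey levels,
    with intensity proportional to the probability."""
    p = np.clip(np.asarray(nic_prob, dtype=float), 0.0, 1.0)
    return np.round(p * 255).astype(np.uint8)

# hypothetical 2x2 probability matrix for one NIC, obtained by varying
# the two Neurop super-parameters
nic = [[0.10, 0.55],
       [0.90, 1.00]]
print(nic_to_grey(nic))
```

Tiling many such per-NIC matrices (and their AND/OR/XOR combinations) is what builds up the large per-variable image that is then fed to a basic CNN.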

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 117
104 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study

Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy

Abstract:

Life expectancy in the United Kingdom (UK) has been near constant since 2010, particularly for individuals aged 65 years and older. This trend has also been noted in several other countries. The slowdown in the growth of life expectancy was concurrent with an increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, and differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate the all-cause mortality hazards and related life expectancies among type 2 diabetes (T2DM) patients in the UK using the time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 individuals (30.8% T2DM, 57.6% male) aged 50 years and above, born between 1930 and 1960, inclusive, and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) when diagnosed between 50 and 59 years and 1.38 (1.307-1.457) when diagnosed between 60 and 74 years. The estimated life expectancies among T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort) and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. However, the 1950-1960 birth cohort had a steeper hazard function than the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics. The steeper mortality hazard slope for the 1950-1960 birth cohort may indicate the sub-population contributing to the slowdown in the growth of life expectancy.
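The link between a higher hazard and a life-expectancy deficit can be illustrated with the Gompertz hazard that underlies such models. Below is a minimal sketch, not the study's actual fitted model: the parameters `a_ctrl` and `b` and the proportional scaling by the reported hazard ratio are purely illustrative assumptions.

```python
import numpy as np

def gompertz_survival(t, a, b):
    """Gompertz survival S(t) = exp(-(a/b) * (exp(b*t) - 1))."""
    return np.exp(-(a / b) * np.expm1(b * t))

def remaining_life_expectancy(a, b, horizon=80.0, step=0.01):
    """Approximate E[T] = integral of S(t) dt with a rectangle rule."""
    t = np.arange(0.0, horizon, step)
    return float(np.sum(gompertz_survival(t, a, b)) * step)

# Hypothetical baseline hazard at age 60; T2DM hazard scaled by an HR of 1.38
a_ctrl, b = 0.005, 0.09
a_t2dm = a_ctrl * 1.38

le_ctrl = remaining_life_expectancy(a_ctrl, b)
le_t2dm = remaining_life_expectancy(a_t2dm, b)
print(round(le_ctrl - le_t2dm, 2))  # life-expectancy gap in years
```

A proportional lift in the baseline hazard `a` shifts the whole survival curve down, which the integral converts into a gap of a few years, the same mechanism behind the cohort differences reported above.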

Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy

Procedia PDF Downloads 118
103 Hydro-Meteorological Vulnerability and Planning in Urban Area: The Case of Yaoundé City in Cameroon

Authors: Ouabo Emmanuel Romaric, Amougou Armathe

Abstract:

Background and aim: The study of the impacts of floods and landslides at a small scale, specifically in the urban areas of developing countries, provides tools for the actors responsible for better risk management in areas now being affected by climate change. The main objective of this study is to assess the hydrometeorological vulnerabilities associated with flooding and urban landslides and to propose adaptation measures. Methods: Climatic data were analysed by calculating indices of climate change over 50 years (1960-2012). Field data were analysed with SPSS 18 software to determine the causes, the level of risk and its consequences in the study area. Cartographic analysis and GIS were used to refine the work in space, and spatial and terrain analyses were carried out to determine the morphology of the terrain in relation to floods and landslides and their diffusion across the field. Results: The interannual changes in precipitation highlighted surplus years (21), deficit years (24) and normal years (7). The Barakat method brought out an evolution of precipitation through abrupt shifts and jumps. Floods and landslides are correlated with high precipitation during surplus and normal years. Field data analyses show that populations are aware (78%) of the risks, with 74% of them exposed, but their adaptive capacity is very low (51%). Floods are the main risk. The soils are classed as ferralitic (80%), hydromorphic (15%) and raw mineral (5%). Slope variations (5% to 15%) of small hills and deep valleys with anarchic construction favour floods and landslides during heavy precipitation, and mismanaged waste blocks the free flow of rivers and accentuates flooding. Conclusion: The vulnerability of the population to hydrometeorological risks in Yaoundé VI is the combination of variations in parameters such as precipitation and temperature due to climate change and poor planning of construction in urban areas. Because water lacks channels in which to circulate, owing to soil saturation, increasingly heavy precipitation and the mismanagement of waste, the result is floods and landslides that cause extensive damage to property and people.
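The surplus/deficit/normal split of years described above can be reproduced with a simple standardized-anomaly rule. A minimal sketch follows; the ±0.5σ threshold and the precipitation totals are hypothetical assumptions, since the study's own classification criterion is not given.

```python
import numpy as np

def classify_years(annual_precip, threshold=0.5):
    """Label each year surplus/deficit/normal from its standardized anomaly."""
    p = np.asarray(annual_precip, dtype=float)
    z = (p - p.mean()) / p.std(ddof=1)  # anomaly in units of standard deviations
    return ["surplus" if v > threshold
            else "deficit" if v < -threshold
            else "normal"
            for v in z]

# Hypothetical annual precipitation totals (mm)
labels = classify_years([1650, 1400, 1580, 1720, 1390, 1560])
print(labels)
```

Counting the resulting labels over a 50-year record yields totals of the kind reported above (21 surplus, 24 deficit, 7 normal years).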

Keywords: climate change, floods, hydrometeorological, vulnerability

Procedia PDF Downloads 464
102 Effect of Forests and Forest Cover Change on Rainfall in the Central Rift Valley of Ethiopia

Authors: Alemayehu Muluneh, Saskia Keesstra, Leo Stroosnijder, Woldeamlak Bewket, Ashenafi Burka

Abstract:

There is some scientific evidence, and a belief held by many, that forests attract rain and that deforestation contributes to a decline in rainfall. However, there is still a lack of concrete scientific evidence on the role of forests in rainfall amount. In this paper, we investigate forest-rainfall relationships in the environmental hot spot area of the Central Rift Valley (CRV) of Ethiopia. Specifically, we evaluate long-term (1970-2009) rainfall variability and its relationship with historical forest cover, and the relationship between existing forest cover, topographical variables and rainfall distribution. The study used 16 long-term and 15 short-term rainfall stations. The Mann-Kendall test and bivariate and multiple regression models were used. The results show that forest and woodland cover declined continuously over the 40-year period (1970-2009), yet annual rainfall on the rift valley floor increased by 6.42 mm/year, while on the escarpment and highlands annual rainfall decreased by 2.48 mm/year. The increase in annual rainfall on the rift valley floor is partly attributable to increased evaporation, as a result of rising temperatures, from the 4 existing lakes on the rift valley floor. Although annual rainfall is decreasing on the escarpment and highlands, there was no significant correlation between this rainfall decrease and the decline in forest and woodland, and rainfall variability in the region was not explained by forest cover. Hence, the decrease in annual rainfall on the escarpment and highlands is likely related to the global warming of the atmosphere and of the surface waters of the Indian Ocean. Spatial variability in the number of rainy days, from two years of systematically observed rainfall data (2012-2013), was significantly (R2=-0.63) explained by forest cover (distance from forest), but forest cover was not a significant variable (R2=-0.40) in explaining annual rainfall amount. Generally, past deforestation and existing forest cover showed very little effect on long-term and short-term rainfall distribution, but a significant effect on the number of rainy days in the CRV of Ethiopia.
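Trend figures such as +6.42 mm/year typically come from the Mann-Kendall test paired with a Sen's-slope estimator. A minimal sketch of both, not the authors' own implementation, is shown below; the input series is hypothetical.

```python
import itertools
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of sign(x[j] - x[i]) over all pairs j > i."""
    x = np.asarray(x, dtype=float)
    return int(sum(np.sign(x[j] - x[i])
                   for i, j in itertools.combinations(range(len(x)), 2)))

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes, in units per time step."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i, j in itertools.combinations(range(len(x)), 2)]
    return float(np.median(slopes))

# Hypothetical annual rainfall series (mm); a positive S indicates an upward trend
rain = [812, 820, 805, 836, 841, 830, 858]
print(mann_kendall_s(rain), round(sens_slope(rain), 2))
```

Sen's slope is preferred over ordinary least squares for climate records because the pairwise median is robust to the outlier years such series often contain.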

Keywords: elevation, forest cover, rainfall, slope

Procedia PDF Downloads 542
101 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula offiCinalis L.) Flowers

Authors: Leila Tabrizi, Farnaz Dezhaboun

Abstract:

In order to study the effect of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in the Research Laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost and control), which were applied in a field experiment for flower production, and different drying methods, including microwave (300, 600 and 900 W), oven (60, 70 and 80 °C) and natural-shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds and total carotenoids of the flowers were evaluated. Results indicated that organic inputs as a nutrient source had no significant effect on the quality criteria of pot marigold except for total flavonoid content, while drying methods significantly affected the phytochemical criteria. Microwave drying at 300, 600 and 900 W resulted in the highest total flavonoid content, total phenolic compounds and antioxidant activity, respectively, while oven drying gave the lowest values of the phytochemical criteria. The interaction of nutrient source and drying method also significantly affected antioxidant activity: the highest antioxidant activity was obtained with the combination of vermicompost and microwave drying at 900 W, whereas vermicompost combined with oven drying at 60 °C gave the lowest. The drying curves showed that microwave drying was faster than oven and natural-shade drying; increasing the microwave power and oven temperature shortened the drying time, and the slope of the moisture-content reduction curve steepened accordingly.
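Drying kinetics of this kind are commonly summarized by a rate constant k from the Lewis thin-layer model, MR = exp(-k t), where k is the negative slope of ln(MR) versus time. A minimal sketch under that assumption (the paper does not state which kinetic model it used, and the moisture-ratio data here are synthetic):

```python
import numpy as np

def drying_rate_constant(t, moisture_ratio):
    """Fit MR = exp(-k t) by linear regression of ln(MR) on t; return k."""
    slope, _ = np.polyfit(np.asarray(t, dtype=float),
                          np.log(moisture_ratio), 1)
    return -slope

# Hypothetical moisture-ratio readings for two drying regimes
t = np.array([0, 10, 20, 30, 40])   # minutes
mr_oven = np.exp(-0.03 * t)          # slower drying (lower temperature)
mr_micro = np.exp(-0.10 * t)         # faster drying (higher microwave power)

# The higher-power regime yields the larger rate constant k
print(drying_rate_constant(t, mr_oven), drying_rate_constant(t, mr_micro))
```

A larger k corresponds directly to the steeper moisture-content reduction slope reported for the higher microwave powers.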

Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria

Procedia PDF Downloads 331
100 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal’s Terai region is part of the Ganges river basin, one of the most disaster-prone areas of the world, where recurrent monsoon flooding causes millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential in developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and to map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river and land use/land cover on the basis of their estimated influence in causing flood hazard, and performing weighted overlay analysis to identify areas that are highly vulnerable. From 1996 to 2016, forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%), respectively, the sediment and bare land area decreased by 12.45 km² (4.12%), settlement and cropland areas showed a consistent increase of 14.22 km² (4.7%), and water body coverage increased by 0.3 km² (0.09%). Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) fell in the very high hazard zone.
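The weighted overlay step described above can be sketched on toy rasters: each factor is rescaled to a common hazard score, multiplied by its influence weight, summed cell by cell, and binned into zones. The 3x3 arrays, the weights and the bin edges below are all illustrative assumptions, not the study's values.

```python
import numpy as np

# Hypothetical 3x3 factor rasters, each already rescaled to hazard scores 1-5
slope_score    = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
rainfall_score = np.array([[2, 2, 3], [3, 3, 4], [4, 4, 5]])
river_score    = np.array([[5, 4, 3], [4, 3, 2], [3, 2, 1]])  # nearer river = higher
lulc_score     = np.array([[1, 1, 2], [2, 3, 3], [3, 4, 4]])

# Assumed influence weights (must sum to 1); the study's actual weights differ
weights = {"slope": 0.30, "rain": 0.30, "river": 0.25, "lulc": 0.15}

# Weighted overlay: per-cell weighted sum of the factor scores
hazard = (weights["slope"] * slope_score + weights["rain"] * rainfall_score
          + weights["river"] * river_score + weights["lulc"] * lulc_score)

# Bin the continuous index into five zones: 0 = very low .. 4 = very high
zones = np.digitize(hazard, bins=[1.8, 2.6, 3.4, 4.2])
print(zones)
```

Summing the cell areas that fall into each zone gives the percentage breakdown reported above; in a real workflow the rasters would come from a GIS rather than literal arrays.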

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 348