Search results for: Taylor’s Series Method

10398 Rapid Green Synthesis and Characterization of Silver Nanoparticles Using Eclipta prostrata Leaf Extract

Authors: Siva Prasad Peddi

Abstract:

Silver nanoparticles were successfully synthesized from silver nitrate through a rapid green synthesis method using Eclipta prostrata leaf extract as a combined reducing and stabilizing agent. The experimental procedure was readily conducted at room temperature and pressure and can easily be scaled up. The silver nanoparticles thus obtained were characterized using UV-Visible spectroscopy (UV-VIS), which yielded an absorption peak at 416 nm. The biomolecules responsible for capping the bio-reduced silver nanoparticles synthesized using the plant extract were identified through FTIR analysis. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) analysis showed that the silver nanoparticles were crystalline and spherical in shape. The average particle size obtained using Scherrer's formula was 27.4 nm. The adopted technique for silver nanoparticle synthesis is suitable for large-scale production.
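The particle size quoted above comes from Scherrer's formula applied to XRD peak broadening. As a hedged illustration (the 27.4 nm figure is the authors' result; the shape factor, wavelength and peak parameters below are assumed typical values, not taken from the paper):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)            # FWHM of the diffraction peak
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Assumed example: Cu K-alpha radiation and a peak near 38 degrees 2-theta
print(round(scherrer_size(0.15406, 0.31, 38.1), 1), "nm")  # ~27 nm for these inputs
```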

Keywords: silver nanoparticles, green synthesis, characterization, Eclipta prostrata

Procedia PDF Downloads 468
10397 Roller Compacting Concrete “RCC” in Dams

Authors: Orod Zarrin, Mohsen Ramezan Shirazi

Abstract:

Rehabilitation of dam components such as foundations, buttresses, spillways and overtopping protection requires a wide range of construction and design methodologies. Geotechnical engineering considerations play an important role in the design and construction of foundations for new dams, and much investigation is required to assess and evaluate existing dams. The application of roller-compacted concrete (RCC) has been accepted as a new method for constructing new dams or rehabilitating old ones. Over the past 40 years the use of RCC has changed considerably, and it is now one of the most satisfactory solutions for water and hydropower resources throughout the world. The considerations for rehabilitation and construction of dams may differ because of the upstream reservoir and its influence on water penetration and dewatering downstream, operational requirements and plant layout. One of the advantages of RCC is its rapid placement, which allows the dam to be brought into operation quickly. Unlike ordinary concrete, it is a drier mix that is stiff enough to be compacted by vibratory rollers. This paper evaluates several aspects of RCC and focuses on its preparation process.

Keywords: spillway, vibrating consistency, fly ash, water tightness, foundation

Procedia PDF Downloads 606
10396 Risk Assessment of Building Information Modelling Adoption in Construction Projects

Authors: Amirhossein Karamoozian, Desheng Wu, Behzad Abbasnejad

Abstract:

Building information modelling (BIM) is a new technology for enhancing the efficiency of project management in the construction industry. In addition to the potential benefits of this useful technology, there are various risks and obstacles to applying it in construction projects. In this study, a decision-making approach is presented for risk assessment of BIM adoption in construction projects. Various risk factors of applying BIM during different phases of the project lifecycle are identified with the help of the Delphi method, experts' opinions and the related literature. Afterward, Shannon's entropy and fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) are applied to derive priorities for the identified risk factors. Results indicated that lack of knowledge about BIM workflows among professional engineers and conflicts of opinion between different stakeholders are the risk factors with the highest priority.
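For readers unfamiliar with the weighting and ranking steps named above, the sketch below illustrates Shannon's entropy weighting followed by a crisp TOPSIS ranking; the fuzzy variant used by the authors adds fuzzy ratings on top of this, and the risk factors and scores here are invented placeholders rather than the study's data.

```python
import numpy as np

# Rows = candidate risk factors, columns = evaluation criteria (placeholder scores)
X = np.array([[7., 6., 8.],
              [5., 9., 6.],
              [8., 7., 5.],
              [6., 5., 9.]])

# 1) Shannon entropy weights: criteria with more dispersion get larger weights
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# 2) TOPSIS on the weighted, vector-normalized matrix (all criteria treated as benefit-type)
V = w * X / np.linalg.norm(X, axis=0)
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("risk factors ranked by priority (best first):", np.argsort(-closeness))
```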

Keywords: risk, BIM, fuzzy TOPSIS, construction projects

Procedia PDF Downloads 229
10395 A New Class of Conjugate Gradient Methods Based on a Modified Search Direction for Unconstrained Optimization

Authors: Belloufi Mohammed, Sellami Badreddine

Abstract:

Conjugate gradient methods have played a special role in solving large-scale optimization problems due to the simplicity of their iteration, their convergence properties and their low memory requirements. In this work, we propose a new class of conjugate gradient methods that ensures sufficient descent. Moreover, we propose a new search direction combined with the Wolfe line search technique for solving unconstrained optimization problems; a global convergence result for general functions is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate that the proposed methods are preferable and in general superior to the classical conjugate gradient methods in terms of efficiency and robustness.
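As a hedged sketch of the general framework (not the authors' specific modified direction, whose formula is not given in the abstract), a nonlinear conjugate gradient loop with a Wolfe line search can be written as follows; scipy's line_search enforces the Wolfe conditions:

```python
import numpy as np
from scipy.optimize import line_search

def cg_wolfe(f, grad, x0, max_iter=200, tol=1e-6):
    x, g = x0.copy(), grad(x0)
    d = -g                                            # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # step length satisfying Wolfe conditions
        if alpha is None:                             # fall back if the search fails
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # Polak-Ribiere+ update parameter
        d = -g_new + beta * d                           # new conjugate direction
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(cg_wolfe(f, grad, np.array([-1.2, 1.0])))
```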

Keywords: unconstrained optimization, conjugate gradient method, sufficient descent property, numerical comparisons

Procedia PDF Downloads 405
10394 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordering, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree-based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
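The WIGR measure itself is defined in the full paper; purely as an illustration of the idea of an ordinal, severity-weighted splitting criterion (not the authors' exact formula), a sketch with placeholder data might look like:

```python
import numpy as np

def ordinal_impurity(y):
    """Expected error severity |ci - cj| between two random draws of an ordinal target."""
    classes, counts = np.unique(np.asarray(y, dtype=float), return_counts=True)
    p = counts / counts.sum()
    return sum(p[i] * p[j] * abs(classes[i] - classes[j])
               for i in range(len(classes)) for j in range(len(classes)))

def weighted_gain_ratio(x, y):
    """Severity-weighted gain of splitting on a categorical attribute x,
    divided by the split information (a gain-ratio analogue for ordinal targets)."""
    x, y = np.asarray(x), np.asarray(y)
    gain, split_info = ordinal_impurity(y), 0.0
    for v in np.unique(x):
        frac = np.mean(x == v)
        gain -= frac * ordinal_impurity(y[x == v])
        split_info -= frac * np.log2(frac)
    return gain / split_info if split_info > 0 else 0.0

# Toy example: "extended time used" attribute vs. an ordinal grade band (0 = fail .. 3 = high)
x = np.array(["yes", "yes", "no", "no", "yes", "no"])
y = np.array([3, 2, 1, 0, 3, 1])
print(round(weighted_gain_ratio(x, y), 3))
```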

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 100
10393 Comparison of E-Waste Management in Switzerland and in Australia: A Qualitative Content Analysis

Authors: Md Tasbirul Islam, Pablo Dias, Nazmul Huda

Abstract:

E-waste, or waste electrical and electronic equipment (WEEE), is one of the fastest growing waste streams across the globe. This paper aims to compare the e-waste management systems of Switzerland and Australia in terms of four features: legislative initiatives, disposal practice, collection and financial mechanisms. Qualitative content analysis is employed as the research method in the study. Data were collected from various published academic research papers, industry reports, and web sources. In addition, a questionnaire survey was conducted in Australia to understand public awareness and opinions on these features. The results of the study provide valuable insights for policymakers in Australia to develop a better e-waste management system in line with the public consensus and the state-of-the-art operational strategies currently practiced in Switzerland.

Keywords: E-waste management, WEEE, awareness, pro-environmental behavior, Australia, Switzerland

Procedia PDF Downloads 281
10392 A Vertical Grating Coupler with High Efficiency and Broadband Operation

Authors: Md. Asaduzzaman

Abstract:

A silicon-on-insulator (SOI) perfectly vertical fibre-to-chip grating coupler is proposed and designed based on engineered subwavelength structures. The high directionality of the coupler is achieved by implementing step gratings to realize asymmetric diffraction and by applying effective index variation with auxiliary ultra-subwavelength gratings. The proposed structure is numerically analysed using the two-dimensional finite difference time domain (2D FDTD) method and achieves 96% (-0.2 dB) coupling efficiency and 39 nm 1-dB bandwidth. Such a highly efficient grating coupler is necessary for applications where coupling efficiency between the optical fibre and the nanophotonic waveguide is critically important, for instance, in experiments on quantum photonic integrated circuits. Efficient and broadband perfectly vertical grating couplers are also significantly advantageous in highly dense photonic packaging.
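For context on the numerical method named above, the snippet below is a minimal one-dimensional FDTD (Yee) update loop in normalized units; it is only a hedged illustration of the leapfrog scheme, not the authors' 2D SOI grating-coupler simulation, and every parameter is a placeholder.

```python
import numpy as np

nz, nt = 400, 600
ez = np.zeros(nz)          # electric field on integer grid points
hy = np.zeros(nz - 1)      # magnetic field on the staggered half-grid
courant = 0.5              # normalized time step (stability requires <= 1 in 1D)

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])            # update H from the spatial difference of E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])      # update E from the spatial difference of H
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)     # soft Gaussian source injection

print("peak |Ez| after propagation:", round(np.abs(ez).max(), 3))
```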

Keywords: diffraction grating, FDTD, grating couplers, nanophotonic

Procedia PDF Downloads 68
10391 Multi-Criteria Evaluation for the Selection Process of a Wind Power Plant's Location Using Choquet Integral

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

The objective of the present study is to select the most suitable location for a wind power plant through the Choquet integral method. The problem of selecting the location for a wind power station was considered as a multi-criteria decision-making problem. The main criteria and sub-criteria were specified and location selection was expressed in a hierarchic structure. Among the main criteria taken into account in this paper are wind potential, technical factors, social factors, transportation, and costs. The problem was solved using different approaches of the Choquet integral and the best location for a wind power station was determined. Then, the priority weights obtained from the different Choquet integral approaches are compared and commented on.
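The discrete Choquet integral mentioned above aggregates criterion scores with respect to a fuzzy measure that can model interactions between criteria. A minimal sketch follows; the fuzzy measure and the two candidate sites are placeholder values, not the study's data.

```python
import numpy as np

def choquet(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure mu.
    mu maps a frozenset of criterion indices to [0, 1], with mu(empty)=0 and mu(all)=1."""
    order = np.argsort(scores)                 # criteria sorted by increasing score
    result, prev = 0.0, 0.0
    for k, idx in enumerate(order):
        coalition = frozenset(order[k:])       # criteria whose score is >= the current one
        result += (scores[idx] - prev) * mu[coalition]
        prev = scores[idx]
    return result

# Placeholder fuzzy measure over three criteria (0: wind potential, 1: cost, 2: transport)
mu = {frozenset(): 0.0, frozenset({0}): 0.5, frozenset({1}): 0.3, frozenset({2}): 0.3,
      frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.7, frozenset({1, 2}): 0.5,
      frozenset({0, 1, 2}): 1.0}
sites = {"Site A": np.array([0.9, 0.4, 0.6]), "Site B": np.array([0.6, 0.8, 0.7])}
print({name: round(choquet(s, mu), 3) for name, s in sites.items()})
```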

Keywords: multi-criteria decision making, choquet integral, fuzzy sets, location of a wind power plant

Procedia PDF Downloads 412
10390 A Closed-Form Solution and Comparison for a One-Dimensional Orthorhombic Quasicrystal and Crystal Plate

Authors: Arpit Bhardwaj, Koushik Roy

Abstract:

The work includes derivation of the exact closed-form solution for simply supported quasicrystal and crystal plates under surface loading and free vibration by using the propagator matrix method. As a numerical example, a quasicrystal and a crystal plate are considered, and the variation of the displacement and stress fields along the thickness of these two plates is presented. Further, the work analyses the displacement and stress fields for two plates with two different stacking arrangements, i.e., quasicrystal/crystal/quasicrystal and crystal/quasicrystal/crystal, and compares their results. This not only shows the change in the behavior of the displacement and stress fields in the two different materials but also how these fields change for different combinations. For the free vibration case, crystal and quasicrystal plates along with their different stacking arrangements are considered, and displacements are plotted in all directions for different mode shapes.

Keywords: free vibration, multilayered plates, surface loading, quasicrystals

Procedia PDF Downloads 147
10389 Computation of Thermal Stress Intensity Factor for Bonded Composite Repairs in Aircraft Structures

Authors: Fayçal Benyahia, Abdelmohsen Albedah, Bel Abbes Bachir Bouiadjra

Abstract:

In this study the finite element method is used to analyse the effect of the thermal residual stresses resulting from adhesive curing on the performance of bonded composite repairs in aircraft structures. The stress intensity factor at the crack tip is chosen as the fracture criterion in order to estimate the repair performance. The obtained results show that the presence of the thermal residual stresses reduces the repair performance considerably and consequently decreases the fatigue life of cracked structures. The effects of the curing temperature, the adhesive properties and the adhesive thickness on the variation of the stress intensity factor (SIF) with thermal stresses are also analysed.
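The abstract does not give its finite element formulation, but the driving quantities can be illustrated with textbook closed-form estimates (a hedged sketch, not the authors' model): a fully constrained thermal residual stress sigma = E*alpha*dT/(1-nu), which overestimates the mismatch-driven stress, and a mode-I SIF K_I = Y*sigma*sqrt(pi*a). All material and crack values below are assumed.

```python
import math

def thermal_residual_stress(E_gpa, alpha_per_k, delta_t_k, nu):
    """Fully (biaxially) constrained thermal stress in MPa: rough upper-bound estimate."""
    return E_gpa * 1e3 * alpha_per_k * delta_t_k / (1 - nu)

def mode_i_sif(stress_mpa, crack_half_length_m, geometry_factor=1.0):
    """K_I = Y * sigma * sqrt(pi * a), in MPa*sqrt(m)."""
    return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_half_length_m)

# Assumed illustrative values: aluminium sheet, adhesive cured ~100 K above service temperature
sigma = thermal_residual_stress(E_gpa=71, alpha_per_k=23e-6, delta_t_k=100, nu=0.33)
print(round(sigma, 1), "MPa,", round(mode_i_sif(sigma, 0.005), 2), "MPa*sqrt(m)")
```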

Keywords: bonded composite repair, residual stress, adhesion, stress transfer, finite element analysis

Procedia PDF Downloads 417
10388 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, which correspond to tumor with clear, unclear or no gland structure and non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' prediction of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized onto a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the receiver operating characteristic (ROC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the 'maxstat' method, which is almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be calculated automatically through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool for automatically calculating differentiation grade may show potential in the field of therapy decision making and personalized treatment.
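The survival validation step described above can be reproduced in outline with standard tools. The sketch below (using the lifelines package, with synthetic placeholder data rather than TCGA-COAD) dichotomizes patients at a differentiation-grade cut-off and runs a log-rank test, analogous to the reported 51% cut-off analysis.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
grade = rng.uniform(0, 1, n)                    # normalized differentiation grade (0..1)
time = rng.exponential(60 / (1 + 2 * grade))    # synthetic follow-up times (months)
event = rng.uniform(0, 1, n) < 0.7              # ~70% of events observed (rest censored)

cutoff = 0.51                                   # cut-off analogous to the reported 51%
high = grade > cutoff

kmf = KaplanMeierFitter()
for label, mask in [("high grade", high), ("low grade", ~high)]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p-value:", res.p_value)
```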

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 134
10387 Separation of Fexofenadine Enantiomers Using Beta Cyclodextrin as Chiral Counter Ion in Mobile Phase

Authors: R. Fegas, S. Zerkout, S. Taberkokt, M. Righezza

Abstract:

The present work demonstrates the potential of beta-cyclodextrin (BCD) for the chiral analysis of a drug. Various separation mechanisms were applied and several parameters affecting the separation were studied, including the type and concentration of the chiral selector and the pH of the buffer. A simple and sensitive high-performance liquid chromatography (HPLC) method was developed as an assay for fexofenadine enantiomers in a pharmaceutical preparation. Fexofenadine enantiomers were separated using a mobile phase of 0.25 mM NaH2PO4–acetonitrile (65:35, v/v) with beta-cyclodextrin on an achiral phenyl-urea column at a flow rate of 1 ml/min with detection at 220 nm. The chiral separation mechanism was mainly based on specific interactions between the solute and the stationary phase. Retention was directly controlled by the mobile phase composition, but not the selectivity, which results from two mechanisms: electrostatic interactions and partitioning.

Keywords: fexofenadine enantiomer, HPLC, achiral phenyl-urea column

Procedia PDF Downloads 458
10386 Artificial Intelligence Based Analysis of Magnetic Resonance Signals for the Diagnosis of Tissue Abnormalities

Authors: Kapila Warnakulasuriya, Walimuni Janaka Mendis

Abstract:

In this study, an artificial intelligence-based approach is developed to diagnose abnormal tissues in human or animal bodies by analyzing magnetic resonance signals. As opposed to the conventional method of generating an image from the magnetic resonance signals, which is then evaluated by a radiologist for the diagnosis of abnormalities, in the discussed approach the magnetic resonance signals are analyzed by an artificial intelligence algorithm without having to generate or analyze an image. The AI-based program compares magnetic resonance signals with millions of possible magnetic resonance waveforms that can be generated by various types of normal tissue. Waveforms generated by abnormal tissues are then identified, and images of the abnormal tissues are generated together with their possible locations in the body for further diagnostic tests.

Keywords: magnetic resonance, artificial intelligence, magnetic waveform analysis, abnormal tissues

Procedia PDF Downloads 91
10385 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos

Abstract:

One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least six orders of magnitude slower than the hydrogen oxidation on the anode; therefore, a high loading of Pt is required. Catalyst corrosion is also more significant on the cathode, especially in mobile applications, where rapid changes of loading have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates according to the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and improved stability in comparison to the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170°C resulted in complete reduction of the ionic tin to Sn0. No evidence of the presence of a SnO2 phase was found by means of XRD and EDS analysis. These results demonstrate that the method of CSRs is a powerful tool to create Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. On the contrary, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and long-term stability in the ORR than Pt/C, which was assigned to the presence of SnO2 in close proximity/contact with the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed an outstanding performance in the ORR. The preparation of a highly dispersed 20Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.

Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst

Procedia PDF Downloads 235
10384 Understanding and Improving Neural Network Weight Initialization

Authors: Diego Aguirre, Olac Fuentes

Abstract:

In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures, and faster convergence rates are achieved on the MNIST, CIFAR-10/100, and ImageNet datasets when compared to state-of-the-art initialization techniques.
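A minimal numpy sketch of the data-dependent part of such a scheme is shown below: orthogonal weights are rescaled from an initialization mini-batch so that each layer's batch statistics have roughly unit variance (similar in spirit to LSUV-style initialization). The architecture and batch are placeholders, and the least-squares output layer and the activation-count property from the abstract are omitted.

```python
import numpy as np

def orthogonal(shape, rng):
    """Random matrix with orthonormal columns via QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal(shape))
    return q

def init_network(x_batch, layer_sizes, rng=np.random.default_rng(0)):
    """Initialize ReLU layers so each pre-activation has ~unit variance on the batch."""
    weights, h = [], x_batch
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = orthogonal((n_in, n_out), rng)
        z = h @ w
        w /= z.std() + 1e-8           # rescale so the batch pre-activations have unit variance
        h = np.maximum(h @ w, 0.0)    # ReLU activations feed the next layer's statistics
        weights.append(w)
    return weights

# Placeholder initialization mini-batch: 64 samples of a 784-dimensional input
x = np.random.default_rng(1).standard_normal((64, 784))
ws = init_network(x, [784, 256, 128, 10])
print([w.shape for w in ws])
```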

Keywords: deep learning, image classification, supervised learning, weight initialization

Procedia PDF Downloads 135
10383 Microstructures Evolution of a Nano/Ultrafine Grained Low Carbon Steel Produced by Martensite Treatment Using Accumulative Roll Bonding

Authors: Mehdi Salari

Abstract:

This work introduces a new experimental route that combines martensite treatment with accumulative roll bonding (ARB) to produce a nano/ultrafine grained structure in low carbon steel. The ARB process up to 4 cycles was performed under unlubricated conditions, while the annealing process was carried out in the temperature range of 450–550°C for 30–100 min. The microstructures of the deformed and annealed specimens were investigated. The results showed that in the specimens annealed at 450°C for 30 or 60 min, recrystallization could not be completed. Decreasing the annealing time and temperature increased the volume fraction of the martensite cell blocks. A fully equiaxed nano/ultrafine grained ferrite developed from the martensite cell blocks during annealing at around 500°C for 100 min.

Keywords: martensite process, accumulative roll bonding, recrystallization, nanostructure, plain carbon steel

Procedia PDF Downloads 379
10382 Wear Behaviors of B4C and SiC Particle Reinforced AZ91 Magnesium Matrix Metal Composites

Authors: M. E. Turan, H. Zengin, E. Cevik, Y. Sun, Y. Turen, H. Ahlatci

Abstract:

In this study, the effects of B4C and SiC particle reinforcements on the wear properties of magnesium matrix metal composites produced by the pressure infiltration method were investigated. AZ91 (9%Al-1%Zn) magnesium alloy was used as the matrix. The AZ91 magnesium alloy was melted under an argon atmosphere, and the melt was infiltrated into the particles under an appropriate pressure. Wear tests and hardness tests were then performed. Microstructure characterization was carried out by light optical microscopy (LOM) and scanning electron microscopy (SEM). The results showed that uniform particle distributions were achieved in both the B4C and SiC reinforced composites. The wear behavior of the magnesium matrix metal composites changed as a function of the particle type. The SiC reinforced composite has better wear performance and higher hardness than the B4C reinforced composite.

Keywords: magnesium matrix composite, pressure infiltration, SEM, wear

Procedia PDF Downloads 360
10381 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks with the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of energy management systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires considerable computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and especially to reduce the computation time of operating cost estimation for several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer, which is then used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
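As a hedged, tabular illustration of the batch-constrained idea (actions are only considered if they are sufficiently represented in the fixed buffer), the sketch below reduces the microgrid with PV, battery and hydrogen storage to an abstract finite MDP with random placeholder transitions; the deep BCQ used by the authors replaces the tables with neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 4, 0.95

# Fixed buffer of (s, a, r, s') transitions collected by some behavior policy
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(5000)]

# Behavior frequencies: the BCQ-style constraint only allows actions seen "often enough"
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in buffer:
    counts[s, a] += 1
freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
allowed = freq >= 0.25 * freq.max(axis=1, keepdims=True)   # relative threshold tau = 0.25

Q = np.zeros((n_states, n_actions))
for _ in range(50):                      # offline sweeps over the batch, no new interaction
    for s, a, r, s2 in buffer:
        q_next = np.where(allowed[s2], Q[s2], -np.inf).max()  # max over allowed actions only
        Q[s, a] += 0.05 * (r + gamma * q_next - Q[s, a])

policy = np.where(allowed, Q, -np.inf).argmax(axis=1)
print("greedy batch-constrained policy:", policy)
```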

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 123
10380 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation

Authors: Calorine Twebaze, Jesca Balinga

Abstract:

Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, with the structural configuration comprising a 3-way dip-closed hanging wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, which was originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three wells provided a geophysical interpretation that was consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; however, the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, V0 + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X. The time-depth method resulted in more reliable depth surfaces with good structural coherence between the TWT and depth maps and minimal errors of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation, however, delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels and thus propagates from the basement to the surface and is an active fault today. It was also noted that the field is only lightly faulted, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, owing to reduced data quality, especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) the check-shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points, due to the limited number of wells, which produced a pessimistic average velocity model.
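For the velocity-modelling comparison mentioned above, the linear-with-depth model can be written explicitly: with V(z) = V0 + k*z, the depth at one-way time t is z = (V0/k)(exp(k*t) - 1). A hedged sketch with placeholder parameters (not the field's calibrated values) follows, alongside a constant average-velocity conversion for comparison.

```python
import numpy as np

def depth_from_twt(twt_s, v0=1600.0, k=0.6):
    """Depth (m) from two-way time (s) for a linear velocity model V(z) = V0 + k*z."""
    t_one_way = np.asarray(twt_s) / 2.0
    return (v0 / k) * (np.exp(k * t_one_way) - 1.0)

# Horizon picked at 1.2 s TWT; compare with a constant average velocity of 1900 m/s
twt = 1.2
print("V0 + kZ depth:", round(float(depth_from_twt(twt)), 1), "m")
print("average-velocity depth:", round(1900.0 * twt / 2.0, 1), "m")
```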

Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches

Procedia PDF Downloads 59
10379 Recursive Parametric Identification of a Doubly Fed Induction Generator-Based Wind Turbine

Authors: A. El Kachani, E. Chakir, A. Ait Laachir, A. Niaaniaa, J. Zerouaoui

Abstract:

This paper presents an adaptive controller based on recursive parametric identification applied to a wind turbine based on the doubly-fed induction generator (DFIG), in order to compensate for faults and guarantee efficient operation of the DFIG. The proposed adaptive controller is based on the recursive least squares algorithm, which considers that the best estimate of the parameter vector is the vector x minimizing a quadratic criterion. Furthermore, this method can improve the speed and precision of a model-based controller. The proposed controller is validated via simulation on a 5.5 kW DFIG-based wind turbine. The results obtained are satisfactory and show the advantages of an adaptive controller based on the recursive least squares algorithm.
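A minimal sketch of the recursive least-squares update at the heart of such an identification scheme is given below, in its generic form with a forgetting factor; the regressors and the ARX "plant" are placeholders, not the 5.5 kW DFIG model of the study.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update: theta minimizes an exponentially weighted quadratic criterion."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam                # covariance update with forgetting factor
    return theta, P

# Identify a placeholder 2-parameter ARX model y[t] = a*y[t-1] + b*u[t-1] + noise
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
y, u = [0.0], rng.standard_normal(500)
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for t in range(1, 500):
    y.append(a_true * y[-1] + b_true * u[t - 1] + 0.01 * rng.standard_normal())
    phi = np.array([y[-2], u[t - 1]])
    theta, P = rls_step(theta, P, phi, y[-1])
print("estimated [a, b]:", np.round(theta, 3))
```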

Keywords: adaptive controller, recursive least squares algorithm, wind turbine, doubly fed induction generator

Procedia PDF Downloads 288
10378 Model Order Reduction Using Hybrid Genetic Algorithm and Simulated Annealing

Authors: Khaled Salah

Abstract:

Model order reduction has been one of the most challenging topics in recent years. In this paper, a hybrid of a genetic algorithm (GA) and a simulated annealing algorithm (SA) is used to approximate high-order transfer functions (TFs) by lower-order TFs. In this approach, the hybrid algorithm is applied to model order reduction while taking into consideration both improving accuracy and preserving the properties of the original model, two important issues for improving the performance of simulation and computation and for maintaining the behavior of the original complex models being reduced. Compared to conventional mathematical methods that have been used to obtain a reduced-order model of high-order complex models, our proposed method provides better results in terms of reducing run-time. Thus, the proposed technique could be used in electronic design automation (EDA) tools.
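The abstract does not give the encoding or fitness function; the sketch below is only a generic illustration of the hybrid idea, with a small GA (truncation selection plus Gaussian mutation, crossover omitted for brevity) searching reduced-order transfer-function coefficients against a frequency-response error, followed by an SA refinement of the best individual. The 4th-order plant and all tuning parameters are placeholders.

```python
import numpy as np

w = np.logspace(-2, 2, 200)                  # frequency grid (rad/s)
num_full = [1.0, 6.0]                        # placeholder 4th-order plant (s+6)/((s+1)^2(s+2)(s+3))
den_full = [1.0, 7.0, 17.0, 17.0, 6.0]
H_full = np.polyval(num_full, 1j * w) / np.polyval(den_full, 1j * w)

def fitness(c):
    """Frequency-response error of the 1st-order candidate H_r(s) = b0 / (s + a0)."""
    b0, a0 = c
    return np.mean(np.abs(H_full - b0 / (1j * w + a0)) ** 2)

rng = np.random.default_rng(0)
pop = rng.uniform(0.1, 10.0, size=(40, 2))           # GA population of [b0, a0] pairs
for _ in range(100):                                 # GA phase: select best, mutate
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:20]]
    children = parents[rng.integers(20, size=40)] + rng.normal(0, 0.2, size=(40, 2))
    pop = np.clip(children, 1e-3, None)

best = pop[np.argmin([fitness(c) for c in pop])]
T = 1.0
for _ in range(2000):                                # SA phase: refine the GA result
    cand = np.clip(best + rng.normal(0, 0.05, 2), 1e-3, None)
    d = fitness(cand) - fitness(best)
    if d < 0 or rng.uniform() < np.exp(-d / T):
        best = cand
    T *= 0.995
print("reduced model [b0, a0]:", np.round(best, 3), "error:", fitness(best))
```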

Keywords: genetic algorithm, simulated annealing, model reduction, transfer function

Procedia PDF Downloads 143
10377 Optimization Design of Superposition Wave Form Automotive Exhaust Bellows Structure

Authors: Zhang Jianrun, He Tangling

Abstract:

The superposition wave form automotive exhaust bellows is a new type of bellows with large compensation capacity, good vibration isolation performance and long life. It has attracted increasing attention and application in automotive exhaust pipe systems. To address the lack of design methods for superposition wave form automotive exhaust bellows, this paper proposes a response surface parameter optimization method in which the fatigue life and vibration transmissibility of the bellows are set as objectives. Parametric modeling of the bellows structure is also adopted to achieve high efficiency in the design. The approach proposed in this paper provides a new way to design superposition wave form automotive exhaust bellows and has good engineering application value.
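Response surface methodology of this kind typically fits low-order polynomial surrogates to sampled responses and optimizes over them. The sketch below is a hedged, generic illustration with two invented design parameters and synthetic responses (not the authors' bellows model): two quadratic surfaces are fitted by least squares and a weighted-sum compromise is minimized.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder DOE samples over two normalized bellows parameters (wave height h, pitch p)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))
life = 5 - (X[:, 0] - 0.3)**2 - 0.5 * (X[:, 1] + 0.2)**2 + 0.05 * rng.standard_normal(15)
transm = 1 + (X[:, 0] + 0.4)**2 + (X[:, 1] - 0.1)**2 + 0.05 * rng.standard_normal(15)

def quad_features(x):
    """Quadratic response-surface basis [1, x1, x2, x1*x2, x1^2, x2^2]."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

# Least-squares fit of the two quadratic response surfaces
beta_life = np.linalg.lstsq(quad_features(X), life, rcond=None)[0]
beta_tr = np.linalg.lstsq(quad_features(X), transm, rcond=None)[0]

def objective(x):
    f = quad_features(np.asarray(x))
    return -(f @ beta_life) + 2.0 * (f @ beta_tr)    # maximize life, penalize transmissibility

res = minimize(objective, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("optimal normalized (h, p):", np.round(res.x, 3))
```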

Keywords: superposition wave form, exhaust bellows, optimization, vibration, fatigue life

Procedia PDF Downloads 96
10376 Improving Readability for Tweet Contextualization Using Bipartite Graphs

Authors: Amira Dhokar, Lobna Hlaoua, Lotfi Ben Romdhane

Abstract:

Tweet contextualization (TC) is a new task that aims to answer questions of the form 'What is this tweet about?' The idea of this task was conceived as an extension of a previous area called multi-document summarization (MDS), which consists in generating a summary from many sources. In both TC and MDS, the summary should ideally contain the most relevant information on the topic that is being discussed in the source texts (for MDS) and related to the query (for TC). Besides being informative, a summary should be coherent, i.e. well written, readable and grammatically compact. Hence, coherence is an essential characteristic in order to produce comprehensible texts. In this paper, we propose a new approach to improve readability and coherence for tweet contextualization based on bipartite graphs. The main idea of our proposed method is to reorder the sentences in a given paragraph by combining the detection of the most expressive words with the HITS (Hyperlink-Induced Topic Search) algorithm to build a coherent context.
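A hedged sketch of the graph part of such an approach: build a bipartite sentence-word incidence matrix and run HITS power iterations, so that sentence hub scores (and the most "expressive" words as authorities) can drive reordering. The scoring and reordering rule here is illustrative, not the authors' exact algorithm.

```python
import re
import numpy as np

sentences = [
    "The flood damaged several bridges in the region.",
    "Rescue teams evacuated residents from the flooded region.",
    "A new art exhibition opened downtown.",
]

# Bipartite incidence matrix A: rows = sentences, columns = vocabulary words
tokens = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
vocab = sorted({w for t in tokens for w in t})
A = np.array([[t.count(w) for w in vocab] for t in tokens], dtype=float)

# HITS power iteration on the bipartite graph: hubs = sentences, authorities = words
hubs = np.ones(len(sentences))
for _ in range(50):
    auth = A.T @ hubs
    auth /= np.linalg.norm(auth)
    hubs = A @ auth
    hubs /= np.linalg.norm(hubs)

order = np.argsort(-hubs)          # reorder sentences by decreasing hub score
print("most expressive words:", [vocab[i] for i in np.argsort(-auth)[:5]])
print("sentence order:", order)
```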

Keywords: bipartite graphs, readability, summarization, tweet contextualization

Procedia PDF Downloads 193
10375 Estimation of Leachate Generation from Municipal Solid Waste Landfills in Selangor

Authors: Tengku Nilam Baizura, Noor Zalina Mahmood

Abstract:

In Malaysia, landfilling is the most preferred disposal method, and most landfills do not have proper leachate treatment systems, which can cause environmental problems. Leachate is a major contributor to river water pollution, since most landfills are located near rivers, which are the main water resource for the country. The study aimed to estimate leachate production from landfills in Selangor. A simple mathematical model was used to calculate the annual leachate volume: the identified landfill area (A), estimated using Google Earth, was multiplied by the annual rainfall (R), and the product is expressed as a volume (V). The data indicate that leachate production is high even when a landfill is fully closed. It is therefore important to design efficient landfills and proper leachate treatment processes, especially for old/closed landfills. Extensive monitoring will be required to predict future impacts.
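The model reduces to a single multiplication per landfill, V = A x R (the abstract does not mention runoff or evapotranspiration coefficients, so none are applied here); for example, with placeholder figures rather than the study's measured areas:

```python
# Annual leachate volume estimate: V (m^3/year) = A (m^2) * R (m/year of rainfall)
landfills = {"Landfill A": 250_000, "Landfill B": 120_000}   # placeholder areas in m^2
annual_rainfall_m = 2.4                                      # assumed annual rainfall (m)

for name, area_m2 in landfills.items():
    volume_m3 = area_m2 * annual_rainfall_m
    print(f"{name}: {volume_m3:,.0f} m^3 of leachate per year")
```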

Keywords: landfill, leachate, municipal solid waste management, waste disposal

Procedia PDF Downloads 370
10374 A Model of Teacher Leadership in History Instruction

Authors: Poramatdha Chutimant

Abstract:

The objective of the research was to propose a model of teacher leadership in history instruction for practical utilization. Everett M. Rogers' Diffusion of Innovations theory is applied as the theoretical framework. A qualitative method is used in the study, with an interview protocol as the instrument to collect primary data from best-practice teachers awarded by the Office of the National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather varied data. Information on the international context of history instruction serves as secondary data to support the summarizing process (content analysis). A dendrogram is the key tool for interpreting and synthesizing the primary data, with the secondary data providing support for explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. The focal point is finally to validate a draft model in terms of its future utilization.

Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership

Procedia PDF Downloads 279
10373 Modeling Metrics for Monitoring Software Project Performance Based on the GQM Model

Authors: Mariayee Doraisamy, Suhaimi bin Ibrahim, Mohd Naz’ri Mahrin

Abstract:

There are several methods for monitoring software projects, and the objective of monitoring is to ensure that software projects are developed and delivered successfully. Performance measurement is a method closely associated with monitoring, and it can be scrutinized by looking at two important attributes, efficiency and effectiveness, both of which are important factors for the success of a software project. Consequently, successful steering is achieved by monitoring and controlling a software project via performance measurement criteria and metrics. Hence, this paper is aimed at identifying the performance measurement criteria and the metrics for monitoring the performance of a software project by using the Goal Question Metric (GQM) approach. The GQM approach is utilized to ensure that the identified metrics are reliable and useful. These identified metrics are useful guidelines for project managers to monitor the performance of their software projects.

Keywords: component, software project performance, goal question metrics, performance measurement criteria, metrics

Procedia PDF Downloads 356
10372 Mariculture Trials of the Philippine Blue Sponge Xestospongia sp.

Authors: Clairecynth Yu, Geminne Manzano

Abstract:

The mariculture potential of the Philippine blue sponge, Xestospongia sp., was assessed through pilot open-sea sponge culture at two different biogeographic regions in the Philippines. Thirty explants were randomly allocated to the Puerto Galera, Oriental Mindoro culture setup, and another nine were transported to Lucero, Bolinao, Pangasinan. Two different culture methods for the sponge explants, the lantern and the wall method, were employed to assess the production of Renieramycin M. Both methods were shown to be effective in growing the sponge explants, and thin layer chromatography (TLC) results showed that Renieramycin M is present in the sponges. The effect of partial harvesting on the growth and survival rates of the blue sponge in the Puerto Galera setup was also determined. Results showed that a higher growth rate was observed for the partially harvested explants under both culture methods compared to the unharvested explants.

Keywords: chemical ecology, porifera, sponge, Xestospongia sp.

Procedia PDF Downloads 273
10371 AI-Driven Strategies for Sustainable Electronics Repair: A Case Study in Energy Efficiency

Authors: Badiy Elmabrouk, Abdelhamid Boujarif, Zhiguo Zeng, Stephane Borrel, Robert Heidsieck

Abstract:

In an era where sustainability is paramount, this paper introduces a machine learning-driven testing protocol to accurately predict diode failures, merging reliability engineering with failure physics to enhance the efficiency of repair operations. Our approach refines the burn-in process, significantly curtailing its duration, which not only conserves energy but also elevates productivity and mitigates component wear. A case study from GE HealthCare's repair center vividly demonstrates the method's effectiveness, recording high accuracy in predicting diode failures and a substantial decrease in energy consumption that translates to an annual reduction of 6.5 tons of CO2 emissions. This advancement sets a benchmark for environmentally conscious practices in the electronics repair sector.

Keywords: maintenance, burn-in, failure physics, reliability testing

Procedia PDF Downloads 68
10370 Static Response of Homogeneous Clay Stratum to Imposed Structural Loads

Authors: Aaron Aboshio

Abstract:

A numerical study of the static response of a homogeneous clay stratum, considering a wide range of cohesion values and subjected to foundation loads, is presented. The linear elastic–perfectly plastic constitutive relation with the von Mises yield criterion was utilised to develop a numerically cost-effective finite element model for the soil, while imposing a rigid body constraint on the foundation footing. From the analyses carried out, estimates of the bearing capacity factor Nc, the ultimate load-carrying capacities of these soils, the effect of cohesion on foundation settlements, the stress fields and the failure propagation were obtained. These are consistent with other findings in the literature and hence can be a useful guide in the design of safe foundations in clay soils for buildings and other structures.
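For reference, the classical value behind the Nc factor mentioned above is Prandtl's solution for a surface strip footing on undrained (phi = 0) soil, q_u = c * Nc with Nc = 2 + pi, against which FE estimates are commonly compared; plane-strain von Mises analyses can give slightly different values. A minimal sketch with assumed cohesion values (not the study's FE results):

```python
import math

NC_PRANDTL = 2 + math.pi          # ~5.14 for a surface strip footing, phi = 0 (Tresca-based)

def ultimate_bearing_pressure(cohesion_kpa, nc=NC_PRANDTL):
    """Ultimate bearing pressure q_u = c * Nc for undrained clay, in kPa."""
    return cohesion_kpa * nc

for c in (25, 50, 100):           # assumed undrained cohesion values in kPa
    print(f"c = {c} kPa -> q_u = {ultimate_bearing_pressure(c):.0f} kPa")
```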

Keywords: bearing capacity factors, finite element method, safe bearing pressure, structure-soil interaction

Procedia PDF Downloads 302
10369 The Effects of Mobile Communication on the Nigerian Populace

Authors: Chapman Eze Nnadozie

Abstract:

Communication, the activity of conveying information, remains a vital resource for the growth and development of any given society. Mobile communication, popularly known as the global system for mobile communication (GSM), is a globally accepted standard for digital cellular communication. GSM, a wireless technology, remains the fastest growing means of communication worldwide. Indeed, mobile phones have become a critical business tool and part of everyday life in both developed and developing countries. This study examines the effects of mobile communication on the Nigerian populace. The methodology used is the survey research method, with questionnaires as the main data collection tool. The questionnaires were administered to a total of seventy respondents in five cities across the country, namely: Aba, Enugu, Bauchi, Makurdi, and Lagos. The results reveal that, although there are some quality-of-service issues, mobile communication has a very significant positive effect on the economic and social development of the Nigerian populace.

Keywords: effect, mobile communication, populace, GSM, wireless technology, mobile phone

Procedia PDF Downloads 271