Search results for: action based method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 41056

38026 Scaling Analysis for the Liquefaction Phenomena Generated by Water Waves

Authors: E. Arcos, E. Bautista, F. Méndez

Abstract:

In this work, a scaling analysis of the liquefaction phenomena is presented. The characteristic scales are obtained by balancing, term by term, the well-known governing equations of the partial dynamics (the U − P formulation). From this balance, the horizontal displacement is of much smaller order of magnitude than the vertical displacement, and therefore the governing equation becomes a function of the vertical dependent variables only. The U − P approximation is then reduced and presented in its dimensionless version. This scaling analysis can be used to obtain analytical solutions of the liquefaction phenomena under the action of water waves.

Keywords: approximation U-P, porous seabed, scaling analysis, water waves

Procedia PDF Downloads 349
38025 Micro-Channel Flows Simulation Based on Nonlinear Coupled Constitutive Model

Authors: Qijiao He

Abstract:

Micro-Electro-Mechanical Systems (MEMS) constitute one of the most rapidly developing frontier research fields, both in theoretical study and in applied technology. The micro-channel is a very important link component of MEMS. With the research and development of MEMS, the sizes of micro-devices and micro-channels become ever smaller. Compared with macroscale flow, the flow characteristics of gas in a micro-channel change, and the rarefaction effect appears distinctly. However, for rarefied gas and microscale flows, the Navier-Stokes-Fourier (NSF) equations are no longer appropriate due to the breakdown of the continuum hypothesis. A Nonlinear Coupled Constitutive Model (NCCM) has been derived from the Boltzmann equation to describe the characteristics of both continuum and rarefied gas flows. We apply the present scheme to simulate continuum and rarefied gas flows in a micro-channel structure. For comparison, we also simulate the flows with other widely used methods, which are based on particle simulation or on direct solution of the distribution function, such as Direct Simulation Monte Carlo (DSMC), the Unified Gas-Kinetic Scheme (UGKS) and the Lattice Boltzmann Method (LBM). The results show that, in rarefied cases, the present solution is in better agreement with the experimental data and with the DSMC, UGKS and LBM results than the NSF results, while in continuum cases it agrees well with the NSF results. Some characteristics of both continuum and rarefied gas flows are observed and analyzed.

Keywords: continuum and rarefied gas flows, discontinuous Galerkin method, generalized hydrodynamic equations, numerical simulation

Procedia PDF Downloads 172
38024 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK constellation by a certain angle. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
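
As a rough illustration of the shifted-constellation idea (the modulation order, shift angles, and brute-force search granularity below are hypothetical assumptions, not the paper's values), the shifted N-PSK pilots and the size of an attacker's search space can be sketched as:

```python
import numpy as np

def shifted_npsk(n, offset_deg):
    # N-PSK constellation rotated by offset_deg degrees
    phases = 2 * np.pi * np.arange(n) / n + np.deg2rad(offset_deg)
    return np.exp(1j * phases)

# Two legitimate pilot constellations with hypothetical shift angles.
c1 = shifted_npsk(4, 15.0)
c2 = shifted_npsk(4, 40.0)

# All symbols stay on the unit circle; only the phase changes.
assert np.allclose(np.abs(c1), 1.0)

# If an adversary had to brute-force both offsets in 1-degree steps over
# (0, 90) degrees (an assumed granularity), the search space is already:
num_pairs = 89 * 89  # 7921 candidate offset pairs
```

Even at this coarse granularity the pair count grows quadratically, which is why the abstract argues a brute-force attack is hard to mount.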

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 217
38023 MB-SLAM: A SLAM Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real-time and ultimately reduce rework. Registering SLAM to BIM in real-time can also boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real-time is challenging. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real-time. The framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real-time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are then improved by a real-time gradient descent-based iterative method. Two case studies are presented to validate MB-SLAM. The validation process demonstrated promising results: SLAM was accurately registered to BIM, and the SLAM's localization accuracy was significantly improved. Moreover, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate the workflows of past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework, for both research and commercial usage, that aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment, bringing SLAM closer to practical use.
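
The gradient descent-based pose refinement can be illustrated with a deliberately simplified one-parameter sketch: a single yaw angle is iteratively aligned to one observed vanishing-point direction. The actual method optimizes full camera poses against several BIM-derived perspective features; the error model and learning rate below are assumptions for illustration only.

```python
import math

def vanishing_error(yaw, observed_vp_angle):
    # squared angular gap between the BIM-predicted vanishing direction
    # (here simply the yaw itself) and the observed one
    return (yaw - observed_vp_angle) ** 2

def refine_yaw(yaw0, observed_vp_angle, lr=0.1, iters=200):
    yaw = yaw0
    for _ in range(iters):
        eps = 1e-6  # numerical central-difference gradient
        g = (vanishing_error(yaw + eps, observed_vp_angle)
             - vanishing_error(yaw - eps, observed_vp_angle)) / (2 * eps)
        yaw -= lr * g
    return yaw

# A rough SLAM yaw of 0 is pulled toward the observed 19-degree direction.
refined = refine_yaw(0.0, math.radians(19.0))
```

Each iteration shrinks the residual by a constant factor, which is why a few hundred cheap steps suffice for real-time refinement in this toy setting.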

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 224
38022 A Self-Adaptive Stimulus Artifacts Removal Approach for Electrical Stimulation Based Muscle Rehabilitation

Authors: Yinjun Tu, Qiang Fang, Glenn I. Matthews, Shuenn-Yuh Lee

Abstract:

This paper reports an efficient and rigorous self-adaptive stimulus artifact removal approach for a mixed surface EMG (electromyography) and stimulus signal recorded during muscle stimulation. The recording of EMG and the stimulation of muscles were performed simultaneously. It is difficult to extract a muscle fatigue feature from the mixed signal, a feature that could be further used in a closed-loop system. A self-adaptive method is proposed in this paper: the stimulation frequency is first calculated and verified, and a mask is then created based on this stimulation frequency to remove the undesired stimulus. Twenty EMG recordings were analyzed, and an ANOVA (analysis of variance) approach illustrated that the decreasing trend of median power frequencies was successfully generated from the 'cleaned' EMG signal.
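
The two steps described above — estimate the stimulation frequency, then mask it out — can be sketched on synthetic data. The sampling rate, stimulation frequency, guard band, and use of a frequency-domain mask are all illustrative assumptions; the paper's actual mask construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
emg = 0.1 * rng.standard_normal(t.size)        # surrogate EMG activity
artifact = 2.0 * np.sin(2 * np.pi * 35.0 * t)  # surrogate 35 Hz stimulus
mixed = emg + artifact

# Step 1: estimate the stimulation frequency from the largest spectral peak.
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)
f_stim = freqs[np.abs(spec).argmax()]

# Step 2: mask the stimulus frequency and its harmonics, then reconstruct
# the 'cleaned' EMG by inverse FFT.
mask = np.ones_like(spec, dtype=bool)
for h in range(1, 6):
    mask &= np.abs(freqs - h * f_stim) > 1.0   # 1 Hz guard band (assumed)
cleaned = np.fft.irfft(spec * mask, n=mixed.size)
```

On this surrogate signal the estimated frequency matches the injected 35 Hz artifact and the cleaned trace has a much smaller standard deviation than the mixed one.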

Keywords: EMG, FES, stimulus artifacts, self-adaptive

Procedia PDF Downloads 399
38021 Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

Authors: O. Badagadze, G. Sirbiladze, I. Khutsishvili

Abstract:

This work proposes a decision support methodology for credit risk minimization in the selection of investment projects. The methodology comprises two stages of project evaluation. A preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage ranks the chosen projects using the Possibilistic Discrimination Analysis Method, a new modification of the well-known Method of Fuzzy Discrimination Analysis.

Keywords: expert valuations, expertons, investment project risks, positive and negative discriminations, possibility distribution

Procedia PDF Downloads 676
38020 Dielectric Properties of Ni-Al Nano Ferrites Synthesized by Citrate Gel Method

Authors: D. Ravinder, K. S. Nagaraju

Abstract:

Ni–Al ferrites with the composition NiAlxFe2-xO4 (x = 0.2, 0.4, 0.6, and 0.8) were prepared by the citrate gel method. The dielectric properties of all the samples were investigated at room temperature as a function of frequency. The dielectric constant shows dispersion in the lower frequency region and remains almost constant at higher frequencies. The frequency dependence of the dielectric loss tangent (tanδ) is found to be abnormal, giving a peak at a certain frequency for the mixed Ni–Al ferrites. A qualitative explanation is given for the composition and frequency dependence of the dielectric loss tangent.

Keywords: ferrites, citrate method, lattice parameter, dielectric constant

Procedia PDF Downloads 303
38019 The Impact of the Urban Planning and Environmental Problems over the Quality of Life Case Study: Median Zone of Bucharest's Sector 1, Romania

Authors: Cristian Cazacu, Bela Kobulniczky

Abstract:

Even though the median area of Bucharest's Sector 1 nowadays enjoys one of the best reputations in terms of quality of life, the urban planning problems of the last twenty years, as well as those related to the urban environment, have become more and more obvious and acute. This happened as non-compliance with urban and spatial planning laws, corroborated with uncontrolled territorial expansion in certain areas and faulty management of public and private spaces, grew more severe. The action of all these factors has been felt more and more strongly in the territory over the last twenty years, degrading the quality of the urban environment and, in parallel, affecting the inhabitants' general quality of life. Our methodology is based on analyzing a wide range of environmental parameters and on advanced resources and skills for mapping planning and environmental dysfunctions, with the possibility of integrating the information into GIS programs; all data sets are corroborated with problems related to spatial planning management and inaccuracies of the urbanistic sector. In the end, we obtained a calculated and realistic image of the dysfunctions and a quantitative view of their magnitude in the territory. We also succeeded in creating a full general map of the degree of degradation of the urban environment by typology of urban tissue. Moreover, the methods we applied can be used more widely to calculate and create realistic images and intelligent maps of environmental quality in areas larger than this one. Our study shows that environmental degradation occurred differently in the urban tissues of our study area, depending on several factors, and it reviews the faulty way in which the recovery / urban regeneration processes of recent years have led to new territorial dysfunctions. The general, centralized results show that the analyzed space has a much wider range of problems than initially thought, although notoriety and social etiquette place it far above other spaces in the same city.

Keywords: environment, GIS, planning, urban tissues

Procedia PDF Downloads 147
38018 SIPINA Induction Graph Method for Seismic Risk Prediction

Authors: B. Selma

Abstract:

The aim of this study is to test the feasibility of the SIPINA method for predicting the harmfulness parameters controlling the seismic response. The approach developed takes into consideration both the focal depth and the peak ground acceleration; the parameter to determine is the displacement. The data used for training this method, together with the nonlinear seismic analysis, are described and applied to a class of damage models for some typical structures of the existing urban infrastructure of Jassy, Romania. The results obtained indicate an influence of the focal depth and the peak ground acceleration on the displacement.

Keywords: SIPINA algorithm, seism, focal depth, peak ground acceleration, displacement

Procedia PDF Downloads 313
38017 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of 1/60 Hz (one sample per minute). The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software package that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. From these, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, the formed signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
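
The DTW-based matching step mentioned above can be sketched with the classic dynamic-programming distance: a measured power profile is compared against candidate appliance templates, and the smaller distance wins. The template values below are made up for illustration; the paper's pipeline uses richer power, geometrical and statistical features.

```python
import numpy as np

def dtw_distance(a, b):
    # classic O(len(a) * len(b)) dynamic time warping distance
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical per-minute power readings (W) and two appliance templates.
measured = [0, 0, 1200, 1250, 1200, 0]
fridge   = [0, 150, 160, 150, 0]
kettle   = [0, 1200, 1250, 1200, 0]
match = min((fridge, kettle), key=lambda tpl: dtw_distance(measured, tpl))
```

Because DTW warps the time axis, the kettle template matches even though the measured run is one sample longer — exactly the tolerance needed at a 1/60 Hz sampling rate.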

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 78
38016 PM Electrical Machines Diagnostic: Methods Selected

Authors: M. Barański

Abstract:

This paper presents several diagnostic methods designed for electrical machines, especially permanent magnet (PM) machines. Such machines are commonly used in small wind and water systems and in vehicle drives. These methods are preferred by the author for the periodic diagnostics of electrical machines. Special attention should be paid to the diagnostics of turn-to-turn insulation and of vibrations; both of these methods were created at the Institute of Electrical Drives and Machines Komel. The vibration diagnostic method is the main thesis of the author's doctoral dissertation: a method of determining the technical condition of a PM electrical machine based on its own signals, which is the subject of patent application No. P.405669. Specific structural properties of machines excited by permanent magnets are used in this method, namely the electromotive force (EMF) generated due to vibrations. A number of publications describing vibration diagnostic methods and tests of electrical machines with permanent magnets were analysed, and no method was found that determines the technical condition of such a machine based on its own signals.

Keywords: electrical vehicle, generator, main insulation, permanent magnet, thermography, traction drive, turn-to-turn insulation, vibrations

Procedia PDF Downloads 402
38015 Construction Information Visualization System Using nD CAD Model

Authors: Hyeon-seoung Kim, Sang-mi Park, Sun-ju Han, Leen-seok Kang

Abstract:

The visualization of construction information using 3D and nD modeling can satisfy the visualization needs of each construction project participant. The nD CAD system is a tool in which construction information, such as construction schedule, cost and resource utilization, is simulated in 4D, 5D and 6D object formats based on the 3D object. This study developed a methodology and simulation engine for an nD CAD system for construction project management. It has improved functions, such as built-in schedule generation, cost simulation of a changed budget and built-in resource allocation, compared with current systems. To develop an integrated nD CAD system, this study attempts an integrated method to link 5D and 6D objects based on the 4D object.

Keywords: building information modeling, visual simulation, 3D object, nD CAD, augmented reality

Procedia PDF Downloads 312
38014 Predicting the Uniaxial Strength Distribution of Brittle Materials Based on a Uniaxial Test

Authors: Benjamin Sonnenreich

Abstract:

Brittle fracture failure probability is best described using a stochastic approach based on the 'weakest link' concept and the connection between the microstructure and the macroscopic fracture scale. A general theoretical and experimental framework is presented to predict the uniaxial strength distribution from independent uniaxial test data. The framework takes as input the applied stresses, the geometry, the materials, the defect distributions and the relevant random variables from uniaxial test results, and gives as output an overall failure probability that can be used to improve the reliability of practical designs. Additionally, the method facilitates comparisons of strength data from several sources, uniaxial tests, and sample geometries.
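
A standard embodiment of the weakest-link idea is the two-parameter Weibull distribution, sketched below. The characteristic strength, Weibull modulus, and volume-scaling factor are illustrative placeholders, not fitted values from the paper; the paper's framework is more general.

```python
import math

def weibull_failure_prob(sigma, sigma0, m, volume_ratio=1.0):
    # Weakest-link (Weibull) failure probability: Pf = 1 - exp(-V (s/s0)^m).
    # sigma0 (characteristic strength) and m (Weibull modulus) would be
    # fitted to uniaxial test data; volume_ratio rescales to another
    # specimen size. All numbers here are hypothetical.
    return 1.0 - math.exp(-volume_ratio * (sigma / sigma0) ** m)

# At the characteristic strength, Pf = 1 - 1/e, about 63.2%.
pf = weibull_failure_prob(300.0, 300.0, m=10.0)
```

Doubling the stressed volume (`volume_ratio=2.0`) raises the failure probability at a fixed stress, which is the size effect the weakest-link concept predicts.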

Keywords: brittle fracture, strength distribution, uniaxial, weakest link concept

Procedia PDF Downloads 325
38013 Evaluation of Paper Effluent with Two Bacterial Strain and Their Consortia

Authors: Priya Tomar, Pallavi Mittal

Abstract:

As industrialization is inevitable and progresses with rapid acceleration, the need for innovative ways of disposing of waste has increased. Recent advancements in bioresource technology pave the way for novel ideas on recycling factory waste that has been polluting agro-industry, soil and water bodies. Paper industries in India are considerable in number, and molasses and impure alcohol are still being used as raw materials for paper manufacturing. Paper mills based on non-conventional agro residues are being encouraged due to the increased demand for paper and the acute shortage of forest-based raw materials. The colouring bodies present in the wastewater from pulp and paper mills are organic in nature and comprise wood extractives, tannin, resins, synthetic dyes, lignin and its degradation products formed by the action of chlorine on lignin, which impart an offensive colour to the water. These mills use different chemical processes for paper manufacturing, due to which lignified chemicals are released into the environment; therefore, the chemical oxygen demand (COD) of the emanating stream is quite high. This paper presents some new techniques that were developed to improve the efficiency of bioremediation in the paper industry. A short introduction to the paper industry, the presently available bioremediation methods, and different strategies are also discussed. To address the above problem, two bacterial strains (Pseudomonas aeruginosa and Bacillus subtilis) and their consortium were applied to the pulp and paper mill effluent, in treatments named T-1 to T-6, for the decolourisation of the paper industry effluent. The results indicated that the maximum colour reduction (60.5%) is achieved by Pseudomonas aeruginosa, the maximum COD reduction (88.8%) by Bacillus subtilis, the maximum pH change (4.23) by Pseudomonas aeruginosa, the maximum TSS reduction (2.09%) by Bacillus subtilis, and the maximum TDS reduction (0.95%) by Bacillus subtilis. When the wastewater was supplemented with a carbon (glucose) and nitrogen (yeast extract) source, the data revealed a higher efficiency for Bacillus subtilis with glucose than for Pseudomonas aeruginosa.

Keywords: bioremediation, paper and pulp mill effluent, treated effluent, lignin

Procedia PDF Downloads 251
38012 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently designing intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for an initial demonstration of a given task or desired behavior. The collected trajectories are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
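
The intrinsic reward described above — similarity between the behavior-cloning suggestion and the action the RL agent actually takes — can be sketched as follows. The Gaussian similarity kernel, the `scale` parameter, and the example action vectors are assumptions for illustration; the abstract only states that the reward is based on action similarity.

```python
import numpy as np

def intrinsic_reward(action_rl, action_bc, scale=1.0):
    # Reward approaches 1 as the RL action approaches the behavior-cloning
    # suggestion, and decays toward 0 as they diverge (assumed kernel).
    action_rl = np.asarray(action_rl, dtype=float)
    action_bc = np.asarray(action_bc, dtype=float)
    return float(np.exp(-scale * np.sum((action_rl - action_bc) ** 2)))

# Hypothetical roll/pitch/yaw reference commands.
r_same = intrinsic_reward([0.1, 0.0, 0.2], [0.1, 0.0, 0.2])
r_far  = intrinsic_reward([0.9, -0.5, 0.2], [0.1, 0.0, 0.2])
```

Raising `scale` sharpens the kernel, penalizing deviations from the human-derived suggestion more aggressively — one knob by which the shaping could be reshaped after additional demonstration or critique.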

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 259
38011 Soil Parameters Identification around PMT Test by Inverse Analysis

Authors: I. Toumi, Y. Abed, A. Bouafia

Abstract:

This paper presents a methodology for identifying cohesive soil parameters that takes into account different constitutive equations. The procedure, applied to identify the parameters of the generalized Prager model associated with the Drucker-Prager failure criterion from a pressuremeter expansion curve, is based on an inverse analysis approach, which consists of minimizing a function representing the difference between the experimental curve and the simulated curve using a simplex algorithm. The model response on the pressuremeter path and its identification from experimental data lead to the determination of the friction angle, the cohesion and Young's modulus. Some effects of the parameters on the simulated curves and on the stress paths around the pressuremeter probe are presented. Comparisons between the parameters determined with the proposed method and those obtained by other means are also presented.
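
The inverse-analysis loop — simulate a curve, measure its gap to the experimental one, and let a simplex (Nelder-Mead) search adjust the parameters — can be sketched as below. The exponential forward model and the two parameters are stand-ins for illustration; the paper's forward model is a finite element simulation of cavity expansion with the generalized Prager model.

```python
import numpy as np
from scipy.optimize import minimize

def simulated_curve(params, strain):
    # hypothetical pressuremeter-like response: pressure rising toward a
    # limit value (NOT the paper's constitutive model)
    stiffness, strength = params
    return strength * (1.0 - np.exp(-stiffness * strain))

strain = np.linspace(0.0, 0.1, 20)
true_params = (60.0, 450.0)
experimental = simulated_curve(true_params, strain)  # synthetic "data"

def objective(params):
    # squared gap between experimental and simulated curves
    return float(np.sum((experimental - simulated_curve(params, strain)) ** 2))

# Nelder-Mead is the derivative-free simplex algorithm named in the abstract.
result = minimize(objective, x0=[10.0, 100.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 5000})
```

Starting far from the true parameters, the simplex still recovers them because the misfit surface of this toy model is smooth; real inverse analyses typically need reasonable initial guesses.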

Keywords: cohesive soils, cavity expansion, pressuremeter test, finite element method, optimization procedure, simplex algorithm

Procedia PDF Downloads 294
38010 Analytical Study of Holographic Polymer Dispersed Liquid Crystals Using Finite Difference Time Domain Method

Authors: N. R. Mohamad, H. Ono, H. Haroon, A. Salleh, N. M. Z. Hashim

Abstract:

In this research, we study and analyze the modulation of light and liquid crystal in HPDLCs using the Finite-Difference Time-Domain (FDTD) method. HPDLCs are modeled as a mixture of polymer and liquid crystals (LCs), categorized as an anisotropic medium. The FDTD method directly solves Maxwell's equations with few approximations, so it provides a flexible and general approach for arbitrary anisotropic media. The FDTD simulations show that the highest diffraction efficiency occurs at ±19 degrees (the Bragg angle) for a p-polarized beam incident on the Bragg grating, with Q > 10, when the pitch is 1 µm. The liquid crystal is therefore assumed to be aligned parallel to the grating vector for these parameters.
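
As a sanity check on the numbers quoted above, the Bragg condition links the 1 µm pitch and the ±19-degree angle to an optical wavelength, and the Klein-Cook parameter Q distinguishes the thick-grating regime. The simple condition below ignores refraction at the sample boundary, and the grating thickness and refractive index are assumed values, not taken from the abstract.

```python
import math

# Bragg condition (first order, external-angle approximation):
# 2 * pitch * sin(theta) = wavelength
pitch = 1.0e-6            # grating pitch (m), from the abstract
theta = math.radians(19)  # reported Bragg angle
wavelength = 2 * pitch * math.sin(theta)   # about 0.65 micrometres (visible red)

# Klein-Cook parameter Q = 2*pi*lambda*d / (n * pitch^2); Q > 10 marks the
# thick (Bragg) grating regime. Thickness d and index n are assumptions.
d, n = 10e-6, 1.5
Q = 2 * math.pi * wavelength * d / (n * pitch ** 2)
```

With these assumed values Q comes out well above 10, consistent with the abstract's statement that the grating operates in the Bragg regime.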

Keywords: birefringence, diffraction efficiency, finite-difference time-domain, nematic liquid crystals

Procedia PDF Downloads 460
38009 Simulation Study on Spacecraft Surface Charging Induced by Jovian Plasma Environment with Particle in Cell Method

Authors: Meihua Fang, Yipan Guo, Tao Fei, Pengyu Tian

Abstract:

Spacecraft surface charging caused by space plasma is a major space environment hazard. The particle-in-cell (PIC) method can be used to simulate the interaction between space plasma and a spacecraft. It has been shown that the surface charging level of spacecraft in Jupiter's orbits is high because of the electron-heavy plasma environment. In this paper, the Jovian plasma environment is modeled and a surface charging analysis is carried out with the PIC-based software Spacecraft Plasma Interaction System (SPIS). The results show that the spacecraft charging potential exceeds 1000 V at 2Rj, 15Rj and 25Rj polar orbits on the dark side under the worst-case plasma model. Furthermore, the simulation results indicate that the large Jovian magnetic field increases the surface charging level through secondary electron gyration.

Keywords: Jupiter, PIC, space plasma, surface charging

Procedia PDF Downloads 151
38008 An MrPPG Method for Face Anti-Spoofing

Authors: Lan Zhang, Cailing Zhang

Abstract:

In recent years, many face anti-spoofing algorithms have achieved high detection accuracy when detecting either 2D presentation attacks or 3D mask attacks alone, but their performance is greatly reduced in multi-dimensional and cross-dataset tests. The rPPG method used for face anti-spoofing exploits the unique vital signs of a real face to distinguish real faces from spoofing attacks, so it offers strong stability compared with other methods, but its detection rate for 2D attacks needs to be improved. Therefore, in this paper, we improve an rPPG (remote photoplethysmography) method, MrPPG, for face anti-spoofing through color space fusion, by using the correlation of pulse signals between real face regions and background regions, and by introducing a recurrent neural network (LSTM) to improve accuracy against 2D attacks. MrPPG also achieves high accuracy and good stability in multi-dimensional and cross-dataset face anti-spoofing. The improved method was validated on the Replay-Attack, CASIA-FASD, SiW and HKBU_MARs_V2 datasets, and the experimental results show that the performance and stability of the proposed algorithm are superior to many advanced algorithms.

Keywords: face anti-spoofing, face presentation attack detection, remote photoplethysmography, MrPPG

Procedia PDF Downloads 178
38007 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline of coal prices over the past years has significantly increased the awareness of effective mining operation. A viable step must be undertaken to become more cost competitive while striving for best mining practice, especially at Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented, as it is controlled by a contract rate on the unit basis of USD ($) per bcm. The method emerges from the daily dozer push activity that continually shifts the overburden until the final target design set by mine planning. Volume calculation is then performed with the cut-and-fill method each time overburden is removed within the determined distance, using a high-precision GNSS system applied to the dozer as guidance to ensure the optimum result of overburden removal. The accumulated daily-to-weekly dozer push volume is found to be 95 bcm, which, multiplied by the average sell rate of $0.95 per bcm, gives a monthly revenue of $90.25. Furthermore, the payment mechanism is based on push distance and push grade. The push distance interval determines the rates, which vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, ranging from -25% to +25%. The payable rates for dozer push operation follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The result indicates that dozer push measurement can be a viable surface mining alternative, since it refines the method of work, operating cost, and productivity, apart from reducing exposure to the risk of low rented-equipment performance.
In addition, the payment mechanism of contract-rate-based dozer push operation scheduling will ultimately deliver clients almost 45% cost reduction in the form of a low and consistent cost.
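The payment mechanism described above can be sketched in a few lines. The distance brackets and the slope adjustment below are illustrative assumptions, since the abstract only states that rates range from $0.90 to $2.69 per bcm and that push slope grades span -25% to +25%:

```python
def push_rate(distance_m, slope_grade):
    """Hypothetical contract rate in USD per bcm for one dozer push."""
    # Base rate grows with the push distance interval (assumed brackets;
    # the abstract only gives the overall $0.90-$2.69 range).
    if distance_m <= 50:
        base = 0.90
    elif distance_m <= 100:
        base = 1.50
    else:
        base = 2.69
    # Assumed adjustment: pushing uphill (positive grade) pays more,
    # downhill less; grade is bounded to the stated -25%..+25% range.
    grade = max(-0.25, min(0.25, slope_grade))
    return base * (1.0 + grade)

def claim_amount(volumes_bcm, distance_m, slope_grade):
    """Payable amount for a set of push volumes at one distance and grade."""
    rate = push_rate(distance_m, slope_grade)
    return sum(volumes_bcm) * rate
```

The currency adjustment against JISDOR would then be applied to the resulting USD amount before invoicing.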

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 316
38006 Operational Matrix Method for Fuzzy Fractional Reaction Diffusion Equation

Authors: Sachin Kumar

Abstract:

The fuzzy fractional diffusion equation is widely used to depict different physical processes arising in physics, biology, and hydrology. The motive of this article is to deal with the fuzzy fractional diffusion equation. We study a mathematical model of the fuzzy space-time fractional diffusion equation in which the unknown function, coefficients, and initial-boundary conditions are fuzzy numbers. First, we derive a fuzzy operational matrix of Legendre polynomials for the Caputo-type fuzzy fractional derivative with a non-singular Mittag-Leffler kernel. The main advantage of this method is that it reduces the fuzzy fractional partial differential equation (FFPDE) to a system of fuzzy algebraic equations, from which the solution of the problem can be found. The feasibility of our approach is shown by some numerical examples. Hence, our method is suitable for dealing with FFPDEs and has good accuracy.
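The operational-matrix idea can be illustrated in the crisp, integer-order setting: build a matrix D that maps the Legendre coefficients of a function to the coefficients of its derivative, so that a differential equation becomes an algebraic system. The sketch below is a standard tau-method example, not the paper's fuzzy Caputo matrix with a Mittag-Leffler kernel; it solves u' = u, u(0) = 1 on [-1, 1]:

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 5  # basis P_0 .. P_{n-1}

# Operational matrix of differentiation: column j holds the Legendre
# coefficients of d/dx P_j(x).
D = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    d = L.legder(e)            # coefficients of the derivative of P_j
    D[: len(d), j] = d

# u' = u  ->  (D - I) c = 0, with the last equation replaced by the
# initial condition sum_j c_j P_j(0) = 1 (tau method).
A = D - np.eye(n)
A[-1, :] = [L.legval(0.0, np.eye(n)[j]) for j in range(n)]
b = np.zeros(n)
b[-1] = 1.0
c = np.linalg.solve(A, b)      # coefficients approximating exp(x)
```

A fractional operational matrix plays the same role in place of D; in the fuzzy setting, the resulting algebraic system is solved for the lower and upper branches of the fuzzy coefficients.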

Keywords: fractional PDE, fuzzy valued function, diffusion equation, Legendre polynomial, spectral method

Procedia PDF Downloads 201
38005 Development of Loop-Mediated Isothermal Amplification for Detection of Garlic in Food

Authors: Ting-Ying Su, Meng-Shiou Lee, Shyang-Chwen Sheu

Abstract:

Garlic is commonly used as a seasoning around the world, but some people suffer from an allergy to garlic, and garlic may also cause burning of the mouth, stomach, and throat. In some Buddhist traditions, consuming garlic is not allowed. The objective of this study is to develop a LAMP-based method for the detection of garlic in food. We designed specific primers targeting the ITS1-5.8S rRNA-ITS2 sequence of garlic DNA. The LAMP assay was performed using a set of four different primers, F3, B3, FIP, and BIP, at 60˚C in less than 60 minutes. Results showed that the primers were not cross-reactive to other commonly used spices, including Chinese leek, Chinese onion, green onion, onion, pepper, basil, parsley, and ginger. As low as 2% garlic DNA could be detected. Garlic could still be detected by the developed LAMP assay after boiling at 100˚C for 80 minutes and autoclaving at 121˚C for 60 minutes. Commercial products labeled with garlic as an ingredient could be identified by the developed method.

Keywords: garlic, loop-mediated isothermal amplification, processing, DNA

Procedia PDF Downloads 303
38004 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency of forming communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize out the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the ‘community network’, where the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the ‘overlay communities’. The overlay communities have relatively higher link strengths, despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
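The community-network construction can be sketched as follows; the toy graph, the community assignments (standing in for Louvain output), the centroid coordinates, and the thresholds are all illustrative assumptions:

```python
from itertools import combinations

# Toy cross-community edges of a social network (node ids).
edges = [(0, 3), (0, 4), (1, 3), (2, 5), (6, 7)]

# Communities as they might be returned by the Louvain algorithm.
communities = {"A": {0, 1, 2}, "B": {3, 4, 5}, "C": {6, 7, 8, 9}}

# Centroids of the members' geographical positions (arbitrary units).
centroids = {"A": (0.0, 0.0), "B": (50.0, 0.0), "C": (50.5, 0.2)}

def normalised_strength(ca, cb):
    """Inter-community link strength, normalised over the community sizes."""
    a, b = communities[ca], communities[cb]
    cross = sum(1 for u, v in edges
                if (u in a and v in b) or (u in b and v in a))
    return cross / (len(a) * len(b))

def centroid_distance(ca, cb):
    (x1, y1), (x2, y2) = centroids[ca], centroids[cb]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

# Overlay communities: strongly linked although geographically far apart.
overlay = [pair for pair in combinations(communities, 2)
           if normalised_strength(*pair) > 0.3 and centroid_distance(*pair) > 10]
```

In this toy example, A and B form an overlay pair: their normalised link strength is 4/9 despite a centroid distance of 50 units, while the spatially adjacent pair B-C shares no links at all.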

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 235
38003 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies, and deep learning-based super-resolution methods in particular have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed which utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. This study is assessed by comparing the results obtained on popular datasets to those obtained by other techniques in the domain.
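The depth-wise convolution used in the feed-forward network can be sketched in plain numpy: each channel gets its own k x k kernel, so the layer uses C·k·k weights instead of the C²·k·k of a full convolution. This is a generic illustration of the operation, not the Enhancer implementation:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depth-wise 2D convolution, 'same' padding, stride 1.

    x: (C, H, W) feature map; kernels: (C, k, k), one kernel per channel.
    """
    c, h, w = x.shape
    k = kernels.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x, dtype=float)
    for ch in range(c):                # each channel filtered independently
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(xp[ch, i:i + k, j:j + k] * kernels[ch])
    return out
```

In a Transformer feed-forward block this sits between the two point-wise projections, injecting local spatial context that window self-attention alone does not provide.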

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 105
38002 Post-Occupancy Evaluation of Greenway Based on Multi-Source Data: A Case Study of Jincheng Greenway in Chengdu

Authors: Qin Zhu

Abstract:

Under the development concept of Park City, the Tianfu Greenway system, as the basic and pre-configured element of Chengdu's global park construction, connects urban open space with linear and circular structures and undertakes the ecological, cultural, and recreational functions of the park system. Chengdu's greenway construction is in full swing. In the process of greenway planning and construction, the landscape effect of greenways on urban quality improvement is highly valued, while the long-term impact of the users' experience on the sustainable development of greenways is often ignored. Therefore, it is very important to test the effectiveness of greenway construction from the perspective of users. Taking the Jincheng Greenway in Chengdu as an example, this paper attempts to introduce multi-source data to construct a post-occupancy evaluation model of the greenway, adopting behavior mapping, a questionnaire survey, web text analysis, and importance-performance analysis (IPA) to comprehensively evaluate users' behavior characteristics and satisfaction. From the evaluation results, we can grasp the actual behavior patterns and comprehensive needs of users, so that the experience of building greenways can be fed back in time and provide guidance for the optimization and improvement of built greenways and for the planning and construction of future greenways.
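The IPA step can be sketched as a simple quadrant classification of each attribute against the mean importance and mean satisfaction; the attribute names and scores below are hypothetical:

```python
# Hypothetical greenway attributes: (importance, satisfaction) on a 1-5 scale.
scores = {
    "lighting":        (4.5, 2.8),
    "wayfinding":      (3.0, 4.1),
    "plant landscape": (4.2, 4.4),
    "rest facilities": (2.5, 2.6),
}

# Grand means split the IPA plane into four quadrants.
mi = sum(i for i, _ in scores.values()) / len(scores)
ms = sum(s for _, s in scores.values()) / len(scores)

def quadrant(importance, satisfaction):
    if importance >= mi and satisfaction >= ms:
        return "keep up the good work"
    if importance >= mi:
        return "concentrate here"      # high importance, low satisfaction
    if satisfaction >= ms:
        return "possible overkill"
    return "low priority"

ipa = {name: quadrant(i, s) for name, (i, s) in scores.items()}
```

Attributes landing in the "concentrate here" quadrant are the ones whose improvement the evaluation would prioritise.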

Keywords: multi-source data, greenway, IPA analysis, post-occupancy evaluation (POE)

Procedia PDF Downloads 61
38001 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the evidence for causality was not consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A, and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW) 2.64 [95% CI, 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS both in total HNSCC (HR IVW 4.17 [95% CI, 1.06-16.36]; P = 0.04) and in non-HPV-driven HNSCC patients (HR IVW 7.33 [95% CI, 1.63-32.97]; P = 0.01).
Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
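The IVW estimator combines per-SNP Wald ratios (SNP-outcome effect divided by SNP-exposure effect) with weights 1/se²; for survival outcomes the estimate is a log hazard ratio. A minimal sketch with made-up summary statistics:

```python
import math

def ivw(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance weighted estimate from per-SNP summary statistics."""
    ratios = [bo / be for be, bo in zip(beta_exposure, beta_outcome)]
    # First-order weights: 1/se(ratio)^2 with se(ratio) ~ se_outcome/|beta_exposure|.
    weights = [(be / se) ** 2 for be, se in zip(beta_exposure, se_outcome)]
    est = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

# Hypothetical summary statistics for three LDL-C-associated variants.
est, se = ivw([0.10, 0.08, 0.12], [0.05, 0.04, 0.07], [0.02, 0.02, 0.03])
hazard_ratio = math.exp(est)   # log-HR -> HR, as reported in the abstract
```

Real analyses (e.g. with the VOYAGER survival GWAS) add sensitivity estimators alongside IVW to probe pleiotropy; only the basic IVW step is shown here.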

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 100
38000 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study

Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran

Abstract:

In the railway industry, train sets are designed based on contractual requirements (mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, during the beginning of revenue service, trains do not achieve the designed mission-profile distance (mileage) within the timeframe, due to infrastructure constraints, scarcity of commuters, or other operational challenges, thereby not respecting the original design inputs. Since the trains do not run sufficiently and do not achieve the designed mileage within the specified time, the car builder runs the risk of not achieving the contractual MDBF target. This paper proposes a constant-failure-rate-based model to deal with situations where the mileage accumulation does not follow the design mission profile. The model provides an appropriate MDBF target to be demonstrated based on the actual accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and the MDBF target demonstration during low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets are achieved under low mileage accumulation.
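Under a constant failure rate, the expected number of failures grows linearly with accumulated mileage, which is what allows an MDBF target to be apportioned to whatever mileage has actually been run. A minimal sketch of that proportionality, with an illustrative contractual target (the paper's actual appropriation procedure is more detailed):

```python
DESIGN_MDBF_KM = 40_000        # illustrative contractual MDBF target

def expected_failures(accumulated_km):
    """Constant failure rate lambda = 1/MDBF, so E[failures] = km / MDBF."""
    return accumulated_km / DESIGN_MDBF_KM

def demonstrated_mdbf(accumulated_km, observed_failures):
    """Point estimate of MDBF from field data."""
    return accumulated_km / observed_failures

def target_met(accumulated_km, observed_failures):
    """Compare observed failures with the mileage-scaled expectation."""
    return observed_failures <= expected_failures(accumulated_km)
```

Because the comparison is against the mileage-scaled expectation, a fleet that has run only a fraction of its mission-profile distance can still demonstrate (or fail) the target in a fair way.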

Keywords: mean distance between failures, mileage-based reliability, reliability target appropriations, rolling stock reliability

Procedia PDF Downloads 267
37999 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that involves semantic technology to retrieve information and display it meaningfully. Semantic search engines are not used widely over the web, as the majority are still in beta stage or under construction. Many problems face current applications of semantic search; the major problem is to analyze and calculate the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google’s index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user, based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. In practice, we used this method to calculate the semantic similarity between each query and the description of each page’s content generated by Google. The generated sentences are sent to the Google engine one by one and re-ranked all together with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries and observed that most results ranked with QESM were reordered with respect to Google’s originally generated pages. With our experimental queries, QESM frequently achieves better accuracy than Google; in the worst cases, it behaves like Google.
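The expansion step can be sketched with a toy synonym table standing in for the lexical database, plus a bag-of-words cosine similarity for re-ranking; both are simplifications of what the paper describes:

```python
from itertools import product
from collections import Counter
import math

# Toy synonym table standing in for the lexical database.
synonyms = {
    "cheap": ["cheap", "inexpensive", "affordable"],
    "laptop": ["laptop", "notebook"],
}

def expand(query_terms):
    """All semantically analogous queries from the synonym combinations."""
    options = [synonyms.get(t, [t]) for t in query_terms]
    return [" ".join(combo) for combo in product(*options)]

def cosine(a, b):
    """Bag-of-words cosine similarity between two sentences."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A meta-engine would send each expanded query to the underlying index and re-rank the merged result pages by their similarity to the queries; the real QESM uses semantic (not surface-form) similarity at that step.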

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 425
37998 Fault-Tolerant Predictive Control for Polytopic LPV Systems Subject to Sensor Faults

Authors: Sofiane Bououden, Ilyes Boulkaibet

Abstract:

In this paper, a robust fault-tolerant predictive control (FTPC) strategy is proposed for systems with linear parameter varying (LPV) models and input constraints, subject to sensor faults. Generally, virtual observers are used to improve the observation precision and reduce the impact of sensor faults and uncertainties in the system. However, this type of observer lacks certain system measurements, which substantially reduces its accuracy. To deal with this issue, a real observer is designed based on the virtual observer, and consequently a real observer-based robust predictive control is designed for polytopic LPV systems. Moreover, the proposed observer ensures that all system states and sensor faults are estimated. As a result, and based on both observers, a robust fault-tolerant predictive control is then established via the Lyapunov method, where sufficient conditions for stability analysis and control purposes are given in linear matrix inequality (LMI) form. Finally, simulation results are given to show the effectiveness of the proposed approach.
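The observer idea at the core of such schemes can be illustrated with a plain discrete-time Luenberger observer; the matrices and the gain below are toy values chosen so that A - LC is stable (in the paper, the observer and controller gains come from the LMI conditions and handle the LPV polytope and fault estimation):

```python
import numpy as np

# Toy discrete-time system x_{k+1} = A x_k + B u_k, y_k = C x_k.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
Lgain = np.array([[0.5],
                  [0.3]])       # chosen so A - Lgain @ C is Schur stable

def observer_step(x_hat, u, y):
    """Predict with the model, then correct with the output error."""
    return A @ x_hat + B @ u + Lgain @ (y - C @ x_hat)

# The estimation error obeys e_{k+1} = (A - L C) e_k and decays to zero.
x = np.array([[1.0], [0.5]])     # true state (unknown to the observer)
x_hat = np.zeros((2, 1))         # observer starts from zero
u = np.array([[0.0]])
for _ in range(200):
    y = C @ x                    # measurement of the true system
    x = A @ x + B @ u
    x_hat = observer_step(x_hat, u, y)
```

The FTPC design layers a second (real) observer and the predictive controller on top of this basic state-reconstruction mechanism.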

Keywords: linear parameter varying systems, fault-tolerant predictive control, observer-based control, sensor faults, input constraints, linear matrix inequalities

Procedia PDF Downloads 200
37997 Adaptive Dehazing Using Fusion Strategy

Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha

Abstract:

The goal of haze removal algorithms is to enhance and recover details of a scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) an image edge-strengthening gradient model. Accurate haze removal algorithms are needed in many circumstances. The de-fog feature works through an algorithm which first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The haze-free output image is then reconstructed using the fusion methodology. In order to increase the accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in the presented examples.
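The per-pixel weighted fusion can be sketched in a few lines of numpy; the inputs (e.g. a contrast-equalised image and an edge-strengthened one) and their weight maps are placeholders for the components named above:

```python
import numpy as np

def fuse(inputs, weight_maps):
    """Blend aligned inputs with per-pixel normalised weight maps.

    inputs, weight_maps: lists of (H, W) float arrays; weights >= 0.
    Each pixel takes more from whichever input its weight map favours.
    """
    w = np.stack(weight_maps).astype(float)
    w /= w.sum(axis=0, keepdims=True) + 1e-12   # normalise weights per pixel
    return np.sum(w * np.stack(inputs), axis=0)
```

With equal weight maps this reduces to a plain average; a dehazing pipeline would instead derive the weights from local contrast, saturation, or gradient strength so that well-exposed, sharp regions dominate the fused result.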

Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map

Procedia PDF Downloads 464