Search results for: time domain analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40221

36351 Analysis of Rescuers' Viewpoints on Victim Tracking in Earthquakes Using Radio Frequency Identification (RFID)

Authors: Sima Ajami, Batool Akbari

Abstract:

Background: Radio frequency identification (RFID) systems have been successfully applied in manufacturing, supply chains, agriculture, transportation, healthcare, and services. RFID is already used to track and trace victims in disaster situations: data can be collected in real time and made immediately available to emergency personnel, saving time. Objectives: The aims of this study were, first, to identify stakeholders and customers for rescuing earthquake victims; second, to list the key internal and external factors for using RFID to track earthquake victims; and finally, to assess the strengths, weaknesses, opportunities, and threats (SWOT) from the rescuers' viewpoint. Materials and Methods: This was an applied, analytical study. The study population included scholars, experts, planners, policy makers, and rescuers in the Red Crescent Society of Isfahan Province, Disaster Management of Isfahan Province, the Maintenance and Operation Department of Isfahan, the Fire and Safety Services Organization of Isfahan Municipality, and the Medical Emergencies and Disaster Management Center of Isfahan. The researchers then held a workshop to teach participants about RFID and its uses in tracking earthquake victims. During the workshop, participants identified, listed, and weighted the key internal factors (strengths and weaknesses; SW) and external factors (opportunities and threats; OT) for using RFID to track earthquake victims. The weighted SWOT scores were then calculated, and participants' opinions on the issue were assessed. Finally, according to the SWOT matrix, participants proposed strategies for addressing the weaknesses, problems, challenges, and threats through the identified opportunities and strengths. Results: The SWOT analysis showed total weighted scores of 3.91 for internal factors (Internal Factor Evaluation) and 3.31 for external factors (External Factor Evaluation). The result therefore fell in the SO cell of the SWOT analysis matrix, favoring aggressive strategies. Organizations, scholars, experts, planners, policy makers, and rescue workers should plan to use RFID technology in order to save more victims and manage their lives. Conclusions: The researchers propose applying SO strategies, using internal strengths to take advantage of external opportunities. It is suggested that policy makers plan to use the most developed technologies to save earthquake victims and deliver the easiest possible service to them. To do this, educating, informing, and encouraging rescuers to use these technologies is essential. Originality/Value: This research paper showed how RFID can be useful for tracking victims in earthquakes.

Keywords: radio frequency identification system, strength, weakness, earthquake, victim

Procedia PDF Downloads 317
36350 Adaptation Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements

Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath

Abstract:

While human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimal vocabulary, with its pronunciation variations stored at the front of memory for ready reference, whereas machines keep the entire pronunciation dictionary for ready reference. Supervised methods used for preparing pronunciation dictionaries require large amounts of manual effort, cost, and time, and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach of learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. Performance of the system is measured using an adaptation model, and precision is found to be better than 86 percent.
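
For illustration, the dynamic-programming recursion underlying a phone-distance measure of this kind can be sketched as follows; the 0/1 substitution cost is a hypothetical placeholder, since the actual Dynamic Phone Warping algorithm derives its phonemic distances differently.

```python
# Illustrative dynamic-programming alignment of two phone sequences,
# in the spirit of Dynamic Phone Warping. The substitution cost used
# here is a hypothetical 0/1 placeholder, not the paper's phonemic
# distance measure.
def phone_distance(a: list[str], b: list[str], sub_cost=None) -> float:
    if sub_cost is None:
        sub_cost = lambda x, y: 0.0 if x == y else 1.0  # crude 0/1 cost
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)          # deletions
    for j in range(1, n + 1):
        d[0][j] = float(j)          # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                 # delete a[i-1]
                          d[i][j - 1] + 1.0,                 # insert b[j-1]
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m][n]

# Example: two pronunciation variants of "tomato"
print(phone_distance("t ax m ey t ow".split(), "t ax m aa t ow".split()))  # 1.0
```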

Keywords: pronunciation variations, dynamic programming, machine learning, natural language processing

Procedia PDF Downloads 165
36349 Fuzzy Total Factor Productivity by Credibility Theory

Authors: Shivi Agarwal, Trilok Mathur

Abstract:

This paper proposes a method to measure total factor productivity (TFP) change by credibility theory for fuzzy input and output variables. TFP change has been widely studied with crisp input and output variables; however, in some cases, the input and output data of decision-making units (DMUs) can only be measured with uncertainty. Such data can be represented as linguistic variables characterized by fuzzy numbers. The Malmquist productivity index (MPI) is widely used to estimate TFP change by calculating the total factor productivity of a DMU for different time periods using data envelopment analysis (DEA). Here, the fuzzy DEA (FDEA) model is solved using credibility theory, and the FDEA results are used to measure TFP change for fuzzy input and output variables. Finally, numerical examples are presented to illustrate the proposed method. The suggested methodology can be utilized for performance evaluation of DMUs and can help assess their level of integration. It can also be applied to rank the DMUs, identify those that are lagging behind, and make recommendations on how they can improve their performance to bring them on par with the other DMUs.
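
For reference, the crisp (non-fuzzy) Malmquist productivity index between periods t and t+1 is conventionally defined from the DEA distance functions D as below; the paper's credibility-theoretic variant evaluates these distances with FDEA scores.

```latex
MPI_{t}^{t+1} = \left[
\frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
\cdot
\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
\right]^{1/2}
```

An MPI greater than 1 indicates TFP growth between the two periods.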

Keywords: chance-constrained programming, credibility theory, data envelopment analysis, fuzzy data, Malmquist productivity index

Procedia PDF Downloads 352
36348 Contextual Sentiment Analysis with Untrained Annotators

Authors: Lucas A. Silva, Carla R. Aguiar

Abstract:

This work presents a proposal for performing contextual sentiment analysis using a supervised learning algorithm while dispensing with extensive annotator training. To achieve this goal, a web platform was developed to carry out the entire procedure outlined in this paper. The main contribution of the pipeline described here is to simplify and automate the annotation process through an analysis of congruence between annotations. This ensured satisfactory results even without specialized annotators in the research context, avoiding the generation of biased training data for the classifiers. A case study was conducted on an entrepreneurship blog. The experimental results were consistent with the literature on annotation using formalized processes with experts.
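
A minimal sketch of the core idea, under assumptions: keep only documents whose untrained annotators agree beyond a threshold, then train a naive Bayes classifier on the surviving labels. The 2/3-majority rule and the toy data are hypothetical stand-ins for the paper's congruence analysis.

```python
# Minimal sketch: filter crowd annotations by agreement, then train
# naive Bayes. The agreement rule and data are hypothetical placeholders.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["great post, very helpful", "terrible advice", "loved the examples"]
votes = [["pos", "pos", "neg"], ["neg", "neg", "neg"], ["pos", "pos", "pos"]]

texts, labels = [], []
for doc, v in zip(docs, votes):
    label, count = Counter(v).most_common(1)[0]
    if count / len(v) >= 2 / 3:          # keep only congruent annotations
        texts.append(doc)
        labels.append(label)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
print(clf.predict(vec.transform(["helpful examples"])))
```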

Keywords: sentiment analysis, untrained annotators, naive bayes, entrepreneurship, contextualized classifier

Procedia PDF Downloads 386
36347 An ALM Matrix Completion Algorithm for Recovering Weather Monitoring Data

Authors: Yuqing Chen, Ying Xu, Renfa Li

Abstract:

The development of matrix completion theory provides new approaches for data gathering in Wireless Sensor Networks (WSN). Existing matrix completion algorithms for WSN mainly consider how to reduce the number of samples without considering real-time performance when recovering the data matrix. In order to guarantee recovery accuracy and simultaneously reduce recovery time, we propose a new ALM (Augmented Lagrange Multiplier) algorithm to recover weather monitoring data. Extensive experiments were carried out to investigate the performance of the proposed ALM algorithm under different parameter settings, sampling rates, and sampling models. In addition, we compare the proposed ALM algorithm with existing algorithms in the literature. Experimental results show that the ALM algorithm obtains better overall recovery accuracy with less computing time, demonstrating that it is an effective and efficient approach for recovering real-world weather monitoring data in WSN.
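
For background, the singular value thresholding idea at the core of such algorithms can be sketched as below. This minimal iteration is not the authors' ALM formulation, which adds Lagrange-multiplier updates for faster convergence.

```python
# Minimal singular-value-thresholding sketch for matrix completion:
# recover a low-rank matrix from a subset of observed entries.
# Background illustration only, not the paper's ALM algorithm.
import numpy as np

def complete(M: np.ndarray, mask: np.ndarray, tau=5.0, iters=200) -> np.ndarray:
    """M: data with zeros at unobserved entries; mask: True where observed."""
    X = np.zeros_like(M)
    for _ in range(iters):
        X[mask] = M[mask]                      # enforce observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vt                       # low-rank re-estimate
    return X

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 matrix
mask = rng.random(truth.shape) < 0.5                         # 50% sampling rate
est = complete(np.where(mask, truth, 0.0), mask)
print(np.linalg.norm((est - truth)[~mask]) / np.linalg.norm(truth[~mask]))
```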

Keywords: wireless sensor network, matrix completion, singular value thresholding, augmented Lagrange multiplier

Procedia PDF Downloads 377
36346 Lead Removal Using Zeolites Synthesized from Sugarcane Bagasse Ash

Authors: Sirirat Jangkorn, Pornsawai Praipipat

Abstract:

Sugarcane bagasse ash from sugar factories is a solid waste that is a rich source of silica. With the alkali fusion method, quartz particles in the material can be dissolved and used as the silicon source for synthesizing silica-based materials such as zeolites. Zeolites have many advantages: they serve as catalysts to improve chemical reactions, and they can also remove heavy metals, including lead, from water. This study therefore attempts to synthesize zeolites from sugarcane bagasse ash, investigate their structural characteristics and chemical compositions to confirm zeolite formation, and examine their lead removal efficiency through batch test studies. The sugarcane bagasse ash was chosen as the silicon source, and X-ray diffraction (XRD) and X-ray fluorescence spectrometry (XRF) were used to verify the zeolite pattern structure and the element compositions, respectively. Batch tests varying dose (0.05, 0.1, 0.15 g), contact time (1, 2, 3 h), and pH (3, 5, 7) were used to investigate the lead removal efficiency of the synthesized zeolite. XRD analysis showed the crystalline phase of the zeolite pattern, and XRF showed that the main element compositions of the synthesized zeolite were SiO₂ (50%) and Al₂O₃ (30%). The batch tests showed that the optimum conditions of the synthesized zeolite for lead removal were a dose of 0.1 g, a contact time of 2 h, and a pH of 5. This study can therefore conclude that zeolites can be synthesized from sugarcane bagasse ash and that they can remove lead from water.
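
For context, batch removal tests of this kind are conventionally evaluated with the standard expressions below; the abstract does not state the exact formulas used.

```latex
\text{Removal (\%)} = \frac{C_0 - C_e}{C_0} \times 100,
\qquad
q_e = \frac{(C_0 - C_e)\, V}{m}
```

Here C0 and Ce are the initial and equilibrium lead concentrations (mg/l), V the solution volume (l), and m the zeolite dose (g).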

Keywords: sugarcane bagasse ash, solid wastes, zeolite, lead

Procedia PDF Downloads 137
36345 Implication of Fractal Kinetics and Diffusion Limited Reaction on Biomass Hydrolysis

Authors: Sibashish Baksi, Ujjaini Sarkar, Sudeshna Saha

Abstract:

In the present study, hydrolysis of Pinus roxburghii wood powder was carried out with Viscozyme, and the kinetics of the hydrolysis were investigated. Finely ground sawdust was submerged in 2% aqueous peroxide solution (pH = 11.5) and pretreated through autoclaving, probe sonication, and alkaline peroxide pretreatment. Afterward, the pretreated material was subjected to hydrolysis. A series of experiments was executed with delignified biomass (50 g/l) and varying enzyme concentrations (24.2-60.5 g/l). In the present study, 14.32 g/l of glucose, along with 7.35 g/l of xylose, was recovered with a Viscozyme concentration of 48.8 g/l, and this condition was treated as the optimum. Additionally, thermal deactivation of Viscozyme was investigated and found to decrease gradually with escalating enzyme loading, from 48.4 g/l (dissociation constant = 0.05 h⁻¹) to 60.5 g/l (dissociation constant = 0.02 h⁻¹). The hydrolysis reaction is pseudo first-order, and therefore the rate of hydrolysis can be expressed as a fractal-like kinetic equation relating the product concentration to the hydrolysis time t. The value of the rate constant (K) increases from 0.008 to 0.017 as the enzyme concentration is raised from 24.2 g/l to 60.5 g/l. A greater value of K is associated with a stronger enzyme-binding capacity of the substrate mass. An escalated concentration of supplied enzyme ensures improved interaction with more substrate molecules, resulting in enhanced depolymerization of the polymeric sugar chains per unit time, which eventually modifies the physicochemical structure of the biomass. All fractal dimensions are between 0 and 1; the lower the fractal dimension, the more easily the biomass is hydrolyzed. With increased enzyme concentration from 24.2 g/l to 48.4 g/l, the fractal dimension drops from 0.1 to 0.044, indicating that the presence of more enzyme molecules can more easily hydrolyze the substrate. However, an increased value was observed on further increasing the enzyme concentration to 60.5 g/l, because of diffusional limitation. It is evident that the hydrolysis reaction system is a heterogeneous organization, and the product formation rate depends strongly on the enzyme diffusion resistances caused by the rate-limiting structures of the substrate-enzyme complex. The value of the diffusion rate constant increases from 1.061 to 2.610 as the enzyme concentration is escalated from 24.2 to 48.4 g/l. As this rate constant is proportional to Fick's diffusion coefficient, it can be assumed that with a higher concentration of enzyme, a larger enzyme mass dM diffuses into the substrate through the surface dF per unit time dt; a higher rate constant is thus associated with faster diffusion of enzyme into the substrate. Regression analysis of the time curves at various enzyme concentrations shows that the diffusion resistance constant increases from 0.3 to 0.51 for the first two enzyme concentrations and decreases again at an enzyme concentration of 60.5 g/l. On a differential scale, the enzyme also experiences greater resistance when a larger mass dM diffuses through dF in dt.
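
For readers unfamiliar with fractal-like kinetics, a common form (following Kopelman) lets the rate coefficient decay with time, so the integrated pseudo first-order law becomes a stretched exponential; the exact equation fitted by the authors may differ.

```latex
k(t) = k_1\, t^{-h}, \quad 0 \le h \le 1
\qquad\Rightarrow\qquad
C_p(t) = C_{\infty}\left[1 - \exp\!\left(-\frac{k_1\, t^{\,1-h}}{1-h}\right)\right]
```

Here h is the fractal exponent (related to the fractal dimension) and C∞ the final product concentration; h = 0 recovers classical first-order kinetics.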

Keywords: viscozyme, glucose, fractal kinetics, thermal deactivation

Procedia PDF Downloads 106
36344 Seismic Performance Evaluation of Structures with Hybrid Dampers Based on FEMA P-58 Methodology

Authors: Minsung Kim, Hyunkoo Kang, Jinkoo Kim

Abstract:

In this study, a hybrid energy dissipation device is developed by combining a steel slit plate and friction pads to be used for seismic retrofit of structures, and its effectiveness is investigated by comparing the life cycle costs of the structure before and after the retrofit. The seismic energy dissipation capability of the dampers is confirmed by cyclic loading tests. The probabilities of reaching various damage states are obtained by fragility analysis, and the life cycle costs of the model structures are computed using the PACT (Performance Assessment Calculation Tool) program based on FEMA P-58 methodology. The fragility analysis shows that the probabilities of reaching limit states are minimized by the seismic retrofit with hybrid dampers and increasing column size. The seismic retrofit with increasing column size and hybrid dampers results in the lowest repair cost and shortest repair time. This research was supported by a grant (13AUDP-B066083-01) from Architecture & Urban Development Research Program funded by Ministry of Land, Infrastructure and Transport of Korean government.
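
In FEMA P-58-style assessments, the fragility of each damage state is typically modeled as a lognormal function of the demand; the study's specific fragility parameters are not given in the abstract.

```latex
P\left(DS \ge ds_i \mid IM = x\right) = \Phi\!\left(\frac{\ln(x/\theta_i)}{\beta_i}\right)
```

Here θi is the median capacity for damage state i, βi the logarithmic standard deviation, and Φ the standard normal CDF.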

Keywords: FEMA P-58, friction dampers, life cycle cost, seismic retrofit

Procedia PDF Downloads 330
36343 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distributed Generators

Authors: Lakshya Bhat, Anubhav Shrivastava, Shivarudraswamy

Abstract:

There has been considerable interest in the area of Distributed Generation in recent times. Distributed Generators serve a wide range of loads and offer better efficiency. The major disadvantage of Distributed Generation, voltage control, is highlighted in this paper. The paper addresses voltage control at the buses of the IEEE 14-bus system by regulating reactive power. An analysis is carried out to select the optimal location for placing the Distributed Generators through load flow analysis, observing where the voltage profile rises. Matlab programming is used to simulate the voltage profile at the respective buses after the introduction of DGs. A tolerance limit of +/-5% of the base value has to be maintained. To maintain this tolerance limit, three methods are used, and a sensitivity analysis of the three voltage control methods is carried out to determine the priority among them.
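
One standard way to quantify bus-voltage sensitivity to reactive power injection, which may or may not match the paper's exact formulation, is the reduced Jacobian of the Newton-Raphson load flow:

```latex
\begin{bmatrix} \Delta P \\ \Delta Q \end{bmatrix}
=
\begin{bmatrix} J_1 & J_2 \\ J_3 & J_4 \end{bmatrix}
\begin{bmatrix} \Delta \theta \\ \Delta V \end{bmatrix},
\qquad
\Delta V = \left(J_4 - J_3 J_1^{-1} J_2\right)^{-1} \Delta Q \;\;\; (\Delta P = 0)
```

Buses with the largest diagonal entries of the inverse reduced Jacobian are the most voltage-sensitive and hence the highest-priority candidates for reactive power control.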

Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis

Procedia PDF Downloads 580
36342 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach

Authors: Sina Kazemi, Farshid Torabi, Todd Peterson

Abstract:

Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cutting removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well's integrity and result in a lower rate of penetration (ROP). This study presents collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed through systematically analyzing and applying modeling techniques to selected field mud logging data. Field measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, hole cleaning performance and ROP progressively change when higher string rotations are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, like the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering RPM (string revolutions per minute) at the same pump rate and optimized mud properties exhibit a positive correlation with field measurements. The field investigation and the developed model in this report show that increasing the speed of string revolution, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipes while reaching a higher than expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Whereas optimization of ROP at a lower pump rate maintains wellbore stability, it also saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power supply engines.

Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity

Procedia PDF Downloads 81
36341 Discourse Analysis of the Concept of Citizenship in Textbooks in Iran

Authors: Jafar Ahmadi

Abstract:

This research is a discourse analysis of the concept of citizenship in textbooks in Iran. The purpose of the study is to identify the dominant citizenship discourse in the content of textbooks. The research method is qualitative, employing qualitative content analysis. The sample was selected purposefully, according to the research topic, from books related to Persian literature, religious education, and social education. The theoretical framework chosen for this research comprises the three theories of citizenship (pre-modern, modern, and postmodern). For each of these discourses, components and indicators were extracted that form the basis of the data analysis. The findings show that the dominant citizenship discourse in the content of Iranian textbooks is the pre-modern discourse, grounded in a religious type of citizenship. Finally, the findings show that the government uses the institution of education to reproduce its power.

Keywords: citizenship, textbooks, discourse analysis, religious citizenship, representation

Procedia PDF Downloads 189
36340 Fatigue Test and Stress-Life Analysis of Nanocomposite-Based Bone Fixation Device

Authors: Jisoo Kim, Min Su Lee, Sunmook Lee

Abstract:

Durability assessment of a nanocomposite-based bone fixation device was performed by flexural fatigue tests, in which the changes in the life cycles of nanocomposite samples synthesized by blending a bioabsorbable polymer (PLGA) and ceramic nanoparticles (β-TCP) in different ratios were monitored. The samples were kept in a constant temperature/humidity chamber at 37°C/50%RH for varied incubation periods to degrade them under temperature/humidity stress. The life cycles initially increased with incubation time irrespective of sample composition, owing to the annealing effect of the polymer. However, the life cycles became shorter as the incubation time increased further, owing to the overall degradation of the nanocomposites. The life cycle of the nanocomposite sample with high ceramic content was shorter than that of the one with low ceramic content, which is attributed to the increased brittleness of the composite at high ceramic content. The changes in chemical properties were also monitored by FT-IR, which indicated that the degradation of the biodegradable polymer could be confirmed by the increased intensities of carboxyl and hydroxyl groups, since the hydrolysis of the ester bonds connecting two successive monomers yields carboxyl end groups and hydroxyl groups.

Keywords: bioabsorbable polymer, bone fixation device, ceramic nanoparticles, durability assessment, fatigue test

Procedia PDF Downloads 393
36339 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning the global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is generated specifically for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculating the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. Weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
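
A minimal mpi4py sketch of the GPFS startup pattern described above: one rank per node reads that node's pre-file and forwards data to the other ranks on the same node with non-blocking point-to-point messages. The file name and payload handling are hypothetical; this illustrates only the communication pattern, not the DSEM implementation.

```python
# Sketch of node-local startup reads: one reader rank per node,
# non-blocking point-to-point forwarding within the node only, so
# no startup traffic crosses the cluster switches.
from mpi4py import MPI

world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)  # communicator of ranks sharing a node

if node.rank == 0:
    # One reader rank per node loads the node's entire pre-file.
    with open(f"prefile_node_{world.rank}.bin", "rb") as f:  # hypothetical file name
        payload = f.read()
    # Placeholder split: real code would slice per-rank element data.
    chunks = [payload for _ in range(node.size)]
    reqs = [node.isend(chunks[r], dest=r, tag=7) for r in range(1, node.size)]
    MPI.Request.Waitall(reqs)          # complete all non-blocking sends
    data = chunks[0]
else:
    data = node.recv(source=0, tag=7)  # node-local receive
```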

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 129
36338 Biosorption Kinetics, Isotherms, and Thermodynamic Studies of Copper (II) on Spirogyra sp.

Authors: Diwan Singh

Abstract:

The ability of non-living Spirogyra sp. biomass to biosorb copper(II) ions from aqueous solutions was explored. The effects of contact time, pH, initial copper ion concentration, biosorbent dosage, and temperature were investigated in batch experiments. Both the Freundlich and Langmuir isotherms were found applicable to the experimental data (R² > 0.98). Qmax obtained from the Langmuir isotherm was 28.7 mg/g of biomass. The values of the Gibbs free energy change (ΔGº) and enthalpy change (ΔHº) suggest that the sorption is spontaneous and endothermic at 20ºC-40ºC.
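
The fitted models and thermodynamic quantities referred to above have the standard forms:

```latex
q_e = \frac{Q_{max}\, b\, C_e}{1 + b\, C_e} \;\; \text{(Langmuir)},
\qquad
q_e = K_F\, C_e^{1/n} \;\; \text{(Freundlich)},
\qquad
\Delta G^{\circ} = -RT \ln K,
\quad
\ln K = \frac{\Delta S^{\circ}}{R} - \frac{\Delta H^{\circ}}{RT}
```

Here qe is the equilibrium uptake (mg/g), Ce the equilibrium concentration (mg/l), b and K_F isotherm constants, and K the equilibrium constant used in the van 't Hoff analysis.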

Keywords: biosorption, Spirogyra sp., contact time, pH, dose

Procedia PDF Downloads 418
36337 On Four Models of a Three Server Queue with Optional Server Vacations

Authors: Kailash C. Madan

Abstract:

We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady state behavior of the four models and obtain steady state probability generating functions for the queue size at a random point of time for all states of the system. For model D, a known result for a three-server queueing system without server vacations is derived.
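
For orientation, the no-vacation benchmark of model D presumably reduces to the classical M/M/3 queue, whose steady-state distribution, with arrival rate λ, service rate μ, a = λ/μ, and ρ = a/3 < 1, is:

```latex
p_0 = \left[\sum_{n=0}^{2} \frac{a^n}{n!} + \frac{a^3}{3!\,(1-\rho)}\right]^{-1},
\qquad
p_n = \frac{a^n}{n!}\, p_0 \;\; (n \le 3),
\qquad
p_n = \frac{a^n}{3!\, 3^{\,n-3}}\, p_0 \;\; (n > 3)
```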

Keywords: a three server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state

Procedia PDF Downloads 293
36336 Investigation on Hydration Mechanism of Eco-Friendly Concrete

Authors: Aliakbar Sayadi, Thomas Neitzert, Charles Clifton

Abstract:

The hydration process of a green concrete with varying fly ash and poly-lactic acid ratios was investigated using electrical resistivity measurement. The results show that the hydration process of the proposed concrete was significantly different from that of concrete containing petroleum aggregate. Moreover, a microstructure analysis corresponding to each hydration stage was conducted with a scanning microscope for the poly-lactic acid and expanded polystyrene concrete. In addition, specific equations using the variables of this study were developed to understand and predict the relationship between setting time and resistivity development for the proposed concrete containing eco-friendly aggregate.

Keywords: green concrete, SEM, hydration mechanism, eco-friendly aggregate

Procedia PDF Downloads 314
36335 Numerical Modal Analysis of a Multi-Material 3D-Printed Composite Bushing and Its Application

Authors: Paweł Żur, Alicja Żur, Andrzej Baier

Abstract:

Modal analysis is a crucial tool in the field of engineering for understanding the dynamic behavior of structures. In this study, numerical modal analysis was conducted on a multi-material 3D-printed composite bushing, which comprised a polylactic acid (PLA) outer shell and a thermoplastic polyurethane (TPU) flexible filling. The objective was to investigate the modal characteristics of the bushing and assess its potential for practical applications. The analysis involved the development of a finite element model of the bushing, which was subsequently subjected to modal analysis techniques. Natural frequencies, mode shapes, and damping ratios were determined to identify the dominant vibration modes and their corresponding responses. The numerical modal analysis provided valuable insights into the dynamic behavior of the bushing, enabling a comprehensive understanding of its structural integrity and performance. Furthermore, the study expanded its scope by investigating the entire shaft mounting of a small electric car, incorporating the 3D-printed composite bushing. The shaft mounting system was subjected to numerical modal analysis to evaluate its dynamic characteristics and potential vibrational issues. The results of the modal analysis highlighted the effectiveness of the 3D-printed composite bushing in minimizing vibrations and optimizing the performance of the shaft mounting system. The findings contribute to the broader field of composite material applications in automotive engineering and provide valuable insights for the design and optimization of similar components.
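
Numerical modal analysis of this kind rests on the undamped generalized eigenvalue problem assembled from the finite element stiffness matrix K and mass matrix M:

```latex
\left(K - \omega_i^2 M\right)\varphi_i = 0,
\qquad
f_i = \frac{\omega_i}{2\pi}
```

Here ωi and φi are the i-th natural circular frequency and mode shape, and fi the natural frequency in Hz; damping ratios are obtained from an additional damping model.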

Keywords: 3D printing, composite bushing, modal analysis, multi-material

Procedia PDF Downloads 92
36334 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site to improve the results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) subsurface processes/variables at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. Presently, high-performance, integrative software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for the successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 148
36333 Impact of Varying Malting and Fermentation Durations on Specific Chemical, Functional Properties, and Microstructural Behaviour of Pearl Millet and Sorghum Flour Using Response Surface Methodology

Authors: G. Olamiti, T. K. Takalani, D. Beswa, A. I. O. Jideani

Abstract:

The study investigated the effects of malting and fermentation times on selected chemical and functional properties and the microstructural behaviour of flours from the Agrigreen and Babala pearl millet cultivars and sorghum, using response surface methodology (RSM). A Central Composite Rotatable Design (CCRD) was performed on two independent variables, malting and fermentation times (h), at levels of 24, 48, and 72 h. The dependent parameters, such as pH, titratable acidity (TTA), water absorption capacity (WAC), oil absorption capacity (OAC), bulk density (BD), dispersibility, and the microstructural behaviour of the flours studied, showed significant differences (p < 0.05) with malting and fermentation time. Babala flour exhibited a higher pH value, 4.78, at 48 h malting and 81.94 h fermentation. Agrigreen flour showed a higher TTA value, 0.159%, at 81.94 h malting and 48 h fermentation. WAC was also higher in malted and fermented Babala flour, at 2.37 ml g⁻¹ for 81.94 h malting and 48 h fermentation. Sorghum flour exhibited the lowest OAC, 1.67 ml g⁻¹, at 14 h malting and 48 h fermentation. Agrigreen flour recorded the lowest bulk density, 0.53 g ml⁻¹, for 72 h malting and 24 h fermentation. Sorghum flour exhibited higher dispersibility, at 56.34%, after 24 h malting and 72 h fermentation. The response surface plots showed that increased malting and fermentation times influenced the dependent parameters. The microstructures of the malted and fermented pearl millet and sorghum flours showed isolated, oval, spherical, or polygonal granules with smooth surfaces. The optimal malting and fermentation times were 32.24 h and 63.32 h for Agrigreen, 35.18 h and 34.58 h for Babala, and 36.75 h and 47.88 h for sorghum, with a high desirability of 1.00. Validation at the optimum malting and fermentation times improved the experimental values of the dependent parameters. Food processing companies can use the study's findings to improve food processing and quality.
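
The CCRD response surfaces referred to above are conventionally fitted with a second-order polynomial of the coded variables x1 (malting time) and x2 (fermentation time); the paper's fitted coefficients are not reproduced here.

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \varepsilon
```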

Keywords: pearl millet, malting, fermentation, microstructural behaviour

Procedia PDF Downloads 60
36332 Environmental Effects on Energy Consumption of Smart Grid Consumers

Authors: S. M. Ali, A. Salam Khan, A. U. Khan, M. Tariq, M. S. Hussain, B. A. Abbasi, I. Hussain, U. Farid

Abstract:

Environment and surroundings play a pivotal role in structuring the lifestyle of consumers, and living standards in turn affect consumers' energy consumption. In the smart grid paradigm, climatic drifts, weather parameters, and the green environment relate directly to the energy profiles of the various consumers, such as residential, commercial, and industrial. Considering the above factors helps policy makers shape utility load curves and optimally manage demand and supply. Thus, there is a pressing need to develop correlation models between load and weather parameters and to critically analyze the factors affecting the energy profiles of smart grid consumers. In this paper, we elaborate on the various environmental and weather factors affecting consumer demand. Moreover, we develop correlation models, namely Pearson, Spearman, and Kendall, for the interrelation between the dependent (load) parameter and the independent (weather) parameters. Furthermore, we validate our discussion with real-time data from the state of Texas. The numerical simulations confirm the effective relation of climatic drifts with the energy consumption of smart grid consumers.
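
A minimal sketch of the correlation step with SciPy follows; the load and temperature series are made-up placeholders, not the Texas measurements.

```python
# Minimal sketch: Pearson, Spearman, and Kendall correlation between a
# weather parameter and load. The data below are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr, kendalltau

temperature = [18.0, 22.5, 27.0, 31.5, 35.0, 38.5]   # °C
load        = [2.1,  2.4,  3.0,  3.8,  4.9,  5.6]    # GW

for name, fn in [("Pearson", pearsonr), ("Spearman", spearmanr), ("Kendall", kendalltau)]:
    stat, p = fn(temperature, load)
    print(f"{name}: r = {stat:.3f}, p = {p:.3f}")
```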

Keywords: climatic drifts, correlation analysis, energy consumption, smart grid, weather parameter

Procedia PDF Downloads 365
36331 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications

Authors: Sung-Won Seo, Jang-Young Choi

Abstract:

The rare earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows direct drive without the need for a mechanical device. Therefore, the PMLSG is well suited to translational applications, such as wave energy converters and free piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher air-gap flux density than the vertical array, and its advantages in performance and efficiency are widely known. To verify the advantages of the Halbach array, we apply the finite element method (FEM) and an analytical method. In general, FEM and analytical methods are both used in electromagnetic analysis to determine model characteristics, and FEM is preferable for magnetic field analysis. However, FEM is often slow and inflexible; the analytical method, on the other hand, requires little time and produces an accurate analysis of the magnetic field. The air-gap flux density and the back-EMF can be obtained by FEM, and the results from the analytical method correspond well with the FEM results. The Halbach array model shows less copper loss than the vertical array model because of the Halbach array's high output power density. The vertical array model shows lower core loss than the Halbach array model because of its lower air-gap flux density; consequently, the current density in the vertical model is higher for identical power output. The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results, and a specific comparative analysis will be presented to determine the best model for application in a wave energy conversion system.

Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method

Procedia PDF Downloads 293
36330 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting

Authors: Abhijeet Ostawal, Parmjit Lall

Abstract:

The run times for a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs, and lost productivity. Software developers continuously work towards developing more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models that are based on much older software versions lacking the latest capabilities. Our multi-modal transport model could only be run with the assignment in sequential order. Recent upgrades to the software allow the assignment to be run in parallel, a concept called parallelization, implemented as a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget, and the potential changes in trip assignment. This article shows how to adapt and update the Python script so that it can be used with older software versions, by calling the latest version for the parallel assignment and then recalling the old version for the rest of the model, without affecting the results. Through a process of trial and error, run time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied the methodology to other, even older versions of the software, resulting in large time savings and greater productivity and efficiency for both client and consultant.
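
A minimal sketch of the version-switching idea is given below. The executable paths, script names, and batch interface are hypothetical placeholders; the actual transport-modelling software and its API are not named in the abstract.

```python
# Sketch: run the assignment step in the new software version (which
# supports parallelization), then hand control back to the validated old
# version for the remaining demand-model steps. All names are hypothetical.
import subprocess

NEW_EXE = r"C:\software\v23\model.exe"  # hypothetical: version with parallel assignment
OLD_EXE = r"C:\software\v15\model.exe"  # hypothetical: validated legacy version

def run(exe: str, script: str) -> None:
    """Run one model step and stop immediately if it fails."""
    subprocess.run([exe, script], check=True)

# 1) Parallel assignment in the new version.
run(NEW_EXE, "assignment_parallel.py")

# 2) Remaining steps in the old version, so reported results stay consistent.
run(OLD_EXE, "demand_model_remainder.py")
```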

Keywords: model run time, demand model, parallelisation, python scripting

Procedia PDF Downloads 111
36329 Analysis of Patient No-Shows According to Health Conditions

Authors: Sangbok Lee

Abstract:

There has been much effort on process improvement in outpatient clinics to provide quality acute care to patients. One such effort is no-show analysis and prediction. This work analyzes patient no-shows in relation to patient health conditions. The health conditions refer to the clinical symptoms each patient has, out of the following: hyperlipidemia, diabetes, metastatic solid tumor, dementia, chronic obstructive pulmonary disease, hypertension, coronary artery disease, myocardial infarction, congestive heart failure, atrial fibrillation, stroke, drug dependence/abuse, schizophrenia, major depression, and pain. A dataset from a regional hospital is used to find the relationship between the number of symptoms and no-show probabilities. Additional analysis reveals how each symptom, or combination of symptoms, affects no-shows. In these analyses, cross-classification of patients by age and gender is carried out. The findings will be used to take extra care of patients with particular health conditions, who will be urged to visit clinics by being informed more clearly about their health conditions and the possible consequences. Moreover, this work will be used in preparing institutional guidelines for patient reminder systems.

Keywords: healthcare system, no-show analysis, process improvement, statistical data analysis

Procedia PDF Downloads 228
36328 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective

Authors: S. Fantin

Abstract:

This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis, and policy initiatives. Recent doctrine was taken into account too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package that includes a regulation (the General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. The comparative analysis reveals several asymmetries. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the sanctions take two opposite approaches. Regarding the cybersecurity frameworks, the situation looks similarly misaligned. Japan's main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few instruments: the Network and Information Security Directive, the Critical Infrastructure Directive, and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene is neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be observed primarily by citizens. The reasons to fill such normative gaps rest mostly on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory treatment globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the transnational nature of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.

Keywords: cybersecurity, data protection, European Union, Japan

Procedia PDF Downloads 118
36327 The Value of Dynamic Priorities in Motor Learning between Some Basic Skills in Beginner's Basketball, U14 Years

Authors: Guebli Abdelkader, Regiueg Madani, Sbaa Bouabdellah

Abstract:

The goal of this study is to determine the value of dynamic priorities in motor learning between some basic skills in beginners' basketball (U14), based on the skills of shooting and defense against the shooter. We present the statistical results of comparisons and correlations between the study samples in skill tests for shooting and for defense against the shooter. In order to achieve this objective, we chose 40 middle school boys divided into four groups: two control groups (CS1, CS2) and two experimental groups (ES1, training on the shooting skill and then the skill of defense against the shooter; ES2, training on the skill of defense against the shooter and then the shooting skill). For the statistical analysis, we chose F and t tests for statistical differences and the correlation coefficient (r) for the correlation analysis. Based on the statistical analyses, we confirm the importance of classifying the priorities of basic basketball skills during the motor learning process. A benefit of the experimental groups' training is economy in the time needed for acquiring new motor skills in basketball, with the priority ordering of ES2 emerging as a successful dynamic motor learning method for enhancing basic skills among basketball beginners.

Keywords: basic skills, basketball, motor learning, children

Procedia PDF Downloads 162
36326 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements at nodes. In this process, the displacements calculated at nodes are sufficiently accurate, but the stresses calculated at nodes are not. The accuracy of stress computation in FEM models based on the displacement technique is therefore an obvious matter of concern, as is the computational time in shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, where good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element; some points have also been located on the boundary of the element, where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points in the element. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify the same with results from numerical experimentation. Results for the quadratic 9-noded serendipity element are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation: five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered the standard. Root mean square errors were then plotted with respect to various locations within the element as well as on the boundaries, and conclusions were drawn. The numerical verification confirms that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated on the boundary segment enclosed by the 0.577 points around the midpoints are also very good, with very small error; as sampling points move away from these points, the error in this zone increases rapidly. Thus, it is established that there are unique points, including on the boundary of the element, where stresses are accurate; these can be utilized in solving various engineering problems and are also useful in shape optimization.
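
It is worth noting that the reported coordinate 0.577 coincides with 1/√3 ≈ 0.57735, i.e., with the 2×2 Gauss-Legendre quadrature points, which are the classical optimal (Barlow) stress-sampling points for quadratic elements:

```latex
(\xi, \eta) = \left(\pm\tfrac{1}{\sqrt{3}}, \pm\tfrac{1}{\sqrt{3}}\right) \approx (\pm 0.577, \pm 0.577)
```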

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 102
36325 Cellular RNA-Binding Domains with Distant Homology in Viral Proteomes

Authors: German Hernandez-Alonso, Antonio Lazcano, Arturo Becerra

Abstract:

To this day, viruses remain controversial and poorly understood; their origin represents an enigma and one of the great challenges for contemporary biology. Three main theories have tried to explain the origin of viruses: regressive evolution, escaped host genes, and a pre-cellular origin. Under the perspective of the escaped host gene theory, a cellular origin can be assumed for viral components such as protein RNA-binding domains. These universally distributed RNA-binding domains are related to RNA metabolism processes, including transcription, processing and modification of transcripts, translation, RNA degradation, and its regulation. In viruses, these domains are present in important viral proteins such as helicases, nucleases, polymerases, capsid proteins, and regulation factors; they are therefore implicated in the replicative cycle and parasitic processes of viruses. That is why those domains can be expected to present low levels of divergence under selective pressure. For these reasons, the main goal of this project is to create a catalogue of the RNA-binding domains found in all available viral proteomes, using bioinformatics tools, in order to analyze their evolutionary process and thus shed light on virus evolution in general. The ProDom database was used to obtain more than six thousand RNA-binding domain families belonging to the three cellular domains of life and some viral groups. From the sequences of these families, protein profiles were created using HMMER 3.1 tools in order to find distant homologs within more than four thousand viral proteomes available in GenBank. Once the analysis was accomplished, almost three thousand hits were obtained in the viral proteomes. The homologous sequences were found in proteomes of the principal Baltimore viral groups, showing interesting distribution patterns that can contribute to understanding the evolution of viruses and their host-virus interactions. The presence of cellular RNA-binding domains within virus proteomes seems to be explained by close interactions between viruses and their hosts. Recruitment of these domains is advantageous for viral fitness, allowing viruses to adapt to the host cellular environment.

Keywords: bioinformatics tools, distant homology, RNA-binding domains, viral evolution

Procedia PDF Downloads 381
36324 Optimised Path Recommendation for a Real Time Process

Authors: Likewin Thomas, M. V. Manoj Kumar, B. Annappa

Abstract:

A traditional execution process follows the path of execution drawn by the process analyst, without observing resource behaviour and other real-time constraints. Identifying the process model, predicting resource behaviour, and recommending the optimal path of execution for a real-time process are challenging. The proposed AlfyMiner (αyMiner) gives a new dimension to process execution with two novel techniques, the Process Model Analyser (PMAMiner) and the Resource Behaviour Analyser (RBAMiner), for recommending the probable path of execution. PMAMiner discovers the next probable activity for the currently executing activity in an online process, using a variant matching technique to identify the set of candidate next activities, among which the next probable activity is selected using a decision tree model. RBAMiner identifies the resource suitable for performing the discovered next probable activity and observes its behaviour: load and performance using a polynomial regression model, and waiting time using queueing theory. Based on the observed behaviour, αyMiner recommends the probable path of execution, comprising the next probable activity and the resource best suited to perform it. Experiments were conducted on process logs of the CoSeLoG project; 72% accuracy was obtained in identifying and recommending the next probable activity, and the efficiency of resource performance was improved by 59% by decreasing resource load.

Keywords: cross-organization process mining, process behaviour, path of execution, polynomial regression model

Procedia PDF Downloads 326
36323 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in the manufacturing of these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (the tundish) acts as a buffer of molten steel for the solidification operation in the mold. It has been observed that, besides merely acting as a reservoir for the mold, the tundish also plays a crucial role in the quality and cleanliness of the steel produced: it facilitates the removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers such as dams, weirs, and turbo-pads. These flow modifiers also help reduce the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of the molten steel. It is thus important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying the positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of the molten steel within the tundish are obtained using OpenFOAM. Subsequently, Residence Time Distribution analysis is performed to obtain the percentage of dead volume in the tundish. The Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-way interactions of the independent variables (the positions and heights/depths of the dam and weir) is fitted using a multiple linear regression model to predict the dead volume in the tundish. This polynomial is then used in an optimization framework to obtain the tundish geometry that minimizes dead volume, using Sequential Quadratic Programming optimization.
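
A minimal sketch of the last step described above: minimizing a fitted second-degree response-surface polynomial for dead volume with SQP. The coefficient values and variable bounds below are hypothetical placeholders, not the paper's fitted model.

```python
# Sketch: minimize a quadratic response surface with SciPy's SLSQP
# (a sequential quadratic programming method). All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted model over x = (dam position, weir position,
# dam height, weir depth): dead_volume(x) = b0 + b.x + x'Qx
b0 = 12.0
b = np.array([-1.2, -0.8, -2.1, -1.5])
Q = np.array([[0.20, 0.05, 0.02, 0.00],
              [0.05, 0.15, 0.00, 0.03],
              [0.02, 0.00, 0.30, 0.04],
              [0.00, 0.03, 0.04, 0.25]])

def dead_volume(x: np.ndarray) -> float:
    return float(b0 + b @ x + x @ Q @ x)

bounds = [(0.0, 5.0)] * 4              # hypothetical geometric limits
x0 = np.array([2.5, 2.5, 2.5, 2.5])    # initial guess at mid-range

res = minimize(dead_volume, x0, method="SLSQP", bounds=bounds)
print(res.x, res.fun)  # optimal geometry parameters and predicted dead volume
```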

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 196
36322 Effect of Fat Percentage and Prebiotic Composition on Proteolysis, ACE-Inhibitory and Antioxidant Activity of Probiotic Yogurt

Authors: Mohammad B. HabibiNajafi, Saeideh Sadat Fatemizadeh, Maryam Tavakoli

Abstract:

In recent years, the consumption of functional foods, including foods containing probiotic bacteria, has attracted attention. Milk proteins have been identified as a source of angiotensin-I-converting enzyme (ACE) inhibitory peptides and are currently the best-known class of bioactive peptides. In this study, the effects of adding prebiotic ingredients (inulin and wheat fiber) and of fat percentage (0%, 2%, and 3.5%) in yogurt containing probiotic Lactobacillus casei on physicochemical properties, degree of proteolysis, and antioxidant and ACE-inhibitory activity within 21 days of storage at 5 ± 1 °C were evaluated. The statistical analysis showed that the application of prebiotic compounds led to a significant increase in the water holding capacity, proteolysis, and ACE-inhibitory activity of the samples. The degree of proteolysis in yogurt increases as storage time elapses (P < 0.05), but when proteolysis exceeds a certain threshold, this trend begins to decline. Also, during storage, water holding capacity decreased initially but increased thereafter. Moreover, based on our findings, the survival of Lactobacillus casei in samples treated with inulin and wheat fiber increased significantly in comparison to the control sample (P < 0.05), whereas the effect of fat percentage on the survival of the probiotic bacteria was not significant (P = 0.095). Furthermore, the effect of the prebiotic ingredients and the presence of probiotic cultures on the antioxidant activity of the samples was significant (P < 0.05).

Keywords: probiotic yogurt, proteolysis, ACE-inhibitory, antioxidant activity

Procedia PDF Downloads 246