Search results for: time series modeling.
5053 Investigation on Pore Water Pressure in Core of Karkheh Dam
Authors: Bahar Razavi, Mansour Parehkar, Ali Gholami
Abstract:
Pore water pressure normally develops because of consolidation, compaction, and water level fluctuation in the reservoir. Measuring, controlling, and analyzing pore water pressure are of significant importance during both the construction and operation periods. Since the end of 2002 (dam start-up), the behaviour of the Karkheh dam has been analyzed using the information gathered from the dam's instrumentation system. In this paper, the condition of the dam after start-up is analyzed using the data gathered from the piezometers located in the dam core. According to the Terzaghi equation and the piezometer records, consolidation lasted around five years during the early years of the construction stage, and the current pore water pressure in the dam core is caused by water level fluctuation in the reservoir, although there is a time lag between the water level fluctuation and the piezometer readings. These time lags have been examined, and the results clearly show that one of their most important causes is the distance between the piezometer and the reservoir.
Keywords: Earth dam, reservoir, piezometer, Terzaghi, consolidation.
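For reference, the classical Terzaghi one-dimensional consolidation equation mentioned in the abstract above, in standard textbook notation (generic symbols, not reproduced from the paper):

$$\frac{\partial u}{\partial t} = c_v \, \frac{\partial^2 u}{\partial z^2}$$

where u is the excess pore water pressure, c_v the coefficient of consolidation, z the depth within the core, and t time; consolidation is considered complete when the excess pore pressure has essentially dissipated.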
5052 Removal of Elemental Mercury from Dry Methane Gas with Manganese Oxides
Authors: Junya Takenami, Md. Azhar Uddin, Eiji Sasaoka, Yasushi Shioya, Tsuneyoshi Takase
Abstract:
In this study, we sought to investigate the mercury removal efficiency of manganese oxides from natural gas. Fundamental studies on mercury removal with manganese oxide sorbents were carried out in a laboratory-scale fixed bed reactor at 30 °C with a mixture of methane (20%) and nitrogen gas laden with 4.8 ppb of elemental mercury. Manganese oxides with varying surface area and crystalline phase were prepared by a conventional precipitation method. The effects of surface area, crystallinity, and other metal oxides on mercury removal efficiency were investigated, as was the effect of Ag impregnation. Ag supported on metal oxides such as titania and zirconia was also used as a reference material for comparison. The characteristics of the mercury removal reaction with manganese oxide were investigated using a temperature programmed desorption (TPD) technique. Manganese oxides showed very high Hg removal activity (about 73-93% Hg removal) on first-time use. The surface area of the manganese oxide samples decreased after heat treatment, which resulted in a complete loss of Hg removal ability on repeated use after Hg desorption in the case of amorphous MnO2, and a 75% loss of the initial Hg removal activity for the crystalline MnO2. The mercury desorption efficiency of crystalline MnO2 was very low (37%) on first-time use and high (98%) after second-time use. Residual potassium content in MnO2 may have some effect on the thermal stability of the adsorbed Hg species. Desorption of Hg from manganese oxides occurs at much higher temperatures (with a peak at 400 °C) than from Ag/TiO2 or Ag/ZrO2. Mercury may be captured on manganese oxides in the form of mercury manganese oxide.
Keywords: Mercury removal, metal and metal oxide sorbents, methane, natural gas.
5051 Modelling of Heating and Evaporation of Biodiesel Fuel Droplets
Authors: Mansour Al Qubeissi, Sergei S. Sazhin, Cyril Crua, Morgan R. Heikal
Abstract:
This paper presents the application of the Discrete Component Model for heating and evaporation to multi-component biodiesel fuel droplets in direct injection internal combustion engines. This model takes into account the effects of temperature gradient, recirculation and species diffusion inside droplets. A distinctive feature of the model used in the analysis is that it is based on the analytical solutions to the temperature and species diffusion equations inside the droplets. Nineteen types of biodiesel fuels are considered. It is shown that a simplistic model, based on the approximation of biodiesel fuel by a single component or ignoring the diffusion of components of biodiesel fuel, leads to noticeable errors in predicted droplet evaporation time and time evolution of droplet surface temperature and radius.
Keywords: Heat/Mass Transfer, Biodiesel, Multi-component Fuel, Droplet, Evaporation.
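As background to the analytical solutions mentioned in the abstract above, the transient heat conduction and species diffusion equations inside a spherically symmetric droplet take the standard forms below (generic textbook notation, not reproduced from the paper):

$$\frac{\partial T}{\partial t} = \kappa\left(\frac{\partial^2 T}{\partial R^2} + \frac{2}{R}\frac{\partial T}{\partial R}\right), \qquad \frac{\partial Y_i}{\partial t} = D_l\left(\frac{\partial^2 Y_i}{\partial R^2} + \frac{2}{R}\frac{\partial Y_i}{\partial R}\right)$$

where T is the droplet temperature, Y_i the mass fraction of species i, R the radial coordinate, κ the liquid thermal diffusivity, and D_l the liquid mass diffusivity.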
5050 Stochastic Modeling and Combined Spatial Pattern Analysis of Epidemic Spreading
Authors: S. Chadsuthi, W. Triampo, C. Modchang, P. Kanthang, D. Triampo, N. Nuttavut
Abstract:
We present an analysis of spatial patterns of generic disease spread simulated by a stochastic long-range correlation SIR model, in which individuals can be infected at long distance according to a power law distribution. We integrated various tools, namely perimeter, circularity, fractal dimension, and aggregation index, to characterize and investigate spatial pattern formation. Our primary goal was to understand, for a given model of interest, which tool has an advantage over the others and to what extent. We found that perimeter and circularity give information only in the case of strong correlation, while the fractal dimension and aggregation index exhibit the growth rule of pattern formation, depending on the degree of the correlation exponent (β). The aggregation index is used as an alternative method to describe the degree of the pathogenic ratio (α). This study may provide a useful approach to characterizing and analyzing the pattern formation of epidemic spreading.
Keywords: Spatial pattern epidemics, aggregation index, fractal dimension, stochastic, long-range epidemics.
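A minimal sketch of this kind of lattice SIR dynamics with power-law long-range infection, in Python (the lattice size, infection rule, and all parameter values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Stochastic SIR on a lattice where an infected site can infect a susceptible
# site at distance r drawn from a power-law distribution (illustrative sketch).

rng = np.random.default_rng(0)

L = 64                    # lattice size (assumption)
beta = 2.0                # power-law correlation exponent
p0 = 0.3                  # infection probability per contact (assumption)
recovery_prob = 0.1       # probability an infected site recovers per step

S, I, R = 0, 1, 2
state = np.zeros((L, L), dtype=int)
state[L // 2, L // 2] = I  # seed one infected individual in the centre

def step(state):
    new_state = state.copy()
    infected = np.argwhere(state == I)
    for (x, y) in infected:
        # a fixed number of long-range infection attempts per infected site
        for _ in range(4):
            r = rng.pareto(beta) + 1.0           # power-law distributed jump length
            theta = rng.uniform(0.0, 2 * np.pi)
            tx = int(x + r * np.cos(theta)) % L  # periodic boundaries
            ty = int(y + r * np.sin(theta)) % L
            if state[tx, ty] == S and rng.random() < p0:
                new_state[tx, ty] = I
        if rng.random() < recovery_prob:
            new_state[x, y] = R
    return new_state

for t in range(50):
    state = step(state)

print("susceptible:", np.sum(state == S),
      "infected:", np.sum(state == I),
      "recovered:", np.sum(state == R))
```

The resulting S/I/R maps are the kind of spatial patterns that perimeter, circularity, fractal dimension, and aggregation index would then be computed on.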
5049 Study of Mechanical Properties of Glutarylated Jute Fiber Reinforced Epoxy Composites
Authors: V. Manush Nandan, K. Lokdeep, R. Vimal, K. Hari Hara Subramanyan, C. Aswin, V. Logeswaran
Abstract:
Natural fibers have gained a potential market in the composite industry because of the huge environmental impact caused by synthetic fibers. Among natural fibers, jute fibers are the most abundant plant fibers and are manufactured mainly in countries like India. Even though there is a good motive to utilize this natural supplement, the strength of natural fiber composites is still a topic of discussion. In recent years, many researchers have shown interest in the chemical modification of natural fibers to improve various mechanical and thermal properties. In the present study, jute fibers have been modified chemically using glutaric anhydride at concentrations of 5%, 10%, 20%, and 30%. The glutaric anhydride solution is prepared by dissolving different quantities of glutaric anhydride in benzene and dimethyl sulfoxide using a sodium formate catalyst. The jute fiber mats have been treated by the method of retting at time intervals of 3, 6, 12, 24, and 36 hours. The modified structure of the treated fibers has been confirmed with infrared spectroscopy. The degree of modification increases with an increase in retention time, but higher retention times damaged the fiber structure. The unmodified fibers and the glutarylated fibers at different retention times were reinforced with an epoxy matrix at room temperature. The tensile strength and flexural strength of the composites are analyzed in detail. Among these, the composites made with glutarylated fiber showed better mechanical properties than those made with unmodified fiber.
Keywords: Flexural properties, glutarylation, glutaric anhydride, tensile properties.
5048 Development of Mathematical Model for Overall Oxygen Transfer Coefficient of an Aerator and Comparison with CFD Modeling
Authors: Shashank.B. Thakre, L.B. Bhuyar, Samir.J. Deshmukh
Abstract:
The value of the overall oxygen transfer coefficient (KLa), which is the best measure of oxygen transfer into water through aeration, is obtained by a simple approach, which sufficiently demonstrates the utility of the method in eliminating the discrepancies due to inaccurate assumptions of the saturation dissolved oxygen concentration. The rate of oxygen transfer depends on a number of factors, such as the intensity of turbulence, which in turn depends on the speed of rotation, the size and number of blades, the diameter and immersion depth of the rotor, and the size and shape of the aeration tank, as well as on the physical, chemical, and biological characteristics of the water. An attempt is made in this paper to correlate the overall oxygen transfer coefficient (KLa) with the influencing parameters mentioned above. It is estimated that the simulation equation developed predicts the values of KLa and power with average standard errors of estimation of 0.0164 and 7.66, respectively, and with R2 values of 0.979 and 0.989, respectively, when compared with experimentally determined values. This model is also compared with a model generated using computational fluid dynamics (CFD), and the two models were found to be in good agreement with each other.
Keywords: CFD model, overall oxygen transfer coefficient, power, mathematical model, validation.
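For context, KLa is commonly determined from the standard unsteady-state re-aeration relation (textbook form, not necessarily the exact formulation used in the paper):

$$\frac{dC}{dt} = K_L a\,(C_s - C) \quad\Longrightarrow\quad \ln\!\left(\frac{C_s - C_0}{C_s - C_t}\right) = K_L a \, t$$

where C is the dissolved oxygen concentration at time t, C_0 its initial value, and C_s the saturation concentration; the slope of the logarithmic term plotted against time gives KLa.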
5047 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
Authors: S. K. Pillai, M. K. Jeyakumar
Abstract:
Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose. The known target (actual) values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. In order to compare different models developed from the same data, different criteria are used. Data obtained from small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new evaluation criterion based on linear least squares and compare the results for one and two predictors. We also considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.
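A minimal sketch of the overall idea in Python: a Gaussian RBF network with one predictor, evaluated by a least-squares fit of predicted against actual effort. The data, the network settings, and the evaluation details are illustrative assumptions, since the abstract does not specify the authors' exact criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "historical project" data: size -> effort (illustrative, not real data)
size = rng.uniform(1, 50, 40)
effort = 2.5 * size + 5 * np.sin(size / 5) + rng.normal(0, 2, size.shape)

# Gaussian RBF design matrix with fixed centres and width
centres = np.linspace(size.min(), size.max(), 8)
width = (size.max() - size.min()) / 8

def rbf_features(x):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

Phi = rbf_features(size)
weights, *_ = np.linalg.lstsq(Phi, effort, rcond=None)   # train output weights
predicted = rbf_features(size) @ weights

# evaluation: least-squares line through (actual, predicted); an ideal model
# gives slope ~ 1, intercept ~ 0, and a small residual sum of squares
A = np.vstack([effort, np.ones_like(effort)]).T
(slope, intercept), residual, *_ = np.linalg.lstsq(A, predicted, rcond=None)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, residual={residual[0]:.3f}")
```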
5046 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain
Authors: Bita Payami-Shabestari, Dariush Eslami
Abstract:
The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, and the three methods are compared in terms of the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and a multi-attribute decision-making (MADM) method. The results of the study demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is carried out to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.
5045 A Novel Framework for Abnormal Behaviour Identification and Detection for Wireless Sensor Networks
Authors: Muhammad R. Ahmed, Xu Huang, Dharmendra Sharma
Abstract:
Despite extensive study of wireless sensor network security, defending against internal attacks and identifying abnormal behaviour of sensors remain difficult and unsolved tasks. Conventional cryptographic techniques do not provide a robust security or detection process to protect the network from an internal attacker acting through abnormal behaviour. A framework for identifying an insider attacker or abnormally behaving sensor and detecting its location, using false message detection and Time Difference of Arrival (TDoA), is presented in this paper. It is shown that the new framework can efficiently identify the insider attacker and detect its location, so that the attacker can be reprogrammed or removed from the network to protect it from internal attack.
Keywords: Insider attacker identification, abnormal behaviour, location detection, Time Difference of Arrival (TDoA), wireless sensor network.
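A minimal sketch of generic TDoA-based localization in Python, using SciPy's nonlinear least squares (anchor positions, noise level, and propagation speed are illustrative assumptions, not the authors' framework):

```python
import numpy as np
from scipy.optimize import least_squares

# Given sensor anchor positions and arrival-time differences measured relative
# to a reference anchor, estimate the source (attacker) position.

c = 3e8                                   # propagation speed (m/s)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_source = np.array([18.0, 31.0])

dists = np.linalg.norm(anchors - true_source, axis=1)
tdoa = (dists - dists[0]) / c             # time differences w.r.t. anchor 0
tdoa += np.random.default_rng(2).normal(0, 1e-10, tdoa.shape)  # measurement noise

def residuals(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return (d - d[0]) / c - tdoa

est = least_squares(residuals, x0=np.array([25.0, 25.0])).x
print("estimated source position:", est)
```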
5044 Slip Effect Study of 4:1 Contraction Flow for Oldroyd-B Model
Authors: N. Thongjub, B. Puangkird, V. Ngamaramvaranggul
Abstract:
The numerical simulation of the slip effect in viscoelastic fluid flow for the 4:1 contraction problem is investigated with regard to the kinematic behaviour of the streamlines and the stress tensor, using the Navier-Stokes and Oldroyd-B equations. A two-dimensional spatial reference system of incompressible creeping flow, with and without slip velocity, is considered, and the semi-implicit Taylor-Galerkin pressure-correction finite element method is applied to compute the problem in this Cartesian coordinate system, including a velocity gradient recovery scheme and the streamline-upwind/Petrov-Galerkin procedure. The slip effect at the channel wall is computed after each time step in order to capture the alteration of the flow path. The stress values and the vortices are reduced at the optimum slip coefficient of 0.1, with results close to the analytical solution.
Keywords: Slip effect, Oldroyd-B fluid, slip coefficient, time stepping method.
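For reference, the standard Oldroyd-B constitutive equation and a linear (Navier-type) slip law take the forms below (generic textbook notation; the paper's exact formulation and slip model may differ):

$$\boldsymbol{\tau} + \lambda_1 \overset{\nabla}{\boldsymbol{\tau}} = 2\eta_0\left(\mathbf{D} + \lambda_2 \overset{\nabla}{\mathbf{D}}\right), \qquad u_{\mathrm{slip}} = \beta\,\tau_w$$

where τ is the extra-stress tensor, D the rate-of-deformation tensor, λ1 and λ2 the relaxation and retardation times, η0 the zero-shear-rate viscosity, the superscript ∇ denotes the upper-convected derivative, β is the slip coefficient, and τ_w the wall shear stress.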
5043 Reliability Modeling and Data Analysis of Vacuum Circuit Breaker Subject to Random Shocks
Authors: Rafik Medjoudj, Rabah Medjoudj, D. Aissani
Abstract:
Electrical substation components are often subject to degradation due to over-voltage or over-current caused by a short circuit or lightning. Particular interest is given to the circuit breaker, owing to the importance of its function and the danger of its failure. This component degrades gradually with use, and it is also subject to the shock process resulting from the stress of isolating the fault when a short circuit occurs in the system. In this paper, based on failure mechanism developments, the wear-out of the circuit breaker contacts is modeled. The aim of this work is to evaluate its reliability and consequently its residual lifetime. The shock process is based on two random variables: the arrival of shocks and their magnitudes. The arrival of shocks is modeled using a homogeneous Poisson process (HPP). By simulation, the dates of short-circuit arrivals were generated together with their magnitudes, and the same principle of simulation is applied to the amount of cumulative contact wear. The objective is to find the formulation of the wear function as a function of the number of solicitations of the circuit breaker.
Keywords: reliability, short-circuit, models of shocks.
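A minimal sketch of this kind of shock-and-wear simulation in Python (HPP arrivals with random shock magnitudes and cumulative wear; all rates, distributions, and thresholds are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)

rate = 4.0          # mean number of short circuits per year (assumption)
horizon = 30.0      # simulation horizon in years
wear_limit = 100.0  # cumulative wear at which the contacts are worn out (assumption)

# HPP arrival dates: exponential inter-arrival times with mean 1/rate
inter_arrivals = rng.exponential(1.0 / rate, size=int(rate * horizon * 2))
arrival_dates = np.cumsum(inter_arrivals)
arrival_dates = arrival_dates[arrival_dates <= horizon]

# shock magnitudes and the wear increment they cause (assumed lognormal / linear)
magnitudes = rng.lognormal(mean=0.0, sigma=0.5, size=arrival_dates.size)
wear = np.cumsum(0.8 * magnitudes)

failed = np.argmax(wear >= wear_limit) if np.any(wear >= wear_limit) else None
if failed is None:
    print(f"contacts survive the {horizon:.0f}-year horizon, wear = {wear[-1]:.1f}")
else:
    print(f"wear-out reached after {arrival_dates[failed]:.1f} years "
          f"({failed + 1} shocks)")
```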
5042 A Review on Cloud Computing and Internet of Things
Authors: Sahar S. Tabrizi, Dogan Ibrahim
Abstract:
Cloud Computing is a convenient model for on-demand networks that uses shared pools of virtual configurable computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment for companies and organizations to use infrastructure resources without making any purchases and they can access such resources wherever and whenever they need. Cloud computing is useful to overcome a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), Scientific Research, e-Governance Systems, Decision Support Systems, ERP, Web Application Development, Mobile Technology, etc. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth and also at any time. Such services are rented by the client companies where the actual rent depends upon the amount of data stored on the cloud and also the amount of processing power used in a given time period. The resources offered by the cloud service companies are flexible in the sense that the user companies can increase or decrease their storage requirements or the processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, the Cloud Computing service providers offer fast processors and applications software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of the Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Keywords: Cloud computing, cloud services, IaaS, PaaS, SaaS, IoT.
5041 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time of day and season. Image segmentation using deep learning, which has evolved rapidly in recent years, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the lack of information caused by fixing the number of channels is resolved by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on the memory access overhead; thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
Keywords: Edge network, embedded network, MMA, matrix multiplication accelerator and semantic segmentation network.
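A minimal sketch of the arithmetic-cost side of the convolution choice discussed above: multiply-accumulate (MAC) counts for standard versus depthwise separable convolution. The layer sizes are illustrative, and the memory access overhead, which the paper says can tip the balance on a fixed-size MMA, is not modeled here.

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k        # one k x k filter per input channel
    pointwise = h * w * c_in * c_out        # 1 x 1 convolution to mix channels
    return depthwise + pointwise

# example activation: 80 x 60 map, 64 -> 64 channels, 3 x 3 kernel (illustrative)
std = standard_conv_macs(80, 60, 64, 64, 3)
dws = depthwise_separable_macs(80, 60, 64, 64, 3)
print(f"standard: {std:,} MACs, depthwise separable: {dws:,} MACs "
      f"(ratio {std / dws:.1f}x)")
```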
5040 System Reliability by Prediction of Generator Output and Losses in a Competitive Energy Market
Authors: Perumal Nallagownden, Ravindra N. Mukerjee, Syafrudin Masri
Abstract:
In a competitive energy market, system reliability should be maintained at all times. Since power system operation is online in nature, the energy balance requirements must be satisfied to ensure reliable operation of the system. To achieve this, information regarding the expected status of the system, the scheduled transactions, and the relevant inputs necessary to make either a transaction contract or a transmission contract operational has to be made available in real time. The real-time procedure proposed here facilitates this. This paper proposes a quadratic curve learning procedure that enables a generator's contribution to the retailer demand, the power loss of a transaction in a line at the retail end, and its associated losses for an oncoming operating scenario to be predicted. A MATLAB program was used to test it on the 24-bus IEEE Reliability Test System, and the results are found to be acceptable.
Keywords: Deregulation, learning coefficients, reliability, prediction, competitive energy market.
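A minimal sketch of the quadratic-curve-learning idea in Python: fit a quadratic curve to observed operating points and use it for prediction. The variables and data are illustrative assumptions, not the paper's system quantities.

```python
import numpy as np

rng = np.random.default_rng(4)

demand = np.linspace(100.0, 300.0, 25)                       # MW at the retail end
contribution = 0.002 * demand**2 + 0.3 * demand + 10.0       # underlying relation
contribution += rng.normal(0.0, 2.0, demand.shape)           # measurement noise

coeffs = np.polyfit(demand, contribution, deg=2)             # learn quadratic coefficients
predict = np.poly1d(coeffs)

upcoming_demand = 260.0
print(f"predicted contribution at {upcoming_demand:.0f} MW: "
      f"{predict(upcoming_demand):.1f} MW")
```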
5039 Hazard Rate Estimation of Temporal Point Process, Case Study: Earthquake Hazard Rate in Nusatenggara Region
Authors: Sunusi N., Kresna A. J., Islamiyati A., Raupong
Abstract:
Hazard rate estimation is one of the important topics in forecasting earthquake occurrence. Forecasting earthquake occurrence is a part of statistical seismology, where the main subject is the point process. Generally, the earthquake hazard rate is estimated based on the point process likelihood equation, called the Hazard Rate Likelihood of Point Process (HRLPP). In this research, we have developed an estimation method, the hazard rate single decrement (HRSD) method, adapted from estimation methods used in actuarial studies. Here, an individual is associated with an earthquake whose inter-event time is exponentially distributed. Information on the epicenter and the time of earthquake occurrence is used to estimate the hazard rate. Finally, a case study of earthquake hazard rate is given, and the hazard rates obtained with the HRLPP and HRSD methods are compared.
Keywords: Earthquake forecast, hazard rate, likelihood point process, point process.
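For reference, under the exponential inter-event-time assumption stated above, the hazard rate is constant and has a simple maximum likelihood estimate (standard result, in generic notation):

$$h(t) = \frac{f(t)}{S(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda, \qquad \hat{\lambda} = \frac{n}{\sum_{i=1}^{n} t_i}$$

where t_1, …, t_n are the observed inter-event times.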
5038 Using FEM for Prediction of Thermal Post-Buckling Behavior of Thin Plates During Welding Process
Authors: Amin Esmaeilzadeh, Mohammad Sadeghi, Farhad Kolahan
Abstract:
Arc welding is an important joining process widely used in many industrial applications, including the production of automobiles, ship structures, and metal tanks. In the welding process, the moving electrode causes a highly non-uniform temperature distribution that leads to residual stresses and various deviations, especially buckling distortions in thin plates. In order to control these deviations and increase the quality of welded plates, a fixture can be used as a practical and low-cost method with high efficiency. In this study, a coupled thermo-mechanical finite element model is coded in the software ANSYS to simulate the behaviour of thin plates located by a 3-2-1 positioning system during the welding process. Computational results are compared with recent similar works to validate the finite element models. The agreement between the results of the proposed model and other reported data shows that finite element modelling can accurately predict the behaviour of welded thin plates.
Keywords: Welding, thin plate, buckling distortion, fixture locators, finite element modelling.
5037 Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition
Authors: H K Lakshminarayana, J S Bhat, H M Mahesh
Abstract:
A new estimator for the evolutionary spectrum (ES), based on the short time Fourier transform (STFT) and the modified group delay function (MGDF) with signal decomposition (SD), is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window while reducing the Gibbs ripple. The present work overcomes the magnitude distortion observed in multi-component non-stationary signals when the ES is estimated with the STFT and MGDF, by using SD. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for crossing-chirp and frequency shift keying (FSK) signals, which indicates that it performs better than STFT-MGDF (STFT-GD) alone. Furthermore, its noise immunity is better than that of the STFT. The SD-based methods, however, cannot bring out the frequency transition path from band to band clearly, as there is a gap in the contour plot at the transition. The PRFB-based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Keywords: Evolutionary spectrum, modified group delay, discrete cosine transform, harmonic wavelet transform, perfect reconstruction filter banks, short time Fourier transform.
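A minimal sketch of the STFT stage alone in Python, applied to a two-component crossing-chirp test signal of the kind mentioned above (the modified group delay and signal decomposition stages are not reproduced; all signal parameters are illustrative assumptions):

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                          # sampling frequency in Hz (assumption)
t = np.arange(0, 2.0, 1.0 / fs)
up_chirp = np.cos(2 * np.pi * (50 * t + 50 * t**2))     # 50 Hz -> 250 Hz
down_chirp = np.cos(2 * np.pi * (250 * t - 50 * t**2))  # 250 Hz -> 50 Hz
x = up_chirp + down_chirp + 0.1 * np.random.default_rng(5).normal(size=t.size)

f, tau, Zxx = stft(x, fs=fs, nperseg=128, noverlap=96)
spectrogram = np.abs(Zxx) ** 2       # evolutionary-spectrum-like time-frequency estimate

# frequency of the strongest component in each analysis frame
peak_freqs = f[np.argmax(spectrogram, axis=0)]
print("peak frequency in first/last frame:", peak_freqs[0], peak_freqs[-1], "Hz")
```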
5036 Optimization for Subcritical Water Extraction of Phenolic Compounds from Rambutan Peels
Authors: Nuttawan Yoswathana, M. N. Eshtiaghi
Abstract:
Rambutan is a tropical fruit whose peel possesses antioxidant properties. This work was conducted to optimize the extraction conditions of phenolic compounds from rambutan peel. Response surface methodology (RSM) was adopted to optimize subcritical water extraction (SWE) over temperature, extraction time, and percent solvent mixture. The results demonstrated that the optimum conditions for SWE were as follows: temperature 160 °C, extraction time 20 min, and a concentration of 50% ethanol. A comparison of the phenolic compounds from rambutan peels obtained by maceration (6 h), Soxhlet extraction (4 h), and SWE (20 min) indicated that the total phenolic content (using the Folin-Ciocalteu phenol reagent) was 26.42, 70.29, and 172.47 mg of tannic acid equivalent (TAE) per g of dry rambutan peel, respectively. The comparative study concluded that SWE is a promising technique for the extraction of phenolic compounds from rambutan peel, owing to yields more than twice those of the conventional techniques and shorter extraction times.
Keywords: Subcritical water extraction, Rambutan peel, phenolic compounds, response surface methodology
5035 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System
Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya
Abstract:
The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains, and earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events in the incoming three-axis accelerometer signal at the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: for test data sampled at 10 Hz the detection time was around 2 seconds, which reduced to 0.3 seconds for data sampled at 100 Hz.
Keywords: Earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector.
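A minimal sketch of the classic STA/LTA trigger named in the keywords, in Python (window lengths, threshold, and the synthetic record are illustrative assumptions; the paper's optimized detector is more elaborate):

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=10.0):
    """Return the STA/LTA ratio of the squared signal (characteristic function)."""
    cf = signal.astype(float) ** 2
    n_sta = max(1, int(sta_win * fs))
    n_lta = max(1, int(lta_win * fs))
    sta = np.convolve(cf, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(cf, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)

# synthetic record: background noise followed by a stronger "P-wave" arrival
fs = 100.0
rng = np.random.default_rng(6)
noise = rng.normal(0.0, 1.0, int(60 * fs))
p_wave = 8.0 * rng.normal(0.0, 1.0, int(5 * fs))
record = np.concatenate([noise, p_wave, noise])

ratio = sta_lta(record, fs)
threshold = 4.0
onset_index = np.argmax(ratio > threshold)
print(f"trigger at t = {onset_index / fs:.2f} s (true onset at 60.00 s)")
```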
5034 Numerical Investigation on Latent Heat Storage Unit of Different Configurations
Authors: Manish K Rathod, Jyotirmay Banerjee
Abstract:
The storage of thermal energy as the latent heat of a phase change material (PCM) has attracted considerable interest among researchers in recent times. Here, an attempt is made to carry out numerical investigations to analyze the performance of latent heat storage units (LHSU) employing a phase change material. The mathematical model developed is based on an enthalpy formulation. The freezing time of PCM packed in three differently shaped containers, viz. rectangular, cylindrical, and cylindrical shell, is compared. The model is validated against results available in the literature. The results show that, for the same mass of PCM and heat transfer surface area, the cylindrical shell container takes the least time to freeze the PCM, and this geometric effect is more pronounced with an increase in the thickness of the shell than with an increase in its length.
Keywords: Enthalpy formulation, latent heat storage unit (LHSU), numerical model, phase change material (PCM).
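For reference, a common conduction-only form of the enthalpy formulation for phase change reads as follows (generic textbook statement, not necessarily the exact model equations of the paper):

$$\frac{\partial (\rho H)}{\partial t} = \nabla \cdot (k \nabla T), \qquad H = \int_{T_{\mathrm{ref}}}^{T} c_p \, dT + f_l \, L$$

where H is the total enthalpy, f_l ∈ [0, 1] the local liquid fraction, and L the latent heat of fusion; freezing of a control volume is complete when f_l drops to zero.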
5033 Stress Analysis of Spider Gear Using Structural Steel on ANSYS
Authors: Roman Kalvin, Anam Nadeem, Shahab Khushnood
Abstract:
The differential is an integral part of a four-wheeled vehicle, and its main function is to transmit power from the drive shaft to the wheels. The differential assembly allows the two rear wheels to turn at different speeds along curved paths. It consists of four gears assembled together, namely the pinion, ring, spider, and bevel gears. This research focuses on the spider gear and its static structural analysis using ANSYS. The main aim was to evaluate the distribution of stresses on the teeth of the spider gear. This study also analyzed the total deformation that may occur during its operation, along with the bevel gear that is meshed with the spider gear. Structural steel was chosen for the spider gear in this research. Modeling and assembly were done in SolidWorks for both the spider and bevel gears, assembled exactly as in a differential assembly. This assembly was then imported into ANSYS. After observing from the results that the maximum stress and deformation were produced in the spider gear, it was concluded that structural steel possesses sufficient strength for the spider gear to bear the maximum stress.
Keywords: Differential, spider gear, ANSYS, structural steel.
5032 Some Mechanical Properties of Cement Stabilized Malaysian Soft Clay
Authors: Meei-Hoan Ho, Chee-Ming Chan
Abstract:
Soft clays are defined as cohesive soils whose water content is higher than their liquid limit. Soil-cement mixing is therefore adopted to improve the ground conditions by enhancing the strength and deformation characteristics of the soft clays. For the above-mentioned reasons, a series of laboratory tests was carried out to study some fundamental mechanical properties of cement stabilized soft clay. The test specimens were prepared by varying the proportion of ordinary Portland cement added to the soft clay sample retrieved from the test site of RECESS (Research Centre for Soft Soil). Comparisons were made between homogeneous and columnar system specimens by relating the effects of cement contents of 0, 5, and 10% and curing periods of 3, 28, and 56 days. The mechanical properties examined included one-dimensional compressibility and undrained shear strength, and both homogeneous and columnar system specimens were prepared to examine the effect of the different cement contents and curing periods on the stabilized soil. The one-dimensional compressibility test was conducted using an oedometer, while a direct shear box was used to measure the undrained shear strength. The higher the cement content, the greater the enhancement of the yield stress and the decrease of the compression index. The cement content of a specimen is a more influential parameter than the curing period.
Keywords: Soft soil, oedometer, direct shear box, cement-stabilised column.
5031 Investigation of the Space in Response to the Conditions Caused by the Pandemics and Presenting Five-Scale Design Guidelines to Adapt and Prepare to Face the Pandemics
Authors: Sara Ramezanzadeh, Nashid Nabian
Abstract:
Historically, pandemics in different periods have forced changes in human life. In the case of COVID-19, given the limitations and the established care instructions, aligning space with these conditions is important. Following the outbreak of COVID-19, the question raised in this study is how to carry out spatial design at five scales, namely object, space, architecture, city, and infrastructure, in response to the consequences created in the realms under study. From the beginning of the pandemic until now, some changes in the spatial realm have been created spontaneously or by space users. These transformations have mostly been applied to modifiable elements such as furniture arrangement, especially in work-related spaces. To implement other, more comprehensive requirements, flexibility and adaptation of the space design to the conditions resulting from pandemics are needed during and after an outbreak. Studying the effects of pandemics from the past to the present, this research covers eight major realms, comprising three categories of ramifications, solutions, and paradigm shifts, and draws analytical conclusions about the solutions that have been created in response to them. Finally, considering epidemiology as a modern discipline influencing design, spatial solutions at the five scales mentioned, in response to the effects of the eight realms, for spatial adaptation in the face of pandemics and their ensuing conditions, are presented as a series of guidelines. Due to the unpredictability of possible future pandemics, the possibility of changing and updating the provided guidelines is considered.
Keywords: Pandemics, COVID-19, spatial design, ramifications, paradigm shifts, guidelines.
5030 Design and Analysis of a Piezoelectric-Based AC Current Measuring Sensor
Authors: Easa Ali Abbasi, Akbar Allahverdizadeh, Reza Jahangiri, Behnam Dadashzadeh
Abstract:
Electrical current measurement is a suitable method for determining the performance of electrical devices. There are two approaches to this measurement: contact and noncontact. The contact method has some disadvantages, such as requiring a direct connection to the wire, which may damage the system. Thus, in this paper, a bimorph piezoelectric cantilever beam carrying a permanent magnet on its free end is used to measure electrical current in a noncontact way. In the mathematical modeling, the governing equation of the cantilever beam is solved based on the Galerkin method, and the equation relating the applied force to the beam's output voltage is presented. The magnetic force resulting from the current-carrying wire is considered as the external excitation force of the system. The results are compared with other references in order to demonstrate the accuracy of the mathematical model. Finally, the effects of geometric parameters on the output voltage and natural frequency are presented.
Keywords: Cantilever beam, electrical current measurement, forced excitation, piezoelectric.
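For context, the measurement principle rests on the magnetic field of a long straight current-carrying wire (standard magnetostatics, not a relation taken from the paper):

$$B(r) = \frac{\mu_0 I}{2\pi r}$$

so the force exerted on the permanent magnet, and hence the beam's excitation and piezoelectric output voltage, scales with the current I to be measured; for an AC current, the excitation oscillates at the line frequency.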
5029 An Anatomically-Based Model of the Nerves in the Human Foot
Authors: Muhammad Zeeshan UlHaque, Peng Du, Leo K. Cheng, Marc D. Jacobs
Abstract:
Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of these nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically-based model of various major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm was based on a modified version of the Monte Carlo algorithm, with the capability of varying the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as of the smaller somatosensory nerves of the foot. The model may now be developed further to investigate the functional outcomes of structural neuropathy in diabetic patients.
Keywords: Diabetic neuropathy, Finite element modeling, Monte Carlo Algorithm, Somatosensory nerve networks
5028 Effect of Rotor to Casing Ratios with Different Rotor Vanes on Performance of Shaft Output of a Vane Type Novel Air Turbine
Authors: Bharat Raj Singh, Onkar Singh
Abstract:
This paper deals with a new concept of using compressed atmospheric air as a zero-pollution power source for running motorbikes. The motorbike is equipped with an air turbine in place of an internal combustion engine and transforms the energy of the compressed air into shaft work. The mathematical modeling and performance evaluation of a small-capacity, compressed-air-driven, vaned-type novel air turbine are presented in this paper. The effects of isobaric admission and adiabatic expansion of high pressure air for different rotor to casing diameter ratios and different vane angles (numbers of vanes) have been considered and analyzed. It is found that the shaft work output is optimum for certain typical values of the rotor to casing diameter ratio at a particular vane angle (number of vanes). In this study, the maximum power is obtained as 4.5 kW - 5.3 kW (5.5-6.25 HP) when the casing diameter is taken as 100 mm and the rotor to casing diameter ratio is kept between 0.55 and 0.65. This output is sufficient to run a motorbike.
Keywords: zero pollution, compressed air, air turbine, vane angle, rotor / casing diameter ratio
5027 Obstacle Classification Method Based On 2D LIDAR Database
Authors: Moohyun Lee, Soojung Hur, Yongwan Park
Abstract:
We propose an obstacle classification method based on a 2D LIDAR database. The existing obstacle classification method based on 2D LIDAR has advantages in terms of accuracy and shorter calculation time; however, it is difficult to classify the type of obstacle, and therefore accurate path planning is not possible. In order to overcome this problem, a method of classifying the obstacle type based on the width data of the obstacle was proposed, but width data alone were not sufficient to improve accuracy. In this paper, a database was established from width and intensity data: the first classification stage uses the width data, the second classification stage uses the intensity data, classification is performed by comparison against the database, and the result is determined by finding the entry with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that the calculation time declined in comparison to 3D LIDAR and that it was possible to classify obstacles using a single 2D LIDAR.
Keywords: Obstacle, Classification, LIDAR, Segmentation, Width, Intensity, Database.
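A minimal sketch of the two-stage database matching in Python (the database entries, width tolerance, and similarity measure are illustrative assumptions, not the paper's data):

```python
# (class label, typical width in m, typical reflection intensity) - assumed values
database = [
    ("pedestrian", 0.5, 30.0),
    ("cyclist",    0.8, 45.0),
    ("car",        1.8, 80.0),
    ("truck",      2.5, 90.0),
]

def classify(width, intensity, width_tol=0.4):
    # stage 1: keep database entries whose width is within the tolerance
    candidates = [e for e in database if abs(e[1] - width) <= width_tol]
    if not candidates:
        candidates = database          # fall back to all classes
    # stage 2: highest similarity = smallest intensity difference
    label, _, _ = min(candidates, key=lambda e: abs(e[2] - intensity))
    return label

print(classify(width=1.7, intensity=75.0))   # -> "car"
```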
5026 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Authors: Mert Bal, Hayri Sever, Oya Kalıpsız
Abstract:
Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference under lack of information and uncertainty. In such systems, various soft computing methods are used to model the uncertainty, such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming, and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are represented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce the attributes and decrease the computational complexity.
Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
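A minimal sketch of the Bayes step in Python (the priors and conditional probabilities are invented for illustration only, not taken from the paper):

```python
priors = {"flu": 0.10, "cold": 0.25, "healthy": 0.65}          # P(disease)
p_fever_given = {"flu": 0.85, "cold": 0.30, "healthy": 0.02}   # P(fever | disease)

# total probability of the symptom
p_fever = sum(priors[d] * p_fever_given[d] for d in priors)

# Bayes' theorem: P(disease | fever) = P(fever | disease) * P(disease) / P(fever)
posterior = {d: priors[d] * p_fever_given[d] / p_fever for d in priors}

for disease, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({disease} | fever) = {p:.3f}")
```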
5025 Comparison of FAHP and TOPSIS for Evacuation Capability Assessment of High-rise Buildings
Authors: Peng Mei, Yan-Jun Qi, Yu Cui, Song Lu, He-Ping Zhang
Abstract:
Many computer-based methods have been developed to assess the evacuation capability (EC) of high-rise buildings. Because such software is time-consuming and not suitable for on-scene applications, we adopted two methods, the fuzzy analytic hierarchy process (FAHP) and the technique for order preference by similarity to an ideal solution (TOPSIS), for the EC assessment of a high-rise building in Jinan. The EC scores obtained with the two methods and the evacuation times acquired with Pathfinder 2009 for floors 47-60 of the building were compared with each other. The results show that FAHP performs better than TOPSIS for the EC assessment of high-rise buildings, especially in dealing with the effect of occupant type and distance to exit on EC, in tackling complex problems with a multi-level structure of criteria, and in requiring a smaller amount of computation. However, both FAHP and TOPSIS failed to appropriately handle the situation where the exit width changes while the occupants are few.
Keywords: Evacuation capability assessment, FAHP, high-rise buildings, TOPSIS.
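A minimal sketch of a generic TOPSIS ranking in Python (the decision matrix, weights, and treatment of all criteria as benefit criteria are illustrative assumptions, not the criteria used in the paper):

```python
import numpy as np

# rows = alternatives (e.g. floors), columns = benefit criteria (assumed values)
X = np.array([
    [7.0, 5.0, 8.0],
    [6.0, 9.0, 5.0],
    [8.0, 6.0, 7.0],
])
weights = np.array([0.5, 0.3, 0.2])

# 1) vector-normalize and weight the decision matrix
V = weights * X / np.linalg.norm(X, axis=0)

# 2) ideal and anti-ideal solutions (all criteria treated as benefits here)
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)

# 3) distances to both, then relative closeness C in [0, 1]
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("closeness scores:", np.round(closeness, 3))
print("best alternative:", int(np.argmax(closeness)))
```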
5024 Real-Time Episodic Memory Construction for Optimal Action Selection in Cognitive Robotics
Authors: Deon de Jager, Yahya Zweiri, Dimitrios Makris
Abstract:
The three most important components in the cognitive architecture for cognitive robotics are memory representation, memory recall, and action selection performed by the executive. In this paper, action selection, performed by the executive, is defined as a memory quantification and optimization process. The methodology describes the real-time construction of episodic memory through semantic memory optimization. The optimization is performed by set-based particle swarm optimization, using an adaptive entropy memory quantification approach for fitness evaluation. The performance of the approach is evaluated experimentally by simulation, where a UAV is tasked with the collection and delivery of a medical package. The experiments show that the UAV dynamically uses the episodic memory to autonomously control its velocity while successfully completing its mission.
Keywords: Cognitive robotics, semantic memory, episodic memory, maximum entropy principle, particle swarm optimization.