Search results for: Problem Solving
623 CFD Parametric Study of Mixers Performance
Authors: Mikhail Strongin
Abstract:
The mixing of two or more liquids is very common in many industrial applications, from automotive to food processing. CFD simulations of these processes require comparison with test results, which in many cases is practically impossible; comparison is therefore made against scalable tests, so parameterization of the problem is sufficient to capture the performance of the mixer.
However, the influence of geometrical and thermo-physical parameters on the mixing is not well understood.
In this work the influence of geometrical and thermal parameters was studied. It was shown that for fully developed turbulent flows (Re > 10^4), Pe_t ≈ const and the concentration of the secondary fluid ~ F(r/l).
In other words, the mixing is practically independent of total flow rate and scale for a given geometry and ratio of flow rates of the mixing flows. This statement was proved in the present work for different geometries and mixtures such as EGR and water-urea mixtures.
The present study has shown that the best way to improve the mixing is to establish a geometry with the lowest Pe_t number possible by intensifying the turbulence in the domain. This is achievable by using a step geometry, impinging the EGR flow on a wall, using EGR jets with a strong change in flow direction, using a swirler-like flow in the domain, or a combination of all these factors. All of these results are applicable to any mixtures of incompressible fluids.
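The scaling claim above can be stated compactly. The symbols below (bulk velocity U, characteristic length l, turbulent diffusivity D_t, secondary-fluid concentration c_2 and flow rates Q_1, Q_2) are a hedged reconstruction from the abstract, not the paper's own notation:

```latex
\[
  \mathrm{Pe}_t \,=\, \frac{U\,l}{D_t} \,\approx\, \mathrm{const}
  \qquad \text{for fully developed turbulence } (\mathrm{Re} > 10^4),
\]
\[
  \frac{c_2}{c_{\mathrm{tot}}} \,=\, F\!\left(\frac{r}{l};\; \frac{Q_2}{Q_1}\right).
\]
```

Read this way, the normalized concentration field depends only on position scaled by the geometry and on the flow-rate ratio, not on the absolute flow rate or the absolute scale, which is exactly the independence claimed above.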
Keywords: CFD, mixing, fluids, parameterization, scalability.
622 Evaluation of Eulerian and Lagrangian Method in Analysis of Concrete Gravity Dam Including Dam Water Foundation Interaction
Authors: L. Khan mohammadi, J. Vaseghi Amiri, B. Navayi neya , M. Davoodi
Abstract:
Because of the reservoir effect, dynamic analysis of concrete dams is more involved than that of other common structures. This problem mostly arises from the differences between the behaviors of the reservoir water, the dam body and the foundation material. To account for the reservoir effect in dynamic analysis of concrete gravity dams, two methods are generally employed. The Eulerian method in reservoir modeling gives rise to a set of coupled equations, whereas in the Lagrangian method, the same equations as for the dam and foundation structure are used. The purpose of this paper is to evaluate and study the possible advantages and disadvantages of both methods. Specifically, application of the above methods in the analysis of dam-foundation-reservoir systems is leveraged to calculate the hydrodynamic pressure on dam faces. Within the framework of dam-foundation-reservoir systems, dam displacement under earthquake for various dimensions and characteristics is also studied. The results of both the Lagrangian and Eulerian methods regarding the effects of loading frequency, boundary condition and foundation elasticity modulus are quantitatively evaluated and compared. Our analyses show that each method has individual advantages and disadvantages. As such, in any particular case, one of the two methods may prove more suitable, as presented in the results section of this study.
Keywords: Lagrangian method, Eulerian method, Earthquake, Concrete gravity dam
621 An Automatic Tool for Checking Consistency between Data Flow Diagrams (DFDs)
Authors: Rosziati Ibrahim, Siow Yen Yen
Abstract:
System development life cycle (SDLC) is a process used during the development of any system. SDLC consists of four main phases: analysis, design, implementation and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important in smoothing the development process of a system. However, manually checking consistency from the context diagram to the lower-level data flow diagrams using a checklist is a time-consuming process. At the same time, the limitation of human ability to validate the errors is one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to check the correctness of the diagrams drawn. The consistency check from the context diagram to the lower-level data flow diagrams is embedded inside the tool to overcome the manual checking problem.
Keywords: Data Flow Diagram, Context Diagram, Consistency Check, Syntax and Semantic Rules
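To illustrate the kind of rule-based check such a tool automates, the sketch below verifies one common DFD balancing rule: every external inflow and outflow on the context diagram must reappear in its lower-level decomposition. The data structures and the rule chosen are illustrative assumptions, not the tool's actual implementation.

```python
# Minimal sketch of a DFD balancing check (assumed rule: the external
# inflows/outflows of the context diagram must match those of the child DFD).

def balancing_errors(context, child):
    """Return human-readable consistency errors between two DFD levels.

    Each diagram is represented as a dict with 'inputs' and 'outputs'
    holding sets of data-flow names (an illustrative representation).
    """
    errors = []
    for kind in ("inputs", "outputs"):
        for flow in sorted(context[kind] - child[kind]):
            errors.append(f"{kind[:-1]} '{flow}' in context diagram is missing from child DFD")
        for flow in sorted(child[kind] - context[kind]):
            errors.append(f"{kind[:-1]} '{flow}' in child DFD does not appear in context diagram")
    return errors


if __name__ == "__main__":
    context = {"inputs": {"order", "payment"}, "outputs": {"receipt"}}
    child = {"inputs": {"order"}, "outputs": {"receipt", "report"}}
    for err in balancing_errors(context, child):
        print(err)
```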
620 Role-play Gaming Simulation for Flood Management on Cultural Heritage: A Case Study of Ayutthaya Historic City
Authors: Pongpisit Huyakorn, Chaweewan Denpaiboon, Hidehiko Kanegae
Abstract:
The main aim of this research is to develop a methodology to encourage people's awareness, knowledge and understanding of participation in flood management for cultural heritage, through cooperation and interaction among the government, private and public sectors, based on role-play gaming simulation theory. The approach of this research is to develop a role-play gaming simulation from existing documents, games or role-playing from several sources, and existing data on the research site. We found that role-play gaming simulation can be implemented to help improve understanding of the existing problem and the impact of flooding on cultural heritage, and that the role-play game can be developed into a tool to improve people's knowledge, understanding and awareness of public participation in flood management for cultural heritage; moreover, cooperation among the government, private and public sectors can be improved through the theory of role-play gaming simulation.
Keywords: Climate change, Role-play gaming simulation, Sustainable development, Public participation, Cultural heritage
619 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database
Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala
Abstract:
This paper investigates the problem of tracking spatiotemporal changes of a satellite image through the use of Knowledge Discovery in Database (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view of spatiotemporal evolution of a query model (which represents an extracted object from a satellite image). In order to combine these methods, we use the evidence fusion theory which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results of satellite images representing the region of Auckland in New Zealand depict the improvement in the overall change detection as compared to using classical methods.
Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.
618 Real-time 3D Feature Extraction without Explicit 3D Object Reconstruction
Authors: Kwangjin Hong, Chulhan Lee, Keechul Jung, Kyoungsu Oh
Abstract:
For communication between human and computer in an interactive computing environment, gesture recognition is studied vigorously. Therefore, many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods have a limitation: the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time needed to generate 3D objects remain unsolved in related research. Therefore, we propose a method to extract 3D features without explicit 3D object reconstruction. This method uses a modified GPU-based visual hull generation algorithm which disables unnecessary processes, such as texture calculation, to generate three kinds of 3D projection maps as the 3D features: the nearest boundary, the farthest boundary, and the thickness of the object projected on the base-plane. In the experimental results section, we present results of the proposed method on eight human postures: T shape, both hands up, right hand up, left hand up, hands front, stand, sit and bend, and compare the computational time of the proposed method with that of previous methods.
Keywords: Fast 3D Feature Extraction, Gesture Recognition, Computer Vision.
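A hedged sketch of how the three projection maps described above could be computed from a voxel occupancy grid; the grid representation and axis convention are assumptions for illustration (the paper derives the maps on the GPU from a visual hull rather than from an explicit volume):

```python
import numpy as np

def projection_maps(occupancy):
    """Nearest-boundary, farthest-boundary and thickness maps of an object.

    occupancy: boolean array of shape (X, Y, Z); the base-plane is assumed
    to be the XY plane, and projection is taken along the Z axis.
    """
    Z = occupancy.shape[2]
    any_occ = occupancy.any(axis=2)                 # columns containing the object
    nearest = np.where(any_occ, np.argmax(occupancy, axis=2), 0)
    farthest = np.where(any_occ, Z - 1 - np.argmax(occupancy[:, :, ::-1], axis=2), 0)
    thickness = occupancy.sum(axis=2)               # occupied voxels per column
    return nearest, farthest, thickness

if __name__ == "__main__":
    vol = np.zeros((4, 4, 8), dtype=bool)
    vol[1:3, 1:3, 2:6] = True                       # a small box as a toy "object"
    n, f, t = projection_maps(vol)
    print(n[2, 2], f[2, 2], t[2, 2])                # -> 2 5 4
```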
617 Narrative and Expository Text Reading Comprehension by Fourth Grade Spanish-Speaking Children
Authors: Mariela V. De Mier, Veronica S. Sanchez Abchi, Ana M. Borzone
Abstract:
This work aims to explore the factors that have an impact on the reading comprehension process with different types of texts. In a recent study with 2nd, 3rd and 4th grade children, it was observed that reading comprehension of narrative texts was better than comprehension of expository texts. Nevertheless, it seems that not only the type of text but also other textual factors would account for comprehension, depending on the cognitive processing demands posed by the text. In order to explore this assumption, three narrative and three expository texts were elaborated with different degrees of complexity. A group of 40 fourth grade Spanish-speaking children took part in the study. Children were asked to read the texts and answer orally three literal and three inferential questions for each text. The quantitative and qualitative analysis of children's responses showed that children had difficulties with both narrative and expository texts. The problem was to answer those questions that involved establishing complex relationships among information units that were present in the text or that had to be activated from children's previous knowledge to make an inference. Considering the data analysis, it can be concluded that there is some interaction between the type of text and the cognitive processing load of a specific text.
Keywords: comprehension, textual factors, type of text, processing demands.
616 Adaptive Square-Rooting Companding Technique for PAPR Reduction in OFDM Systems
Authors: Wisam F. Al-Azzo, Borhanuddin Mohd. Ali
Abstract:
This paper addresses the problem of peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. It also introduces a new PAPR reduction technique based on an adaptive square-rooting (SQRT) companding process. The SQRT process of the proposed technique changes the statistical characteristics of the OFDM output signals from a Rayleigh distribution to a Gaussian-like distribution. This change in statistical distribution results in changes to both the peak and average power values of the OFDM signals, and consequently reduces the PAPR significantly. For a 64QAM OFDM system using 512 subcarriers, up to 6 dB reduction in PAPR was achieved by the square-rooting technique with a fixed degradation in bit error rate (BER) equal to 3 dB. However, the PAPR is reduced at the expense of only -15 dB out-of-band spectral shoulder re-growth below the in-band signal level. The proposed adaptive SQRT technique is superior in terms of BER performance to the original, non-adaptive, square-rooting technique when the required reduction in PAPR is no more than 5 dB. Also, it provides a fixed amount of PAPR reduction, which is not available in the original SQRT technique.
Keywords: complementary cumulative distribution function (CCDF), OFDM, peak-to-average power ratio (PAPR), adaptive square-rooting PAPR reduction technique.
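A minimal numerical sketch of the (non-adaptive) square-rooting idea: the magnitude of each OFDM sample is replaced by its square root, which compresses the peaks and lowers the PAPR. The rescaling to preserve average power is an assumption for illustration; the adaptive version in the paper adjusts this process further.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def sqrt_compand(x):
    """Square-root companding: |x| -> sqrt(|x|), phase preserved,
    rescaled so the average power is unchanged (illustrative choice)."""
    y = np.sqrt(np.abs(x)) * np.exp(1j * np.angle(x))
    return y * np.sqrt(np.mean(np.abs(x) ** 2) / np.mean(np.abs(y) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sc = 512
    # Random 64QAM-like symbols on 512 subcarriers, one OFDM symbol via IFFT.
    levels = np.arange(-7, 8, 2)
    sym = rng.choice(levels, n_sc) + 1j * rng.choice(levels, n_sc)
    ofdm = np.fft.ifft(sym)
    print(f"PAPR before: {papr_db(ofdm):.2f} dB, after: {papr_db(sqrt_compand(ofdm)):.2f} dB")
```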
615 The Relationship between Fugacity and Stress Intensity Factor for Corrosive Environment in Presence of Hydrogen Embrittlement
Authors: A. R. Shahani, E. Mahdavi, M. Amidpour
Abstract:
Hydrogen diffusion is the main problem in corrosion fatigue in a corrosive environment. In order to analyze the phenomenon, it is necessary to understand the behavior of hydrogen during diffusion. Hydrogen embrittlement, as a main corrosive contributor to fracture, and the prediction of its behavior therefore require solving combinations of different equations mathematically. The key to obtaining these equations is knowledge about the source that causes diffusion and drives the atoms into the material, called the driving force, which is produced by a gradient of either electrical or chemical potential. In this work, we consider the gradient of chemical potential to obtain the property equation. In the diffusion of atoms, some of them may be trapped, but this can be ignored under some conditions. In accordance with the phenomenon of hydrogen embrittlement, the thermodynamic and chemical properties of hydrogen are considered in order to justify and relate them to fracture mechanics. It is very important to obtain the stress intensity factor by using fugacity as a property of hydrogen or other gases. Although the diffusive behavior and the embrittlement event are common to and the same for other gases, for clarity we describe them for hydrogen. Focusing on this particular gas helps us to better understand the importance of this relation.
Keywords: Hydrogen embrittlement, Fracture mechanics, Thermodynamic, Stress intensity factor.
614 Malicious Route Defending Reliable-Data Transmission Scheme for Multi Path Routing in Wireless Network
Authors: S. Raja Ratna, R. Ravi
Abstract:
Securing the confidential data transferred via wireless networks remains a challenging problem. It is paramount to ensure that data are accessible only by legitimate users rather than by attackers. One of the most serious threats to an organization is jamming, which disrupts the communication between any pair of nodes. Therefore, designing an attack-defending scheme without any packet loss in data transmission is an important challenge. In this paper, a Dependence based Malicious Route Defending (DMRD) Scheme has been proposed in a multi path routing environment to prevent jamming attacks. The key idea is to defend the malicious route to ensure perspicuous transmission. This scheme develops a two-layered architecture and operates in two different steps. In the first step, possible routes are captured and their agent dependence values are marked using triple agents. In the second step, the dependence values are compared by performing comparator filtering to detect the malicious route as well as to identify a reliable route for secured data transmission. By simulation studies, it is observed that the proposed scheme significantly identifies the malicious route, attaining lower delay time and route discovery time; it also achieves higher throughput.
Keywords: Attacker, Dependence, Jamming, Malicious.
613 Simulating Voltage Sag Using PSCAD Software
Authors: Kang Chia Yang, Hushairi HJ Zen, Nur Ikhmar@Najemeen Binti Ayob
Abstract:
Power quality is used to describe the degree of consistency of electrical energy expected from the generation source to the point of use. The term power quality refers to a wide variety of electromagnetic phenomena that characterize the voltage and current at a given time and at a given location on the power system. Power quality problems can be defined as problems that result in failure of customer equipment, manifest themselves as an economic burden to users, or produce negative impacts on the environment. Voltage stability, power factor, harmonics pollution, reactive power and load unbalance are some of the factors that affect the consistency or the quality level. This research proposal proposes to investigate and analyze the causes and effects of power quality on homes and industries in Sarawak. The increasing use of electronic equipment in industries and homes has had a big impact on power quality. Many electrical devices are now interconnected to the power network, and it can be observed that if the power quality of the network is good, then any loads connected to it will run smoothly and efficiently. On the other hand, if the power quality of the network is bad, then loads connected to it will fail or may be damaged, with reduced lifetime. The outcome of this research will enable better and novel solutions to poor power quality for small industries and reduce damage to electrical devices and products in the industries.
Keywords: Power quality, power network, voltage dip.
612 Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks
Authors: Konstantinos Perifanos, Eirini Florou, Dionysis Goutsos
Abstract:
This paper presents and benchmarks a number of end-to-end deep learning based models for metaphor detection in Greek. We combine Convolutional Neural Networks and Recurrent Neural Networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving on the previous state-of-the-art results, which had already achieved an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering or linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks.
Keywords: Metaphor detection, deep learning, representation learning, embeddings.
611 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling
Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen
Abstract:
Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have perfectly implemented their web crawlers for a certain website in parallel. But due to the lack of expensive proxy servers, it is still a puzzle for researchers to crawl large amounts of information from a single website in parallel. In this case, it is a good choice for researchers to use free public proxy servers crawled from the Internet. In order to improve the efficiency of a web crawler, the following two issues should be considered primarily: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large-scale unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch is designed to work efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which can keep the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than other scheduling algorithms.
Keywords: Proxy server, priority queue, optimization approach, distributed web crawling.
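A small sketch of the two scheduling ideas described in the abstract: prefer reliable proxies (a priority queue keyed on observed success rate) and enforce a per-proxy visiting-frequency cap for the target website. The names, the reliability score and the rate-limit rule are illustrative assumptions, not Proxisch's actual design.

```python
import heapq
import time

class ProxyScheduler:
    """Pick the most reliable proxy whose per-site rate limit is not exceeded."""

    def __init__(self, proxies, min_interval=5.0):
        # Max-heap on success rate, implemented with negated keys.
        self.heap = [(-1.0, p) for p in proxies]   # start optimistic: rate 1.0
        heapq.heapify(self.heap)
        self.last_used = {}                        # proxy -> time of last request
        self.stats = {p: [1, 1] for p in proxies}  # proxy -> [successes, trials]
        self.min_interval = min_interval           # seconds between visits per proxy

    def acquire(self):
        """Check out the best available proxy; it re-enters the queue on release()."""
        deferred = []
        try:
            while self.heap:
                neg_rate, proxy = heapq.heappop(self.heap)
                if time.time() - self.last_used.get(proxy, 0.0) >= self.min_interval:
                    self.last_used[proxy] = time.time()
                    return proxy
                deferred.append((neg_rate, proxy))  # rate-limited, try the next one
            return None                             # every proxy is rate-limited or busy
        finally:
            for item in deferred:
                heapq.heappush(self.heap, item)

    def release(self, proxy, success):
        """Report the request outcome and put the proxy back with an updated score."""
        s, t = self.stats[proxy]
        self.stats[proxy] = [s + int(success), t + 1]
        rate = self.stats[proxy][0] / self.stats[proxy][1]
        heapq.heappush(self.heap, (-rate, proxy))
```

In practice the crawler would call acquire() before each request and release(proxy, success) afterwards, so reliable proxies naturally bubble back to the top of the queue while failing ones sink.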
610 Mitigating the Cost of Empty Container Repositioning through the Virtual Container Yard: An Appraisal of Carriers’ Perceptions
Authors: L. Edirisinghe, Z. Jin, A. W. Wijeratne, R. Mudunkotuwa
Abstract:
Empty container repositioning is a fundamental problem faced by the shipping industry. The virtual container yard is a novel strategy underpinning the container interchange between carriers that could substantially reduce this ever-increasing shipping cost. This paper evaluates the shipping industry perception of the virtual container yard using chi-square tests. It examines if the carriers perceive that the selected independent variables, namely culture, organization, decision, marketing, attitudes, legal, independent, complexity, and stakeholders of carriers, impact the efficiency and benefits of the virtual container yard. There are two major findings of the research. Firstly, carriers view that complexity, attitudes, and stakeholders may impact the effectiveness of container interchange and may influence the perceived benefits of the virtual container yard. Secondly, the three factors of legal, organization, and decision influence only the perceived benefits of the virtual container yard. Accordingly, the implementation of the virtual container yard will be influenced by six key factors, namely complexity, attitudes, stakeholders, legal, organization and decision. Since the virtual container yard could reduce overall shipping costs, it is vital to examine the carriers’ perception of this concept.
Keywords: Virtual container yard, imbalance, management, inventory.
609 Torsional Rigidities of Reinforced Concrete Beams Subjected to Elastic Lateral Torsional Buckling
Authors: Ilker Kalkan, Saruhan Kartal
Abstract:
Reinforced concrete (RC) beams rarely undergo lateral-torsional buckling (LTB), since these beams possess large lateral bending and torsional rigidities owing to their stocky cross-sections, unlike steel beams. However, the problem of LTB is becoming more and more pronounced in the last decades as the span lengths of concrete beams increase and the cross-sections become more slender with the use of pre-stressed concrete. The buckling moment of a beam mainly depends on its lateral bending rigidity and torsional rigidity. The nonhomogeneous and elastic-inelastic nature of RC complicates estimation of the buckling moments of concrete beams. Furthermore, the lateral bending and torsional rigidities of RC beams and the buckling moments are affected by different forms of concrete cracking, including flexural, torsional and restrained shrinkage cracking. The present study pertains to the effects of concrete cracking on the torsional rigidities of RC beams prone to elastic LTB. A series of tests on rather slender RC beams indicated that torsional cracking does not initiate until buckling in elastic LTB, while flexural cracking associated with lateral bending takes place even at the initial stages of loading. Hence, the present study clearly indicated that the un-cracked torsional rigidity needs to be used for estimating the buckling moments of RC beams liable to elastic LTB.
Keywords: Lateral stability, post-cracking torsional rigidity, uncracked torsional rigidity, critical moment.
608 Multi-Objective Optimization Contingent on Subcarrier-Wise Beamforming for Multiuser MIMO-OFDM Interference Channels
Authors: R. Vedhapriya Vadhana, Ruba Soundar, K. G. Jothi Shalini
Abstract:
We address the problem of interference over all the channels in multiuser MIMO-OFDM systems. This paper contributes three beamforming strategies designed for multiuser multiple-input multiple-output systems with orthogonal frequency division multiplexing, in which the transmit and receive beamformers are acquired iteratively in closed-form stages. In the principal case, the transmit (TX) beamformers remain fixed and then the receive (RX) beamformers are computed. This eradicates one interference span for every user by projecting the transmit beamformers into a null space of the relevant channels. Then, satisfying the orthogonality condition to exclude the residual interference in the RX beamformer for every user is done by maximizing the signal-to-noise ratio (SNR). The second case comprises jointly optimizing the TX and RX beamformers from constrained SNR maximization; the outcome of the first case is used here. The third case also includes joint optimization of the TX-RX beamformers; however, it uses both constrained SNR and signal-to-interference-plus-noise ratio (SINR) maximization. Using the standardized channel model for IEEE 802.11n, the proposed simulation experiments offer rapid beamforming and enhanced error performance.
Keywords: Beamforming, interference channels, MIMO-OFDM, multi-objective optimization.
607 Enhanced Multi-Intensity Analysis in Multi-Scenery Classification-Based Macro and Micro Elements
Authors: R. Bremananth
Abstract:
Several computationally challenging issues are encountered while classifying complex natural scenes. In this paper, we address the problems encountered in rotation invariance with multi-intensity analysis for multi-scene overlapping. In the present literature, various algorithms have proposed techniques for multi-intensity analysis, but there are several restrictions in these algorithms when deploying them in multi-scene overlapping classification. In order to resolve the problem of multi-scenery overlapping classification, we present a framework that is based on macro and micro basis functions. This algorithm achieves the minimum classification false alarm while pigeonholing multi-scene overlapping. Furthermore, a quadrangle multi-intensity decay is invoked. Several parameters are utilized to analyze invariance for multi-scenery classification, such as rotation, classification, correlation, contrast, homogeneity, and energy. Benchmark datasets were collected for complex natural scenes and used to evaluate the framework. The results depict that the framework achieves a significant improvement in gray-level co-occurrence matrix features for overlapping in diverse degrees of orientation while pigeonholing multi-scene overlapping.
Keywords: Automatic classification, contrast, homogeneity, invariant analysis, multi-scene analysis, overlapping.
606 Pension Plan Member’s Investment Strategies with Transaction Cost and Couple Risky Assets Modelled by the O-U Process
Authors: Udeme O. Ini, Edikan E. Akpanibah
Abstract:
This paper studies the optimal investment strategies for a plan member (PM) in a defined contribution (DC) pension scheme with transaction cost, taxes on invested funds and couple risky assets (stocks) under the Ornstein-Uhlenbeck (O-U) process. The PM’s portfolio is assumed to consist of a risk-free asset and two risky assets, where the two risky assets are driven by the O-U process. The Legendre transformation and dual theory are used to transform the resultant optimal control problem, which is a nonlinear partial differential equation (PDE), into a linear PDE, and the resultant linear PDE is then solved for the explicit solutions of the optimal investment strategies for a PM exhibiting constant absolute risk aversion (CARA) using the change of variable technique. Furthermore, theoretical analysis is used to study the influence of some sensitive parameters on the optimal investment strategies, with the observation that the optimal investment strategies for the two risky assets increase with an increase in the dividend and decrease with an increase in the tax on the invested funds, the risk-averse coefficient, the initial fund size and the transaction cost.
Keywords: Ornstein-Uhlenbeck process, portfolio management, Legendre transforms, CARA utility.
605 Activity Recognition by Smartphone Accelerometer Data Using Ensemble Learning Methods
Authors: Eu Tteum Ha, Kwang Ryel Ryu
Abstract:
As smartphones are equipped with various sensors, there have been many studies focused on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications, such as support for the elderly, measurement of calorie consumption, lifestyle and exercise pattern analyses, and so on. One of the challenges one faces when using smartphone sensors for activity recognition is that the number of sensors should be minimized to save battery power. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using only a single sensor's data, i.e., the smartphone accelerometer data. The approach that we adopt to deal with this multi-class problem uses various methods. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point, but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window. The experiments compared the performance of four kinds of basic multi-class classifiers and the performance of four kinds of ensemble learning methods based on three kinds of basic multi-class classifiers. The results show that the method with the highest accuracy is ECOC based on Random Forest.
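The window features named above are simple to compute; the sketch below extracts them from raw triaxial accelerometer samples (the window length, step and data layout are assumptions for illustration):

```python
import numpy as np

def window_features(acc, window=128, step=64):
    """Extract per-window features from triaxial accelerometer data.

    acc: array of shape (n_samples, 3) with x, y, z acceleration.
    Returns one row per window:
    [mean, max, min, standard deviation] of the acceleration vector magnitude.
    """
    mag = np.linalg.norm(acc, axis=1)             # acceleration vector magnitude
    feats = []
    for start in range(0, len(mag) - window + 1, step):
        w = mag[start:start + window]
        feats.append([w.mean(), w.max(), w.min(), w.std()])
    return np.asarray(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake = rng.normal(0.0, 1.0, size=(1024, 3))   # stand-in for sensor samples
    X = window_features(fake)
    print(X.shape)                                # (15, 4) feature rows
```

These feature rows could then be fed to a multi-class ensemble, e.g. an error-correcting output code (ECOC) wrapper such as scikit-learn's OutputCodeClassifier around random forests.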
Keywords: Ensemble learning, activity recognition, smartphone accelerometer.
604 Best Starting Pitcher of the Chinese Professional Baseball League in 2009
Authors: Chih-Cheng Chen, Meng-Lung Lin, Yung-Tan Lee, Tien-Tze Chen, Ching-Yu Tseng
Abstract:
Baseball is unique among sports in Taiwan; it has become a "symbol of the Taiwanese spirit and Taiwan's national sport". Taiwan's first professional sports league, the Chinese Professional Baseball League (CPBL), was established in 1989. Starters pitch many more innings over the course of a season, and for a century teams have made all their best pitchers starters. In this study, we attempt to determine the on-field performance of these pitchers and which of them won the most CPBL games in 2009. We utilize the discriminant analysis approach to solve the problem, examining winning pitchers and their statistics, to reliably find the best starting pitcher. The data employed in this paper include innings pitched (IP), earned run average (ERA) and walks plus hits per inning pitched (WHIP), provided by the official website of the CPBL. The results show that Aaron Rakers was the best starting pitcher of the CPBL. The top 10 CPBL starting pitchers won 14 to 8 games in the 2009 season. Through Fisher's discriminant analysis, the top 10 CPBL starting pitchers were predicted to win 20 to 9 games, 1 to 7 games more than their actual counts in the 2009 season.
Keywords: Chinese Professional Baseball League, starting pitcher, Fisher's discriminant analysis
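For illustration, a discriminant analysis of the kind described can be run on pitcher statistics with scikit-learn; the feature rows and labels below are invented placeholders, not the CPBL data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy rows: [innings pitched, ERA, WHIP]; label 1 = "top starting pitcher".
X = np.array([
    [180.0, 2.80, 1.05],
    [165.2, 3.10, 1.12],
    [150.0, 4.50, 1.40],
    [120.1, 5.20, 1.55],
    [170.0, 3.00, 1.10],
    [110.0, 4.90, 1.48],
])
y = np.array([1, 1, 0, 0, 1, 0])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.predict([[160.0, 3.20, 1.15]]))   # classify a new pitcher's season line
```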
603 A Note on Metallurgy at Khanak: An Indus Site in Tosham Mining Area, Haryana
Authors: Ravindra N. Singh, Dheerendra P. Singh
Abstract:
Recent discoveries of Bronze Age artefacts, tin slag, furnaces and crucibles, together with new geological evidence on tin deposits in the Tosham area of Bhiwani district in Haryana (India), provide the opportunity to survey the evidence for possible sources of tin and the use of bronze in the Harappan sites of north-western India. Earlier, Afghanistan had emerged as the most promising eastern source of the tin utilized by Indus Civilization copper-smiths. Our excavations conducted at Khanak near the Tosham mining area during 2014 and 2016 revealed ample evidence of metallurgical activities, as attested by the occurrence of slag, ores and evidence of ash and fragments of furnaces, in addition to the bronze objects. We have conducted petrological, XRD, EDAX, TEM, SEM and metallography studies on the slag, ores, crucible fragments and bronze objects recovered from the Khanak excavations. This has given a positive indication of mining and metallurgy of polymetallic tin at the site; however, it can only be ascertained after the detailed scientific examination of the materials, which is underway. In view of the importance of the site, we intend to excavate the site horizontally in future so as to obtain more samples for scientific studies.
Keywords: Archaeometallurgy, problem of tin, metallography, Indus civilization.
602 A Frequency Grouping Approach for Blind Deconvolution of Fairly Motionless Sources
Authors: E. S. Gower, T. Tsalaile, E. Rakgati, M. O. J. Hawksford
Abstract:
A frequency grouping approach for multi-channel instantaneous blind source separation (I-BSS) of convolutive mixtures is proposed, achieving lower net residual inter-symbol interference (ISI) and inter-channel interference (ICI) than the conventional short-time Fourier transform (STFT) approach. Starting in the time domain, STFTs are taken with overlapping windows to convert the convolutive mixing problem into frequency-domain instantaneous mixing. Mixture samples at the same frequency but from different STFT windows are grouped together, forming unique frequency groups. The individual frequency group vectors are input to the I-BSS algorithm of choice, from which the output samples are dispersed back to their respective STFT windows. After applying the inverse STFT, the resulting time domain signals are used to construct the complete source estimates via the weighted overlap-add (WOLA) method. The proposed algorithm is tested for source deconvolution given two mixtures, and simulated along with the STFT approach to illustrate its superiority for fairly motionless sources.
Keywords: Blind source separation, short-time Fourier transform, weighted overlap-add method
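A hedged sketch of the frequency-grouping pipeline: take overlapping STFTs of the mixtures, collect the samples of each frequency bin across windows into one group, run an instantaneous separation step per group, scatter the results back and resynthesize with the inverse STFT (which scipy implements with weighted overlap-add). The whitening step used here is only a stand-in for "the I-BSS algorithm of choice".

```python
import numpy as np
from scipy.signal import stft, istft

def separate(mixtures, fs=8000, nperseg=256):
    """mixtures: array (n_channels, n_samples) of convolutive mixtures."""
    f, t, Z = stft(mixtures, fs=fs, nperseg=nperseg)   # Z: (ch, freq, time)
    est = np.zeros_like(Z)
    for k in range(Z.shape[1]):                        # one frequency group per bin
        Xk = Z[:, k, :]                                # samples of bin k, all windows
        # Stand-in instantaneous BSS step: decorrelate (whiten) the channels.
        R = Xk @ Xk.conj().T / Xk.shape[1]
        d, E = np.linalg.eigh(R)
        W = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ E.conj().T
        est[:, k, :] = W @ Xk                          # disperse back to the STFT windows
    _, sources = istft(est, fs=fs, nperseg=nperseg)    # WOLA resynthesis
    return sources

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mix = rng.normal(size=(2, 4000))                   # placeholder two-channel mixture
    print(separate(mix).shape)
```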
601 Simulation-Based Optimization of a Non-Uniform Piezoelectric Energy Harvester with Stack Boundary
Authors: Alireza Keshmiri, Shahriar Bagheri, Nan Wu
Abstract:
This research presents an analytical model for the development of an energy harvester with piezoelectric rings stacked at the boundary of the structure, based on the Adomian decomposition method. The model is applied to geometrically non-uniform beams to derive the steady-state dynamic response of the structure subjected to base motion excitation and efficiently harvest the subsequent vibrational energy. The in-plane polarization of the piezoelectric rings is employed to enhance the electrical power output. A parametric study of the proposed energy harvester with various design parameters is done to prepare the dataset required for optimization. Finally, a simulation-based optimization technique helps to find the optimum structural design with maximum efficiency. To solve the optimization problem, an artificial neural network is first trained to replace the simulation model, and then a genetic algorithm is employed to find the optimized design variables. Higher geometrical non-uniformity and length of the beam lower the structure's natural frequency and generate a larger power output.
Keywords: Piezoelectricity, energy harvesting, simulation-based optimization, artificial neural network, genetic algorithm.
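The surrogate-assisted loop described above (train a neural network on simulation samples, then let a genetic algorithm search the surrogate) can be sketched compactly; the objective function, variable bounds and GA settings here are illustrative assumptions, not those of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# 1) Dataset from the (expensive) simulation model, replaced here by a toy
#    function of two design variables, e.g. taper ratio and beam length.
def simulate(x):
    return np.sin(3 * x[:, 0]) * x[:, 1] - 0.5 * (x[:, 1] - 1.0) ** 2

X = rng.uniform(0.0, 2.0, size=(300, 2))
y = simulate(X)

# 2) Train the ANN surrogate that replaces the simulation.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X, y)

# 3) Simple genetic algorithm maximizing the surrogate prediction.
pop = rng.uniform(0.0, 2.0, size=(40, 2))
for _ in range(60):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]                  # selection: keep best half
    idx_a = rng.integers(0, 20, 40)
    idx_b = rng.integers(0, 20, 40)
    children = 0.5 * (parents[idx_a] + parents[idx_b])        # arithmetic crossover
    children += rng.normal(0.0, 0.05, (40, 2))                # mutation
    pop = np.clip(children, 0.0, 2.0)

best = pop[np.argmax(surrogate.predict(pop))]
print("optimized design variables:", best)
```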
600 CPT Pore Water Pressure Correlations with PDA to Identify Pile Drivability Problem
Authors: Fauzi Jarushi, Paul Cosentino, Edward Kalajian, Hadeel Dekhn
Abstract:
At certain depths during large-diameter displacement pile driving, rebound well over 0.25 inches was experienced, followed by a small permanent set during each hammer blow. High pile rebound (HPR) soils may stop the pile driving and result in a limited pile capacity. In some cases, rebound leads to pile damage, delaying the construction project and requiring redesign of the foundations. HPR was evaluated at seven Florida sites during driving of square precast, prestressed concrete piles driven into saturated, fine silty to clayey sands and sandy clays. Pile Driving Analyzer (PDA) deflection-versus-time data recorded during installation were used to develop correlations between cone penetrometer (CPT) pore-water pressures, pile displacements and rebound. At five sites where piles experienced excessive HPR with minimal set, the pore pressure yielded very high positive values of greater than 20 tsf. However, at the site where the pile rebounded, followed by an acceptable permanent set, the measured pore pressure ranged between 5 and 20 tsf. The pore pressure exhibited values of less than 5 tsf at the site where no rebound was noticed. In summary, direct correlations between CPTu pore pressure and rebound were produced, allowing identification of soils that produce HPR.
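The reported correlation boils down to simple thresholds on the CPTu pore pressure; a hedged restatement as a screening function (the thresholds come directly from the abstract, while the class labels are paraphrases):

```python
def rebound_risk(pore_pressure_tsf):
    """Classify high-pile-rebound risk from CPTu pore pressure (tsf),
    following the thresholds reported in the abstract."""
    if pore_pressure_tsf > 20:
        return "excessive rebound, minimal set expected"
    if pore_pressure_tsf >= 5:
        return "rebound with acceptable permanent set expected"
    return "no significant rebound expected"

for u in (3, 12, 25):
    print(u, "tsf ->", rebound_risk(u))
```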
Keywords: CPTu, pore water pressure, pile rebound.
599 Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design
Authors: K. A. Sumitra Devi, N. P. Banashree, Annamma Abraham
Abstract:
Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide a multi-million transistor design into manageable pieces. This paper looks at various aspects of partitioning techniques in VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system using a neural network, with selection of global features by making use of a clustering method model, for partitioning a circuit. For the evolutionary time-series model, we make use of genetic, memetic and neuro-memetic techniques. Our work focuses on the use of clustering methods: the K-means and EM methodologies. A comparative study is provided for all techniques to solve the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results proved that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum amount of interconnections between them.
Keywords: VLSI, circuit partitioning, memetic algorithm, genetic algorithm.
598 An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing
Authors: Raksha Sharma, Vishnu Kant Soni, Manoj Kumar Mishra, Prachet Bhuyan, Utpal Chandra Dey
Abstract:
Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage and network resources operating across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection becomes challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which best suits large-scale applications, and the root node of the MHT is selected for job submission. The job grouping concept is used to maximize resource utilization for scheduling of jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency and load balancing ability of the grid.
Keywords: Agent, Grid Computing, Job Grouping, Max Heap Tree (MHT), Resource Scheduling.
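A compact sketch of the two ideas named above: keep resources in a max-heap keyed on available processing capability (Python's heapq is a min-heap, so keys are negated) and group FCFS-ordered jobs until a group fills the root resource's capacity. The job and resource attributes (MIPS capacities, job lengths) are illustrative assumptions, not the paper's model.

```python
import heapq

def schedule(resources, jobs):
    """resources: dict name -> capacity (e.g. MIPS); jobs: FCFS list of (name, length).
    Returns a list of (resource, [job names]) group assignments."""
    heap = [(-cap, name) for name, cap in resources.items()]   # max-heap via negation
    heapq.heapify(heap)
    assignments = []
    i = 0
    while i < len(jobs) and heap:
        neg_cap, res = heapq.heappop(heap)                     # root = most capable resource
        capacity = -neg_cap
        group, used = [], 0
        while i < len(jobs) and used + jobs[i][1] <= capacity: # FCFS job grouping
            group.append(jobs[i][0])
            used += jobs[i][1]
            i += 1
        if not group:                                          # job larger than remaining capacity
            group, used = [jobs[i][0]], jobs[i][1]
            i += 1
        assignments.append((res, group))
        heapq.heappush(heap, (-(capacity - used), res))        # resource now partly occupied
    return assignments

print(schedule({"R1": 1000, "R2": 600}, [("j1", 400), ("j2", 500), ("j3", 300), ("j4", 200)]))
```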
597 CFD Simulation and Validation of Flow Pattern Transition Boundaries during Moderately Viscous Oil-Water Two-Phase Flow through Horizontal Pipeline
Authors: Anand B. Desamala, Anjali Dasari, Vinayak Vijayan, Bharath K. Goshika, Ashok K. Dasmahapatra, Tapas K. Mandal
Abstract:
In the present study, computational fluid dynamics (CFD) simulation has been executed to investigate the transition boundaries of different flow patterns for moderately viscous oil-water (viscosity ratio 107, density ratio 0.89 and interfacial tension of 0.032 N/m) two-phase flow through a horizontal pipeline with internal diameter and length of 0.025 m and 7.16 m, respectively. The Volume of Fluid (VOF) approach, including the effect of surface tension, has been employed to predict the flow patterns. The geometry and meshing of the present problem have been drawn using GAMBIT, and ANSYS FLUENT has been used for the simulation. A total of 47037 quadrilateral elements are chosen for the geometry of the horizontal pipeline. The computation has been performed by assuming unsteady flow, an immiscible liquid pair, constant liquid properties, co-axial flow and a T-junction as the entry section. The simulation correctly predicts the transition boundary of wavy stratified to stratified mixed flow. Other transition boundaries are yet to be simulated. The simulated data have been validated with our own experimental results.
Keywords: CFD simulation, flow pattern transition, moderately viscous oil-water flow, prediction of flow transition boundary, VOF technique.
596 An Optimal Algorithm for Finding (r, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint
Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad
Abstract:
This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discounts cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system is able to confront resource shortage by incurring extra costs. Thus, considering the resource shortage costs as a part of system costs, especially when the amount of resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (r, Q) policy when the system is influenced by a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and then an algorithm based on a one-dimensional search procedure is proposed for finding an optimal (r, Q) policy which minimizes the expected system costs.
Keywords: (r, Q) policy, stochastic demand, backorders, limited resource, quantity discounts.
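To make the search procedure concrete, the sketch below evaluates an illustrative expected-cost expression over candidate order quantities for a fixed reorder point and picks the minimizer; the cost terms (all-units discount price, soft-budget penalty, holding and backorder costs, Poisson demand) are simplified assumptions and not the paper's exact cost function.

```python
import math

def expected_cost(r, Q, lam=20.0, K=50.0, h=2.0, b=10.0,
                  budget=800.0, penalty=0.05,
                  tiers=((0, 10.0), (40, 9.0), (80, 8.5))):
    """Illustrative annual-cost proxy for an (r, Q) policy with all-units discounts
    and a soft budget: spending above `budget` per order costs `penalty` per unit."""
    price = max(p for q0, p in tiers if Q >= q0)        # all-units discount price
    spend = price * Q
    budget_cost = penalty * max(0.0, spend - budget)    # soft budget constraint
    ordering = K * lam / Q                              # expected ordering cost
    holding = h * (Q / 2.0 + r - lam)                   # cycle stock + approximate safety stock
    # Expected backorders per cycle under Poisson lead-time demand (truncated sum).
    eb = sum((d - r) * math.exp(-lam) * lam ** d / math.factorial(d) for d in range(r + 1, 60))
    shortage = b * eb * lam / Q
    return ordering + holding + shortage + (spend + budget_cost) * lam / Q

def best_Q(r, q_range=range(1, 200)):
    """One-dimensional search over Q for a fixed reorder point r."""
    return min(q_range, key=lambda Q: expected_cost(r, Q))

print("best Q for r = 25:", best_Q(25))
```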
595 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem
Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq
Abstract:
The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are major challenges in power systems, making accurate load forecasting an important and challenging task for optimal energy planning and management at both the distribution and generation sides. There are many techniques to forecast load, but each technique comes with its own limitations and requires data to accurately predict the forecast load. Artificial Neural Networks (ANNs) are one such technique to efficiently forecast the load. A comparison between two different ranges of input datasets has been applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It has been observed that the selection of input data for training a network has significant effects on the forecasted results. Day-wise input data forecasted the load accurately as compared to year-wise input data. The forecasted load is then distributed among the six generators by using linear programming to obtain the optimal point of generation. The algorithm is then verified by comparing the results of each generator with their respective generation limits.
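Once the load has been forecast, the dispatch step described above is a linear program; a minimal sketch with scipy (the six generators' costs and limits are invented placeholders, not the paper's data):

```python
import numpy as np
from scipy.optimize import linprog

forecast_load = 900.0                                     # MW, e.g. output of the trained ANN
cost = np.array([12.0, 15.0, 10.0, 20.0, 18.0, 14.0])     # $/MWh per generator (assumed)
p_min = np.array([50.0, 40.0, 60.0, 30.0, 20.0, 50.0])    # generation limits (assumed)
p_max = np.array([300.0, 250.0, 280.0, 150.0, 120.0, 200.0])

# Minimize total cost subject to: the outputs sum to the forecast load,
# and each generator stays within its limits.
res = linprog(c=cost,
              A_eq=np.ones((1, 6)), b_eq=[forecast_load],
              bounds=list(zip(p_min, p_max)),
              method="highs")

print("dispatch (MW):", np.round(res.x, 1), "total cost:", round(res.fun, 1))
```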
Keywords: Artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch.
594 Mixed Integer Programming for Multi-Tier Rebate with Discontinuous Cost Function
Authors: Y. Long, L. Liu, K. V. Branin
Abstract:
One challenge faced by procurement decision-makers during the acquisition process is how to compare similar products from different suppliers and allocate orders among different products or services. This work focuses on allocating orders among multiple suppliers considering rebates. The objective function is to minimize the total acquisition cost, including purchasing cost and rebate benefit. The rebate benefit is complex and difficult to estimate at the ordering step; rebate rules vary between suppliers and usually change over time. In this work, we developed a system to collect rebate policies, standardized the rebate policies and developed two-stage optimization models for order allocation. Rebate policies with multiple tiers are considered in the modeling. The discontinuous cost function of the rebate benefit is formulated for different scenarios, and a piecewise linear function is used to approximate it. A Mixed Integer Programming (MIP) model is then built for the order allocation problem with multi-tier rebates. A case study is presented, and it shows that our optimization model can reduce the total acquisition cost by considering rebate rules.
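To illustrate the cost structure being modeled, the sketch below evaluates a discontinuous multi-tier rebate benefit and a piecewise linear approximation of it over spend; the tier breakpoints and rates are invented examples, and the paper's actual MIP formulation is not reproduced here.

```python
import numpy as np

# Assumed rebate policy: the rate applies to the full spend once a tier is reached,
# which makes the net-benefit curve jump (discontinuous) at each breakpoint.
TIERS = [(0, 0.00), (10_000, 0.02), (50_000, 0.05), (100_000, 0.08)]

def rebate_benefit(spend):
    """Exact, discontinuous rebate benefit for a given spend."""
    rate = max(r for threshold, r in TIERS if spend >= threshold)
    return rate * spend

def piecewise_linear_approx(spend, breakpoints):
    """Linear interpolation of the exact benefit between breakpoints: the kind of
    approximation a MIP can encode with extra binary or SOS2 variables."""
    values = [rebate_benefit(b) for b in breakpoints]
    return np.interp(spend, breakpoints, values)

if __name__ == "__main__":
    bps = [0, 9_999, 10_000, 49_999, 50_000, 99_999, 100_000, 150_000]
    for s in (5_000, 10_000, 30_000, 50_000, 120_000):
        print(s, round(rebate_benefit(s), 1), round(piecewise_linear_approx(s, bps), 1))
```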
Keywords: Discontinuous cost function, mixed integer programming, optimization, procurement, rebate.