Search results for: Grid base clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1698

198 Physicochemistry of Pozzolanic Stabilization of a Class A-2-7 Lateritic Soil

Authors: Ahmed O. Apampa, Yinusa A. Jimoh

Abstract:

The paper examines the mechanism of pozzolan-soil reactions, using a recent study on the chemical stabilization of a Class A-2-7 (3) lateritic soil with corn cob ash (CCA) as a case study. The objectives are to establish a nexus between the cation exchange capacity of the soil, the alkaline-forming compounds in CCA, and the percentage of CCA addition to the soil beyond which no further improvement in strength properties can be achieved; and to propose feasible chemical reactions to explain the chemical stabilization of the lateritic soil with CCA alone. The lateritic soil, as well as CCA of pozzolanic quality Class C, were separately analysed for their metallic oxide composition using the X-Ray Fluorescence technique. The cation exchange capacities (CEC) of the soil and the CCA were computed theoretically from the percentage composition of the base cations Ca2+, Mg2+, K+ and Na+ as 1.48 meq/100 g and 61.67 meq/100 g respectively, indicating a ratio of 0.024 or 2.4%. This figure, taken as the theoretical amount required to just fill up the exchangeable sites of the clay molecules, compares well with the laboratory observation of 1.5% for the optimum level of CCA addition to the lateritic soil. The paper then presents chemical reaction equations between the alkaline earth metals in the CCA and the silica in the lateritic soil to form silicates, thereby proposing an extension of the theory of the mechanism of soil stabilization to cover chemical stabilization with pozzolanic ash alone. The paper concludes by recommending further research on the molecular structure of soils stabilized with pozzolanic waste ash alone, with a view to confirming the chemical equations advanced in the study.
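
As a brief worked form of the ratio described in the abstract:

\[
\frac{\mathrm{CEC}_{\text{soil}}}{\mathrm{CEC}_{\text{CCA}}}
= \frac{1.48\ \mathrm{meq}/100\,\mathrm{g}}{61.67\ \mathrm{meq}/100\,\mathrm{g}}
\approx 0.024 \;=\; 2.4\%,
\]

i.e. the theoretical CCA dosage that just saturates the exchange sites, against the 1.5% optimum observed in the laboratory.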

Keywords: Cation exchange capacity, corn cob ash, lateritic soil, soil stabilization.

197 Preparation and Cutting Performance of Boron-Doped Diamond Coating on Cemented Carbide Cutting Tools with High Cobalt Content

Authors: Zhaozhi Liu, Feng Xu, Junhua Xu, Xiaolong Tang, Ying Liu, Dunwen Zuo

Abstract:

Chemical vapor deposition (CVD) diamond-coated cutting tools have excellent cutting performance and are ideal for machining nonferrous metals and alloys, composites, nonmetallic materials and other difficult-to-machine materials efficiently and accurately. Depositing a CVD diamond coating on cemented carbide with high cobalt content can improve its toughness and strength; it is therefore important to study the preparation technology and cutting properties of CVD diamond-coated cemented carbide cutting tools with high cobalt content. The preparation technology of boron-doped diamond (BDD) coating was studied and coated drills were prepared. BDD coatings were deposited on the drills using the optimized parameters, and SEM results show that there are no cracks or collapses in the coating. Cutting tests with the prepared drills against silumin and aluminum-based printed circuit board (PCB) were carried out. The results show that the wear of the coated drill is small and the machined surface has better precision. The coating does not come off during the test, which demonstrates the good adhesion and cutting performance of the drill.

Keywords: Cemented carbide with high cobalt content, CVD boron-doped diamond, cutting test, drill.

196 Comparison between Pushover Analysis Techniques and Validation of the Simplified Modal Pushover Analysis

Authors: N. F. Hanna, A. M. Haridy

Abstract:

One of the main drawbacks of the Modal Pushover Analysis (MPA) is the need to perform nonlinear time-history analysis, which complicates the method and increases the analysis time. A simplified version of the MPA has been proposed based on the concept of the inelastic deformation ratio. Furthermore, the effect of the higher modes of vibration is considered by assuming linearly elastic responses, which enables the use of standard elastic response spectrum analysis. In this study, the simplified MPA (SMPA) method is applied to determine the target global drift and the inter-story drifts of a steel frame building. The effect of the higher vibration modes is considered within the framework of the SMPA. A comprehensive survey of the inelastic deformation ratio is presented. A suitable expression for the inelastic deformation ratio is then selected from the literature and implemented in the SMPA. The seismic demands estimated with the SMPA, such as the target drift, base shear, and inter-story drifts, are compared with the seismic responses determined by applying the standard MPA. The accuracy of the estimated seismic demands is validated by comparison with the results obtained by nonlinear time-history analysis using real earthquake records.
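
As a hedged illustration of the underlying idea (a generic expression from the MPA literature, not necessarily the specific expression selected in the paper), the peak roof displacement contribution of mode n in an SMPA-type procedure can be written as

\[
u_{rn} \;\approx\; \Gamma_n\,\phi_{rn}\,C_{R}\,D_n,
\qquad
D_n=\left(\frac{T_n}{2\pi}\right)^{2} A_n,
\]

where \( \Gamma_n \) is the modal participation factor, \( \phi_{rn} \) the roof ordinate of mode n, \( D_n \) (\( A_n \)) the elastic spectral displacement (acceleration) at period \( T_n \), and \( C_R \) the inelastic deformation ratio; higher modes assumed to respond elastically take \( C_R = 1 \).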

Keywords: Modal analysis, pushover analysis, seismic performance, target displacement.

195 Thermal Analysis on Heat Transfer Enhancement and Fluid Flow for Al2O3 Water-Ethylene Glycol Nanofluid in Single PEMFC Mini Channel

Authors: Irnie Zakaria, W. A. N. W Mohamed, W. H. Azmi

Abstract:

Thermal enhancement of a single mini channel in a Proton Exchange Membrane Fuel Cell (PEMFC) cooling plate is numerically investigated. In this study, low concentrations of Al2O3 in water-ethylene glycol mixtures are used as coolant in a single channel of a carbon graphite plate to mimic the mini channels in a PEMFC cooling plate. A steady, incompressible flow with constant heat flux is assumed in the 1 mm x 5 mm x 100 mm channel. Al2O3 nanoparticles at concentrations of 0.1, 0.3 and 0.5 vol.% were dispersed in a 60:40 (water:ethylene glycol) mixture. The effect of flow rate on fluid flow and heat transfer enhancement was examined over a Reynolds number range of 20 to 140. The results showed that the heat transfer coefficient improved by 18.11%, 9.86% and 5.37% for 0.5, 0.3 and 0.1 vol.% Al2O3 in 60:40 (water:EG), respectively, compared to the base fluid of 60:40 (water:EG). The higher Al2O3 concentrations performed better in terms of thermal enhancement, but at the expense of higher pumping power due to the increased pressure drop. A maximum additional pumping power of 0.0012 W was required for 0.5 vol.% Al2O3 in 60:40 (water:EG) at a Reynolds number of 140.
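
A minimal sketch of how such a pumping power figure can be estimated from the pressure drop and flow rate; the fluid properties, velocity and pressure drop below are illustrative assumptions, not data from the study:

```python
# Minimal sketch: pumping power of a coolant in a rectangular mini channel.
# The property values, velocity and pressure drop are illustrative assumptions,
# not data from the study.

def reynolds(rho, u, d_h, mu):
    """Reynolds number based on the hydraulic diameter."""
    return rho * u * d_h / mu

def pumping_power(delta_p, u, a_cross):
    """Pumping power (W) = pressure drop (Pa) x volumetric flow rate (m^3/s)."""
    return delta_p * u * a_cross

# Channel geometry from the abstract: 1 mm x 5 mm cross-section.
w, h = 1e-3, 5e-3                 # m
a_cross = w * h                   # m^2
d_h = 2 * w * h / (w + h)         # hydraulic diameter of a rectangular duct, m

rho, mu = 1050.0, 2.5e-3          # assumed density (kg/m^3) and viscosity (Pa.s)
u = 0.05                          # assumed mean velocity, m/s
delta_p = 800.0                   # assumed pressure drop over the channel, Pa

print("Re =", reynolds(rho, u, d_h, mu))
print("Pumping power (W) =", pumping_power(delta_p, u, a_cross))
```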

Keywords: Heat transfer, mini channel, nanofluid, PEMFC.

194 Enlightening Malaysia's Energy Policies and Strategies for Modernization and Sustainable Development

Authors: Hussain Ali Bekhet, Nor Salwati Othman

Abstract:

Malaysia has achieved remarkable economic growth since 1957, moving toward modernization from a predominantly agricultural base to manufacturing and, now, modern services. The development policies (i.e., the New Economic Policy [1970–1990], the National Development Policy [1990–2000], and Vision 2020) have been recognized as the most important drivers of this transformation. The transformation of the economic structure has moved along with rapid gross domestic product (GDP) growth, urbanization growth, and greater demand for energy from mainly fossil fuel resources, which in turn increases CO2 emissions. Malaysia faces a great challenge in bringing down CO2 emissions without compromising economic development. Solid policies and strategies to reduce dependence on fossil fuel resources and to reduce CO2 emissions are needed in order to achieve sustainable development. This study provides an overview of the Malaysian economic, energy, and environmental situation, and explores the existing policies and strategies related to energy and the environment. Its significance lies in giving a clear picture of the types of policies and strategies Malaysia currently has in hand. In the future, this examination should be extended by drawing a comparison with other developed countries and highlighting several options for sustainable development.

Keywords: Energy policies, energy efficiency, renewable energy, green building, Malaysia, sustainable development.

193 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. A major option for low-carbon generation, local energy systems (LES) featuring Combined Heat and Power with solar PV (CHPV), has significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices while complementing the UK's emissions standards and targets. Recent advances in dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector (simultaneous control of both heat and electricity) approach to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers and thermal energy storage, and the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate through simulation that integrated control of thermal storage can play a pivotal role in optimizing system performance well beyond present expectations. Environmental impact analysis and reporting of all energy systems, including CHPV LES, presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance with and without an active thermal store in a notional group of non-domestic buildings. Results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.

Keywords: CHPV, thermal storage, control, dynamic simulation.

192 Numerical Simulation of Tidal Currents in Persian Gulf

Authors: Ameleh Aghajanloo, Moharam Dolatshahi Pirouz, Masoud Montazeri Namin

Abstract:

In this paper, a two-dimensional (2D) numerical model for the simulation of tidal currents in the Persian Gulf is presented. The model is based on the depth-averaged shallow water equations, which assume a hydrostatic pressure distribution. The continuity equation and two momentum equations, including the effects of bed friction, the Coriolis effect and wind stress, have been solved. To integrate the 2D equations, the Alternating Direction Implicit (ADI) technique has been used. The equations were discretized with the finite volume method on a rectangular mesh. To validate the model, a dam-break case with an analytical solution was selected and the comparison carried out. The capability of the model to simulate tidal currents in a real field is then demonstrated by modeling the current behavior in the Persian Gulf. The tidal currents in the study area are driven by the tidal fluctuations in the Hormuz Strait; therefore, the water surface oscillation data at Hengam Island in the Hormuz Strait are used as the model input. The model is checked against measured water surface elevations at Assaluye port. The acceptable agreement between the computed and measured values demonstrates the ability of the model to simulate marine hydrodynamics.
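
For reference, a common conservative form of the depth-averaged shallow water equations with bed friction, Coriolis and wind stress terms (the exact formulation used in the paper may differ in detail) is:

\[
\frac{\partial \eta}{\partial t}+\frac{\partial (hu)}{\partial x}+\frac{\partial (hv)}{\partial y}=0,
\]
\[
\frac{\partial (hu)}{\partial t}+\frac{\partial (hu^{2})}{\partial x}+\frac{\partial (huv)}{\partial y}
=-gh\frac{\partial \eta}{\partial x}+fhv+\frac{\tau_{wx}-\tau_{bx}}{\rho},
\]
\[
\frac{\partial (hv)}{\partial t}+\frac{\partial (huv)}{\partial x}+\frac{\partial (hv^{2})}{\partial y}
=-gh\frac{\partial \eta}{\partial y}-fhu+\frac{\tau_{wy}-\tau_{by}}{\rho},
\]

where \( \eta \) is the free-surface elevation, \( h \) the total water depth, \( (u,v) \) the depth-averaged velocities, \( f \) the Coriolis parameter, and \( \tau_{w}, \tau_{b} \) the wind and bed shear stresses.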

Keywords: Persian Gulf, tidal currents, shallow water equations, finite volumes.

191 Delineation of Oil-Polluted Sites in Ibeno LGA, Nigeria, Using Geophysical Techniques

Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John

Abstract:

Ibeno, Nigeria hosts the operational base of Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil and currently the largest oil and condensate producer in Nigeria. Besides MPNU, other oil companies operate onshore, on the continental shelf and in the deep offshore of the Atlantic Ocean in Ibeno, Nigeria. This study was designed to delineate oil-polluted sites in Ibeno, Nigeria using the geophysical methods of electrical resistivity (ER) and ground penetrating radar (GPR). The results revealed that the environment has been contaminated with hydrocarbons by past crude oil spills, as observed from the high resistivity values and from the GPR profiles, which clearly show the distribution, thickness and lateral extent of hydrocarbon contamination in the radargram reflector tones. Contamination was of varying degree, ranging from slight to high, indicating substantial attenuation of the crude oil contamination over time. Moreover, the relatively lower resistivities of locations outside the impacted areas compared with those within the impacted areas, together with the 3-D Cartesian images of the oil contaminant plume, depicted in red, light brown and magenta for high, low and very low oil-impacted areas respectively, confirmed significant recent pollution of the study area with crude oil.

Keywords: Electrical resistivity, geophysical investigations, ground penetrating radar, oil-polluted sites.

190 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, prior to estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
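
A minimal sketch of the GMM branch of such a system, assuming MFCC feature matrices are already available per speaker; scikit-learn's k-means initialization stands in here for the LBG codebook initialization described above:

```python
# Minimal sketch of GMM-based speaker modelling and closed-set identification.
# Assumes MFCC matrices (frames x coefficients) are already extracted per speaker;
# k-means initialization is used as a stand-in for LBG codebook initialization.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(mfcc_by_speaker, n_components=16):
    """Fit one GMM (trained with EM) per enrolled speaker."""
    models = {}
    for speaker, feats in mfcc_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              init_params="kmeans",   # stand-in for LBG
                              max_iter=200, random_state=0)
        models[speaker] = gmm.fit(feats)
    return models

def identify(models, mfcc_test):
    """Closed-set decision: the speaker whose model gives the highest average log-likelihood."""
    scores = {spk: gmm.score(mfcc_test) for spk, gmm in models.items()}
    return max(scores, key=scores.get), scores
```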

Keywords: Feature Extraction, Speaker Modeling, Feature Matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian mixture model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), Background Noise Statistical Modeling, Closed-Set Text-Independent Speaker Identification System (CISI).

189 Seamless Handover in Urban 5G-UAV Systems Using Entropy Weighted Method

Authors: Anirudh Sunil Warrier, Saba Al-Rubaye, Dimitrios Panagiotakopoulos, Gokhan Inalhan, Antonios Tsourdos

Abstract:

The demand for increased data transfer rates and network traffic capacity has given rise to the concept of heterogeneous networks. Heterogeneous networks are wireless networks consisting of devices that use different underlying radio access technologies (RAT). For Unmanned Aerial Vehicles (UAVs), this enhanced data rate and network capacity are even more critical, especially in applications such as medicine, delivery missions and the military. In an urban heterogeneous network environment, UAVs must be able to switch seamlessly from one base station (BS) to another to maintain a reliable link; seamless handover in such urban environments has therefore become a major challenge. In this paper, a scheme to achieve seamless handover is developed: an algorithm based on the Received Signal Strength (RSS) criterion is used for network selection, and the Entropy Weighted Method (EWM) is implemented for decision making. Seamless handover using EWM decision-making is demonstrated successfully for a UAV moving across fifth generation (5G) and long-term evolution (LTE) networks via a simulation-level analysis. Thus, a solution for UAV-5G communication, specifically the mobility challenge in heterogeneous networks, is provided, and this work could act as a step forward in making UAV-5G architecture integration a possibility.
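
A minimal sketch of the Entropy Weighted Method used for ranking candidate base stations; the criteria and the example decision matrix are illustrative assumptions, not values from the study:

```python
# Minimal sketch of the Entropy Weighted Method (EWM) for network selection.
# Rows = candidate base stations, columns = decision criteria (e.g. RSS, load, delay).
# The example matrix is illustrative, not data from the study.
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of the decision matrix."""
    P = X / X.sum(axis=0)                      # normalise each criterion column
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)    # entropy of each criterion
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()

X = np.array([[0.9, 0.4, 0.7],                 # BS 1
              [0.6, 0.8, 0.5],                 # BS 2
              [0.7, 0.6, 0.9]])                # BS 3
w = entropy_weights(X)
scores = (X / X.max(axis=0)) @ w               # simple weighted score per base station
print("weights:", w, "best BS:", int(scores.argmax()) + 1)
```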

Keywords: Air to ground, A2G, fifth generation, 5G, handover, mobility, unmanned aerial vehicle, UAV, urban environments.

188 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects are the gauge used to classify the quality grade of the stone products. The tuning of parameters possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.
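
A minimal sketch of the wavelet step described above, run on the image and on its negative to capture dark and clear defects; the wavelet family, decomposition level and threshold are illustrative choices, not those of the paper:

```python
# Minimal sketch: wavelet-based defect map for a greyscale limestone plate image.
# Wavelet family, decomposition level and threshold are illustrative choices.
import numpy as np
import pywt
from scipy.ndimage import zoom

def defect_map(image, wavelet="db2", level=2, k=3.0):
    """Flag pixels whose accumulated detail energy exceeds k standard deviations."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    detail = np.zeros_like(image, dtype=float)
    for (cH, cV, cD) in coeffs[1:]:
        energy = cH**2 + cV**2 + cD**2
        # resample the sub-band energy back to the image size and accumulate
        factors = (image.shape[0] / energy.shape[0], image.shape[1] / energy.shape[1])
        detail += zoom(energy, factors, order=1)
    thr = detail.mean() + k * detail.std()
    return detail > thr

# img = load_plate_image()                  # hypothetical loader
# dark  = defect_map(img)                   # dark defects on the original image
# clear = defect_map(img.max() - img)       # clear defects on the negative image
```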

Keywords: Automatic detection, wavelets, defects, fracture lines.

187 Latent Semantic Inference for Agriculture FAQ Retrieval

Authors: Dawei Wang, Rujing Wang, Ying Li, Baozi Wei

Abstract:

An FAQ system helps users find answers to the problems that puzzle them, but research on Chinese FAQ systems is still at the theoretical stage. This paper presents an approach to semantic inference for FAQ mining. To improve efficiency, a small pool of candidate question-answer pairs is retrieved from the system for the follow-up work, according to the agriculture-domain concepts extracted from the user input. Input queries or questions are converted into four parts: the question word segment (QWS), the verb segment (VS), the agricultural concept segment (CS), and the auxiliary segment (AS). A semantic matching method is presented to estimate the similarity between the semantic segments of the query and those of the questions in the candidate pool. A thesaurus constructed from HowNet, a Chinese knowledge base, is adopted for word similarity measurement in the matcher. The questions are classified into eleven intention categories using predefined question-stemming keywords. For FAQ mining, given a query, the question part and the answer part of each FAQ question-answer pair are matched with the input query. Finally, the probabilities estimated from these two parts are integrated and used to choose the most likely answer for the input query. These approaches were tested on an agriculture FAQ system. Experimental results indicate that the proposed approach outperformed the FAQ-Finder system in agriculture FAQ retrieval.
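
A minimal sketch of the segment-wise matching idea, assuming a HowNet-style word similarity function is available; the segment weights and the trivial word_sim stand-in below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: combine per-segment similarities between a query and a candidate FAQ question.
# word_sim() would be backed by a HowNet-style thesaurus; here it is a trivial stand-in.
def word_sim(w1, w2):
    return 1.0 if w1 == w2 else 0.0

def segment_sim(seg_a, seg_b):
    """Average best-match word similarity between two word lists."""
    if not seg_a or not seg_b:
        return 0.0
    return sum(max(word_sim(a, b) for b in seg_b) for a in seg_a) / len(seg_a)

# Illustrative weights for the QWS, VS, CS and AS segments (assumed, not from the paper).
WEIGHTS = {"QWS": 0.2, "VS": 0.2, "CS": 0.5, "AS": 0.1}

def question_sim(query_segs, faq_segs):
    """Weighted combination of the four segment similarities."""
    return sum(w * segment_sim(query_segs.get(k, []), faq_segs.get(k, []))
               for k, w in WEIGHTS.items())
```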

Keywords: FAQ, Semantic Inference, Ontology.

186 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest for business intelligence, since they are the base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP and AUP. It also draws on object-relational models, spatial data models, and the baseline of data modeling under UML and Big Data; in this way it seeks to deliver an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization and data mining, particularly for the generation of patterns and models derived from the structured fact objects.

Keywords: Data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse.

185 Sewer Culvert Installation Method to Accommodate Underground Construction in an Urban Area with Narrow Streets (The Development of Shield Switching Type Micro-Tunneling Method and the Introduction of Construction Examples)

Authors: Osamu Igawa, Hiroshi Kouchiwa, Yuji Ito

Abstract:

In recent years, a reconstruction project for sewer pipelines has been progressing in Japan with the aim of renewing old sewer culverts. However, it is difficult to secure a sufficient base area for shafts in an urban area because many streets are narrow with a complex layout. As a result, construction in such urban areas is generally very demanding. In urban areas, there is a strong requirement for a safe, reliable and economical construction method that does not disturb the public's daily life and urban activities. With this in mind, we developed a new construction method called the "shield switching type micro-tunneling method," which integrates the micro-tunneling method and the shield method. In this method, the pipeline is constructed first for sections that are gently curved or straight using the economical micro-tunneling method, and the method is then switched to the shield method for sections with a sharp curve or a series of curves, without establishing an intermediate shaft. This paper provides the information, features and construction examples of this newly developed method.

Keywords: Micro-tunneling method, Secondary lining applied RC segment, Sharp curve, Shield method, Switching type.

184 Identification of Training Topics for the Improvement of the Relevant Cognitive Skills of Technical Operators in the Railway Domain

Authors: Giulio Nisoli, Jonas Brüngger, Karin Hostettler, Nicole Stoller, Katrin Fischer

Abstract:

Technical operators in the railway domain are experts responsible for the supervisory control of the railway power grid as well as of the railway tunnels. The technical systems used to master these demanding tasks are constantly increasing in their degree of automation. It therefore becomes difficult for technical operators to maintain control over the technical systems and the processes of their job. In particular, the operators must have the necessary experience and knowledge to deal with a malfunction situation or an unexpected event. For this reason, it is of growing importance that the skills relevant for the execution of the job are maintained and further developed beyond the basic training the operators receive, in which they are educated in technical knowledge and in working with guidelines. Training methods aimed at improving the cognitive skills needed by technical operators are still missing and must be developed. The goals of the present study were to identify the relevant cognitive skills of technical operators in the railway domain and to define which topics should be addressed by the training of these skills. Observational interviews were conducted in order to identify the main tasks and the organization of the work of technical operators, as well as the technical systems used for the execution of their job. Based on this analysis, the most demanding tasks of technical operators could be identified and described. The cognitive skills involved in the execution of these tasks are those which need to be trained. In order to identify and analyze these cognitive skills, a cognitive task analysis (CTA) was developed. CTA specifically aims at identifying the cognitive skills that employees apply when performing their tasks. The identified cognitive skills of technical operators were summarized and grouped into training topics. For every training topic, specific goals were defined. The goals cover the three main categories to be trained in every training topic: knowledge, skills and attitude. Based on the results of this study, it is possible to develop specific training methods to train the relevant cognitive skills of technical operators.

Keywords: Cognitive skills, cognitive task analysis, technical operators in the railway domain, training topics.

183 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions

Authors: Dhananjay C. Joshi, Jung-Hsin Lin

Abstract:

Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes is still an open problem. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, the nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.

Keywords: Protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations.

182 Ethereum Based Smart Contracts for Trade and Finance

Authors: Rishabh Garg

Abstract:

Traditionally, business parties build trust with a centralized operating mechanism, such as payment by letter of credit. However, the increase in cyber-attacks and malicious hacking has jeopardized business operations and finance practices. Emerging markets, due to their high banking risks and the large presence of digital financing, are looking for technology that enables transparency and traceability of any transaction in trade, finance or supply chain management. Blockchain systems, in the absence of any central authority, enable transactions across the globe with the help of decentralized applications (DApps). DApps consist of a front-end, a blockchain back-end, and middleware, that is, the code that connects the two. The front-end can be a sophisticated web or mobile app, which is used to invoke the functions/methods of the smart contract. Web apps can employ technologies such as HTML, CSS, React and Express. Against this backdrop, fintech and blockchain products are appearing in brokerages, digital wallets, exchanges, post-trade clearance, settlement, middleware, infrastructure and base protocols. The present paper provides a technology-driven solution, financial inclusion and an innovative working paradigm for business and finance.

Keywords: Authentication, blockchain, channel, cryptography, DApps, data portability, Decentralized Public Key Infrastructure, Ethereum, hash function, Hashgraph, Privilege creep, Proof of Work algorithm, revocation, storage variables, Zero Knowledge Proof.

181 Influence of Displacement Amplitude and Vertical Load on the Horizontal Dynamic and Static Behavior of Helical Wire Rope Isolators

Authors: Nicolò Vaiana, Mariacristina Spizzuoco, Giorgio Serino

Abstract:

In this paper, the results of experimental tests performed on a Helical Wire Rope Isolator (HWRI) are presented in order to describe the dynamic and static behavior of the selected metal device in three different displacement ranges, namely small, relatively large, and large displacements, without and under the effect of a vertical load. A testing machine allowing horizontal displacement or load histories to be applied to the tested bearing under a constant vertical load has been adopted to perform the dynamic and static tests. According to the experimental results, the dynamic behavior of the tested device depends on the applied displacement amplitude. Indeed, the HWRI displays a softening and a hardening stiffness at small and relatively large displacements, respectively, and a stronger nonlinear stiffening behavior at large displacements. Furthermore, the experimental tests reveal that the application of a vertical load results in a more flexible device with higher damping properties, and that the applied vertical load affects the dynamic response of the metal device much less at large displacements. Finally, a decrease in the static to dynamic effective stiffness ratio with increasing displacement amplitude has been observed.

Keywords: Base isolation, earthquake engineering, experimental hysteresis loops, wire rope isolators.

180 Computer Software Applicable in Rehabilitation, Cardiology and Molecular Biology

Authors: P. Kowalska, P. Gabka, K. Kamieniarz, M. Kamieniarz, W. Stryla, P. Guzik, T. Krauze

Abstract:

We have developed a computer program consisting of 6 subtests assessing children's hand dexterity, applicable in rehabilitation medicine. We carried out a normative study on a representative sample of 285 children aged from 7 to 15 (mean age 11.3) and proposed clinical standards for three age groups (7-9, 9-11, 12-15 years). We have shown the statistical significance of the differences among the corresponding mean values of the task completion time. We also found a strong correlation between the task completion time and the age of the subjects, and performed test-retest reliability checks in a sample of 84 children, giving high values of the Pearson coefficients for the dominant and non-dominant hand, in the ranges 0.74-0.97 and 0.62-0.93, respectively. A new MATLAB-based programming tool aimed at the analysis of cardiologic RR intervals and blood pressure descriptors has also been developed. For each set of data, ten different parameters are extracted: 2 in the time domain, 4 in the frequency domain and 4 in the Poincaré plot analysis. In addition, twelve different parameters of baroreflex sensitivity are calculated. All these data sets can be visualized in the time domain together with their power spectra and Poincaré plots. If available, the respiratory oscillation curves can also be plotted for comparison. Another application processes biological data obtained from BLAST analysis.
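
As an illustration of the class of descriptors the cardiology module computes, a minimal sketch of common time-domain and Poincaré indices of an RR-interval series follows (this does not reproduce the tool's exact parameter set):

```python
# Minimal sketch: common time-domain and Poincare descriptors of an RR-interval series (ms).
# This illustrates the class of parameters mentioned above, not the tool's exact output.
import numpy as np

def hrv_descriptors(rr):
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(diff**2))           # short-term variability
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)   # Poincare plot width
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - np.var(diff, ddof=1) / 2.0)  # Poincare plot length
    return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}

print(hrv_descriptors([812, 790, 805, 823, 801, 795, 818, 808]))
```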

Keywords: Biomedical data base processing, Computer software, Hand dexterity, Heart rate and blood pressure variability.

179 The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions

Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani

Abstract:

To understand the relationships among characteristics and to determine the factors that explain the characteristics under consideration in rapeseed varieties, 10 rapeseed genotypes were grown in a completely randomized design with three replications under drought stress in 2009-2010 in the research field of the Agriculture College, Islamic Azad University, Karaj Branch. In this research, 11 characteristics related to the growth, production and yield stages were considered. The analysis of variance showed that there are significant differences among the characteristics of the rapeseed varieties. The simple correlation coefficients calculated under both conditions, normal and drought stress, indicate that seed yield per plant and pod number have a positive and significant correlation at the 1% probability level with seed yield, and that selection based on these characteristics is effective for improving yield. Under normal and drought stress conditions, factor analysis showed that the number of factors with eigenvalues greater than one was five under normal conditions, together explaining 82.72% of the total variance, whereas under drought stress four factors were identified, explaining 76.78% of the total variance. Considering the overall results of this research, the characteristics found to be effective in the factor analysis, and the selection of different components of these characteristics, can be used in breeding work to select suitable, drought-tolerant genotypes.

Keywords: Correlation, drought stress, factor analysis, rapeseed.

178 Investigation of Seismic T-Resisting Frame with Shear and Flexural Yield of Horizontal Plate Girders

Authors: Helia Barzegar Sedigh, Farzaneh Hamedi, Payam Ashtari

Abstract:

Common structural systems have some limitations, such as providing appropriate lateral stiffness, adequate ductility, and architectural openings at the same time. Consequently, the concept of the T-Resisting Frame (TRF) has been introduced to overcome all these deficiencies. The configuration of the TRF in this study is a Vertical Plate Girder (VPG) placed within the span, with two Horizontal Plate Girders (HPGs) connecting the VPG to the side columns at each story level through rigid connections. System performance is improved by utilizing rigid connections at the base joints of the side columns. Shear yielding of the HPGs dissipates energy in the TRF; therefore, large plastic deformations in the webs of the HPGs and the VPG affect the ductility of the system. Moreover, in order to prevent shear buckling in the webs of the TRF members, appropriate criteria for the placement of web stiffeners are applied. In this paper, an experimental study is conducted by applying cyclic loading, and finite element models and numerical studies such as the pushover method are used to assess the shear and flexural yielding of the HPGs. As a result, the seismic parameters indicate adequate lateral stiffness and a high ductility factor of 6.73, with shear yielding of the HPGs achieved as proof of the TRF's better performance.

Keywords: Experimental study, finite element model, flexural and shear yielding, T-resisting frame.

177 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting: The WICKED Method

Authors: S. Impey, D. Berry, S. Furtado, M. Galvin, L. Grogan, O. Hardiman, L. Hederman, M. Heverin, V. Wade, L. Douris, D. O'Sullivan, G. Stephens

Abstract:

Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is an anagram of the words: eliciting and confirming data, information, knowledge, wisdom. But it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations surrounded the time required and how the data set produced only represents DIKW known during the research period. Future work is underway to address these limitations.

Keywords: Healthcare, knowledge acquisition, maximal data sets, action design science.

176 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components

Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura

Abstract:

This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied at two different levels, namely the single-participant level and the group level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction was 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method, in combination with other methods, to build a real-time system to control prostheses.
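
A minimal sketch of the two-stage pipeline, with scikit-learn's FastICA and an L1-penalised logistic regression standing in for the ICA decomposition and Sparse Logistic Regression used in the study; the brain-IC selection step is reduced to a placeholder index list:

```python
# Minimal sketch of the two-stage pipeline: an ICA unmixing matrix learned on a training
# EEG set, then reused on a second set, with an L1-penalised (sparse) logistic regression.
# FastICA and scikit-learn's LogisticRegression stand in for the tools used in the study;
# the brain-IC selection step is reduced to a placeholder index list.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

def fit_stage(X_train, y_train, n_components=32, brain_ics=None):
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X_train)              # rows: time samples, cols: ICs
    brain_ics = brain_ics if brain_ics is not None else list(range(n_components))
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(S[:, brain_ics], y_train)           # y: one of eight finger-movement directions
    return ica, brain_ics, clf

def apply_stage(ica, brain_ics, clf, X_test):
    S = ica.transform(X_test)                   # reuse the previously learned unmixing matrix
    return clf.predict(S[:, brain_ics])
```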

Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.

175 3D Numerical Studies on Jets Acoustic Characteristics of Chevron Nozzles for Aerospace Applications

Authors: R. Kanmaniraja, R. Freshipali, J. Abdullah, K. Niranjan, K. Balasubramani, V. R. Sanal Kumar

Abstract:

Current environmental issues have made aircraft jet noise reduction a crucial problem in aero-acoustics research. Acoustic studies reveal that the addition of chevrons to the nozzle reduces the sound pressure level reasonably, with an acceptable reduction in performance. In this paper, comprehensive numerical studies on the acoustic characteristics of different types of chevron nozzles have been carried out with non-reacting flows for the shape optimization of chevrons in supersonic nozzles for aerospace applications. The numerical studies have been carried out using a validated steady, 3D, density-based solver with the k-ε turbulence model. Chevrons with sharp, flat, round and U-type edges are selected for the jet acoustic characterization of the supersonic nozzles. We observed that, compared to the base model, the round-edged chevron nozzle could reduce the acoustic level by 4.13% with a 0.6% thrust loss. We concluded that prudent selection of the chevron shape will enable an appreciable reduction of aircraft jet noise without compromising overall performance. It is evident from the present numerical simulations that the k-ε model can predict reasonably well the acoustic level of chevron supersonic nozzles for shape optimization.

Keywords: Supersonic nozzle, Chevron, Acoustic level, Shape Optimization of Chevron Nozzles, Jet noise suppression.

174 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions

Authors: Abdelgawad, Salah El-Tahawy

Abstract:

This paper aimed to establish econometric equation models for the Nile Delta region of Egypt, which will serve as a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathway in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces representing the Nile Delta region from 1 January 2015 to December 2015. The results indicated that there was a significant association between the degree of the LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during June, July, and August and gradually decreased to the lowest rates in January, February, and December. The obtained model showed that increases in these climate factors were associated with an evident increase in LSD outbreaks in the Nile Delta of Egypt. The model validation was carried out with the root mean square error (RMSE) and mean bias (MB), which compare the number of LSD outbreaks expected with the number of observed outbreaks and estimate the confidence level of the model. The RMSE was 1.38% and the MB was 99.50%, confirming that the established model describes the current association between LSD outbreaks and changes in climate factors and can also be used as a basis for predicting LSD outbreaks under future climate change.
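
A minimal sketch of fitting such an equation to monthly climate factors and checking it with RMSE and mean bias; the example arrays and the particular definition of MB used here are illustrative assumptions, not the study's data:

```python
# Minimal sketch: linear model of monthly LSD outbreak counts vs. climate factors,
# validated with root mean square error (RMSE) and mean bias (MB).
# The example arrays are illustrative values, not the study's data.
import numpy as np

X = np.array([[22.0, 3.1, 61], [25.5, 3.4, 58], [29.0, 4.0, 55],   # temperature, wind, humidity
              [33.0, 4.5, 50], [35.0, 4.8, 48], [31.0, 4.2, 52]])
y = np.array([3, 5, 9, 14, 16, 11], dtype=float)                    # observed outbreaks per month

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares fit
y_hat = A @ beta

rmse = np.sqrt(np.mean((y_hat - y) ** 2))
mb = 100.0 * y_hat.mean() / y.mean()             # mean bias as a percentage of the observed mean
print("coefficients:", beta, "RMSE:", rmse, "MB %:", mb)
```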

Keywords: LSD, climate factors, econometric models, Nile Delta.

173 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of its parameters indicate that this distribution may be an appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. Its appeal lies in the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
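
For reference, the Wakeby distribution is defined through its quantile function, in one common five-parameter form:

\[
x(F)=\xi+\frac{\alpha}{\beta}\left[1-(1-F)^{\beta}\right]-\frac{\gamma}{\delta}\left[1-(1-F)^{-\delta}\right],
\qquad 0\le F\le 1,
\]

where \( \xi \) is a location parameter, \( \alpha \) and \( \gamma \) are scale parameters, and \( \beta \) and \( \delta \) are shape parameters; \( \delta>0 \) produces the heavy upper tail mentioned above.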

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

172 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form a part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overheads, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: it is a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials. This allows NTRU to reach high speeds with the use of minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on factorization or the discrete logarithm problem. This means that, even given sufficient computational resources and time, an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement techniques and that can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are proven approaches for developing high-end computing systems. By utilizing them, one can improve the performance of NTRU through parallel execution. In this paper, we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.

Keywords: Alchemi, GridNtru, Ntru, PKCS.

171 MHD Boundary Layer Flow of a Nanofluid Past a Wedge Shaped Wick in Heat Pipe

Authors: Ziya Uddin

Abstract:

This paper deals with the theoretical and numerical investigation of the magnetohydrodynamic boundary layer flow of a nanofluid past a wedge-shaped wick in a heat pipe used for the cooling of electronic components and different types of machines. To incorporate the effects of nanoparticle diameter, the concentration of nanoparticles in the pure fluid, the nanothermal layer formed around the nanoparticles, and the Brownian motion of the nanoparticles, appropriate models are used for the effective thermal and physical properties of the nanofluid. To model the rotation of nanoparticles inside the base fluid, microfluidics theory is used. In this investigation, ethylene glycol (EG) based nanofluids are taken into account. The non-linear equations governing the flow and heat transfer are solved by using a very effective particle swarm optimization technique along with the Runge-Kutta method. The values of the heat transfer coefficient are found for the different parameters involved in the formulation, viz. nanoparticle concentration, nanoparticle size, magnetic field and wedge angle. It is found that the wedge angle, the presence of a magnetic field, the nanoparticle size and the nanoparticle concentration have prominent effects on the fluid flow and heat transfer characteristics for the considered configuration.
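
A minimal sketch of the solution strategy, combining a Runge-Kutta integrator with particle swarm optimization to find the unknown wall gradient; the Blasius-type equation below is an illustrative stand-in for the coupled MHD nanofluid equations of the study:

```python
# Minimal sketch: Runge-Kutta integration of a boundary-layer ODE combined with a simple
# particle swarm optimization (PSO) to find the unknown wall gradient f''(0).
# The Blasius-type equation is an illustrative stand-in for the study's coupled equations.
import numpy as np
from scipy.integrate import solve_ivp

ETA_MAX = 8.0

def rhs(eta, y):                       # y = [f, f', f'']
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    """Squared mismatch of the far-field condition f'(ETA_MAX) = 1 for a guess s = f''(0)."""
    sol = solve_ivp(rhs, [0.0, ETA_MAX], [0.0, 0.0, s], method="RK45", rtol=1e-8)
    return (sol.y[1, -1] - 1.0) ** 2

def pso(obj, lo, hi, n_particles=12, iters=40):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest, pval = x.copy(), np.array([obj(xi) for xi in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

print("f''(0) ~", pso(residual, 0.1, 1.0))   # the classical Blasius value is about 0.332
```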

Keywords: Heat transfer, Heat pipe, numerical modeling, nanofluid applications, particle swarm optimization, wedge shaped wick.

170 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Base Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM. This method can be used for the detection of broken rotor bars. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform and Empirical Mode Decomposition (EMD) are used for analyzing the MCSA output data. In this study, these signal processing methods are used for broken bar detection in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform method, an index for fault detection, CF, is introduced, defined from the variation of the maximum relative to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMF) is calculated, and it is found that when motor bars become broken the energy of the IMFs increases.
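
A minimal sketch of the two indicators described above; the wavelet choice and the use of the PyEMD package are assumptions of this sketch, not necessarily those of the study:

```python
# Minimal sketch of the two indicators built from the stator current signal:
# a wavelet-based CF index (maximum relative to the mean of the coefficients) and the
# energy of the EMD intrinsic mode functions. The wavelet family and the PyEMD package
# are assumptions of this sketch, not necessarily those of the study.
import numpy as np
import pywt

def cf_index(current, wavelet="db4", level=5):
    """CF: maximum of the detail coefficients relative to their mean."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    detail = np.abs(np.concatenate(coeffs[1:]))
    return detail.max() / detail.mean()          # expected larger under a broken-bar condition

def imf_energies(current):
    """Energy of each intrinsic mode function obtained by EMD."""
    from PyEMD import EMD                        # assumed dependency (EMD-signal package)
    imfs = EMD().emd(np.asarray(current, dtype=float))
    return (imfs ** 2).sum(axis=1)
```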

Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.

169 A Mathematical Investigation of the Turkevich Organizer Theory in the Citrate Method for the Synthesis of Gold Nanoparticles

Authors: Emmanuel Agunloye, Asterios Gavriilidis, Luca Mazzei

Abstract:

Gold nanoparticles are commonly synthesized by reducing chloroauric acid with sodium citrate. This method, referred to as the citrate method, can produce spherical gold nanoparticles (NPs) in the size range 10-150 nm. Gold NPs of this size are useful in many applications. However, the NPs are usually polydisperse and irreproducible. A better understanding of the synthesis mechanisms is thus required. This work thoroughly investigated the only model that describes the synthesis. This model combines mass and population balance equations, describing the NPs synthesis through a sequence of chemical reactions. Chloroauric acid reacts with sodium citrate to form aurous chloride and dicarboxy acetone. The latter organizes aurous chloride in a nucleation step and concurrently degrades into acetone. The unconsumed precursor then grows the formed nuclei. However, depending on the pH, both the precursor and the reducing agent react differently thus affecting the synthesis. In this work, we investigated the model for different conditions of pH, temperature and initial reactant concentrations. To solve the model, we used Parsival, a commercial numerical code, whilst to test it, we considered various conditions studied experimentally by different researchers, for which results are available in the literature. The model poorly predicted the experimental data. We believe that this is because the model does not account for the acid-base properties of both chloroauric acid and sodium citrate.

Keywords: Gold nanoparticles, Citrate method, Turkevich organizer theory, population balance modelling.
