Search results for: sparse graph
40 Artificial Neural Network based Modeling of Evaporation Losses in Reservoirs
Authors: Surinder Deswal, Mahesh Pal
Abstract:
An Artificial Neural Network based modeling technique has been used to study the influence of different combinations of meteorological parameters on evaporation from a reservoir. The data set used is taken from an earlier reported study. Several input combinations were tried to determine the importance of different input parameters in predicting evaporation. The prediction accuracy of the Artificial Neural Network has also been compared with the accuracy of linear regression for predicting evaporation. The comparison demonstrated the superior performance of the Artificial Neural Network over the linear regression approach. The findings of the study also revealed that all input parameters considered together, rather than individual parameters taken one at a time as reported in earlier studies, are required to predict evaporation. The highest correlation coefficient (0.960) along with the lowest root mean square error (0.865) was obtained with the input combination of air temperature, wind speed, sunshine hours and mean relative humidity. A graph between the actual and predicted values of evaporation suggests that most of the values lie within a scatter of ±15% with all input parameters. The findings of this study suggest the usefulness of the ANN technique in predicting evaporation losses from reservoirs.
Keywords: Artificial neural network, evaporation losses, multiple linear regression, modeling.
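As a rough illustration of the comparison described in this abstract, the sketch below fits a small neural network and a linear regression to synthetic data with the same four inputs (air temperature, wind speed, sunshine hours, mean relative humidity); the data, network size, and coefficients are placeholders, not the study's:

```python
# Illustrative sketch (not the authors' code): comparing an ANN and linear
# regression for evaporation prediction from four meteorological inputs.
# The synthetic data below is a stand-in for the reservoir data set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# columns: air temperature, wind speed, sunshine hours, mean relative humidity
X = rng.uniform([10, 0, 0, 20], [40, 10, 12, 100], size=(200, 4))
# invented coefficients for a plausible evaporation response plus noise
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] - 0.05 * X[:, 3] + rng.normal(0, 1, 200)

lin = LinearRegression().fit(X, y)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

for name, model in [("linear regression", lin), ("ANN", ann)]:
    pred = model.predict(X)
    rmse = mean_squared_error(y, pred) ** 0.5
    r = np.corrcoef(y, pred)[0, 1]
    print(f"{name}: RMSE={rmse:.3f}, r={r:.3f}")
```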
39 Ontology-based Domain Modelling for Consistent Content Change Management
Authors: Muhammad Javed, Yalemisew M. Abgaz, Claus Pahl
Abstract:
Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and in a continuous process of change, content change management is important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts and the content are captured through a trace model. The annotation traces are formalised and a graph-based system is selected for the representation of the annotation traces.
Keywords: Consistent Content Management, Impact Categorisation, Trace Model, Ontology Evolution.
38 Reliability Analysis of P-I Diagram Formula for RC Column Subjected to Blast Load
Authors: Masoud Abedini, Azrul A. Mutalib, Shahrizan Baharom, Hong Hao
Abstract:
This study was conducted to investigate the reliability of the pressure-impulse (P-I) equation for reinforced concrete columns proposed in previous studies. The equation involves three different levels of damage criteria, namely D = 0.2, D = 0.5, and D = 0.8. Damage between 0 and 0.2 is classified as minor, 0.2-0.5 as moderate, and 0.5-0.8 as high, while 0.8-1 is considered failure of the structure. In this study, two types of reliability analyses were conducted. The first uses the pressure-impulse equation with different parameters: the concrete strength, the column depth, width, and height, the longitudinal reinforcement ratio, and the transverse reinforcement ratio. In this first analysis, a new equation is derived to improve the previous equations. The second reliability analysis involves three types of columns used to derive the P-I curve diagram with the derived equation, comparing it against equations derived by other researchers and against the minimum standoff versus weapon yield graph of the Federal Emergency Management Agency (FEMA). The results showed that the derived equation agrees more closely with the FEMA standards than those of previous researchers.
Keywords: Blast load, RC column, P-I curve, Analytical formulae, Standard FEMA.
37 Economy-Based Computing with WebCom
Authors: Adarsh Patil, David A. Power, John P. Morrison
Abstract:
Grid environments consist of the volatile integration of discrete heterogeneous resources. The notion of the Grid is to unite different users and organisations and pool their resources into one large computing platform where they can harness, inter-operate, collaborate and interact. If the Grid community is to achieve this objective, then participants (users and organisations) need to be willing to donate or share their resources and permit other participants to use them. Resources do not have to be shared at all times, since that may result in users not having access to their own resources. The idea of reward-based computing was developed to address the sharing problem in a pragmatic manner. Participants are offered a reward to donate their resources to the Grid. A reward may include monetary recompense or a pro rata share of available resources when constrained. This latter point may imply a quality of service, which in turn may require some globally agreed reservation mechanism. This paper presents a platform for economy-based computing using the WebCom Grid middleware. Using this middleware, participants can configure their resources at times and priority levels to suit their local usage policy. The WebCom system accounts for processing done on individual participants' resources and rewards them accordingly.
Keywords: WebCom, Economy-based computing, WebCom Grid Bank Reward, Condensed Graph, Distributor, Accounting, GridPoint.
36 A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery
Authors: Yongquan Zhao, Bo Huang
Abstract:
Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously because of the hardware technology limitations of satellite sensors. On the other hand, a conflicting circumstance is that the demand for high STSR has been growing with the development of remote sensing applications. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, current fusion technologies cannot enhance all three resolutions simultaneously or provide a sufficiently high level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, which blends the high spatial resolution from the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution from the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution from the hyper-spectral image of Hyperion to produce high STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high STSR satellite imagery.
Keywords: Hybrid spatial-temporal-spectral fusion, high resolution synthetic imagery, least square regression, sparse representation, spectral transformation.
35 Semantically Enriched Web Usage Mining for Personalization
Authors: Suresh Shirgave, Prakash Kulkarni, José Borges
Abstract:
The continuous growth in the size of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and more sophisticated tools to help the Web user find the desired information. In order to make the Web more user-friendly, it is necessary to provide personalized services and recommendations to the Web user. Many Web usage mining techniques have been applied to discover interesting and frequent navigation patterns from Web server logs. The recommendation accuracy of usage-based techniques can be improved by integrating Web site content and site structure in the personalization process.
Herein, we propose a Semantically enriched Web Usage Mining method for Personalization (SWUMP), an extension to the solely usage-based technique. This approach is a combination of the fields of Web Usage Mining and the Semantic Web. In the proposed method, we envisage enriching the undirected graph derived from usage data with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUMP generates accurate recommendations and achieves 10-20% better accuracy than the solely usage-based model. SWUMP also addresses the new-item problem inherent to solely usage-based techniques.
Keywords: Prediction, Recommendation, Semantic Web Usage Mining, Web Usage Mining.
34 Implementing an Intuitive Reasoner with a Large Weather Database
Authors: Yung-Chien Sun, O. Grant Clark
Abstract:
In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation included two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning: fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets as compared with conventional methods.
Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.
33 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats the errors that arise from gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation, thus verifying its veracity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
Keywords: Autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion.
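The following minimal sketch illustrates the aiding idea in one dimension: gyro integration drifts, and periodic attitude fixes (standing in for depth-camera plane extraction) correct it through a scalar Kalman update. All noise values and rates are assumptions, not the paper's parameters.

```python
# 1-D sketch (assumed parameters, not the paper's implementation):
# fusing integrated gyroscope rate with occasional attitude measurements
# derived from depth-camera surfaces, via a scalar Kalman filter.
import numpy as np

dt, q, r = 0.01, 1e-5, 1e-3   # time step, process noise, camera noise (assumed)
theta_est, p = 0.0, 1.0       # attitude estimate (rad) and its variance

rng = np.random.default_rng(1)
theta_true = 0.0
for k in range(1000):
    rate = 0.1                              # true angular rate (rad/s)
    theta_true += rate * dt
    gyro = rate + rng.normal(0, 0.01)       # noisy gyro reading
    # predict: integrate the gyro, grow the uncertainty
    theta_est += gyro * dt
    p += q
    # update every 10 steps with a depth-camera attitude fix
    if k % 10 == 0:
        z = theta_true + rng.normal(0, np.sqrt(r))
        k_gain = p / (p + r)
        theta_est += k_gain * (z - theta_est)
        p *= (1 - k_gain)
print(f"attitude error after 10 s: {abs(theta_est - theta_true):.4f} rad")
```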
32 Investigation on Toxicity of Manufactured Nanoparticles to Bioluminescence Bacteria Vibrio fischeri
Authors: E. Binaeian, SH. Soroushnia
Abstract:
The acute toxicity of nano SiO2, ZnO, MCM-41 (mesoporous silica), Cu, multi-wall carbon nanotubes (MWCNT), single-wall carbon nanotubes (SWCNT), and coated Fe to the bacterium Vibrio fischeri was evaluated using a homemade luminometer. The values of the nominal effective concentrations (EC), causing 20% and 50% inhibition of bioluminescence, were calculated using two mathematical models at two exposure times of 5 and 30 minutes. The luminometer was designed with a photomultiplier (PMT) detector. A luminol chemiluminescence reaction was carried out to obtain the calibration graph. In the linear calibration range, the correlation coefficient and coefficient of variation (CV) were 0.988 and 3.21%, respectively, demonstrating suitable accuracy and reproducibility of the instrument. An important part of this research was optimizing the conditions for maximum bioluminescence. Under optimal conditions, liquid cultures of Vibrio fischeri were stirred at 120 rpm at a temperature of 15 °C to 18 °C and incubated for 24 to 72 hours, while the solid medium was held at 18 °C for 48 hours. After a 30-min contact time with Vibrio fischeri, the ZnO nanoparticle suspension showed the highest toxicity, while SiO2 nanoparticles showed the lowest. After a 5-min exposure time, the toxicity of ZnO was the strongest and MCM-41 was the weakest.
Keywords: Bioluminescence, effective concentration, nanomaterials, toxicity, Vibrio fischeri.
31 Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding
Authors: K. Anitha Sheela, J. Tarun Kumar
Abstract:
HSDPA is a new feature introduced in the Release-5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release-99 DSCH. Until now, the HSDPA system has used turbo coding, which is the best coding technique to approach the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity. However, LDPC coding increases the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user benefits in terms of receiver complexity and bit-error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that in LDPC coding the BER drops sharply as the number of iterations increases, with a small increase in Eb/N0, which is not possible in turbo coding. Also, the same BER was achieved using fewer iterations, and hence the latency and receiver complexity are decreased for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality, more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.
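A toy illustration of the role the sparse parity-check matrix H plays in LDPC coding: a word is a valid codeword exactly when its syndrome H·c^T mod 2 is zero. The tiny H below is for demonstration only; practical LDPC codes use much larger sparse matrices, and the belief propagation decoder is not shown.

```python
# Toy illustration (not the paper's code) of syndrome checking with a
# sparse parity-check matrix H: valid codewords satisfy H.c = 0 (mod 2).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],     # small, sparse H for illustration only;
              [0, 1, 1, 0, 1, 0],     # real LDPC codes use much larger,
              [1, 0, 1, 0, 0, 1]])    # randomly structured sparse matrices

def syndrome(word):
    return H.dot(word) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])      # satisfies all three parity checks
print("valid codeword:", not syndrome(codeword).any())   # True

received = codeword.copy()
received[2] ^= 1                      # flip one bit to simulate a channel error
print("after 1-bit error:", not syndrome(received).any())  # False
```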
30 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach
Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti
Abstract:
Over the last decades, there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration, search and rescue operations during earthquakes, etc. As there are numerous self-reconfigurable robots, choosing the optimum one is always a concern for robot users, since there is an increase in available features, facilities, complexity, etc. The objective of this research work is to present a multiple attribute decision making based methodology for the coding, evaluation, comparison, ranking and selection of modular self-reconfigurable robots using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach. In total, 86 attributes that affect the structure and performance are identified. A database of modular self-reconfigurable robots on the basis of the different pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. Using five robots (Atron, Smores, Polybot, M-Tran 3, Superbot), an example is illustrated and ranking of the robots is successfully done, which shows that Smores is the best robot for the operational need illustrated; the methodology is found to be very effective and simple to use.
Keywords: Self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability.
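The TOPSIS ranking step named in the keywords can be sketched in a few lines; the 5x4 decision matrix, weights, and benefit/cost labels below are invented for illustration (the study itself uses 86 attributes):

```python
# Hedged sketch of TOPSIS ranking; all numbers are placeholders.
import numpy as np

robots = ["Atron", "Smores", "Polybot", "M-Tran 3", "Superbot"]
X = np.array([[7, 5, 8, 4],
              [9, 8, 7, 3],
              [6, 7, 6, 5],
              [8, 6, 9, 6],
              [7, 8, 5, 4]], dtype=float)
w = np.array([0.3, 0.3, 0.2, 0.2])             # attribute weights (assumed)
benefit = np.array([True, True, True, False])  # last column treated as a cost

V = w * X / np.linalg.norm(X, axis=0)          # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)            # relative closeness, higher is better

for name, c in sorted(zip(robots, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```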
29 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
Floods have huge environmental and economic impacts. Therefore, flood prediction is given a lot of attention due to its importance. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using the ARIMA model. For this purpose, we use the Box-Jenkins approach, which contains the four-stage method of model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used in ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests, the AIC criterion and the RACF and RPACF graphs, were used for verification of the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals' independence. In forecasting AMD for 10 future years, this model showed the ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R2).
Keywords: Time series modelling, stochastic processes, ARIMA model, Karkheh River.
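A minimal sketch of the forecasting step, using Python's statsmodels in place of the SAS/SPSS tools used in the study; the AMD series is synthetic, standing in for the Karkheh River record:

```python
# Hedged sketch: fitting the (4,1,1) model selected by AIC in the study
# and forecasting 10 future years, on a placeholder AMD series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
amd = pd.Series(1000 + np.cumsum(rng.normal(0, 50, 60)))  # synthetic AMD record

model = ARIMA(amd, order=(4, 1, 1))   # ARIMA(p=4, d=1, q=1) as in the abstract
result = model.fit()
print("AIC:", result.aic)             # compare candidate orders by AIC
print(result.forecast(steps=10))      # predict AMD for 10 future years
```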
28 A Systems Approach to Gene Ranking from DNA Microarray Data of Cervical Cancer
Authors: Frank Emmert Streib, Matthias Dehmer, Jing Liu, Max Mühlhauser
Abstract:
In this paper we present a method for gene ranking from DNA microarray data. More precisely, we calculate correlation networks, which are unweighted and undirected graphs, from microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that it represents the n-nearest neighbor genes on the n-th level of the tree, measured by the Dijkstra distance, and hence gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to the progression of the tumor. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result we find that the top-ranked genes are candidates suspected to be involved in tumor growth, which indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
Keywords: Graph similarity, DNA microarray data, cancer.
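The local decomposition can be sketched as follows, assuming a toy expression matrix and correlation threshold: genes at distance n from the root gene form the n-th level of its tree.

```python
# Sketch (toy data and threshold are assumptions, not the paper's):
# building an unweighted correlation network and reading off the levels
# of a gene's local tree by shortest-path (Dijkstra) distance.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
expr = rng.normal(size=(30, 20))          # 30 genes x 20 tissue samples (toy)
corr = np.corrcoef(expr)                  # gene-gene correlation matrix

G = nx.Graph()
G.add_nodes_from(range(30))
for i in range(30):
    for j in range(i + 1, 30):
        if abs(corr[i, j]) > 0.5:         # unweighted edge above a chosen threshold
            G.add_edge(i, j)

# n-th level of the tree rooted at gene 0 = nodes at distance n
# (on an unweighted graph, Dijkstra distance equals hop count)
dist = nx.single_source_shortest_path_length(G, 0, cutoff=3)
for level in range(4):
    print(level, sorted(g for g, d in dist.items() if d == level))
```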
27 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, which results in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Consequently, semantic interpretation is a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues, improving the semantic interpretation of HSI. First, in order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained using TLPP, a weighted matrix is constructed to rank the different spectral bands based on their contribution scores. Thus, the relevant bands are adaptively selected based on the weighted matrix. The performance of the presented approach has been validated by several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we can conclude that this approach adaptively selects the relevant spectral bands, improving the semantic interpretation of HSI.
Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.
26 Incorporating Semantic Similarity Measure in Genetic Algorithm: An Approach for Searching the Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias, Hany T. Alashwal, Rohayanti Hassan, Farhan Mohamed
Abstract:
The most important property of the Gene Ontology is its terms. These controlled vocabularies are defined to provide consistent descriptions of gene products that are shareable and computationally accessible by humans, software agents, or other machine-readable meta-data. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, the process of searching the terms is still carried out using a traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends heavily on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the similarity strength between two terms. The genetic algorithm is then employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. The computational results are presented to show the effectiveness of the proposed algorithm.
Keywords: Gene Ontology, Semantic similarity measure, Genetic algorithm, Ontology search.
25 Numerical Solution of Manning's Equation in Rectangular Channels
Authors: Abdulrahman Abdulrahman
Abstract:
When the Manning equation is used, a unique value of normal depth in uniform flow exists for a given channel geometry, discharge, roughness, and slope. Depending on the value of the normal depth relative to the critical depth, the flow type (supercritical or subcritical) for a given set of channel conditions is determined, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for determining the flow depth for a given flow rate, because the cross-sectional area and the hydraulic radius form a complicated function of depth. The familiar solutions of normal depth for a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; 3) preparing tables involving non-dimensional parameters. In this paper, the author derives a semi-analytical solution to Manning's equation for determining the flow depth given the flow rate in a rectangular open channel. The solution was derived by expressing Manning's equation in non-dimensional form and then expanding this form using Maclaurin's series. In order to simplify the solution, terms containing powers up to 4 have been considered. The resulting equation is a quartic equation in standard form, whose solution was obtained by resolving it into two quadratic factors. The proposed solution for Manning's equation is valid over a large range of parameters, and its maximum error is within -1.586%.
Keywords: Channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow.
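For comparison with the semi-analytical solution, the sketch below finds the normal depth numerically by bisection, exploiting the fact that discharge increases monotonically with depth in a rectangular channel; the channel values are examples, not the paper's.

```python
# Numerical cross-check (not the paper's semi-analytical solution):
# solve Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S) for the
# normal depth y in a rectangular channel of width b by bisection.
def manning_q(y, b, n, s):
    a = b * y                 # flow area
    r = a / (b + 2 * y)       # hydraulic radius = area / wetted perimeter
    return (1.0 / n) * a * r ** (2.0 / 3.0) * s ** 0.5

def normal_depth(q, b, n, s, lo=1e-6, hi=50.0, tol=1e-8):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if manning_q(mid, b, n, s) < q:
            lo = mid          # discharge grows monotonically with depth
        else:
            hi = mid
    return 0.5 * (lo + hi)

# example: 20 m3/s in a 5 m wide channel, n = 0.015, slope = 0.001
print(f"normal depth = {normal_depth(20.0, 5.0, 0.015, 0.001):.4f} m")
```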
24 Thermal Analysis of Extrusion Process in Plastic Making
Authors: S. K. Fasogbon, T. M. Oladosu, O. S. Osasuyi
Abstract:
Plastic extrusion has been an important plastic production process since the 19th century. In the plastic extrusion process, wide variation in temperature along the extrudate usually leads to scrap formation on the side of finished products. To avoid this situation, there is a need to deeply understand the temperature distribution along the extrudate in the plastic extrusion process. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the polymers in question do not include biopolymers such as DNA. The model was solved and simulated. Results for two different plastic materials (polyvinyl chloride and polycarbonate) using self-developed MATLAB code and commercially developed software (ANSYS) were generated and ultimately compared. It was observed that there is thermodynamic heat transfer from the entry of the billet into the die down to its end. The graph plots indicate a natural exponential decay of temperature with time and along the die length, with the temperature being 413 K and 474 K for polyvinyl chloride and polycarbonate, respectively, at the entry and 299.3 K and 328.8 K at the exit, when the temperature of the surroundings was 298 K. The extrusion model was validated by comparing the MATLAB code simulation with the commercially available ANSYS simulation, and the results agree favourably. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
Keywords: ANSYS, extrusion process, MATLAB, plastic making, thermal analysis.
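A hedged sketch of the exponential-decay profile reported above, T(x) = Ts + (T0 - Ts)e^(-kx); the decay constants are back-calculated from the reported entry and exit temperatures under an assumed 1 m die length, so they are illustrative only.

```python
# Illustrative reconstruction (not the paper's MATLAB model): decay
# constants k are back-calculated from the reported entry/exit values,
# assuming a 1 m die length, which is an assumption of this sketch.
import numpy as np

T_s = 298.0                                  # surrounding temperature (K)
cases = {"polyvinyl chloride": (413.0, 299.3),
         "polycarbonate": (474.0, 328.8)}    # (entry K, exit K) from the abstract
die_length = 1.0                             # assumed die length (m)

for material, (T0, T_exit) in cases.items():
    k = -np.log((T_exit - T_s) / (T0 - T_s)) / die_length
    x = np.linspace(0.0, die_length, 5)
    T = T_s + (T0 - T_s) * np.exp(-k * x)    # exponential decay along the die
    print(material, "k =", round(k, 2), "profile:", np.round(T, 1))
```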
23 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, owing to the limited energy of the nodes, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, the omni-directional and directional antenna models. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models, on the basis of latency as the performance measure.
Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
22 Design of an Ensemble Learning Behavior Anomaly Detection Framework
Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia
Abstract:
Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to companies' information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to effectively counter the threats of malicious user activities. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present some detection method test results on a representative access control dataset. The use of some explored classifiers gives results of up to 99% accuracy.
Keywords: Cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing.
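A minimal sketch of the ensemble-learning idea, assuming a synthetic imbalanced dataset in place of the access control data and a soft-voting combination of three standard classifiers (the paper's exact classifier mix is not specified here):

```python
# Hedged sketch (not the authors' framework): combining several
# classifiers by soft voting for anomaly-style classification on
# synthetic, imbalanced data standing in for access-control records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95],
                           random_state=0)   # rare "malicious" class (~5%)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
], voting="soft")                             # average predicted probabilities
ensemble.fit(X_tr, y_tr)
print(f"accuracy: {ensemble.score(X_te, y_te):.3f}")
```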
21 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism
Abstract:
Continuously growing needs for Internet applications that transmit massive amounts of data have led to the emergence of high speed networks. Data transfer must take place without any congestion, and hence feedback parameters must be transferred from the receiver end to the sender end so as to restrict the sending rate and avoid congestion. Even though TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of the data to be sent, and it reduces the window size by half at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth and maximum delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on the arrival rate, service rate, traffic rate and queue size; the receiver thus informs the sender about the throughput, the capacity of the data to be sent and the window size adjustment, resulting in no drastic decrease in window size and a better increase in sending rate, because of which there is a continuous flow of data without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth and minimum delay. The results of the proposed work are presented as graphs based on throughput, delay and window size. Thus, in this paper, the XCP protocol is well illustrated and the various parameters are thoroughly analyzed and adequately presented.
Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP.
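The flavor of the XCP feedback computation can be sketched as follows; the alpha/beta constants follow the published XCP guidelines, while the capacity, arrival rate, queue size, and RTT are invented for illustration.

```python
# Hedged sketch of an XCP-style efficiency controller: aggregate feedback
# grows with spare bandwidth and shrinks with persistent queue, and is
# returned to senders so they adjust their congestion windows. The traffic
# numbers below are illustrative assumptions, not measured values.
ALPHA, BETA = 0.4, 0.226   # stability constants from the XCP literature

def aggregate_feedback(capacity_bps, arrival_bps, queue_bytes, rtt_s):
    spare = capacity_bps - arrival_bps                  # spare bandwidth (bit/s)
    return ALPHA * rtt_s * spare - BETA * queue_bytes * 8   # bits per interval

fb = aggregate_feedback(capacity_bps=100e6, arrival_bps=80e6,
                        queue_bytes=50_000, rtt_s=0.05)
print(f"aggregate feedback: {fb / 8:.0f} bytes per control interval")
# positive feedback -> senders may raise their windows;
# negative feedback -> senders must slow down before the queue overflows.
```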
20 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source
Authors: Baghdasaryan Marinka, Ulikyan Azatuhi
Abstract:
The efficient usage of the compensation abilities of the synchronous motors used in electrical drives in production processes can essentially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through the compensation of reactive power makes it possible to significantly reduce the load losses of power in the electrical networks. As a result of analyzing the scientific works devoted to the issues of regulating the excitation of synchronous motors, the need for a comprehensive investigation and estimation of the excitation mode has been substantiated. By means of the obtained transfer functions, the transient processes of the excitation mode have been studied in the Simulink environment of the software package MATLAB. As a result of obtaining and estimating the Nyquist plot and the transient process, the necessity of developing a Proportional-Integral-Derivative (PID) regulator has been justified. The transient processes of the system with the PID regulator have been investigated, and the amplitude-phase characteristics of the system have been estimated. The analysis of the obtained results has shown that the regulation indices of the developed system are improved. The developed system can be successfully applied for regulating the excitation voltage of synchronous motors of different power ratings, operating under a changing load and ensuring a power coefficient close to 1.
Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.
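A minimal sketch of the kind of transient-process study described: a discrete PID loop driving an assumed first-order plant model of the excitation circuit (the gains and time constant are placeholders, not the developed regulator's).

```python
# Hedged sketch (assumed plant and gains): a discrete PID loop on a
# first-order plant, tau * dy/dt = gain * u - y, integrated with Euler
# steps, illustrating a transient-process study outside Simulink.
dt, t_end = 0.001, 2.0
kp, ki, kd = 8.0, 20.0, 0.05        # PID gains (assumptions)
tau, gain = 0.2, 1.0                # plant time constant and static gain

setpoint = 1.0                      # target excitation level (normalized)
y, integ, prev_err = 0.0, 0.0, setpoint
for _ in range(int(t_end / dt)):
    err = setpoint - y
    integ += err * dt               # integral term accumulates error
    deriv = (err - prev_err) / dt   # derivative term damps the response
    u = kp * err + ki * integ + kd * deriv
    prev_err = err
    y += dt * (gain * u - y) / tau  # Euler step of the plant
print(f"output after {t_end} s: {y:.4f} (setpoint {setpoint})")
```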
19 ParkedGuard: An Efficient and Accurate Parked Domain Detection System Using Graphical Locality Analysis and Coarse-To-Fine Strategy
Authors: Chia-Min Lai, Wan-Ching Lin, Hahn-Ming Lee, Ching-Hao Mao
Abstract:
As the world wide internet undergoes non-stop development, making profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market scale of the domain lending service becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Also, previous work for differentiating parked domains suffers from two main defects: 1) too much data-collecting effort and CPU latency needed for feature engineering and 2) ineffectiveness when detecting parked domains containing external links that are usually abused by hackers, e.g., in drive-by download attacks. Aiming to alleviate the above defects without sacrificing practical usability, this paper proposes ParkedGuard, an efficient and accurate parked domain detector. Several scripting behavioral features were analyzed, and those with special statistical significance are adopted in ParkedGuard to make feature engineering much more cost-efficient. On the other hand, finding memberships between external links and parked domains was modeled as a graph mining problem, and a coarse-to-fine strategy was elaborately designed by leveraging the graphical locality, such that ParkedGuard outperforms the state-of-the-art in terms of both recall and precision rates.
Keywords: Coarse-to-fine strategy, domain parking service, graphical locality analysis, parked domain.
18 Interest Rate Fluctuation Effect on Commercial Bank’s Fixed Fund Deposit in Nigeria
Authors: Okolo Chimaobi Valentine
Abstract:
Commercial banks in Nigeria have adopted many strategies to attract fresh deposits, including the use of high deposit rates. However, the pricing of banking services moved in favor of the banks at the expense of customers, resulting in customers seeking other investment alternatives rather than saving their money in the bank. Both deposit and lending rates were greatly influenced by the Central Bank of Nigeria (CBN) decisions on interest rates. Therefore, commercial banks' efforts to attract deposits via manipulation of their rates were greatly limited; otherwise the banks would be giving out more than they earned. The study aimed at examining the relationship between interest rates and the fixed fund deposits of commercial banks, and how policy-controlled interest rates affected commercial banks' fixed fund deposits. The researcher employed the ordinary least squares technique, using multiple linear regression, unrestricted vector auto-regression, a correlation matrix test, Granger causality, and impulse response graphs in the analysis. Commercial banks' interest rates affected their fixed fund deposits significantly, while policy-controlled interest rates did not significantly transmit through the commercial banks' interest rates to affect fixed fund deposits. While commercial banks seek creative ways to expand their fixed fund deposits, policy authorities in Nigeria should better coordinate interest rate fluctuations and induce competition in the entire financial sector.
Keywords: Commercial bank, fixed fund deposit, fluctuation effects, interest rate.
17 A Green Design for Assembly Model for Integrated Design Evaluation and Assembly and Disassembly Sequence Planning
Authors: Yuan-Jye Tseng, Fang-Yu Yu, Feng-Yi Huang
Abstract:
A green design for assembly model is presented to integrate design evaluation and assembly and disassembly sequence planning by evaluating the three activities in one integrated model. For an assembled product, an assembly sequence planning model is required for assembling the product at the start of the product life cycle. A disassembly sequence planning model is needed for disassembling the product at the end. In a green product life cycle, it is important to plan how a product can be disassembled, reused, or recycled before the product is actually assembled and produced. Given a product requirement, there may be several alternative design cases for the same product. In the different design cases, the assembly and disassembly sequences for producing the product can be different. In this research, a new model is presented to concurrently evaluate the design and plan the assembly and disassembly sequences. First, the components are represented using graph-based models. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed. In the new PSO encoding scheme, a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be simultaneously planned with the objective of minimizing the total of assembly and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
Keywords: Green design, assembly and disassembly sequence planning, green design for assembly, particle swarm optimization.
16 Application of Cite Space Software in Visual Analysis of Land Use Coupling Research Progress
Authors: Jing Zhou, Weiqun Su, Naying Luo, Min Shang, Li Wu
Abstract:
The coupling of land use systems in geographical research is mainly the coupling of pattern and process, which is essentially human-land coupling, and is an important part of the research on the human-land relationship. Based on the Web of Science database, paper titles, authors, keywords, and references from 1997-2020 related to land use coupling were used as data sources to explore the research progress of land use coupling. The Cite Space bibliometric tool was used for co-occurrence analysis of publishing countries, publishing institutions, co-cited authors, disciplinary institutions, and keywords. The results are as follows: (1) From 1997 to 2020, the United States, China, and Germany ranked at the top, each with more than 250 published papers. Although China ranks second in the number of published papers, it has less centrality and less influence. (2) The top 10 institutions (universities) in the number of published papers (more than 300 articles) are mainly from the United States and China, and the University of Chinese Academy of Sciences has the highest output of papers. At the same time, multi-institutional cooperation has increased in the field of land use coupling research. (3) From 1997 to 2020, land sensitivity research and the impact of climate change on land use patterns were the main directions of land use coupling research. However, in the past five years, scholars have mainly focused on coupling research methods for land use and the coupling relationship between ecological and environmental factors and land use.
Keywords: Land use coupling, cite space, knowledge graph, visual analysis, research progress.
15 Merging and Comparing Ontologies Generically
Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo
Abstract:
Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, aligning and merging operations in these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, which are defined on an ontology merging system given by a nonempty set of the ontologies concerned, a binary relation on the set of ontologies modeling ontology aligning, and a partial binary operation on the set of ontologies modeling ontology merging. Given an ontology repository, a finite subset of the set of ontologies, its merging closure is the smallest subset of the set of ontologies that contains the repository and is closed with respect to merging. If the idempotence, commutativity, associativity, and representativity properties are satisfied, then both the set of ontologies and the merging closure of the ontology repository are naturally partially ordered by merging, and the merging closure of the ontology repository is finite and can be computed, compared, and sorted efficiently, including sorting, selecting, and querying specific elements, e.g., maximal and minimal ontologies. An ontology V-alignment pair is a pair of ontology homomorphisms with a common domain. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the idempotence, commutativity, associativity, and representativity properties, so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
Keywords: Ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout.
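A generic sketch of computing a merging closure under the stated properties, with ontologies stood in for by finite sets and merging by union (which is idempotent, commutative, and associative); the maximal elements then fall out of the partial order:

```python
# Generic sketch: merging closure of a repository under a partial binary
# merge operation. Ontologies are modeled here as frozensets and merging
# as union, purely to illustrate the closure and the induced order.
from itertools import combinations

def merging_closure(repository, merge):
    closure = set(repository)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closure), 2):
            m = merge(a, b)
            if m is not None and m not in closure:
                closure.add(m)          # new merged ontology joins the closure
                changed = True
    return closure

repo = [frozenset("ab"), frozenset("bc"), frozenset("cd")]
closure = merging_closure(repo, lambda a, b: a | b)
print(sorted("".join(sorted(s)) for s in closure))
# maximal elements: those not strictly contained in any other member
print(["".join(sorted(s)) for s in closure if not any(s < t for t in closure)])
```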
14 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two different settings, namely the single-participant level and the group level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [-45°, 45°] of the true direction is 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.
13 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots
Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov
Abstract:
This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection of underground pipeline networks. Current methods of fault detection within pipes are costly, time consuming and inefficient. As such, solutions tend toward a more reactive approach, repairing faults as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.
12 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, which is defined as the quality of variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir better and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties because of the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The concept of the Lorenz coefficient is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line plot of the Lorenz graph for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).
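A sketch of the Lorenz coefficient computation the paper builds on: layers are sorted by decreasing k/phi, cumulative flow capacity is plotted against cumulative storage capacity, and L is twice the area between that curve and the homogeneous diagonal. The k, phi, and h values below are invented.

```python
# Hedged sketch of computing the Lorenz coefficient L from layer data;
# the permeability, porosity, and thickness values are placeholders.
import numpy as np

k = np.array([250.0, 80.0, 40.0, 10.0, 2.0])    # permeability, mD
phi = np.array([0.22, 0.18, 0.15, 0.12, 0.08])  # porosity
h = np.ones_like(k)                             # layer thickness, m

order = np.argsort(k / phi)[::-1]               # sort layers by descending k/phi
flow = np.concatenate([[0], np.cumsum((k * h)[order])]) / np.sum(k * h)
storage = np.concatenate([[0], np.cumsum((phi * h)[order])]) / np.sum(phi * h)

# trapezoidal area under the flow-vs-storage curve; 0.5 is the diagonal
area = np.sum(0.5 * (flow[1:] + flow[:-1]) * np.diff(storage))
L = 2.0 * (area - 0.5)                          # 0 = homogeneous, toward 1 = heterogeneous
print(f"Lorenz coefficient = {L:.3f}")
```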
11 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic, requiring a large number of training data sets. Hence, we have comparatively implemented two different optimization models of DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to provide high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we have comparatively evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.