Search results for: Solar-cell power generating system.
683 Robot Control by ERPs of Brain Waves
Authors: K. T. Sun, Y. H. Tai, H. W. Yang, H. T. Lin
Abstract:
This paper presents a technique for robot control by event-related potentials (ERPs) of brain waves. Based on the proposed technique, people with severe physical disabilities can freely browse the outside world. A specific ERP component, N2P3, was identified and used to control the movement of the robot and the view of its camera through the designed brain-computer interface (BCI). Users only needed to watch the stimulus of the attended button on the BCI; the evoked potential of the target button, N2P3, had the greatest amplitude among all control buttons. An experimental scene was constructed in which the robot had to walk to a specific position, move the camera view to read the instruction for the mission, and then complete the task. Twelve volunteers participated in the experiment, and the results showed that the correct rate of BCI control reached 80% and the average execution time for completing the mission was 353 seconds. This research makes four main contributions: (1) finding an efficient ERP component, N2P3, for BCI control; (2) embedding the robot's viewpoint image into the user interface for robot control; (3) designing an experimental scene and conducting the experiment; and (4) evaluating the performance of the proposed system to assess its practicability.
Keywords: Brain-computer interface (BCI), event-related potentials (ERPs), robot control, severe physical disabilities.
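As a minimal illustration of the selection rule described in the abstract (picking the attended button as the one whose averaged ERP shows the largest N2P3 amplitude), a Python sketch is given below. The sampling rate, the N2 and P3 search windows, and the epoch layout are illustrative assumptions, not the paper's actual signal-processing pipeline.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)
# Assumed post-stimulus search windows for the N2 trough and P3 peak (in samples).
N2_WIN = (int(0.18 * FS), int(0.30 * FS))
P3_WIN = (int(0.30 * FS), int(0.50 * FS))

def n2p3_amplitude(avg_epoch):
    """Peak-to-peak N2P3 amplitude of one averaged ERP epoch (1-D array)."""
    n2 = avg_epoch[N2_WIN[0]:N2_WIN[1]].min()
    p3 = avg_epoch[P3_WIN[0]:P3_WIN[1]].max()
    return p3 - n2

def select_target_button(epochs_per_button):
    """epochs_per_button: dict {button_id: trials x samples array of epochs}.
    Returns the button whose averaged ERP has the largest N2P3 amplitude."""
    scores = {b: n2p3_amplitude(trials.mean(axis=0))
              for b, trials in epochs_per_button.items()}
    return max(scores, key=scores.get)
```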
682 Development of Reliable Web-Based Laboratories for Developing Countries
Authors: Teyana S. Sapula, Damian D. Haule
Abstract:
In the online context, the design and implementation of an effective remote laboratory environment is highly challenging because of its hardware and software needs. This paper presents a remote laboratory software framework modified from the iLab Shared Architecture (ISA). The ISA is a framework that enables students to remotely access and control experimental hardware over the Internet. The need for remote laboratories arose from the problems imposed by traditional laboratories: the high cost of laboratory equipment, scarcity of space, and scarcity of technical personnel, which, together with restricted university budgets, create a significant bottleneck in building the required laboratory experiments. The solution to these problems is to build web-accessible laboratories. Remote laboratories allow students and educators to interact with real laboratory equipment located anywhere in the world at any time. Recently, many universities and other educational institutions, especially in developing countries, have relied on simulations because they cannot afford the experimental equipment their students require. Remote laboratories enable users to obtain real data from real-time, hands-on experiments. To implement many remote laboratories, the system architecture should be flexible, understandable, and easy to implement, so that different laboratories with different hardware can be deployed easily. The modifications were made to enable developers to add more equipment to the ISA framework and to attract new developers to build more online laboratories.
Keywords: Batched, ISA, labserver, servicebroker.
681 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet
Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia
Abstract:
Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous, autonomous relational databases accessed via the not-so-reliable Internet, for which a practical generic sync middleware with low maintenance and operation costs is much wanted. To meet this demand, this paper presents a generic sync middleware system (GSMS), which has been developed, applied, and optimized since 2006. Its guiding principles, and advantages, are that it is SyncML-compliant and transparent to the application layer logic, without referring to the implementation details of the databases being synced; that it does not depend on the host operating systems on which it is deployed; and that its construction is lightweight and hence of low cost. Regarding these hard commitments of developing GSMS, this paper stresses a significant optimization breakthrough: the GSMS sync delay is well below a fraction of a millisecond per record synced. A series of ultimate tests of GSMS sync performance was conducted as a persuasive example, in which the source relational database underwent a broad range of write loads (from one thousand to one million intensive writes within a few minutes). All these tests showed that the performance of GSMS is competent and smooth even under ultimate write loads.
Keywords: Heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization.
680 After Schubert’s Winterreise: Contemporary Aesthetic Journeys
Authors: Maria de Fátima Lambert
Abstract:
Following previous studies on Writing and Seeing, this paper focuses on the aesthetic assumptions within the concept of the Winter Journey (Voyage d’Hiver/Winterreise), both in Georges Perec’s saga and the Oulipo group, vis-à-vis the creations of William Kentridge and Michael Borremans. The aesthetic and artistic connections are widespread. Nevertheless, we can identify common poetical principles shared by these different authors, not only according to the notion of ekphrasis, but also following the procedures of contemporary creation in literature and the visual arts. The analysis of the ongoing process of the French writers, as individuals and as a group, and of the visual artists’ practice might contribute to another, crossed definition of contemporary conception. The same title/theme was a challenge and a goal for them. One may wonder how deeply the concept encouraged them and which symbolic upbringings directed their poetical achievements. The idea of an inner journey became the main point and moved “over” and “across” a shared path worth following. The authors were chosen for the resilient contents of their visual and written images, and in search of the reasons that might have driven their conceptual basis into being. In Perec’s “Winter Journey”, as in the following fictions by Jacques Roubaud, Hervé Le Tellier, Jacques Jouet and Hugo Vernier (who emerges from Perec’s fiction and becomes a real author), powerful aesthetic and enigmatic reflections grow, connected with a poetic (and aesthetic) understanding of walkscapes. They might be taken as ironic fictions and poetical drifts. Standing out from different logics, the overwhelming impact of Schubert’s Winterreise song cycle, after Wilhelm Müller’s poems, is a major reference in present authorship creations. Both Perec’s and the Oulipo authors’ texts are powerfully ekphrastic, although we should not forget that they follow their own goals, frameworks and identities. When one reads them, they induce powerful imageries - cinematic or cinematographic - that flow in our minds. This was well matched by William Kentridge’s animated video Winter Journey (2014) and the creations (sharing the same title) of Michael Borremans (2014) for the KlaraFestival, Bozar, Cité de la musique, in Belgium. Both were prompted by Schubert’s Winterreise. Several metaphors fulfil the new Winter Journeys (or Travels) achieved in contemporary art and literature, as once happened in the 19th century. Maybe the contemporary authors and artists were compelled by the consciousness of nothingness, albeit from different aesthetic and ontological sources. The unbearable knowledge of the road’s end, and also the urge to fill the void, might be an element common to all of them. As Schopenhauer once wrote, after all, art is the only human subjective power that we can call upon in life. These newer aesthetic meanings, released from these winter journeys, are surely open to wider approaches that might happen in other poetic makings yet to come.
Keywords: Aesthetic, Voyage d’Hiver, Georges Perec & Oulipo, Schubert's Winterreise, William Kentridge & Michael Borremans.
679 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier
Authors: Maninder Kaur Gill, Alpana Agarwal
Abstract:
It is practically not feasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large. To avoid saturating the output voltage, a very small input would have to be applied to the operational amplifier, which cannot be measured practically with a digital multimeter. A test circuit for the measurement of the open-loop voltage gain of an operational amplifier has been proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost effective, and easy to handle even on a breadboard. The test circuit requires only the device under test (DUT) along with resistors. The circuit has been tested for the measurement of the open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results, and do not change the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and the experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are found to be close to the datasheet values.
Keywords: Device under test, open-loop voltage gain, operational amplifier, test circuit.
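For reference, the decibel figures quoted above convert to linear voltage gains through the standard definition (not specific to the proposed test circuit):

\[
A_{OL}\,[\mathrm{dB}] = 20\log_{10}\frac{V_{out}}{V_{in}},
\qquad
A_{OL} = 10^{\,A_{OL}[\mathrm{dB}]/20},
\]

so the simulated 89.71 dB of the LM741CN corresponds to roughly \(3.1\times10^{4}\) V/V, and the 101.15 dB of the UA741CP to roughly \(1.1\times10^{5}\) V/V.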
678 Two Scenarios for Ultra-Light Overhead Conveyor System in Logistics Applications
Authors: Batin Latif Aylak, Bernd Noche
Abstract:
Overhead conveyor systems are in use in many installations around the world, meeting the widest possible range of applications. They are particularly preferred in the automotive industry but also at post offices. Overhead conveyor systems must always be integrated into a logistical process by finding the best way to achieve a cheaper material flow, in order to guarantee precise and fast workflows. With their help, any transport can take place without wasting ground and space, without excess company capacity, lost or damaged products, erroneous deliveries, endless travel, and without wasting time. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically on the rope, which is realized by means of energy and signal transmission. Crossings are realized by switches. Ultra-light overhead conveyor systems provide optimal material flow, which produces profit and saves time. This article introduces two new ultra-light overhead conveyor designs for logistics and explains their components. Based on the description of the components, scenarios are created from their technical characteristics and visualized with the help of CAD software. After that, assumptions are made for the application area, and the scenarios are visualized accordingly. These scenarios help logistics companies achieve lower development costs as well as quicker market maturity.
Keywords: Logistics, material flow, overhead conveyor.
677 Students’ Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum
Authors: K. Durairaj, I. N. Umar
Abstract:
The asynchronous discussion forum is one of the most widely used activities in learning management system environments. An online forum allows participants to interact and construct knowledge, and can be used to complement face-to-face sessions in blended learning courses. However, to what extent students perceive the benefits or advantages of the forum remains to be seen. Through content and social network analyses, instructors are able to gauge the students’ engagement and level of knowledge construction. Thus, this study aims to analyze the students’ level of knowledge construction and their participation level in online discussion. It also attempts to investigate the relationship between the level of knowledge construction and their social interaction patterns. The sample involves 23 students undertaking a master’s course in one public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirements. The findings indicate that the level of knowledge construction is quite low. Also, the density value of 0.11 indicates that the overall communication among the participants in the forum is low. This study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase students’ level of knowledge construction and for further research are proposed.
Keywords: Asynchronous Discussion Forums, Content Analysis, Knowledge Construction, Social Network Analysis.
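A minimal sketch of the social network analysis described above, using the networkx library on a hypothetical reply graph (who replied to whom); the edge list is an illustrative assumption, not the study's data:

```python
import networkx as nx

# Hypothetical directed reply graph: an edge (a, b) means student a replied to student b.
replies = [("s1", "s2"), ("s2", "s1"), ("s3", "s1"), ("s1", "s4"),
           ("s5", "s2"), ("s2", "s4"), ("s4", "s1")]
G = nx.DiGraph(replies)

density = nx.density(G)                 # proportion of possible directed ties present
in_deg = nx.in_degree_centrality(G)     # how often a student is replied to
out_deg = nx.out_degree_centrality(G)   # how often a student replies to others

print(f"density = {density:.2f}")
print("most active poster:", max(out_deg, key=out_deg.get))
```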
676 Removal of Boron from Waste Waters by Ion-Exchange in a Batch System
Authors: Pelin Demirçivi, Gülhayat Nasün-Saygılı
Abstract:
Boron minerals are very useful for various industrial activities, such as the glass and detergent industries, owing to their mechanical and chemical properties. During the production of boron compounds, much of this material is introduced into the environment in the form of waste. Boron is also an important micronutrient for plants, but if it exists in high concentrations, it can have toxic effects. The maximum boron level in drinking water for human health is given as 0.3 mg/L in World Health Organization (WHO) standards. The toxic effects of boron are of particular concern in dry regions; thus, in recent years, increasing attention has been paid to removing boron from waste waters. In this study, boron removal is implemented by an ion-exchange process using Amberlite IRA-743 resin. Amberlite IRA-743 is a boron-specific resin, a polymeric sorbent bearing aminopolyol functional groups. Batch studies were performed to investigate the effects of various experimental parameters, such as adsorbent dose, initial concentration, and pH, on the removal of boron. It is found that the removal of boron from the liquid phase increases as the adsorbent dose increases, whereas an increase in the initial concentration decreases the removal of boron. The effective pH values for the removal of boron are determined to be between 8.5 and 9. The equilibrium isotherms were also analyzed using the Langmuir and Freundlich isotherm models; the Langmuir isotherm fits the data better than the Freundlich isotherm.
Keywords: Amberlite resin, boron removal, ion exchange, isotherm models.
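For reference, the two isotherm models fitted in the study have the standard forms below, with \(q_e\) the equilibrium boron uptake, \(C_e\) the equilibrium boron concentration, and the remaining symbols the usual model constants (general equations, not values from this work):

\[
\text{Langmuir:}\quad q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e},
\qquad
\text{Freundlich:}\quad q_e = K_F\, C_e^{1/n}.
\]

Their linearized forms, \(C_e/q_e = 1/(q_{\max}K_L) + C_e/q_{\max}\) and \(\log q_e = \log K_F + (1/n)\log C_e\), are commonly used to estimate the constants by linear regression.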
675 A Trainable Neural Network Ensemble for ECG Beat Classification
Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour
Abstract:
This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We present a three-stage technique for the detection of premature ventricular contractions (PVCs) among normal beats and other heart diseases. The method includes denoising, feature extraction, and classification stages. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. A number of multilayer perceptron (MLP) neural networks with different topologies are then designed. The performance of the different combination methods as well as the efficiency of the whole system is presented. Among them, stacked generalization, the proposed trainable combined neural network model, achieves the highest recognition rate of around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples of the different beat types were extracted from the MIT-BIH arrhythmia database for the study.
674 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification
Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang
Abstract:
One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal the system privileges of senior-level employees in order to gain unauthorized access to confidential knowledge and valuable intellectual property. The malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we exploit a common behaviour of malware called "beaconing", which implies that infected hosts communicate with Command and Control (C2) servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections. We therefore focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges carry timestamps. Then, by using four periodicity detection algorithms for each pair of internal-external communications, we check the timestamp sequences to identify beacon activities. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities.
Keywords: Malware detection, network security, targeted attack.
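A minimal sketch of the time-gap idea described above (not the authors' four periodicity-detection algorithms): for each internal-host/external-domain pair, the inter-query gaps are computed and a low coefficient of variation is flagged as beacon-like regularity. The log format, grouping, and thresholds are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def find_beacon_candidates(dns_events, min_queries=10, max_cv=0.1):
    """dns_events: iterable of (timestamp_seconds, internal_host, external_domain).
    Returns (host, domain) pairs whose query gaps are suspiciously regular."""
    series = defaultdict(list)
    for ts, host, domain in dns_events:
        series[(host, domain)].append(ts)

    candidates = []
    for pair, stamps in series.items():
        if len(stamps) < min_queries:
            continue
        gaps = np.diff(np.sort(np.asarray(stamps, dtype=float)))
        if gaps.mean() == 0:
            continue
        cv = gaps.std() / gaps.mean()   # coefficient of variation of the time gaps
        if cv <= max_cv:                # near-constant interval, i.e. beacon-like
            candidates.append((pair, gaps.mean(), cv))
    return candidates
```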
673 Analytical Model Based Evaluation of Human Machine Interfaces Using Cognitive Modeling
Authors: Belkacem Chikhaoui, Helene Pigot
Abstract:
Cognitive models allow predicting some aspects of the utility and usability of human machine interfaces (HMI), and simulating the interaction with these interfaces. The prediction is based on a task analysis, which investigates what a user is required to do, in terms of actions and cognitive processes, to achieve a task. Task analysis facilitates the understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of human machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in the evaluation of HMI, design, and research, by emphasizing first the task analysis and second the execution time of the task. In order to validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results of our models show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as the object level. Therefore, the simulated results are very close to the results obtained in the experimental study.
Keywords: HMI, interface evaluation, analytical evaluation, cognitive modeling, user modeling, user performance.
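As an illustration of GOMS-style execution-time prediction, a keystroke-level sketch is given below. The operator times are commonly cited textbook estimates, and the operator sequence for a hypothetical "select a command in the assistant's interface" task is an assumption, not the task model used in the study.

```python
# Commonly cited keystroke-level model (KLM) operator times, in seconds.
KLM_TIMES = {
    "K": 0.28,  # press a key or button
    "P": 1.10,  # point with the mouse at a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum the operator times of a KLM sequence, e.g. 'MHPK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, move hand to mouse, point at a button, click.
print(predict_time("MHPK"))  # -> 3.13 seconds
```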
672 Numerical Study of Natural Convection Effects in Latent Heat Storage using Aluminum Fins and Spiral Fillers
Authors: Lippong Tan, Yuenting Kwok, Ahbijit Date, Aliakbar Akbarzadeh
Abstract:
A numerical investigation has been carried out to understand the melting characteristics of a phase change material (PCM) in a fin-type latent heat storage with the addition of embedded aluminum spiral fillers. It is known that the melting performance of a PCM can be significantly improved by increasing the number of embedded metallic fins in the latent heat storage system, but only up to a certain number, beyond which additional fins lead to only small improvements in the heat transfer rate. Hence, adding aluminum spiral fillers within the fin gaps can be an option to improve heat transfer internally. This paper presents extensive computational visualizations of the PCM melting patterns of the proposed fin and spiral-filler configuration. The aim of this investigation is to understand the PCM's melting behavior by observing the movement of the natural convection currents and the formation of melting fronts. The Fluent 6.3 simulation software was utilized to produce two-dimensional visualizations of melting fractions, temperature distributions, and flow fields to illustrate the melting process internally. The results show that adding aluminum spiral fillers in a fin-type latent heat storage can promote small but more active natural convection currents and improve the melting of the PCM.
Keywords: Phase change material, thermal enhancement, aluminum spiral fillers, fins
671 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models
Authors: Rohitash Chandra, Christian W. Omlin
Abstract:
We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. We train the hybrid architecture on a set of deterministic finite-state automaton strings and observe its generalization performance when presented with a new set of strings that were not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and also ran experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values in the range -15 to 15.
Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems and recurrent neural networks.
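A minimal sketch of the evolutionary training loop described above, assuming the hybrid HMM-inspired RNN is parameterized by a flat real-valued weight vector and that a fitness function (e.g. classification accuracy on the automaton strings) is supplied by the caller; the population size, selection scheme, and mutation settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_weights(fitness, n_weights, pop_size=40, generations=200,
                   init_range=15.0, mut_sigma=0.5, mut_rate=0.1):
    """Simple real-coded genetic algorithm; returns the best weight vector found."""
    pop = rng.uniform(-init_range, init_range, size=(pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]        # higher fitness is better
        parents = pop[order[:pop_size // 2]]    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_weights)    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_weights) < mut_rate
            child[mask] += rng.normal(0.0, mut_sigma, mask.sum())  # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]
```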
670 An Investigation on the Accuracy of Nonlinear Static Procedures for Seismic Evaluation of Buckling-restrained Braced Frames
Authors: An Hong Nguyen, Chatpan Chintanapakdee, Toshiro Hayashikawa
Abstract:
Presented herein is an assessment of current nonlinear static procedures (NSPs) for the seismic evaluation of buckling-restrained braced frames (BRBFs), which have become a favorable lateral-force resisting system for earthquake-resistant buildings. The bias and accuracy of the modal pushover analysis (MPA), improved modal pushover analysis (IMPA), and mass proportional pushover (MPP) procedures are comparatively investigated when they are applied to BRBF buildings subjected to two sets of strong ground motions. The assessment is based on a comparison of seismic displacement demands such as target roof displacements, peak floor/roof displacements, and inter-story drifts. The NSP estimates are compared to 'exact' results from nonlinear response history analysis (NLRHA). The response statistics presented show that the MPP procedure tends to significantly overestimate the seismic demands of the lower stories of the tall buildings considered in this study, while the MPA and IMPA procedures provide reasonably accurate results in estimating the maximum inter-story drift over all stories of the studied BRBF systems.
Keywords: Buckling-restrained braced frames, nonlinear response history analysis, nonlinear static procedure, seismic demands.
669 Face Recognition Using Double Dimension Reduction
Authors: M. A Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off upon reaching a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction, down to the resolution level that provides the best recognition results. Owing to its better computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of retained DCT coefficients, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL database, the Yale database, and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying image poses, intensity, and illumination levels.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
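A minimal sketch of the decimation-plus-DCT feature extraction described above; the decimation factor and the size of the retained low-to-mid-frequency block are illustrative assumptions, not the trade-off values found in the paper.

```python
import numpy as np
from scipy.fft import dctn

def dct_face_features(image, decimation=2, keep=16):
    """image: 2-D grayscale array. Decimate, apply the 2-D DCT, and keep the
    top-left keep x keep block of coefficients (low to mid frequencies)."""
    small = image[::decimation, ::decimation].astype(float)  # simple decimation
    coeffs = dctn(small, norm="ortho")                       # 2-D DCT-II
    return coeffs[:keep, :keep].ravel()                      # feature vector

def nearest_neighbour(query, gallery_features, gallery_labels):
    """Classify a query feature vector by its closest gallery vector (Euclidean)."""
    dists = np.linalg.norm(gallery_features - query, axis=1)
    return gallery_labels[int(dists.argmin())]
```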
668 Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video
Authors: Min Kook Choi, Hyun Gyu Lee, Ryan You, Byeong-Seok Shin, Sang-Chul Lee
Abstract:
Wireless capsule endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years, thanks to its non-invasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is that it requires a large amount of effort for clinicians to inspect the huge amount of data collected during an examination, i.e., over 55,000 frames per video. In this paper, we propose a method to compute meaningful motion changes of the WCE by analyzing the obtained video frames based on regional optical flow estimations. The computed motion vectors are used to remove duplicate video frames caused by the WCE's imaging nature, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCEs, and our experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
Keywords: Wireless capsule endoscopy, optical flow, duplicated image, duplicated frame.
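A minimal sketch of the regional optical-flow test described above, using OpenCV's dense Farnebäck flow on two consecutive grayscale frames and averaging the flow magnitude in four quadrants; the flow parameters and the duplicate threshold are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def regional_flow_magnitudes(prev_gray, next_gray):
    """Mean optical-flow magnitude in the four quadrants of the frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    return [mag[:h // 2, :w // 2].mean(), mag[:h // 2, w // 2:].mean(),
            mag[h // 2:, :w // 2].mean(), mag[h // 2:, w // 2:].mean()]

def is_duplicate(prev_gray, next_gray, threshold=0.5):
    """Treat the next frame as a duplicate if motion is negligible in all regions."""
    return max(regional_flow_magnitudes(prev_gray, next_gray)) < threshold
```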
667 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms are vital to saving lives. Therefore, the importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is obvious. The main purpose of this study is to enable patients to become better informed about their condition and to encourage them to seek professional care at an earlier stage in the appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical attributes that can be reported by patients themselves were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of a heart attack. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, and nausea and vomiting were selected as the main features.
Keywords: Coronary heart disease, acute heart attacks, prediction, logistic regression.
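A minimal sketch of the modeling step with scikit-learn, using hypothetical binary symptom columns and reporting accuracy together with the C-index (which, for a binary outcome, equals the area under the ROC curve); the feature names and the toy data frame are assumptions, not the study's dataset of 711 patients.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical patient records: symptom indicators plus the outcome (1 = heart attack).
df = pd.DataFrame({
    "severe_chest_pain":   [1, 0, 1, 0, 1, 1, 0, 0],
    "back_pain":           [0, 0, 1, 0, 1, 0, 0, 1],
    "cold_sweats":         [1, 0, 1, 0, 1, 1, 0, 0],
    "shortness_of_breath": [1, 1, 0, 0, 1, 1, 0, 0],
    "nausea_vomiting":     [0, 0, 1, 0, 1, 0, 1, 0],
    "heart_attack":        [1, 0, 1, 0, 1, 1, 0, 0],
})
X, y = df.drop(columns="heart_attack"), df["heart_attack"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

model = LogisticRegression().fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("C-index (ROC AUC):", roc_auc_score(y_te, prob))
```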
666 Microservices-Based Provisioning and Control of Network Services for Heterogeneous Networks
Authors: Shameemraj M. Nadaf, Sipra Behera, Hemant K. Rath, Garima Mishra, Raja Mukhopadhyay, Sumanta Patro
Abstract:
Microservices architecture has been widely embraced for the rapid, frequent, and reliable delivery of complex applications. It enables organizations to evolve their technology stack in various domains. Today, the networking domain is flooded with a plethora of devices and software solutions which address different functionalities, ranging from elementary operations, viz. switching, routing, firewalls, etc., to complex analytics- and insight-based intelligent services. In this paper, we bring the microservices-based approach to the agile and adaptive delivery of network services over any underlying networking technology. We discuss the life cycle management of each individual microservice and a distributed control approach, with emphasis on dynamic provisioning, management, and orchestration in an automated fashion, which can provide seamless operations in large-scale networks. We have conducted validations of the system in a lab testbed comprising traditional/legacy and software-defined wireless local area networks.
Keywords: Microservices architecture, software defined wireless networks, traditional wireless networks, automation, orchestration, intelligent networks, network analytics, seamless management, single pane control, fine-grain control.
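As a toy illustration of the kind of per-service provisioning microservice discussed above (not the authors' system), a minimal REST sketch is given below; the endpoint names, payload fields, and in-memory store are all assumptions.

```python
from uuid import uuid4
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="network-service-provisioner")
services = {}  # in-memory store of provisioned network services

class ServiceRequest(BaseModel):
    name: str           # e.g. "guest-wlan"
    technology: str     # e.g. "sdn-wlan" or "legacy-wlan"
    bandwidth_mbps: int

@app.post("/services")
def provision(req: ServiceRequest):
    sid = str(uuid4())
    services[sid] = {"spec": req.dict(), "state": "provisioned"}
    return {"id": sid, "state": "provisioned"}

@app.get("/services/{sid}")
def status(sid: str):
    if sid not in services:
        raise HTTPException(status_code=404, detail="unknown service")
    return services[sid]

@app.delete("/services/{sid}")
def deprovision(sid: str):
    services.pop(sid, None)
    return {"id": sid, "state": "deleted"}
```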
665 FEM Simulation of Triple Diffusive Magnetohydrodynamics Effect of Nanofluid Flow over a Nonlinear Stretching Sheet
Authors: Rangoli Goyal, Rama Bhargava
Abstract:
The triple diffusive boundary layer flow of a nanofluid under the action of a constant magnetic field over a non-linear stretching sheet has been investigated numerically. The model includes the effects of Brownian motion, thermophoresis, and cross-diffusion, the slip mechanisms which are primarily responsible for the enhancement of the convective features of the nanofluid. The governing partial differential equations are transformed into a system of ordinary differential equations (by using group theory transformations) and solved numerically using the variational finite element method. The effects of various controlling parameters, such as the magnetic influence number, thermophoresis parameter, Brownian motion parameter, modified Dufour parameter, and Dufour solutal Lewis number, on the fluid flow as well as on the heat and mass transfer coefficients (of both the solute and the nanofluid) are presented graphically and discussed quantitatively. The present study has industrial applications in the aerodynamic extrusion of plastic sheets, coating and suspensions, melt spinning, hot rolling, wire drawing, glass-fibre production, and the manufacture of polymer and rubber sheets, where the quality of the desired product depends on the stretching rate as well as on external fields, including magnetic effects.
Keywords: FEM, Thermophoresis, Diffusiophoresis, Brownian motion.
664 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation
Authors: Ke He, Wumaier Parezhati, Haruka Yamashita
Abstract:
Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular online marketplaces in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance in order to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many cases, notably document classification, to extract semantic relationships between documents (here representing consumers) and words (here representing products). This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the analysis of the Doc2Vec model for users and shops, and for users and items, in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in the online marketplace and demonstrate the efficiency of the proposal.
Keywords: Doc2Vec, marketing, online marketplace, recommendation system.
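A minimal sketch of embedding users, products, and shops in one feature space with gensim's Doc2Vec, where each user is a tagged document whose "words" are the product and shop identifiers they interacted with; the purchase histories and hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical purchase histories: each user's document mixes product and shop tokens.
histories = {
    "user1": ["item_a", "shop_x", "item_b", "shop_x", "item_c", "shop_y"],
    "user2": ["item_b", "shop_x", "item_d", "shop_z"],
    "user3": ["item_c", "shop_y", "item_e", "shop_y"],
}
corpus = [TaggedDocument(words=tokens, tags=[user])
          for user, tokens in histories.items()]

model = Doc2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=100)

# Products and shops most similar to a user's vector live in the same space.
user_vec = model.dv["user1"]
print(model.wv.similar_by_vector(user_vec, topn=3))
```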
663 Optimization of Molasses Desugarization Process Using Steffen Method in Sugar Beet Factories
Authors: Simin Asadollahi, Mohammad Hossein Haddad Khodaparast
Abstract:
Molasses is one of the most important by-products of the sugar industry and contains a large amount of sucrose. The routine way to separate the sucrose from molasses is the Steffen method. Although this method is very common in sugar factories, the aim of this research is to optimize it. The optimization depends on three factors: reactor alkalinity, reactor temperature, and diluted molasses Brix. Accordingly, three stages had to be carried out:
- Construction of a pilot plant similar to the actual Steffen system in sugar factories
- Experimentation using the pilot plant
- Laboratory analysis
The experiments included 27 treatments in three replications. In each replication, the Brix, polarization, and purity of the saccharate syrup and of the hot and cold waste were measured. The results showed that diluted molasses Brix, reactor alkalinity, and reactor temperature had significant effects on saccharate purity and on the efficiency of molasses desugarization. The research was performed as a randomized complete design and analyzed with Duncan's multiple range test. Significant differences at the α = 5% level were observed between the treatments. The results indicated that the optimal conditions for molasses desugarization by the Steffen method are: diluted molasses Brix = 10, reactor alkalinity = 10, and reactor temperature = 8 °C.
Keywords: Molasses desugarization, Saccharate purity, Steffen process.
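For reference, the purity values measured above are conventionally computed as the apparent purity of each stream (standard sugar-industry definition, not specific to this study):

\[
\text{Purity}\,(\%) = \frac{\text{Pol}}{\text{Brix}} \times 100,
\]

where Pol is the polarization (apparent sucrose content) and Brix the total dissolved solids of the sample.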
662 Flutter Analysis of Slender Beams with Variable Cross Sections Based on Integral Equation Formulation
Authors: Z. El Felsoufi, L. Azrar
Abstract:
This paper studies a mathematical model, based on integral equations, for the dynamic analysis and numerical investigation of a non-uniform or multi-material composite beam. The beam is subjected to a sub-tangential follower force and rests on an elastic foundation. The boundary conditions are represented by generalized, parameterized fixations through linear and rotary springs. A mathematical formulation based on Euler-Bernoulli beam theory is presented for beams with variable cross-sections. The non-uniform section introduces non-uniformity into the rigidity and inertia of the beam and, consequently, a more complicated equilibrium equation governing its motion. Using the boundary element method and radial basis functions, the equation of motion is reduced to an algebro-differential system related to internal and boundary unknowns. Generalized formulas for the deflection, the slope, the moment, and the shear force are presented. The free vibration of non-uniform loaded beams is formulated in a compact matrix form, and all needed matrices are explicitly given. The dynamic stability analysis of a slender beam is illustrated numerically based on the coalescence criterion. A realistic case related to an industrial chimney is investigated.
Keywords: Chimney, BEM and integral equation formulation, non uniform cross section, vibration and Flutter.
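For orientation, the equation of motion of an Euler-Bernoulli beam of variable cross-section under a tangential follower force \(P\) and a Winkler elastic foundation of stiffness \(k\), as referred to above, takes the standard textbook form (written here from the general theory, not copied from the paper):

\[
\frac{\partial^2}{\partial x^2}\!\left[E I(x)\,\frac{\partial^2 w}{\partial x^2}\right]
+ P\,\frac{\partial^2 w}{\partial x^2}
+ k\,w
+ \rho A(x)\,\frac{\partial^2 w}{\partial t^2} = 0,
\]

where \(EI(x)\) is the variable flexural rigidity and \(\rho A(x)\) the mass per unit length; flutter is detected when two eigenfrequencies coalesce as the follower force increases, which is the coalescence criterion mentioned in the abstract.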
661 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach
Authors: Mukesh Kumar Shah, Tushar Gupta
Abstract:
An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions through a suggested Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, Greedy Selection to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a Crossover Operation. The strategy used in the proposed algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.
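A minimal sketch of a basic cuckoo-search loop with Lévy-flight steps, a Gaussian local walk, and greedy replacement, to illustrate the ingredients named above; it is a generic textbook variant under assumed parameter values, not the authors' upgraded algorithm, and it omits the economic-dispatch constraints such as ramp-rate limits and prohibited operating zones.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, lb, ub, n_nests=25, pa=0.25, iters=500):
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.array([cost(x) for x in nests])
    for _ in range(iters):
        for i in range(n_nests):
            best = nests[fit.argmin()]
            # Levy flight around the current best, then a small Gaussian walk.
            new = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
            new += rng.normal(0, 0.1, dim)
            new = np.clip(new, lb, ub)
            f_new = cost(new)
            if f_new < fit[i]:                  # greedy selection
                nests[i], fit[i] = new, f_new
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        n_abandon = int(pa * n_nests)
        worst = np.argsort(fit)[-n_abandon:]
        nests[worst] = rng.uniform(lb, ub, (n_abandon, dim))
        fit[worst] = [cost(x) for x in nests[worst]]
    return nests[fit.argmin()], fit.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5, lb=-10, ub=10)
```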
660 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing, which support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities needed to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools, which are of great interest for business intelligence, since they are base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision-making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP. It also covers object-relational and spatial data models, and a baseline of data modeling under UML and Big Data; in this way, it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for pattern generation and models derived from the structured fact objects.
Keywords: Data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse.
659 Numerical Simulation for a Shallow Braced Excavation of Campus Building
Authors: Sao-Jeng Chao, Wen-Cheng Chen, Wei-Humg Lu
Abstract:
In order to avoid encountering unpredictable factors, geotechnical engineers always conduct numerical analyses for braced excavation design. Simulation work carried out in advance can predict the response of the subsequent excavation, and the design can thus increase the safety margin of the construction. The parameters considered include geological conditions, soil properties, soil distributions, loading types, and the analysis and design methods. National Ilan University is located on the Lanyang Plain, which is mainly deposited with clayey soil and loose sand and is thus vulnerable to externally induced displacement. National Ilan University carried out a braced excavation with a complete excavation monitoring program. This study takes advantage of the one-dimensional finite element program RIDO to simulate the excavation process. The results predicted by the numerical simulation are compared with the monitoring results of the construction to explore the differences between them. Numerical simulation of the excavation process can be used to analyze the retaining structures, with the purpose of understanding the relationship between the displacement and the supporting system. The resulting deformation and stress distribution from the braced excavation can then be understood in advance. Problems can be prevented prior to the construction process, and all the important influencing factors during design and construction can thus be taken into account.
Keywords: Excavation, numerical simulation, RIDO, retaining structure.
658 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct approaches for soil-structure interaction (SSI) analysis in the time domain. In the substructure approach, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the coupled soil-structure system. To explore the potential limitations of the substructure modeling process, a two-dimensional (2D) reinforced concrete frame structure is modeled and analyzed using the direct and substructure approaches. The results show discrepancy between the simulated responses of the direct and substructure models. It is concluded that the main source of discrepancy is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall alternatively be developed. This refined impedance function is expected to improve the simulation accuracy of the substructure approach.
Keywords: Direct approach, impedance function, massless rigid foundation, soil-structure interaction, substructure approach.
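For reference, the frequency-dependent impedance function mentioned above is conventionally written in the form below (standard notation, not taken from the paper); the substructure approach described here evaluates it at the fundamental frequency \( \omega_1 \) of the coupled soil-structure system:

\[
S(\omega) = K(\omega) + i\,\omega\, C(\omega),
\qquad
k \approx K(\omega_1), \quad c \approx C(\omega_1),
\]

so that, for the time-domain analysis, the soil domain is replaced by a spring of stiffness \(k\) and a dashpot of coefficient \(c\).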
657 Mathieu Stability of Offshore Buoyant Leg Storage and Regasification Platform
Authors: S. Chandrasekaran, P. A. Kiran
Abstract:
Increasing demand for large-sized Floating Storage and Regasification Units (FSRUs) in the oil and gas industries led to the development of a novel geometric form, the Buoyant Leg Storage and Regasification Platform (BLSRP). The BLSRP consists of a circular deck supported by six buoyant legs placed symmetrically with respect to the wave direction. The circular deck is connected to the buoyant legs using hinged joints, which restrain the transfer of rotational response from the legs to the deck and vice versa. The buoyant legs are connected to the seabed using a taut-moored system with high initial pretension, enabling rigid body motion in the vertical plane. The encountered environmental loads induce dynamic tether tension variations, which in turn affect the stability of the platform. The present study investigates the Mathieu stability of the BLSRP under postulated tether pullout cases by inducing additional tension in the tethers. From the numerical studies carried out, it is seen that a postulated tether pullout on any one of the buoyant legs does not result in Mathieu-type instability, even under excessive tether tension. This is due to the presence of the hinged joints, which are capable of dissipating the unbalanced loads to the other legs. However, under tether pullout of consecutive buoyant legs, Mathieu-type instability is observed.
Keywords: Offshore platforms, stability, postulated failure, dynamic tether tension.
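For context, Mathieu-type instability refers to parametric resonance governed by the classical Mathieu equation (standard form, not reproduced from the paper), in which the periodic tether-tension variation acts as a parametric excitation of the stiffness term:

\[
\frac{d^2 x}{d\tau^2} + \left(a - 2q\cos 2\tau\right)x = 0,
\]

where \(a\) reflects the mean, pretension-dependent stiffness, \(q\) the amplitude of the tension fluctuation, and instability occurs when the pair \((a, q)\) falls inside the unstable regions of the Strutt diagram.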
656 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method
Authors: M. T. Tsepav, Y. Adamu, M. A. Umar
Abstract:
A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity, and protective capacity of the area, from which an aquifer characterization was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger vertical electrical sounding (VES) data sets along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area and an average longitudinal conductance varying from 0.0237 Siemens at VES 6 to 0.1261 Siemens at VES 15, with almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω·m² at VES 4 to 299,070 Ω·m² at VES 1. All but VES 4 and VES 14 had an average overburden value greater than 400 Ω·m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
Keywords: Geoelectric survey, corrosivity, protective capacity, transmissivity.
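For reference, the layer parameters evaluated above are the standard Dar Zarrouk quantities computed from the inverted layer resistivities \( \rho_i \) and thicknesses \( h_i \) (textbook definitions, not the paper's own derivation):

\[
S = \sum_i \frac{h_i}{\rho_i} \ \text{(longitudinal conductance, S)},
\qquad
R = \sum_i h_i\,\rho_i \ \text{(transverse resistance, } \Omega\cdot\mathrm{m}^2\text{)},
\]

with low S values (below about 1 S) usually rated as weak aquifer protective capacity; the Ω·m² units quoted for transmissivity suggest that the transverse-resistance product is being used as the usual proxy for aquifer transmissivity.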
655 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement
Authors: Rhadinia Tayag-Relanes, Felina C. Young
Abstract:
This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the Plan, Do, Check, Act (PDCA) approach and record review in gathering data for the calendar year 2019, specifically from August to October, focusing on the noodle products miki, canton, and misua. A causal-comparative research design was employed to establish cause-effect relationships among the variables, using descriptive statistics and correlation to analyze the data gathered. The findings indicate that miki, canton, and misua production have distinct cycle times and production outputs in each set of production processes, as well as varying levels of wastage. The company has not yet established a formal allowable rejection rate for wastage; instead, this paper used a 1% wastage limit. We recommend the following: the machines used for each process of each noodle product must be consistently maintained and monitored; all production operators should be assessed by evaluating their performance statistically based on output and machine performance; a root cause analysis must be conducted to identify solutions to production issues; and an improved recording system for the input and output of the production process of each noodle product should be established to eliminate poor recording of data.
Keywords: Production, continuous improvement, process, operations, Plan, Do, Check, Act approach.
654 Analysis of One-Way and Two-Way FSI Approaches to Characterise the Flow Regime and the Mechanical Behaviour during Closing Manoeuvring Operation of a Butterfly Valve
Authors: M. Ezkurra, J. A. Esnaola, M. Martinez-Agirre, U. Etxeberria, U. Lertxundi, L. Colomo, M. Begiristain, I. Zurutuza
Abstract:
Butterfly valves are widely used industrial piping components, serving as on-off and flow-controlling devices. The main challenge in the design process of this type of valve is correct dimensioning to ensure proper mechanical performance as well as to minimise the flow losses that affect the efficiency of the system. Butterfly valves are typically dimensioned in the closed position using mechanical approaches that assume uniform hydrostatic pressure, whereas the flow losses are analysed by means of CFD simulations. The main limitation of these approaches is that they consider neither the influence of the dynamics of the manoeuvring stage nor coupled phenomena. Recent works have included the influence of the flow on the mechanical behaviour for different opening angles by means of a one-way FSI approach. However, these works consider steady-state flow at the selected angles, not capturing the effect of the transient flow evolution during the manoeuvring stage. A two-way FSI modelling approach could overcome such limitations, providing more accurate results. Nevertheless, the use of this technique is limited due to the increase in computational cost. In the present work, the applicability of the one-way and two-way FSI approaches is evaluated for the analysis of butterfly valves, showing that not considering fluid-structure coupling means not capturing the most critical situation for the valve disc.
Keywords: Butterfly valves, fluid-structure interaction, one-way approach, two-way approach.