Search results for: large graph
6908 A Forbidden-Minor Characterization for the Class of Co-Graphic Matroids Which Yield the Graphic Element-Splitting Matroids
Authors: Prashant Malavadkar, Santosh Dhotre, Maruti Shikare
Abstract:
The n-point splitting operation on graphs is used, together with some other operations, to characterize 4-connected graphs. The element splitting operation on binary matroids is a natural generalization of the n-point splitting operation on graphs. The element splitting operation on a graphic (cographic) matroid may not yield a graphic (cographic) matroid. A characterization of the graphic (cographic) matroids whose element splitting matroids are graphic (cographic) is known. The element splitting operation on a cographic matroid, in general, may not yield a graphic matroid. In this paper, we give a necessary and sufficient condition for a cographic matroid to yield a graphic matroid under the element splitting operation. In fact, we prove that the element splitting operation, by any pair of elements, on a cographic matroid yields a graphic matroid if and only if it has no minor isomorphic to M(K4), where K4 is the complete graph on 4 vertices.
Keywords: binary matroids, splitting, element splitting, forbidden minor
Procedia PDF Downloads 278
6907 Design and Implementation of Agricultural Machinery Equipment Scheduling Platform Based On Case-Based Reasoning
Authors: Wen Li, Zhengyu Bai, Qi Zhang
Abstract:
The demand for smart scheduling platforms in agriculture, particularly for scheduling machinery equipment, is high. With the continuous development of agricultural machinery technology, a large number of agricultural machinery equipment and agricultural machinery cooperative service organizations continue to appear in China. The large area of cultivated land and the large number of agricultural activities in the central and western regions of China have made the demand for smart and efficient agricultural machinery scheduling platforms more intense. In this study, we design and implement a platform for agricultural machinery equipment scheduling that allocates agricultural machinery resources reasonably. Taking the scheduling platform as the research object, we discuss its research significance and value, use service blueprint technology to analyze and characterize the agricultural machinery scheduling workflow, apply the analytic network process (ANP) to obtain the platform's functional requirements, and divide the platform functions through a function division diagram. Simultaneously, the equipment scheduling module of the platform is realized based on the case-based reasoning (CBR) algorithm; finally, a design scheme for the platform architecture is provided, and the visualization interface of the platform is built with the VB programming language. The work provides design ideas and theoretical support for the construction of a modern agricultural equipment information scheduling platform.
Keywords: case-based reasoning, service blueprint, system design, ANP, VB programming language
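To make the case-retrieval step of CBR concrete, the following minimal sketch (not from the paper; the case attributes, weights, and similarity form are assumptions for illustration) ranks stored scheduling cases by weighted similarity and reuses the allocation of the best match:

```python
# Minimal sketch of the CBR retrieval step: score stored scheduling cases
# against a new task by weighted similarity and reuse the best match.
# Attribute names and weights are illustrative assumptions, not the paper's.

def similarity(case, query, weights):
    """Weighted similarity in [0, 1] over attributes pre-scaled to [0, 1]."""
    score = sum(w * (1.0 - abs(case[a] - query[a])) for a, w in weights.items())
    return score / sum(weights.values())

case_base = [
    {"area_ha": 0.8, "urgency": 0.9, "distance": 0.2, "machines": 3},
    {"area_ha": 0.3, "urgency": 0.4, "distance": 0.7, "machines": 1},
]
weights = {"area_ha": 0.5, "urgency": 0.3, "distance": 0.2}

query = {"area_ha": 0.7, "urgency": 0.8, "distance": 0.3}
best = max(case_base, key=lambda c: similarity(c, query, weights))
print("Reused allocation:", best["machines"], "machines")
```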
Procedia PDF Downloads 176
6906 Feasibility Study on Hybrid Multi-Stage Direct-Drive Generator for Large-Scale Wind Turbine
Authors: Jin Uk Han, Hye Won Han, Hyo Lim Kang, Tae An Kim, Seung Ho Han
Abstract:
Direct-drive generators for large-scale wind turbines, which are divided into AFPM (axial flux permanent magnet) and RFPM (radial flux permanent magnet) machine types, have attracted interest because of their higher energy density in comparison with gear-train-type generators. Each type of machine has distinctive geometrical features: a narrow width with a large diameter for the AFPM-type machine, and a wide width with a certain diameter for the RFPM-type machine. When the AFPM-type machine is applied, an increase in electric power production through a multi-stage arrangement in the axial direction is easily achieved. On the other hand, the RFPM-type machine can be applied by exploiting its geometric feature of wide width. In this study, a hybrid two-stage direct-drive generator for a 6.2 MW class wind turbine was proposed, in which a two-stage AFPM-type machine for 5 MW was composed of two models arranged in the axial direction, with a hollow-shape topology of the rotor with an annular disc, the stator, and the main shaft mounted on coupled slew bearings. In addition, an RFPM-type machine for 1.2 MW was installed in the empty space of the rotor. Analytic results obtained from an electro-magnetic and structural interaction analysis showed that the structural weight of the proposed hybrid two-stage direct-drive generator can be kept to 155 tonf while satisfying the requirements on structural behavior such as allowable air-gap clearance and strength. Therefore, the 6.2 MW hybrid two-stage direct-drive generator is competitive with conventional generators. (NRF grant funded by the Korea government MEST, No. 2017R1A2B4005405).
Keywords: AFPM-type machine, direct-drive generator, electro-magnetic analysis, large-scale wind turbine, RFPM-type machine
Procedia PDF Downloads 169
6905 Petri Net Modeling and Simulation of a Call-Taxi System
Authors: T. Godwin
Abstract:
A call-taxi system is a type of taxi service where a taxi can be requested through a phone call or mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick up a customer, as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multigraph to simulate the activities of a system. The Petri net model is translated into a simulation model, and the call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri net based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
Keywords: call-taxi, discrete event system, Petri net, simulation modeling
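The assignment/release conditions described above map naturally onto a token-based simulation. A minimal sketch, with place and transition names assumed for illustration:

```python
import random

# Minimal Petri net sketch of the call-taxi cycle: places hold tokens
# (free taxis, waiting customers, busy taxis); a transition fires only
# when all of its input places are marked.

places = {"taxi_free": 3, "customer_waiting": 0, "taxi_busy": 0}
transitions = {
    # name: (input places, output places)
    "assign":  (["taxi_free", "customer_waiting"], ["taxi_busy"]),
    "release": (["taxi_busy"], ["taxi_free"]),
}

def enabled(name):
    return all(places[p] > 0 for p in transitions[name][0])

def fire(name):
    ins, outs = transitions[name]
    for p in ins:
        places[p] -= 1
    for p in outs:
        places[p] += 1

for step in range(20):
    if random.random() < 0.5:           # a phone/app request arrives
        places["customer_waiting"] += 1
    for t in ("assign", "release"):     # fire any enabled transition
        if enabled(t) and random.random() < 0.6:
            fire(t)
print(places)
```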
Procedia PDF Downloads 425
6904 Experimental Investigation of Plane Jets Exiting Five Parallel Channels with Large Aspect Ratio
Authors: Laurentiu Moruz, Jens Kitzhofer, Mircea Dinulescu
Abstract:
The paper aims to extend the knowledge about jet behavior and jet interaction between five plane unventilated jets with large aspect ratio (AR). The distance between the single plane jets is two times the channel height. The experimental investigation applies 2D particle image velocimetry (PIV) and static pressure measurements. Our study focuses on the influence of two different outlet nozzle geometries (a triangular shape with 2 x 7.5° and a blunt geometry) over a Reynolds number range of 5500-12000. It is shown that the outlet geometry has a major influence on jet formation in terms of the uniformity of velocity profiles downstream of the sudden expansion. Furthermore, we describe characteristic regions such as the converging region, the merging region, and the combined region. The triangular outlet geometry generates the most uniform velocity distributions in comparison to the blunt outlet nozzle geometry. The blunt outlet geometry shows an unstable behavior in which the jets tend to attach to one side of the walls (ceiling), generating a large recirculation region on the opposite side. Static pressure measurements confirm this observation and indicate that the recirculation region is connected to a larger pressure drop.
Keywords: 2D particle image velocimetry, parallel jet interaction, pressure drop, sudden expansion
Procedia PDF Downloads 276
6903 Thermal Analysis and Computational Fluid Dynamics Simulation of Large-Scale Cryopump
Authors: Yue Shuai Zhao, Rong Ping Shao, Wei Sun, Guo Hua Ren, Yong Wang, Li Chen Sun
Abstract:
A large-scale cryopump (DN1250) used in a large vacuum leak detecting system was designed and its performance experimentally investigated by the Beijing Institute of Spacecraft Environment Engineering. The cryopump was cooled by four closed-cycle helium refrigerators (two dual-stage refrigerators and two single-stage refrigerators). Detailed numerical analyses of the heat transfer in the first-stage array and the second-stage array were performed using computational fluid dynamics (CFD). Several design parameters were considered to determine their effect on the temperature distribution and the cooldown time. The variation of thermal conductivity and heat capacity with temperature was taken into account. With the thermal analysis method based on numerical techniques introduced in this study, the heat transfer in the first-stage array and the second-stage cryopanel was carefully analyzed to determine important considerations in the thermal design of the cryopump. A performance test system according to the PNEUROP standards was built to test the main performance of the cryopump. The experimental results showed that the structure of the first-stage array optimized by this method could meet the requirements of the cryopump well. The temperature of the cryopanel was down to 10 K within 300 min, and the experimental result was in accordance with the conclusions of the theoretical analysis. The test also showed that the pumping speed of the pump for N2 was up to 57,000 L/s, and the crossover was greater than 300,000 Pa•L.
Keywords: cryopump, temperature distribution, thermal analysis, CFD simulation
Procedia PDF Downloads 304
6902 Predicting Oil Spills in Real-Time: A Machine Learning and AIS Data-Driven Approach
Authors: Tanmay Bisen, Aastha Shayla, Susham Biswas
Abstract:
Oil spills from tankers can cause significant harm to the environment and local communities, as well as have economic consequences. Early prediction of oil spills can help to minimize these impacts. Our proposed system uses machine learning and neural networks to predict potential oil spills by monitoring data from ship Automatic Identification Systems (AIS). The model analyzes ship movements, speeds, and changes in direction to identify patterns that deviate from the norm and could indicate a potential spill. Our approach not only identifies anomalies but also predicts spills before they occur, providing early detection and mitigation measures. This can prevent or minimize damage to the reputation of the company responsible and the country where the spill takes place. The model's performance on the MV Wakashio oil spill provides insight into its ability to detect and respond to real-world oil spills, highlighting areas for improvement and further research.
Keywords: anomaly detection, oil spill prediction, machine learning, image processing, graph neural network (GNN)
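One simple way to flag the kind of deviations the model looks for is to z-score the changes in speed and course between consecutive AIS reports. A rough sketch, with thresholds and data layout assumed for illustration (the paper's actual model is a neural network):

```python
import numpy as np

# Sketch of anomaly flagging on an AIS track: z-score the change in speed
# and course between consecutive reports and flag large deviations.

# columns: speed over ground (knots), course over ground (degrees)
ais = np.array([[12.1, 45.0], [12.3, 46.0], [12.2, 45.5],
                [3.0, 170.0],   # abrupt slow-down and sharp turn
                [12.0, 46.2]])

dspeed = np.abs(np.diff(ais[:, 0]))
dcourse = np.abs((np.diff(ais[:, 1]) + 180) % 360 - 180)  # wrap to [-180, 180]

def zscores(x):
    return (x - x.mean()) / (x.std() + 1e-9)

anomalous = (np.abs(zscores(dspeed)) > 1.5) | (np.abs(zscores(dcourse)) > 1.5)
print("Anomalous transitions at indices:", np.where(anomalous)[0] + 1)
```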
Procedia PDF Downloads 76
6901 Maximizing Coverage with Mobile Crime Cameras in a Stochastic Spatiotemporal Bipartite Network
Authors: (Ted) Edward Holmberg, Mahdi Abdelguerfi, Elias Ioup
Abstract:
This research details a coverage measure for evaluating the effectiveness of observer node placements in a spatial bipartite network. This coverage measure can be used to optimize the configuration of stationary or mobile spatially oriented observer nodes, or a hybrid of the two, over time in order to fully utilize their capabilities. To demonstrate the practical application of this approach, we construct a spatiotemporal bipartite network (STBN) using real-time crime center (RTCC) camera nodes and NOPD calls for service (CFS) event nodes from New Orleans, LA (NOLA). We use the coverage measure to identify optimal placements for mobile RTCC camera vans to improve coverage of vulnerable areas based on temporal patterns.
Keywords: coverage measure, mobile node dynamics, Monte Carlo simulation, observer nodes, observable nodes, spatiotemporal bipartite knowledge graph, temporal spatial analysis
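A coverage measure of this kind lends itself to Monte Carlo estimation: sample event locations and count the fraction within range of at least one observer node. A minimal sketch, with coordinates, sensing radius, and a uniform event sampler assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a bipartite coverage measure: the fraction of sampled event
# nodes that fall within sensing range of at least one observer node.

cameras = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])  # observer nodes
radius = 0.25                                             # sensing range

def coverage(cameras, events, radius):
    # distance from every event to every camera, shape (n_events, n_cameras)
    d = np.linalg.norm(events[:, None, :] - cameras[None, :, :], axis=2)
    return np.mean((d <= radius).any(axis=1))

# Monte Carlo: sample event locations from an intensity model (uniform
# here; in the STBN this would follow the temporal pattern of CFS events).
events = rng.random((10_000, 2))
print(f"Estimated coverage: {coverage(cameras, events, radius):.3f}")
```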
Procedia PDF Downloads 116
6900 A Comprehensive Review on Structural Properties and Erection Benefits of Large Span Stressed-Arch Steel Truss Industrial Buildings
Authors: Anoush Saadatmehr
Abstract:
The design and construction of large clear-span structures has always been demanding in the construction industry, targeting industrial and commercial buildings around the world. These spectacular structures serve distinguished building types such as aircraft and airship hangars, warehouses, bulk storage buildings, and sports and recreation facilities. From an engineering point of view, various steel structure systems are often adopted in large-span buildings, such as conventional trusses, space frames, and cable-supported roofs. This paper, however, investigates and reviews an innovative, light, economic, and quickly erected large-span steel structure known as the "Stressed-Arch", which has several advantages over the other common types of structures. This patented system integrates cold-formed hollow section steel with high-strength pre-stressing strands and concrete grout to establish an arch-shaped truss frame wherever a cost-effective column-free space is required for spans within the range of 60 m to 180 m. In this study, the main structural properties of the stressed-arch system and its components are first discussed. These include the nonlinear behavior of truss chords during stress-erection, the effect of the erection method on member compressive strength, the rigidity of pre-stressed trusses in meeting strict deflection criteria for cases with roof-suspended cranes or specialized front doors, and, more importantly, the prominent lightness of the steel structure. Then, the effects of utilizing pre-stressing strands in safeguarding a smooth installation of the main steel members, roof components, and cladding are investigated. In conclusion, it is shown that the Stressed-Arch system not only provides an optimized light steel structure up to 30% lighter than its conventional competitors but also streamlines the process of building erection and minimizes construction time while preventing the risks of working at height.
Keywords: large span structure, pre-stressed steel truss, stressed-arch building, stress-erection, steel structure
Procedia PDF Downloads 168
6899 Study the Influence of the Type of Cast Iron Chips on the Quality of Briquettes Obtained with Controlled Impact
Authors: Dimitar N. Karastoianov, Stanislav D. Gyoshev, Todor N. Penchev
Abstract:
Preparing briquettes of metal chips with good density and quality is of great importance for the efficiency of this process. This paper presents the results of impact briquetting of grey cast iron chips with a rectangular shape and dimensions of 15x25x1 mm. The density and quality of briquettes made from these chips are compared with those obtained in another work of the authors using cast iron chips of smaller size. It has been found that using rectangular chips of large size produces briquettes with very low density and poor quality. From the photographs taken by X-ray tomography, it is clear that the reason for this is the orientation of the chips along the peripheral wall of the briquettes, which does not allow the air to escape. It was concluded that in order to obtain briquettes from cast iron chips of large size, these chips must first be ground, for example in a small ball mill.
Keywords: briquetting, chips, impact, rocket engine
Procedia PDF Downloads 525
6898 A Strategy for Reducing Dynamic Disorder in Small Molecule Organic Semiconductors by Suppressing Large Amplitude Thermal Motions
Authors: Steffen Illig, Alexander S. Eggeman, Alessandro Troisi, Stephen G. Yeates, John E. Anthony, Henning Sirringhaus
Abstract:
Large-amplitude intermolecular vibrations, in combination with complex-shaped transfer integrals, generate a thermally fluctuating energetic landscape. The resulting dynamic disorder, and its intrinsic presence in organic semiconductors, is one of the most fundamental differences from their inorganic counterparts. Dynamic disorder is believed to govern many of the unique electrical and optical properties of organic systems. However, the low-energy nature of these vibrations makes them difficult to access experimentally, and because of this we still lack clear molecular design rules to control and reduce dynamic disorder. Applying a novel technique based on electron diffraction, we encountered strong intermolecular thermal vibrations in every single organic material we studied (14 to date), indicating that a large degree of dynamic disorder is a universal phenomenon in organic crystals. In this paper, a new molecular design strategy to avoid dynamic disorder is presented. We found that small molecules that have their side chains attached to the long axis of their conjugated core are less likely to suffer from dynamic disorder effects. In particular, we demonstrate that 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-di-decyl-dinaphtho-[2,3-b:20,30-f]-thieno-[3,2-b]-thiophene (C10-DNTT) exhibit strongly reduced thermal vibrations in comparison to other molecules, and we relate their outstanding performance to their lower dynamic disorder. We rationalize the low degree of dynamic disorder in C8-BTBT and C10-DNTT by a better encapsulation of the conjugated cores in the crystal structure, which helps reduce large-amplitude thermal motions. The work presented in this paper provides a general strategy for the design of new classes of very high mobility organic semiconductors with low dynamic disorder.
Keywords: charge transport, C8-BTBT, C10-DNTT, dynamic disorder, organic semiconductors, thermal vibrations
Procedia PDF Downloads 399
6897 Calibration of the Discrete Element Method Using a Large Shear Box
Authors: C. J. Coetzee, E. Horn
Abstract:
One of the main challenges in using the discrete element method (DEM) is to specify the correct input parameter values. In general, the models are sensitive to the input parameter values, and accurate results can only be achieved if the correct values are specified. For the linear contact model, micro-parameters such as the particle density, stiffness, and coefficient of friction, as well as the particle size and shape distributions, are required. A procedure is needed to accurately calibrate these parameters before any attempt can be made to accurately model a complete bulk materials handling system. Since DEM is often used to model applications in the mining and quarrying industries, a calibration procedure was developed for materials that consist of relatively large particles (up to 40 mm in size). A coarse crushed aggregate was used as the test material. Using a specially designed large shear box with a diameter of 590 mm, the confined Young's modulus (bulk stiffness) and internal friction angle of the material were measured by means of the confined compression test and the direct shear test, respectively. DEM models of the experimental setup were developed, and the input parameter values were varied iteratively until a close correlation between the experimental and numerical results was achieved. The calibration process was validated by modelling the pull-out of an anchor from a bed of material. The model results compared well with experimental measurements.
Keywords: discrete element method (DEM), calibration, shear box, anchor pull-out
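The iterative matching of simulated to measured bulk properties can be sketched as a root-finding loop. In the sketch below, `run_dem_compression` is a placeholder standing in for the actual DEM model of the confined compression test, and all numbers are illustrative:

```python
# Sketch of the calibration loop: adjust one DEM micro-parameter (particle
# stiffness) until the simulated bulk stiffness matches the shear box
# measurement; a secant update drives the mismatch to zero.

def run_dem_compression(particle_stiffness):
    # Placeholder: in practice this runs the DEM confined compression
    # model and returns the bulk modulus it predicts.
    return 0.8 * particle_stiffness ** 0.9   # assumed response, demo only

measured_modulus = 150.0   # MPa, from the large shear box test (illustrative)

k0, k1 = 100.0, 200.0      # two initial guesses for the micro-stiffness
f0 = run_dem_compression(k0) - measured_modulus
for _ in range(20):
    f1 = run_dem_compression(k1) - measured_modulus
    if abs(f1) < 1e-3:
        break
    k0, k1, f0 = k1, k1 - f1 * (k1 - k0) / (f1 - f0), f1
print(f"Calibrated stiffness: {k1:.2f}")
```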
Procedia PDF Downloads 291
6896 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleot(s)ide analogs (NA), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA needs to be taken life-long, which is not an option for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are made transparent when profiling human immune systems. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg level loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project "Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)", which involves a multidisciplinary team of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
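As an illustration of the KG data model described here, the sketch below encodes a few patient variables as factual statements with rdflib and runs a query over the unified view. The namespace, class, and property names are assumptions for illustration, not the HBsRE schema:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Sketch of representing patient data as KG triples and querying them.
EX = Namespace("http://example.org/hbsre/")
kg = Graph()

patient = EX["patient_042"]
kg.add((patient, RDF.type, EX.Patient))
kg.add((patient, EX.hasTreatment, EX.NucleosideAnalog))
kg.add((patient, EX.hbsagLevel, Literal(250.0)))   # IU/mL, illustrative
kg.add((patient, EX.hlaType, Literal("HLA-A*02")))

# A query over the unified view: all treated patients with low HBsAg.
q = """
SELECT ?p ?level WHERE {
  ?p a ex:Patient ; ex:hasTreatment ?t ; ex:hbsagLevel ?level .
  FILTER(?level < 1000)
}"""
for row in kg.query(q, initNs={"ex": EX}):
    print(row.p, row.level)
```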
Procedia PDF Downloads 108
6895 Semantic Platform for Adaptive and Collaborative e-Learning
Authors: Massra M. Sabeima, Myriam lamolle, Mohamedade Farouk Nanne
Abstract:
Adapting the learning resources of an e-learning system to the characteristics of the learners is an important aspect to consider when designing an adaptive e-learning system. However, this adaptation is not a simple process; it requires the extraction, analysis, and modeling of user information. This implies a good representation of the user's profile, which is the backbone of the adaptation process. Moreover, during the e-learning process, collaboration with similar users (same geographic province or knowledge context) is important. Productive collaboration motivates users to continue the course rather than abandon it and increases the assimilation of learning objects. The contribution of this work is the following: we propose an adaptive e-learning semantic platform that recommends learning resources to learners, using an ontology to model the user profile and the course content; furthermore, we implement a multi-agent system able to progressively generate the learning graph (taking into account the user's progress and the changes that occur) for each user during the learning process, and to synchronize the users who collaborate on a learning object.
Keywords: adaptive learning, collaboration, multi-agent, ontology
Procedia PDF Downloads 176
6894 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques
Authors: Umit Cali
Abstract:
The integration of intermittent energy sources such as wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems, because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability, and safety issues increase the importance of predicting power output for wind power operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a time period of a few hours and are therefore most relevant to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify statistical and neural network models, or sets of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help us to amalgamate the information in very large meteorological, oceanographic, and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts are beneficial for wind plant operators, utility operators, and utility customers. An accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study also gives in-depth consideration to issues such as the comparison of day-ahead and short-term wind power forecasting results, determination of the accuracy of the wind power prediction, and the evaluation of the energy-economic and technical benefits of wind power forecasting.
Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids
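As a rough illustration of the statistical modelling side, the sketch below fits a gradient-boosting model mapping forecast wind speed and a recent-output feature to power. The synthetic data merely stands in for real NWP/SCADA feeds, and the model choice is an assumption, not the paper's:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Sketch of short-term wind power forecasting: learn the mapping from
# forecast wind speed and recent SCADA output to power.
n = 2000
wind_speed = rng.uniform(3, 20, n)                   # m/s (forecast)
lagged_power = np.clip(wind_speed - 3, 0, 12) ** 3   # recent output proxy
power = np.clip(wind_speed - 3, 0, 12) ** 3 + rng.normal(0, 50, n)

X = np.column_stack([wind_speed, lagged_power])
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:1500], power[:1500])

pred = model.predict(X[1500:])
rmse = np.sqrt(np.mean((pred - power[1500:]) ** 2))
print(f"Hold-out RMSE: {rmse:.1f} (arbitrary power units)")
```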
Procedia PDF Downloads 519
6893 Improved K-Means Clustering Algorithm Using RHadoop with Combiner
Authors: Ji Eun Shin, Dong Hoon Lim
Abstract:
Data clustering is a common technique used in data analysis and has many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry, and marketing. K-means clustering is a well-known clustering algorithm that aims to assign a set of data points to a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function on the map output to decrease the amount of data that must be processed by the reducers. The experimental results demonstrated that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also showed that our K-means algorithm using RHadoop with a combiner was faster than the regular algorithm without a combiner as the size of the data set increases.
Keywords: big data, combiner, K-means clustering, RHadoop
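The role of the combiner can be illustrated with a single simulated MapReduce iteration of K-means: each map task locally collapses its points into per-centroid (sum, count) pairs, so the reduce stage sees only k small records per split instead of every point. A minimal sketch in plain Python rather than RHadoop:

```python
import numpy as np
from collections import defaultdict

# One MapReduce iteration of K-means with a combiner. Mappers assign
# points to the nearest centroid; the combiner collapses each split's
# assignments into (sum, count) pairs, which is the source of the
# speed-up on large data sets.

rng = np.random.default_rng(2)
points = rng.random((10_000, 2))
centroids = points[:3].copy()

reduce_in = defaultdict(lambda: [np.zeros(2), 0])
for split in np.array_split(points, 10):            # 10 map tasks
    combined = defaultdict(lambda: [np.zeros(2), 0])
    for p in split:                                  # map + combine locally
        k = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
        combined[k][0] += p
        combined[k][1] += 1
    for k, (s, c) in combined.items():               # shuffle combined pairs
        reduce_in[k][0] += s
        reduce_in[k][1] += c

# reduce: new centroid = total sum / total count
centroids = np.array([reduce_in[k][0] / reduce_in[k][1] for k in sorted(reduce_in)])
print(centroids)
```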
Procedia PDF Downloads 440
6892 Optimization of Mechanical Cacao Shelling Parameters Using Unroasted Cocoa Beans
Authors: Jeffrey A. Lavarias, Jessie C. Elauria, Arnold R. Elepano, Engelbert K. Peralta, Delfin C. Suministrado
Abstract:
The shelling process is one of the primary processes and a critical step in the production of chocolate or any product derived from cocoa beans. It affects the quality of the cocoa nibs in terms of flavor and purity. In the Philippines, small-scale food processors cannot really compete with large-scale confectionery manufacturers because of the lack of available postharvest facilities appropriate to their level of operation. The impact of this study is to provide the needed intervention that will pave the way for cacao farmers to take advantage of value-adding as a way to maximize the economic potential of cacao. Thus, the provision and availability of needed postharvest machines such as a mechanical cacao sheller will revolutionize the current state of the cacao industry in the Philippines. A mechanical cacao sheller was developed, fabricated, and evaluated to establish the effect of the shelling conditions (the moisture content of the cocoa beans, the clearance through which the beans pass in the breaker section, and the speed of the breaking mechanism) on shelling recovery, shelling efficiency, shelling rate, energy utilization, and large nib recovery, and to establish the optimum levels of these shelling parameters. These factors were statistically analyzed using a Box-Behnken design of experiment and response surface methodology (RSM). By maximizing shelling recovery, shelling efficiency, shelling rate, and large nib recovery and minimizing energy utilization, the optimum shelling conditions were established at a moisture content, clearance, and breaker speed of 6.5%, 3 millimeters, and 1300 rpm, respectively. The optimum values for shelling recovery, shelling efficiency, shelling rate, large nib recovery, and energy utilization were recorded as 86.51%, 99.19%, 21.85 kg/hr, 89.75%, and 542.84 W, respectively. Experimental values obtained using the optimum conditions were compared with values predicted by the models and were found to be in good agreement.
Keywords: cocoa beans, optimization, RSM, shelling parameters
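The response-surface step can be sketched as fitting a second-order polynomial to designed runs and searching it for the optimum. The run data below are placeholders, and a single response is optimized for brevity rather than the study's multi-response criterion:

```python
import numpy as np
from itertools import combinations

# Sketch of the RSM step: fit a quadratic surface to designed runs
# (moisture, clearance, speed -> shelling recovery), then grid-search it.

def quad_features(X):
    # [1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

X = np.array([[6.0, 2.5, 1200], [7.0, 3.5, 1400], [6.5, 3.0, 1300],
              [6.0, 3.5, 1300], [7.0, 2.5, 1300], [6.5, 2.5, 1400],
              [6.5, 3.5, 1200], [6.0, 3.0, 1400], [7.0, 3.0, 1200],
              [6.0, 2.5, 1400], [7.0, 3.5, 1200], [6.5, 3.0, 1300]])
recovery = np.array([84.1, 85.0, 86.5, 84.8, 85.2, 85.6,
                     85.1, 84.5, 85.3, 84.7, 85.0, 86.4])  # placeholder data

beta, *_ = np.linalg.lstsq(quad_features(X), recovery, rcond=None)

grid = np.array([[m, c, s] for m in np.linspace(6, 7, 11)
                 for c in np.linspace(2.5, 3.5, 11)
                 for s in np.linspace(1200, 1400, 11)])
best = grid[np.argmax(quad_features(grid) @ beta)]
print("Predicted optimum (moisture %, clearance mm, rpm):", best)
```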
Procedia PDF Downloads 360
6891 Optimum Parameter of a Viscous Damper for Seismic and Wind Vibration
Authors: Soltani Amir, Hu Jiaxin
Abstract:
Determination of the optimal parameters of a passive control device is the primary objective of this study. The expanding use of control devices in wind and earthquake hazard reduction has led to the development of various control systems. The advantage of nonlinear characteristics in a passive control device and the optimal control method using the LQR algorithm are explained in this study. Finally, this paper introduces a simple approach to determine the optimum parameters of a nonlinear viscous damper for vibration control of structures. A MATLAB program is used to produce the dynamic motion of the structure, considering the stiffness matrix of the SDOF frame and the nonlinear damping effect. This study concluded that the proposed system (a variable damping system) controls the system response better than a linear damping system. Also, according to the energy dissipation graph, the total energy loss is greater in the nonlinear damping system than in the other systems.
Keywords: passive control system, damping devices, viscous dampers, control algorithm
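A minimal sketch of the LQR step for an SDOF frame follows (Python in place of the study's MATLAB). The mass, stiffness, and weighting matrices are illustrative values, not the study's:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for an SDOF frame: state x = [displacement, velocity],
# control u = damper force.
m, k, c = 1000.0, 4.0e5, 2000.0           # kg, N/m, N*s/m (illustrative)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

Q = np.diag([1e6, 1.0])                   # penalize displacement heavily
R = np.array([[1e-4]])                    # cheap control effort

P = solve_continuous_are(A, B, Q, R)      # Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal gain: u = -K x

# The ideal LQR force -K x can then be mapped to an equivalent (possibly
# nonlinear) viscous damper coefficient at each time step.
print("LQR gain K:", K)
```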
Procedia PDF Downloads 471
6890 In-door Localization Algorithm and Appropriate Implementation Using Wireless Sensor Networks
Authors: Adeniran K. Ademuwagun, Alastair Allen
Abstract:
The dependence of RSS on distance in an enclosed environment is an important consideration because it is a factor that can influence the reliability of any localization algorithm founded on RSS. Several algorithms effectively reduce the variance of RSS to improve localization accuracy. Our proposed algorithm essentially avoids this pitfall, hence its high adaptability in the face of erratic radio signals. Using 3 anchors in close proximity to each other, we are able to establish that RSS can be used as a reliable indicator for localization with an acceptable degree of accuracy. Inherent in this concept is the ability of each prospective anchor to validate (guarantee) the position or proximity of the other 2 anchors involved in the localization, and vice versa. This procedure ensures that the uncertainties of radio signals due to multipath effects in enclosed environments are minimized. A major driver of this idea is the implicit topological relationship among sensors due to raw radio signal strength. The algorithm is an area-based algorithm; however, it does not trade accuracy for precision (i.e., the size of the returned area).
Keywords: anchor nodes, centroid algorithm, communication graph, radio signal strength
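The centroid idea behind such RSS-based schemes can be sketched in a few lines: convert RSS to distance weights and average the anchor positions. The path-loss constants and anchor layout are assumptions for illustration, not the paper's measured values:

```python
import numpy as np

# Weighted centroid localization with 3 anchors: stronger signal means
# closer, so nearer anchors receive larger weights.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # metres
rss_dbm = np.array([-52.0, -61.0, -55.0])                  # measured RSS

# log-distance path loss: d ~ 10 ** ((P0 - RSS) / (10 n)), P0 at 1 m
P0, n = -40.0, 2.2
d = 10 ** ((P0 - rss_dbm) / (10 * n))

w = 1.0 / d                       # nearer anchors weigh more
estimate = (w[:, None] * anchors).sum(axis=0) / w.sum()
print("Estimated position:", estimate)
```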
Procedia PDF Downloads 510
6889 High Input Driven Factors in Idea Campaigns in Large Organizations: A Case Depicting Best Practices
Authors: Babar Rasheed, Saad Ghafoor
Abstract:
Introduction: Idea campaigns are commonly held across organizations for generating employee engagement. The contributions are specifically solicited to identify and solve prevalent issues. It is argued that numerous organizations fail to achieve their desired goals despite arranging such campaigns and investing heavily in them. There are, however, practices that organizations use to achieve a higher degree of effectiveness, and these practices may be explored by research to make them usable for other organizations. Purpose: The aim of this research is to surface the idea management practices of a leading electric company with global operations. The study involves a large-sized, multi-site organization, which brings added challenges in managing ideas from employees in comparison to smaller organizations. The study aims to highlight how the idea management team strategizes for the campaign, sets terms and rewards for it, follows up with the employees, and, lastly, evaluates and awards ideas. Methodology: The study is conducted in a leading electric appliance corporation that has a large number of employees and operates in numerous regions of the world. A total of 7 interviews are carried out, involving the chief innovation officer, the innovation manager, and members of the idea management and evaluation teams. The interviews are carried out either on Skype or in person, based on the availability of the interviewee. Findings: As this is a working paper and the study is under way, it is anticipated that valuable information will be obtained about how idea management systems are governed and how idea campaigns are carried out. The findings may be particularly useful for innovation consultants as resources they can use to promote idea campaigning. The usefulness of the best practices highlighted as a result is, in any case, the most valuable output of this study.
Keywords: employee engagement, motivation, idea campaigns, large organizations, best practices, employees input, organizational output
Procedia PDF Downloads 175
6888 Relations of Progression in Cognitive Decline with Initial EEG Resting-State Functional Network in Mild Cognitive Impairment
Authors: Chia-Feng Lu, Yuh-Jen Wang, Yu-Te Wu, Sui-Hing Yan
Abstract:
This study investigated whether the functional brain networks constructed from the initial EEG (obtained when patients first visited the hospital) correlate with the progression of cognitive decline, calculated as the change in mini-mental state examination (MMSE) scores between the latest and initial examinations. We integrated the time-frequency cross mutual information (TFCMI) method, to estimate the EEG functional connectivity between cortical regions, with network analysis based on graph theory, to investigate the organization of functional networks in aMCI. Our findings suggest that a highly integrated functional network at the initial stage, with sufficient connection strengths, dense connections between local regions, and high network efficiency in processing information, may lead to a better prognosis for subsequent cognitive function in aMCI. In conclusion, functional connectivity can be a useful biomarker to assist in the prediction of cognitive decline in aMCI.
Keywords: cognitive decline, functional connectivity, MCI, MMSE
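The graph-theoretic quantities involved (connection strength, local density, efficiency) can be computed from a thresholded connectivity matrix. A sketch with a random stand-in for the TFCMI matrix; the electrode count and threshold are assumptions for illustration:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)

# Threshold a (here random) TFCMI connectivity matrix into a graph and
# compute the network measures related above to subsequent MMSE change.
tfcmi = rng.random((19, 19))          # 19 EEG electrodes, illustrative
tfcmi = (tfcmi + tfcmi.T) / 2         # symmetrize
np.fill_diagonal(tfcmi, 0)

G = nx.from_numpy_array(tfcmi * (tfcmi > 0.6))    # keep strong couplings

strength = dict(G.degree(weight="weight"))        # connection strength
clustering = nx.average_clustering(G, weight="weight")
efficiency = nx.global_efficiency(G)              # unweighted efficiency

print(f"mean strength={np.mean(list(strength.values())):.2f} "
      f"clustering={clustering:.2f} efficiency={efficiency:.2f}")
```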
Procedia PDF Downloads 386
6887 A Network-Theoretical Perspective on Music Analysis
Authors: Alberto Alcalá-Alvarez, Pablo Padilla-Longoria
Abstract:
The present paper describes a framework for constructing mathematical networks that encode relevant musical information from a music score for structural analysis. These graphs encompass statistical information about music elements such as notes, chords, rhythms, and intervals, and the relations among them, and so become helpful in visualizing and understanding important stylistic features of a music fragment. In order to build such networks, musical data is parsed out of a digital symbolic music file. This data undergoes different analytical procedures from graph theory, such as measuring the centrality of nodes, community detection, and entropy calculation. The resulting networks reflect important structural characteristics of the fragment in question: predominant elements, the connectivity between them, and the complexity of the information contained in it. Music pieces in different styles are analyzed, and the results are contrasted with the outcomes of traditional analysis in order to show the consistency and potential utility of this method for music analysis.
Keywords: computational musicology, mathematical music modelling, music analysis, style classification
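A minimal sketch of the construction: a directed network whose nodes are notes and whose weighted edges count successive-note transitions, queried for centrality and communities. The toy note sequence stands in for data parsed from a symbolic file:

```python
import networkx as nx

# Build a note-transition network and apply the graph measures named above.
notes = ["C4", "E4", "G4", "E4", "F4", "G4", "C4", "E4", "G4", "C5"]

G = nx.DiGraph()
for a, b in zip(notes, notes[1:]):
    w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
    G.add_edge(a, b, weight=w)

centrality = nx.degree_centrality(G)           # predominant elements
communities = nx.algorithms.community.greedy_modularity_communities(
    G.to_undirected(), weight="weight")        # community detection

print("Most central note:", max(centrality, key=centrality.get))
print("Communities:", [sorted(c) for c in communities])
```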
Procedia PDF Downloads 104
6886 Radar Charts Analysis to Compare the Level of Innovation in Mexico with Most Innovative Countries in Triple Helix Schema Economic and Human Factor Dimension
Authors: M. Peña Aguilar Juan, Valencia Luis, Pastrana Alberto, Nava Estefany, A. Martinez, M. Vivanco, A. Castañeda
Abstract:
This paper compares the innovation of Mexico, from an economic and human perspective, with the seven most innovative countries according to the Global Innovation Index 2013, published by the World Intellectual Property Organization (WIPO). The analysis considers nine dimensions: expenditure on R&D, intellectual property, an appropriate environment for conducting business, economic stability, triple helix for R&D, ICT infrastructure, education, human resources, and quality of life. Each dimension is represented by an indicator, which is then used to construct a radial graph that compares the innovative capacity of the countries analysed. As a result, a new indicator of innovation called the Area of Innovation is proposed. Observations are made from the results, and finally, as a conclusion, the items or dimensions in which Mexico lags in innovation are identified.
Keywords: dimension, measure, innovation level, economy, radar chart
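An area indicator of this kind reduces to the area of the radar-chart polygon, which the shoelace formula gives directly. A sketch with illustrative scores (not the GII data):

```python
import numpy as np

# Place the nine normalized indicator scores at equal angles and compute
# the enclosed polygon area (the "Area of Innovation" idea).
scores = np.array([0.45, 0.60, 0.38, 0.55, 0.42, 0.50, 0.47, 0.35, 0.52])

theta = 2 * np.pi * np.arange(len(scores)) / len(scores)
x, y = scores * np.cos(theta), scores * np.sin(theta)

# shoelace formula over the closed polygon
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
max_area = 0.5 * len(scores) * np.sin(2 * np.pi / len(scores))  # all-1 scores
print(f"Area of Innovation: {area:.3f} (max {max_area:.3f})")
```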
Procedia PDF Downloads 472
6885 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods
Authors: Yinwei Lin, Cha'o-Kuang Chen
Abstract:
In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution using DDM becomes inaccurate. Moreover, when more than seven terms of DDM are used, the numerical solution becomes very complicated. The present method, however, remains easy to calculate beyond seven terms and gives more precise numerical solutions. For relatively large values of the thermal conductivity parameter ε, LADM also has better accuracy than DDM.
Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear
Procedia PDF Downloads 336
6884 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms were compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
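A minimal sketch of the property-prediction step, an ensemble regressor mapping composition and temperature to viscosity, follows. The synthetic data is a stand-in for the industrial records, and the specific estimator is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Ensemble regression of (log) viscosity from oxide fractions + temperature.
n = 1000
comp = rng.dirichlet(np.ones(5), size=n)       # 5 oxide mole fractions
T = rng.uniform(900, 1300, n)                  # deg C
log_visc = 4 + 8 * comp[:, 0] - 0.004 * (T - 900) + rng.normal(0, 0.1, n)

X = np.column_stack([comp, T])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_visc, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```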
Procedia PDF Downloads 117
6883 Study of Divalent Phosphate Iron-Oxide Precursor Recycling Technology
Authors: Shinn-Dar Wu
Abstract:
This study aims to synthesize lithium iron phosphate cathode material using a recycling technology involving calcination without protective gas. The advantages include lower cost and easier production than traditional methods, which require a large amount of protective gas, so the novel technology may have extensive industrial applications. Given that traditional calcination without protective gas produces a large amount of Fe3+, this study developed a recycling technology for the divalent iron phosphate precursor (Fe2+) and conducted the related tests and analyses. It focused on the flow field design of the calcination process and the new technology, and it determined the best conditions for the powder calcination combination. The electrical properties were determined with button batteries and exhibited a capacity of 118 mAh/g (synthesis from new materials gives a capacity of about 122 mAh/g). The cost was reduced to 50% of the original.
Keywords: lithium battery, lithium iron phosphate, calcined technology, recycling technology
Procedia PDF Downloads 481
6882 Decision Trees Constructing Based on K-Means Clustering Algorithm
Authors: Loai Abdallah, Malik Yousef
Abstract:
A domain space for the data should reflect the actual similarity between objects, since objects belonging to the same cluster usually share some common traits even though their geometric distance might be relatively large. In general, the Euclidean distance between data points represented by a large number of features does not capture the actual relation between those points. In this study, we propose a new method to construct a different space, based on clustering, to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of a clustering algorithm. Over this distance, we train the decision trees classifier (DT-EC). The results obtained by applying DT-EC to 10 datasets confirm our hypothesis that embedding the EC space as a distance metric improves the performance.
Keywords: ensemble clustering, decision trees, classification, K nearest neighbors
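The EC distance can be sketched directly: run K-means several times, count how often two points share a cluster, and take one minus the co-occurrence frequency as the distance. A minimal illustration (a classifier such as the paper's decision tree would then operate in this space):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Ensemble-clustering distance: small for point pairs that usually end
# up in the same cluster across repeated K-means runs.
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
n, runs = len(X), 30

co = np.zeros((n, n))
for r in range(runs):
    labels = KMeans(n_clusters=3, n_init=5, random_state=r).fit_predict(X)
    co += (labels[:, None] == labels[None, :])

ec_distance = 1.0 - co / runs
print("EC distance, first 3 points:\n", ec_distance[:3, :3].round(2))
```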
Procedia PDF Downloads 191
6881 Identifying Network Subgraph-Associated Essential Genes in Molecular Networks
Authors: Efendi Zaenudin, Chien-Hung Huang, Ka-Lok Ng
Abstract:
Essential genes play an important role in the survival of an organism. It has been shown that cancer-associated essential genes are genes necessary for cancer cell proliferation, and these genes are potential therapeutic targets. It was also demonstrated that mutations of the cancer-associated essential genes give rise to resistance to immunotherapy in patients with tumors. In the present study, we focus on studying the biological effects of the essential genes from a network perspective. We hypothesize that one can analyze a biological molecular network by decomposing it into both three-node and four-node digraphs (subgraphs). These network subgraphs encode the regulatory interaction information among the network's genetic elements. In this study, the frequency of occurrence of the subgraph-associated essential genes in a molecular network was quantified using the statistical parameter of the odds ratio. The biological effects of subgraph-associated essential genes are discussed. In summary, the subgraph approach provides a systematic method for analyzing molecular networks, and it can capture useful biological information for biomedical research.
Keywords: biological molecular networks, essential genes, graph theory, network subgraphs
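An odds-ratio quantification of this kind reduces to a 2x2 contingency table per subgraph type. A sketch with illustrative counts (not the study's data):

```python
from scipy.stats import fisher_exact

# 2x2 table: genes that are / are not essential versus genes that do /
# do not appear in a given three-node subgraph type.
#                 in subgraph   not in subgraph
table = [[40, 60],      # essential genes
         [25, 175]]     # non-essential genes

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# OR > 1: essential genes occur in this subgraph type more often than chance.
```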
Procedia PDF Downloads 158
6880 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem
Authors: Kyugneun Lee, Ikjin Lee
Abstract:
Parameter estimation with inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve the rank deficient inverse problem is suggested. A multi-physics system with rank deficiency caused by response correlation is treated. Impeding information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed, using feature selection with singular value decomposition (SVD) for the grouping. Next, BN inference is used for sequential conditional estimation according to the group hierarchy. The directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.
Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis
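The SVD-based grouping step can be sketched on a stand-in sensitivity matrix: correlated responses reveal themselves in the singular spectrum, and responses can then be grouped by their dominant singular direction. All values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# SVD of a (responses x parameters) sensitivity matrix reveals rank
# deficiency; group responses by their dominant singular direction.
S = rng.random((6, 4))
S[:, 3] = S[:, 1] * 1.05      # correlated columns -> rank deficiency

U, s, Vt = np.linalg.svd(S, full_matrices=False)
rank = int(np.sum(s > 1e-6 * s[0]))
print("singular values:", s.round(3), "numerical rank:", rank)

groups = np.argmax(np.abs(U[:, :rank]), axis=1)
for g in range(rank):
    print(f"group {g}: responses {np.where(groups == g)[0].tolist()}")
```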
Procedia PDF Downloads 314
6879 Multiscale Structures and Their Evolution in a Screen Cylinder Wake
Authors: Azlin Mohd Azmi, Tongming Zhou, Akira Rinoshika, Liang Cheng
Abstract:
The turbulent structures in the wake (x/d = 10 to 60) of a screen cylinder have been examined to understand the roles of the various structures as they evolve downstream, by comparing them with those obtained in the wake of a solid circular cylinder at a Reynolds number Re of 7000. Using a wavelet multi-resolution technique, the flow structures are decomposed into a number of wavelet components based on their central frequencies. It is observed that in the solid cylinder wake, the large-scale structures (of frequencies f0 and 1.2 f0) make the largest contribution to the Reynolds stresses, although they start to lose their role significantly at x/d > 20. In the screen cylinder wake, the intermediate-scale structures (2 f0 and 4 f0) contribute the most to the Reynolds stresses at x/d = 10, before being overtaken by the large-scale structures (f0) further downstream.
Keywords: turbulent structure, screen cylinder, vortex, wavelet multi-resolution analysis
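A wavelet multi-resolution decomposition of this kind can be sketched with PyWavelets: decompose a velocity signal into dyadic frequency bands and reconstruct each band separately to assess its contribution. The synthetic signal and wavelet choice are illustrative assumptions:

```python
import numpy as np
import pywt

# Decompose a velocity signal into wavelet components and reconstruct
# each band to gauge its contribution to the fluctuation energy.
fs, f0 = 1000.0, 25.0                        # sampling rate, shedding freq
t = np.arange(0, 2, 1 / fs)
u = (np.sin(2 * np.pi * f0 * t)              # large-scale structures (f0)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)  # intermediate scale (2 f0)
     + 0.2 * np.random.default_rng(6).normal(size=t.size))

coeffs = pywt.wavedec(u, "db8", level=5)
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    component = pywt.waverec(keep, "db8")
    print(f"component {i}: variance contribution {component.var():.3f}")
```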
Procedia PDF Downloads 460