Search results for: exponentially weighted moving average (EWMA)
6019 Intelligent Swarm-Finding in Formation Control of Multi-Robots to Track a Moving Target
Authors: Anh Duc Dang, Joachim Horn
Abstract:
This paper presents a new approach to controlling robots so that they can quickly find their swarm while tracking a moving target through the obstacles of the environment. In this approach, an artificial potential field is generated between each free robot and the virtual attractive point of the swarm. This artificial potential field leads free robots back to their swarm. The swarm-finding of these free robots does not influence the general motion of their swarm or of the other robots. When a free robot reaches the swarm, its swarm search ends, and it then moves with its swarm toward the position of the target. The connections between member robots and their neighbours are controlled by an artificial attractive/repulsive force field between them, which avoids collisions and keeps the distances between them constant in an ordered formation. The effectiveness of the proposed approach has been verified in simulations. Keywords: formation control, potential field method, obstacle avoidance, swarm intelligence, multi-agent systems
Procedia PDF Downloads 441
6018 Runoff Estimation Using NRCS-CN Method
Authors: E. K. Naseela, B. M. Dodamani, Chaithra Chandran
Abstract:
GIS and remote sensing techniques facilitate accurate estimation of surface runoff from a watershed. In the present study, an attempt has been made to evaluate the applicability of the Natural Resources Conservation Service Curve Number (NRCS-CN) method using GIS and remote sensing techniques in the upper Krishna basin (69,425 sq. km). Landsat 7 satellite data (30 m resolution) for the year 2012 have been used for the preparation of the land use/land cover (LU/LC) map. The hydrologic soil group is mapped on a GIS platform. The weighted curve numbers (CN) for all five subcatchments were calculated on the basis of LU/LC type and hydrologic soil class in the area, taking the antecedent moisture condition into account. Monthly rainfall data were available for 58 rain gauge stations. An overlay technique is adopted for generating the weighted curve number. Results of the study show that land use changes determined from satellite images are useful in studying the runoff response of the basin. The results showed that there is no significant difference between observed and estimated runoff depths. For each subcatchment, statistically positive correlations were detected between observed and estimated runoff depth (0.6
6017 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on the CPU, issues GL commands to the GPU for rendering the images to be displayed by the currently running Web app, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more heavily demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach involves two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one is the bottleneck, if any. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience and is executed purely by the CPU. The proposed scheme uses the weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur, and that may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average. Keywords: interactive applications, power management, QoS, Web apps, WebGL
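The smoothing and thresholding logic described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the abstract only states that a weighted average of the Window Manager utilization is used, so the exponentially weighted moving average (EWMA) form, the smoothing factor, and the function names below are assumptions; only the 90% and 60% thresholds and the one-step adjustments are taken from the abstract.

```python
# Illustrative sketch, not the paper's code: smooth the Window Manager CPU
# utilization with an EWMA (the abstract says "weighted average"; the EWMA form
# and alpha are assumptions) and apply the 90% / 60% thresholds from the abstract.

def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average of a utilization trace (values 0.0-1.0)."""
    smoothed = samples[0]
    for x in samples[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed

def scaling_decision(window_manager_utilization):
    """One governor step, following the thresholds quoted in the abstract."""
    u = ewma(window_manager_utilization)
    if u > 0.90:
        return "decrease CPU performance level by one step"
    if u < 0.60:
        return "decrease GPU performance level by one step"
    return "keep current performance levels"

print(scaling_decision([0.95, 0.93, 0.96, 0.94]))
```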
Procedia PDF Downloads 193
6016 Bounds on the Laplacian Vertex PI Energy
Authors: Ezgi Kaya, A. Dilek Maden
Abstract:
A topological index is a number associated with a graph that is invariant under graph isomorphism. In theoretical chemistry, molecular structure descriptors (also called topological indices) are used for modeling the physicochemical, pharmacologic, toxicologic, biological, and other properties of chemical compounds. Let G be a graph with n vertices and m edges. For a given edge e = uv, the quantity nu(e) denotes the number of vertices closer to u than to v; the quantity nv(e) is defined analogously. The vertex PI index is defined as the sum of nu(e) and nv(e), where the sum is taken over all edges of G. The energy of a graph is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix of G, and the Laplacian energy of a graph is defined as the sum of the absolute values of the differences between the Laplacian eigenvalues and the average degree of G. In theoretical chemistry, the π-electron energy of a conjugated carbon molecule, computed using Hückel theory, coincides with the graph energy. Hence results on graph energy assume special significance. The Laplacian matrix of a graph G weighted by the vertex PI weighting is the Laplacian vertex PI matrix, and the Laplacian vertex PI eigenvalues of a connected graph G are the eigenvalues of its Laplacian vertex PI matrix. In this study, the Laplacian vertex PI energy of a graph G is defined. We also give some bounds for the Laplacian vertex PI energy of graphs in terms of the vertex PI index, the sum of the squares of the entries in the Laplacian vertex PI matrix, and the absolute value of the determinant of the Laplacian vertex PI matrix. Keywords: energy, Laplacian energy, Laplacian vertex PI eigenvalues, Laplacian vertex PI energy, vertex PI index
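For reference, the standard definitions behind the quantities named in this abstract can be written as follows (a sketch consistent with the abstract; the notation is ours, not the authors'):

```latex
% Vertex PI index (sum over all edges e = uv of G)
PI_v(G) = \sum_{e = uv \in E(G)} \bigl[ n_u(e) + n_v(e) \bigr]

% Graph energy: adjacency eigenvalues \lambda_1, \dots, \lambda_n
E(G) = \sum_{i=1}^{n} \lvert \lambda_i \rvert

% Laplacian energy: Laplacian eigenvalues \mu_1, \dots, \mu_n, average degree 2m/n
LE(G) = \sum_{i=1}^{n} \Bigl\lvert \mu_i - \frac{2m}{n} \Bigr\rvert
```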
Procedia PDF Downloads 246
6015 Analysing the Behaviour of Local Hurst Exponent and Lyapunov Exponent for Prediction of Market Crashes
Authors: Shreemoyee Sarkar, Vikhyat Chadha
Abstract:
In this paper, the local fractal properties and chaotic properties of financial time series are investigated by calculating two exponents, the Local Hurst Exponent (LHE) and the Lyapunov Exponent, in a moving time window of a financial series. For the purpose of this paper, the Dow Jones Industrial Average (DJIA) and the S&P 500, two of the major indices of the United States, have been considered. The behaviour of the above-mentioned exponents prior to some major crashes (the 1998 and 2008 crashes in the S&P 500 and the 2002 and 2008 crashes in the DJIA) is discussed. Also, the optimal length of the window for obtaining the best possible results is determined. Based on the outcomes of the above, an attempt is made to predict the crashes, and the accuracy of such an algorithm is assessed. Keywords: local Hurst exponent, Lyapunov exponent, market crash prediction, time series chaos, time series local fractal properties
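As an illustration of the windowed calculation described here, a minimal rescaled-range (R/S) estimate of the local Hurst exponent might look like the sketch below. It is not the authors' code; the window length, sub-window sizes, and function names are assumptions made for the example.

```python
# Minimal sketch of a local Hurst exponent estimate via rescaled-range (R/S)
# analysis inside a moving window. Window and lag choices are illustrative.
import numpy as np

def rs_hurst(x):
    """Estimate the Hurst exponent of a 1-D series by an R/S regression."""
    x = np.asarray(x, dtype=float)
    lags = [int(len(x) / k) for k in (2, 4, 8, 16)]   # sub-window sizes
    log_n, log_rs = [], []
    for n in lags:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())             # mean-adjusted cumulative deviations
            r = dev.max() - dev.min()                 # range
            s = w.std()                               # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)           # slope of log(R/S) vs log(n) ~ Hurst
    return slope

def local_hurst(series, window=250):
    """Local Hurst exponent over a moving window, as used for crash analysis."""
    return [rs_hurst(series[i - window:i]) for i in range(window, len(series))]
```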
Procedia PDF Downloads 153
6014 Microbial Quality of Raw Camel Milk Produced in South of Morocco
Authors: Maha Alaoui Ismaili, Bouchta Saidi, Mohamed Zahar, Abed Hamama
Abstract:
Thirty-one samples of raw camel milk obtained from the region of Laâyoune (South of Morocco) were examined for their microbial quality and the presence of some pathogenic bacteria (Staphylococcus aureus and Salmonella sp.). The pH of the samples ranged from 6.31 to 6.64, and their titratable acidity had a mean value of 18.56 °Dornic. The data obtained showed a strong microbial contamination, with an average total aerobic flora of 1.76 × 10⁸ cfu ml⁻¹ and very high fecal counts: 1.82 × 10⁷, 3.25 × 10⁶, and 3.75 × 10⁶ cfu ml⁻¹ on average for total coliforms, fecal coliforms, and enterococci, respectively. Yeasts and moulds were also found, at average levels of 3.13 × 10⁶ and 1.60 × 10⁵ cfu ml⁻¹, respectively. Salmonella sp. and S. aureus were detected in 13% and 30% of the milk samples, respectively. These results clearly indicate the lack of hygienic conditions of camel milk production and storage in this region. Lactic acid bacteria were found at the following average numbers: 4.25 × 10⁷, 4.45 × 10⁷, and 3.55 × 10⁷ cfu ml⁻¹ for lactococci, leuconostocs, and lactobacilli, respectively. Keywords: camel milk, microbial quality, Salmonella, Staphylococcus aureus
Procedia PDF Downloads 472
6013 Detecting Port Maritime Communities in Spain with Complex Network Analysis
Authors: Nicanor Garcia Alvarez, Belarmino Adenso-Diaz, Laura Calzada Infante
Abstract:
In recent years, researchers have shown an interest in modelling maritime traffic as a complex network. In this paper, we propose a bipartite weighted network to model maritime traffic and detect port maritime communities. The bipartite weighted network considers two different types of nodes. The first one represents Spanish ports, while the second one represents the countries with which there is major import/export activity. The flow among both types of nodes is modeled by weighting the volume of product transported. To illustrate the model, the data is segmented by each type of traffic. This will allow fine tuning and the creation of communities for each type of traffic and therefore finding similar ports for a specific type of traffic, which will provide decision-makers with tools to search for alliances or identify their competitors. The traffic with the greatest impact on the Spanish gross domestic product is selected, and the evolution of the communities formed by the most important ports and their differences between 2019 and 2009 will be analyzed. Finally, the set of communities formed by the ports of the Spanish port system will be inspected to determine global similarities between them, analyzing the sum of the membership of the different ports in communities formed for each type of traffic in particular.Keywords: bipartite networks, competition, infomap, maritime traffic, port communities
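As a rough illustration of the kind of model described above, the sketch below builds a small bipartite weighted graph of ports and partner countries and runs a generic community detection pass. The port and country names and flow volumes are invented, and greedy modularity maximisation is used only as a readily available stand-in for the Infomap method named in the keywords.

```python
# Illustrative sketch, not the authors' pipeline: bipartite weighted network of
# Spanish ports vs. partner countries, edge weight = transported volume, followed
# by a generic community detection pass (stand-in for Infomap).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
ports = ["Valencia", "Algeciras", "Barcelona"]          # hypothetical node sets
countries = ["China", "USA", "Morocco"]
G.add_nodes_from(ports, bipartite=0)
G.add_nodes_from(countries, bipartite=1)

flows = [("Valencia", "China", 120.0), ("Valencia", "USA", 80.0),
         ("Algeciras", "Morocco", 200.0), ("Barcelona", "China", 60.0)]
G.add_weighted_edges_from(flows)                        # weight = volume moved

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```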
Procedia PDF Downloads 150
6012 Identification of Suitable Rainwater Harvesting Sites Using Geospatial Techniques with AHP in Chacha Watershed, Jemma Sub-Basin Upper Blue Nile, Ethiopia
Authors: Abrha Ybeyn Gebremedhn, Yitea Seneshaw Getahun, Alebachew Shumye Moges, Fikrey Tesfay
Abstract:
Rainfed agriculture in Ethiopia has failed to produce enough food to meet the increasing demand for food. Pinpointing appropriate sites for rainwater harvesting (RWH) can make a substantial contribution to increasing the available water and enhancing agricultural productivity. The current study on the identification of potential RWH sites was conducted in the Chacha watershed, in the central highlands of Ethiopia, which is endowed with rugged topography. A Geographic Information System combined with the Analytical Hierarchy Process was used to generate the different maps for identifying appropriate RWH sites. In this study, 11 factors that determine RWH locations were considered, including slope, soil texture, runoff depth, land cover type, annual average rainfall, drainage density, lineament intensity, hydrologic soil group, antecedent moisture content, and distance to roads. The overall result shows that 10.50%, 71.10%, 17.90%, and 0.50% of the area was found to be highly suitable, moderately suitable, marginally suitable, and unsuitable for RWH, respectively. RWH site selection was found to be highly dependent on slope, soil texture, and runoff depth; moderately dependent on drainage density, annual average rainfall, and land use/land cover; and less dependent on the other factors. The highly suitable areas for rainwater harvesting expansion are lands with flat topography and a soil textural class of high water-holding capacity that can produce a high runoff depth. This study could serve as a baseline for planners and decision-makers and support strategy adoption for appropriate RWH site selection. Keywords: runoff depth, antecedent moisture condition, AHP, weighted overlay, water resource
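The weighted-overlay step behind such an AHP-based suitability map can be sketched as below. This is a toy example: the factor layers, weights, and class breaks are assumptions made for illustration, not values taken from the study.

```python
# Minimal weighted-overlay sketch under assumed values: each factor layer is
# rescaled to a common 0-1 suitability score and combined with AHP-style weights.
import numpy as np

# Hypothetical reclassified factor layers (0 = unsuitable ... 1 = highly suitable)
slope  = np.array([[0.9, 0.6], [0.3, 0.8]])
soil   = np.array([[0.7, 0.8], [0.5, 0.9]])
runoff = np.array([[0.8, 0.4], [0.6, 0.7]])

weights = {"slope": 0.5, "soil": 0.3, "runoff": 0.2}   # assumed AHP weights (sum to 1)

suitability = (weights["slope"] * slope
               + weights["soil"] * soil
               + weights["runoff"] * runoff)

# Classify into the four suitability classes used in the abstract
classes = np.digitize(suitability, bins=[0.25, 0.5, 0.75])        # 0..3
labels = ["unsuitable", "marginally", "moderately", "highly"]
print(suitability)
print(np.vectorize(lambda c: labels[c])(classes))
```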
Procedia PDF Downloads 54
6011 Language Development and Growing Spanning Trees in Children Semantic Network
Authors: Somayeh Sadat Hashemi Kamangar, Fatemeh Bakouie, Shahriar Gharibzadeh
Abstract:
In this study, we aim to exploit Maximum Spanning Trees (MSTs) of children's semantic networks to investigate their language development. To do so, we examine the graph-theoretic properties of word-embedding networks. The nodes of the networks are words that children learn prior to the age of 30 months, and the links are built from the cosine vector similarity of words normatively acquired by children prior to two and a half years of age. These networks are weighted graphs, and the strength of each link is determined by the numerical similarity of the two words (nodes) on either side of the link. To avoid reducing the weighted networks to binary ones by setting a threshold, constructing MSTs presents a solution. An MST is a unique sub-graph that connects all the nodes in such a way that the sum of all the link weights is maximized without forming cycles. MSTs, as the backbone of the semantic networks, are suitable for examining developmental changes in semantic network topology in children. From these trees, several parameters were calculated to characterize the developmental change in network organization. We showed that MSTs provide an elegant method sensitive enough to capture subtle developmental changes in semantic network organization. Keywords: maximum spanning trees, word-embedding, semantic networks, language development
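A minimal sketch of the graph construction and MST step described above is given below; the words and embedding vectors are toy stand-ins for the normative vocabulary and embeddings used in the study.

```python
# Toy sketch, not the study's pipeline: build a similarity-weighted word graph and
# extract its maximum spanning tree as the network backbone. Words and vectors
# are invented; a real analysis would use child-acquired vocabulary and embeddings.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
words = ["dog", "cat", "ball", "milk"]
vecs = {w: rng.normal(size=50) for w in words}              # stand-in word embeddings

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

G = nx.Graph()
for i, u in enumerate(words):
    for v in words[i + 1:]:
        G.add_edge(u, v, weight=cosine(vecs[u], vecs[v]))   # link strength = similarity

mst = nx.maximum_spanning_tree(G, weight="weight")          # backbone of the network
print(sorted(mst.edges(data="weight")))
```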
Procedia PDF Downloads 148
6010 Visual Servoing for Quadrotor UAV Target Tracking: Effects of Target Information Sharing
Authors: Jason R. King, Hugh H. T. Liu
Abstract:
This research presents simulation and experimental work in the visual servoing of a quadrotor Unmanned Aerial Vehicle (UAV) to stabilize overtop of a moving target. Most previous work in the field assumes static or slow-moving, unpredictable targets. In this experiment, the target is assumed to be a friendly ground robot moving freely on a horizontal plane, which shares information with the UAV. This information includes velocity and acceleration information of the ground target to aid the quadrotor in its tracking task. The quadrotor is assumed to have a downward-facing camera which is fixed to the frame of the quadrotor. Only onboard sensing for the quadrotor is utilized for the experiment, with a VICON motion capture system in place used only to measure ground truth and evaluate the performance of the controller. The experimental platform consists of an ArDrone 2.0 and a Create Roomba, communicating using Robot Operating System (ROS). The addition of the target’s information is demonstrated to help the quadrotor in its tracking task using simulations of the dynamic model of a quadrotor in Matlab Simulink. A nested PID control loop is utilized for inner-loop control the quadrotor, similar to previous works at the Flight Systems and Controls Laboratory (FSC) at the University of Toronto Institute for Aerospace Studies (UTIAS). Experiments are performed with ground truth provided by an indoor motion capture system, and the results are analyzed. It is demonstrated that a velocity controller which incorporates the additional information is able to perform better than the controllers which do not have access to the target’s information.Keywords: quadrotor, target tracking, unmanned aerial vehicle, UAV, UAS, visual servoing
Procedia PDF Downloads 342
6009 Pure Scalar Equilibria for Normal-Form Games
Authors: Herbert W. Corley
Abstract:
A scalar equilibrium (SE) is an alternative type of equilibrium in pure strategies for an n-person normal-form game G. It is defined using optimization techniques to obtain a pure strategy for each player of G by maximizing an appropriate utility function over the acceptable joint actions. The players’ actions are determined by the choice of the utility function. Such a utility function could be agreed upon by the players or chosen by an arbitrator. An SE is an equilibrium since no players of G can increase the value of this utility function by changing their strategies. SEs are formally defined, and examples are given. In a greedy SE, the goal is to assign actions to the players giving them the largest individual payoffs jointly possible. In a weighted SE, each player is assigned weights modeling the degree to which he helps every player, including himself, achieve as large a payoff as jointly possible. In a compromise SE, each player wants a fair payoff for a reasonable interpretation of fairness. In a parity SE, the players want their payoffs to be as nearly equal as jointly possible. Finally, a satisficing SE achieves a personal target payoff value for each player. The vector payoffs associated with each of these SEs are shown to be Pareto optimal among all such acceptable vectors, as well as computationally tractable.Keywords: compromise equilibrium, greedy equilibrium, normal-form game, parity equilibrium, pure strategies, satisficing equilibrium, scalar equilibria, utility function, weighted equilibrium
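For a two-player example, a greedy scalar equilibrium as described above can be computed by maximizing a utility function over the acceptable joint actions. A minimal sketch with an invented payoff matrix follows; the sum-of-payoffs utility is only one possible choice among those the abstract describes, and the numbers are made up.

```python
# Sketch of a greedy scalar equilibrium for a 2-player normal-form game: pick the
# joint pure action maximizing a scalar utility of the payoff vector (here, the
# sum of the two players' payoffs). The payoff matrix is an invented example.

# payoffs[(i, j)] = (payoff to player 1, payoff to player 2)
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def greedy_scalar_equilibrium(payoffs, utility=lambda p: sum(p)):
    """Return the joint action maximizing the chosen scalar utility."""
    best = max(payoffs, key=lambda a: utility(payoffs[a]))
    return best, payoffs[best]

joint_action, payoff_vector = greedy_scalar_equilibrium(payoffs)
print(joint_action, payoff_vector)   # (0, 0) with payoffs (3, 3) under this utility
```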
Procedia PDF Downloads 113
6008 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant compilation in this field; their work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study integral solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type, and the Hölder inequality are used. Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator
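For context, the classical one-dimensional Hardy inequality that the study builds on can be stated as follows (a standard formulation, given here for reference; the time-scale and Steklov-operator versions in the paper generalize it):

```latex
% Classical Hardy inequality (p > 1, f nonnegative and measurable on (0, \infty))
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\, dx
```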
Procedia PDF Downloads 97
6007 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm
Authors: Ghada Badr, Arwa Alturki
Abstract:
The biological function of an RNA molecule depends on its structure. The objective of the alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding and a discovery of other relationships between them. Besides, identifying non-coding RNAs (RNAs that are not translated into a protein) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed. Most of these methods are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and where N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted illustrating the efficiency of the CompPSA algorithm when compared to other approaches and on different real and simulated datasets. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance. Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining
Procedia PDF Downloads 459
6006 Forecasting Model to Predict Dengue Incidence in Malaysia
Authors: W. H. Wan Zakiyatussariroh, A. A. Nasuhar, W. Y. Wan Fairos, Z. A. Nazatul Shahreen
Abstract:
Forecasting dengue incidence in a population can provide useful information to facilitate the planning of public health interventions. Many studies on dengue cases in Malaysia have been conducted but are limited in modeling the outbreak and forecasting incidence. This article attempts to propose the most appropriate time series model to explain the behavior of dengue incidence in Malaysia for the purpose of forecasting future dengue outbreaks. Several seasonal auto-regressive integrated moving average (SARIMA) models were developed to model Malaysia's weekly dengue incidence using data collected from January 2001 to December 2011. The SARIMA (2,1,1)(1,1,1)52 model was found to be the most suitable model for Malaysia's dengue incidence, with the lowest values of the Akaike information criterion (AIC) and Bayesian information criterion (BIC) for in-sample fitting. The models were further evaluated for out-of-sample forecast accuracy using four different accuracy measures. The results indicate that SARIMA (2,1,1)(1,1,1)52 performed well for both in-sample fitting and out-of-sample evaluation. Keywords: time series modeling, Box-Jenkins, SARIMA, forecasting
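A hedged sketch of fitting the model order selected in this abstract with statsmodels is shown below; the weekly series here is synthetic, whereas the study used Malaysian weekly dengue counts from 2001 to 2011.

```python
# Sketch: fit a SARIMA(2,1,1)(1,1,1)_52 model to a weekly incidence series, as in
# the abstract. The data below are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
weeks = pd.date_range("2001-01-07", periods=520, freq="W")
y = pd.Series(50 + 10 * np.sin(2 * np.pi * np.arange(520) / 52)
              + rng.normal(0, 3, 520), index=weeks)      # synthetic weekly counts

model = SARIMAX(y, order=(2, 1, 1), seasonal_order=(1, 1, 1, 52))
result = model.fit(disp=False)
print(result.aic, result.bic)                            # in-sample comparison criteria
forecast = result.forecast(steps=12)                     # 12-week out-of-sample forecast
print(forecast.head())
```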
Procedia PDF Downloads 487
6005 Performance of LTE Multicast Systems in the Presence of the Colored Noise Jamming
Authors: S. Malisuwan, J. Sivaraks, N. Madan, N. Suriyakrai
Abstract:
The ongoing evolution of advanced wireless technologies makes it financially impossible for military operations to manufacture all of their own equipment. Therefore, Commercial-Off-The-Shelf (COTS) and Modified-Off-The-Shelf (MOTS) equipment is being considered for military missions with low-cost modifications. In this paper, we focus on LTE multicast systems for military communication in tactical environments under jamming conditions. We examine the influence of colored noise jamming on the performance of LTE multicast systems in terms of the average throughput. The simulation results demonstrate the degradation of the average throughput for different dynamic ranges of the colored noise jamming versus the average SNR. Keywords: performance, LTE, multicast, jamming, throughput
Procedia PDF Downloads 419
6004 An Alternative Framework of Multi-Resolution Nested Weighted Essentially Non-Oscillatory Schemes for Solving Euler Equations with Adaptive Order
Authors: Zhenming Wang, Jun Zhu, Yuchen Yang, Ning Zhao
Abstract:
In the present paper, an alternative framework is proposed to construct a class of finite difference multi-resolution nested weighted essentially non-oscillatory (WENO) schemes with an increasingly higher order of accuracy for solving inviscid Euler equations. These WENO schemes firstly obtain a set of reconstruction polynomials by a hierarchy of nested central spatial stencils, and then recursively achieve a higher order approximation through the lower-order precision WENO schemes. The linear weights of such WENO schemes can be set as any positive numbers with a requirement that their sum equals one and they will not pollute the optimal order of accuracy in smooth regions and could simultaneously suppress spurious oscillations near discontinuities. Numerical results obtained indicate that these alternative finite-difference multi-resolution nested WENO schemes with different accuracies are very robust with low dissipation and use as few reconstruction stencils as possible while maintaining the same efficiency, achieving the high-resolution property without any equivalent multi-resolution representation. Besides, its finite volume form is easier to implement in unstructured grids.Keywords: finite-difference, WENO schemes, high order, inviscid Euler equations, multi-resolution
Procedia PDF Downloads 146
6003 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions
Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba
Abstract:
Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques especially in an anatomically complex area like maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy, as adjunctive aids for diagnosing of such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed, where T1, T2 weighted images, diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps were constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of choline and lactate peaks were applied. Then, all patients underwent incisional or excisional biopsies within two weeks from MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance, where lactate had the highest areas under the curve (AUC) of 0.9 and choline was the lowest with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125, where any lesion above this value is supposed to be malignant with 90 % sensitivity and 83.3 % specificity. Despite that ADC maps had comparable AUCs still, the statistical measure that had the final say was the interpretation of likelihood ratio. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas for the maps, ADC map with 500 and 1000 b-values showed the best realistic combination of likelihood ratios, however, with lower sensitivity and specificity than lactate. Conclusion: Diffusion weighted imaging and magnetic resonance spectroscopy are state-of-art in the diagnostic arena and they manifested themselves as key players in the differentiation process of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline and no lactate.Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial
Procedia PDF Downloads 172
6002 Hybrid Algorithm for Frequency Channel Selection in Wi-Fi Networks
Authors: Cesar Hernández, Diego Giral, Ingrid Páez
Abstract:
This article proposes a hybrid algorithm for spectrum allocation in cognitive radio networks based on the Analytical Hierarchical Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to improve the performance of the spectrum mobility of secondary users in cognitive radio networks. To assess the performance of the proposed algorithm, a comparative analysis between the proposed AHP-TOPSIS, the Grey Relational Analysis (GRA), and the Multiplicative Exponent Weighting (MEW) algorithms is performed. Four evaluation metrics are used: the accumulative average of failed handoffs, the accumulative average of handoffs performed, the accumulative average of transmission bandwidth, and the accumulative average of transmission delay. The results of the comparison show that the AHP-TOPSIS algorithm provides 2.4 times better performance than the GRA algorithm and 1.5 times better than the MEW algorithm. Keywords: cognitive radio, decision making, hybrid algorithm, spectrum handoff, wireless networks
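The TOPSIS ranking step used in the hybrid algorithm can be sketched as follows. The channels, criteria, and weights below are invented for illustration; in the paper the weights would come from the AHP step.

```python
# Minimal TOPSIS sketch (illustrative data): rank candidate frequency channels
# from a decision matrix of per-channel metrics.
import numpy as np

# rows = candidate channels, columns = criteria
X = np.array([[0.8, 20.0, 5.0],
              [0.6, 35.0, 3.0],
              [0.9, 15.0, 7.0]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])        # assumed AHP-derived weights (sum to 1)
benefit = np.array([True, True, False])    # True = larger is better (e.g. bandwidth)

R = X / np.linalg.norm(X, axis=0)          # vector-normalise each criterion
V = R * weights                            # weighted normalised matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)        # 1 = best, 0 = worst

print("channel ranking (best first):", np.argsort(-closeness))
```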
Procedia PDF Downloads 542
6001 Impact Factor Analysis for Spatially Varying Aerosol Optical Depth in Wuhan Agglomeration
Authors: Wenting Zhang, Shishi Liu, Peihong Fu
Abstract:
As an indicator of air quality directly related to the concentration of ground-level PM2.5, the spatial-temporal variation and impact factor analysis of Aerosol Optical Depth (AOD) have been a hot topic in air pollution research. This paper addresses the non-stationarity and the autocorrelation (with a Moran's I index of 0.75) of the AOD in the Wuhan agglomeration (WHA), in central China, and uses geographically weighted regression (GWR) to identify the spatial relationship between AOD and its impact factors. The 3 km AOD product of the Moderate Resolution Imaging Spectroradiometer (MODIS) is used in this study. Beyond the socio-economic factor, land use density factors, vegetation cover, and elevation, a landscape metric is also considered as a factor. The results suggest that the GWR model is capable of dealing with spatially varying relationships, with R-squared, corrected Akaike Information Criterion (AICc), and standard residuals better than those of the ordinary least squares (OLS) model. The GWR results suggest that urban development, forest, the landscape metric, and elevation are the major driving factors of AOD. Generally, higher AOD tends to be located in places with more urban development, less forest, and flat terrain. Keywords: aerosol optical depth, geographically weighted regression, land use change, Wuhan agglomeration
Procedia PDF Downloads 357
6000 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufactures it is of great interest to know in advance, how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full-scale. Especially the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up to characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plain parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness especially when applied in the height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly are approaching toward constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
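For reference, the integral boundary layer quantities named in this abstract are conventionally defined as follows (standard definitions, with U∞ the free-stream velocity, u(y) the velocity profile, and δ the boundary layer thickness):

```latex
% Displacement thickness
\delta^{*} = \int_{0}^{\delta} \left( 1 - \frac{u(y)}{U_{\infty}} \right) dy

% Momentum thickness
\theta = \int_{0}^{\delta} \frac{u(y)}{U_{\infty}} \left( 1 - \frac{u(y)}{U_{\infty}} \right) dy

% Form (shape) factor
H = \frac{\delta^{*}}{\theta}
```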
Procedia PDF Downloads 307
5999 Spatial REE Geochemical Modeling at Lake Acıgöl, Denizli, Turkey: Analytical Approaches on Spatial Interpolation and Spatial Correlation
Authors: M. Budakoglu, M. Karaman, A. Abdelnasser, M. Kumral
Abstract:
The spatial interpolation and spatial correlation of the rare earth elements (REE) in the lake surface sediments of Lake Acıgöl and its surrounding lithological units are carried out using GIS techniques, namely the Inverse Distance Weighted (IDW) and Geographically Weighted Regression (GWR) techniques. The IDW technique, which performs the spatial interpolation, shows that lithological units such as the Hayrettin Formation north of Lake Acıgöl have higher REE contents than the lake sediments, as well as higher ∑LREE and ∑HREE contents. However, the Eu/Eu* values (based on the chondrite-normalized REE pattern) are higher in some lake surface sediments than in the lithological units, which points to a negative Eu anomaly. Also, the spatial interpolation of the V/Cr ratio indicates that the Acıgöl lithological units and lake sediments were deposited under oxic and dysoxic conditions. The spatial correlation is carried out with the GWR technique. This technique shows a high spatial correlation coefficient between ∑LREE and ∑HREE, which is higher in the lithological units (Hayrettin Formation and Cameli Formation) than in the other lithological units and the lake surface sediments. Also, the matching between the REEs and Sc and Al indicates that the REE abundances of the Lake Acıgöl sediments were derived from weathering of the local bedrock around the lake. Keywords: spatial geochemical modeling, IDW, GWR techniques, REE, lake sediments, Lake Acıgöl, Turkey
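The IDW interpolation step described above can be sketched as follows; the sample coordinates, concentrations, and the power parameter p = 2 are illustrative assumptions, not values reported by the study.

```python
# Minimal inverse-distance-weighting (IDW) sketch with made-up sample points.
import numpy as np

def idw(xy_known, values, xy_query, p=2.0):
    """Interpolate values at xy_query from scattered samples by inverse distance."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.where(d == 0, 1e-12, d)          # avoid division by zero at sample points
    w = 1.0 / d**p
    return (w * values).sum(axis=1) / w.sum(axis=1)

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
ree_sum = np.array([120.0, 90.0, 150.0])    # hypothetical ∑REE concentrations (ppm)
grid = np.array([[0.5, 0.5], [0.2, 0.8]])
print(idw(samples, ree_sum, grid))
```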
Procedia PDF Downloads 556
5998 Time Series Analysis of Radon Concentration at Different Depths in an Underground Goldmine
Authors: Theophilus Adjirackor, Frederic Sam, Irene Opoku-Ntim, David Okoh Kpeglo, Prince K. Gyekye, Frank K. Quashie, Kofi Ofori
Abstract:
Indoor radon concentrations were collected monthly over a period of one year at 10 different levels in an underground goldmine, and the data were analyzed using a four-point moving average time series to determine the relationship between the depth of the underground mine and the indoor radon concentration. The detectors were installed in batches within four quarters. The measurements were carried out using LR115 solid-state nuclear track detectors. Statistical models are applied in the prediction and analysis of the radon concentration at various depths. The time series model predicted a positive relationship between the depth of the underground mine and the indoor radon concentration. Thus, elevated radon concentrations are expected at deeper levels of the underground mine, but the relationship was insignificant at the 5% level of significance, with a negative adjusted R² (R² = −0.021), due to appropriate engineering controls and an adequate ventilation rate in the underground mine. Keywords: LR115, radon concentration, time series, underground goldmine
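A sketch of the kind of smoothing and depth-trend check described above is given below with synthetic data; the depths, concentrations, units, and the linear fit are illustrative assumptions, not the study's values.

```python
# Sketch with synthetic data: a four-point moving average of monthly radon
# readings per mine level, followed by a simple linear fit of the smoothed
# concentration against depth. All numbers are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
months = pd.date_range("2022-01-01", periods=12, freq="MS")
levels = {depth: pd.Series(100 + 0.05 * depth + rng.normal(0, 15, 12), index=months)
          for depth in [100, 200, 300, 400]}             # hypothetical depths (m)

smoothed = {d: s.rolling(window=4).mean() for d, s in levels.items()}

depths = np.array(list(smoothed))
means = np.array([smoothed[d].mean() for d in depths])   # mean smoothed level per depth
slope, intercept = np.polyfit(depths, means, 1)          # depth-concentration trend
print(f"trend: {slope:.3f} (assumed Bq/m3) per metre of depth, illustrative only")
```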
Procedia PDF Downloads 47
5997 The Role of Leisure in Older Adults Transitioning to New Homes
Authors: Kristin Prentice, Carri Hand
Abstract:
As the Canadian population ages and chronic health conditions continue to escalate, older adults will require various types of housing, such as long term care or retirement homes. Moving to a new home may require a change in leisure activities and social networks, which could be challenging to maintain identity and create a sense of home. Leisure has been known to help older adults maintain or increase their quality of life and life satisfaction and may help older adults in moving to new homes. Sense of home and identity within older adults' transitions to new homes are concepts that may also relate to leisure engagement. Literature is scant regarding the role of leisure in older adults moving to new homes and how the sense of home and identity inter-relate. This study aims to explore how leisure may play a role in older adults' transitioning to new homes, including how sense of home and identity inter-relate. An ethnographic approach will be used to understand the culture of older adults transitioning to new homes. This study will involve older adults who have recently relocated to a mid-sized city in Ontario, Canada. The study will focus on the older adult’s interactions with and connections to their home environment through leisure. Data collection will take place via video-conferencing and will include a narrative interview and two other interviews to discuss an activity diary of leisure engagement pre and post move and mental maps to capture spaces where participants engaged in leisure. Participants will be encouraged to share photographs of leisure engagement taken inside and outside their home to help understand the social spaces the participants refer to in their activity diaries and mental maps. Older adults attempt to adjust to their new homes by maintaining their identity, developing a sense of home through creating attachment to place, and maintaining social networks, all of which have been linked to engaging in leisure. This research will provide insight into the role of leisure in this transition process and the extent that the home and community can contribute to aiding their transition to the new home. This research will contribute to existing literature on the inter-relationships of leisure, sense of home, and identity and how they relate to older adults moving to new homes. This research also has potential for influencing policy and practice for meeting the housing needs of older adults.Keywords: leisure, older adults, transition, identity
Procedia PDF Downloads 122
5996 Early Detection of Major Earthquakes Using Broadband Accelerometers
Authors: Umberto Cerasani, Luca Cerasani
Abstract:
Methods for earthquake forecasting have been intensively investigated in recent decades, but there is still no universal solution agreed upon by seismologists. Rock failure is most often preceded by a tiny elastic movement in the failure area and by the appearance of micro-cracks. These micro-cracks can be detected at the soil surface and represent useful earthquake precursors. The aim of this study was to verify whether tiny raw acceleration signals (in the 10⁻¹ to 10⁻⁴ cm/s² range) prior to the arrival of the main primary waves could be exploitable and related to earthquake magnitude. Mathematical tools such as the Fast Fourier Transform (FFT), moving averages, and wavelets have been applied to raw acceleration data available on the ITACA web site, and the study focused on one of the most unpredictable earthquakes, namely the one of August 24th, 2016 at 01:36 that occurred in the central Italy area. It appeared that these tiny acceleration signals preceding the main P-waves have different patterns in both the frequency and time domains for high-magnitude earthquakes compared to lower ones. Keywords: earthquake, accelerometer, earthquake forecasting, seism
Procedia PDF Downloads 146
5995 MCDM Spectrum Handover Models for Cognitive Wireless Networks
Authors: Cesar Hernández, Diego Giral, Fernando Santa
Abstract:
The spectral handoff is important in cognitive wireless networks to ensure an adequate quality of service and performance for secondary user communications. This work proposes a benchmarking of performance of the three spectrum handoff models: VIKOR, SAW and MEW. Four evaluation metrics are used. These metrics are, accumulative average of failed handoffs, accumulative average of handoffs performed, accumulative average of transmission bandwidth and, accumulative average of the transmission delay. As a difference with related work, the performance of the three spectrum handoff models was validated with captured data of spectral occupancy in experiments realized at the GSM frequency band (824 MHz-849 MHz). These data represent the actual behavior of the licensed users for this wireless frequency band. The results of the comparative show that VIKOR Algorithm provides 15.8% performance improvement compared to a SAW Algorithm and, 12.1% better than the MEW Algorithm.Keywords: cognitive radio, decision making, MEW, SAW, spectrum handoff, VIKOR
Procedia PDF Downloads 439
5994 Rail Degradation Modelling Using ARMAX: A Case Study Applied to Melbourne Tram System
Authors: M. Karimpour, N. Elkhoury, L. Hitihamillage, S. Moridpour, R. Hesami
Abstract:
There is a necessity among rail transportation authorities for a better understanding of rail track degradation over time and the factors influencing it. They need an accurate technique to identify when rail tracks fail or need maintenance. In turn, this will help to increase the level of safety and comfort for passengers and vehicles, as well as improve the cost effectiveness of maintenance activities. An accurate model can play a key role in predicting the long-term behaviour of railroad tracks and can decrease the cost of maintenance. In this research, rail track degradation is predicted using an autoregressive moving average model with exogenous input (ARMAX). An ARMAX model has been fitted to Melbourne tram data to estimate the tram track degradation. Gauge values and rail usage in Million Gross Tonnes (MGT) are the main parameters used in the model. The developed model can accurately predict the future status of the tram tracks. Keywords: ARMAX, dynamic systems, MGT, prediction, rail degradation
Procedia PDF Downloads 249
5993 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, which results in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. However, semantic interpretation is a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands based on their contribution score. Thus, the relevant bands are adaptively selected based on the weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate the efficiency of this approach compared to various existing dimensionality reduction techniques. According to the experimental results, we can conclude that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI. Keywords: band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation
Procedia PDF Downloads 354
5992 Consideration of Starlight Waves Redshift as Produced by Friction of These Waves on Its Way through Space
Authors: Angel Pérez Sánchez
Abstract:
In 1929, a light redshift was discovered in distant galaxies and was interpreted as produced by galaxies moving away from each other at high speed. This interpretation led to the consideration of a new source of energy, which was called Dark Energy. Redshift is a loss of light wave frequency produced by galaxies moving away at high speed, but the loss of frequency can also be produced by the friction of light waves on their way to Earth. This friction is impossible because outer space is empty, but if it were not empty and a medium existed in this empty space, it would be possible. The consequences would be extraordinary because Universe acceleration and Dark Energy would be in doubt. This article presents evidence that empty space is actually a medium occupied by different particles, among them the most significant would-be Graviton or Higgs Boson, because let's not forget that gravity also affects empty space.Keywords: Big Bang, dark energy, doppler effect, redshift, starlight frequency reduction, universe acceleration
Procedia PDF Downloads 63
5991 The Development of XML Resume System in Thailand
Authors: Jarumon Nookhong, Thanakorn Uiphanit
Abstract:
This study is a research and development project which aims to develop XML Resume System to be the standard system in Thailand as well as to measure the efficiency of the XML Resume System in Thailand. This research separates into 2 stages: 1) to develop XML Document System to be the standard in Thailand, and 2) to experiment the system performance. The sample in this research is committed by 50 specialists in the field of human resources by selecting specifically. The tool that uses in this research is XML Resume System in Thailand and the performance evaluation format of system while the analysis of the data is calculated by using average and standard deviation. The result of the research found that the development of the XML Resume System that aims to be the standard system in Thailand had the result 2.67 of the average which is in a good level. The evaluation in testing the performance of the system had been done by the specialists of human resources who use the XML Resume system. When analyzing each part, it found out that the abilities according to the user’s requirement from specialists in the field of human resources, the convenience and easiness in usages, and the functional competency are respectively in a good level. The average of the ability according to the user’s need from specialists of human resources is 2.92. The average of the convenience and easiness in usages is 2.56. The average of functional competency is 2.53. These can be used as the standard effectively.Keywords: resume, XML, XML schema, computer science
Procedia PDF Downloads 410
5990 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project, promoting the design of a light temporary installable textile defence system against flood. In case river water levels increase abruptly especially in winter time, one can expect massive extra load on a textile protective structure in term of impact as a result of floating debris and even tree trunks. Estimation of this impulsive force on such structures is of a great importance, as it can ensure the reliability of the design in critical cases. This fact provides the motivation for the numerical analysis of a fluid structure interaction application, comprising flexible slender shaped and free-surface water flow, where an accelerated heavy flotsam tends to approach the membrane. In this context, the analysis on both the behavior of the flexible membrane and its interaction with moving flotsam is conducted by finite elements based solvers of the explicit solver and implicit Abacus solver available as products of SIMULIA software. On the other hand, a study on how free surface water flow behaves in response to moving structures, has been investigated using the finite volume solver of Star CCM+ from Siemens PLM Software. An automatic communication tool (CSE, SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in form of an implicit coupling algorithm makes it possible for partitioned domains to be interconnected powerfully. The applied procedure ensures stability and convergence in the solution of these complicated issues, albeit with high computational cost; however, the other complexity of this study stems from mesh criterion in the fluid domain, where the two structures approach each other. This contribution presents the approaches for the establishment of a convergent numerical solution and compares the results with experimental findings.Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
Procedia PDF Downloads 389