Search results for: mathematical programming problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9142

8392 An Improved Lower Bound for Minimal-Area Convex Cover for Closed Unit Curves

Authors: S. Som-Am, B. Grechuk

Abstract:

Moser’s worm problem is an unsolved problem in geometry which asks for the minimal area of a convex region in the plane that can cover all curves of unit length, assuming that curves may be rotated and translated to fit inside the region. We study a version of this problem asking for a minimal convex cover for closed unit curves. By combining geometric methods with a numerical box-search algorithm, we show that any such cover must have an area of at least 0.0975. This improves the best previous lower bound of 0.096694. In fact, we show that the minimal area of the convex hull of a circle, an equilateral triangle, and a rectangle of perimeter 1 is between 0.0975 and 0.09763.
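
As a toy illustration of the quantity being bounded (not of the paper’s box-search algorithm), the Python sketch below computes the area of the convex hull of the three unit-perimeter shapes for one arbitrary, non-optimized placement; the placement offsets and rectangle aspect ratio are assumptions.

```python
# Toy sketch (not the authors' box-search algorithm): for one arbitrary
# placement of the three unit-perimeter shapes, compute the area of the
# convex hull of their union. The paper bounds the minimum of this area
# over all rotations and translations.
import numpy as np
from scipy.spatial import ConvexHull

def circle_pts(n=200):
    r = 1.0 / (2.0 * np.pi)                      # unit-perimeter circle
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.c_[r * np.cos(t), r * np.sin(t)]

def triangle_pts():
    s = 1.0 / 3.0                                # unit-perimeter equilateral triangle
    return np.array([[0.0, 0.0], [s, 0.0], [s / 2.0, s * np.sqrt(3) / 2.0]])

def rectangle_pts(a=0.15):
    b = 0.5 - a                                  # unit perimeter: 2*(a+b) = 1
    return np.array([[0.0, 0.0], [a, 0.0], [a, b], [0.0, b]])

def hull_area(placed_shapes):
    pts = np.vstack(placed_shapes)
    return ConvexHull(pts).volume                # 'volume' is the area in 2-D

# One fixed (non-optimized) placement; the paper shows the optimal value
# lies between 0.0975 and 0.09763.
area = hull_area([circle_pts(),
                  triangle_pts() + [0.02, 0.0],
                  rectangle_pts() + [0.0, 0.01]])
print(f"convex hull area for this placement: {area:.5f}")
```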

Keywords: Moser’s worm problem, closed arcs, convex cover, minimal-area cover

Procedia PDF Downloads 211
8391 Frequency of Problem Drinking and Depression in Males with a History of Alcohol Consumption Admitted to a Tertiary Care Setting in Southern Sri Lanka

Authors: N. H. D. P. Fonseka, I. H. Rajapakse, A. S. Dissanayake

Abstract:

Background: Problem drinking, namely alcohol dependence (AD) and alcohol abuse (AA), is associated with major medical, social and economic adverse consequences. Problem drinking behaviour is noted among those admitted to hospitals due to alcohol-related medical/surgical complaints as well as those with unrelated complaints. The literature shows an association between alcohol consumption and depression. The aims of this study were to determine the frequency of problem drinking and depression among males with a history of alcohol consumption admitted to a tertiary care setting in Southern Sri Lanka. Method: Two hundred male patients who consumed alcohol, receiving care in medical and surgical wards of Teaching Hospital Galle, were assessed. The validated J12 questionnaire of the Mini International Neuropsychiatric Interview was administered to determine the frequency of AA and AD, and the validated PHQ-9 questionnaire to determine the prevalence and severity of depression. Results: Sixty-three participants (31%) had problem drinking. Of them, 61% had AD and 39% had AA. Depression was noted in 39 (19%) subjects. Among those who reported alcohol consumption not amounting to problem drinking, depression was noted in 23 (16%) participants: mild depression in 17, moderate in five, and moderately severe in one. Among those who had problem drinking, 16 (25%) had depression: mild depression in four, moderate in seven, moderately severe in three, and severe in two. Conclusions: A high proportion of alcohol users had problem drinking. The adverse consequences associated with problem drinking place a major strain on the health system, especially in a low-resource setting where healthcare spending is limited and alcohol cessation support services are not well organised. Thus, alcohol consumption and problem drinking behaviour need to be inquired into in all medical consultations. The community prevalence of depression in Sri Lanka is approximately 10%. Depression among those consuming alcohol was two times higher than in the general population, and the rate of depression among those with problem drinking was especially high, being 2.5 times more common than in the general population. A substantial proportion of these patients with depression had moderately severe or severe depression. When depression coexists with problem drinking, it may increase the tendency to consume alcohol as well as act as a barrier to the success of alcohol cessation interventions. Thus, screening all patients who consume alcohol for depression, especially those who are problem drinkers, becomes an important step in their clinical evaluation. In addition, in view of the high prevalence of problem drinking and coexistent depression, the need to organise a structured alcohol cessation support service in Sri Lanka, as well as the need to increase access to psychological evaluation and treatment for those with problem drinking, is highlighted.

Keywords: alcohol abuse, alcohol, depression, problem drinking

Procedia PDF Downloads 160
8390 Left to Right-Right Most Parsing Algorithm with Lookahead

Authors: Jamil Ahmed

Abstract:

The Left to Right-Right Most (LR) parsing algorithm is a widely used algorithm of syntax analysis. It is driven by a parsing table, which is extracted from the grammar. The parsing table specifies the actions to be taken during parsing, and the algorithm requires that the table contain no action conflicts for the same input symbol. This requirement imposes a condition on the class of grammars over which LR algorithms work. However, there are grammars whose parsing tables hold action conflicts. In such cases, the algorithm needs the capability of scanning (looking ahead at) input symbols beyond the current one. In this paper, a ‘Left to Right’-‘Right Most’ parsing algorithm with lookahead capability is introduced. This ‘look-ahead’ capability is the major contribution of the paper. The practicality of the proposed algorithm is substantiated by a parser implementation for the Context Free Grammar (CFG) of an already proposed programming language, ‘State Controlled Object Oriented Programming’ (SCOOP). SCOOP’s grammar has 125 productions and 192 item sets. The algorithm parses SCOOP even though the grammar requires looking ahead at input symbols due to action conflicts in its parsing table. The proposed LR parsing algorithm with lookahead capability can be viewed as an optimization of the ‘Simple Left to Right’-‘Right Most’ (SLR) parsing algorithm.
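
For illustration only, the sketch below is a minimal table-driven shift-reduce (LR) driver for a toy grammar; it is not SCOOP’s parser, and the conflict-free toy table is an assumption. The marked hook is where the paper’s lookahead mechanism would inspect further input symbols to resolve a conflicting ACTION entry.

```python
# Minimal table-driven shift-reduce (LR) driver for the toy grammar
#   E -> E + n | n
# This grammar's SLR table has no conflicts; the paper's contribution is the
# point marked below, where a conflicting ACTION entry would be resolved by
# looking ahead at further input symbols instead of failing.
ACTION = {
    (0, 'n'): ('s', 1),
    (1, '+'): ('r', 0), (1, '$'): ('r', 0),        # reduce E -> n
    (2, '+'): ('s', 3), (2, '$'): ('acc', None),
    (3, 'n'): ('s', 4),
    (4, '+'): ('r', 1), (4, '$'): ('r', 1),        # reduce E -> E + n
}
GOTO = {(0, 'E'): 2}
RULES = [('E', 1), ('E', 3)]                       # (lhs, length of right-hand side)

def parse(tokens):
    tokens = tokens + ['$']
    stack, i = [0], 0
    while True:
        state, sym = stack[-1], tokens[i]
        entry = ACTION.get((state, sym))
        # >>> lookahead hook: if 'entry' held several conflicting actions,
        # >>> tokens[i+1:] would be inspected here to choose one of them.
        if entry is None:
            return False
        kind, arg = entry
        if kind == 's':                            # shift the symbol, push new state
            stack += [sym, arg]
            i += 1
        elif kind == 'r':                          # reduce by rule number 'arg'
            lhs, length = RULES[arg]
            del stack[len(stack) - 2 * length:]
            stack += [lhs, GOTO[(stack[-1], lhs)]]
        else:                                      # accept
            return True

print(parse(['n', '+', 'n', '+', 'n']))            # True
print(parse(['n', '+', '+']))                      # False
```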

Keywords: left to right-right most parsing, syntax analysis, bottom-up parsing algorithm

Procedia PDF Downloads 126
8389 Load Management Using Multiple Sequential Load Shaping Techniques

Authors: Amira M. Attia, Karim H. Youssef, Nabil H. Abbasi

Abstract:

Demand Side Management (DSM) is an essential characteristic of current and future smart grid systems. As one of the DSM functions, load management aims to control customers’ total electric consumption and the utility’s load factor by using various load shaping techniques. However, applying load shaping techniques such as load shifting, peak clipping, or strategic conservation individually does not provide the desired level of improvement in load factor and/or customer bill reduction. In this paper, two combined load shaping models will be simulated as constrained optimization problems: the application of load shifting and strategic conservation together, and the application of load shifting and peak clipping together. The problems will be formulated and solved using the disciplined convex programming toolbox CVX under MATLAB R2013b. Simulation results will be evaluated and compared in order to determine the most effective multi-technique model for improving the load curve.
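
A hedged illustration of one ingredient of such models (the paper itself uses the CVX toolbox under MATLAB): load shifting written as a convex program in Python/cvxpy, with an entirely assumed 24-hour load profile and shift limits.

```python
# Illustrative cvxpy analogue of a load-shifting model: reshape a 24-hour load
# profile by moving a limited amount of flexible energy per hour, minimizing
# the peak so that the load factor (average/peak) improves. All data are
# made-up assumptions, not the paper's case study.
import numpy as np
import cvxpy as cp

hours = 24
base = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, hours))   # assumed baseline load (kW)
shiftable = 10.0                                             # max kW that can be moved per hour

x = cp.Variable(hours)                                       # shaped load profile
peak = cp.Variable()

constraints = [
    cp.sum(x) == base.sum(),          # load shifting: total energy is preserved
    x >= base - shiftable,            # per-hour shift limits
    x <= base + shiftable,
    x >= 0,
    x <= peak,                        # epigraph form of the peak
]
prob = cp.Problem(cp.Minimize(peak), constraints)
prob.solve()

print("load factor before:", base.mean() / base.max())
print("load factor after :", x.value.mean() / x.value.max())
```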

Keywords: convex programming, demand side management, load shaping, multiple, building energy optimization

Procedia PDF Downloads 313
8388 Application Programming Interface Security in Embedded and Open Finance

Authors: Andrew John Zeller, Artjoms Formulevics

Abstract:

Banking and financial services are rapidly transitioning from being monolithic structures focusing merely on their own financial offerings to becoming integrated players in multiple customer journeys and supply chains. Banks themselves are refocusing on being liquidity providers and underwriters in these networks, while the general concept of ‘embeddedness’ builds on readily available API (Application Programming Interface) architectures in the market to deliver services flexibly to various requestors, e.g., online retailers who need finance and insurance products to better serve their customers. With this new flexibility come new requirements for enhanced cybersecurity. API structures are more decentralized and inherently prone to change. Unfortunately, this has not been comprehensively addressed in the literature. This paper tries to fill this gap by looking at security approaches and technologies relevant to API architectures found in embedded finance. After presenting the research methodology applied and introducing the major bodies of knowledge involved, the paper will discuss six dominant technology trends shaping high-level financial services architectures. Subsequently, embedded finance and the respective usage of API strategies will be described. Building on this, security considerations for APIs in financial and insurance services will be elaborated on before concluding with some ideas for possible further research.

Keywords: embedded finance, embedded banking strategy, cybersecurity, API management, data security, IT management

Procedia PDF Downloads 42
8387 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive the particle effective radius and the volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of +/- 0.03 is achieved. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
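
A minimal, self-contained illustration of the truncated SVD idea on a synthetic ill-posed problem (the operational retrieval uses a triple of regularization parameters and, alternatively, Pade iteration); the kernel, true profile, and noise level below are assumptions.

```python
# Toy illustration of truncated SVD (TSVD) regularization for an ill-posed
# linear problem A x = b with noisy data; the truncation index k acts as the
# regularization parameter. Keeping all singular values (k = n) lets the
# smallest ones amplify the noise, which is the effect described above.
import numpy as np

rng = np.random.default_rng(0)
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)   # ill-conditioned smoothing kernel
x_true = np.exp(-((t - 0.4) / 0.1) ** 2)             # "true" distribution (toy)
b = A @ x_true + 0.01 * rng.standard_normal(n)       # noisy synthetic data

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Keep only the k largest singular values; the small ones amplify noise."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

for k in (5, 10, 20, n):
    err = np.linalg.norm(tsvd_solve(k) - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}  relative error = {err:.3f}")
```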

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 343
8386 Complementary Mathematical Model for Underwater Vehicles under Load Variation Test Conditions

Authors: Erim Koyun

Abstract:

This paper aims to construct a mathematical model for underwater vehicles under load variation test conditions. Propeller effects on the underwater vehicle are investigated. A body with a counter-rotating propeller is analyzed by CFD methods, and the forces and moments are obtained. The propeller’s effects on the vehicle’s hydrodynamic performance under load variation conditions are investigated. Additionally, the pressure contours are examined for differences between the load conditions. The axial force equation is established using hydrodynamic coefficients, which comprise resistance, thrust, and additional coefficients arising from the load variations. The additional coefficients help to express the axial force on the underwater vehicle completely. When the vehicle accelerates, an additional force occurs besides the increment in thrust; this is the propeller’s effect on the body, and the mathematical model covers it. For the CFD analysis, the incompressible, three-dimensional, unsteady Reynolds-Averaged Navier-Stokes equations are used, and the numerical results are verified against experimental results. The overall goal of this study is to present a complementary mathematical model for a body with a counter-rotating propeller.

Keywords: counter rotating propeller, CFD, hydrodynamic mathematical model, hydrodynamics analysis, thrust deduction

Procedia PDF Downloads 136
8385 Metrics and Methods for Improving Resilience in Agribusiness Supply Chains

Authors: Golnar Behzadi, Michael O'Sullivan, Tava Olsen, Abraham Zhang

Abstract:

By definition, increasing supply chain resilience improves the supply chain’s ability to return to normal, or to an even more desirable situation, quickly and efficiently after being hit by a disruption. This is especially critical in agribusiness supply chains, where the products are perishable and have a short life-cycle. In this paper, we propose a resilience metric that captures, in terms of both performance and time, the recovery process of an agribusiness supply chain following either a supply-side or a demand-side disruption. We build a model that determines optimal supply chain recovery planning decisions and selects the best resilience strategies to minimize the loss of profit during the recovery time window. The model is formulated as a two-stage stochastic mixed-integer linear programming problem and solved with a branch-and-cut algorithm. The results show that the optimal recovery schedule is highly dependent on the duration of the time window allowed for recovery. In addition, the profit loss during recovery is reduced by utilizing the proposed resilient actions.
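
A stylized sketch of a two-stage stochastic mixed-integer model in the same spirit, written with the open-source PuLP library; it is not the paper’s formulation, and all scenario data are invented.

```python
# Stage 1: decide whether to invest in a resilience strategy (backup supply).
# Stage 2, per disruption scenario: choose how much demand can be served, to
# minimize expected profit loss over the recovery time window. Branch-and-cut
# is left to the MILP solver (CBC). All numbers are illustrative assumptions.
import pulp

scenarios = {"short": 0.6, "long": 0.4}          # probability of each disruption
lost_supply = {"short": 30.0, "long": 70.0}      # units unavailable during recovery
demand, margin = 100.0, 5.0                      # units per window, profit per unit
backup_cost, backup_capacity = 120.0, 40.0

m = pulp.LpProblem("recovery_planning", pulp.LpMinimize)
use_backup = pulp.LpVariable("use_backup", cat="Binary")            # stage-1 decision
served = {s: pulp.LpVariable(f"served_{s}", lowBound=0) for s in scenarios}

for s in scenarios:
    # Capacity in scenario s: normal supply minus the disruption, plus backup
    # capacity if the stage-1 investment was made.
    m += served[s] <= demand - lost_supply[s] + backup_capacity * use_backup
    m += served[s] <= demand

# Minimize investment cost plus expected profit loss during recovery.
m += backup_cost * use_backup + pulp.lpSum(
    p * margin * (demand - served[s]) for s, p in scenarios.items()
)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("invest in backup:", int(use_backup.value()))
print({s: served[s].value() for s in scenarios})
```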

Keywords: agribusiness supply chain, recovery, resilience metric, risk management

Procedia PDF Downloads 397
8384 Six-Phase Tooth-Coil Winding Starter-Generator Embedded in Aerospace Engine

Authors: Flur R. Ismagilov, Vyacheslav E. Vavilov, Denis V. Gusakov

Abstract:

This paper is devoted to solving the problem of increasing the electrification of aircraft engines by installing a synchronous generator on the high-pressure shaft. Technical solutions to this problem proposed by various research centers are discussed, and a design solution is put forward. To evaluate the effectiveness of the proposed cooling system, a thermal analysis was carried out in ANSYS software.

Keywords: starter-generator, more electrical engine, aircraft engines, high pressure shaft, synchronous generator

Procedia PDF Downloads 257
8383 The Application of Pareto Local Search to the Single-Objective Quadratic Assignment Problem

Authors: Abdullah Alsheddy

Abstract:

This paper presents the employment of Pareto optimality as a strategy to help (single-objective) local search escape local optima. Instead of local search, Pareto local search is applied to solve the quadratic assignment problem, which is multi-objectivized by adding a helper objective. The additional objective is defined as a function of the primary one, with augmented penalties that are dynamically updated.
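
A minimal sketch of the idea, assuming a small random QAP instance and a static helper penalty (the paper updates the penalties dynamically):

```python
# The single-objective QAP is "multi-objectivized" by adding a helper
# objective (here a penalized copy of the primary cost), and Pareto local
# search over an archive of non-dominated solutions replaces plain local
# search, which reduces the chance of getting trapped in local optima.
import random
random.seed(1)

n = 8
F = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]   # flows
D = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]   # distances
penalty = [[random.random() for _ in range(n)] for _ in range(n)]  # static here

def qap_cost(p):
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def objectives(p):
    helper = qap_cost(p) + sum(penalty[i][p[i]] for i in range(n))
    return (qap_cost(p), helper)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_local_search(iters=2000):
    p = list(range(n)); random.shuffle(p)
    archive = [(tuple(p), objectives(p))]
    for _ in range(iters):
        sol, _ = random.choice(archive)
        i, j = random.sample(range(n), 2)
        q = list(sol); q[i], q[j] = q[j], q[i]          # 2-swap neighbour
        fq = objectives(q)
        if any(dominates(f, fq) for _, f in archive):
            continue                                    # dominated: discard
        archive = [(s, f) for s, f in archive if not dominates(fq, f)]
        archive.append((tuple(q), fq))
    return min(archive, key=lambda sf: sf[1][0])        # best primary objective

best, (cost, _) = pareto_local_search()
print("best QAP cost found:", cost)
```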

Keywords: Pareto optimization, multi-objectivization, quadratic assignment problem, local search

Procedia PDF Downloads 466
8382 Future Trends of Mechatronics Engineering in Pakistan

Authors: Aqeela Mir, Akhtar Nawaz Malik, Javaid Iqbal

Abstract:

The paper presents a survey-based approach to observe the level of awareness regarding mechatronics in Pakistani society and the factors affecting the future development trend of mechatronics in Pakistan. With the help of these surveys, a new direction for building a mathematical model of the future development trend of mechatronics in Pakistan is also suggested.

Keywords: mechatronics society survey, future development trend of mechatronics in pakistan, probability estimation, mathematical model

Procedia PDF Downloads 522
8381 Predictive Output Feedback Linearization for Safe Control of Collaborative Robots

Authors: Aliasghar Arab

Abstract:

Autonomous robots interacting with humans, as safety-critical nonlinear control systems, are complex closed-loop cyber-physical dynamical machines. Keeping these intelligent yet complicated systems safe and smooth during their operations is challenging. The aim of the safe predictive output feedback linearization control synthesis is to design a novel controller for smooth trajectory following while avoiding unsafe situations. The controller design should obtain a linearized output for smoothness and invariance with respect to a safety subset. Inspired by finite-horizon nonlinear model predictive control, the problem is formulated as constrained nonlinear dynamic programming, and the safety constraints are defined as control barrier functions. Avoiding unsafe maneuvers and performing smooth motions increases the predictability of the robot’s movement for humans when robots and people are working together. Our results demonstrate that the proposed output linearization method obeys the safety constraints and, compared to existing safety-guaranteed methods, is smoother and performs better.
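
A simplified sketch of the “safety constraints as control barrier functions” ingredient for a planar single-integrator robot (not the paper’s full predictive output feedback linearization); the obstacle, gains, and dynamics are assumptions.

```python
# At each step a small QP keeps the commanded velocity as close as possible
# to the nominal one while enforcing the control-barrier-function condition
#   dh/dt >= -alpha * h(x)   for   h(x) = ||x - x_obs||^2 - d_safe^2,
# which keeps the robot outside the unsafe disc around x_obs.
import numpy as np
import cvxpy as cp

x_obs, d_safe, alpha, dt = np.array([1.0, 0.0]), 0.4, 2.0, 0.05

def safe_velocity(x, u_nom):
    u = cp.Variable(2)
    h = float(np.dot(x - x_obs, x - x_obs) - d_safe**2)
    grad_h = 2.0 * (x - x_obs)                      # dh/dt = grad_h . u for x_dot = u
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                      [grad_h @ u >= -alpha * h])
    prob.solve()
    return u.value

x = np.array([0.0, 0.05])
goal = np.array([2.0, 0.0])
for _ in range(60):                                  # drive toward the goal, stay safe
    u_nom = 1.0 * (goal - x)                         # nominal (safety-unaware) controller
    x = x + dt * safe_velocity(x, u_nom)
print("final position:", x, "clearance:", np.linalg.norm(x - x_obs) - d_safe)
```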

Keywords: robotics, collaborative robots, safety, autonomous robots

Procedia PDF Downloads 97
8380 An Algorithm for the Map Labeling Problem with Two Kinds of Priorities

Authors: Noboru Abe, Yoshinori Amai, Toshinori Nakatake, Sumio Masuda, Kazuaki Yamaguchi

Abstract:

We consider the problem of placing labels of the points on a plane. For each point, its position, the size of its label and a priority are given. Moreover, several candidates of its label positions are prespecified, and each of such label positions is assigned a priority. The objective of our problem is to maximize the total sum of priorities of placed labels and their points. By refining a labeling algorithm that can use these priorities, we propose a new heuristic algorithm which is more suitable for treating the assigned priorities.
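
A greedy sketch of the problem setting, assuming rectangular labels and a priority-sum objective; it mirrors the problem statement above rather than the authors’ refined heuristic, and the sample points are invented.

```python
# Priority-driven label placement: each point has a priority and several
# candidate label rectangles, each with its own priority; candidates are
# placed in decreasing order of combined priority, skipping any that overlap
# an already placed label.
def overlaps(a, b):
    """Axis-aligned rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_labels(points):
    """points: list of (point_priority, [(candidate_rect, candidate_priority), ...])"""
    candidates = []
    for idx, (p_prio, cands) in enumerate(points):
        for rect, c_prio in cands:
            candidates.append((p_prio + c_prio, idx, rect))
    candidates.sort(reverse=True)                 # highest combined priority first

    placed, labeled, total = [], set(), 0
    for prio, idx, rect in candidates:
        if idx in labeled or any(overlaps(rect, r) for r in placed):
            continue
        placed.append(rect)
        labeled.add(idx)
        total += prio
    return placed, total

points = [
    (5, [((0, 0, 2, 1), 2), ((0, -1, 2, 1), 1)]),
    (3, [((1, 0, 2, 1), 2), ((1, 1, 2, 1), 1)]),
]
rects, score = place_labels(points)
print("placed:", rects, "total priority:", score)
```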

Keywords: map labeling, greedy algorithm, heuristic algorithm, priority

Procedia PDF Downloads 433
8379 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods that enable the modeling and forecasting of wind power. Gaussian Processes (GP) form one of the most widely used families of stochastic processes for modeling dependent data observed over time, over space, or over time and space. A GP can be viewed as an underlying process, formed by an unknown operator, that is used to solve a problem. The purpose of this paper is to present how to forecast wind power by using GPs. The Gaussian process method for forecasting is presented. To validate the presented approach, a simulation in the MATLAB environment has been carried out.
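
A minimal Gaussian process regression sketch for wind power forecasting, here with scikit-learn on a synthetic series (the paper’s own experiments were run in MATLAB):

```python
# Fit a GP to a synthetic wind power series and forecast the next horizon
# with an uncertainty band; the kernel choice and data are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(0, 100, 1.0)
power = 50 + 20 * np.sin(2 * np.pi * t / 24) + 5 * rng.standard_normal(t.size)

X_train, y_train = t[:80, None], power[:80]        # past observations
X_test = t[80:, None]                              # horizon to forecast

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

mean, std = gp.predict(X_test, return_std=True)    # forecast with uncertainty
print("first forecast step: %.1f +/- %.1f" % (mean[0], 1.96 * std[0]))
```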

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 418
8378 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis

Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov

Abstract:

The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, taking into account signal decay and the signal delay time between the source and the detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed based on these recordings, and a model of the propagation of acoustic waves from a point source in three-dimensional space was applied. All sensors and sources are assumed to be located in the same plane. A source localization method is tested that is based on the signal time delay between two adjacent detectors and on plotting the direction towards the source; the noise source’s position is then determined from the intersection of two such direction lines. Cases of one dominant source, and of two sources in the presence of several other sources of lower intensity, are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two seconds’ duration is considered; the source is located for successive parts of the signal of duration above 0.04 s, and the final result is obtained by computing the average value.
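
A sketch of the time-delay step only, assuming a synthetic far-field source and two detectors; the paper goes further and intersects the direction lines of two detector pairs to localize the source.

```python
# Estimate the delay between two adjacent detectors by cross-correlation and
# convert it to a direction of arrival under a far-field assumption. The
# "construction noise" is a synthetic stand-in; geometry values are assumed.
import numpy as np

fs, c, spacing = 8000.0, 343.0, 1.0            # sample rate (Hz), sound speed (m/s), sensor spacing (m)
rng = np.random.default_rng(0)

true_angle = np.deg2rad(40.0)                  # direction of arrival w.r.t. the sensor axis
true_delay = spacing * np.cos(true_angle) / c  # seconds between the two detectors

noise = rng.standard_normal(4000)              # construction-tool noise stand-in
shift = int(round(true_delay * fs))
s1 = noise
s2 = np.roll(noise, shift)                     # second detector hears it 'shift' samples later

corr = np.correlate(s2, s1, mode="full")
lag = np.argmax(corr) - (len(s1) - 1)          # lag (in samples) maximizing the correlation
est_delay = lag / fs
est_angle = np.degrees(np.arccos(np.clip(est_delay * c / spacing, -1.0, 1.0)))

print(f"estimated delay: {est_delay*1e3:.3f} ms, direction of arrival: {est_angle:.1f} deg")
```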

Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises

Procedia PDF Downloads 62
8377 The Eye Tracking Technique and the Study of Some Abstract Mathematical Concepts at the University

Authors: Tamara Díaz-Chang, Elizabeth-H Arredondo

Abstract:

This article presents the results of mixed-methods research in which the ocular movements of students are examined while they solve questionnaires related to some abstract mathematical concepts. The objective of this research is to determine possible correlations between the parameters of ocular activity and the level of difficulty of the tasks. The difficulty-level categories were established based on two types of criteria: a subjective one, through an evaluation carried out by the subjects, and a behavioral one, related to obtaining the correct solution. Correlations of these criteria with ocular activity parameters, which were considered indicators of mental effort, were identified. The analysis of the data obtained allowed us to observe discrepancies in the categorization of difficulty levels based on subjective and behavioral criteria. There was a negative correlation of the eye movement parameters with the students’ opinions on the level of difficulty of the questions, while a strong, significant positive correlation was noted between most of the parameters of ocular activity and the level of difficulty determined by the percentage of correct answers. The results obtained from the analysis of the data suggest that eye movement parameters can be taken as indicators of the difficulty level of tasks related to the study of some abstract mathematical concepts at the university.

Keywords: abstract mathematical concepts, cognitive neuroscience, eye-tracking, university education

Procedia PDF Downloads 120
8376 Applying Neural Networks for Solving Record Linkage Problem via Fuzzy Description Logics

Authors: Mikheil Kalmakhelidze

Abstract:

The record linkage (RL) problem has become more and more important in recent years due to the growing interest in big data analysis. The problem can be formulated in a very simple way: given two entries a and b of a database, decide whether they represent the same object or not. There are two classical ways of solving the RL problem, deterministic and probabilistic. Using a simple Bayes classifier in many cases produces useful results, but sometimes the results turn out to be poor. In recent years, several successful approaches have been made towards solving specific RL problems with neural network algorithms, including the single-layer perceptron, the multilayer back-propagation network, etc. In our work, we model the RL problem for a specific dataset of student applications in fuzzy description logic (FDL), where the linkage of a specific pair (a, b) depends on the truth value of the corresponding formula A(a, b) in a canonical FDL model. As the main result, we build a neural network for deciding the truth value of FDL formulas in a canonical model and thus link the RL problem to machine learning. We apply the approach to a dataset with 10000 entries and also compare it to classical RL solving approaches. The results prove to be more accurate than the standard probabilistic approach.
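
A toy stand-in for the learning step, assuming synthetic field-similarity features and a scikit-learn MLP; the paper’s actual networks decide the truth values of fuzzy description logic formulas, which is not reproduced here.

```python
# A small neural network classifies candidate record pairs as match / non-match
# from simple field-similarity features in [0, 1]. Feature generation below is
# entirely synthetic and only illustrates the "neural network decides linkage" idea.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_pairs(n, match):
    # Features: similarity of name, birth date, and address.
    center = 0.9 if match else 0.3
    feats = np.clip(center + 0.15 * rng.standard_normal((n, 3)), 0, 1)
    return feats, np.full(n, int(match))

Xm, ym = make_pairs(500, True)
Xn, yn = make_pairs(500, False)
X, y = np.vstack([Xm, Xn]), np.concatenate([ym, yn])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

candidate = np.array([[0.95, 0.85, 0.7]])     # similarities for one record pair (a, b)
print("link records:", bool(clf.predict(candidate)[0]),
      "confidence:", clf.predict_proba(candidate)[0, 1].round(3))
```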

Keywords: description logic, fuzzy logic, neural networks, record linkage

Procedia PDF Downloads 272
8375 Loudspeaker Parameters Inverse Problem for Improving Sound Frequency Response Simulation

Authors: Y. T. Tsai, Jin H. Huang

Abstract:

The sound pressure level (SPL) of a moving-coil loudspeaker (MCL) is often simulated and analyzed using the lumped parameter model. However, the SPL of an MCL cannot be simulated precisely in the high-frequency region, because the effective cone area changes due to the geometry variation of the different mode shapes, which in turn affects the acoustic radiation mass and resistance. This paper presents an inverse method that is well suited to determining the value of the effective cone area at various frequency points and that can also estimate the MCL’s electroacoustic parameters simultaneously. The proposed inverse method comprises the direct problem, the adjoint problem, and the sensitivity problem, in combination with the nonlinear conjugate gradient method. Values estimated by the inverse method are validated experimentally by comparison with the measured SPL curve. The results presented in this paper not only improve the accuracy of the lumped parameter model but also provide valuable information for loudspeaker cone design.
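
A much-simplified sketch of the estimation idea, assuming a plain second-order response model and scipy’s conjugate gradient minimizer in place of the paper’s direct/adjoint/sensitivity formulation:

```python
# Recover parameters of a loudspeaker-like response by minimizing the misfit
# to a "measured" SPL curve with a nonlinear conjugate gradient method.
# The model is just a second-order high-pass magnitude (a stand-in, not the
# paper's lumped model); log-parameters keep all quantities positive.
import numpy as np
from scipy.optimize import minimize

f = np.logspace(1, 3, 100)                               # 10 Hz .. 1 kHz

def spl_model(log_params, f):
    f0, q, gain = np.exp(log_params)
    r = f / f0
    mag = gain * r**2 / np.sqrt((1 - r**2) ** 2 + (r / q) ** 2)
    return 20 * np.log10(mag)

true = np.log([55.0, 0.9, 1.0])                           # resonance (Hz), Q, gain
measured = spl_model(true, f) + 0.3 * np.random.default_rng(0).standard_normal(f.size)

def misfit(log_params):
    return np.sum((spl_model(log_params, f) - measured) ** 2)

result = minimize(misfit, x0=np.log([80.0, 0.5, 0.8]), method="CG")
print("estimated (f0, Q, gain):", np.round(np.exp(result.x), 3))
```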

Keywords: inverse problem, cone effective area, loudspeaker, nonlinear conjugate gradient method

Procedia PDF Downloads 303
8374 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models on a first stage, the results serve as input for a second stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
8373 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments. This means that data from various different sources have to be integrated. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, which leads to a variety of data set formats, because there are no international standards specifying the formats of data sets in open data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data bases on environmental indicators in real time. In the same way, other governments (such as Andalucía or Bilbao) have published open data sets related to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to carry out and visualize analyses of the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are also R libraries, such as Shiny, for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer so that they can build their own applications.

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 315
8372 ECO ROADS: A Solution to the Vehicular Pollution on Roads

Authors: Harshit Garg, Shakshi Gupta

Abstract:

One of the major problems in today’s world is growing pollution. The cause of all environmental problems is the increasing pollution rate. Looking at the statistics, one finds that most of this pollution is vehicular pollution, which accounts for more than 70% of the total and affects the environment as well as human health proportionally. Vehicles run on roads, so why not have roads that could adsorb that pollution, not only once but a number of times? Every problem has a solution enabled by the state of the art of technology; that is, one can use innovative ideas to make technology a solution to the problem of vehicular pollution on roads. Solving the problem up to a certain limit/percentage can be formulated into a new concept called ECO ROADS.

Keywords: environment, pollution, roads, sustainability

Procedia PDF Downloads 557
8371 A Parallel Computation Based on GPU Programming for a 3D Compressible Fluid Flow Simulation

Authors: Sugeng Rianto, P.W. Arinto Yudi, Soemarno Muhammad Nurhuda

Abstract:

Computing a 3D compressible fluid flow for a virtual environment with haptic interaction can be a non-trivial issue, especially in reaching good performance and a balance between visualization, tactile feedback interaction, and computation. In this paper, we describe our approach of computation methods based on parallel programming on a GPU. The 3D fluid flow solvers have been developed for smoke dispersion simulation by combining cubic interpolated propagation (CIP) based fluid flow solvers with the advantages of the parallelism and programmability of the GPU. The fluid flow solver is implemented in a GPU-CPU message passing scheme to enable rapid development of haptic feedback modes for fluid dynamic data. A rapid solution is obtained by applying the CIP fluid flow solvers, with which multiphase fluid flow equations can be solved simultaneously. To accelerate the computation further, the Navier-Stokes equations (NSEs) are packed into texel channels, where the computational models are performed on pixels that can be considered a grid of cells. Therefore, despite the complexity of the obstacle geometry, processing on multiple vertices and pixels can be done simultaneously in parallel. The data are also shared in global memory for the CPU to control the haptics, providing kinaesthetic interaction and feeling. The results show that GPU-based parallel computation approaches provide effective simulation of a compressible fluid flow model for real-time interaction in 3D computer graphics on a PC platform. This report has shown the feasibility of a new approach to solving the compressible fluid flow equations on the GPU. The experimental tests proved that compressible fluid flowing over the few model obstacles, with haptic interaction, can be effectively and efficiently simulated at a reasonable frame rate with realistic visualization. These results confirm that good performance and a balance between visualization, tactile feedback interaction, and computation can be achieved.
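
A CPU-only, one-dimensional illustration of the CIP scheme that the solver above parallelizes on the GPU; constant advection speed and periodic boundaries are assumptions.

```python
# One-dimensional cubic interpolated propagation (CIP): both the profile u and
# its spatial derivative g are advected by evaluating an upwind cubic polynomial.
import numpy as np

def cip_step(u, g, c, dx, dt):
    """One CIP advection step for u_t + c u_x = 0 (constant c > 0, periodic)."""
    up = np.roll(u, 1)                 # upwind neighbour values  u[i-1]
    gp = np.roll(g, 1)                 # upwind neighbour slopes  g[i-1]
    D = -dx
    xi = -c * dt
    a = (g + gp) / D**2 + 2.0 * (u - up) / D**3
    b = 3.0 * (up - u) / D**2 - (gp + 2.0 * g) / D
    u_new = a * xi**3 + b * xi**2 + g * xi + u
    g_new = 3.0 * a * xi**2 + 2.0 * b * xi + g
    return u_new, g_new

n, dx, c, dt = 200, 1.0 / 200, 1.0, 0.5 / 200      # CFL number 0.5
x = np.arange(n) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)               # initial pulse
g = np.gradient(u, dx)                             # its spatial derivative

for _ in range(200):                               # advect the pulse to x ~ 0.8
    u, g = cip_step(u, g, c, dx, dt)
print("peak position:", x[np.argmax(u)], "peak value:", u.max().round(3))
```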

Keywords: CIP, compressible fluid, GPU programming, parallel computation, real-time visualisation

Procedia PDF Downloads 432
8370 Preventing Corruption in Dubai: Governance, Contemporary Strategies and Systemic Flaws

Authors: Graham Brooks, Belaisha Bin Belaisha, Hakkyong Kim

Abstract:

The prevention and/or reduction of corruption is a major international problem. This paper, however, specifically focuses on how organisations in Dubai are tackling the problem of money laundering. This research establishes that Dubai has a clear international anti-money laundering framework but suffers from some national weaknesses, such as divergent anti-money-laundering working practices, a lack of communication and information sharing, and disparate organisational self-interest.

Keywords: corruption, governance, money laundering, prevention, strategies

Procedia PDF Downloads 273
8369 Qualitative Measurement of Literacy

Authors: Indrajit Ghosh, Jaydip Roy

Abstract:

The literacy rate is an important indicator for the measurement of human development, but it is not a good one for capturing the qualitative dimension of the educational attainment of an individual or a society. The overall educational level of an area is an important issue beyond the literacy rate. The overall educational level can be thought of as an outcome of the educational levels of individuals, but there is no well-defined algorithm or mathematical model available to measure it. A heuristic approach based on the accumulated experience of experts is an effective one. It is evident that fuzzy logic offers a natural and convenient framework for modeling various concepts in the social science domain. This work suggests the implementation of fuzzy logic to develop a mathematical model for measuring the educational attainment of an area in terms of an Education Index. The contribution of the study is twofold: the conceptualization of an ‘Education Profile’ and the proposal of a new mathematical model to measure educational attainment in terms of an ‘Education Index’.

Keywords: education index, education profile, fuzzy logic, literacy

Procedia PDF Downloads 316
8368 A Combined Meta-Heuristic with Hyper-Heuristic Approach to Single Machine Production Scheduling Problem

Authors: C. E. Nugraheni, L. Abednego

Abstract:

This paper is concerned with the minimization of mean tardiness and flow time in a real single-machine production scheduling problem. Two variants of a genetic algorithm, used as meta-heuristics and combined with a hyper-heuristic approach, are proposed to solve this problem. These methods are used to solve instances generated with real-world data from a company. Encouraging results are reported.
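
A compact, self-contained permutation genetic algorithm for the stated objective, on invented job data; the hyper-heuristic layer that selects among GA variants and low-level heuristics is not reproduced here.

```python
# Genetic algorithm for single-machine sequencing, minimizing a weighted sum
# of mean tardiness and mean flow time (weights and job data are assumptions).
import random
random.seed(3)

jobs = [(random.randint(1, 10), random.randint(5, 60)) for _ in range(12)]  # (processing, due date)

def evaluate(seq, w_tard=0.7, w_flow=0.3):
    t, tard, flow = 0, 0.0, 0.0
    for j in seq:
        p, due = jobs[j]
        t += p
        flow += t
        tard += max(0, t - due)
    n = len(seq)
    return w_tard * tard / n + w_flow * flow / n

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [j for j in p2 if j not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def ga(pop_size=40, generations=150):
    pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)
        new_pop = pop[:4]                                  # elitism
        while len(new_pop) < pop_size:
            p1, p2 = random.sample(pop[:20], 2)            # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < 0.2:                      # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=evaluate)

best = ga()
print("best sequence:", best, "objective:", round(evaluate(best), 2))
```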

Keywords: hyper-heuristics, evolutionary algorithms, production scheduling, meta-heuristic

Procedia PDF Downloads 381
8367 Upon One Smoothing Problem in Project Management

Authors: Dimitri Golenko-Ginzburg

Abstract:

A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required for the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pregiven time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested, and its performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
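
A small illustration of the smoothing problem with a look-over (exhaustive) search over discrete start times; the project data are invented, and precedence constraints are folded into the start-time windows for brevity.

```python
# Each activity may start at discrete moments within its allowable window;
# the exhaustive search picks start times that minimize the peak resource
# requirement over the whole project horizon.
from itertools import product

# (earliest start, latest start, duration, resource units per period)
activities = [(0, 0, 4, 3),      # critical activity: no float
              (0, 3, 2, 2),
              (2, 6, 3, 2),
              (4, 8, 2, 4)]
horizon = 12

def peak_usage(starts):
    usage = [0] * horizon
    for (es, ls, dur, res), s in zip(activities, starts):
        for t in range(s, s + dur):
            usage[t] += res
    return max(usage)

best = None
for starts in product(*[range(es, ls + 1) for es, ls, _, _ in activities]):
    peak = peak_usage(starts)
    if best is None or peak < best[0]:
        best = (peak, starts)

print("minimal peak resource requirement:", best[0], "with start times", best[1])
```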

Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate

Procedia PDF Downloads 302
8366 Survey Paper on Graph Coloring Problem and Its Application

Authors: Prateek Chharia, Biswa Bhusan Ghosh

Abstract:

Graph coloring is one of the prominent concepts in graph theory. It can be defined as an assignment of colors to the various regions (or vertices) of a graph such that all the constraints are fulfilled. In this paper, various graph coloring approaches, such as greedy coloring, heuristic search for a maximum independent set, and graph coloring using an edge table, are described. Graph coloring can be used in various real-time applications such as student timetable generation, Sudoku as a graph coloring problem, and GSM phone networks.
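
A minimal sketch of the first surveyed approach, greedy coloring, applied to a small timetable-style conflict graph (the graph itself is an assumption):

```python
# Greedy coloring: process vertices in order and give each the smallest color
# not used by an already-colored neighbour. Colors can be read as time slots.
def greedy_coloring(adjacency):
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        color = 0
        while color in taken:
            color += 1
        colors[v] = color
    return colors

# Courses as vertices, edges between courses sharing a student (conflict graph).
conflicts = {
    "math": ["physics", "chemistry"],
    "physics": ["math", "biology"],
    "chemistry": ["math", "biology"],
    "biology": ["physics", "chemistry"],
    "art": [],
}
slots = greedy_coloring(conflicts)
print(slots)                                   # adjacent courses get different slots
print("time slots needed:", max(slots.values()) + 1)
```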

Keywords: graph coloring, greedy coloring, heuristic search, edge table, sudoku as a graph coloring problem

Procedia PDF Downloads 539
8365 A Hybrid Expert System for Generating Stock Trading Signals

Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour

Abstract:

In this paper, a hybrid expert system is developed by using fuzzy genetic network programming with reinforcement learning (GNP-RL). The frame-based structure of the system uses the trading rules extracted by GNP; these rules are extracted using technical indices of the stock prices in the training time period. For developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, using these methods not only increased the accuracy of node transition and decision making in the GNP's nodes but also extended the GNP's binary signals to ternary trading signals; in other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in the Kappa-PC software. This trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results for the testing time period show that the developed system performs more favorably than the Buy and Hold strategy.

Keywords: fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange

Procedia PDF Downloads 332
8364 A General Variable Neighborhood Search Algorithm to Minimize Makespan of the Distributed Permutation Flowshop Scheduling Problem

Authors: G. M. Komaki, S. Mobin, E. Teymourian, S. Sheikh

Abstract:

This paper addresses minimizing the makespan of the distributed permutation flow shop scheduling problem. In this problem, there are several parallel identical factories, or flowshops, each with a series of similar machines. Each job should be allocated to one of the factories, and all of the operations of a job should be performed in the allocated factory. This problem has recently gained attention, and due to its NP-hard nature, metaheuristic algorithms have been proposed to tackle it. The majority of the proposed algorithms require a large computational time, which is their main drawback. In this study, a general variable neighborhood search (GVNS) algorithm is proposed into which several time-saving schemes have been incorporated. Also, the GVNS uses a sophisticated method to change the shaking procedure, or perturbation, depending on the progress of the incumbent solution, in order to prevent stagnation of the search. The performance of the proposed algorithm is compared to state-of-the-art algorithms on standard benchmark instances.

Keywords: distributed permutation flow shop, scheduling, makespan, general variable neighborhood search algorithm

Procedia PDF Downloads 354
8363 Why and When to Teach Definitions: Necessary and Unnecessary Discontinuities Resulting from the Definition of Mathematical Concepts

Authors: Josephine Shamash, Stuart Smith

Abstract:

We examine reasons for introducing definitions in the teaching of mathematics in a number of different cases. We try to determine if, where, and when to provide a definition, and which definition to choose. We characterize different types of definitions and the different purposes we may have for formulating them, and detail examples of each type. Giving a definition at a certain stage can sometimes be detrimental to the development of the concept image; in such a case, it is advisable to delay the precise definition to a later stage. We describe two models, the 'successive approximation model' and the 'model of the extending definition', that fit such situations. Detailed examples that fit the different models are given, based on material taken from a number of textbooks, together with an analysis of the way each concept is introduced and of where and how its definition is given. Our conclusion, based on this analysis, is that some of the definitions given may cause discontinuities in the learning sequence and constitute obstacles and unnecessary cognitive conflicts in the formation of the concept definition. However, in other cases, the discontinuity in passing from definition to definition actually serves a didactic purpose, is unavoidable for the mathematical evolution of the concept image, and is essential for students to deepen their understanding.

Keywords: concept image, mathematical definitions, mathematics education, mathematics teaching

Procedia PDF Downloads 129