Search results for: maximum clique problem
10216 Effect of Fast and Slow Tempo Music on Muscle Endurance Time
Authors: Rohit Kamal, Devaki Perumal Rajaram, Rajam Krishna, Sai Kumar Pindagiri, Silas Danielraj
Abstract:
Introduction: According to the WHO Global Health Observatory, at least 2.8 million people die each year because of obesity and overweight. This is mainly because of the adverse metabolic effects of obesity and overweight on blood pressure, lipid profile (especially cholesterol) and insulin resistance. To achieve optimum health, WHO has set the BMI in the range of 18.5 to 24.9 kg/m2. Due to the modernization of lifestyle, physical exercise in the form of work is no longer a possibility, and hence an effective way to burn calories to achieve the optimum BMI is the need of the hour. Studies have shown that exercising for more than 60 minutes/day helps to maintain weight, and that to reduce weight exercise should be done for 90 minutes a day. Moderate exercise for about 30 min is essential for burning up calories. People with low endurance fail to perform even low intensity exercise for a minimal time. Hence, it is necessary to find an effective method to increase the endurance time. Methodology: This study was approved by the Institutional Ethical Committee of our college. After getting written informed consent, 25 apparently healthy males in the age group of 18-20 years were selected. Subjects with muscular disorders, hypertension or diabetes, smokers, alcoholics, and those taking drugs affecting muscle strength were excluded. To determine the endurance time: maximum voluntary contraction (MVC) was measured by asking the participants to squeeze the hand grip dynamometer as hard as possible and hold it for 3 seconds. This procedure was repeated thrice, and the average of the three readings was taken as the maximum voluntary contraction. The participant was then asked to squeeze the dynamometer and hold it at 70% of the maximum voluntary contraction while hearing fast tempo music, which was played for about ten minutes; then the participant was asked to relax for ten minutes and was made to hold the hand grip dynamometer at 70% of the maximum voluntary contraction while hearing slow tempo music. To avoid the bias of getting habituated to the procedure, the order of hearing the fast and slow tempo music was changed. The time for which they could hold it at 70% of MVC was determined using a stopwatch, and that was taken as the endurance time. Results: The mean value of the endurance time during fast and slow tempo music was compared in all the subjects. The mean MVC was 34.92 N. The mean endurance time was 21.8 (16.3) seconds with slow tempo music, which was more than with fast tempo music, with which the mean endurance time was 20.6 (11.7) seconds. The preference was more for slow tempo music than for fast tempo music. Conclusion: Music played during exercise, by some as yet unknown mechanism, helps to increase the endurance time by alleviating the symptoms of lactic acid accumulation.
Keywords: endurance time, fast tempo music, maximum voluntary contraction, slow tempo music
Procedia PDF Downloads 299
10215 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure
Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser
Abstract:
Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden of Pakistan. In this study, it has been investigated whether the spatial pattern of the environment causes the emergence or an increasing rate of dengue fever incidence that affects the population and its health. The climatic or environmental structure data and the Dengue Fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring in terms of re-tabulating, and finally different statistical methods, techniques, and procedures were applied for the evaluation. The five climatic variables studied are precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H) and wind speed (W), collected from 1980-2012. The dengue cases in Karachi from 2010 to 2012 are reported on a weekly basis. Principal component analysis is applied to explore the climatic variables and/or the climatic structure which may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for the whole period represents the general atmospheric condition. PC2 for the dengue period is the contrast between precipitation and wind speed. PC3 is the weighted difference between maximum temperature and wind speed. PC4 for the dengue period is the contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models are used to relate the dengue fever incidence to the climatic variables and the principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%. Maximum temperature increases the chances of dengue occurrence by 19.48%. Minimum temperature increases the chances of dengue occurrence by 11.51%. Wind speed decreases the weekly occurrence of dengue fever by 7.41%.
Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model
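A minimal sketch of the principal-component and count-regression steps described above, written with synthetic weekly data in place of the Karachi records; the variable names, data, and model settings are assumptions, not the study's:

```python
# Illustrative only: PCA of weekly climate variables, then a Poisson regression
# of dengue counts on the component scores. Swap sm.families.NegativeBinomial()
# in for the negative binomial variant mentioned in the abstract.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
weeks = 156                                       # e.g. 2010-2012 weekly records
climate = rng.normal(size=(weeks, 5))             # columns: P, Mx, Mn, H, W (synthetic)
cases = rng.poisson(lam=np.exp(1.5 + 0.3 * climate[:, 3]))   # synthetic weekly counts

# Standardize and extract principal components of the climate structure.
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(climate))

# Poisson regression of weekly dengue cases on the PC scores.
X = sm.add_constant(scores)
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(fit.summary())

# exp(coef) - 1 gives the multiplicative % change in expected weekly cases per
# unit increase of a component, analogous to the percentages reported above.
print((np.exp(fit.params[1:]) - 1) * 100)
```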
Procedia PDF Downloads 444
10214 Old Community Spatial Integration: Discussion on the Mechanism of Aging Space System Replacement
Authors: Wan-I Chen, Tsung-I Pai
Abstract:
The aging of society will create social problems for which Asian countries, especially Taiwan, do not yet have good mechanisms of solution. Within the next ten years, people in Taiwan must face a localized aging society. In this situation, how to use space in an ecological way and develop spatial re-use to meet the spatial demand of the elderly is an approach that Taiwanese society might develop in the future. Over the next 10 years, taking care of aging people will become part of the social problem of the aging phenomenon. The research concentrates on the feasibility of spatial substitution: the secondary use of space might solve the spatial problem for aging people. In order to prove that the space is usable, the research reviews the project with the support system and infill system for a space experiment, using a network grid approach. That approach defines the location relationships of community-level space elements, makes new definitions of space, and returns them to cooperative use. The research innovates in the appraisal of space to create possibilities, solving insufficient or unsuitable spatial conditions by means of spatial replacement. Community space is evaluated using the support system and infill system in order to see the possibilities of replacing inner space and of introducing modular architecture into housing. The study seeks a solution, in an ecological way, to develop space use that meets the spatial demand of the elderly.
Keywords: sustainable use, space conversion, integration, replacement
Procedia PDF Downloads 175
10213 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered, which consists of a single stage of manufacturing and which can handle different types of products, where changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups, and hence, to minimize the costs associated with product changeover, the planning should be such that similar types of products are processed successively so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization in between every set-up, or maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and generally different customers’ orders are assigned one of two priorities – “normal” or “priority” order. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms have been presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
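As an illustration of the maximum-flow subroutine that such a formulation relies on (the actual MAN model with priority arcs is not reproduced here), a small hedged sketch using networkx with invented order quantities and capacities:

```python
import networkx as nx

# Toy network, not the paper's MAN formulation: arcs from a source to product
# nodes carry order quantities; arcs from product nodes to the sink carry the
# capacity available per set-up. All numbers are assumptions for illustration.
G = nx.DiGraph()
orders = {"product_A": 60, "product_B": 40, "product_C": 80}
setup_capacity = 100

for product, qty in orders.items():
    G.add_edge("source", product, capacity=qty)
    G.add_edge(product, "sink", capacity=setup_capacity)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print("capacity utilized:", flow_value)   # here: 180, every order is covered
print(flow_dict)
```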
Procedia PDF Downloads 401
10212 Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression
Authors: Wanatchapong Kongkaew
Abstract:
This paper proposes an application of probabilistic technique, namely Gaussian process regression, for estimating an optimal sequence of the single machine with total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence of the SMTWT problem, and its solution is improved by using an iterated local search based on simulated annealing scheme, called GPRISA algorithm. The results show that the proposed GPRISA method achieves a very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in the comparison of deviation from the best-known solution, the proposed mechanism noticeably outperforms the recently existing approaches.
Keywords: Gaussian process regression, iterated local search, simulated annealing, single machine total weighted tardiness
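A rough sketch of the improvement stage only, i.e. a simulated-annealing search over job sequences for total weighted tardiness; the GPR-based construction of the starting sequence is omitted and all job data below are invented:

```python
import math
import random

p = [10, 5, 8, 4, 12]        # processing times (assumed)
d = [15, 10, 30, 12, 40]     # due dates (assumed)
w = [3, 1, 2, 5, 1]          # tardiness weights (assumed)

def twt(seq):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def anneal(seq, temp=50.0, cooling=0.95, iters=2000):
    best = cur = list(seq)
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]          # swap neighborhood
        delta = twt(cand) - twt(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if twt(cur) < twt(best):
                best = list(cur)
        temp *= cooling
    return best

start = list(range(len(p)))                          # stand-in for a GPR-predicted sequence
print(twt(start), "->", twt(anneal(start)))
```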
Procedia PDF Downloads 308
10211 Cars in a Neighborhood: A Case of Sustainable Living in Sector 22 Chandigarh
Authors: Maninder Singh
Abstract:
Chandigarh city is under the strain of exponential growth of car density across its various neighborhoods. The consumerist nature of society today is to be blamed for this menace because everyone wants to own and ride a car. Car manufacturers are busy selling two or more cars per household. The Regional Transport Offices are busy issuing as many licenses to new vehicles as they can in order to generate revenue in the form of Road Tax. The car traffic in the neighborhoods of Chandigarh has reached a tipping point. There needs to be a more empirical and sustainable model of cars per household, which should be based on specific parameters of livable neighborhoods. Sector 22 in Chandigarh is one of the first residential sectors to be established in the city. There is scope to think, reflect, and work out a method to know how many cars we can sell our citizens before we lose the argument to traffic problems, parking problems, and road rage. This is where the true challenge of a planner or a designer of the city lies. Currently, in Chandigarh city, there are no clearly visible answers to this problem. The way forward is to look at the spatial mapping, planning, and design of car parking units to address the problem, rather than suggesting extreme measures of banning cars (short-term) or promoting plans for citywide transport (very long-term). This is a chance to resolve the problem with a pragmatic approach from a citizen’s perspective, instead of an orthodox development planner’s methodology. Since citizens are at the center of how the problem is to be addressed, acceptable solutions are more likely to emerge from the car and traffic problem as defined by the citizens. Thus, the idea and its implementation would be interesting in comparison to the known academic methodologies. The novel and innovative process would lead to a more acceptable and sustainable approach to the issue of the number of car parks in the neighborhoods of Chandigarh city.
Keywords: cars, Chandigarh, neighborhood, sustainable living, walkability
Procedia PDF Downloads 147
10210 Cadmium Adsorption by Modified Magnetic Biochar
Authors: Chompoonut Chaiyaraksa, Chanida Singbubpha, Kliaothong Angkabkingkaew, Thitikorn Boonyasawin
Abstract:
Heavy metal contamination of the environment is an important problem in Thailand that needs to be addressed urgently, particularly the contamination of water, since it can spread to other environments faster. This research aims to study the adsorption of cadmium ions by unmodified biochar and by sodium dodecyl sulfate modified magnetic biochar derived from Eichhornia crassipes. The adsorbent characteristics were determined by a Scanning Electron Microscope, a Fourier Transform Infrared Spectrometer, an X-ray Diffractometer, and the pH drift method. This study also included a comparison of the adsorption efficiency of both types of biochar, adsorption isotherms, and kinetics. The pH value at the point of zero charge of the unmodified biochar and the modified magnetic biochar was 7.40 and 3.00, respectively. The maximum adsorption was reached at pH 8. The equilibrium adsorption time was 5 hours and 1 hour for unmodified biochar and modified magnetic biochar, respectively. The cadmium adsorption by both adsorbents followed the Freundlich, Temkin, and Dubinin-Radushkevich isotherm models and pseudo-second-order kinetics. The adsorption process was spontaneous at high temperatures and non-spontaneous at low temperatures. It was an endothermic process, physisorption in nature, and can occur naturally.
Keywords: Eichhornia crassipes, magnetic biochar, sodium dodecyl sulfate, water treatment
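For readers who want to reproduce the isotherm and kinetic fits named above, a hedged sketch using scipy; the data points are placeholders, not the measured cadmium values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Equilibrium data: Ce (mg/L) vs qe (mg/g) -- illustrative numbers only.
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
qe = np.array([2.1, 3.0, 4.4, 6.1, 8.9])

def freundlich(Ce, Kf, n):
    # Freundlich isotherm: qe = Kf * Ce^(1/n)
    return Kf * Ce ** (1.0 / n)

(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 1.0])
print(f"Freundlich: Kf={Kf:.3f}, n={n:.3f}")

# Kinetic data: qt (mg/g) over time t (h) -- again placeholders.
t = np.array([0.25, 0.5, 1.0, 2.0, 5.0])
qt = np.array([3.0, 4.5, 6.0, 7.0, 7.8])

def pseudo_second_order(t, k2, qe_eq):
    # Integrated pseudo-second-order model: qt = k2*qe^2*t / (1 + k2*qe*t)
    return (k2 * qe_eq**2 * t) / (1.0 + k2 * qe_eq * t)

(k2, qe_eq), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.1, 8.0])
print(f"Pseudo-second-order: k2={k2:.3f}, qe={qe_eq:.3f}")
```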
Procedia PDF Downloads 171
10209 Reduced Power Consumption by Randomization for DSI3
Authors: David Levy
Abstract:
The newly released Distributed System Interface 3 (DSI3) Bus Standard specification defines 3 modulation levels from which 16 valid symbols are coded. This structure creates power consumption variations, depending on the transmitted data, by a factor of more than 2 between minimum and maximum. The power generation unit therefore has to consider the worst-case maximum consumption at all times and be built accordingly. This paper proposes a method to reduce both the average current consumption and the worst-case current consumption. The transmitter randomizes the data using several pseudo-random sequences. It then estimates the energy consumption of the generated frames and selects for transmission the one which consumes the least. The transmitter also prepends the index of the pseudo-random sequence, which is not randomized, to allow the receiver to recover the original data using the correct sequence. We show that, in the case that the frame occupies most of the DSI3 synchronization period, we achieve an average power consumption reduction of up to 13%, and the worst-case power consumption is reduced by 17.7%.
Keywords: DSI3, energy, power consumption, randomization
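A conceptual sketch of the selection step only (not the DSI3 implementation itself): scramble the payload with several pseudo-random sequences, estimate each candidate frame's energy, and transmit the cheapest together with the un-randomized sequence index. The symbol energies and the scrambling scheme below are invented for illustration:

```python
import random

SYMBOL_ENERGY = {0: 1.0, 1: 1.7, 2: 2.3}   # assumed cost per modulation level

def scramble(symbols, seed):
    # Invertible per-symbol offset driven by a seeded pseudo-random sequence.
    rng = random.Random(seed)
    return [(s + rng.randrange(3)) % 3 for s in symbols]

def frame_energy(symbols):
    return sum(SYMBOL_ENERGY[s] for s in symbols)

def encode(symbols, n_sequences=8):
    candidates = [(frame_energy(scramble(symbols, idx)), idx)
                  for idx in range(n_sequences)]
    energy, best_idx = min(candidates)
    # The index itself is sent un-randomized so the receiver can de-scramble.
    return best_idx, scramble(symbols, best_idx), energy

payload = [random.randrange(3) for _ in range(64)]
idx, frame, energy = encode(payload)
print(f"chose sequence {idx}: {energy:.1f} vs original {frame_energy(payload):.1f}")
```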
Procedia PDF Downloads 537
10208 Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks
Authors: Jayesh M. Patel, Bharat P. Modi
Abstract:
The availability of Wi-Fi on mobile devices has demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smart-phone usage. The ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper shows a comparison between the cellular data usage and the Wi-Fi data usage of users. This strategy helps operators to understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans towards tiered pricing and, in the future, towards more value-centric pricing offers significant revenue upside potential for mobile operators; but, without a complete insight into all aspects of smartphone customer behavior, operators will be unlikely to capture the maximum return from this billion-dollar market opportunity.
Keywords: cellular, Wi-Fi, mobile, smart phone
Procedia PDF Downloads 364
10207 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. The paper focuses on the cabin crew rostering problem, which is challenging due to the extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partition problem, and exactly one roster for each crew member is picked such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) relaxing some soft rules is allowed in our algorithm, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. Here a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member which minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production in a major airline in China, and numerical experiments show that it has a good performance.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
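A small sketch of how the quadratic fairness penalty described above can be scored for a candidate set of rosters; the weights, pairing data, and field names are assumptions, and the column-generation and labeling machinery itself is not shown:

```python
# Objective sketch: penalize unassigned pairings plus quadratic deviations of
# each crew member's overnight duties and fly-hours from the expected average.
def roster_cost(rosters, pairings, w_unassigned=1000.0, w_night=5.0, w_fly=1.0):
    assigned = {p for roster in rosters.values() for p in roster}
    unassigned = len(pairings) - len(assigned)

    nights = {c: sum(pairings[p]["overnights"] for p in r) for c, r in rosters.items()}
    hours = {c: sum(pairings[p]["fly_hours"] for p in r) for c, r in rosters.items()}
    avg_nights = sum(nights.values()) / len(rosters)
    avg_hours = sum(hours.values()) / len(rosters)

    fairness = sum(w_night * (nights[c] - avg_nights) ** 2 +
                   w_fly * (hours[c] - avg_hours) ** 2 for c in rosters)
    return w_unassigned * unassigned + fairness

pairings = {"P1": {"overnights": 2, "fly_hours": 14.0},
            "P2": {"overnights": 0, "fly_hours": 6.5},
            "P3": {"overnights": 1, "fly_hours": 9.0},
            "P4": {"overnights": 3, "fly_hours": 18.0}}
rosters = {"crew_A": ["P1", "P2"], "crew_B": ["P3"]}   # P4 left unassigned
print(roster_cost(rosters, pairings))
```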
Procedia PDF Downloads 145
10206 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization. There are problems in which the number of variables is finite and the number of constraints is infinite; these are semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period for each unit of money invested in stock i (i = 1, ..., N). The goal here is to determine the amounts xi to be invested in stock i, i = 1, ..., N, such that we maximize the value yᵀx at the end of the period, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio means the best portfolio in terms of the "risk-return" ratio, the portfolio that meets the investor's goals and attitude to risk. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain. Thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using the outer approximations method. This approach can be considered as a development of the Eaves-Zangwill method, applying the multi-start technique in all of the iterations for the search of the relevant constraints' parameters. The stochastic outer approximations method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
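A hedged sketch of an outer-approximation (exchange) loop of the kind described, applied to a toy robust portfolio LP: maximize the worst-case return subject to return constraints over an uncertainty set that is approximated by a growing finite subset. The box uncertainty set, the multi-start sampling of violated constraints, and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 4
y_lo = np.array([0.95, 0.90, 1.00, 0.85])   # assumed per-asset return bounds
y_hi = np.array([1.10, 1.25, 1.05, 1.40])

def solve_master(scenarios):
    # Variables z = (x_1..x_n, t); maximize t  <=>  minimize -t.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-np.array(scenarios), np.ones((len(scenarios), 1))])  # t - y.x <= 0
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])                   # budget: sum x = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(scenarios)),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

scenarios = [((y_lo + y_hi) / 2).tolist()]          # initial finite relaxation
for _ in range(20):
    x, t = solve_master(scenarios)
    # Multi-start search for the most violated return constraint for this x.
    samples = rng.uniform(y_lo, y_hi, size=(500, n))
    worst = samples[np.argmin(samples @ x)]
    if worst @ x >= t - 1e-6:                       # no violated constraint left
        break
    scenarios.append(worst.tolist())

print("weights:", np.round(x, 3), "worst-case return:", round(t, 4))
```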
Procedia PDF Downloads 308
10205 Kirchoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique
Authors: Abhilash Sahu, Amit Priyadarshi
Abstract:
In this paper, we will discuss the existence of weak solutions of a Kirchhoff type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is the intrinsic boundary of the Sierpinski gasket, M: R → R is a positive function, and h: S × R → R is a suitable function which is part of our main equation. ∆p denotes the p-Laplacian, where p > 1. First of all, we will define a weak solution for our problem, and then we will show the existence of at least two solutions for the above problem under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain. Recently, the notion of differential operators such as the Laplacian and the p-Laplacian on fractal domains has been defined. We recall that result first, and then we address the above problem. In view of the literature, Laplacian and p-Laplacian equations are studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, people have studied Laplacian equations more than the p-Laplacian, probably because in that case the corresponding function space is reflexive and many minimax theorems which work for regular domains are applicable there, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems on elastic fractal media, fluid flow through fractal regions, etc. We have studied the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Frechet or Gateaux differentiable, so a critical point becomes a solution of the problem. Also, the function space they consider is reflexive, and hence we can extract a weakly convergent subsequence from a bounded sequence. But in our case, neither is the Euler functional differentiable nor is the function space known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution
Procedia PDF Downloads 232
10204 Model Updating-Based Approach for Damage Prognosis in Frames via Modal Residual Force
Authors: Gholamreza Ghodrati Amiri, Mojtaba Jafarian Abyaneh, Ali Zare Hosseinzadeh
Abstract:
This paper presents an effective model updating strategy for damage localization and quantification in frames by defining the damage detection problem as an optimization issue. A generalized version of the Modal Residual Force (MRF) is employed for presenting a new damage-sensitive cost function. Then, the Grey Wolf Optimization (GWO) algorithm is utilized for solving the suggested inverse problem, and the global extrema are reported as damage detection results. The applicability of the presented method is investigated by studying different damage patterns on the benchmark problem of the IASC-ASCE, as well as a planar shear frame structure. The obtained results emphasize the good performance of the method not only in noise-free cases, but also when the input data are contaminated with different levels of noise.
Keywords: frame, grey wolf optimization algorithm, modal residual force, structural damage detection
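A minimal Grey Wolf Optimization sketch; here it minimizes a stand-in quadratic cost instead of the modal-residual-force objective, so the cost function and parameter choices are assumptions only:

```python
import numpy as np

def gwo(cost, dim, bounds, wolves=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(wolves, dim))          # candidate damage vectors
    for it in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 * (1.0 - it / iters)                     # linearly decreasing coefficient
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((wolves, dim)), rng.random((wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - X)
            X_new += (leader - A * D) / 3.0              # average pull towards the leaders
        X = np.clip(X_new, lo, hi)
    best_idx = np.argmin(np.apply_along_axis(cost, 1, X))
    return X[best_idx]

# Stand-in objective: recover an assumed "damage" pattern [0.0, 0.3, 0.0, 0.1].
target = np.array([0.0, 0.3, 0.0, 0.1])
print(gwo(lambda x: float(np.sum((x - target) ** 2)), dim=4, bounds=(0.0, 1.0)))
```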
Procedia PDF Downloads 387
10203 Estimation of Stress-Strength Parameter for Burr Type XII Distribution Based on Progressive Type-II Censoring
Authors: A. M. Abd-Elfattah, M. H. Abu-Moussa
Abstract:
In this paper, the estimation of the stress-strength parameter R = P(Y < X) is considered, when X and Y, the strength and stress respectively, are two independent random variables from the Burr Type XII distribution. The samples taken for X and Y are progressively Type-II censored. The maximum likelihood estimator (MLE) of R is obtained when the common parameter is unknown. When the common parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of R = P(Y < X) are obtained. The exact confidence interval of R based on the MLE is obtained. The performance of the proposed estimators is compared using computer simulation.
Keywords: Burr Type XII distribution, progressive Type-II censoring, stress-strength model, unbiased estimator, maximum-likelihood estimator, uniformly minimum variance unbiased estimator, confidence intervals, Bayes estimator
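An illustrative Monte Carlo check of R = P(Y < X) for Burr Type XII stress and strength variables sharing a common first shape parameter; the parameter values are arbitrary, and the progressive Type-II censoring scheme of the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
c, k_strength, k_stress = 2.0, 3.0, 1.5   # assumed Burr XII shape parameters

def rburr12(size, c, k):
    # Inverse-CDF sampling for F(x) = 1 - (1 + x^c)^(-k).
    u = rng.uniform(size=size)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

X = rburr12(200_000, c, k_strength)   # strength
Y = rburr12(200_000, c, k_stress)     # stress
print("Monte Carlo estimate of R:", np.mean(Y < X))
# For a common c, R has the closed form k_stress / (k_strength + k_stress).
print("Closed form:", k_stress / (k_strength + k_stress))
```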
Procedia PDF Downloads 455
10202 Study of Rehydration Process of Dried Squash (Cucurbita pepo) at Different Temperatures and Dry Matter-Water Ratios
Authors: Sima Cheraghi Dehdezi, Nasser Hamdami
Abstract:
Air-drying is the most widely employed method for preserving fruits and vegetables. Most dried products must be rehydrated by immersion in water prior to their use, so the study of rehydration kinetics, in order to optimize the rehydration phenomenon, has great importance. Rehydration typically comprises three simultaneous processes: the imbibition of water into the dried material, the swelling of the rehydrated products, and the leaching of soluble solids into the rehydration medium. In this research, squash (Cucurbita pepo) fruits were cut into slices 0.4 cm thick and 4 cm in diameter. Then, the squash slices were blanched in a steam chamber for 4 min. After cooling to room temperature, the squash slices were dehydrated in a hot air dryer, under an air flow of 1.5 m/s and an air temperature of 60°C, down to a moisture content of 0.1065 kg H2O per kg d.m. The dehydrated samples were kept in polyethylene bags and stored at 4°C. Squash slices of specified weight were rehydrated by immersion in distilled water at different temperatures (25, 50, and 75°C) and various dry matter-water ratios (1:25, 1:50, and 1:100), agitated at 100 rpm. At specified time intervals, up to 300 min, the squash samples were removed from the water, and the weight, moisture content and rehydration indices of the sample were determined. The texture characteristics were examined over a 180 min period. The results showed that rehydration time and temperature had significant effects on the moisture content, water absorption capacity (WAC), dry matter holding capacity (DHC), rehydration ability (RA), maximum force and stress of dried squash slices. The dry matter-water ratio had a significant effect (p˂0.01) on all squash slice properties except DHC. Moisture content, WAC and RA of the squash slices increased, whereas DHC and texture firmness (maximum force and stress) decreased with rehydration time. The maximum moisture content, WAC and RA and the minimum DHC, force and stress were observed in squash slices rehydrated in 75°C water. The lowest moisture content, WAC and RA and the highest DHC, force and stress were observed in squash slices immersed in water at a 1:100 dry matter-water ratio. In general, for all rehydration conditions of the squash slices, the highest water absorption rate occurred during the first minutes of the process; then, this rate decreased. The highest rehydration rate and amount of water absorption occurred at 75°C.
Keywords: dry matter-water ratio, squash, maximum force, rehydration ability
Procedia PDF Downloads 312
10201 The Examination of Withdrawn Behavior in Chinese Adolescents
Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgiana Duarte
Abstract:
This study examined withdrawn syndromes of Chinese school children in Northeast China. Specifically, the study examined withdrawn behaviors and their relationship to anxious syndromes and educational environments. The purpose is to examine how the elements of educational environments and the early adolescents’ behaviors, as independent variables, influence and possibly predict the school children’s withdrawn problems. The Achenbach System of Empirically Based Assessment (ASEBA) was the instrument used in the collection of data. A stratified sampling method was utilized to collect data from 2532 participants in seven schools. The results indicated that several background variables influenced the withdrawn problem. Specifically, age, grade, sports activities and hobbies had a relationship with the anxious/depressed variable. Furthermore, withdrawn syndromes and anxious problems showed a significant correlation.
Keywords: anxious/depressed problem, ASEBA, CBCL, withdrawn syndromes
Procedia PDF Downloads 296
10200 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir
Abstract:
NSGA-II is one of the most well-known and most widely used evolutionary algorithms. In addition to its new versions, such as NSGA-III, there are several modified types of this algorithm in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based neural network that uses various constructs. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on some multi-objective benchmarks from the literature shows the efficiency of the algorithm.
Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures
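A short sketch of the two ingredients kept unchanged from the classic NSGA-II, fast non-dominated sorting and crowding distance, applied to toy objective vectors (both objectives minimized); this is illustrative only and not the authors' implementation:

```python
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    # Peel off successive non-dominated fronts.
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(points, front):
    dist = {i: 0.0 for i in front}
    for k in range(len(points[0])):
        ordered = sorted(front, key=lambda i: points[i][k])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")   # boundary points kept
        span = points[ordered[-1]][k] - points[ordered[0]][k] or 1.0
        for prev, cur, nxt in zip(ordered, ordered[1:], ordered[2:]):
            dist[cur] += (points[nxt][k] - points[prev][k]) / span
    return dist

objs = [(3, 9), (4, 7), (5, 5), (6, 6), (7, 3), (8, 8)]   # e.g. (makespan, energy)
fronts = non_dominated_sort(objs)
print(fronts)
print(crowding_distance(objs, fronts[0]))
```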
Procedia PDF Downloads 227
10199 Development of IDF Curves for Precipitation in Western Watershed of Guwahati, Assam
Authors: Rajarshi Sharma, Rashidul Alam, Visavino Seleyi, Yuvila Sangtam
Abstract:
The Intensity-Duration-Frequency (IDF) relationship of rainfall amounts is one of the most commonly used tools in water resources engineering for the planning, design and operation of water resources projects, or for protecting various engineering projects against design floods. The establishment of such relationships was reported as early as 1932 (Bernard). Since then, many sets of relationships have been constructed for several parts of the globe. The objective of this research is to derive the IDF relationship of rainfall for the western watershed of Guwahati, Assam. These relationships are useful in the design of urban drainage works, e.g. storm sewers, culverts and other hydraulic structures. In the study, rainfall depth for 10 years, viz. 2001 to 2010, has been collected from the Regional Meteorological Centre Borjhar, Guwahati. Firstly, the data has been used to construct the mass curves for rainfall of more than 7 hours' duration, to calculate the maximum intensities and to form the intensity-duration curves. Gumbel's frequency analysis technique has been used to calculate the probable maximum rainfall intensities for return periods of 2 yr, 5 yr, 10 yr, 50 yr and 100 yr from the maximum intensity. Finally, regression analysis has been used to develop the intensity-duration-frequency (IDF) curve. Thus, from the analysis, the values of the constants 'a', 'b' and 'c' have been found. The value of 'a' for which the sum of the squared deviations is minimum has been found to be 40, and the corresponding values of 'c' and 'b' for the minimum squared deviation are 0.744 and 1981.527, respectively. The results obtained showed that in all the cases the correlation coefficient is very high, indicating the goodness of fit of the formulae for estimating IDF curves in the region of interest.
Keywords: intensity-duration-frequency relationship, mass curve, regression analysis, correlation coefficient
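A compact sketch of the two computational steps described: Gumbel frequency analysis of annual maximum intensities, followed by a least-squares fit of an assumed IDF form i = a / (t + b)^c. The rainfall values are invented, not the Borjhar records, and the functional form is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gumbel_quantile(annual_max, T):
    """Probable maximum intensity for return period T (years) by Gumbel's method."""
    mean, std = np.mean(annual_max), np.std(annual_max, ddof=1)
    K = -(np.sqrt(6) / np.pi) * (0.5772 + np.log(np.log(T / (T - 1.0))))   # frequency factor
    return mean + K * std

annual_max_1h = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0, 49.0, 63.0])
for T in (2, 5, 10, 50, 100):
    print(T, "yr:", round(gumbel_quantile(annual_max_1h, T), 1), "mm/h")

# Fit i = a / (t + b)**c to intensities (mm/h) at several durations t (h).
durations = np.array([0.5, 1, 2, 4, 8, 12])
intensities = np.array([90.0, 60.0, 38.0, 24.0, 14.0, 10.0])
popt, _ = curve_fit(lambda t, a, b, c: a / (t + b) ** c,
                    durations, intensities, p0=[60.0, 0.5, 0.8])
print("a, b, c =", np.round(popt, 3))
```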
Procedia PDF Downloads 243
10198 Three-Dimensional Optimal Path Planning of a Flying Robot for Terrain Following/Terrain Avoidance
Authors: Amirreza Kosari, Hossein Maghsoudi, Malahat Givar
Abstract:
In this study, the three-dimensional optimal path planning of a flying robot for Terrain Following / Terrain Avoidance (TF/TA) purposes using Direct Collocation has been investigated. To this purpose, firstly, the appropriate equations of motion representing the flying robot's translational movement have been described. The three-dimensional optimal path planning of the flying vehicle in a terrain following/terrain avoidance maneuver is formulated as an optimal control problem. The terrain profile, as the main allowable height constraint, has been modeled using the Fractal Generation Method. The resulting optimal control problem is discretized by applying the Direct Collocation numerical technique and then transformed into a Nonlinear Programming Problem (NLP). The efficacy of the proposed method is demonstrated by extensive simulations, and in particular, it is verified that this approach can produce a solution satisfying almost all performance and environmental constraints encountered in a low-level flying maneuver.
Keywords: path planning, terrain following, optimal control, nonlinear programming
Procedia PDF Downloads 184
10197 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem
Authors: Bidzina Matsaberidze
Abstract:
It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model, where expert assessments of the humanitarian aid distribution centers (HADC) are represented by q-rung orthopair fuzzy numbers, and the data structure is described within the body of evidence theory. Based on the focal probability construction and the experts' evaluations, an objective function, a distribution centers' selection ranking index, is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers. Some constraints are also taken into consideration while generating the matrix. In the second phase, based on the matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) which correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions
Procedia PDF Downloads 92
10196 Numerical Simulation of the Kurtosis Effect on the EHL Problem
Authors: S. Gao, S. Srirattayawong
Abstract:
In this study, a computational fluid dynamics (CFD) model has been developed for studying the effect of the surface roughness profile on the EHL problem. The cylinders' contact geometry, the meshing, and the calculation of the conservation of mass and momentum equations are carried out by using the commercial software packages ICEMCFD and ANSYS Fluent. User defined functions (UDFs) for the density, viscosity and elastic deformation of the cylinders, as functions of pressure and temperature, have been defined for the CFD model. Three different surface roughness profiles are created and incorporated into the CFD model. It is found that the developed CFD model can predict the characteristics of fluid flow and heat transfer in the EHL problem, including the leading parameters such as the pressure distribution, minimal film thickness, viscosity, and density changes. The obtained results show that the pressure profile at the center of the contact area is directly related to the roughness amplitude. A rough surface with a kurtosis value over 3 produces a more strongly fluctuating pressure distribution than the other cases.
Keywords: CFD, EHL, kurtosis, surface roughness
Procedia PDF Downloads 319
10195 Optimum Design of Piled-Raft Systems
Authors: Alaa Chasib Ghaleb, Muntadher M. Abbood
Abstract:
This paper presents a study of the problem of the optimum design of piled-raft foundation systems. The study has been carried out using a hypothetical problem and soil investigations of six site locations in Basrah city to evaluate the adequacy of using the piled-raft foundation concept. The three-dimensional finite element analysis method has been used to perform the structural analysis. The problem is optimized using the Hooke and Jeeves method, with the total weight of the foundation as the objective function and each of the raft thickness, pile length, number of piles and pile diameter as design variables. It is found that the total and differential settlements decrease with increasing raft thickness, number of piles, pile length, and pile diameter. Finally, a parametric study for load values, load type and raft dimensions has been carried out, and the results have been discussed.
Keywords: Hooke and Jeeves, optimum design, piled-raft, foundations
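A bare-bones Hooke and Jeeves pattern-search sketch, applied to a stand-in foundation-weight function of (raft thickness, pile length, number of piles, pile diameter); the objective and the settlement proxy are invented, and the number of piles is treated as continuous for simplicity:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-4):
    def explore(base, step):
        # Exploratory moves: try +/- step on each coordinate, keep improvements.
        x = list(base)
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol:
        new = explore(base, step)
        if f(new) < f(base):
            # Pattern move: jump in the improving direction, then explore again.
            pattern = [2 * v - b for v, b in zip(new, base)]
            candidate = explore(pattern, step)
            base = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink
    return base

# Stand-in "foundation weight" with penalties for non-physical designs and for
# exceeding an assumed settlement proxy threshold.
def weight(x):
    thickness, length, n_piles, diameter = x
    if min(x) <= 0:
        return 1e9
    w = 2.4 * thickness + 0.6 * length * n_piles * diameter ** 2
    settlement_proxy = 100.0 / (1 + thickness + 0.2 * length * n_piles * diameter)
    return w + (1e3 if settlement_proxy > 25.0 else 0.0)

print(hooke_jeeves(weight, [1.0, 10.0, 4.0, 0.5]))
```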
Procedia PDF Downloads 222
10194 Influence of Composite Adherents Properties on the Dynamic Behavior of Double Lap Bonded Joint
Authors: P. Saleh, G. Challita, R. Hazimeh, K. Khalil
Abstract:
In this paper, a 3D FEM analysis was carried out on a double lap bonded joint with composite adherents subjected to dynamic shear. The adherents are made of carbon/epoxy, while the adhesive is the epoxy Araldite 2031. The maximum average shear stress and the stress homogeneity in the adhesive layer were examined. Three fiber textures were considered: UD, 2.5D and 3D, with the same fiber volume; then a parametric study based on changing the thickness and the type of fiber texture in the 2.5D case was accomplished. Moreover, adherents' dissimilarity was also investigated. It was found that the main parameter influencing the behavior is the longitudinal stiffness of the adherents. An increase in the adherents' longitudinal stiffness induces an increase in the maximum average shear stress in the adhesive layer and an improvement in the shear stress homogeneity within the joint. No remarkable improvement was observed for dissimilar adherents.
Keywords: adhesive, composite adherents, impact shear, finite element
Procedia PDF Downloads 441
10193 Improving Coverage in Wireless Sensor Networks Using Particle Swarm Optimization Algorithm
Authors: Ehsan Abdolzadeh, Sanaz Nouri, Siamak Khalaj
Abstract:
Today, WSNs have many applications in different fields such as the environment, military operations, discovery, and monitoring operations. Coverage size and energy consumption are the important challenges that these networks need to face. This paper tries to solve the problem of coverage with a requirement of k-coverage and minimum energy consumption. In order to minimize energy consumption, visual sensor networks have been used, which observe and process just those targets that are located in their view direction. As a result, sensor rotations have decreased, and subsequently energy consumption has been minimized. To solve the coverage problem, particle swarm optimization has been used; the coverage optimization has been able to ensure the coverage requirement together with minimizing sensor rotations while meeting the problem requirement of k ≤ 14. So energy consumption has decreased, and this could subsequently extend the sensors' lifetime.
Keywords: K coverage, particle swarm optimization algorithm, wireless sensor networks, visual sensor networks
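A bare-bones particle swarm optimization sketch; the objective is a stand-in that trades closeness to assumed target bearings against total sensor rotation, not the paper's visual-sensor-network model (grid, k-coverage targets, etc.):

```python
import numpy as np

def pso(cost, dim, bounds, particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(particles, dim))      # candidate sensor orientations
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(cost, 1, pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(cost, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Stand-in objective on 10 sensor view angles (radians): reward pointing near
# assumed target bearings, penalize total rotation away from the home position.
targets = np.linspace(0.3, 2.8, 10)
cost = lambda angles: float(np.sum((angles - targets) ** 2) + 0.1 * np.sum(np.abs(angles)))
best, val = pso(cost, dim=10, bounds=(0.0, np.pi))
print(np.round(best, 2), round(val, 3))
```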
Procedia PDF Downloads 115
10192 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis
Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc
Abstract:
The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies. Mainly, qualitative observations were performed. Correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the maximum pressure evolution when the interior shape of the chamber changes in the different phases of the weapon's life. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, was carried out. Two test barrels, « 5.56x45mm NATO » and « 7.62x51mm NATO », are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a special WEIBEL radar is used to measure: (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out. Thus, a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian method using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated and checked against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show a good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted. The projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed. Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation
Procedia PDF Downloads 213
10191 Heavy Metals Concentration in Sediments Along the Ports, Samoa
Authors: T. Imo, F. Latū, S. Aloi, J. Leung-Wai, V. Vaurasi, P. Amosa, M. A. Sheikh
Abstract:
Contamination of coral reefs and coastal areas by heavy metals is a serious ecotoxicological and environmental problem due to direct runoff from anthropogenic wastes, commercial vessels, and the discharge of industrial effluents. In Samoa, the information on the ecotoxicological impact of heavy metals on sediments is limited. This study presents baseline data on the concentration and distribution of heavy metals in sediments collected along the commercial and fishing ports in Samoa. Surface sediment samples were collected within the months of August-October 2013 from the 5 sites along the 2 ports. Sieved sample fractions were used for the evaluation of the sediment physicochemical parameters, namely pH, conductivity, organic matter, and bicarbonates of calcium. Heavy metal (Cu, Pb) analysis was achieved by flame atomic absorption spectrometry. Two heavy metals (Cu, Pb) were detected at each port, with some concentrations below the WHO maximum permissible concentration of the environmental quality standard. The results obtained from this study advocate further studies regarding the emerging threats of heavy metals to the vital marine resources which have significant importance to the livelihood of coastal societies, particularly Small Island States including Samoa.
Keywords: coastal environment, heavy metals, pollution, sediments
Procedia PDF Downloads 327
10190 Growth Performance and Nutrient Digestibility of Cirrhinus mrigala Fingerlings Fed on Sunflower Meal Based Diet Supplemented with Phytase
Authors: Syed Makhdoom Hussain, Muhammad Afzal, Farhat Jabeen, Arshad Javid, Tasneem Hameed
Abstract:
A feeding trial was conducted with Cirrhinus mrigala fingerlings to study the effects of microbial phytase at graded levels (0, 500, 1000, 1500, and 2000 FTU kg-1) in a sunflower meal based diet on growth performance and nutrient digestibility. Chromic oxide was added as an indigestible marker in the diets. Three replicate groups of 15 fish (average weight 5.98 g fish-1) were fed once a day, and feces were collected twice daily. The results of the present study showed improved growth and feed performance of Cirrhinus mrigala fingerlings in response to phytase supplementation. Maximum growth performance was obtained by the fish fed on test diet III, having a phytase level of 1000 FTU kg-1. Similarly, nutrient digestibility was also significantly increased (p<0.05) by phytase supplementation. Digestibility coefficients for the sunflower meal based diet increased by 15.76%, 17.70%, and 12.70% for crude protein, crude fat and apparent gross energy, respectively, as compared to the reference diet at the 1000 FTU kg-1 level. Again, the maximum response of nutrient digestibility was recorded at the phytase level of 1000 FTU kg-1 diet. It was concluded that phytase supplementation of the sunflower meal based diet at the 1000 FTU kg-1 level is optimal to release adequate chelated nutrients for maximum growth performance of C. mrigala fingerlings. Our results also suggested that phytase supplementation of a sunflower meal based diet can help in the development of sustainable aquaculture by reducing the feed cost and the nutrient discharge through feces into the aquatic ecosystem.
Keywords: sunflower meal, Cirrhinus mrigala, growth, nutrient digestibility, phytase
Procedia PDF Downloads 298
10189 Investigating Software Engineering Challenges in Game Development
Authors: Fawad Zaidi
Abstract:
This paper discusses a variety of challenges and solutions involved in creating computer games and the issues faced by the software engineers working in this field. This review further investigates the articles' coverage of project scope and the problem of feature creep that appears to be inherent to game development. The paper tries to answer the following question: Is this a problem caused by a shortage, or by bad software engineering practices, or is this outside the control of the software engineering component of the game production process?
Keywords: software engineering, computer games, software applications, development
Procedia PDF Downloads 472
10188 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers
Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi
Abstract:
Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of Planet imagery for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. Firstly, rectification of the Planet image was performed. Secondly, a comparison between minimum distance, maximum likelihood and neural network classifications of the Planet image was performed. Thirdly, the overall accuracy of classification and the kappa coefficient were calculated. Results indicate that neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification for land cover mapping.
Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers
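A small sketch of the accuracy-assessment step, computing overall accuracy and the kappa coefficient from a confusion matrix of reference versus classified land-cover labels; the matrix values are made up, not the study's results:

```python
import numpy as np

# Rows: reference classes, columns: classified classes (e.g. water/vegetation/urban).
confusion = np.array([[50,  3,  2],
                      [ 4, 60,  6],
                      [ 1,  5, 69]], dtype=float)

total = confusion.sum()
overall_accuracy = np.trace(confusion) / total
# Chance agreement from the row and column marginals.
expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)
print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```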
Procedia PDF Downloads 185
10187 A Memetic Algorithm for an Energy-Costs-Aware Flexible Job-Shop Scheduling Problem
Authors: Christian Böning, Henrik Prinzhorn, Eric C. Hund, Malte Stonis
Abstract:
In this article, the flexible job-shop scheduling problem is extended by consideration of energy costs which arise owing to the power peak, and further decision variables such as work in process and throughput time are incorporated into the objective function. This enables a production plan to be simultaneously optimized in respect of the real arising energy and logistics costs. The energy-costs-aware flexible job-shop scheduling problem (EFJSP) which arises is described mathematically, and a memetic algorithm (MA) is presented as a solution. In the MA, the evolutionary process is supplemented with a local search. Furthermore, repair procedures are used in order to rectify any infeasible solutions that have arisen in the evolutionary process. The potential for lowering the real arising costs of a production plan through consideration of energy consumption levels is highlighted.
Keywords: energy costs, flexible job-shop scheduling, memetic algorithm, power peak
Procedia PDF Downloads 343