Search results for: relative average residence time array
7163 Effect of Anion and Amino Functional Group on Resin for Lipase Immobilization with Adsorption-Cross Linking Method
Authors: Heri Hermansyah, Annisa Kurnia, A. Vania Anisya, Adi Surjosatyo, Yopi Sunarya, Rita Arbianti, Tania Surya Utami
Abstract:
Lipase is a biocatalyst that is applied commercially in industries such as the bioenergy, food, and pharmaceutical industries. Biocatalysts are now preferred in industry because they work under mild conditions, have high specificity, and reduce energy consumption (no need for high pressure and temperature). However, the use of lipase at industrial scale is limited for economic reasons, owing to the high price of lipase and the difficulty of separating it from the reaction system. Immobilization of lipase is one solution that maintains lipase activity and simplifies the separation step. We therefore studied lipase immobilization by the adsorption-cross-linking method using glutaraldehyde, because this method gives high enzyme loading and stability. Lipase was immobilized on different kinds of resin bearing various functional groups. The highest enzyme loading (76.69%) was achieved by lipase immobilized on an anion macroporous resin carrying an anion functional group (OH-). However, the highest activity (24.69 U/g support), measured by the olive oil emulsion method, was achieved by lipase immobilized on anion macroporous-chitosan resin carrying both amino (NH2) and anion (OH-) functional groups. This preparation also produced biodiesel with a yield of up to 50.6% through an interesterification reaction and, after 4 cycles, remained stable at 63.9% of the initial yield. Aspergillus niger lipase immobilized on anion macroporous-chitosan resin had a unit activity of 22.84 U/g resin and a higher biodiesel yield than commercial lipase (69.1%), remaining stable at 70.6% of the initial yield after 4 cycles. This shows that the optimum functional groups on a support for immobilization by adsorption-cross-linking are the amino (NH2) and anion (OH-) groups, because they react with glutaraldehyde and bind the enzyme, preventing desorption of lipase from the support.
Keywords: Adsorption-Cross linking, lipase, resin, immobilization.
7162 The Assessment of Interactions in Ratios Control Schemes for a Binary Distillation Column
Authors: R. Bendib, A. Khelassi
Abstract:
In this paper we consider the best-known ratio control schemes ((L/D, V/B), (L/D, V/F), Ryskamp's, and (D/(L+D), V/B)) for a binary distillation column and compare them on the basis of loop interactions and disturbance propagation. The models for these configurations are deduced using mathematical transformations, taking the energy balance structure (LV) as the base model. The dynamic relative magnitude criterion (DRMC) is used to assess the interactions. The results show that introducing ratios in controlling the column tends to minimize the degree of interaction between the loops.
Keywords: Distillation, interaction, DRMC, configurations.
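As a minimal illustration of screening a configuration for loop interaction, the sketch below uses the steady-state relative gain array (a simpler, related measure, not the DRMC used in the paper); the 2x2 gain matrix is hypothetical:

```python
import numpy as np

def relative_gain_array(G):
    """Steady-state RGA: element-wise product of G and the transpose of its inverse."""
    return G * np.linalg.inv(G).T

# Hypothetical steady-state gain matrix for a 2x2 distillation configuration
# (rows: controlled compositions, columns: manipulated ratio variables).
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])

print(relative_gain_array(G))   # diagonal elements near 1 indicate weak loop interaction
```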
7161 A Distributed Cognition Framework to Compare E-Commerce Websites Using Data Envelopment Analysis
Authors: C. lo Storto
Abstract:
This paper presents an approach based on a distributed cognition framework and a non-parametric multi-criteria evaluation methodology (DEA), designed specifically to compare e-commerce websites from the consumer/user viewpoint. In particular, the framework treats a website's relative efficiency as a measure of its quality and usability. A website is modelled as a black box that provides the consumer/user with a set of functionalities. When the consumer/user interacts with the website to perform a task, he/she is engaged in a cognitive activity, sustaining a cognitive cost to search, interpret and process information, and experiencing a sense of satisfaction. The degree of ambiguity and uncertainty he/she perceives and the search time needed determine the size of the effort, and hence the cognitive cost, he/she has to sustain to perform the task. Conversely, performing the task and achieving the result induce a sense of gratification, satisfaction and usefulness. In total, 9 variables are measured, classified into 3 website macro-dimensions (user experience, site navigability and structure). The framework is applied to compare 40 websites of businesses performing electronic commerce in the information technology market. A questionnaire for collecting subjective judgements on the websites in the sample was purposely designed and administered to 85 university students enrolled in computer science and information systems engineering undergraduate courses.
Keywords: Website, e-commerce, DEA, distributed cognition, evaluation, comparison.
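A minimal sketch of the input-oriented CCR DEA efficiency score computed by linear programming; the inputs/outputs and the small data set below are hypothetical stand-ins, not the paper's nine measured variables:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs)  e.g. cognitive cost, search time.
    Y: (n_units, n_outputs) e.g. satisfaction score."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta; variables = [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                           # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]                              # efficiency score in (0, 1]

X = np.array([[2.0, 5.0], [3.0, 4.0], [4.0, 6.0]])   # hypothetical "cognitive cost" inputs
Y = np.array([[7.0], [6.5], [8.0]])                   # hypothetical satisfaction output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```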
7160 Delay Analysis of Sampled-Data Systems in Hard RTOS
Authors: A. M. Azad, M. Alam, C. M. Hussain
Abstract:
In this paper, we present the effect of varying time delays on the performance and stability of a single-channel multirate sampled-data system in a hard real-time (RT-Linux) environment. The sampling task requires a response time that might exceed the capacity of RT-Linux, so a straightforward implementation is not feasible because of system latency; hence the sampling period should be kept short enough to handle this task. The best sampling rate chosen for the sampled-data system is the slowest rate that meets all performance requirements. RT-Linux is consistent with its specifications, and a real-time resolution of 0.01 seconds is used to achieve an efficient result. The results of our laboratory experiment show that the multirate control technique in a hard real-time operating system (RTOS) can mitigate the stability problems caused by random access delays and loss of synchronization.
Keywords: Multi-rate, PID, RT-Linux, Sampled-data, Servo.
7159 Two-Dimensional Modeling of Spent Nuclear Fuel Using FLUENT
Authors: Imane Khalil, Quinn Pratt
Abstract:
In a nuclear reactor, an array of fuel rods containing stacked uranium dioxide pellets clad with Zircaloy is the heat source for a thermodynamic cycle of energy conversion from heat to electricity. After fuel is used in a nuclear reactor, the assemblies are stored underwater in a spent nuclear fuel pool at the nuclear power plant while heat generation and radioactive decay rates decrease, before the assemblies are placed in packages for dry storage or transportation. A computational model of a Boiling Water Reactor spent fuel assembly is developed using FLUENT, the computational fluid dynamics package. Heat transfer simulations were performed on the two-dimensional 9x9 spent fuel assembly to predict the maximum cladding temperature for different inputs to the FLUENT model. Uncertainty quantification is used to predict the heat transfer and the maximum temperature profile inside the assembly.
Keywords: Spent nuclear fuel, conduction, heat transfer, uncertainty quantification.
7158 Modeling Ecological Responses of Some Forage Legumes in Iran
Authors: M. Keshavarzi
Abstract:
Grasslands of Iran face widespread desertification and destruction. Some legumes are plants of forage importance with high palatability. The legumes studied in this project are Onobrychis, Medicago sativa (alfalfa) and Trifolium repens. Seeds were cultivated in the research field of Kaboutarabad (33 km east of Isfahan, Iran), with an average annual rainfall of 80 mm. Plants were cultivated in a split-plot design with three replicates and two water treatments (weekly irrigation, and water stress with the same amount applied at 15-day intervals). Water entering each plot was measured by partial flow. The project lasted 20 weeks. Destructive samplings (1 m2 each time) were done weekly; at each sampling, plants were gathered and weighed separately for each vegetative part. An area meter (Vista) was used to measure root surface and leaf area. Total shoot and root fresh and dry weight, leaf area index and soil coverage were evaluated as well. Dry weight was obtained after 24 hours in a 75 °C oven. Statgraphics and Harvard Graphics software were used to formulate and plot the parameter curves over time. Our results show that Trifolium repens was affected 60% and Medicago sativa 18% by water stress, while the total fresh weight of Onobrychis was reduced by 45%. Dry weight (biomass) in alfalfa is not strongly affected by water shortage, which means that in alfalfa fields the irrigation amount can be decreased while still obtaining roughly the same biomass; Onobrychis, in contrast, shows a drastic decrease in biomass. The increase in total dry matter over time is formulated for the studied plants. For Trifolium repens, if cutting or cattle entry to the meadows does not occur at the right time, the palatability and water content of the shoots decrease. Short-period water stress can develop the root system of Trifolium repens, but if it lasts longer, other ecological and soil factors affect the growth of this plant. A low level of soil water is not very important for the studied legume forages, but water shortage does affect the palatability and water content of the aerial parts. Leaf area over time is formulated for the studied legumes; leaf area decreases with a shortage of available water, and higher leaf area means higher forage and biomass production. Medicago and Onobrychis reach their maximum leaf area sooner than Trifolium and are able to produce an optimum soil cover that inhibits the loss of soil water from meadows. The correlation of root surface to total biomass in the studied plants is formulated. Medicago under water stress shows a 40% decrease in crown cover, while under optimum conditions this value reaches 100%. In order to produce forage without soil erosion, Medicago is the best choice even with a shortage of water resources. We attempt to represent the growth simulation of these three well-known forage legumes; with such growth simulations, farmers and range managers can better choose the plant best adapted to the available water without designing time- and labor-consuming field experiments.
Keywords: Ecological parameters, Medicago, Onobrychis, Trifolium.
7157 Plaque Formation of Toxoplasma gondii in Vero Cells using Carboxymethylcellulose
Authors: L. Fonseca-Géigel, M. Alvarez, G. García, R. Cox, L. Morier, L. Fonte, M. G. Guzmán
Abstract:
Toxoplasma gondii is an intracellular parasite capable of infecting all nucleated cells in a diverse array of species. A Toxoplasma plaque assay has previously been described using Bacto Agar. Because of its experimental advantages, a medium-viscosity carboxymethyl cellulose overlay was chosen, and the aim of this work was to develop an alternative method for the formation of T. gondii plaques. Tachyzoites were inoculated onto monolayers of Vero cells and cultured at 37 °C under 5% CO2. The cultures were followed up by microscopic inspection. Small plaques were visible by naphthol blue staining 4 days after infection, and larger plaques could be observed by day 10 of culture. Carboxymethyl cellulose is a cheap reagent, and the methodology is easier and faster than assays under an agar overlay. This is the first description of the use of a carboxymethyl cellulose overlay for obtaining T. gondii plaques, and it may be useful for subsequently obtaining tachyzoites for detailed studies.
Keywords: Carboxymethyl cellulose, Cell culture, Plaque assay, Toxoplasma gondii.
7156 Influence of Heterogeneous Traffic on the Roadside Fine (PM2.5 and PM1) and Coarse (PM10) Particulate Matter Concentrations in Chennai City, India
Authors: B. Srimuruganandam, S. M. Shiva Nagendra
Abstract:
In this paper, the influence of heterogeneous traffic on the temporal variation of ambient PM10, PM2.5 and PM1 concentrations at a busy arterial route (Sardar Patel Road) in Chennai city has been analyzed. The hourly PM concentrations, traffic counts and average vehicle speeds were monitored at the study site for one week (19th-25th January 2009). Results indicated that the coarse (PM10) and fine (PM2.5 and PM1) PM concentrations at SP Road follow similar trends during peak and non-peak hours, irrespective of the day. The PM concentrations showed two daily peaks corresponding to the morning (8 to 10 am) and evening (7 to 9 pm) peak-hour traffic flow. The PM10 concentration is dominated by fine particles (PM2.5 contributing 53% and PM1 45%). The high PM2.5/PM10 ratio indicates that the majority of PM10 particles originate from re-suspension of road dust. The analysis of traffic flow at the study site showed that two-wheelers (2W), three-wheelers (3W) and four-wheelers (4W) have diurnal trends similar to the PM concentrations, confirming that 2W, 3W and 4W are the main emission sources contributing to the ambient PM concentration at SP Road. The speed measurements at SP Road showed that the average speeds of 2W, 3W, 4W, LCV and HCV are 38, 40, 38, 40 and 38 km/hr on weekdays and 43, 41, 42, 40 and 41 km/hr on weekends, respectively.
Keywords: Particulate matter, heterogeneous traffic, fine particles, coarse particles, vehicle speed, weekend and weekday.
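A minimal sketch of the kind of diurnal aggregation and fine-to-coarse ratio used here; the hourly data are synthetic placeholders, not the SP Road measurements:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one week of hourly roadside monitoring data
rng = np.random.default_rng(5)
idx = pd.date_range("2009-01-19", periods=7 * 24, freq="h")
peak = np.isin(idx.hour.to_numpy(), [8, 9, 19, 20]).astype(float)   # morning/evening peaks
df = pd.DataFrame({
    "PM10":  80 + 40 * peak + rng.normal(0, 5, idx.size),
    "PM2.5": 45 + 25 * peak + rng.normal(0, 3, idx.size),
    "PM1":   35 + 20 * peak + rng.normal(0, 3, idx.size),
}, index=idx)

df["pm25_pm10_ratio"] = df["PM2.5"] / df["PM10"]          # fine-to-coarse ratio
diurnal = df.groupby(df.index.hour).mean()                # mean diurnal profile by hour
print(diurnal.round(1))                                   # reveals the two traffic peaks
```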
7155 Efficient Time Synchronization in Wireless Sensor Networks
Authors: Shehzad Ashraf Ch., Aftab Ahmed Khan, Zahid Mehmood, Muhammad Ahsan Habib, Qasim Mehmood
Abstract:
Energy efficiency is the key requirement in a wireless sensor network, as sensors are small, cheap and deployed in very large numbers over a large geographical area, so there is no question of replacing the batteries of the sensors once deployed. Different techniques can be used for energy-efficient transmission, including multi-hop algorithms, collaborative communication, cooperative communication, beamforming, routing algorithms, and phase, frequency and time synchronization. This paper reviews the need for time synchronization and proposes a BFS-based synchronization algorithm to achieve energy efficiency. The efficiency of our protocol has been tested and verified by simulation.
Keywords: time synchronization, sensor networks, energy efficiency, breadth first search
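A minimal sketch of BFS-based synchronization, where a reference node's time propagates level by level and each node accumulates its offset to the root (the topology and pairwise offsets below are hypothetical; the paper's actual message exchange is not detailed in the abstract):

```python
from collections import deque

def bfs_sync(adj, measured_offset, root=0):
    """Propagate clock offsets from the root along a BFS tree.
    adj: adjacency list {node: [neighbours]}.
    measured_offset[(u, v)]: offset of v's clock relative to u's (from a pairwise exchange).
    Returns each node's offset relative to the root's clock."""
    offset = {root: 0.0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in offset:                       # visit each node once, in BFS level order
                offset[v] = offset[u] + measured_offset[(u, v)]
                queue.append(v)
    return offset

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
measured = {(0, 1): 0.4, (0, 2): -0.2, (1, 3): 0.1}
print(bfs_sync(adj, measured))   # {0: 0.0, 1: 0.4, 2: -0.2, 3: 0.5}
```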
7154 A Reduced-Bit Multiplication Algorithm for Digital Arithmetic
Authors: Harpreet Singh Dhillon, Abhijit Mitra
Abstract:
A reduced-bit multiplication algorithm based on the ancient Vedic multiplication formulae is proposed in this paper. Both Vedic multiplication formulae, Urdhva tiryakbhyam and Nikhilam, are first discussed in detail. Urdhva tiryakbhyam, being a general multiplication formula, is equally applicable to all cases of multiplication. It is applied to digital arithmetic and is shown to yield a multiplier architecture very similar to the popular array multiplier. Due to its structure, it leads to a high carry propagation delay in the multiplication of large numbers. The Nikhilam Sutra, on the other hand, is more efficient for large numbers, as it reduces the multiplication of two large numbers to that of two smaller numbers. The framework of the proposed algorithm is taken from this Sutra and is further optimized by the use of some general arithmetic operations, such as expansion and bit-shifting, to take advantage of bit reduction in multiplication. We illustrate the proposed algorithm by reducing a general 4x4-bit multiplication to a single 2x2-bit multiplication operation.
Keywords: Multiplication, algorithm, Vedic mathematics, digital arithmetic, reduced-bit.
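A minimal sketch of the Nikhilam idea for operands close to a power-of-two base: the full multiplication is reduced to a smaller multiplication of the two deficits plus a shift. This is a generic illustration of the Sutra, not the paper's optimized hardware architecture:

```python
def nikhilam_multiply(a, b, base_bits):
    """Multiply a and b (both close to base = 2**base_bits) via the Nikhilam sutra:
    a*b = ((a + b - base) << base_bits) + (base - a) * (base - b),
    so only the small deficits need a true multiplication."""
    base = 1 << base_bits
    deficit_a = base - a
    deficit_b = base - b
    return ((a - deficit_b) << base_bits) + deficit_a * deficit_b

# 4-bit operands reduced to a 2-bit x 2-bit multiplication of the deficits (2 and 3)
print(nikhilam_multiply(14, 13, 4), 14 * 13)   # 182 182
```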
7153 A General Framework for Modeling Replicated Real-Time Database
Authors: Hala Abdel hameed, Hazem M. El-Bakry, Torky Sultan
Abstract:
There are many issues that affect the modeling and design of real-time databases. One of these issues is maintaining consistency between the actual state of a real-time object in the external environment and its images as reflected by all of its replicas distributed over multiple nodes. The need to improve scalability is another important issue. In this paper, we present a general framework for designing a replicated real-time database for small to medium scale systems that maintains all timing constraints. To extend the idea to modeling a large-scale database, we present a general outline that improves scalability by applying an existing static segmentation algorithm to the whole database in order to lower the degree of replication; enabling segments to have individual degrees of replication avoids excessive resource usage, and together these measures contribute to solving the scalability problem for distributed real-time database systems (DRTDBS).
Keywords: Database modeling, Distributed database, Real time databases, Replication
7152 Nonstationarity Modeling of Economic and Financial Time Series
Authors: C. Slim
Abstract:
Traditional techniques for analyzing time series are based on the notion of stationarity of the phenomena under study, but in reality most economic and financial series do not satisfy this hypothesis, which calls for specific tools to detect such behavior. In this paper, we study tests for nonstationary, non-seasonal time series in a non-exhaustive manner. We formalize the problem of nonstationary processes with numerical simulations and summarize their statistical characteristics. The theoretical aspects of some of the most common unit root tests are discussed; we detail the specification of the tests and show the advantages and disadvantages of each. The empirical study focuses on applying these tests to the exchange rate (USD/TND) and the Consumer Price Index (CPI) in Tunisia, in order to compare the power of the tests against the characteristics of the series.
Keywords: Stationarity, unit root tests, economic time series.
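A minimal sketch of one common unit root test, the augmented Dickey-Fuller test from statsmodels; the series here is a simulated random walk, not the Tunisian USD/TND or CPI data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))        # random walk: contains a unit root

stat, pvalue, usedlag, nobs, crit, _ = adfuller(series, regression="c", autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the unit-root (nonstationarity) hypothesis cannot be rejected;
# differencing the series and re-testing is the usual next step.
```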
7151 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization
Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang
Abstract:
Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided; when they are significant, the performance of the real-time optimization is impaired and the expected energy saving is reduced. In this paper, the model mismatches of a chiller plant under real-time optimization are considered. In the real-time optimization of a chiller plant, a simplified semi-physical or grey-box chiller model is typically used, which must be identified from available operation data. To overcome the mismatches associated with the chiller model, a hybrid Genetic Algorithm (HGA) method is used for online real-time training of the chiller model. The HGA combines the Genetic Algorithm (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional hit-and-trial process of a pure GA. The identification of model parameters is cast as an optimization problem whose objective function is the least-squares error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach provides reliability in decision making, enhances the robustness of the real-time optimization strategy and improves energy performance.
Keywords: Energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning.
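A minimal sketch of hybrid parameter identification in this spirit: a small evolutionary global search (selection and mutation only) followed by a local refinement of the least-squares objective. The one-equation "model" below is a hypothetical stand-in, not the paper's grey-box chiller model:

```python
import numpy as np
from scipy.optimize import minimize

def model(params, load):                       # hypothetical stand-in model
    a, b = params
    return a * load + b * load ** 2

def sse(params, load, measured):               # least-squares error objective
    return np.sum((model(params, load) - measured) ** 2)

rng = np.random.default_rng(1)
load = np.linspace(0.2, 1.0, 50)
measured = model([3.0, 1.5], load) + rng.normal(0.0, 0.05, load.size)

# Global stage: tiny elitist evolutionary search over the parameter box [0, 5]^2
pop = rng.uniform(0.0, 5.0, size=(40, 2))
for _ in range(30):
    fitness = np.array([sse(p, load, measured) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                                  # keep the best half
    children = parents[rng.integers(0, 20, size=20)] + rng.normal(0.0, 0.1, size=(20, 2))
    pop = np.vstack([parents, children])
best = min(pop, key=lambda p: sse(p, load, measured))

# Local stage: fast derivative-free refinement around the best global candidate
result = minimize(sse, best, args=(load, measured), method="Nelder-Mead")
print(result.x)                                # close to the true parameters [3.0, 1.5]
```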
7150 Arterial CO2 Pressure Drives Ventilation with a Time Delay during Recovery from an Impulse-like Exercise without Metabolic Acidosis
Authors: R. Afroundeh, T. Arimitsu, R. Yamanaka, C. S. Lian, T. Yunoki, T. Yano, K. Shirakawa
Abstract:
We investigated the hypothesis that arterial CO2 pressure (PaCO2) drives ventilation (V̇E) with a time delay during recovery from short impulse-like exercise (10 s) at a work load of 200 watts. V̇E and end-tidal CO2 pressure (PETCO2) were measured continuously during the rest, warm-up, exercise and recovery periods. PaCO2 was predicted (PaCO2 pre) from PETCO2 and tidal volume (VT). PETCO2 and PaCO2 pre peaked at 20 s of recovery. V̇E increased and peaked at the end of exercise and then decreased during recovery; however, it peaked again at 30 s of recovery, which was 10 s later than the peak of PaCO2 pre. The relationship between V̇E and PaCO2 pre was not significant when using values obtained at the same time, but was significant when V̇E values obtained 10 s later were paired with the PaCO2 pre values. The results support our hypothesis that PaCO2 drives V̇E with a time delay.
Keywords: Arterial CO2 pressure, impulse-like exercise, time delay, ventilation.
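A minimal sketch of how such a delay can be identified by lagged correlation; the 1-s sampled series and the 10-s built-in lag below are synthetic, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 120)                          # 1-s samples over a 2-minute recovery
paco2 = 40 + 5 * np.exp(-((t - 20) / 15.0) ** 2) + rng.normal(0, 0.2, t.size)
ve = 10 + 4 * np.exp(-((t - 30) / 15.0) ** 2) + rng.normal(0, 0.2, t.size)   # lags by 10 s

def best_lag(x, y, max_lag=30):
    """Lag (in samples) by which y trails x that maximizes the Pearson correlation."""
    lags = range(0, max_lag + 1)
    corrs = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1] for k in lags]
    return max(lags, key=lambda k: corrs[k])

print(best_lag(paco2, ve))    # ~10, i.e. ventilation follows predicted PaCO2 by about 10 s
```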
7149 Issues in Deploying Smart Antennas in Mobile Radio Networks
Authors: Rameshwar Kawitkar
Abstract:
With the exponentially increasing demand for wireless communications, the capacity of current cellular systems will soon become incapable of handling the growing traffic. Since radio frequencies are a diminishing natural resource, there seems to be a fundamental barrier to further capacity increase. The solution can be found in smart antenna systems. Smart or adaptive antenna arrays consist of an array of antenna elements with signal processing capability that dynamically optimizes the radiation and reception of a desired signal. Smart antennas can place nulls in the direction of interferers via adaptive updating of the weights linked to each antenna element. They thus cancel out most of the co-channel interference, resulting in better reception quality and fewer dropped calls. Smart antennas can also track the user within a cell via direction-of-arrival algorithms. This makes them more advantageous than other antenna systems. This paper focuses on a few issues concerning the deployment of smart antennas in mobile radio networks.
Keywords: Smart/Adaptive Antenna, Multipath fading, Beamforming, Radio propagation.
7148 Finding Pareto Optimal Front for the Multi-Mode Time, Cost Quality Trade-off in Project Scheduling
Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo
Abstract:
Project managers are ultimately responsible for the overall characteristics of a project, i.e. they should deliver the project on time with minimum cost and maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and they will benefit from any scientific decision support tool. Our work tries to determine a set of optimal solutions (rather than a single optimal solution) from which the project manager can select the preferred choice to run the project. In this paper, the project scheduling problem notated as (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective and the purpose is to find the Pareto optimal front of time, cost and quality of a project (curve:quality,time,cost), whose activities belong to a start-to-finish activity relationship network (cpm) and can be done in different possible modes (mu) which are non-continuous or discrete (disc), each mode having a different cost, time and quality. The project is constrained by a non-renewable resource, i.e. money (1,T). Because the problem is NP-hard, a meta-heuristic is developed to solve it, based on a version of the genetic algorithm specially adapted to multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.
Keywords: FastPGA, Multi-Execution Activity Mode, Pareto Optimality, Project Scheduling, Time-Cost-Quality Trade-Off.
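A minimal sketch of extracting the non-dominated (Pareto optimal) schedules from a set of candidate solutions, where time and cost are minimized and quality is maximized; the candidate list is hypothetical and the FastPGA search itself is not reproduced here:

```python
def dominates(a, b):
    """a = (time, cost, quality). a dominates b if it is no worse in every
    objective (time and cost lower or equal, quality higher or equal)
    and strictly better in at least one."""
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and strictly

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

candidates = [(30, 120, 0.90), (28, 150, 0.88), (35, 100, 0.95), (31, 125, 0.89)]
print(pareto_front(candidates))   # (31, 125, 0.89) is dominated by (30, 120, 0.90)
```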
7147 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods
Authors: Mochammad Dewo, Sumarsono Sudarto
Abstract:
The triple exponential smoothing approach currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, has a Mean Absolute Percentage Error (MAPE) of 30-50%, which may be considered a "reasonable" forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as extra production adds to raw material, manufacturing and storage expenses. Reaching an "excellent" classification, with an error value of less than 10%, would give new investors and exporters confidence in the commercial development of the related sectors, and industrial growth has a positive impact on economic development. The approach can be applied to other commodities if the forecast error is less than 10%. The purpose of this project is to create a forecasting technique that can produce precise forecasting results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network) and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study reveals that the ANN is the most successful strategy for forecasting the coal and palm oil commodities in Indonesia.
Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.
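A minimal sketch of the MAPE metric used to rank the forecasting methods; the actual/forecast arrays are hypothetical:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (undefined where actual == 0)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = [120.0, 135.0, 150.0, 160.0]        # e.g. monthly export values
forecast = [118.0, 137.0, 149.0, 163.0]
print(f"MAPE = {mape(actual, forecast):.2f}%")   # below 10% would be classed as "excellent"
```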
7146 Experimental Analysis and Optimization of Process Parameters in Plasma Arc Cutting Machine of EN-45A Material Using Taguchi and ANOVA Method
Authors: Sahil Sharma, Mukesh Gupta, Raj Kumar, N. S Bindra
Abstract:
This paper presents an experimental investigation of the optimization and the effect of cutting parameters on the Material Removal Rate (MRR) in Plasma Arc Cutting (PAC) of EN-45A material using the Taguchi L16 orthogonal array method. Four process variables, viz. cutting speed, current, stand-off distance and plasma gas pressure, have been considered in this experimental work. Analysis of variance (ANOVA) has been performed to obtain the percentage contribution of each process parameter to the response variable, i.e. MRR. Based on the ANOVA, it has been observed that the cutting speed, current and plasma gas pressure are the major influencing factors that affect the response variable. A confirmation test based on the optimal settings shows good agreement with the predicted values.
Keywords: Analysis of variance, material removal rate, plasma arc cutting, Taguchi method.
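A minimal sketch of the percentage-contribution calculation behind such an ANOVA, using the between-level sums of squares for each factor; the response data and factor levels below are hypothetical, not the paper's L16 results:

```python
import pandas as pd

# Hypothetical designed-experiment subset: three factors, MRR as response
df = pd.DataFrame({
    "speed":    [1, 1, 2, 2, 3, 3, 4, 4] * 2,
    "current":  [1, 2, 3, 4] * 4,
    "pressure": [1, 2, 1, 2, 3, 4, 3, 4] * 2,
    "MRR":      [5.1, 5.6, 6.0, 6.4, 6.8, 7.3, 7.9, 8.2,
                 5.3, 5.8, 6.1, 6.6, 7.0, 7.4, 8.0, 8.4],
})

grand_mean = df["MRR"].mean()
ss_total = ((df["MRR"] - grand_mean) ** 2).sum()

for factor in ["speed", "current", "pressure"]:
    level = df.groupby(factor)["MRR"].agg(["mean", "size"])
    ss_factor = (level["size"] * (level["mean"] - grand_mean) ** 2).sum()
    print(f"{factor}: {100 * ss_factor / ss_total:.1f}% contribution to MRR variation")
```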
7145 Computation of Flood and Drought Years over the North-West Himalayan Region Using Indian Meteorological Department Rainfall Data
Authors: Sudip Kumar Kundu, Charu Singh
Abstract:
The climatic conditions over the Indian region are highly dependent on the monsoon; India receives the maximum amount of rainfall during the southwest monsoon. The Indian economy is highly dependent on agriculture, and the occurrence of flood and drought years influences the whole cultivation system as well as the economy of the country, since Indian agriculture is still highly dependent on monsoon rainfall. The present study investigates the flood and drought years for the north-west Himalayan region from 1951 to 2014 using area-averaged Indian Meteorological Department (IMD) rainfall data. The Normalized Index (NI) is utilized to determine whether a particular year is a drought or flood year. Data have been extracted for the north-west Himalayan (NWH) region states, namely Uttarakhand (UK), Himachal Pradesh (HP) and Jammu and Kashmir (J&K), to obtain the rainy-season average rainfall for each year, the climatological mean and the standard deviation. The results are then plotted, showing that some years are drought years, some are flood years and the rest are neutral. The flood and drought years can also be related to the large-scale phenomena El Niño and La Niña.
Keywords: Indian Meteorological Department, Rainfall, Normalized index, Flood, Drought, NWH.
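A minimal sketch of a normalized-index classification of flood and drought years. The rainfall series is synthetic and the ±1 threshold is an assumed illustration; the paper's exact NI thresholds are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1951, 2015)
rain = rng.normal(1200, 180, years.size)       # synthetic rainy-season totals (mm)

ni = (rain - rain.mean()) / rain.std()         # normalized index: standardized anomaly

for year, value in zip(years, ni):
    label = "flood" if value > 1.0 else "drought" if value < -1.0 else "neutral"
    if label != "neutral":
        print(year, f"NI = {value:+.2f}", label)
```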
7144 Solution Approaches for Some Scheduling Problems with Learning Effect and Job Dependent Delivery Times
Authors: M. Duran Toksarı, B. Uçarkuş
Abstract:
In this paper, we propose two algorithms to optimally solve the makespan and total completion time scheduling problems with learning effect and job-dependent delivery times in a single-machine environment. The delivery time is the extra time, between the main processing and delivery to the customer, needed to eliminate adverse effects. We introduce job-dependent delivery times for single-machine scheduling problems with a position-dependent learning effect, for the makespan and total completion time objectives. The results of the two algorithms proposed for solving each problem are compared with LINGO solutions for 50-job, 100-job and 150-job problems. The proposed algorithms find the same results in a shorter time.
Keywords: Delivery times, learning effect, makespan, scheduling, total completion time.
7143 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems
Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas
Abstract:
All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time, and finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although optimal algorithms exist for uni-processor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms in real-time systems do not have deterministic response times, yet deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload; in contrast, our approach balances the task loads of the processors successfully while considering starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation was conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks and considers task priorities, which leads to higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uni-processor systems.
Keywords: Computational complexity, deadline, feasible scheduling, fuzzy scheduling, priority, real-time multiprocessor systems, robustness, system utilization.
7142 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving, and a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. In real-time spatial Big Data, for instance, users expect to receive the results of each query within a short time period regardless of the load on the system; but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and to deal with stream data. It improves query execution performance by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
Keywords: Real-time spatial Big Data, Quality of Service, vertical partitioning, horizontal partitioning, Matching algorithm, Hamming distance, stream query.
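A minimal sketch of the Hamming-distance measure commonly used when grouping attributes by query usage for vertical partitioning; the attribute usage matrix is hypothetical, and the paper's full Matching algorithm and cost model are not reproduced here:

```python
import numpy as np

# Hypothetical attribute usage matrix: rows = queries, columns = attributes,
# 1 if the query accesses the attribute, 0 otherwise.
usage = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
])

def hamming(a, b):
    """Number of queries that use one attribute but not the other."""
    return int(np.sum(a != b))

n_attrs = usage.shape[1]
for i in range(n_attrs):
    for j in range(i + 1, n_attrs):
        print(f"attributes {i} and {j}: Hamming distance {hamming(usage[:, i], usage[:, j])}")
# Pairs with small distances are accessed together and are good candidates
# for placement in the same vertical fragment.
```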
7141 Feasibility Investigation of Near Infrared Spectrometry for Particle Size Estimation of Nano Structures
Authors: A. Bagheri Garmarudi, M. Khanmohammadi, N. Khoddami, K. Shabani
Abstract:
Determination of nanoparticle size is important, since particle size exerts a significant effect on various properties of nanomaterials. Accordingly, proposing non-destructive, accurate and rapid techniques for this purpose is of high interest. There are some conventional techniques for investigating the morphology and grain size of nanoparticles, such as scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray diffractometry (XRD). Vibrational spectroscopy is utilized to characterize different compounds and has been applied for evaluation of the average particle size based on the relationship between particle size and near infrared spectra [1,4], but it has never been applied in quantitative morphological analysis of nanomaterials. So far, the potential application of near-infrared (NIR) spectroscopy, with its ability to analyze powdered materials rapidly and with minimal sample preparation, has been suggested for particle size determination of powdered pharmaceuticals. The relationship between particle size and diffuse reflectance (DR) spectra in the near infrared region has been applied to introduce a method for estimation of particle size. A back-propagation artificial neural network (BP-ANN) was applied as a nonlinear model to estimate average particle size from near infrared diffuse reflectance spectra. Thirty-five different nano-TiO2 samples with different particle sizes were analyzed by DR-FTNIR spectrometry and the obtained data were processed by the BP-ANN.
Keywords: Near infrared, particle size, chemometrics, neural network, nano structure.
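A minimal sketch of the BP-ANN regression step using scikit-learn; the spectra and particle sizes are randomly generated placeholders for the 35 DR-FTNIR measurements, and the network size is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
spectra = rng.random((35, 200))                            # 35 samples x 200 NIR absorbance points
particle_size = 20 + 80 * spectra[:, :10].mean(axis=1)     # placeholder particle sizes (nm)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, particle_size, test_size=0.25, random_state=0)

# Back-propagation network: one hidden layer, trained on the DR spectra
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```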
7140 Multi-Context Recurrent Neural Network for Time Series Applications
Authors: B. Q. Huang, Tarik Rashid, M-T. Kechadi
Abstract:
This paper presents a multi-context recurrent network for time series analysis. While simple recurrent networks (SRNs) are very popular among recurrent neural networks, they still have some shortcomings in terms of learning speed and accuracy that need to be addressed. To solve these problems, we propose a multi-context recurrent network (MCRN) with three different learning algorithms. The performance of this network is evaluated on real-world applications such as handwriting recognition and energy load forecasting, and compared to a well-established SRN. The experimental results show that the MCRN is very efficient and well suited to time series analysis and its applications.
Keywords: Gradient descent method, recurrent neural network, learning algorithms, time series, BP
7139 Exponential Passivity Criteria for BAM Neural Networks with Time-Varying Delays
Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong
Abstract:
In this paper, exponential passivity criteria for BAM neural networks with time-varying delays are studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the exponential stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.
Keywords: BAM neural networks, Exponential passivity, LMI approach, Time-varying delays.
7138 Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series
Authors: Mohammad H. Fattahi
Abstract:
Chaotic analysis has been performed on river flow time series before and after applying wavelet-based de-noising techniques, in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations are located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of the Gaussian kernel algorithm. This step was found to be crucial in preventing the removal of vital information, such as memory, correlation and trend, along with the noise during the de-noising process.
Keywords: Chaotic behavior, wavelet, noise reduction, river flow.
7136 Effects of the Sintering Process on Properties of Triaxial Electrical Porcelain from Ugandan Ceramic Minerals
Authors: Peter W. Olupot, Stefan Jonsson, Joseph K. Byaruhanga
Abstract:
Porcelain specimens were fired at 6 °C/min to 1250 °C (dwell time 0.5-3 h) and cooled at 6 °C/min to room temperature. Additionally, three different slower firing/cooling cycles were tried. The sintering profile and the effects on the modulus of rupture (MOR), crystalline phase content and morphology were investigated using dilatometry, 4-point bending strength, XRD and FEG-SEM, respectively. Industrial-sized specimens prepared using the most promising cycle were tested based on the ANSI standards. Increasing the dwell time from 1 h to 3 h at the peak temperature of 1250 °C had no significant effect on either the quartz and mullite content or the MOR. Reducing the firing/cooling rate below 6 °C/min, for a peak temperature of 1250 °C (dwell time 1 h), does not improve the strength of the porcelain. The industrial-sized specimen exhibited flashover voltages of 20.3 kV (dry) and 9.3 kV (wet), a transverse strength of 12.5 kN and a bulk density of 2.27 g/cm3, which are satisfactory. There was, however, dye penetration during the porosity test.
Keywords: Dwell time, Microstructure, Porcelain, Strength.
7135 Underpricing of IPOs during Hot and Cold Market Periods on the South African Stock Exchange (JSE)
Authors: Brownhilder N. Neneh, A. Van Aardt Smit
Abstract:
Underpricing is one anomaly in the initial public offerings (IPO) literature that has been widely observed across different stock markets, with different trends emerging over different time periods. This study seeks to determine how IPOs on the JSE performed on the first day, first week and first month over the period 1996-2011. Underpricing trends are documented for both hot and cold market periods in terms of four main sectors (cyclical, defensive, growth and interest rate sensitive stocks). Using a sample of 360 companies listed on the JSE, the empirical findings establish that IPOs on the JSE are significantly underpriced, with an average market-adjusted first-day return of 62.9%. It is also established that hot market IPOs on the JSE are more underpriced than cold market IPOs. It is further observed that as the offer price per share increases above the median price for any given period, the level of underpricing decreases substantially. While significant differences exist in the level of underpricing of IPOs across the four sectors in hot and cold market periods, interest rate sensitive stocks showed a different trend from the other sectors and thus require further investigation to uncover this pattern.
Keywords: Underpricing, hot and cold markets, South Africa, JSE.
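A minimal sketch of the market-adjusted first-day return used to measure underpricing; the offer/close prices and index levels are hypothetical:

```python
def market_adjusted_return(offer_price, close_price, index_open, index_close):
    """First-day raw return minus the market index return over the same day."""
    raw = close_price / offer_price - 1.0
    market = index_close / index_open - 1.0
    return raw - market

# Hypothetical IPO: listed at 10.00, closed at 16.50 while the index rose 1%
print(f"{market_adjusted_return(10.00, 16.50, 50000, 50500):.1%}")   # 64.0%
```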
7134 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method for predicting the seismic demand of structures; on the other hand, its main deficiency is the computational time required to obtain results. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes optimization algorithms far more practical. Approximate methods inevitably produce some error compared with exact time history analysis, but modal combination methods, namely the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS), drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have already been calculated). Sampled shear steel structures are selected as case studies. The error of the proposed method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by applying three types of earthquakes (classified according to the time of peak ground acceleration).
Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.