Search results for: relative average residence time array

7826 Scale Effects on the Wake Airflow of a Heavy Truck

Authors: A. Pérard Lecomte, G. Fokoua, A. Mehel, A. Tanière

Abstract:

Automotive experimental measurements in wind tunnels are often conducted at reduced scale. Depending on the study, researchers use different similitude parameters to best reproduce the flow at full scale. In this paper, two parameters are investigated, the Reynolds number and the upstream velocity, for airflow in the typical urban speed range below 15 m/s. Their impact on the flow structures and the aerodynamic drag in the wake of a heavy truck model is explored. To achieve this, Computational Fluid Dynamics (CFD) simulations have been conducted to model the wake airflow of full-scale and reduced-scale heavy trucks (1/4 and 1/28). The Reynolds-Averaged Navier-Stokes (RANS) approach combined with the Reynolds Stress Model (RSM) as the turbulence closure was used. Both the drag coefficients and the upstream velocity profiles (flow topology) were found to be close to one another for the three investigated scales when Reynolds similitude is achieved. Moreover, the differences are small for the simulations based on the same inlet air velocity. Hence, for the relatively low velocity range investigated here, the impact of the scale factor is limited.
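
To make the Reynolds similitude argument concrete, the short Python sketch below computes the inlet velocity a 1/4 and a 1/28 model would need to match the full-scale Reynolds number; the characteristic length, air viscosity and full-scale velocity are illustrative assumptions, not values from the paper.

# Hypothetical illustration of Reynolds similitude for reduced-scale models.
# Characteristic length and air properties are assumptions, not values from the paper.
NU_AIR = 1.5e-5          # kinematic viscosity of air at ~20 C [m^2/s]
L_FULL = 4.0             # assumed characteristic height of a heavy truck [m]
U_FULL = 15.0            # full-scale upstream velocity, upper end of the urban range [m/s]

def reynolds(u, length, nu=NU_AIR):
    """Reynolds number Re = U * L / nu."""
    return u * length / nu

def matched_velocity(scale, u_full=U_FULL):
    """Inlet velocity a 1/scale model needs so that Re_model = Re_full."""
    return u_full * scale   # because L_model = L_full / scale

re_full = reynolds(U_FULL, L_FULL)
for scale in (4, 28):
    u_model = matched_velocity(scale)
    print(f"1/{scale} model: U = {u_model:.0f} m/s gives Re = "
          f"{reynolds(u_model, L_FULL / scale):.3e} (full scale Re = {re_full:.3e})")
# The required model velocities illustrate why exact Reynolds matching is rarely practical
# at small scale, motivating the same-inlet-velocity comparison described in the abstract.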

Keywords: Aerodynamics, CFD, heavy truck, recirculation area, scale effects, similitude parameters.

7825 Analysis of Performance of 3T1D Dynamic Random-Access Memory Cell

Authors: Nawang Chhunid, Gagnesh Kumar

Abstract:

On-chip memories consume a significant portion of the overall die area and power in modern microprocessors. On-chip caches rely on Static Random-Access Memory (SRAM) cells, and the technology scales according to Moore's law. Unfortunately, this scaling affects stability, performance, and leakage power, which will become major problems for future SRAMs in aggressive nanoscale technologies due to increasing device mismatch and variations. The 3T1D Dynamic Random-Access Memory (DRAM) cell is a non-destructive-read DRAM cell with three transistors and a gated diode. In the 3T1D DRAM cell, the gated diode (D1) acts as both a storage device and an amplifier, which leads to fast read access. Owing to its high tolerance to process variation, its high density, and its lower memory cost compared with the 6T SRAM cell, it is widely used in advanced microprocessors for on-chip data and program memory. In the present paper, it is shown that the 3T1D DRAM cell can achieve faster read access than 6T, 4T, and 3T SRAM cells.

Keywords: DRAM cell, read access time, Tanner EDA tool, write access time and retention time, average power dissipation.

7824 Estimation of Bayesian Sample Size for Binomial Proportions Using Areas P-tolerance with Lowest Posterior Loss

Authors: H. Bevrani, N. Najafi

Abstract:

This paper uses p-tolerance with the lowest posterior loss, the quadratic loss function, the average length criterion, the average coverage criterion, and the worst outcome criterion for computing the sample size needed to estimate a proportion under a binomial probability model with a Beta prior distribution. The proposed methodology is examined, and its effectiveness is shown.
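
As an illustration of one of the listed criteria, the sketch below applies an average length criterion under a Beta prior using scipy, returning the smallest n whose prior-predictive expected 95% equal-tailed posterior interval length falls below a target; the prior parameters and the target length are assumptions, not the paper's settings.

# Smallest n such that the prior-predictive expected length of the 95% equal-tailed
# posterior interval for a binomial proportion is below a target value.
# Prior parameters and the target length are illustrative assumptions.
import numpy as np
from scipy.stats import beta, betabinom

def expected_interval_length(n, a, b, level=0.95):
    alpha = 1.0 - level
    x = np.arange(n + 1)
    w = betabinom.pmf(x, n, a, b)                      # prior-predictive weight of each x
    lo = beta.ppf(alpha / 2, a + x, b + n - x)         # posterior is Beta(a + x, b + n - x)
    hi = beta.ppf(1 - alpha / 2, a + x, b + n - x)
    return np.sum(w * (hi - lo))

def bayes_sample_size(a=1.0, b=1.0, target_length=0.1, n_max=2000):
    for n in range(1, n_max + 1):
        if expected_interval_length(n, a, b) <= target_length:
            return n
    return None

print(bayes_sample_size())   # smallest n giving an expected 95% interval length of 0.1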

Keywords: Bayesian inference, Beta-binomial distribution, LPL criteria, quadratic loss function.

7823 Feasibility of the Evolutionary Algorithm using Different Behaviours of the Mutation Rate to Design Simple Digital Logic Circuits

Authors: Konstantin Movsovic, Emanuele Stomeo, Tatiana Kalganova

Abstract:

The evolutionary design of electronic circuits, or evolvable hardware, is a discipline that allows the user to obtain the desired circuit design automatically. The circuit configuration is under the control of evolutionary algorithms. Several researchers have used evolvable hardware to design electrical circuits. Every time a particular algorithm is selected to carry out the evolution, all of its parameters, such as the mutation rate, population size, and selection mechanism, must be tuned in order to achieve the best results during the evolution process. This paper investigates the ability of an evolution strategy to evolve digital logic circuits based on programmable logic array structures when different mutation rates are used. Several mutation rates (fixed and variable) are analyzed and compared with each other to identify the most appropriate choice for the evolution of combinational logic circuits. The experimental results outlined in this paper are important as they could be used by any researcher who needs to use evolutionary algorithms to design digital logic circuits.
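
The role of the mutation rate can be illustrated with a toy (1+lambda) evolution strategy that evolves a bit-string truth table, comparing a fixed rate with a simple decaying schedule; the target function, rates and schedule are illustrative and not the authors' setup.

# Toy (1+lambda) evolution strategy on a bit-string genome encoding a 2-input truth table.
# Target function, rates and schedule are illustrative assumptions, not the paper's settings.
import random

TARGET = [0, 1, 1, 0]          # truth table of XOR over 2 inputs

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(rate_schedule, lam=4, generations=200):
    parent = [random.randint(0, 1) for _ in TARGET]
    for gen in range(generations):
        rate = rate_schedule(gen)
        children = [mutate(parent, rate) for _ in range(lam)]
        parent = max(children + [parent], key=fitness)   # (1+lambda) selection
        if fitness(parent) == len(TARGET):
            return gen
    return generations

fixed = evolve(lambda gen: 0.2)                            # fixed mutation rate
variable = evolve(lambda gen: max(0.05, 0.5 / (gen + 1)))  # decaying mutation rate
print("generations to solution  fixed:", fixed, " variable:", variable)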

Keywords: Evolvable hardware, evolutionary algorithm, digital logic circuit, mutation rate.

7822 Comparative Study of Sedimentation in Hydraulic Structures Using SHARC and SSIIM Software - A Case of the Dez and Hamidieh Intake Structures in Iran

Authors: A.H. Sajedipoor, N. Hedayat, M. Mashal, R. Nazarzadeh

Abstract:

Sediment formation is a complex hydraulic phenomenon that has emerged as a major operational and maintenance consideration in modern hydraulic engineering in general and river engineering in particular. Sediment accumulation along the river course and its eventual storage in the form of islands affects water intake in the canal systems that are fed by storage reservoirs. Without proper management, sediment transport can lead to major operational challenges in the water distribution systems of arid regions such as the Dez and Hamidieh command areas. The paper aims to investigate sedimentation in the Western Canal of the Dez Diversion Weir using the SHARC model and to compare the results with those obtained with the SSIIM model for the two intake structures of the Hamidieh dam in Iran. The objective was to identify the factors that influence the process, check the reliability of the outcomes, and suggest ways to mitigate the implications for the operation and maintenance of the structures. The results estimated the sand and silt bed load concentrations to be 193 ppm and 827 ppm, respectively. This followed a more or less similar pattern in Hamidieh, where sediment formation impeded water intake in the canal system. Given the available data on average annual bed loads and average suspended sediment loads of 165 ppm and 837 ppm in the Dez, there was a statistically significant difference (16%) for the sand grains, whereas no significant difference (1.2%) was found for the silt grain sizes. One explanation for this finding is that along the 6 km river course there are considerable meandering effects, which explains the recent shift in the hydraulic behavior along the stream course under investigation. The downstream sand concentration relative to the present state of the canal showed a steeply descending curve, while sediment trapping showed a steeply ascending curve. These occurred because the diversion weir was not considered in the simulation model. The comparative study showed very close similarity in the results, which indicates that both software packages can be used as accurate and reliable analytical tools for the simulation of sedimentation in hydraulic engineering.
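
The reported percentage differences between the simulated and measured loads can be checked with a simple relative-difference calculation (a sketch only; the statistical test used in the study is not reproduced here).

# Relative difference between SHARC estimates and the reported average measured loads.
simulated = {"sand": 193.0, "silt": 827.0}   # ppm, from the SHARC simulation
measured  = {"sand": 165.0, "silt": 837.0}   # ppm, reported average annual loads

for grain in simulated:
    rel = abs(simulated[grain] - measured[grain]) / measured[grain] * 100
    print(f"{grain}: {rel:.1f}% relative difference")
# sand: ~17% relative to the measured value (close to the reported 16%, depending on the
# reference used); silt: ~1.2%, matching the reported non-significant difference.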

Keywords: SHARC, SSIIM, sedimentation, Dez diversion weir, Hamidieh dam, Intake structures

7821 Evaluating the Understanding of the University Students (Basic Sciences and Engineering) about the Numerical Representation of the Average Rate of Change

Authors: Saeid Haghjoo, Ebrahim Reyhani, Fahimeh Kolahdouz

Abstract:

The present study aimed to evaluate the understanding of students in Tehran universities (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive survey research, the statistical population included undergraduate students (basic sciences and engineering) at the universities of Tehran. The sample comprised 604 students selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by mathematics and mathematics education professors. Using Cronbach's alpha, the reliability coefficient of the task was found to be 0.95, which verified its reliability. The collected data were analyzed with descriptive statistics and inferential statistics (chi-squared and independent t-tests) using SPSS-24 software. According to the SOLO model, at the prestructural, unistructural, and multistructural levels, basic science students showed a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level. However, there was no significant difference in the average understanding of the two groups. The results indicated that students failed to develop a proper understanding of the numerical representation of the average rate of change and showed misconceptions when using physics formulas to solve the problem. In addition, multiple solution methods, along with the dominant ones, were identified during the qualitative analysis. The study recommends that teachers and professors focus on context problems involving approximate calculations and numerical representation, use software, and make explicit the common relations between mathematics and physics in their teaching.

Keywords: Average rate of change, context problems, derivative, numerical representation, SOLO taxonomy.

7820 Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology

Authors: Fatma H. Elfouly, Mohamed I. Mahmoud, Moawad I. M. Dessouky, Salah Deyab

Abstract:

Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high-performance systems at low cost. The discrete wavelet transform has gained the reputation of being a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objective of this paper is to implement the Haar and Daubechies wavelets using FPGA technology. In addition, the Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. From the BER, it is seen that the implementations execute the wavelet transform correctly and satisfy the perfect reconstruction conditions. The design procedure has been explained and the design carried out using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis and implementation on the FPGA target technology have been carried out.
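
A software-level sketch of the check described above: a single-level Haar DWT, its inverse, and a bit error rate between the quantized input and reconstruction, written in numpy. This illustrates the perfect-reconstruction test, not the FPGA design itself, and the test signal is a stand-in for real audio.

# Single-level Haar DWT, inverse transform, and BER between quantized input and output.
import numpy as np

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

def bit_error_rate(x, y, bits=16):
    q = lambda s: np.round((s - s.min()) / (s.max() - s.min()) * (2**bits - 1)).astype(np.uint16)
    xb = np.unpackbits(q(x).view(np.uint8))
    yb = np.unpackbits(q(y).view(np.uint8))
    return np.mean(xb != yb)

t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)          # synthetic stand-in for an audio signal
a, d = haar_dwt(signal)
recon = haar_idwt(a, d)
print("max reconstruction error:", np.max(np.abs(signal - recon)))
print("BER:", bit_error_rate(signal, recon))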

Keywords: Daubechies wavelet, discrete wavelet transform, Haar wavelet, Xilinx FPGA.

7819 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling

Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

We developed a new method based on quasi-molecular modeling to simulate lid-driven cavity flow in three cavity shapes, rectangular, half-circular and beer bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energy of the rectangular, beer bucket and half-circular cavities fluctuated about stable average values of 35.62 x 10^3, 38.04 x 10^3 and 40.80 x 10^3 ergs/particle, respectively. This indicated that the half-circular shape is the most suitable for a shrimp pond, because the water flows best in it compared with the rectangular and beer bucket shapes.
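
The inverse distance weighted interpolation used to recover the instantaneous velocity fields can be sketched generically as follows; the power parameter and the random particle data are illustrative assumptions, not the authors' values.

# Inverse distance weighted interpolation of scattered particle velocities at a query point.
import numpy as np

def idw_velocity(query, positions, velocities, power=2.0, eps=1e-12):
    """Interpolate a velocity vector at `query` from particle positions/velocities."""
    d = np.linalg.norm(positions - query, axis=1)
    if np.any(d < eps):                      # query coincides with a particle
        return velocities[np.argmin(d)]
    w = 1.0 / d**power                       # inverse distance weights
    return (w[:, None] * velocities).sum(axis=0) / w.sum()

# Illustrative data: random particle positions and velocities in a unit cavity.
rng = np.random.default_rng(0)
pos = rng.random((200, 2))
vel = rng.normal(size=(200, 2))
print(idw_velocity(np.array([0.5, 0.5]), pos, vel))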

Keywords: Quasi-molecular modelling, particle modelling, lid driven cavity flow.

7818 Identifying Project Delay Factors in the Australian Construction Industry

Authors: Syed Sohaib Bin Hasib, Hiyam Al-Kilidar

Abstract:

Meeting project deadlines is a major challenge for most construction projects. In this study, the perceptions of contractors, clients, and consultants are compared against a list of factors derived from a review of the extant literature on project delay. Fifty-nine causes of project delay (categorized into 8 groups) were identified from the literature. A survey was devised to obtain insights into, and a ranking of, these factors from clients, consultants and contractors in the Australian construction industry. The findings showed that project delays in the Australian construction industry are mainly the result of skill shortages, interference in execution, and poor coordination and communication between the project stakeholders.

Keywords: Construction, delay factors, time delay, Australian construction industry.

7817 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have quantified soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker artefact displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as the basis for a correction procedure for hip joint kinematics.

Keywords: Hip joint centre, motion capture, soft tissue artefact, ultrasound depth measurement.

7816 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that can estimate a user's position from a single camera, based on a pre-built image database. Previous positioning methods calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they suffer from large error ranges due to signal interference. A camera sensor offers one solution for position estimation, but a single camera has difficulty obtaining relative position data, and a stereo camera has difficulty providing real-time position data because of the large amount of image data involved. First, in this research we build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the position of the user from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range and can determine not only the position of the user but also the direction.
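
A rough sketch of the database-matching idea using OpenCV, with ORB features standing in for SURF (SURF is non-free in stock OpenCV builds); the image file names and stored positions are placeholders, not data from the paper.

# Estimate position by matching a query image against a database of reference images.
# ORB is used instead of SURF; file names and stored positions are placeholders.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return orb.detectAndCompute(img, None)[1]          # keypoint descriptors

database = {                     # image file -> known capture position (x, y), placeholders
    "db_001.jpg": (0.0, 0.0),
    "db_002.jpg": (5.0, 0.0),
}
db_desc = {name: describe(name) for name in database}

def estimate_position(query_path):
    q = describe(query_path)
    scores = {name: len(matcher.match(q, d)) for name, d in db_desc.items() if d is not None}
    best = max(scores, key=scores.get)                 # most similar database image
    return database[best], best

print(estimate_position("query.jpg"))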

Keywords: Positioning, Distance, Camera, Features, SURF (Speeded-Up Robust Features), Database, Estimation.

7815 Existence and Uniqueness of Periodic Solution for a Discrete-time SIR Epidemic Model with Time Delays and Impulses

Authors: Ling Liu, Yuan Ye

Abstract:

In this paper, a discrete-time SIR epidemic model with nonlinear incidence rate, time delays and impulses is investigated. Sufficient conditions for the existence and uniqueness of periodic solutions are obtained by using contraction theorem and inequality techniques. An example is employed to illustrate our results.

Keywords: Discrete-time SIR epidemic model, time delay, nonlinear incidence rate, impulse.

7814 Parametric Analysis in the Electronic Sensor Frequency Adjustment Process

Authors: Rungchat Chompu-Inwai, Akararit Charoenkasemsuk

Abstract:

The use of electronic sensors in the electronics industry has become increasingly popular over the past few years, and they have become highly competitive products. The frequency adjustment process is regarded as one of the most important processes in electronic sensor manufacturing. Because of inaccuracies in the frequency adjustment process, up to 80% waste can be caused by rework; therefore, this study aims to provide a preliminary understanding of the role of the parameters used in the frequency adjustment process and to make suggestions for further improving performance. Four parameters are considered in this study: air pressure, dispensing time, vacuum force, and the distance between the needle tip and the product. A 2^k full factorial design of experiments was used to determine the parameters that significantly affect the accuracy of the frequency adjustment process, where the deviation between the frequency after adjustment and the target frequency should ideally be 0 kHz. The experiment was conducted at two levels, with two replications and five added centre points, giving a total of 37 experiments. The results reveal that air pressure and dispensing time significantly affect the frequency adjustment process. The mathematical relationship between these two parameters was formulated, and the optimal values of air pressure and dispensing time were found to be 0.45 MPa and 458 ms, respectively. The optimal parameters were examined in a confirmation experiment, in which an average deviation of 0.082 kHz was achieved.
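
A small sketch of fitting the first-order model with interaction to a coded two-level factorial (A = air pressure, B = dispensing time); the response values below are invented for illustration and are not the experimental data.

# Fit y = b0 + b1*A + b2*B + b12*A*B to a coded 2^2 factorial design.
# The response values are invented for illustration only.
import numpy as np

A = np.array([-1, 1, -1, 1], dtype=float)     # coded air pressure levels
B = np.array([-1, -1, 1, 1], dtype=float)     # coded dispensing time levels
y = np.array([0.35, 0.15, 0.20, 0.05])        # hypothetical frequency deviation [kHz]

X = np.column_stack([np.ones(4), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coef
print(f"deviation ~ {b0:.3f} + {b1:.3f}*A + {b2:.3f}*B + {b12:.3f}*A*B")
# Effect estimates (twice the coefficients) indicate which factor changes matter most.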

Keywords: Design of Experiment, Electronic Sensor, Frequency Adjustment, Parametric Analysis

7813 Public-Private Partnership Transportation Projects: An Exploratory Study

Authors: Medya Fathi

Abstract:

When public transportation projects were delivered through design-bid-build and, later, design-build, governments faced a serious issue: inadequate funding. With population growth, governments began to develop new arrangements in which the private sector was involved to reduce the financial burden. This arrangement, the Public-Private Partnership (PPP), has its own risks; however, performance outcomes can motivate or discourage its use. Chief among such outcomes are time and budget, which can be affected by the type of project delivery method. Project completion within or ahead of schedule, as well as within or under budget, is among any owner's objectives. Given the growing application of PPP in the US highway industry and the lack of sufficient research, the current study addresses the schedule and cost performance of PPP highway projects and determines which one outperforms the other. To meet this objective, after collecting performance data for PPP projects, schedule growth and cost growth are calculated, and statistical analysis is then conducted to evaluate PPP performance. The results show that PPP highway projects on average have saved time and cost; however, the main benefit is faster delivery rather than under-budget completion. This study provides insights into the performance of PPP highways and can assist practitioners in applying PPP to transportation projects with the opportunity to save time and cost.
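
The two performance measures reduce to simple growth ratios, sketched below with hypothetical as-planned and as-built figures (not data from the study).

# Schedule growth and cost growth for a project, using hypothetical figures.
planned_duration, actual_duration = 36.0, 34.0     # months (hypothetical)
planned_cost, actual_cost = 250.0, 248.0           # $ million (hypothetical)

schedule_growth = (actual_duration - planned_duration) / planned_duration * 100
cost_growth = (actual_cost - planned_cost) / planned_cost * 100
print(f"schedule growth: {schedule_growth:+.1f}%  cost growth: {cost_growth:+.1f}%")
# Negative values mean the project was delivered faster / cheaper than planned.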

Keywords: Cost, delivery method, highway, public-private partnership, schedule, transportation.

7812 Evaluation of Internal Ballistics of Multi-Perforated Grain in a Closed Vessel

Authors: B. A. Parate, C. P. Shetty

Abstract:

This research article describes a methodology for evaluating the internal ballistics of a multi-perforated propellant grain in a closed vessel (CV). Propellant testing in a CV is conducted to characterize propellants and to ascertain the various internal ballistic parameters. The assessment of the internal ballistics plays a crucial role in judging the suitability of a propellant for a given application. Propellants used in the defense sector have to satisfy user requirements as per the laid-down specifications. The outputs from the CV evaluation of the multi-perforated grain are a maximum pressure of 226.75 MPa, a rate of pressure rise (dP/dt) of 36.99 MPa/ms, an average vivacity of 9.990 x 10^-4 /(MPa ms), a force constant of 933.9 J/g, a rise time of 9.85 ms, and a pressure index of 0.878, with a burning coefficient of 0.2919. This paper addresses the internal ballistics of a multi-perforated grain, propellant selection, the associated calculations, and the evaluation of the various parameters in CV testing. For the current analysis, the propellant is evaluated in a 100 cc CV with a propellant mass of 20 g, giving a loading density of 0.2 g/cc. The method for determining the internal ballistic properties consists of burning the propellant mass at constant volume.
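
Two of these outputs can be extracted from a recorded pressure-time trace as sketched below, using one common definition of dynamic vivacity, (dP/dt)/(P x Pmax); the logistic-shaped trace is synthetic and only stands in for real closed-vessel gauge data.

# Extract dP/dt and dynamic vivacity from a pressure-time trace of a closed-vessel firing.
# The logistic-shaped trace below is synthetic; real data would come from the CV gauge.
import numpy as np

t = np.linspace(0.0, 20.0, 2001)                    # time [ms]
p_max = 226.75                                      # maximum pressure [MPa]
p = p_max / (1.0 + np.exp(-(t - 8.0) / 1.2))        # synthetic pressure curve [MPa]

dpdt = np.gradient(p, t)                            # dP/dt [MPa/ms]
vivacity = dpdt / (p * p_max)                       # assumed definition: (dP/dt)/(P*Pmax)

mask = (p > 0.1 * p_max) & (p < 0.9 * p_max)        # evaluate away from the flat ends
print("max dP/dt [MPa/ms]:", dpdt.max())
print("average vivacity in the 10-90% pressure window [1/(MPa*ms)]:", vivacity[mask].mean())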

Keywords: Burning rate, closed vessel, force constant, internal ballistic, loading density, maximum pressure, multi-propellant grain, propellant, rise time, vivacity.

7811 Some Physical Properties of Musk Lime (Citrus Microcarpa)

Authors: M.H.R.O. Abdullah, P.E. Ch'ng, N.A. Yunus

Abstract:

Some physical properties of musk lime (Citrus microcarpa) were determined in this study. The average moisture content (wet basis) of the fruit was found to be 85.10 (±0.72)%. The mean length, width and thickness of the fruit were 26.36 (±0.97), 26.40 (±1.04) and 25.26 (±0.94) mm, respectively. The average values of geometric mean diameter, sphericity, aspect ratio, mass, surface area, volume, true density, bulk density and porosity were 26.00 (±0.82) mm, 98.67 (±2.04)%, 100.23 (±3.28)%, 10.007 (±0.878) g, 2125.07 (±133.93) mm^2, 8800.00 (±731.82) mm^3, 1002.87 (±39.16) kg m^-3, 501.70 (±22.58) kg m^-3 and 49.89 (±3.15)%, respectively. The coefficient of static friction on four types of structural surface was found to vary from 0.238 (±0.025) for glass to 0.247 (±0.024) for steel.
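
The derived quantities follow from the usual relations Dg = (LWT)^(1/3), sphericity = Dg/L, aspect ratio = W/L, surface area = pi*Dg^2 and porosity = 1 - (bulk density / true density); the sketch below recomputes them from the reported means.

# Recompute the derived physical properties from the reported mean values.
import math

L, W, T = 26.36, 26.40, 25.26        # length, width, thickness [mm]
rho_true, rho_bulk = 1002.87, 501.70 # true and bulk density [kg/m^3]

Dg = (L * W * T) ** (1.0 / 3.0)      # geometric mean diameter [mm]
sphericity = Dg / L * 100            # [%]
aspect_ratio = W / L * 100           # [%]
surface_area = math.pi * Dg ** 2     # [mm^2]
porosity = (1 - rho_bulk / rho_true) * 100   # [%]

print(f"Dg = {Dg:.2f} mm, sphericity = {sphericity:.2f} %, aspect ratio = {aspect_ratio:.2f} %")
print(f"surface area = {surface_area:.0f} mm^2, porosity = {porosity:.2f} %")
# The recomputed values agree closely with the means reported in the abstract.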

Keywords: Musk lime, Citrus microcarpa, physical properties.

7810 Performance Evaluation of Complex Electrical Bio-impedance from V/I Four-electrode Measurements

Authors: Towfeeq Fairooz, Salim Istyaq

Abstract:

The passive electrical properties of a tissue depend on its intrinsic constituents and structure; therefore, by measuring the complex electrical impedance of the tissue it may be possible to obtain indicators of the tissue state or physiological activity [1]. Complete bio-impedance information relative to the physiology and pathology of a human body and the functional states of body tissues or organs can be extracted using a four-electrode measurement setup. This work presents an estimation framework based on the four-electrode measurement technique. First, the complex impedance is estimated by three different estimation techniques: Fourier, sine correlation, and digital de-convolution. Then the estimation errors for the magnitude, phase, reactance and resistance are calculated and analyzed for different levels of disturbance in the observations. The absolute values of the relative errors are plotted and the performance of each technique is compared graphically.
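
One of the three estimators, sine correlation, can be sketched as correlating the voltage and current records with a complex reference at the excitation frequency and taking the ratio of the resulting phasors; the signal parameters and noise levels below are illustrative, not the paper's test conditions.

# Sine-correlation (lock-in style) estimation of complex impedance Z = V / I from
# noisy four-electrode voltage and current records. Signal parameters are illustrative.
import numpy as np

fs, f0, n = 100_000.0, 1_000.0, 4000       # sampling rate, excitation frequency, samples
t = np.arange(n) / fs
z_true = 50.0 * np.exp(1j * np.deg2rad(-20.0))     # "unknown" impedance for the demo

i_sig = 0.01 * np.cos(2 * np.pi * f0 * t)                                    # injected current [A]
v_sig = 0.01 * abs(z_true) * np.cos(2 * np.pi * f0 * t + np.angle(z_true))   # measured voltage [V]
rng = np.random.default_rng(1)
v_sig = v_sig + 1e-4 * rng.standard_normal(n)       # additive white Gaussian noise
i_sig = i_sig + 1e-6 * rng.standard_normal(n)

def phasor(x):
    """Correlate x with a complex reference at f0 to get its phasor."""
    ref = np.exp(-1j * 2 * np.pi * f0 * t)
    return 2.0 / n * np.sum(x * ref)

z_est = phasor(v_sig) / phasor(i_sig)
print("true Z :", abs(z_true), np.rad2deg(np.angle(z_true)))
print("est. Z :", abs(z_est), np.rad2deg(np.angle(z_est)))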

Keywords: Electrical Impedance, Fast Fourier Transform, Additive White Gaussian Noise, Total Least Square, Digital De-Convolution, Sine-Correlation.

7809 Optimizing the Project Delivery Time with Time Cost Trade-offs

Authors: Wei Lo, Ming-En Kuo

Abstract:

While minimizing the overall project cost is always one of the objectives of construction managers, obtaining the maximum economic return is definitely one of the ultimate goals of project investors. As there is a trade-off relationship between project time and cost, and the project delivery time directly affects the timing of the economic recovery of an investment project, providing a method that can quantify the relationship between project delivery time and cost, and identify the optimal delivery time that maximizes the economic return, has always been a focus of researchers and industrial practitioners. Using genetic algorithms, this study introduces an optimization model that can quantify the relationship between project delivery time and cost and, furthermore, determine the optimal delivery time that maximizes the economic return of the project. The results provide an objective quantification for accurately evaluating project delivery time and cost, and facilitate the analysis of the economic return of a project.

Keywords: Time-Cost Trade-Off, Genetic Algorithms, Resource Integration, Economic return.

7808 Time Series Forecasting Using a Hybrid RBF Neural Network and AR Model Based On Binomial Smoothing

Authors: Fengxia Zheng, Shouming Zhong

Abstract:

The ARIMA-ANN hybrid, which combines both the autoregressive integrated moving average (ARIMA) model and the artificial neural network (ANN) model, is a valuable tool for modeling and forecasting nonlinear time series, yet the over-fitting problem is more likely to occur in neural network models. This paper provides a hybrid methodology that combines a radial basis function (RBF) neural network and an autoregressive (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing; the method is called BS-RBFAR. It is examined using the Canadian lynx data. Empirical results indicate that the over-fitting problem can be eased using an RBF neural network based on binomial smoothing (called BS-RBF), and that the hybrid model BS-RBFAR can be an effective way to improve the forecasting accuracy achieved by BS-RBF used separately.
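
The binomial smoothing step, read here as convolution with normalized binomial coefficients, can be sketched on its own; the kernel order and the demo series are assumptions, not the paper's choices.

# Binomial smoothing: convolve a series with normalized binomial coefficients.
# The kernel order and the demo series are illustrative assumptions.
import numpy as np
from math import comb

def binomial_kernel(order):
    k = np.array([comb(order, i) for i in range(order + 1)], dtype=float)
    return k / k.sum()

def binomial_smooth(x, order=4):
    return np.convolve(x, binomial_kernel(order), mode="same")

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
smoothed = binomial_smooth(series)       # the smoothed series would then feed the RBF/AR models
print(series[:5].round(3), smoothed[:5].round(3), sep="\n")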

Keywords: Binomial smoothing (BS), hybrid, Canadian Lynx data, forecasting accuracy.

7807 Advantages and Disadvantages of Business Continuity Management

Authors: K. Venclova, H. Urbancova, H. Vostra Vydrova

Abstract:

In the current global economy, the application of Business Continuity Management is a prerequisite for sustainable competitive advantage in an organization. Business Continuity Management is a managerial discipline that identifies the potential impact of losses on an organization. The aim of this paper is to identify and critically evaluate the relative advantages and disadvantages of deploying Business Continuity Management in an organization on the basis of seven criteria. The strongest advantage of Business Continuity Management is its capacity to identify a crisis situation, help the organization respond flexibly, and keep critical knowledge within the organization. By contrast, the main disadvantage is that establishing Business Continuity Management in an organization is time-consuming, and its implementation as an integral part of the organizational culture presents significant difficulties.

Keywords: Business continuity management, criteria, advantages, disadvantages, organisations, survey.

7806 Stability Analysis of Mutualism Population Model with Time Delay

Authors: Rusliza Ahmad, Harun Budin

Abstract:

This paper studies the effect of time delay on the stability of a mutualism population model with limited resources for both species. First, the stability of the model without time delay is analyzed. The model is then extended by introducing a time delay in the mechanism of the growth rate of the population. We analyze the effect of the time delay on the stability of the stable equilibrium point. The results show that the time delay can induce instability of the stable equilibrium point, bifurcation, and stability switches.

Keywords: Bifurcation, Delay margin, Mutualism population model, Time delay

7805 Genetic Algorithms Multi-Objective Model for Project Scheduling

Authors: Elsheikh Asser

Abstract:

Time and cost are the main goals of construction project management. The first schedule developed may not be a suitable schedule for beginning or completing the project so as to achieve the target completion time at a minimum total cost. In general, there are trade-offs between time and cost (TCT) in completing the activities of a project. This research presents a genetic algorithms (GAs) multi-objective model for project scheduling that considers different scenarios such as least cost, least time, and target time.

Keywords: Genetic algorithms, Time-cost trade-off.

7804 VaR Forecasting in Times of Increased Volatility

Authors: Ivo Jánský, Milan Rippel

Abstract:

The paper evaluates several hundred one-day-ahead VaR forecasting models over the period from 2004 to 2009 on data from six world stock indices - DJI, GSPC, IXIC, FTSE, GDAXI and N225. The models describe the mean using ARMA processes with up to two lags and the variance with one of the GARCH, EGARCH or TARCH processes with up to two lags. The models are estimated on data from the in-sample period, and their forecasting accuracy is evaluated on the out-of-sample data, which are more volatile. The main aim of the paper is to test whether a model estimated on data with lower volatility can be used in periods with higher volatility. The evaluation is based on the conditional coverage test and is performed on each stock index separately. The primary result of the paper is that the volatility is best modelled using a GARCH process and that an ARMA pattern cannot be found in the analyzed time series.

Keywords: VaR, risk analysis, conditional volatility, GARCH, EGARCH, TARCH, moving average process, autoregressive process.

7803 Simulating a Single-Server Queue using the Q-Simulator

Authors: Irene K. Amponsah, Bennony K. Gordor, Francis Dogbey

Abstract:

This paper introduces a technique for simulating a single-server exponential queuing system. The technique, called the Q-Simulator, is a computer program which can simulate the effect of traffic intensity on all system average quantities given the arrival and/or service rates. The Q-Simulator has three phases, namely the formula-based method, the uncontrolled simulation, and the controlled simulation. The Q-Simulator generates graphs (crystal solutions) for all results of the simulation or calculation and can be used to estimate desirable average quantities such as waiting times and queue lengths.
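
The kind of computation the Q-Simulator automates can be sketched as a plain single-server exponential (M/M/1) simulation checked against the closed-form average waiting time; this sketch is independent of the Q-Simulator program itself, and the rates are illustrative.

# Simulate an M/M/1 queue and compare the average waiting time in queue with the formula
# Wq = rho / (mu - lambda). Arrival and service rates are illustrative.
import random

def simulate_mm1(lam=0.8, mu=1.0, n_customers=200_000, seed=3):
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)                 # next arrival time
        start = max(arrival, server_free_at)            # service starts when the server is free
        total_wait += start - arrival                   # time spent waiting in the queue
        server_free_at = start + rng.expovariate(mu)    # service completion time
    return total_wait / n_customers

lam, mu = 0.8, 1.0
rho = lam / mu
print("simulated  Wq:", round(simulate_mm1(lam, mu), 3))
print("analytical Wq:", round(rho / (mu - lam), 3))     # = 4.0 for these rates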

Keywords: Automation system-simulator, simulation, single-server exponential system.

7802 Sensitivity Analysis for Determining Priority of Factors Controlling SOC Content in Semiarid Condition of West of Iran

Authors: Y. Parvizi, M. Gorji, M.H. Mahdian, M. Omid

Abstract:

Soil organic carbon (SOC) plays a key role in soil fertility, hydrology and contaminant control, and acts as a sink or source of terrestrial carbon that can affect the concentration of atmospheric CO2. SOC supports the sustainability and quality of ecosystems, especially in semi-arid regions. This study was conducted to determine the relative importance of 13 different exploratory climatic, soil and geometric factors on SOC contents in one of the semiarid watershed zones of Iran. Two methods, canonical discriminant analysis (CDA) and feed-forward back-propagation neural networks, were used to predict SOC. Stepwise regression and sensitivity analysis were performed to identify the relative importance of the exploratory variables. The results of the sensitivity analysis showed that a 7-2-1 neural network and a CDA model with 5 inputs have the highest predictive ability, explaining 70% and 65% of the SOC variability, respectively. Since the neural network models outperformed the CDA model, they should be preferred for estimating SOC.

Keywords: Soil organic carbon, modeling, neural networks, CDA.

7801 Improving University Operations with Data Mining: Predicting Student Performance

Authors: Mladen Dragičević, Mirjana Pejić Bach, Vanja Šimičević

Abstract:

The purpose of this paper is to develop models that would enable predicting student success. These models could improve the allocation of students among colleges and optimize the newly introduced model of government subsidies for higher education. To collect data, an anonymous survey was carried out among the final-year undergraduate student population using a random sampling method. Decision trees were created, of which the two that were most successful in predicting student success were chosen, based on two criteria: Grade Point Average (GPA) and the time a student needs to finish the undergraduate program (time-to-degree). Decision trees have been shown to be a good method for classifying student success, and they could be improved further by increasing the survey sample and developing specialized decision trees for each type of college. These types of methods have great potential for use in decision support systems.
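
A compact sketch of the classification step with scikit-learn; the feature names and the synthetic survey records below are placeholders, not the study's data.

# Train a decision tree to classify student success (e.g. "high GPA" vs "low GPA")
# from survey-style features. Feature names and the synthetic data are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 500
hours_study = rng.normal(15, 5, n)            # hypothetical weekly study hours
entrance_score = rng.normal(70, 10, n)        # hypothetical entrance exam score
employed = rng.integers(0, 2, n)              # works alongside studying (0/1)
X = np.column_stack([hours_study, entrance_score, employed])
y = (0.05 * hours_study + 0.03 * entrance_score - 0.5 * employed
     + rng.normal(0, 0.5, n) > 2.8).astype(int)   # 1 = "successful", synthetic labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(tree.score(X_te, y_te), 3))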

Keywords: Data mining, knowledge discovery in databases, prediction models, student success.

7800 The Effect of Smartphones on Human Health Relative to User’s Addiction: A Study on a Wide Range of Audiences in Jordan

Authors: T. Qasim, M. Obeidat, S. Al-Sharairi

Abstract:

The objective of this study is to investigate the effects of the excessive use of smartphones. Smartphones have enormous effects on the human body, in that some musculoskeletal disorders (MSDs) and health problems might evolve. These days, smartphones are widely used among all age groups of society; thus, the focus on smartphone effects on human behavior and health, especially for young and elderly people, becomes a crucial issue. This study was conducted in Jordan on smartphone users of different genders and ages, by conducting a survey to collect data related to the symptoms and MSDs that result from the excessive use of smartphones. A total of 357 responses were used in the analysis. The main related symptoms were numbness, finger pain, and arm pain, all linked to age and gender for comparative purposes. A statistical analysis was performed to find the effects of extensive smartphone use over long periods of time on the human body. The results show that the significant variables were vision problems and the time spent using the smartphone, which is associated with those vision problems. Other variables, including the age of the user and ear problems due to the use of headsets, were found to be borderline significant.

Keywords: Smartphone, age group, musculoskeletal disorders (MSDs), health problems.

7799 VLSI Design of 2-D Discrete Wavelet Transform for Area-Efficient and High-Speed Image Computing

Authors: Mountassar Maamoun, Mehdi Neggazi, Abdelhamid Meraghni, Daoud Berkani

Abstract:

This paper presents a VLSI design approach for high-speed, real-time 2-D Discrete Wavelet Transform computation. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity and reduces the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle. As a result, a very high speed is attained. The system is verified, using JPEG2000 coefficient filters, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 M samples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256x256 image size. In this way, the developed design requires reduced memory and provides very high-speed processing as well as high PSNR quality.

Keywords: Discrete Wavelet Transform (DWT), Fast Convolution, FPGA, VLSI.

7798 The Dialectical Unity of Capital and Non-Capital: The Role of Overpopulation in Popular Rebellion Today

Authors: Wim Dierckxsens, Andrés Piqueras

Abstract:

Throughout its history, Capital has established a decisive form of discrimination that has effectively strengthened its power against Labor: discrimination between an endogenous labor force (integrated, with certain guarantees and rights in the capitalist nexus) and an exogenous labor force (yet to be incorporated or incorporated as ‘heterochthonous’, without such guarantees and rights). We refer to the historical incorporation of the exogenous population from the non-capitalist to the capitalist nexus (with the consequent replaceability of the endogenous labor force) as absolute mobility.

The more possibilities Capital has of accessing a population in the non-capitalist nexus and of being able to incorporate it through absolute mobility into the capitalist nexus, the greater its unilaterality or class domination. In contrast, when these possibilities run dry, Capital is more inclined towards reformism or negotiation.

However, this absolute mobility has historically been combined with relative mobility of the labor force, which includes various processes of which labor force migration is a fundamental component.

This paper holds that both types of mobility are at the core of class struggles.

Keywords: Absolute mobility, capital-labor antagonism, relative mobility, substitutability.

7797 Low Resolution Single Neural Network Based Face Recognition

Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum

Abstract:

This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts the original images into blurred images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is applied to each equalized image to obtain a resized image. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize the familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results provide accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects, respectively, with a time delay of 0.0934 s per image.
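
The preprocessing chain (average filtering, histogram equalization, bicubic resizing) maps directly onto OpenCV calls; the kernel size and target resolution below are assumptions, as the paper's exact values are not given here.

# Preprocessing for low-resolution face recognition: average (box) blur, histogram
# equalization, then bicubic resizing. Kernel size and target size are assumptions.
import cv2
import numpy as np

def preprocess(path, target_size=(32, 32), blur_ksize=3):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.blur(img, (blur_ksize, blur_ksize))          # average filter
    equalized = cv2.equalizeHist(blurred)                      # lighting normalization
    resized = cv2.resize(equalized, target_size, interpolation=cv2.INTER_CUBIC)
    return resized.astype(np.float32).ravel() / 255.0          # flattened network input

# vec = preprocess("face.jpg")   # placeholder path; the vector feeds the neural network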

Keywords: Average filtering, Bicubic Interpolation, Neurons, vectorization.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1738