Search results for: statistical feature.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2064

264 Thermal Cracking Approach Investigation to Improve Biodiesel Properties

Authors: Roghaieh Parvizsedghy, Seyyed Mojtaba Sadrameli

Abstract:

Biodiesel as an alternative diesel fuel is steadily gaining attention and significance. However, some of its properties are drawbacks that require it to be blended with petrol-based diesel and/or additives to improve the fuel characteristics. This study analyses thermal cracking as an alternative technology to improve biodiesel characteristics: FAME-based biodiesel produced by transesterification of castor oil is fed into a continuous thermal cracking reactor at temperatures of 450-500°C and feed flow rates of 20-40 g/hr. Experiments designed by response surface methodology and subsequent statistical analysis show that temperature and feed flow rate significantly affect the product yields. Response surfaces were used to study the impact of temperature and flow rate on the product properties. After each experiment, the produced crude bio-oil was distilled and the diesel cut was separated. As shorter-chain molecules are produced through thermal cracking, the distillation curve of the diesel cut fitted the petrol-based diesel curve more closely than that of the original biodiesel. Moreover, the properties of the produced diesel cut fall within the ranges defined by the relevant petrol-based diesel standard. Cold flow properties and the high heating value, the main drawbacks of the biodiesel, are improved by this technology. Thermal cracking decreases kinematic viscosity, flash point and cetane number.

Keywords: Biodiesel, castor oil, fuel properties, thermal cracking.

263 A Model for Estimation of Efforts in Development of Software Systems

Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht

Abstract:

Software effort estimation is the process of predicting the most realistic effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA and a Neuro-Fuzzy (NF) inference system are evaluated for estimating software project effort. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and genetic-algorithm-based models reported in the literature. The NF model shows the best results, with the lowest MMRE and RMSE values, compared with the Fuzzy-GA-based hybrid inference system and the other existing models used for effort prediction.
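As a hedged illustration of the two evaluation criteria named above (not the authors' code), the sketch below computes MMRE and RMSE from actual and estimated effort values; the sample arrays are hypothetical, not the NASA project data.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def rmse(actual, predicted):
    """Root Mean Square Error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical effort values (person-months) for illustration only
actual    = [10.0, 24.0, 46.0, 83.0]
predicted = [12.5, 20.0, 50.0, 75.0]
print(f"MMRE = {mmre(actual, predicted):.3f}, RMSE = {rmse(actual, predicted):.3f}")
```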

Keywords: Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.

262 Influence of Bilateral and Unilateral Flatfoot on Pelvic Alignment

Authors: Mohamed Taher Eldesoky, Enas Elsayed Abutaleb

Abstract:

Background: A change in foot posture can possibly generate changes in pelvic alignment, yet there is still a lack of evidence about the effects of bilateral and unilateral flatfoot on pelvic alignment. The purpose of this study was to investigate the effect of flatfoot on pelvic posture in the sagittal and frontal planes. Materials and Methods: 56 subjects, aged 18–40 years, were assigned into three groups: 20 healthy subjects, 19 subjects with bilateral flexible second-degree flatfoot, and 17 subjects with unilateral flexible second-degree flatfoot. 3D assessment of the pelvis using the Formetric II device was used to evaluate pelvic alignment in the frontal and sagittal planes by measuring pelvic inclination and pelvic tilt angles. Results: ANOVA with LSD post hoc tests were used for statistical analysis. Both unilateral and bilateral second-degree flatfoot produced significant pelvic anteversion in comparison to the healthy subjects (P<0.05), with the bilateral flatfoot subjects showing more anteversion than the unilateral subjects. Unilateral flatfoot caused a significant (P<0.05) lateral pelvic tilt toward the affected side in comparison to the healthy and bilateral flatfoot subjects. Conclusion: Both bilateral and unilateral second-degree flatfoot change pelvic alignment: both increase pelvic anteversion, while unilateral flatfoot additionally causes lateral pelvic tilt toward the affected side. Thus, foot posture should be considered when assessing patients with pelvic misalignment and disorders.

Keywords: Bilateral flatfoot, foot posture, pelvic alignment, unilateral flatfoot.

261 Durability Study of Pultruded CFRP Plates under Sustained Bending in Distilled Water and Seawater Immersions: Effects on the Visco-Elastic Properties

Authors: Innocent Kafodya, Guijun Xian

Abstract:

This paper presents the effects of distilled water, seawater and sustained bending strains of 30% and 50% of ultimate strain, at room temperature, on the durability of unidirectional pultruded carbon fiber reinforced polymer (CFRP) plates. A dynamic mechanical analyzer (DMA) was used to investigate the synergistic effects of the immersions and bending strains on the viscoelastic properties of the CFRP, such as storage modulus, tan delta and glass transition temperature (Tg). The study reveals that the storage modulus and glass transition temperature increase while the tan delta peak decreases in the initial stage of both immersions due to the progression of curing. The storage modulus and Tg subsequently decrease and tan delta increases due to matrix plasticization. Blister-induced damage in the unstrained seawater samples enhances water uptake and causes more serious degradation of Tg and storage modulus than in the distilled water immersion. Increasing the sustained bending strain decreases Tg and storage modulus in the long run for both immersions due to resin matrix cracking and debonding. The combined effects of immersions and strains are not clearly reflected because of the statistical effects of the DMA sample sizes and the competing processes of molecular reorientation and post-curing.

Keywords: Pultruded CFRP plate, bending strain, glass transition temperature, storage modulus, tan delta.

260 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they reduce matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits geometrical symmetry. After finding circular objects, the proposed method uses textons, the texture on the surface of the coins: textons are unique to each coin and refer to the fundamental microstructure in generic natural images. The method has been tested on several real-world images including coin and non-coin images, and its performance is also evaluated in terms of noise-withstanding capability.
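A rough, hedged sketch of the covariance-eigenvalue idea mentioned above (not the authors' implementation): for a connected set of edge pixels, the two eigenvalues of the coordinate covariance matrix are nearly equal for a circular boundary and strongly unequal for an elongated one. The synthetic edge points below are illustrative only.

```python
import numpy as np

def covariance_eigenvalues(edge_points):
    """Eigenvalues (small, large) of the 2x2 covariance matrix of edge pixel coordinates."""
    pts = np.asarray(edge_points, float)
    cov = np.cov(pts.T)                      # 2x2 covariance of (x, y)
    small, large = np.sort(np.linalg.eigvalsh(cov))
    return small, large

# Synthetic circular boundary: eigenvalues should be nearly equal
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)]
s, l = covariance_eigenvalues(circle)
print(f"small/large eigenvalue ratio = {s / l:.3f}  (close to 1.0 for a circle)")
```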

Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan Algorithm, Texton.

259 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi

Abstract:

This paper primarily aims to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface comprises three major steps. The first step uses statistical analysis of precipitation characteristics to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method; in the interface, however, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighted) and Spline methods as well as the traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs of gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with observed time series; the results from the integrated GIS interface are closely similar and fit the observations well. Also, the relationship between the topographic/watershed characteristics and the stream flow time series is highly correlated.
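A hedged sketch of the Inverse Distance Weighted (IDW) interpolation option mentioned above, not the GIS interface itself; the station coordinates and flow values are hypothetical.

```python
import numpy as np

def idw(stations_xy, values, target_xy, power=2.0):
    """Inverse Distance Weighted estimate at target_xy from gauged-station values."""
    stations_xy = np.asarray(stations_xy, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(stations_xy - np.asarray(target_xy, float), axis=1)
    if np.any(d == 0):                       # target coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical gauged stations (x, y) and mean daily flows (m^3/s)
stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
flows = [12.0, 15.0, 9.0, 14.0]
print(f"IDW estimate at (4, 6): {idw(stations, flows, (4, 6)):.2f} m^3/s")
```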

Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.

258 Optimum Surface Roughness Prediction in Face Milling of High Silicon Stainless Steel

Authors: M. Farahnakian, M.R. Razfar, S. Elhami-Joosheghan

Abstract:

This paper presents an approach for determining the optimal cutting parameters (spindle speed, feed rate, depth of cut and engagement) leading to minimum surface roughness in face milling of high-silicon stainless steel by coupling a neural network (NN) with the Electromagnetism-like Algorithm (EM). In this regard, the advantages of statistical experimental design, experimental measurements, artificial neural networks and Electromagnetism-like optimization are exploited in an integrated manner. To this end, numerous experiments were conducted on this stainless steel to obtain surface roughness values. A predictive model for surface roughness was created using a back-propagation neural network, and the optimization problem was then solved using EM optimization. Additional experiments were performed to validate the optimum surface roughness value predicted by the EM algorithm. Good agreement is observed between the values predicted by EM coupled with the feed-forward neural network and the experimental measurements. The obtained results show that the EM algorithm coupled with a back-propagation neural network is an efficient and accurate method for approaching the global minimum of surface roughness in face milling.

Keywords: Cutting parameters, face milling, surface roughness, artificial neural network, Electromagnetism-like algorithm.

257 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large work pieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. This is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have achieved higher resource efficiency through radical mass reduction based on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. This paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. It starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art of pose-dependent dynamic machine tool models and the modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper closes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: Dynamic behavior, lightweight, machine tool, pose-dependency.

256 Investigation of the Effect of Teaching a Thinking and Research Lesson by Cooperative and Traditional Methods on the Creativity of Sixth Grade Students

Authors: Faroogh Khakzad, Marzieh Dehghani, Elahe Hejazi

Abstract:

The present study investigates the effect of teaching a Thinking and Research lesson by cooperative and traditional methods on the creativity of sixth-grade students in Piranshahr province. The statistical population includes all the sixth-grade students of Piranshahr province. The sample was selected by available sampling from among male elementary schools of Piranshahr and randomly assigned into two groups, one taught by the cooperative method and one by the traditional method. The design of the study is quasi-experimental with a control group. To assess students’ creativity, Abedi’s creativity questionnaire was used; based on Cronbach’s alpha coefficient, the reliability of the flow factor was 0.74, innovation 0.61, flexibility 0.63, and expansion 0.68. To analyze the data, t-tests and univariate and multivariate analysis of covariance were used to evaluate the differences of means between pretest and posttest scores. The findings show that the cooperative teaching method does not significantly increase overall creativity (p > 0.05). The cooperative teaching method was, however, found to have a significant effect on the flow factor (p < 0.05), while no significant effect was observed on the innovation and expansion factors (p > 0.05).

Keywords: Cooperative teaching method, traditional teaching method, creativity, flow, innovation, flexibility, expansion, thinking and research lesson.

255 Analysis of Precipitation and Temperature Trends in Sefid-Roud Basin

Authors: Amir Gandomkar, Tahereh Soltani Gord faramarzi, Parisa Safaripour Chafi, Abdol-Reza Amani

Abstract:

Temperature, humidity and precipitation are parameters that shape the climate of an area, and they must be characterized in order to determine that climate. Climate change is of primary importance in climatology and in recent years has been of great concern to researchers and even to politicians and organizations, because it can play an important role in social, political and economic activities. Even though the real cause of climate changes, or of their stability, is not yet fully understood, their importance has prompted countries to investigate them at different levels, especially regional, national and continental. This issue has been investigated less in our country; however, in recent years there have been some studies and conferences on climate change, and the present study is in line with such work: it analyzes the trends of climate change (temperature and precipitation) in the Sefid-roud (river) basin. The mean annual precipitation, mean temperature, and maximum and minimum temperatures recorded at 36 synoptic and climatology stations of the Sefid-roud basin over a statistical period of 49 years (1956-2005) were analyzed with the Mann-Kendall test. The results show that the climate changes are short term and exhibit a trend: the mean temperature shows a significantly rising trend, while precipitation shows a significantly falling trend.
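A minimal, hedged sketch of the Mann-Kendall trend test statistic used in the abstract (ignoring the tie correction for brevity); the annual series below is synthetic, not the Sefid-roud station data.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Mann-Kendall S statistic and normal-approximation Z (no tie correction)."""
    x = np.asarray(series, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))           # two-sided p-value
    return s, z, p

# Synthetic 49-year annual mean temperature series with a rising trend
rng = np.random.default_rng(0)
temps = 15 + 0.03 * np.arange(49) + rng.normal(0, 0.4, 49)
print(mann_kendall(temps))
```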

Keywords: Trend, climate changes, Sefid-roud, Mann-Kendall.

254 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the five-parameter Wakeby distribution as the theoretical reference model. The number and the quality of its parameters indicate that this distribution may be an appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena that produce heavy tails. The estimation methods proposed for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather convergence to an inappropriate parameter configuration. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The motivation lies in the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
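Since the Wakeby distribution is defined through its quantile function, a short hedged sketch is given below: the commonly cited five-parameter quantile form and inverse-transform sampling from it. The parameter values are purely illustrative, not estimates from any data set.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile: x(F) = xi + (alpha/beta)[1-(1-F)^beta] - (gamma/delta)[1-(1-F)^(-delta)]."""
    F = np.asarray(F, float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

# Inverse-transform sampling with illustrative parameter values
rng = np.random.default_rng(1)
u = rng.uniform(size=10_000)
sample = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=0.5, gamma=1.0, delta=0.2)
print(f"sample mean = {sample.mean():.2f}, 99th percentile = {np.percentile(sample, 99):.2f}")
```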

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

253 The Effects and Interactions of Synthesis Parameters on Properties of Mg Substituted Hydroxyapatite

Authors: S. Sharma, U. Batra, S. Kapoor, A. Dua

Abstract:

In this study, the effects and interactions of reaction time and capping agent assistance during sol-gel synthesis of magnesium substituted hydroxyapatite nanopowder (MgHA) on the hydroxyapatite (HA) to β-tricalcium phosphate (β-TCP) ratio, the Ca/P ratio and the mean crystallite size were examined experimentally as well as through statistical analysis. MgHA nanopowders were synthesized by the sol-gel technique at room temperature using aqueous solutions of calcium nitrate tetrahydrate, magnesium nitrate hexahydrate and potassium dihydrogen phosphate as starting materials. The reaction time for sol-gel synthesis was varied between 15 and 60 minutes. Two process routes were followed, with and without addition of triethanolamine (TEA) to the solutions. The elemental compositions of the as-synthesized powders were determined using X-ray fluorescence (XRF) spectroscopy. The functional groups present in the as-synthesized MgHA nanopowders were established through Fourier Transform Infrared Spectroscopy (FTIR). The amounts of phases present, the Ca/P ratio and the mean crystallite sizes of the MgHA nanopowders were determined using X-ray diffraction (XRD). The HA content in the biphasic mixture of HA and β-TCP and the Ca/P ratio in the as-synthesized MgHA nanopowders increased effectively with the reaction time of the sols (p<0.0001, two-way ANOVA); however, they were independent of TEA addition (p>0.15, two-way ANOVA). The MgHA nanopowders synthesized with TEA assistance exhibited a crystallite size 14 nm smaller (p<0.018, two-sample t-test) than the powder synthesized without TEA assistance.

Keywords: Capping agent, hydroxyapatite, regression analysis, sol-gel, two-sample t-test, two-way ANOVA.

252 Hydrological Characterization of a Watershed for Streamflow Prediction

Authors: Oseni Taiwo Amoo, Bloodless Dzwairo

Abstract:

In this paper, we extend the versatility and usefulness of GIS as a methodology for river basin hydrologic characteristics analysis (HCA). The Gurara River basin, located in North-Central Nigeria, is presented in this study. This is ongoing research using a spatial Digital Elevation Model (DEM) and Arc-Hydro tools to take inventory of the basin characteristics in order to predict the effect of water abstraction on the streamflow regime. One of the main concerns of hydrological modelling is the quantification of runoff from rainstorm events. In practice, the Soil Conservation Service curve number (SCS) method and the conventional procedure called the rational technique are still generally used; these traditional lumped hydrological models convert statistical properties of rainfall in a river basin into observed runoff and hydrographs, but they give little or no spatially distributed information on rainfall and the physical characteristics of the basin. This paper therefore synthesizes morphometric parameters in generating runoff. The expected results on basin characteristics such as size, area, shape and slope of the watershed, together with stream distribution network analysis, could be useful in estimating streamflow discharge. Water resources managers and irrigation farmers could use the tool for determining the net return from available scarce water resources where past data records are sparse for the aspects of land and climate.
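A hedged sketch of the SCS curve number method mentioned above (the lumped approach the paper contrasts with its GIS-based characterization); the curve number and rainfall depth are hypothetical, not Gurara basin values.

```python
def scs_runoff(rainfall_mm, curve_number):
    """SCS-CN direct runoff depth (mm): Q = (P - 0.2S)^2 / (P + 0.8S), with S = 25400/CN - 254."""
    s = 25400.0 / curve_number - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                             # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# Hypothetical storm of 60 mm over a catchment with CN = 75
print(f"Runoff depth: {scs_runoff(60.0, 75):.1f} mm")
```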

Keywords: Hydrological characteristic, land and climate, runoff discharge, streamflow.

251 The Mediating Effect of MSMEs Export Performance between Technological Advancement Capabilities and Business Performance

Authors: Fawad Hussain, Mohammad Basir Bin Saud, Mohd Azwardi Md Isa

Abstract:

The aim of this study is to empirically investigate the mediating impact of export performance (EP) between technological advancement capabilities and business performance (BP) of Malaysian manufacturing micro, small and medium-sized enterprises (MSMEs). A firm’s technological advancement resources are hypothesized as a platform to enhance both the exports and the BP of manufacturing MSMEs in Malaysia. The study is twofold: first, it investigates whether technological advancement capabilities improve the main performance measure considered here, EP; second, it investigates how efficiently and effectively technological advancement capabilities contribute to the overall BP of Malaysian MSMEs. SmartPLS 3 statistical software was used to examine the associations between technological advancement capabilities, MSME EP and BP. The data were collected from Malaysian manufacturing MSMEs in the east coast industrial zones, known as the manufacturing hub of MSMEs. Seven hundred and fifty (750) questionnaires were distributed, but only 148 usable questionnaires were returned. The findings indicate that technological advancement capabilities help to strengthen exports in terms of time and cost efficiency and play a significant role in improving BP. This study is helpful for small and medium enterprise owners who intend to expand their business overseas: through smart technological advancement resources they can achieve business competitiveness and excellence in both local and international markets.

Keywords: Technological advancement capabilities, export performance, business performance, small and medium manufacturing enterprises, Malaysia.

250 Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads

Authors: Kayijuka Idrissa

Abstract:

This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, using data collected from the national police of Rwanda in 2012. Some mathematical models were used in order to analyze and compare traffic variables. The work was carried out on Kigali roads, specifically at the roundabouts from the Kigali Business Center (KBC) to Prince House, as the study sites. We used mathematical tools to analyze the collected data and to understand the relationships between traffic variables. The Poisson distribution was applied to analyze the number of accidents that occurred on the section of road from KBC to Prince House. The results show that accidents occurred at a very high rate in 2012 because this section has a very narrow single lane on each side, which leads to high congestion of vehicles and, consequently, frequent accidents. Using the speed and density data collected from this section of road, we found that an increase in density results in a decrease in vehicle speed; at the point where the density equals the jam density, the speed becomes zero. The approach is promising in capturing sudden changes in flow patterns and is open to use in a series of intelligent management strategies, especially in the detection and control of non-recurrent congestion effects.
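Two hedged snippets illustrating the tools named above: the Poisson probability of observing k accidents given a mean rate, and a Greenshields-type linear speed-density relation consistent with the described behavior (speed falling to zero at the jam density). All numbers are hypothetical, not the Kigali data.

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k accidents when the mean accident rate is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def greenshields_speed(density, free_flow_speed, jam_density):
    """Linear speed-density relation: speed reaches zero at the jam density."""
    return max(0.0, free_flow_speed * (1.0 - density / jam_density))

# Hypothetical figures for illustration only
print(f"P(3 accidents | mean 2/month) = {poisson_pmf(3, 2.0):.3f}")
print(f"Speed at 80 veh/km (vf=60 km/h, kj=120 veh/km): {greenshields_speed(80, 60, 120):.1f} km/h")
```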

Keywords: Statistical methods, Poisson distribution, car moving techniques, traffic flow.

249 Student Attitude towards Entrepreneurship: A South African and Dutch Comparison

Authors: Natanya Meyer, Johann Landsberg

Abstract:

Unemployment among the youth is a significant problem in South Africa. Large corporations and the public sector simply cannot create enough jobs. Too many youths in South Africa currently do not consider entrepreneurship as an option in order to become independent. Unlike the youth of the Netherlands, South African youth prefer to find employment in the public or private sector. The Netherlands has a much lower unemployment rate than South Africa and the Dutch are generally very entrepreneurial. From early on, entrepreneurship is considered a desirable career option in the Netherlands. The purpose of this study was to determine whether there is a difference in the perceptions of some Dutch and South African students in terms of unemployment and entrepreneurship. Questionnaires were distributed to students at the North West University's Vaal Triangle campus in Vanderbijlpark in Gauteng, South Africa and the Technical University of Delft in the Netherlands. A descriptive statistical analysis approach was followed and the means for the independent questions were calculated. The results demonstrate that the Dutch students are not as concerned about unemployment after completion of their studies as this is not as significant a problem as it is in South Africa. Both groups had positive responses towards the posed questions, but the South African group felt more strongly about the issues. Both groups of students felt that there was a need for more practical entrepreneurship training. The South African education system should focus on practical entrepreneurship training from a young age.

Keywords: Entrepreneurship development, entrepreneurship development programmes, entrepreneurship intention, Netherlands, South Africa, unemployment.

248 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing

Authors: Abhay Asthana, Gyati Shilakari Asthana

Abstract:

Patient compliance and stability, in combination with controlled drug delivery and biocompatibility, form the core features of the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability and strength. The optimized formulation comprised the polymers hydroxypropyl methyl cellulose, ethylcellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers and the active curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium with continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at an optimum frequency for a given time and later cast into patch form. All steps were carried out under strict aseptic conditions. The formulations in the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility and brittleness. The optimized formulation, kept on stability in butter paper in a sterile pack, displayed a folding endurance in the range of 20 to 23 folds without any evidence of cracking at room temperature (RT, 24 ± 2°C). The patch displayed acceptable parameters after stability studies conducted under refrigerated conditions (8 ± 0.2°C) and at RT (24 ± 2°C) for up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release and drug content during the stability study conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. The cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation coefficient R² > 0.9. The developed biodegradable patch formulation shows promising results in terms of stability and release profiles.

Keywords: Sustained biodegradation, wound healing, polymeric patch, stability.

247 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed statistically are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrap approach (MEboot), based on a process that attempts to deal with imperfect information and to reduce uncertainty in the data observations (asymmetrical data). In addition, the tourism leakages were investigated by a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed with the MEboot approach differ from those of the GMM method; however, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.

Keywords: Thailand tourism, maximum entropy bootstrapping approach, macroeconomic model, asymmetric information.

246 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

Under the circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give an accurate and timely forecast of the AQI (air quality index). In order to simplify the factors influencing air quality in a city and forecast the city's AQI for the following day, this study used MATLAB and constructed a PCA-GA-BP mathematical model. Specifically, the study first applied principal component analysis (PCA) to the factors influencing tomorrow's AQI, including today's weather, industrial waste gas and IAQI data. Then, a back-propagation neural network (BP), optimized by a genetic algorithm (GA), was used to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GA-BP model's forecasting capability, the study uses two statistical indices to evaluate the AQI forecast results: the normalized mean square error and the fractional bias. Finally, the study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back-propagation model. In conclusion, the performance of the model in forecasting AQI is comparatively convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
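A hedged sketch of a PCA-plus-back-propagation pipeline of the kind described above: scikit-learn's MLPRegressor stands in for the GA-optimized BP network, and the data are random placeholders, not the study's AQI records.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Random placeholder data: today's weather / waste-gas / IAQI features -> tomorrow's AQI
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = X[:, :3].sum(axis=1) * 10 + 80 + rng.normal(0, 5, 300)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                     # keep the leading principal components
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
nmse = np.mean((pred - y[250:]) ** 2) / np.var(y[250:])   # one common normalization of MSE
print(f"NMSE on held-out data: {nmse:.3f}")
```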

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model.

245 Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

Authors: Lily Ingsrisawang, Supawadee Ingsriswang, Saisuda Somchit, Prasert Aungsuratana, Warawut Khantiyanan

Abstract:

This paper presents a machine learning methodology for a short-term rain forecasting system. Decision trees, Artificial Neural Networks (ANN) and Support Vector Machines (SVM) were applied to develop classification and prediction models for rainfall forecasts. The goals of this work are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status into either rain or no-rain; the overall accuracy of the classification tree reaches 94.41% with five-fold cross-validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm) and moderate-rain (>10 mm), with an overall accuracy of 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure its training and testing errors; the ANN yields a lower RMSE, at 0.171, for daily rainfall estimates than for next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the three classes above, achieving 68.15% and 69.10% overall accuracy for same-day prediction with the ANN and SVM models, respectively. The obtained results illustrate the comparative predictive power of the different methods for rainfall estimation.
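A hedged sketch of the rain / no-rain classification step with five-fold cross-validation, using scikit-learn's CART-style tree in place of C4.5; the feature matrix is a random placeholder, not the 179-record weather data set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Random placeholder data: 179 records x 57 weather features, binary rain status
rng = np.random.default_rng(42)
X = rng.normal(size=(179, 57))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 179) > 0).astype(int)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)   # five-fold cross-validation accuracy
print(f"Mean CV accuracy: {scores.mean():.3f}")
```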

Keywords: Machine learning, decision tree, artificial neural network, support vector machine, root mean square error.

244 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models for overdispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. Studies indicate that ZIP and ZINB consistently provide a better fit than the ordinary Poisson and negative binomial models for such data. In this study, we propose the use of the zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG) and zero-inflated strict arcsine (ZISA) models for overdispersed medical count data. These models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives for modeling overdispersed medical count data, as supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian and strict arcsine are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling overdispersed medical count data when ZIP and ZINB are inadequate.
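A hedged sketch of the zero-inflated Poisson probability mass function that the proposed ZIIT/ZIPIG/ZISA models are compared against; the parameter values are illustrative only.

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: P(0) = pi + (1-pi)e^{-lam}; P(k) = (1-pi)e^{-lam}lam^k/k! for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# Illustrative parameters: 30% structural zeros, Poisson mean 2.5
probs = [zip_pmf(k, pi=0.3, lam=2.5) for k in range(6)]
print([round(p, 3) for p in probs])
```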

Keywords: Zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit.

243 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs the transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, together with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set: pairs are selected randomly from a single district, and each speaker has 10 sentences, two used for training and eight for testing. Atomic index probabilities are created for each training sentence and each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities of the training sentences and the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
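A minimal, hedged sketch of the greedy matching pursuit loop used for atomic decomposition, with a random unit-norm dictionary standing in for the learned Gabor/T-F dictionary; it illustrates the algorithm, not the authors' system.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with the residual."""
    residual = signal.copy()
    indices, weights = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # inner product with every atom
        best = int(np.argmax(np.abs(correlations)))
        w = correlations[best]
        residual = residual - w * dictionary[:, best]   # subtract the selected component
        indices.append(best)
        weights.append(float(w))
    return indices, weights, residual

# Random unit-norm dictionary (128-sample atoms) and a synthetic 2-atom signal
rng = np.random.default_rng(0)
D = rng.normal(size=(128, 64))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]
idx, w, r = matching_pursuit(x, D, n_atoms=4)
print(idx, [round(v, 2) for v in w], round(float(np.linalg.norm(r)), 3))
```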

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

242 Simulation and Statistical Analysis of Motion Behavior of a Single Rockfall

Authors: Iau-Teh Wang, Chin-Yu Lee

Abstract:

The impact force of a rockfall is mainly determined by its motion behavior and velocity, which depend on the rock shape, slope gradient, height, and surface roughness of the travel path. It is essential to calculate the path of a rockfall precisely in order to effectively minimize and prevent rockfall damage. Using the Colorado Rockfall Simulation Program (CRSP) as the analysis tool, this research studies the influence of three rock shapes (spherical, cylindrical and discoidal) and of surface roughness on the path of a single rockfall. The analysis shows that, in addition to the slope gradient, the geometry of the falling rock and the joint roughness coefficient (JRC) of the slope are the main factors affecting the motion behavior of a rockfall. On a single flat slope, both the rock's bounce height and its velocity increase as the surface gradient increases, with a critical gradient value of 1:m = 1. Bouncing behavior and higher velocities occur more easily when the rock geometry is more oval, whereas a flat piece tends to slide and is easily influenced by changes in surface undulation. When JRC < 1.4, the velocity decreases and the bounce height increases as JRC increases. At a fixed gradient, a greater JRC produces a higher bounce height, while the velocity shows a downward trend. Therefore, the best protection points and facilities can be chosen if the paths of rockfalls are precisely estimated.

Keywords: rock shape, surface roughness, moving path.

241 The Effect of Cooperation Teaching Method on Learning of Students in Primary Schools

Authors: Fereshteh Afkari, Davood Bagheri

Abstract:

This study compares the effect of a cooperative teaching method with traditional methods on the learning of first-grade primary school girl students in courses including mathematics, writing and experimental science: students trained with the cooperative method are compared with students trained with traditional methods. The survey was performed on 100 first-grade primary girl students in Tehran, selected by cluster random sampling. Given the topic, a quasi-experimental research method was used; the necessary information was collected through questionnaires and examination questions prepared by the researcher in collaboration with teachers and experts in this field and in the related courses. The sample was divided into experimental and control groups: the experimental group was taught the mathematics, writing and experimental science courses with the cooperative method, and the control group with traditional methods. The results, obtained using t-tests for two independent groups, show that cooperative teaching leads to positive outcomes: compared with traditional methods, it increases student learning, improves skills in solving mathematics exercises, leads to better understanding, and raises the skill level of students in practical lessons such as science and writing.

Keywords: Method of teaching, learning, collaboration.

240 Governance Commitment and Time Differences in Aspects of Sustainability Reporting in Nigerian Banks

Authors: Nwobu Obiamaka, Owolabi Akintola

Abstract:

This study examined the extent of statistically significant differences between the economic, environmental, governance and social aspects of sustainability reporting as a result of having a board committee on sustainability and of the time (year) of reporting, for business organizations in the Nigerian banking sector. The reporting years under consideration were 2010, 2011, 2012 and 2013. A content analysis methodology was employed, using a reporting index to score the number of economic, environmental, governance and social indicators of sustainability reporting. The results indicate that business organizations with a board committee on sustainability reported more indicators of sustainability than those without such a committee. Also, sustainability reporting in 2013 was higher than in the prior years (2012, 2011 and 2010) for the economic, environmental and social indicators, while the governance indicators were highest in 2012 compared with the other years (2013, 2011 and 2010). The implication of these findings is that business organizations with board committees on sustainability are monitored by such boards to report more to their stakeholders. At the same time, business organizations are appreciating the need to engage in sustainability reporting with each passing year, which could be due to the Central Bank of Nigeria (CBN) Sustainability Reporting framework that organizations in the banking sector have to adhere to. When sustainability issues are monitored by the board of directors, business organizations are likely to increase and improve their sustainability reporting.

Keywords: Governance, organizations, reporting, sustainability.

239 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared with a conventional optimal model-based detector for binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter- and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in the low SNR (signal-to-noise ratio) range. For large SNR, the performance of the SVM is similar to that of the classical detectors; however, the convergence between SVM and maximum likelihood detection occurs at a higher SNR as the noise environment becomes more hostile.
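A hedged sketch of SVM-based detection of BPSK symbols in AWGN and the resulting BER, with scikit-learn's SVC standing in for the paper's least-squares SVM and without the fading or colored-noise models; the SNR convention and all figures are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
snr_db = 5.0
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))   # noise std for unit-energy BPSK (one common convention)

def bpsk_samples(n):
    bits = rng.integers(0, 2, n)
    received = (2 * bits - 1) + rng.normal(0, sigma, n)   # BPSK symbol +/-1 plus AWGN
    return received.reshape(-1, 1), bits

X_train, y_train = bpsk_samples(2000)
X_test, y_test = bpsk_samples(20000)

detector = SVC(kernel="rbf").fit(X_train, y_train)
ber = np.mean(detector.predict(X_test) != y_test)
print(f"BER at {snr_db} dB SNR: {ber:.4f}")
```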

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.

238 Cost of Governance in Nigeria: In Whose Interest?

Authors: Francis O. Iyoha, Daniel E. Gberevbie, Charles T. Iruonagbe, Matthew E. Egharevba

Abstract:

The cost of governance in Nigeria has become a challenge to development and a concern to practitioners and scholars alike in the fields of business and social science research. In the 2010 national budget of NGN 4.6 trillion (USD 28.75 billion), for instance, only a paltry sum of NGN 1.8 trillion (USD 11.15 billion) was earmarked for capital expenditure. Similarly, in 2013, out of a total national budget of NGN 4.92 trillion (USD 30.75 billion), only NGN 1.50 trillion (USD 9.38 billion) was voted for capital expenditure. Therefore, based on data sourced from the Nigerian Office of Statistics, the Central Bank of Nigeria Statistical Bulletin and the United Nations Development Programme, this study examined the causes of the high cost of governance in Nigeria. It found that the high cost of governance in the country is in the interest of the ruling class, arising from their unethical behaviour, namely corrupt practices and the poor management of public resources. As a result, the study recommends intensifying the war against corruption and the mismanagement of public resources by government officials as a possible way to reduce the high cost of governance in Nigeria. This could be achieved by strengthening the constitutional powers of the various anti-corruption agencies in the areas of arrest, investigation and prosecution of offenders, without interference from the executive arm of government at the local, state or federal level.

Keywords: Capital expenditure, Cost of governance, recurrent expenditure, unethical behaviour.

237 AniMoveMineR: Animal Behavior Exploratory Analysis Using Association Rules Mining

Authors: Suelane Garcia Fontes, Silvio Luiz Stanzani, Pedro L. Pizzigatti Corrêa, Ronaldo G. Morato

Abstract:

Environmental changes and major natural disasters are increasingly prevalent in the world due to the damage that humanity has caused to nature, and this damage directly affects the lives of animals. Thus, the study of animal behavior and of animal interactions with the environment can provide knowledge that guides researchers and public agencies in preservation and conservation actions. Exploratory analysis of animal movement can determine patterns of animal behavior, and technological advances have expanded the ability to track animals and, consequently, behavioral studies. There is a great deal of research on animal movement and behavior, but a proposal that combines resources, allows exploratory analysis of animal movement, and provides statistical measures of individual animal behavior and of its interaction with the environment is missing. The contribution of this paper is the framework AniMoveMineR, a unified solution that combines trajectory analysis and data mining techniques to explore animal movement data and provide a first step toward answering questions about individual animal behavior and interactions with other animals over time and space. We evaluated the framework using data from monitored jaguars in Miranda, in the Pantanal, Brazil, in order to verify whether AniMoveMineR allows the level of interaction between these jaguars to be identified. The results were positive and provided indications about the individual behavior of the jaguars and about which jaguars have the highest or lowest correlation.
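A hedged, self-contained sketch of the support/confidence computation that underlies association rule mining (pure Python, not the AniMoveMineR framework); the "transactions" are hypothetical encounter records between tagged animals.

```python
from itertools import combinations

# Hypothetical encounter records: each set lists animals seen together in one time window
transactions = [
    {"jaguar_A", "jaguar_B"},
    {"jaguar_A", "jaguar_B", "jaguar_C"},
    {"jaguar_B", "jaguar_C"},
    {"jaguar_A", "jaguar_B"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

for a, b in combinations(sorted({"jaguar_A", "jaguar_B", "jaguar_C"}), 2):
    print(f"{{{a}}} -> {{{b}}}: support={support({a, b}):.2f}, confidence={confidence({a}, {b}):.2f}")
```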

Keywords: Data mining, data science, trajectory, animal behavior.

236 Multi Task Scheme to Monitor Multivariate Environments Using Artificial Neural Network

Authors: K. Atashgar

Abstract:

When an assignable cause (or causes) manifests itself in a multivariate process and the process shifts to an out-of-control condition, a root-cause analysis should be initiated by quality engineers to identify and eliminate the assignable cause(s) affecting the process. A root-cause analysis in a multivariate process is more complex than in a univariate process. For a process involving several correlated variables, an effective root-cause analysis is possible only when all the required knowledge can be identified simultaneously: the out-of-control condition, the change point, and the variable(s) responsible for the out-of-control condition. Although the literature addresses different schemes for monitoring multivariate processes, few scientific reports focus on all of this required knowledge. To the best of the author's knowledge, this is the first time that a multi-task model based on an artificial neural network (ANN) has been reported to monitor all the required knowledge simultaneously for a multivariate process with more than two correlated quality characteristics. The performance of the proposed scheme is evaluated numerically for different step shifts affecting the mean vector, using the average run length. The simulation results indicate that the multi-task scheme provides all the required knowledge effectively.
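A hedged sketch of how average run length (ARL) can be estimated by simulation for a bivariate process, here monitored with a Hotelling T² statistic as a simple stand-in; the ANN-based multi-task scheme itself is not reproduced, and the shift size, correlation and control limit are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mean0 = np.zeros(2)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])     # two correlated quality characteristics
cov_inv = np.linalg.inv(cov)
ucl = 10.6                                   # illustrative upper control limit for T^2

def run_length(shift):
    """Samples until the T^2 statistic first exceeds the control limit after a mean shift."""
    t = 0
    while True:
        t += 1
        x = rng.multivariate_normal(mean0 + shift, cov)
        t2 = (x - mean0) @ cov_inv @ (x - mean0)
        if t2 > ucl:
            return t

shift = np.array([1.0, 0.0])                 # illustrative step shift in the first variable
arl = np.mean([run_length(shift) for _ in range(2000)])
print(f"Estimated ARL for shift {shift}: {arl:.1f}")
```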

Keywords: Artificial neural network, Multivariate process, Statistical process control, Change point.

235 The Virtual Container Yard: Identifying the Persuasive Factors in Container Interchange

Authors: L. Edirisinghe, Zhihong Jin, A. W. Wijeratne, R. Mudunkotuwa

Abstract:

The virtual container yard is an effective solution to the container inventory imbalance problem, which is a global issue: it causes substantial cost to carriers, which inadvertently adds to the prices of consumer goods. The virtual container yard is rooted in the fundamentals of container interchange between carriers. If carriers opt to interchange their excess containers with those who are in deficit, a substantial part of the empty repositioning cost could be eliminated. Unlike other types of ships, a container ship cannot have cargo loaded onto it directly: slots and containers are complementary components, so a carrier cannot ship cargo if containers are not available, and vice versa. A few decades ago, carriers recognized slot interchange (the slot being the unit of space in a container ship) as a viable solution to the imbalance of shipping space. Carriers interchange slots among themselves, which also increases the advantage of economies of scale in container shipping, and some of these service agreements between mega carriers have provisions to interchange containers too. However, the interchange mechanism is still not popular among carriers for containers; this is the paradox that prevails in the liner shipping industry. At present, carriers reposition their excess empty containers to areas where they are in demand. This research applied the factor analysis statistical method. The paper reveals that five major components may influence the virtual container yard, namely organisation, practice and culture, legal and environment, international nature, and marketing, and that 12 variables may impact the virtual container yard; these are explained in the paper.

Keywords: Virtual container yard, imbalance, management, inventory.
