Search results for: kernel density estimation

1312 Evaluation of Internal Ballistics of Multi-Perforated Grain in a Closed Vessel

Authors: B. A. Parate, C. P. Shetty

Abstract:

This research article describes the methodology for evaluating the internal ballistics of a multi-perforated grain in a closed vessel (CV). Propellant testing in a CV is conducted to characterize propellants and to ascertain the various internal ballistic parameters. The assessment of internal ballistics plays a crucial role in judging a propellant's suitability for a given application. Propellants used in the defense sector have to satisfy user requirements as per laid-down specifications. The outputs from the CV evaluation of the multi-perforated grain are a maximum pressure of 226.75 MPa, a rate of pressure rise (dP/dt) of 36.99 MPa/ms, an average vivacity of 9.990×10⁻⁴ /(MPa·ms), a force constant of 933.9 J/g, a rise time of 9.85 ms, a pressure index of 0.878, and a burning coefficient of 0.2919. This paper addresses the internal ballistics of the multi-perforated grain, propellant selection, and the calculation and evaluation of the various parameters in CV testing. For the current analysis, the propellant is evaluated in a 100 cc CV with a propellant mass of 20 g, giving a loading density of 0.2 g/cc. The method for determining the internal ballistic properties consists of burning the propellant mass at constant volume.
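
As a rough illustration of how the reported quantities relate, the following Python sketch computes the loading density and one common form of the dynamic vivacity, (dP/dt)/(P·Pmax), from a pressure-time record; the trace is synthetic, and only the mass, volume and peak pressure are taken from the abstract.

```python
import numpy as np

# Loading density: propellant mass / closed-vessel volume (values from the abstract)
mass_g, volume_cc = 20.0, 100.0
loading_density = mass_g / volume_cc            # 0.2 g/cc

# Hypothetical pressure-time record (MPa vs. ms) standing in for CV data
t_ms = np.linspace(0.0, 12.0, 1201)
p_mpa = 226.75 / (1.0 + np.exp(-(t_ms - 6.0)))  # illustrative sigmoidal rise

dp_dt = np.gradient(p_mpa, t_ms)                # MPa/ms
p_max = p_mpa.max()

# Dynamic vivacity, one common definition: (dP/dt) / (P * P_max), in 1/(MPa*ms)
mask = p_mpa > 0.1 * p_max                      # avoid dividing by tiny pressures
vivacity = dp_dt[mask] / (p_mpa[mask] * p_max)

print(f"loading density = {loading_density:.2f} g/cc")
print(f"max dP/dt = {dp_dt.max():.2f} MPa/ms, mean vivacity = {vivacity.mean():.3e} 1/(MPa*ms)")
```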

Keywords: Burning rate, closed vessel, force constant, internal ballistic, loading density, maximum pressure, multi-propellant grain, propellant, rise time, vivacity.

1311 Least-Squares Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of Microcalcifications (MCCs) are the most frequent symptom of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Squares Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made using the confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database: a total of 380 suspicious areas, containing 235 malignant and 145 benign samples, are collected. A set of 50 features is calculated for each suspicious area, from which an optimal subset of the 23 most suitable features is selected by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
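
A minimal sketch of the kernel-comparison step, using scikit-learn's standard SVC as a stand-in for LS-SVM and a synthetic 380×23 feature matrix in place of the DDSM-derived data; cross-validated ROC AUC plays the role of the ROC analysis.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for the 380 suspicious areas x 23 selected features
# (class balance roughly 145 benign vs. 235 malignant)
X, y = make_classification(n_samples=380, n_features=23,
                           weights=[0.38, 0.62], random_state=0)

# Compare kernel functions by cross-validated ROC AUC
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{kernel:8s} ROC AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```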

Keywords: Clusters of Microcalcifications, Ductal Carcinoma in Situ, Least-Square Support Vector Machine, Particle Swarm Optimization.

1310 Ab initio Study of Co2ZrGe and Co2NbB Full Heusler Compounds

Authors: Abada Ahmed, Hiadsi Said, Ouahrani Tarik, Amrani Bouhalouane, Amara Kadda

Abstract:

Using the first-principles full-potential linearized augmented plane wave plus local orbitals (FP-LAPW+lo) method based on density functional theory (DFT), we have investigated the electronic structure and magnetism of the full Heusler alloys Co2ZrGe and Co2NbB. These compounds are predicted to be half-metallic ferromagnets (HMFs) with a total magnetic moment of 2.000 μB per formula unit, well consistent with the Slater-Pauling rule. Calculations show that both alloys have an indirect band gap in the minority-spin channel of the density of states (DOS), with values of 0.58 eV and 0.47 eV for Co2ZrGe and Co2NbB, respectively. Analysis of the DOS and magnetic moments indicates that their magnetism is mainly related to the d-d hybridization between the Co and Zr (or Nb) atoms. The half-metallicity is found to be relatively robust against volume changes. In addition, the atoms-in-molecules (AIM) formalism and the electron localization function (ELF) were adopted to study the bonding properties of these compounds, building a bridge between their electronic and bonding behavior. As they have good crystallographic compatibility with the lattices of industrially used semiconductors and negative calculated cohesive energies of considerable magnitude, these two alloys could be promising magnetic materials in the field of spintronics.

Keywords: Electronic properties, full Heusler alloys, half-metallic ferromagnets, magnetic properties.

1309 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two major challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily capture object edges while remaining robust to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, the real testing results show that the proposed HUG features achieve better classification performance than the HOG features.
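
A minimal sketch of such a detection pipeline using standard HOG features from scikit-image and a linear SVM; the HUG variant proposed by the authors is not reproduced here, and the image windows are synthetic placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(windows):
    """Extract standard HOG descriptors from grayscale image windows."""
    return np.array([
        hog(w, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for w in windows
    ])

# Hypothetical 64x128 grayscale windows: positives (objects) vs. negatives
rng = np.random.default_rng(0)
pos = rng.random((50, 128, 64))
neg = rng.random((50, 128, 64))
X = hog_features(np.concatenate([pos, neg]))
y = np.r_[np.ones(50), np.zeros(50)]

clf = LinearSVC(max_iter=5000).fit(X, y)   # linear-kernel SVM classifier
print("training accuracy:", clf.score(X, y))
```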

Keywords: Moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine.

1308 A Numerical Study on Electrophoresis of a Soft Particle with Charged Core Coated with Polyelectrolyte Layer

Authors: Partha Sarathi Majee, S. Bhattacharyya

Abstract:

Migration of a core-shell soft particle under the influence of an external electric field in an electrolyte solution is studied numerically. The soft particle is coated with a positively charged polyelectrolyte layer (PEL), and the rigid core has a uniform surface charge density. The Darcy-Brinkman extended Navier-Stokes equations are solved for the motion of the ionized fluid, the non-linear Nernst-Planck equations for the ion transport, and the Poisson equation for the electric potential. A pressure-correction-based iterative algorithm is adopted for the numerical computations. The effects of convection on double layer polarization (DLP) and diffusion-dominated counter-ion penetration are investigated for a wide range of Debye layer thicknesses, PEL fixed surface charge densities, and PEL permeabilities. Our results show that when the Debye layer thickness is of the order of the particle size, the DLP effect is significant and produces a reduction in electrophoretic mobility. However, the double layer polarization effect is negligible for a thin Debye layer or low-permeability cases. The point of zero mobility and the existence of mobility reversal, depending on the electrolyte concentration, are also presented.

Keywords: Debye length, double layer polarization, electrophoresis, mobility reversal, soft particle.

1307 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of this paper is to estimate the US market potential of small wind turbines and to forecast their sales in the US. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work, an exponential distribution is used for modeling replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases), without accounting for price changes, is 57080 (confidence interval from 49294 to 64866 at P = 0.95) for an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimated US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) for an average lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) for an average lifetime of 20 years. The explained variance is 95.3% in both cases.
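
A minimal sketch of fitting the standard Bass cumulative-adoption curve by nonlinear regression with SciPy; the annual sales series below is hypothetical and merely stands in for the AWEA 2001-2012 statistics.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) under the standard Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Hypothetical annual sales (units/year), standing in for the AWEA series
years = np.arange(1, 13)
sales = np.array([2100, 2900, 3200, 4700, 4300, 6800, 9100, 10500,
                  9800, 7700, 7300, 3700], dtype=float)
cum_sales = np.cumsum(sales)

# Nonlinear regression for market potential m and diffusion parameters p, q
(m, p, q), _ = curve_fit(bass_cumulative, years, cum_sales,
                         p0=[1.5 * cum_sales[-1], 0.01, 0.3],
                         bounds=([1e3, 1e-4, 1e-3], [1e6, 1.0, 2.0]))
print(f"market potential m = {m:.0f}, p = {p:.4f}, q = {q:.4f}")
```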

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.

1306 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the accuracy of remaining useful life prediction for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have two drawbacks: (1) statistical features with different ranges contribute differently to the health indicator, so expert knowledge is required to extract the features; (2) when convolutional neural networks are used to extract time-frequency features of signals, the temporal order of the signals is not considered. To overcome these drawbacks, this study proposes a method that combines a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform to form the original feature sets. Second, with the convolutional and pooling layers of the convolutional neural network, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
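
A minimal PyTorch sketch of the CNN-plus-GRU idea: a small CNN summarizes each time-frequency image, a GRU models the sequence, and a linear head outputs a health-indicator value. Layer sizes and image dimensions are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CNNGRUHealthIndicator(nn.Module):
    """Sketch: CNN features per time-frequency image, GRU over the sequence,
    linear head producing a scalar health indicator."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.gru = nn.GRU(input_size=16 * 4 * 4, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                    # x: (batch, seq_len, 1, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))    # (b*s, 16, 4, 4)
        feats = feats.flatten(1).view(b, s, -1)
        out, _ = self.gru(feats)             # (b, s, hidden_size)
        return self.head(out[:, -1])         # one indicator value per sequence

# Hypothetical batch: 2 sequences of 10 CWT images, each 64x64
demo = torch.randn(2, 10, 1, 64, 64)
print(CNNGRUHealthIndicator()(demo).shape)   # torch.Size([2, 1])
```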

Keywords: Continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life.

1305 Sensing Pressure for Authentication System Using Keystroke Dynamics

Authors: Hidetoshi Nonaka, Masahito Kurihara

Abstract:

In this paper, an authentication system using keystroke dynamics is presented. We introduce pressure sensing to improve the accuracy of measurement and the robustness against intrusion by key-loggers and similar attacks, although an additional instrument is needed. As a result, it has been found that pressure sensing is also effective for estimating the true moment of a keystroke.

Keywords: Biometric authentication, Keystroke dynamics, Pressure sensing, Time-frequency analysis.

1304 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
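
A minimal numerical sketch of the conflation step, assuming two hypothetical exponential recovery-time distributions: the conflated density is the normalized product of the individual PDFs, and for exponentials it is again exponential with the summed rate.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Hypothetical recovery-time distributions (days): severe vs. nuisance events
severe = stats.expon(scale=120.0)     # mean 120 days after a severe event
nuisance = stats.expon(scale=14.0)    # mean 14 days after nuisance flooding

x = np.linspace(0.0, 200.0, 20001)

# Conflation: normalized product of the individual probability density functions
product = severe.pdf(x) * nuisance.pdf(x)
conflated = product / trapezoid(product, x)

mean_conflated = trapezoid(x * conflated, x)
print(f"conflated mean recovery time = {mean_conflated:.1f} days")
# For exponentials the conflation is exponential with rate 1/120 + 1/14
print(f"analytic check: {1.0 / (1/120 + 1/14):.1f} days")
```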

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

1303 Mathematical Correlation for Brake Thermal Efficiency and NOx Emission of CI Engine using Ester of Vegetable Oils

Authors: Samir J. Deshmukh, Lalit B. Bhuyar, Shashank B. Thakre, Sachin S. Ingole

Abstract:

The aim of this study is to develop mathematical relationships for the performance parameter brake thermal efficiency (BTE) and the emission parameter nitrogen oxides (NOx) for various esters of vegetable oils used as CI engine fuels. BTE is an important performance parameter defining the ability of an engine to utilize the energy supplied and the power developed; it is likewise an indication of the efficiency of the fuels used. The esters of cottonseed oil, soybean oil, jatropha oil and hingan oil are prepared using the transesterification process and characterized for their physical and main fuel properties, including viscosity, density, flash point and higher heating value, using standard test methods. These esters are tested as CI engine fuels to analyze the performance and emission parameters in comparison to diesel. The results of the study indicate that the properties of the esters do not differ greatly from those of diesel. CI engine performance with esters as fuel is in line with that of diesel, whereas the emission parameters are reduced with the use of esters. A correlation is developed between BTE and brake power (BP), gross calorific value (CV), air-fuel ratio (A/F), and heat carried away by cooling water (HCW). Another equation is developed between the NOx emission and CO, HC, smoke density (SD), and exhaust gas temperature (EGT). The equations are verified by comparing the observed and calculated values, which gives coefficients of correlation of 0.99 and 0.96 for the BTE and NOx equations, respectively.
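
A minimal sketch of fitting such a correlation by ordinary least squares; the engine-test records below are hypothetical, and only the variable names (BP, CV, A/F, HCW) follow the abstract.

```python
import numpy as np

# Hypothetical engine-test records: columns are BP (kW), CV (kJ/kg),
# A/F ratio, and HCW (kJ/s); BTE (%) is the response.
X = np.array([[1.1, 42500, 24.0, 2.8],
              [1.9, 42500, 21.0, 3.3],
              [2.8, 42500, 19.0, 3.8],
              [3.7, 42500, 17.0, 4.4],
              [1.2, 39800, 23.0, 2.9],
              [2.0, 39800, 20.5, 3.5],
              [2.9, 39800, 18.5, 4.0],
              [3.8, 39800, 16.5, 4.6]], dtype=float)
bte = np.array([13.5, 19.2, 23.8, 27.1, 13.0, 18.5, 23.0, 26.2])

# Fit BTE = b0 + b1*BP + b2*CV + b3*(A/F) + b4*HCW by ordinary least squares
A = np.column_stack([np.ones(len(bte)), X])
coef, *_ = np.linalg.lstsq(A, bte, rcond=None)

pred = A @ coef
r = np.corrcoef(bte, pred)[0, 1]      # coefficient of correlation
print("coefficients:", np.round(coef, 4))
print(f"correlation between observed and calculated BTE: r = {r:.3f}")
```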

Keywords: Esters, emission, performance, and vegetable oil.

1302 Estimation of Exhaust and Non-Exhaust Particulate Matter Emissions’ Share from On-Road Vehicles in Addis Ababa City

Authors: Solomon Neway Jida, Jean-Francois Hetet, Pascal Chesse

Abstract:

Vehicular emission is the key source of air pollution in the urban environment. This includes both fine particles (PM2.5) and coarse particulate matter (PM10). Particulate matter emissions from road traffic comprise emissions from the exhaust tailpipe and emissions due to wear of vehicle parts such as brakes, tires and clutches, and the re-suspension of dust (non-exhaust emissions). This study estimates the share of these two sources of particulate emissions from on-road vehicles in the Addis Ababa municipality, Ethiopia. To calculate the shares, two methods were applied: exhaust-tailpipe emissions were calculated using the European emission inventory Tier II method, and Tier I was used for the non-exhaust emissions (vehicle tire wear, brake wear, and road-surface wear). The results show that, of the total traffic-related particulate emissions in the city, 63% is emitted from vehicle exhaust and the remaining 37% from non-exhaust sources. Annual road-transport exhaust emissions amount to around 2394 tons of particles from all vehicle categories. Of the total yearly non-exhaust particulate matter emissions, tire and brake wear contribute around 65% and road-surface wear the remaining 35%. Furthermore, vehicle tire and brake wear were responsible for 584.8 tons of coarse particles (PM10) and 314.4 tons of fine particles (PM2.5) annually in the city, whereas surface-wear emissions were responsible for around 313.7 tons of PM10 and 169.9 tons of PM2.5. This suggests that non-exhaust sources may be as significant as exhaust sources and contribute considerably to the impact on air quality.
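
The reported shares can be reproduced from the tonnages quoted above with a few lines of arithmetic, as in this sketch.

```python
# Annual particulate emissions reported in the abstract (tons/year)
exhaust = 2394.0
tire_brake = {"PM10": 584.8, "PM2.5": 314.4}    # tire and brake wear
road_wear = {"PM10": 313.7, "PM2.5": 169.9}     # road-surface wear

non_exhaust = sum(tire_brake.values()) + sum(road_wear.values())
total = exhaust + non_exhaust

print(f"exhaust share     : {100 * exhaust / total:.0f}%")       # ~63%
print(f"non-exhaust share : {100 * non_exhaust / total:.0f}%")   # ~37%
print(f"tire+brake within non-exhaust: "
      f"{100 * sum(tire_brake.values()) / non_exhaust:.0f}%")    # ~65%
print(f"road wear within non-exhaust : "
      f"{100 * sum(road_wear.values()) / non_exhaust:.0f}%")     # ~35%
```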

Keywords: Addis Ababa, automotive emission, emission estimation, particulate matters.

1301 Comparison of Reliability Systems Based Uncertainty

Authors: A. Aissani, H. Benaoudia

Abstract:

Stochastic comparison has been an important direction of research in various areas. It can be carried out using the notion of stochastic ordering, which gives a qualitative rather than purely quantitative estimation of the system under study. In this paper we present applications of comparison based on uncertainty, related to entropy, in reliability analysis, for example to design better systems. These results can be used as a priori information in simulation studies.

Keywords: Uncertainty, stochastic comparison, reliability, series system, imperfect repair.

1300 Mixtures of Monotone Networks for Prediction

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all of them. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions; a minimal numerical sketch of this mixture idea is given below. Using simulations and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison to standard neural networks with weight decay.
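
In the sketch below, local predictors are combined with normalized Gaussian kernel weights; simple monotone functions stand in for the monotone neural networks, and the centers and bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_mixture_predict(x, centers, experts, bandwidth=0.5):
    """Combine local expert predictors with normalized Gaussian kernel weights."""
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / bandwidth) ** 2)
    w /= w.sum(axis=1, keepdims=True)                 # weights sum to 1 per input
    preds = np.column_stack([f(x) for f in experts])  # one column per expert
    return (w * preds).sum(axis=1)                    # kernel-weighted prediction

# Two hypothetical monotone experts anchored at different regions of the input
centers = np.array([-1.0, 1.0])
experts = [lambda x: 0.5 * x + 1.0, lambda x: np.tanh(x)]

x = np.linspace(-2, 2, 5)
print(kernel_mixture_predict(x, centers, experts))
```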

Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.

1299 Robust Adaptive ELS-QR Algorithm for Linear Discrete Time Stochastic Systems Identification

Authors: Ginalber L. O. Serra

Abstract:

This work proposes a recursive weighted ELS algorithm for system identification that applies numerically robust orthogonal Householder transformations. The properties of the proposed algorithm show that it obtains acceptable results in a noisy environment: fast convergence and asymptotically unbiased estimates. A comparative analysis with other robust methods well known from the literature is also presented.
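
A minimal batch (non-recursive) sketch of the underlying idea: a weighted least-squares estimation step solved through an orthogonal QR factorization (NumPy's QR is based on Householder reflections) rather than by forming normal equations; the ARX data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ARX(1,1) data: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true, n = 0.8, 0.5, 200
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.standard_normal()

# Regressor matrix and exponentially weighted least squares via Householder QR
Phi = np.column_stack([y[:-1], u[:-1]])
w = np.sqrt(0.98 ** np.arange(n - 1)[::-1])   # forgetting-factor weights
Q, R = np.linalg.qr(w[:, None] * Phi)
theta = np.linalg.solve(R, Q.T @ (w * y[1:]))
print("estimated [a, b] =", np.round(theta, 3))
```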

Keywords: Stochastic Systems, Robust Identification, Parameter Estimation, Systems Identification.

1298 A New Fuzzy DSS/ES for Stock Portfolio Selection using Technical and Fundamental Approaches in Parallel

Authors: H. Zarei, M. H. Fazel Zarandi, M. Karbasian

Abstract:

A Decision Support System/Expert System for stock portfolio selection is presented in which, in the first step, both technical and fundamental data are used to estimate technical and fundamental return and risk (1st phase); the estimated values are then aggregated with the investor preferences (2nd phase) to produce a convenient stock portfolio. In the 1st phase there are two expert systems, each responsible for the technical or the fundamental estimation. In the technical expert system, twenty-seven candidate variables are identified for each stock, and the effective variables are selected using a rough-sets-based clustering method (RC). Next, two fuzzy rule-bases are developed for each stock with the fuzzy C-Means method and the Takagi-Sugeno-Kang (TSK) approach, one for return estimation and the other for risk. Thereafter, the parameters of the rule-bases are tuned with the backpropagation method. In parallel, for the fundamental expert system, fuzzy rule-bases have been identified in the form of "IF-THEN" rules through brainstorming with stock market experts, with the input data derived from financial statements; as a result, two fuzzy rule-bases have been generated for all the stocks, one for return and the other for risk. In the 2nd phase, user preferences are represented by four criteria and obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective values of user preference. Finally, a fuzzy rule-base with four rules treats these values and produces a ranking score for each stock, which leads to a satisfactory portfolio for the user. The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.

Keywords: Stock Portfolio Selection, Fuzzy Rule-Based Expert Systems, Financial Decision Support Systems, Technical Analysis, Fundamental Analysis.

1297 Alcohols as a Phase Change Material with Excellent Thermal Storage Properties in Buildings

Authors: Dehong Li, Yuchen Chen, Alireza Kaboorani, Denis Rodrigue, Xiaodong (Alice) Wang

Abstract:

Utilizing solar energy for thermal energy storage has emerged as an appealing option for lowering the amount of energy consumed by buildings. Due to their high heat storage density and non-corrosive, non-polluting properties, alcohols can be a good alternative to petroleum-derived paraffin phase change materials (PCMs). In this paper, ternary eutectic PCMs with suitable phase change temperatures were designed and prepared using lauryl alcohol (LA), cetyl alcohol (CA), stearyl alcohol (SA) and xylitol (X). The Differential Scanning Calorimetry (DSC) results revealed that the phase change temperatures of LA-CA-SA, LA-CA-X, and LA-SA-X were 20.52 °C, 20.37 °C, and 22.18 °C, respectively. The latent heats of phase change of the ternary eutectic PCMs were all higher than those of paraffin PCMs at roughly the same temperature, the highest being 195 J/g, indicating good thermal energy storage capacity. The preparation mechanism was investigated using Fourier-transform Infrared Spectroscopy (FTIR), and it was found that the components of the ternary eutectic PCMs were only physically mixed. Ternary eutectic PCMs have a simple preparation process, suitable phase change temperatures, and high energy storage density, and they are suitable for low-temperature architectural packaging applications.

Keywords: Thermal energy storage, buildings, phase change materials, alcohols.

1296 The Guaranteed Detection of the Seismoacoustic Emission Source in the C-OTDR Systems

Authors: Andrey V. Timofeev

Abstract:

A method is proposed for the stable detection of seismoacoustic sources in C-OTDR systems that guarantees given upper bounds on the probabilities of type I and type II errors. The properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented.

Keywords: Guaranteed detection, C-OTDR systems, change point, interval estimation.

1295 A Damage Level Assessment Model for Extra High Voltage Transmission Towers

Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang

Abstract:

Power failures resulting from tower collapse due to violent seismic events can bring enormous and inestimable losses. The Chi-Chi earthquake, for example, strongly struck Taiwan and caused huge damage to the power system on September 21, 1999; nearly 10% of extra high voltage (EHV) transmission towers were damaged in that earthquake. Therefore, the seismic hazards of EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. Earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Parameters related to the damage level at each point of an EHV tower are simulated and analyzed from monitoring-station data once an earthquake occurs. Through the Fourier transform, the seismic wave is analyzed and decomposed into its frequency components, and the data are shown as a response spectrum. With this method, the seismic frequency that damages EHV towers the most is clearly identified. An estimation model is then built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation by inspectors, the proposed model can provide a power company with the damage information of a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for health and construction monitoring because of the advantages of long-term evaluation of structural characteristics and long-term damage detection.

Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.

1294 An Investigation on Fresh and Hardened Properties of Concrete while Using Polyethylene Terephthalate (PET) as Aggregate

Authors: Md. Jahidul Islam, A. K. M. Rakinul Islam, Md. Salamah Meherier

Abstract:

This study investigates the suitability of using plastic, such as polyethylene terephthalate (PET), as a partial replacement of natural coarse and fine aggregates (for example, brick chips and natural sand) to produce lightweight concrete for load-bearing structural members. The plastic coarse aggregate (PCA) and plastic fine aggregate (PFA) were produced from melted polyethylene terephthalate (PET) bottles. Tests were conducted using three different water-cement (w/c) ratios, namely 0.42, 0.48, and 0.57, where PCA and PFA were used as 50% replacements of the coarse and fine aggregate, respectively. The fresh and hardened properties of concrete have been compared for natural aggregate concrete (NAC), PCA concrete (PCC) and PFA concrete (PFC). The compressive strength of concrete at 28 days varied with the water-cement ratio for both PCC and PFC. Between PCC and PFC, the PFA concrete showed the highest compressive strength (23.7 MPa) at the 0.42 w/c ratio and also the lowest compressive strength (13.7 MPa) at the 0.57 w/c ratio. A significant reduction in concrete density was mostly observed for the PCC samples, ranging between 1924 and 1977 kg/m³. With the increase in water-cement ratio, PCC achieved higher workability compared to both NAC and PFC. It was found that both the PCA- and PFA-containing concretes achieved the compressive strength required for structural use as partial replacements of the natural aggregate; however, to obtain the desired lower density of lightweight concrete, the PCA is most suited.

Keywords: Polyethylene terephthalate, plastic aggregate, concrete, fresh and hardened properties.

1293 Disparity Estimation for Objects of Interest

Authors: Yen San Yong, Hock Woon Hon

Abstract:

An algorithm for estimating the disparity of objects of interest is proposed. The algorithm uses image shifting and the overlapping area to estimate the disparity value, from which the depth of the objects of interest can be obtained. The algorithm is able to operate at different levels of accuracy; however, as the accuracy increases, the processing speed decreases. The algorithm is tested with static stereo images and sequences of stereo images. The experimental results are presented in this paper.
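
A minimal sketch of the shift-and-overlap idea: the right image is shifted over a disparity range and the overlap with the left-image region of interest is scored with the sum of absolute differences; the stereo pair and region here are synthetic.

```python
import numpy as np

def roi_disparity(left, right, roi, max_disp=32):
    """Estimate the disparity of a region of interest by shifting the right
    image and scoring the overlapping area with mean absolute difference."""
    r0, r1, c0, c1 = roi
    patch = left[r0:r1, c0:c1]
    scores = []
    for d in range(max_disp + 1):
        if c0 - d < 0:
            break
        candidate = right[r0:r1, c0 - d:c1 - d]   # shifted overlapping window
        scores.append(np.abs(patch - candidate).mean())
    return int(np.argmin(scores))

# Synthetic stereo pair: features in the right image shift left by 7 pixels
rng = np.random.default_rng(0)
left = rng.random((120, 160))
true_disp = 7
right = np.roll(left, -true_disp, axis=1)

roi = (40, 80, 60, 100)   # (row0, row1, col0, col1) of the object of interest
print("estimated disparity:", roi_disparity(left, right, roi))   # -> 7
```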

Keywords: Stereo vision, binocular parallax.

1292 Identification of the Best Blend Composition of Natural Rubber-High Density Polyethylene Blends for Roofing Applications

Authors: W. V. W. H. Wickramaarachchi, S. Walpalage, S. M. Egodage

Abstract:

Thermoplastic elastomer (TPE) is a multifunctional polymeric material which possesses a combination of the excellent properties of its parent materials. Basically, TPE has a rubber phase and a thermoplastic phase, which gives it the processability of a thermoplastic. When the rubber phase is partially or fully crosslinked in the thermoplastic matrix, the TPE is called a thermoplastic elastomer vulcanizate (TPV); if the rubber phase is non-crosslinked, it is called a thermoplastic elastomer olefin (TPO). Nowadays TPEs are available on the commercial market in various products, but the application of TPE as a roofing material is limited. Of the commercially available roofing products made from different materials, only single-ply roofing membranes and plastic roofing sheets are produced from rubbers and plastics. Natural rubber (NR) and high density polyethylene (HDPE) are used individually in various industrial applications, each with some drawbacks. Therefore, this study was focused on developing both TPO and TPV blends from NR and HDPE at different compositions and then identifying the best blend composition for use as a roofing material. A series of blends with NR loading varying from 10 wt% to 50 wt%, at 10 wt% intervals, was prepared using a twin screw extruder. Dicumyl peroxide was used as the crosslinker for the TPVs. The standard properties required of a roofing material, such as tensile properties, tear strength, hardness, impact strength, water absorption, swell/gel behavior and thermal characteristics of the blends, were investigated, and the change in tensile strength after exposure to UV radiation was also studied. The tensile strength, hardness, tear strength, melting temperature and gel content of the TPVs show higher values than those of the TPOs at every loading studied, while the water absorption and swelling index show lower values, suggesting that TPVs are more suitable than TPOs for roofing applications. Most of the optimum properties were shown at the 10/90 (NR/HDPE) composition; however, high impact strength and gel content were shown at the 20/80 (NR/HDPE) composition. Impact strength, being an energy-absorbing property, is the most important for a roofing material in order to resist impact loads. Therefore, 20/80 (NR/HDPE) is identified as the best blend composition. UV resistance and the other properties required of a roofing material could be achieved by incorporating suitable additives into the TPVs.

Keywords: Thermoplastic elastomer, natural rubber, high density polyethylene, roofing material.

1291 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (D) workspace. However, this approach is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space, whose number of dimensions equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first, the point clouds of all the bodies of the system and the interactions between them are used. The second is performed using the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study, and its 4-D configuration space is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was realized as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then proved using the obtained paths on the simulation model of the double-turret system.

Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.

1290 Predictive Analytics of Student Performance Determinants in Education

Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi

Abstract:

Every institute of learning is usually interested in the performance of enrolled students. The level of these performances determines the approach an institute of study may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict study performance. The accuracy, precision, recall, and F1 score obtained from a 5-Fold Cross-Validation were used to determine the best classification algorithm to predict students’ performances. SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Also, using the LR model, this study identified students' educational habits such as reading and paying attention in class as strong determinants for a student to have an above-average performance. Other important features include the academic history of the student and work. Demographic factors such as age, gender, high school graduation, etc., had no significant effect on a student's performance.
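
A minimal sketch of the evaluation protocol with scikit-learn: the three best-performing model families are compared by 5-fold cross-validation on accuracy, precision, recall, and F1; a synthetic dataset stands in for the student records.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the student-performance dataset
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM (linear)": SVC(kernel="linear"),
    "LDA": LinearDiscriminantAnalysis(),
}
scoring = ["accuracy", "precision", "recall", "f1"]

for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)
    cv = cross_validate(clf, X, y, cv=5, scoring=scoring)   # 5-fold CV
    summary = ", ".join(f"{s}={cv['test_' + s].mean():.3f}" for s in scoring)
    print(f"{name:14s} {summary}")
```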

Keywords: Student performance, supervised machine learning, prediction, classification, cross-validation.

1289 Strong Adhesion and High Wettability at Polyetheretherketone-Resin/Titanium-Dioxide Interface Obtained with Crystal-Orientation Control

Authors: Tomio Iwasaki, Yosuke Kawahito

Abstract:

The adhesion strength and wettability at the interfaces between polyetheretherketone (PEEK) resin and titanium dioxide (TiO2) have become more important because direct joining of PEEK resin and titanium (Ti), whose surface usually carries the oxide (TiO2), is needed not only in vehicles such as airplanes, automobiles, and space vehicles, but also in medical devices such as implants. To realize a strong joint between the PEEK resin and TiO2, the dependence of the adhesion strength and wettability on the crystal orientation of rutile TiO2 was investigated using molecular simulations. Molecular dynamics simulations were conducted by combining the quantum-mechanical equations for the electrons with Newton's equations of motion for the nuclear (atomic) coordinates. By putting a PEEK-resin sphere on a rutile TiO2 surface and heating the system to 650 K, the contact angles at the interfaces were calculated to evaluate the wettability. After the system was cooled from 650 K to 300 K, the adhesive fracture energy was calculated, as the difference between the energy of the PEEK-TiO2 attached state and that of the PEEK-TiO2 detached state, to evaluate the adhesion strength. The contact-angle results showed that PEEK resin on the TiO2(100) and TiO2(001) surfaces has low wettability with large contact angles, whereas PEEK resin on the TiO2(110) surface has high wettability with a small contact angle. The adhesive fracture energies showed that the adhesion at the PEEK-resin/TiO2(100) and PEEK-resin/TiO2(001) interfaces is weak, while the adhesion at the PEEK-resin/TiO2(110) interface is strong. To clarify why higher wettability and stronger adhesion are obtained at the PEEK/TiO2(110) interface than at the PEEK/TiO2(100) and PEEK/TiO2(001) interfaces, the atomic configurations at the interfaces were visualized. The atomic configuration at the PEEK/TiO2(110) interface shows that a lattice-matched coherent interface is realized and that the atomic density is high. In contrast, the PEEK/TiO2(001) interface shows a lattice-unmatched incoherent interface, and the PEEK/TiO2(100) interface shows a very low atomic density although a lattice-matched interface is realized. Therefore, the lattice matching and the high atomic density at the PEEK/TiO2(110) interface are considered to be the dominant factors in the high wettability and strong adhesion.

Keywords: Adhesion, direct joining, PEEK, TiO2, wettability.

1288 Simulation of Hydrogenated Boron Nitride Nanotube’s Mechanical Properties for Radiation Shielding Applications

Authors: Joseph E. Estevez, Mahdi Ghazizadeh, James G. Ryan, Ajit D. Kelkar

Abstract:

Radiation shielding is an obstacle in long-duration space exploration. Boron Nitride Nanotubes (BNNTs) have attracted attention as an additive to radiation shielding material due to B10's large neutron capture cross section, which is effective for low-energy neutrons ranging from 10⁻⁵ to 10⁴ eV, while hydrogen is effective at slowing down high-energy neutrons. Hydrogenated BNNTs are therefore potentially an ideal nanofiller for radiation shielding composites. We use Molecular Dynamics (MD) simulation via Accelrys Materials Studio 6.0 to model the Young's modulus of hydrogenated BNNTs. An extrapolation technique was employed to determine the Young's modulus from the deformation of the nanostructure at its theoretical density: a linear regression was used to extrapolate the data to the theoretical density of 2.62 g/cm³. The simulation data show that hydrogenated BNNTs experience an 11% decrease in Young's modulus for (6,6) BNNTs and an 8.5% decrease for (8,8) BNNTs compared to non-hydrogenated BNNTs. Hydrogenated BNNTs are a viable option as a nanofiller for radiation shielding nanocomposite materials for long-range and long-duration space exploration.
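
A minimal sketch of the extrapolation step: a linear regression of modulus against density, evaluated at the theoretical density of 2.62 g/cm³; the (density, modulus) pairs below are hypothetical placeholders for the MD results.

```python
import numpy as np

# Hypothetical (density, Young's modulus) pairs from simulations of the
# deformed nanostructure; the real MD values are not reproduced here.
density = np.array([1.80, 2.00, 2.20, 2.40])   # g/cm^3
modulus = np.array([0.62, 0.71, 0.80, 0.89])   # TPa

# Linear regression, then extrapolation to the theoretical density 2.62 g/cm^3
slope, intercept = np.polyfit(density, modulus, 1)
E_theoretical = slope * 2.62 + intercept
print(f"extrapolated Young's modulus at 2.62 g/cm^3: {E_theoretical:.2f} TPa")
```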

Keywords: Boron Nitride Nanotube, Radiation Shielding, Young's Modulus, Atomistic Modeling.

1287 A Study of the Replacement of Natural Coarse Aggregate by Spherically-Shaped and Crushed Waste Cathode Ray Tube Glass in Concrete

Authors: N. N. M. Pauzi, M. R. Karim, M. Jamil, R. Hamid, M. F. M. Zain

Abstract:

The aim of this study is to experimentally investigate the influence of the complete replacement of natural coarse aggregate with spherically-shaped and crushed waste cathode ray tube (CRT) glass on the workability, density, and compressive strength of concrete. After characterizing the glass, a group of concrete mixes was prepared containing 40% spherical CRT glass and 60% crushed CRT glass as a complete (100%) replacement of the natural coarse aggregate. From a total of 16 types of concrete mixes, the optimum proportion was selected based on its best performance. The test results showed that the use of spherical and crushed glass, which possesses a smooth surface, rounded, irregular and elongated shapes, and low water absorption, affects the workability of the concrete. Due to the higher specific gravity of the crushed glass, the concrete mixes containing CRT glass had a higher density than ordinary concrete. Although the spherical and crushed CRT glass is stronger than gravel, the results revealed a reduction in the compressive strength of the concrete. However, using a lower water-to-binder (w/b) ratio and a higher superplasticizer (SP) dosage was found to raise the compressive strength to 60.97 MPa at 28 days, which is 13% lower than that of the control specimen. These findings indicate that waste CRT glass in spherical and crushed form could be used as an alternative coarse aggregate, which may pave the way for the disposal of hazardous e-waste.

Keywords: Cathode ray tube, glass, coarse aggregate, compressive strength.

1286 Breakdown of LDPE Film under Heavy Water Absorption

Authors: Eka PW, T. Okazaki, Y. Murakami, N. Hozumi, M. Nagao

Abstract:

The breakdown strength characteristics of low density polyethylene (LDPE) films under DC voltage application and the effect of water absorption have been studied. The experiments were conducted under two conditions: dry and heavy water absorption. Under a DC ramp voltage, it was found that the breakdown strength under heavy water absorption is lower than under the dry condition. To clarify this effect, the temperature rise of the film was observed with a non-contact thermograph until the occurrence of electrical breakdown, and the conduction current of the sample was measured in correlation with the thermograph measurement. The observations showed that under heavy water absorption the hot spot in the sample appeared at a lower voltage, and that at the same voltage both the hot-spot temperature and the conduction current were higher than under the dry condition. The measurements show a good correlation between the existence of a critical field for the conduction current and the thermograph observations. Under heavy water absorption, the threshold field occurred earlier than under the dry condition, leading to a higher conduction current, and the temperature rise above the threshold field increased significantly with increasing field. The higher temperature rise was caused by the higher conduction current; as a result, the insulation breaks down at a lower applied field.

Keywords: Low density polyethylene, heavy water absorption, conduction current, temperature rise.

1285 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.

Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.

1284 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. Objective: This article aims to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and the density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In the simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, the process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: The two new layout schemes were shown to be superior by a series of simulations and field experiments. In future research, similar approaches could be applied when taking layout-design algorithm calculations into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

1283 Performance Comparison and Analysis of Different Schemes and Limiters

Authors: Wang Wen-long, Li Hua, Pan Sha

Abstract:

Eight difference schemes and five limiters are applied to the numerical computation of a Riemann problem, and the resolution of discontinuities produced by each scheme is compared. Numerical dissipation and its estimation are discussed. The results show that the numerical dissipation of each scheme is vital to improving the scheme's accuracy and stability. The MUSCL methodology is an effective approach to increasing computational efficiency and resolution. The limiter should be selected appropriately by balancing compressive and diffusive performance.
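
A minimal sketch of one ingredient discussed above: a minmod-limited MUSCL reconstruction of cell-interface states for a 1-D array of cell averages; the step profile is illustrative.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: pick the smaller-magnitude slope, or zero at extrema,
    to keep the MUSCL reconstruction non-oscillatory."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Piecewise-linear (MUSCL) reconstruction of face values for the
    interior cells of a 1-D array of cell averages u."""
    du_left = u[1:-1] - u[:-2]           # backward differences
    du_right = u[2:] - u[1:-1]           # forward differences
    slope = minmod(du_left, du_right)    # limited slope in each interior cell
    u_right_face = u[1:-1] + 0.5 * slope # state at the right face of each cell
    u_left_face = u[1:-1] - 0.5 * slope  # state at the left face of each cell
    return u_left_face, u_right_face

# Illustrative step profile (a Riemann-problem-like initial state)
u = np.where(np.linspace(0, 1, 20) < 0.5, 1.0, 0.1)
uL, uR = muscl_interface_states(u)
print(np.round(uR, 3))
```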

Keywords: Scheme, limiter, numerical simulation, Riemann problem.
