Search results for: principal component regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1830

1320 Automatic Reusability Appraisal of Software Components using Neuro-fuzzy Approach

Authors: Parvinder S. Sandhu, Hardeep Singh

Abstract:

Automatic reusability appraisal can help in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies the structural attributes of a component as well as its usability or relevance to a particular domain. Latent semantic analysis is used for the feature-vector representation of the various software domains. It exploits the fact that feature-vector codes can be seen as documents containing terms (the identifiers present in the components), so text-modeling methods that capture co-occurrence information in low-dimensional spaces can be used. Further, we devised a neuro-fuzzy hybrid inference system, which takes structural metric values as input and calculates the reusability of the software component. A decision tree algorithm is used to decide the initial set of fuzzy rules for the neuro-fuzzy system. The results obtained are convincing enough to propose the system for economical identification and retrieval of reusable software components.
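
As a rough illustration of the latent semantic analysis step, the sketch below (Python, scikit-learn) builds a term-document matrix from the identifiers of a few hypothetical components and projects it into a low-dimensional space with truncated SVD; the component "documents" and the number of dimensions are assumptions for illustration, not the paper's data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
# Each "document" is the bag of identifiers extracted from one component (hypothetical data).
components = [
    "queue enqueue dequeue buffer size",
    "stack push pop peek size",
    "socket connect send receive close",
]
# Term-document matrix, then truncated SVD to obtain low-dimensional LSA feature vectors.
X = CountVectorizer().fit_transform(components)
lsa = TruncatedSVD(n_components=2, random_state=0)
feature_vectors = lsa.fit_transform(X)  # one row per component, in LSA space
print(feature_vectors)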

Keywords: Clustering, ID3, LSA, Neuro-fuzzy System, SVD

1319 Hierarchically Modeling Cognition and Behavioral Problems of an Under-Represented Group

Authors: Zhidong Zhang, Zhi-Chao Zhang

Abstract:

This study examined mental health and behavioral problems in early adolescence with the Achenbach System of Empirically Based Assessment (ASEBA). The purpose of the study was to examine these problems in relation to students' background variables and academic performance. A stratified sampling method was used to collect data from 1975 participants. Multiple regression models and hierarchical regression models were applied to examine the relations between the background variables and internalizing problems, and between students' performance and internalizing problems. The results indicated that several background variables as predictors could significantly predict the anxious/depressed problem, and that reading and social study scores could also significantly predict it. However, class, as a hierarchical macro-level factor, did not show a significant effect. In brief, the majority of these models indicated that the background variables, behaviors, and academic performance were significantly related to the anxious/depressed problem.

Keywords: Behavioral problems, anxious/depression problems, empirical-based assessment, hierarchical modeling.

1318 Using Artificial Neural Networks for Optical Imaging of Fluorescent Biomarkers

Authors: K. A. Laptinskiy, S. A. Burikov, A. M. Vervald, S. A. Dolenko, T. A. Dolenko

Abstract:

The article presents the results of applying artificial neural networks to separate the fluorescent contribution of nanodiamonds, used as biomarkers, adsorbents, and drug carriers in biomedicine, from the fluorescent background of intrinsic biological fluorophores. The feasibility of solving this problem is demonstrated. The use of a neural network architecture made it possible to detect the fluorescence of nanodiamonds against the background autofluorescence of egg white with high accuracy, better than 3 µg/ml.

Keywords: Artificial neural networks, fluorescence, data aggregation.

1317 Study on Extraction of Ceric Oxide from Monazite Concentrate

Authors: Lwin Thuzar Shwe, Nwe Nwe Soe, Kay Thi Lwin

Abstract:

Cerium oxide is to be recovered from monazite, which contains about 27.35% CeO2. The principal objective of this study is to extract cerium oxide from monazite of the Moemeik Myitsone area. The treatment of monazite in this study involves three main steps: extraction of cerium hydroxide from monazite, solvent extraction of cerium hydroxide, and precipitation with oxalic acid followed by calcination of cerium oxalate.

Keywords: Calcination, Digestion, Precipitation, Solvent Extraction

1316 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to the various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model is more vulnerable than the more detailed bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels; for the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings, while the bridge model with the maximum clearance still produces the minimum pounding force effect.
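
A minimal sketch of the fragility-curve fitting step (Python, SciPy), assuming a lognormal fragility form and synthetic exceedance counts in place of the study's demand-to-capacity results:

import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit
pga = np.arange(0.1, 1.05, 0.05)                               # peak ground accelerations (g)
n_runs = 20                                                     # time-history runs per PGA level (assumed)
n_exceed = np.clip((40 * (pga - 0.25)).astype(int), 0, n_runs)  # synthetic exceedance counts
def fragility(im, theta, beta):
    # lognormal fragility: P(damage | IM) = Phi(ln(IM / theta) / beta)
    return norm.cdf(np.log(im / theta) / beta)
(theta, beta), _ = curve_fit(fragility, pga, n_exceed / n_runs,
                             p0=[0.5, 0.4], bounds=([0.01, 0.01], [5.0, 2.0]))
print(f"median capacity = {theta:.2f} g, dispersion beta = {beta:.2f}")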

Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.

1315 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. The customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behavior using only the POS database of one supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity between the customer value and distance data, and the limited number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most effective for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
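
For reference, Huff's gravity model assigns customer i a probability of patronising store j proportional to the store's attractiveness divided by a power of the distance. The sketch below (Python/NumPy) illustrates the formula with invented floor areas, distances, and exponents:

import numpy as np
def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    # attractiveness: shape (n_stores,); distances: shape (n_customers, n_stores)
    utility = attractiveness ** alpha / distances ** beta
    return utility / utility.sum(axis=1, keepdims=True)   # each row sums to 1
floor_area = np.array([2500.0, 1200.0, 1800.0])            # store attractiveness proxy (m^2), assumed
dist_km = np.array([[0.8, 2.1, 1.5],                       # distances from two customers' homes (km)
                    [2.4, 0.6, 1.9]])
print(huff_probabilities(floor_area, dist_km))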

Keywords: Customer value, Huff's Gravity Model, POS, retailer.

1314 Using Structural Equation Modeling in Causal Relationship Design for Balanced-Scorecards' Strategic Map

Authors: A. Saghaei, R. Ghasemi

Abstract:

Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations. It is a performance measurement system that translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain necessary causal relations must be established. To recognize these relations, experts usually rely on experience. It is also possible to utilize regression for the same purpose. Structural Equation Modeling (SEM), one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM, for the first time, to identify the relations between objectives in the strategy map, together with a test to measure the importance of those relations. In SEM, factor analysis and hypothesis testing are done in the same analysis. SEM is known to be better than other techniques at supporting analysis and reporting. Our approach provides a framework that permits experts to design the strategy map by applying a comprehensive and scientific method together with their experience. Therefore, this scheme is a more reliable method than the previously established ones.

Keywords: BSC, SEM, Strategy map.

1313 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images

Authors: Kuo-Cheng Liu

Abstract:

Effective estimation of just noticeable distortion (JND) for images is helpful for increasing the efficiency of a compression algorithm, in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model for measuring the base detection threshold of each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment, based on the coefficient variation within the block, is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded images under the specified viewing conditions. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.

Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.

1312 Analysis of Bio-Oil Produced by Pyrolysis of Coconut Shell

Authors: D. S. Fardhyanti, A. Damayanti

Abstract:

Biomass is being utilized as a source of new and renewable energy. One of the technologies for converting biomass into an energy source is pyrolysis, which converts biomass into more valuable products such as bio-oil. Bio-oil is a liquid produced by steam condensation from the pyrolysis of coconut shells. The constituents of the coconut shell, e.g., hemicellulose, cellulose, and lignin, are oxidized to phenolic compounds, the main components of the bio-oil. The phenolic compounds in bio-oil are corrosive; they cause various difficulties in the combustion system because of high viscosity, low calorific value, corrosiveness, and instability. Phenolic compounds are nevertheless very valuable: phenol is used as the main component in the manufacture of antiseptics, disinfectants (such as Lysol), and deodorizers. The experiments were carried out at atmospheric pressure in a pyrolysis reactor at temperatures ranging from 300 °C to 350 °C, with a heating rate of 10 °C/min and a holding time of 1 hour at the pyrolysis temperature. Gas chromatography-mass spectroscopy (GC-MS) was used to analyze the bio-oil components. The obtained bio-oil has a viscosity of 1.46 cP, a density of 1.50 g/cm3, a calorific value of 16.9 MJ/kg, and a molecular weight of 1996.64. The GC-MS analysis showed that the bio-oil contained phenol (40.01%), ethyl ester (37.60%), 2-methoxy-phenol (7.02%), furfural (5.45%), formic acid (4.02%), 1-hydroxy-2-butanone (3.89%), and 3-methyl-1,2-cyclopentanedione (2.01%).

Keywords: Bio-oil, pyrolysis, coconut shell, phenol, gas chromatography-mass spectroscopy.

1311 Predictive Models for Compressive Strength of High Performance Fly Ash Cement Concrete for Pavements

Authors: S. M. Gupta, Vanita Aggarwal, Som Nath Sachdeva

Abstract:

The work reported in this paper is an experimental study of High Performance Concrete (HPC) with superplasticizer, with the aim of developing models suitable for predicting the compressive strength of HPC mixes. In this study, the effect of varying proportions of fly ash (0% to 50%, in 10% increments) on the compressive strength of high performance concrete has been evaluated. The mix designs studied were M30, M40, and M50, to compare the effect of fly ash addition on the properties of these concrete mixes. Of the eighteen concrete mixes designed, three were conventional concretes for the three grades under discussion and fifteen were HPC mixes with varying percentages of fly ash. The concrete mix designs follow the Indian standard recommended guidelines. All the concrete mixes have been studied in terms of compressive strength at 7 days, 28 days, 90 days, and 365 days. All the materials used have been kept the same throughout the study to allow a consistent comparison of results. The models for compressive strength prediction have been developed using the Linear Regression (LR) method, Artificial Neural Networks (ANN), and Leave-One-Out Validation (LOOV).
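
A minimal sketch of the linear-regression-with-leave-one-out-validation idea named above (Python, scikit-learn); the fly-ash percentages and strengths are hypothetical placeholders, not the study's measurements:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
X = np.array([[0], [10], [20], [30], [40], [50]], dtype=float)   # fly ash replacement (%), illustrative
y = np.array([38.2, 40.1, 41.5, 39.8, 36.4, 33.0])               # 28-day strength (MPa), hypothetical
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("leave-one-out RMSE:", np.sqrt(np.mean((y - y_loo) ** 2)))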

Keywords: ANN, concrete mixes, compressive strength, fly ash, high performance concrete, linear regression, strength prediction models.

1310 Economic Loss due to Ganoderma Disease in Oil Palm

Authors: K. Assis, K. P. Chong, A. S. Idris, C. M. Ho

Abstract:

Oil palm (Elaeis guineensis) is considered the golden crop in Malaysia, but the oil palm industry in the country is now facing its most devastating disease, Ganoderma basal stem rot. The objective of this paper is to analyze the economic loss due to this disease. Three commercial oil palm sites were selected for collecting the data required for the economic analysis. The yield parameter used to measure the loss was the total weight of fresh fruit bunches over six months. The predictors include disease severity, change in disease severity, number of infected neighboring palms, age of palm, planting generation, topography, and first-order interaction variables. The yield loss estimation model was identified using a backward-elimination regression method. Diagnostic checking was conducted on the residuals of the best yield loss model. The mean absolute percentage error (MAPE) was used to measure the forecast performance of the model. The best yield loss model was then used to estimate the economic loss using the current monthly price of fresh fruit bunches at the mill gate.
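
A hedged sketch of a backward-elimination regression followed by a MAPE check (Python, statsmodels); the predictors mirror those named above, but the data frame, the coefficients, and the 0.05 retention threshold are illustrative assumptions:

import numpy as np
import pandas as pd
import statsmodels.api as sm
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "severity": rng.uniform(0, 4, 60),                # disease severity class
    "infected_neighbours": rng.integers(0, 8, 60),    # number of infected neighboring palms
    "palm_age": rng.uniform(5, 25, 60),               # age of palm (years)
})
df["ffb_weight"] = 180 - 20 * df["severity"] - 3 * df["infected_neighbours"] + rng.normal(0, 5, 60)
X, y = sm.add_constant(df.drop(columns="ffb_weight")), df["ffb_weight"]
while True:                                           # drop the least significant predictor until all p < 0.05
    pvals = sm.OLS(y, X).fit().pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    X = X.drop(columns=pvals.idxmax())
fit = sm.OLS(y, X).fit()
mape = np.mean(np.abs((y - fit.predict(X)) / y)) * 100
print(fit.params)
print(f"MAPE = {mape:.1f} %")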

Keywords: Ganoderma, oil palm, regression model, yield loss, economic loss.

1309 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Uncertainty quantification of the nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may rely on tolerance levels determined by traditional deterministic models, which produce acceptable results for burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point-kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. The agreement between the computational simulations and the experimental results was acceptable. The experiments use pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
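
A hedged sketch of a multivariate logistic regression over failure predictors like those listed above (Python, scikit-learn); the synthetic data, the failure rule, and the units are assumptions for illustration only:

import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(20, 80, n),    # fuel burnup (GWd/MTU)
    rng.uniform(30, 200, n),   # applied peak power (cal/g)
    rng.uniform(5, 80, n),     # pulse width (ms)
    rng.uniform(5, 120, n),    # oxide layer thickness (micrometres)
])
failed = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 3] + rng.normal(0, 1, n)) > 4.5  # synthetic label
clf = LogisticRegression(max_iter=1000).fit(X, failed)
print("P(failure) for one rod:", clf.predict_proba([[65.0, 120.0, 20.0, 60.0]])[0, 1])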

Keywords: Logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation.

1308 Feature Vector Fusion for Image Based Human Age Estimation

Authors: D. Karthikeyan, G. Balakrishnan

Abstract:

Human faces, as important visual signals, convey a significant amount of nonverbal information in human-to-human communication. Age, specifically, is among the most significant of these properties. Human age estimation using facial image analysis is an automated method that has numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is utilized to investigate age prediction. The paper describes feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), which can be utilized in a robust face recognition framework. It applies the GLCM operation to extract the face's texture features and Active Appearance Models (AAMs) to assess the human age from the image. A fused feature technique and SVR with GA optimization are proposed to reduce the error in age estimation.
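
A minimal sketch of GLCM texture features feeding an SVR age regressor (Python, scikit-image and scikit-learn); the random "face" crops, age labels, and chosen GLCM properties are assumptions, and a real pipeline would also fuse AAM shape features as described above:

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVR
def glcm_features(gray_face):
    glcm = graycomatrix(gray_face, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])
rng = np.random.default_rng(2)
faces = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in face crops
ages = rng.uniform(10, 70, 40)                                   # stand-in age labels
X = np.array([glcm_features(f) for f in faces])
model = SVR(kernel="rbf", C=10.0).fit(X, ages)
print("predicted age for the first face:", model.predict(X[:1])[0])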

Keywords: Support vector regression, feature extraction, gray level co-occurrence matrix, active appearance models.

1307 Modeling and Simulation of In-vessel Core Handling in PFBR Operator Training Simulator

Authors: Bindu Sankar, Jaideep Chakraborty, Rashmi Nawlakha, A. Venkatesan, S. Raghupathy, T. Jayanthi, S.A.V. Satya Murty

Abstract:

The component handling system is one of the important subsystems of the Prototype Fast Breeder Reactor (PFBR), used for fuel handling. The core handling system is in turn a subsystem of the component handling system and consists of in-vessel and ex-vessel subassembly handling. In-vessel core handling involves transfer arm, large rotatable plug, and small rotatable plug operations. Modeling and simulation of in-vessel core handling is part of the development of the Prototype Fast Breeder Reactor Operator Training Simulator. This paper deals with the simulation and modeling of the transfer arm, large rotatable plug, and small rotatable plug operations needed for in-vessel core handling. The process models were developed in house using platform-independent Cµ code with OpenGL (Open Graphics Library). The control logic models and the virtual panel were modeled using a simulation tool.

Keywords: Animation, Core Handling System, Prototype Fast Breeder Reactor, Simulator

1306 Prediction of Compressive Strength Using Artificial Neural Network

Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal

Abstract:

Structures are a combination of various load-carrying members which transfer the loads from the superstructure to the foundation safely. At the design stage, the loading of the structure is defined and appropriate material choices are made based upon their properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, such as environmental exposure and deformation caused by unpredictable external loads. Hence, various techniques are used to predict the strength of the materials in a structure. Among these, non-destructive techniques (NDT) can predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an Artificial Neural Network (ANN). The predicted strength was compared with the experimentally obtained compressive strength of concrete, and equations were developed for different models. A good correlation has been obtained between the strength predicted by these models and the experimental values. Further, a correlation has been developed for strength prediction using two NDT techniques together by regression analysis. It was found that the percentage error of the predicted strength is reduced when combined techniques are used in place of single techniques.

Keywords: Rebound, ultra-sonic pulse, penetration, ANN, NDT, regression.

1305 Experimental Analysis of the Plate-on-Tube Evaporator on a Domestic Refrigerator’s Performance

Authors: Mert Tosun, Tuğba Tosun

Abstract:

The evaporator is the most important component in the refrigeration system, since it enables the refrigerant to draw heat from the desired environment, i.e. the refrigerated space. Studies are being conducted on this component, which strongly affects the performance of the system, as energy-efficient products become important. This study was designed to enhance the effectiveness of the evaporator in the refrigeration cycle of a domestic refrigerator by adjusting the capillary tube length, refrigerant charge, and evaporator pipe diameter to reduce energy consumption. The experiments were conducted under identical thermal and ambient conditions. Experimental data were analysed using the Design of Experiments (DOE) technique, a six-sigma method, to determine the effects of the parameters. As a result, the most important parameters affecting the evaporator performance among those selected were found to be the refrigerant charge and the pipe diameter. Minimum energy consumption was obtained with a 6 mm pipe diameter and 16 g of refrigerant. It has also been noted that the overall consumption of the experimental sample decreased by 16.6% with respect to the reference system, which has a 7 mm pipe diameter and 18 g of refrigerant.

Keywords: Heat exchanger, refrigerator, design of experiment, energy consumption.

1304 Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA

Authors: Fumito Yoshikawa, Takumi Kobayashi, Kenji Watanabe, Nobuyuki Otsu

Abstract:

Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of sport activities. Game analysis of badminton, as with other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of shift-invariance and additivity, and requires no prior knowledge. The specific scenes, such as the serve, are then detected by multiple regression (MRA) from the CHLAC features. To demonstrate the effectiveness of our method, an experiment was conducted on video sequences of five badminton matches captured by a single ceiling camera. The averaged precision and recall rates for serve scene detection were 95.1% and 96.3%, respectively.

Keywords: Badminton, CHLAC, MRA, Video-based motion detection

1303 Effects of Video Games and Online Chat on Mathematics Performance in High School: An Approach of Multivariate Data Analysis

Authors: Lina Wu, Wenyi Lu, Ye Li

Abstract:

Taking heavy video game playing among boys and intensive online chatting among girls as emblematic of current adolescent culture, this data analysis project verifies the displacement effect of these activities on deteriorating mathematics performance. To evaluate the correlation and regression coefficients between the factor of playing video games or chatting online and mathematics performance, compared with other factors, we use multivariate analysis techniques and take gender difference into account. We find that the most important reason for the negative sign of the displacement effect on mathematics performance is students' poor academic background. The statistical analysis methods in this project could be applied to study internet users' academic performance from high school through college.

Keywords: Correlation coefficients, displacement effect, gender difference, multivariate analysis technique, regression coefficients.

1302 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand heat transfer devices that are novel, compact, simple in design, inexpensive, and effective. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid, and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). The filling ratio and heat input are considered as input parameters, while the thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with a MARD within ±1.81%.
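
A minimal sketch of a generalized regression network in its kernel-averaging (Nadaraya-Watson) form, with a spread constant, together with the MARD metric (Python/NumPy); the filling-ratio, heat-input, and thermal-resistance values are invented placeholders:

import numpy as np
def grnn_predict(X_train, y_train, X_query, spread):
    # generalized regression output: Gaussian-kernel weighted average of training targets
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return (w @ y_train) / w.sum(axis=1)
def mard(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))
# Inputs: filling ratio (%) and heat input (W); target: thermal resistance (K/W); hypothetical values.
X = np.array([[40, 10], [40, 30], [50, 20], [60, 10], [60, 30], [70, 20]], dtype=float)
R = np.array([1.9, 1.1, 1.3, 2.1, 1.2, 1.6])
R_hat = grnn_predict(X, R, X, spread=4.8)   # spread constant 4.8, as reported in the abstract
print(f"MARD = {mard(R, R_hat):.2f} %")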

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.

1301 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models

Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo

Abstract:

There are many situations in which human activities have significant effects on the environment, and damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering linear, exponential, logarithmic, power, and second-degree polynomial models, to analyze through the coefficient of determination (R²) which model best fits the behavior of chlorodifluoromethane (HCFC-142b) concentrations, in parts per trillion, between 1992 and 2018, and to estimate future concentrations 5 and 10 periods ahead, i.e. the concentration of this pollutant in the years 2023 and 2028, for each of the fits. A total of 809 observations of the HCFC-142b concentration at one of the monitoring stations for ozone-depleting precursor gases during the study period were selected and, using these data, the statistical software Excel was used to make the scatter plots for each of the fitted models. It was observed that the logarithmic fit was the model that best fit the data set, since, besides having a significant R², its fitted curve was compatible with the natural trend curve of the phenomenon.
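
A hedged sketch of the model-comparison step (Python, SciPy) over a subset of the candidate models, fitting each by least squares and ranking by R²; the HCFC-142b series below is synthetic, not the 809 station observations:

import numpy as np
from scipy.optimize import curve_fit
t = np.arange(1, 28, dtype=float)          # periods since 1992 (1992..2018)
y = 6.0 + 5.5 * np.log(t) + np.random.default_rng(3).normal(0, 0.3, t.size)   # concentration (ppt), synthetic
def r2(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
models = {
    "linear":      lambda t, a, b: a + b * t,
    "logarithmic": lambda t, a, b: a + b * np.log(t),
    "power":       lambda t, a, b: a * t ** b,
    "quadratic":   lambda t, a, b, c: a + b * t + c * t ** 2,
}
for name, f in models.items():
    p, _ = curve_fit(f, t, y, maxfev=10000)
    print(f"{name:12s} R^2 = {r2(y, f(t, *p)):.4f}")
p, _ = curve_fit(models["logarithmic"], t, y)
print("estimated concentration in 2028 (t = 37):", models["logarithmic"](37.0, *p))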

Keywords: Chlorodifluoromethane (HCFC-142b), ozone (O3), least squares method, regression models.

1300 Correlations between Cleaning Frequency of Reservoir and Water Tower and Parameters of Water Quality

Authors: Chen Bi-Hsiang, Yang Hung-Wen, Lou Jie-Chung, Han Jia-Yun

Abstract:

This study sampled and analyzed the water quality in water reservoirs and water towers installed in two kinds of residential buildings and in school facilities. Water quality data were collected for correlation analysis against the sanitization frequency of the water reservoirs, which was obtained by asking building managers about the inspection charts recorded for the reservoir equipment. The statistical software package SPSS was applied to the two groups of data (cleaning frequency and water quality) for regression analysis, to determine the optimal sanitization frequency. The correlation coefficient (R) in this paper represents the degree of correlation, with values of R ranging from +1 to -1. After investigating three categories of drinking water users, this study found that the sanitization frequency of the water reservoir significantly influenced the quality of the drinking water: a higher sanitization frequency (more than four times per year) implied a higher drinking water quality. The results indicated that water reservoirs and water towers should be sanitized at least twice annually to achieve drinking water safety.
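
For illustration, the correlation step can be sketched as a Pearson correlation between cleaning frequency and a single water-quality parameter (Python, SciPy); the paired readings below are invented, and turbidity stands in for whichever parameters the study actually measured:

import numpy as np
from scipy import stats
cleanings_per_year = np.array([1, 1, 2, 2, 3, 4, 4, 6])            # sanitization frequency
turbidity_ntu = np.array([1.8, 1.5, 1.2, 1.3, 0.9, 0.7, 0.8, 0.5])  # hypothetical readings
r, p = stats.pearsonr(cleanings_per_year, turbidity_ntu)
print(f"R = {r:.2f} (p = {p:.3f})")   # R near -1: more frequent cleaning, lower turbidity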

Keywords: cleaning frequency of sanitization, parameters of water quality, regression analysis, water reservoir & water tower

1299 Capacity of Anchors in Structural Connections

Authors: T. Cornelius, G. Secilmis

Abstract:

When dealing with safety in structures, the connections between structural components play an important role. The robustness of a structure as a whole depends both on the load-bearing capacity of the structural components and on the structure's capacity to resist total failure, even when a local failure occurs in a component or in a connection between components. To avoid progressive collapse it is necessary to be able to carry out a design for the connections. In structures built with prefabricated components, a connection may be executed with anchors to withstand local failure of the connection. For the design of these anchors, a model is developed for connections in structures made of prefabricated autoclaved aerated concrete components. The design model takes into account the effect of anchors placed close to an edge, which may result in splitting failure. Further, the model is developed to consider the effects of reinforcement diameter and anchor depth. The model is analytical and theoretically derived, assuming a static equilibrium stress distribution along the anchor. The theory is compared to laboratory tests covering the relevant parameters, and the model is refined and theoretically argued by analyzing the observed test results. The method presented can be used to improve safety in structures or even to optimize the design of the connections.

Keywords: Robustness, anchors, connections, aircrete, prefabricated components.

1298 Resting-State Functional Connectivity Analysis Using an Independent Component Approach

Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi

Abstract:

Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI, and investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients versus healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis yielded abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed an abnormal cluster in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001 for the analysis. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
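
A toy sketch of the ICA step (Python, scikit-learn's FastICA) on simulated mixed time courses; real group ICA for rsfMRI operates on 4-D volumes with dedicated tooling, so this only illustrates the source-separation idea:

import numpy as np
from sklearn.decomposition import FastICA
rng = np.random.default_rng(4)
t = np.linspace(0, 200, 1000)
sources = np.c_[np.sin(0.2 * t), np.sign(np.sin(0.09 * t)), rng.normal(0, 1, t.size)]  # latent signals
observed = sources @ rng.normal(size=(3, 3)).T        # mixed "voxel" time courses
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)               # estimated independent component time courses
print(recovered.shape)                                # (1000, 3)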

Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.

1297 Modeling Ambient Carbon Monoxide Pollutant Due to Road Traffic

Authors: Anjaneyulu M.V.L.R., Harikrishna M., Chenchuobulu S.

Abstract:

Rapid urbanization, industrialization, and population growth have led to an increase in the number of automobiles that cause air pollution. It is estimated that road traffic contributes 60% of air pollution in urban areas. A case-by-case assessment is required to predict the air quality in urban situations, so as to evolve traffic management measures that maintain air quality levels within tolerable limits. Calicut city in the state of Kerala, India, was chosen as the study area. Carbon monoxide (CO) concentrations were monitored at 15 links in Calicut city, and air quality performance was evaluated over each link. The CO concentration values were compared with the National Ambient Air Quality Standards (NAAQS), and the CO values were predicted by using the CALINE4, IITLS, and linear regression models. The study revealed that the linear regression model performs better than the CALINE4 and IITLS models. The possible association between CO concentration and traffic parameters such as traffic flow, type of vehicle, and traffic stream speed was also evaluated.

Keywords: CO pollution, Modelling, Traffic stream parameters.

1296 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated by simulation; noise is then added to the data distributions to create differently disordered time series and to evaluate the algorithm's locality prediction of nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter, is explained for cases where the data are widely spread or where few data points are available.
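
As a simplified stand-in for LWPR's local models, the sketch below (Python/NumPy) fits a kernel-weighted local linear regression at each query point; full LWPR adds incremental updates and local projection directions, and the bandwidth and data here are assumptions:

import numpy as np
def loclin_predict(x_query, X, y, bandwidth=0.3):
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((X - xq) / bandwidth) ** 2)       # Gaussian locality weights
        sw = np.sqrt(w)
        A = np.c_[np.ones_like(X), X - xq]                   # local linear basis centred at xq
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        preds.append(beta[0])                                # local intercept = prediction at xq
    return np.array(preds)
rng = np.random.default_rng(5)
X = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(2 * X) + 0.1 * rng.normal(size=X.size)
print(loclin_predict([-1.0, 0.0, 1.0], X, y))                # roughly sin(-2), sin(0), sin(2)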

Keywords: Local nonlinear estimation, LWPR algorithm, Online training method.

1295 Study on Extraction of Lanthanum Oxide from Monazite Concentrate

Authors: Nwe Nwe Soe, Lwin Thuzar Shwe, Kay Thi Lwin

Abstract:

Lanthanum oxide is to be recovered from monazite, which contains about 13.44% lanthanum oxide. The principal objective of this study is to extract lanthanum oxide from monazite of the Moemeik Myitsone area. The treatment of monazite in this study involves three main steps: extraction of lanthanum hydroxide from monazite using caustic soda, digestion with nitric acid and precipitation with ammonium hydroxide, and calcination of lanthanum oxalate to lanthanum oxide.

Keywords: Calcination, Digestion, Precipitation.

1294 Optimization of Turbocharged Diesel Engines

Authors: Ebrahim Safarian, Kadir Bilen, Akif Ceviz

Abstract:

The turbocharger and turbocharging have become inherent components of diesel engines, through which critical parameters of such engines, such as the brake specific fuel consumption (BSFC) or thermal efficiency, fuel consumption, brake mean effective pressure (BMEP), power density output, and emission levels, have been improved extensively. In general, the turbocharger can be considered the most complex component of a diesel engine, because it closely links the turbomachinery concepts of turbines and compressors to the thermodynamic fundamentals of internal combustion engines and to the stress analysis of all components. In this paper, a wastegate for a conventional single-stage radial turbine is investigated by considering turbocharger operating constraints and engine operating conditions, without any detailed design of the turbine or the compressor. The wastegate opening, which ranges between a fully open and a fully closed valve, is determined by limiting the compressor boost pressure ratio. An optimum point with respect to the above items is sought using three linked meanline modeling programs, the Turbomatch®, Compal®, and Rital® modules of Concepts NREC®.

Keywords: Turbocharger, Wastegate, diesel engine, CONCEPT NREC programs.

1293 Level of Concentration in Banking Markets and Length of EU Membership

Authors: Ivan Pavic, Fran Galetic, Tomislava Pavic Kramaric

Abstract:

The purpose of this article is to analyze the degree of concentration in the banking markets of EU member states and to determine the impact of the length of EU membership on the degree of concentration. To that end, several analyses were conducted: a panel analysis, the calculation of correlation coefficients, and a regression analysis of the impact of the length of EU membership on the degree of concentration. The panel analysis was conducted to determine whether there is a similar concentration trend in three groups of countries: countries with low, moderate, and high levels of concentration. The panel analysis showed that in EU countries with a moderate level of concentration, the level of concentration decreases. The correlation calculation showed that, to some extent and alongside other influential factors, the length of EU membership negatively affects the concentration of the banking market. The regression analysis of the influence of the length of EU membership on the level of concentration in the banking sector of a particular country reveals a negative effect of the length of EU membership on market concentration, although it is not a significantly influential variable.

Keywords: Banking sector, concentration, EU

1292 Integrated Flavor Sensor Using Microbead Array

Authors: Ziba Omidi, Min-Ki Kim

Abstract:

This research presents the design, fabrication, and application of a flavor sensor for an integrated electronic tongue and electronic nose that allows rapid characterization of multi-component mixtures in a solution. The odor gas and the liquid are separated using a hydrophobic porous membrane in a microfluidic channel. The sensor uses an array composed of microbeads in micromachined cavities localized on a silicon wafer. Sensing occurs via colorimetric and fluorescence changes to receptors and indicator molecules that are attached to termination sites on the polymeric microbeads. As a result, the sensor array system enables simultaneous and near-real-time analyses using small sample and reagent volumes, with the capacity to incorporate significant redundancy. One of the key parts of the system is a passive pump driven only by capillary force. The hydrophilic surface of the fluidic structure draws the sample into the sensor array without any moving mechanical parts. Since there are no moving mechanical components in the structure, the fluidic structure can be compact and its fabrication simple compared to devices that include active microfluidic components. These factors should make the proposed system inexpensive to mass-produce, portable, and compatible with biomedical applications.

Keywords: Optical Sensor, Semiconductor manufacturing, Smell sensor, Taste sensor.

1291 Statistical (Radio) Path Loss Modelling: For RF Propagations within localized Indoor and Outdoor Environments of the Academic Building of INTI University College (Laureate International Universities)

Authors: Emmanuel O.O. Ojakominor, Tian F. Lai

Abstract:

A handful of propagation textbooks that discuss radio frequency (RF) propagation models merely list the models and perhaps discuss them rather briefly; this may well be frustrating for the potential first-time modeller who has no idea how these models were derived. This paper provides an introduction to modelling the radio channel. For the modelling practice discussed here, signal strength field measurements had to be conducted beforehand (this was done at 469 MHz); to be precise, the paper primarily concerns empirical/statistical modelling of the radio channel, and thus presents results obtained from empirically modelling the environments in question. On the whole, this paper proposes three propagation models, corresponding to the three experimented environments. The models have been derived by making extensive use of statistical measures: the first two models were derived via simple linear regression analysis, whereas the third was obtained using multiple regression analysis (with five predictors). Additionally, as implied by the title of this paper, both indoor and outdoor environments have been examined; however, two of the environments are neither entirely indoor nor entirely outdoor, while the other environment is completely indoor.
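
A minimal sketch of the simple-linear-regression flavour of such modelling, fitting the log-distance path loss form PL(d) = PL(d0) + 10·n·log10(d/d0) plus a shadowing term (Python/NumPy); the measurement pairs below are invented, not the 469 MHz survey data:

import numpy as np
d0 = 1.0                                                         # reference distance (m)
d = np.array([1, 2, 4, 8, 15, 25, 40, 60], dtype=float)          # Tx-Rx separations (m), invented
pl = np.array([40.1, 46.8, 52.3, 58.9, 64.2, 69.7, 74.5, 78.8])  # measured path loss (dB), invented
x = 10.0 * np.log10(d / d0)
n, pl_d0 = np.polyfit(x, pl, 1)               # slope = path loss exponent n, intercept = PL(d0)
sigma = np.std(pl - (pl_d0 + n * x), ddof=2)  # shadowing standard deviation (dB)
print(f"path loss exponent n = {n:.2f}, PL(d0) = {pl_d0:.1f} dB, sigma = {sigma:.1f} dB")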

Keywords: RF propagation, radio channel modelling, statistical methods.
