Search results for: orthogonal regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1008

528 An Optimization of the New Die Design of Sheet Hydroforming by Taguchi Method

Authors: M. Hosseinzadeh, S. A. Zamani, A. Taheri

Abstract:

During the last few years, several sheet hydroforming processes have been introduced. Despite the advantages of these methods, they have some limitations. The two main processes are standard hydroforming and hydromechanical deep drawing. A new sheet hydroforming die set was proposed that has the advantages of both processes and eliminates their limitations. In this method, a polyurethane plate was used as a part of the die set to control the blank holder force. This paper outlines the Taguchi optimization methodology, which is applied to optimize the effective parameters in forming cylindrical cups with the new sheet hydroforming die set. The process parameters evaluated in this research are polyurethane hardness, polyurethane thickness, forming pressure path and polyurethane hole diameter. A design of experiments based on Taguchi's L9 orthogonal array was used, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the forming pressure. The analysis of the results showed that the optimal combination for low forming pressure is a harder polyurethane, a larger polyurethane hole diameter and a thinner polyurethane plate. Finally, a confirmation test was performed with the optimal combination of parameters, showing that the Taguchi method is suitable for optimizing this process.

Keywords: Sheet Hydroforming, Optimization, Taguchi Method
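
As a concrete illustration of the design described above, here is a minimal Python sketch of an L9 orthogonal array paired with smaller-the-better signal-to-noise ratios. The factor names follow the abstract, but the forming-pressure values and the column-to-factor assignment are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: 9 runs, 4 factors at 3 levels (1-3).
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical forming-pressure responses (MPa) for the 9 runs.
y = np.array([32.0, 28.5, 25.1, 30.2, 26.8, 29.4, 24.3, 27.7, 26.0])

# Smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)),
# here with one observation per run.
sn = -10 * np.log10(y ** 2)

factors = ["hardness", "thickness", "pressure path", "hole diameter"]
for j, name in enumerate(factors):
    # Mean S/N per level; the level with the highest mean S/N is optimal.
    means = [sn[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: level means {np.round(means, 2)}, best level {best}")
```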

527 The Effect of Randomly Distributed Polypropylene Fibers, Borogypsum, Fly Ash and Cement on Freezing-Thawing Durability of a Fine-Grained Soil

Authors: Ahmet Şahin Zaimoğlu

Abstract:

A number of studies have been conducted recently to investigate the influence of randomly oriented fibers on some engineering properties of cohesive and cohesionless soils. However, few studies have been carried out on the freezing-thawing behavior of fine-grained soils modified with discrete fiber inclusions and additive materials. This experimental study was performed to investigate the effect of randomly distributed polypropylene fibers (PP) and some additive materials [e.g., borogypsum (BG), fly ash (FA) and cement (C)] on the freezing-thawing durability (mass losses) of a fine-grained soil over 6, 12, and 18 cycles. The Taguchi method was applied to the experiments, and a standard L9 orthogonal array (OA) with four factors at three levels was chosen. A series of freezing-thawing tests was conducted on each specimen. 0-20% BG, 0-20% FA, 0-0.25% PP and 0-3% C by total dry weight of the mixture were used in the preparation of specimens. Experimental results showed that the most effective materials for the freezing-thawing durability (mass losses) of the samples were borogypsum and fly ash. The values of mass losses for 6, 12 and 18 cycles in optimum conditions were 16.1%, 5.1% and 3.6%, respectively.

Keywords: Additive materials, Freezing-thawing, Optimization, Reinforced soil.

526 Indoor Air Pollution of the Flexographic Printing Environment

Authors: Jelena S. Kiurski, Vesna S. Kecić, Snežana M. Aksentijević

Abstract:

The identification and evaluation of organic and inorganic pollutants were performed in a flexographic facility in Novi Sad, Serbia. Air samples were collected and analyzed in situ, during 4-hour working periods at five sampling points, using a mobile gas chromatograph and an ozonometer during the printing of collagen casing. Experimental results showed that the concentrations of isopropyl alcohol, acetone, total volatile organic compounds and ozone varied over the sampling times. The highest average concentrations, 94.80 ppm and 102.57 ppm, were reached 200 minutes after the start of production for isopropyl alcohol and total volatile organic compounds, respectively. The mutual dependences between the target hazardous substances and microclimate parameters were confirmed using a multiple linear regression model in the software package STATISTICA 10. The multiple coefficients of determination obtained for ozone and acetone (0.507 and 0.589) with the microclimate parameters indicated a moderate correlation between the observed variables. However, a strong positive correlation was obtained for isopropyl alcohol and total volatile organic compounds (0.760 and 0.852) with the microclimate parameters. Values of the F parameter higher than F-critical for all examined dependences indicated a statistically significant relationship between the concentration levels of the target pollutants and the microclimate parameters. Given that the microclimate parameters significantly affect the emission of the investigated gases, the application of eco-friendly materials in the production process is a necessity.

Keywords: Flexographic printing, indoor air, multiple regression analysis, pollution emission.
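
A minimal sketch of the kind of multiple linear regression analysis the abstract reports, using statsmodels; the predictor and response series here are simulated stand-ins (temperature and relative humidity as microclimate parameters, isopropyl alcohol as the pollutant), not the measured Novi Sad data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the measured series: microclimate parameters
# (temperature, relative humidity) as predictors, isopropyl alcohol
# concentration as the response.
n = 48  # e.g., readings accumulated over the 4-hour shifts
temp = rng.normal(24, 2, n)
rh = rng.normal(45, 5, n)
ipa = 2.0 * temp + 0.8 * rh + rng.normal(0, 4, n)

X = sm.add_constant(np.column_stack([temp, rh]))
model = sm.OLS(ipa, X).fit()

# Multiple coefficient of determination and overall F-test, the quantities
# the abstract reports (e.g., R^2 of 0.760-0.852 for IPA and TVOC).
print(f"R^2 = {model.rsquared:.3f}")
print(f"F = {model.fvalue:.2f}, p = {model.f_pvalue:.4f}")
```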

525 Thermal Analysis of Circular Pin-fin with Rectangular Slot at the Center by Forced Convection

Authors: Kavita H. Dhanawade, Hanamant S. Dhanawade, Ajay Kashikar, Shweta Matey, Mahesh Bhadane, Sunny Sarraf

Abstract:

Extended surfaces are commonly used in practice to enhance heat transfer. Most engineering problems require high-performance heat transfer components with low weight and volume, accommodating shapes, low cost and high reliability, depending on the industrial application. This paper reports an experimental analysis investigating heat transfer enhancement by forced convection using different sizes of pin-fins with rectangular slots at the center. The cross-sectional area of the oblong duct was 200 mm x 80 mm. The data used in the performance analysis were obtained experimentally for aluminum at 200 W heat input, with velocity varying from 1 m/s to 5 m/s. Using the Taguchi experimental design method, the optimum design parameters and their levels were analyzed. The Nusselt number and friction factor were considered as performance characteristic parameters. An L9 (3³) orthogonal array was designated as the experimental plan. The optimum results were confirmed experimentally. It is observed that pin-fins with slots of different sizes have a beneficial impact on the Nusselt number.

Keywords: Heat transfer coefficient, Nusselt Number, pin-fin, forced convection.

524 Current-Mode Resistorless SIMO Universal Filter and Four-Phase Quadrature Oscillator

Authors: Jie Jin

Abstract:

In this paper, a new CMOS current-mode single-input multiple-output (SIMO) universal filter and a quadrature oscillator with a similar circuit are proposed. The circuits consist of only three current differencing transconductance amplifiers (CDTAs) and two grounded capacitors; they are therefore resistorless and suitable for monolithic integration. The universal filter uses a minimum of CDTAs and passive elements to realize SIMO-type low-pass (LP), high-pass (HP), band-pass (BP), band-stop (BS) and all-pass (AP) filter functions simultaneously, without any component matching conditions. The angular frequency (ω0) and the quality factor (Q) of the proposed filter can be electronically controlled and tuned orthogonally. With some modifications of the filter, a new current-mode four-phase quadrature oscillator (QO) can be obtained easily. The condition of oscillation (CO) and frequency of oscillation (FO) of the QO can be controlled electronically and independently through the bias currents of the CDTAs, making it suitable as a variable-frequency oscillator. Moreover, all the passive and active sensitivities of the circuits are low. SPICE simulation results are included to confirm the theory.

Keywords: Universal Filter, Quadrature Oscillator, Current mode, Current differencing transconductance amplifiers.

523 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart by executing three kinds of typical SQL query tasks. Tests were conducted with respect to CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance empirical formula, typical SQL query tasks.

522 Optimal Channel Equalization for MIMO Time-Varying Channels

Authors: Ehab F. Badran, Guoxiang Gu

Abstract:

We consider optimal channel equalization for MIMO (multi-input/multi-output) time-varying channels in the sense of MMSE (minimum mean-squared-error), where the observation noise can be non-stationary. We show that all ZF (zero-forcing) receivers can be parameterized in an affine form which eliminates completely the ISI (inter-symbol-interference), and optimal channel equalizers can be designed through minimization of the MSE (mean-squared-error) between the detected signals and the transmitted signals, among all ZF receivers. We demonstrate that the optimal channel equalizer is a modified Kalman filter, and show that under the AWGN (additive white Gaussian noise) assumption, the proposed optimal channel equalizer minimizes the BER (bit error rate) among all possible ZF receivers. Our results are applicable to optimal channel equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers, OFDM (orthogonal frequency division multiplexing), and DS (direct sequence) CDMA (code division multiple access) wireless data communication systems. A design algorithm for optimal channel equalization is developed, and several simulation examples are worked out to illustrate the proposed design algorithm.

Keywords: Channel equalization, Kalman filtering, Time-varying systems.
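
For orientation, a minimal sketch of the MMSE building block the abstract builds on, reduced to a single-input/single-output stationary FIR channel with AWGN; the paper's actual equalizer is a modified Kalman filter for MIMO time-varying channels, so this is only the simplest special case, with hypothetical channel taps and parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known FIR channel introducing ISI, plus AWGN.
h = np.array([1.0, 0.5, 0.2])
sigma2 = 0.1
N = 2000
s = rng.choice([-1.0, 1.0], N)                       # BPSK symbols
x = np.convolve(s, h)[:N] + rng.normal(0, np.sqrt(sigma2), N)

# MMSE-optimal FIR equalizer of length L with decision delay d:
# w = R_xx^{-1} p, where R_xx = E[x x^T] and p = E[x * s_{n-d}].
L, d = 11, 5
X = np.array([x[n - L + 1:n + 1][::-1] for n in range(L - 1, N)])
target = s[L - 1 - d:N - d]
Rxx = X.T @ X / len(X)
p = X.T @ target / len(X)
w = np.linalg.solve(Rxx, p)

y = X @ w
ber = np.mean(np.sign(y) != target)
print(f"BER after MMSE equalization: {ber:.4f}")
```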

521 Child Homicide Victimization and Community Context: A Research Note

Authors: Bohsiu Wu

Abstract:

Among serious crimes, child homicide is a rather rare event. However, the killing of children stirs up a special type of emotion in society that makes other criminal acts pale in comparison. This study examines the relevance of three possible community-level explanations for child homicide: social deprivation, female empowerment, and social isolation. The social deprivation hypothesis posits that child homicide results from a lack of resources in communities. The female empowerment hypothesis argues that a higher female status translates into a higher capability to prevent child homicide. Finally, the social isolation hypothesis regards child homicide as a result of a lack of social connectivity. Child homicide data, aggregated by US postal ZIP codes in California from 1990 to 1999, were analyzed with a negative binomial regression. The results of the negative binomial analysis demonstrate that social deprivation is the most salient and consistent predictor among all factors in explaining child homicide victimization at the ZIP-code level. Both social isolation and female labor force participation are weak predictors of child homicide victimization across communities. Further, results from the negative binomial regression show that it is the communities with a higher, not lower, degree of female labor force participation that are associated with a higher count of child homicide. It is possible that poor communities with a higher level of female employment have a lesser capacity to provide the necessary care and protection for children. Policies aimed at reducing social deprivation and strengthening female empowerment have the potential to reduce child homicide in the community.

Keywords: Child homicide, deprivation, empowerment, isolation.
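
A minimal sketch of a negative binomial regression of the kind described, using statsmodels; the ZIP-code-level covariates and counts below are simulated placeholders standing in for the three hypotheses, not the California data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500  # hypothetical ZIP-code areas

# Hypothetical community-level covariates mirroring the three hypotheses:
# deprivation, female labor force participation, and isolation.
df = pd.DataFrame({
    "deprivation": rng.normal(0, 1, n),
    "female_lfp": rng.normal(0, 1, n),
    "isolation": rng.normal(0, 1, n),
})
mu = np.exp(-2.0 + 0.6 * df.deprivation + 0.2 * df.female_lfp)
df["homicides"] = rng.poisson(mu)  # stand-in counts with many zeros

# Negative binomial regression, suited to over-dispersed count outcomes.
model = smf.glm("homicides ~ deprivation + female_lfp + isolation",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(model.summary().tables[1])
```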

520 Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility

Authors: Etienne Provencal, David L. St-Pierre

Abstract:

A facility operating only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada under the name of Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced, so that a comprehensive model can help improve the overall performance of the business. The EGM individual performance model contains eight variables under study (GameTitle, ProgressiveJackpot, BonusRound, MinimumCoin-in, MaximumCoin-in, Denomination, SlantTop and Position). Using data from Quebec City's SdjQ, a linear regression analysis explains 90.80% of the variance in EGM performance. Moreover, the results show behavior slightly different from that of a casino. The addition of GameTitle as a factor to predict EGM performance is one of the main contributions of this paper. The choice of the game (GameTitle) is very important. Games in better positions do not perform significantly better than games located elsewhere on the gaming floor. Progressive jackpots have a positive and significant effect on the individual performance of EGMs. The impact of BonusRound on the dependent variable is significant but negative. The effect of Denomination is significant but weakly negative. As expected, the language of an EGM does not impact its individual performance. This paper highlights possible improvements by indicating which features perform well. Recommendations are given to increase the performance of the EGMs.

Keywords: EGM, linear regression, model prediction, slot operations.

519 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: A. Appe, B. Poluparthi, L. Kasivajjula, U. Mv, S. Bagadi, P. Modi, A. Singh, H. Gunupudi, S. Troiano, J. Paul, J. Stovall, J. Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework which aids strategists in formulating key decisions to improve the facility's market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State and about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, and a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP (SHapley Additive exPlanations), is leveraged to quantify the relative importance of features impacting the market share. Typical techniques in the literature for quantifying the degree of competitiveness among facilities use an empirical method to calculate a competitive factor that interprets the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict the market share. To identify key drivers of market share at an overall level, permutation feature importance of the attributes is calculated. For relative quantification of features at a facility level, SHAP, a model-agnostic explainer, is incorporated; this helps to identify and rank the attributes at each facility which impact the market share. This approach amalgamates two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias. Together, these help drive strategic business decisions.

Keywords: Competition, DAGs, hospital, healthcare, machine learning, market share, random forest, SHAP.
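
A minimal sketch of the regression half of the framework: a Random Forest regressor with permutation feature importance for the overall key drivers, plus a pointer to SHAP for facility-level attribution. The facility attributes and market-share values are simulated placeholders, and the pipeline's DAG-based competitor identification is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 300  # hypothetical facility-month rows

# Hypothetical facility attributes; market share as the target.
cols = ["beds", "physicians", "patient_rating", "distance_to_competitor"]
X = rng.normal(size=(n, len(cols)))
share = 0.3 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, share, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Permutation feature importance ranks the overall key drivers.
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
for name, m in sorted(zip(cols, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {m:.3f}")

# Facility-level attribution would use SHAP, e.g.:
#   import shap; shap.TreeExplainer(rf).shap_values(X_te)
```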

518 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical electricity load data highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample). The reliability of the model is supported by standard tests, which highlight a good fit of the simulated values.

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.
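
A minimal sketch of fitting the deterministic component described above: a linear trend plus Fourier pairs at the yearly, weekly, and daily periodicities, leaving a residual for the stochastic (e.g., ARMA-GARCH) stage. The hourly series below is synthetic, not the Italian market data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hourly grid over two years (stand-in for the 2007-2012 Italian data).
t = np.arange(2 * 365 * 24, dtype=float)
load = (50 + 8 * np.cos(2 * np.pi * t / (365.25 * 24))   # yearly cycle
        + 4 * np.cos(2 * np.pi * t / (7 * 24))            # weekly cycle
        + 3 * np.cos(2 * np.pi * t / 24)                  # daily cycle
        + 0.001 * t + rng.normal(0, 1, t.size))           # trend + noise

# Deterministic component: linear trend plus Fourier pairs at the three
# periodicities the abstract identifies (yearly, weekly, daily).
periods = [365.25 * 24, 7 * 24, 24]
cols = [np.ones_like(t), t]
for P in periods:
    cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, load, rcond=None)

residual = load - A @ coef  # would then be modeled with ARMA-GARCH
print(f"residual std: {residual.std():.3f}")
```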

517 Regional Analysis of Streamflow Drought: A Case Study for Southwestern Iran

Authors: M. Byzedi, B. Saghafian

Abstract:

Droughts are complex natural hazards that, to a varying degree, affect some parts of the world every year. The range of drought impacts is related to drought occurring in different stages of the hydrological cycle, and usually different types of drought, such as meteorological, agricultural, hydrological, and socioeconomic, are distinguished. Streamflow drought was analyzed by the truncation level method (at the 70% level) on daily discharges measured at 54 hydrometric stations in southwestern Iran. Frequency analysis was carried out for the annual maximum series (AMS) of drought deficit volume and duration. Physiographic, climatic, geologic, and vegetation cover factors were studied as influential factors in the regional analysis. According to the results of factor analysis, the six most effective factors were identified as area, rainfall from December to February, the percentage of area with Normalized Difference Vegetation Index (NDVI) <0.1, the percentage of convex area, drainage density and the minimum watershed elevation, which together explained 90.9% of the variance. The homogeneous regions were determined by cluster analysis and discriminant function analysis. Suitable multivariate regression models were evaluated for streamflow drought deficit volume with a 2-year return period. The significance level of the regression models was 0.01. The results showed that watershed area is the most effective factor, with a high correlation with deficit volume. Also, drought duration was not a suitable drought index for regional analysis.

Keywords: Iran, Streamflow drought, truncation level method, regional analysis.
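
A minimal sketch of the truncation level method at the 70% level: drought events are runs of days with discharge below the threshold, characterized by duration and deficit volume. The daily discharge series is synthetic, and the threshold convention used here (the flow exceeded 70% of the time, i.e. the 30th percentile) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily discharge series (m^3/s), roughly log-normal.
q = np.exp(rng.normal(2.0, 0.6, 3650))

# 70% truncation level: the flow exceeded 70% of the time.
q70 = np.percentile(q, 30)

# A drought event is a run of consecutive days with q < q70; its deficit
# volume is the cumulative shortfall below the threshold, its duration
# the run length.
below = q < q70
events, start = [], None
for i, b in enumerate(np.append(below, False)):
    if b and start is None:
        start = i
    elif not b and start is not None:
        days = np.arange(start, i)
        deficit = np.sum(q70 - q[days]) * 86400  # m^3 (per-day shortfall)
        events.append((len(days), deficit))
        start = None

duration, deficit = max(events)  # the longest event, annual-maximum style
print(f"longest drought: {duration} days, deficit {deficit:.0f} m^3")
```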

516 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset; there is a lack of generic comparisons between classifiers that might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the ratio of discriminators and perform better with a higher number of discriminators.

Keywords: Classification, High dimensional data, Machine learning
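
A minimal sketch of such a comparison with scikit-learn: the six classifiers evaluated by cross-validated AUC on a mock high-dimensional dataset with few informative features, mirroring the simulated-biomarker setup; all dataset parameters below are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Mock high-dimensional data: many features, few truly discriminating
# "biomarkers" (n_informative), as in the abstract's simulation setup.
X, y = make_classification(n_samples=100, n_features=1000,
                           n_informative=10, n_redundant=0, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "LogisticRegression": make_pipeline(StandardScaler(),
                                        LogisticRegression(max_iter=1000)),
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```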

515 Optimization of Surface Roughness in Turning Process Utilizing Live Tooling via Taguchi Methodology

Authors: Weinian Wang, Joseph C. Chen

Abstract:

The objective of this research is to optimize the process of cutting cylindrical workpieces utilizing live tooling on a HAAS ST-20 lathe. Surface roughness (Ra) was investigated as the indicator of quality characteristics for the machining process. Aluminum alloy was used to conduct the experiments due to its wide usage in engineering structures and components where light weight or corrosion resistance is required. In this study, the Taguchi methodology is utilized to determine the effects that each of the parameters has on surface roughness (Ra). A total of 18 experiments were designed according to Taguchi's L9 orthogonal array (OA) with four control factors at three levels each, and signal-to-noise (S/N) ratios were computed with the smaller-the-better equation for minimizing the system response. The optimal parameters identified for the surface roughness of the turning operation utilizing live tooling were a feed rate of 3 inches/min (A3); a spindle speed of 1300 rpm (B3); a 2-flute titanium nitride coated 3/8” endmill (C1); and a depth of cut of 0.025 inches (D2). The mean surface roughness of the confirmation runs was 8.22 microinches. The final results demonstrate that the Taguchi methodology is an effective way to improve surface roughness in a turning process.

Keywords: Live tooling, surface roughness, Taguchi Parameter Design, CNC turning operation.

514 Image Transmission: A Case Study on Combined Scheme of LDPC-STBC in Asynchronous Cooperative MIMO Systems

Authors: Shan Ding, Lijia Zhang, Hongming Xu

Abstract:

This paper presents a novel scheme capable of reducing the error rate and improving the transmission performance in asynchronous cooperative MIMO systems. A case study of image transmission is used to demonstrate the efficiency of the scheme. The linear dispersion structure is employed to accommodate the cooperative wireless communication network with dynamic topology, as well as to achieve higher throughput than conventional space-time codes based on orthogonal designs. The LDPC encoder without girth-4 cycles and the STBC encoder with guard intervals are introduced, respectively. The experimental results show that the combined LDPC-STBC coder with guard intervals provides good error correction and BER performance in asynchronous cooperative communication. In the image transmission case study, the results show that the image quality obtained with the combined scheme is much better than that obtained without it in asynchronous cooperative MIMO systems.

Keywords: Cooperative MIMO, image transmission, linear dispersion codes, Low-Density Parity-Check (LDPC).

513 Predictor Factors for Treatment Failure among Patients on Second Line Antiretroviral Therapy

Authors: Mohd. A. M. Rahim, Yahaya Hassan, Mathumalar L. Fahrni

Abstract:

A second line antiretroviral therapy (ART) regimen is used when patients fail their first line regimen. Many factors, such as non-adherence and drug resistance, as well as virological and immunological failure, lead to second line highly active antiretroviral therapy (HAART) regimen treatment failure. This study was aimed at determining predictor factors for treatment failure with second line HAART and analyzing median survival time. An observational, retrospective study was conducted in Sungai Buloh Hospital (HSB) to assess the current status of HIV patients treated with second line HAART regimens. Convenience sampling was used, and 104 patients were included based on the study's inclusion and exclusion criteria. Data were collected for six months, i.e. from July until December 2013, and then analysed using SPSS version 18. Kaplan-Meier and Cox regression analyses were used to measure median survival times and predictor factors for treatment failure. The study population consisted mainly of male subjects, aged 30-45 years, who were heterosexual and had had HIV infection for less than 6 years. The most common second line HAART regimen given was a lopinavir/ritonavir (LPV/r)-based combination. Kaplan-Meier analysis showed that patients on LPV/r demonstrated longer median survival times than patients on indinavir/ritonavir (IDV/r)-based combinations (p<0.001). The commonest reason for treatment failure with second line HAART was non-adherence. Based on Cox regression analysis, other predictor factors for treatment failure with the second line HAART regimen were age and mode of HIV transmission.

Keywords: Adherence, antiretroviral therapy, second line, treatment failure.
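
A minimal sketch of the two analyses named above using the lifelines package: Kaplan-Meier median survival by regimen group and a Cox proportional hazards fit. The cohort below is simulated with hypothetical variable names, not the HSB data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(6)
n = 104  # matches the study's sample size; the data here are simulated

# Hypothetical cohort: time to second-line treatment failure (months),
# event flag, and candidate predictors echoing the abstract.
df = pd.DataFrame({
    "time": rng.exponential(36, n).round(1),
    "failed": rng.integers(0, 2, n),
    "age": rng.normal(38, 8, n).round(),
    "on_lpv_r": rng.integers(0, 2, n),      # LPV/r vs IDV/r regimen
    "heterosexual": rng.integers(0, 2, n),  # mode-of-transmission proxy
})

# Kaplan-Meier median survival per regimen group.
for grp, sub in df.groupby("on_lpv_r"):
    km = KaplanMeierFitter().fit(sub["time"], sub["failed"])
    print(f"LPV/r={grp}: median survival "
          f"{km.median_survival_time_:.1f} months")

# Cox proportional hazards regression for predictor factors.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="failed")
cph.print_summary()
```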

512 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in three-element form which is influenced by the spectral ordinates at periods longer than that of the first mode period at near and far source sites. A parameter, denoted by β, is defined by which the spectral ordinate effects, up to the effective period (2T1), on the intensity measure are taken into account. The methodology permits meeting the hazard-level target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of 8 archetype RC structures having 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near field set presented by FEMA is used in a series of nonlinear analyses. 8 linear relationships are developed for the 8 structures, leading to correlation coefficients up to 0.93. A collapse capacity near field prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near field records, leading to good agreement. Implementation of the proposed equation for the four archetype RC structures demonstrated different collapse capacities at near field sites compared to those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.

Keywords: Collapse capacity, fragility analysis, spectral shape effects, IDA method.

511 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics precision (P), recall (R), F1-score and accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers were tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

510 Process Parameter Optimization in Resistance Spot Welding of Dissimilar Thickness Materials

Authors: Pradeep M., N. S. Mahesh, Raja Hussain

Abstract:

Resistance spot welding (RSW) has been used widely to join sheet metals. It has been a challenge to achieve the required weld quality in spot welding of materials of dissimilar thickness, and weld parameters are not generally available in standards for thicknesses beyond 4 mm. This paper presents the welding process design and parameter optimization of RSW used in joining low carbon steel sheet of thickness 0.8 mm and metal strips of cross section 10 x 5 mm for electrical motor applications. Taguchi quality design was adopted for weld current and time optimization using an L9 orthogonal array. Optimum process parameters (current: 3.5 kA, time: 10 cycles) were obtained from the Taguchi analysis and shear test results. The confirmation experiment revealed that the weld quality was within the acceptable interval. Further, a numerical simulation of the RSW process was carried out with the selected weld parameters to quantify the temperature at the faying surface and check for the formation of an appropriate nugget. The nugget geometry measured after the peel test and that predicted by the numerical method were similar and in accordance with the standards.

Keywords: Resistance spot welding, dissimilar thickness, weld parameters, Taguchi method, numerical modeling.

509 Evaluation of the Beach Erosion Process in Varadero, Matanzas, Cuba: Effects of Different Hurricane Trajectories

Authors: Ana Gabriela Diaz, Luis Fermín Córdova, Jr., Roberto Lamazares

Abstract:

The island of Cuba, the largest of the Greater Antilles, is located in the tropical North Atlantic. It is affected annually by numerous weather events, which have caused severe damage to its coastal areas. Like many other coastlines around the world, the beautiful beaches of the Hicacos Peninsula suffer from erosion, which leads to a structural regression of the coastline. If measures are not taken, the hotels will be exposed to the advance of the sea, which will be a serious problem for the economy. To study the intensity of this type of activity, specialists from the coastal and marine engineering group at CIH, within the framework of the MEGACOSTAS 2 project, conducted research simulating extreme events and assessing their impact on coastal areas, mainly regarding the definition of flood volumes and morphodynamic changes on sandy beaches. The main objective of this work is the evaluation of the erosion process at Varadero beach on the Hicacos Peninsula, a coastal sector with an important impact on the country's economy, for different hurricane trajectories. The mathematical model XBeach, integrated into the coastal engineering system introduced by the MEGACOSTAS 2 project, was applied to determine the area and the most critical profiles for the hurricane trajectories under study. The results have shown that the central area is the most dynamic area in the simulation of the three hurricane trajectories under study, showing high erosion volumes and the greatest average regression of the coastline, from 15-22 m.

Keywords: Beach, erosion, mathematical model, coastal areas.

508 Potentials of Raphia hookeri Wine in Livelihood Sustenance among Rural and Urban Populations in Nigeria

Authors: A. A. Aiyeloja, A.T. Oladele, O. Tumulo

Abstract:

Raphia wine is an important forest product with cultural significance, besides its use as medicine and food in southern Nigeria. This work aims to evaluate the profitability of Raphia wine production and marketing in Sapele Local Government Area, Nigeria. Four communities (Sapele, Ogiede, Okuoke and Elume) were randomly selected for data collection via questionnaires among producers and marketers, and a total of 50 producers and 34 marketers were randomly selected for interview. Data were analyzed using descriptive statistics, profit margin, multiple regression and rate of return on investment (RORI). Annual average profit was highest in Okuoke (producers: N90,000.00; marketers: N70,000.00) and lowest in Sapele (producers: N50,000.00; marketers: N45,000.00). Calculated RORI values for marketers were Elume (40.0%), Okuoke (25.0%), Ogiede (33.3%) and Sapele (50.0%). Regression results showed that location has significant effects (p = 0.000, p ≤ 0.05) on profit margins. Males (58.8%) and females (41.2%) invest in Raphia wine marketing, while males (100.0%) dominate production. The results showed that Raphia wine has the potential to generate household income, enhance food security and improve quality of life in rural, semi-urban and urban communities. Improved marketing channels, storage facilities and credit facilities via cooperative groups are recommended to the concerned agencies for producers and marketers.

Keywords: Raphia wine, Profit margin, RORI, Livelihood, Nigeria.

507 Adaptive Square-Rooting Companding Technique for PAPR Reduction in OFDM Systems

Authors: Wisam F. Al-Azzo, Borhanuddin Mohd. Ali

Abstract:

This paper addresses the problem of the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems and introduces a new PAPR reduction technique based on an adaptive square-rooting (SQRT) companding process. The SQRT process of the proposed technique changes the statistical characteristics of the OFDM output signals from a Rayleigh distribution to a Gaussian-like distribution. This change in statistical distribution alters both the peak and average power values of the OFDM signals and consequently reduces the PAPR significantly. For a 64-QAM OFDM system using 512 subcarriers, up to 6 dB reduction in PAPR was achieved by the square-rooting technique, with a fixed degradation in bit error rate (BER) equal to 3 dB. The PAPR is reduced at the expense of only -15 dB out-of-band spectral shoulder re-growth below the in-band signal level. The proposed adaptive SQRT technique is superior in terms of BER performance to the original, non-adaptive square-rooting technique when the required reduction in PAPR is no more than 5 dB. It also provides a fixed amount of PAPR reduction, which is not available in the original SQRT technique.

Keywords: Complementary cumulative distribution function (CCDF), OFDM, peak-to-average power ratio (PAPR), adaptive square-rooting PAPR reduction technique.
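
A minimal sketch of the underlying quantities: the PAPR of one OFDM symbol and the effect of plain (non-adaptive) square-rooting companding, which replaces each sample amplitude by its square root while preserving phase. The adaptive part of the proposed technique is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One 64-QAM OFDM symbol with 512 subcarriers (as in the abstract).
N = 512
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)
qam = rng.choice(levels, N) + 1j * rng.choice(levels, N)
x = np.fft.ifft(qam) * np.sqrt(N)

# Square-rooting companding: amplitude -> sqrt(amplitude), phase kept.
# This pulls the Rayleigh-like amplitude distribution toward a more
# uniform one and lowers the peaks.
x_sqrt = np.sqrt(np.abs(x)) * np.exp(1j * np.angle(x))

print(f"PAPR before: {papr_db(x):.2f} dB")
print(f"PAPR after SQRT companding: {papr_db(x_sqrt):.2f} dB")
```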

506 Multi-Objective Optimization Contingent on Subcarrier-Wise Beamforming for Multiuser MIMO-OFDM Interference Channels

Authors: R. Vedhapriya Vadhana, Ruba Soundar, K. G. Jothi Shalini

Abstract:

We address the problem of interference over all channels in multiuser MIMO-OFDM systems. This paper contributes three beamforming strategies for multiuser multiple-input multiple-output systems with orthogonal frequency division multiplexing, in which the transmit and receive beamformers are obtained iteratively in closed-form stages. In the first case, the transmit (TX) beamformers are kept fixed and the receive (RX) beamformers are computed. One interference term is eliminated for every user by projecting the transmit beamformers into the null space of the relevant channels. Then, the residual interference is excluded by satisfying an orthogonality condition, and the RX beamformer for every user is obtained by maximizing the signal-to-noise ratio (SNR). The second case jointly optimizes the TX and RX beamformers through constrained SNR maximization, using the outcome of the first case. The third case also jointly optimizes the TX-RX beamformers, but uses both constrained SNR and signal-to-interference-plus-noise ratio (SINR) maximization. Using the standardized channel model of IEEE 802.11n, the simulation experiments show that the proposed methods offer rapid beamforming and enhanced error performance.

Keywords: Beamforming, interference channels, MIMO-OFDM, multi-objective optimization.

505 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume

Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto

Abstract:

Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his/her answers if multiple tests are carried out. In this article, we present a modified index called the “multi-channel Laterality Index at Rest (mc-LIR)”, computed by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index uses measurements at multiple positions near Fpz, as defined by the international 10-20 positioning system. Using 24 subjects, the dependence on the number of measuring points used to calculate the mc-LIR and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR, and the cross-validation error is also reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.

Keywords: Stress, functional near-infrared spectroscopy, frontal lobe, state-trait anxiety inventory score.
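
A minimal sketch of the two steps the abstract reports: a simple linear regression with a leave-one-out cross-validation error. The mc-LIR values and STAI scores below are simulated placeholders for 24 subjects, not the measured data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(8)
n = 24  # matches the number of subjects; values here are simulated

# Hypothetical mc-LIR values and STAI scores (STAI range is 20-80).
mc_lir = rng.normal(0, 0.1, (n, 1))
stai = np.clip(45 + 120 * mc_lir[:, 0] + rng.normal(0, 6, n), 20, 80)

# Simple linear regression evaluated with leave-one-out cross-validation.
pred = cross_val_predict(LinearRegression(), mc_lir, stai, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - stai) ** 2))
r = np.corrcoef(mc_lir[:, 0], stai)[0, 1]
print(f"correlation r = {r:.2f}, LOOCV RMSE = {rmse:.2f} STAI points")
```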

504 Influence of Taguchi Selected Parameters on Properties of CuO-ZrO2 Nanoparticles Produced via Sol-gel Method

Authors: H. Abdizadeh, Y. Vahidshad

Abstract:

The present paper discusses the selection of process parameters for obtaining the optimal nanocrystallite size in the CuO-ZrO2 catalyst. Some parameters change the inorganic structure and influence the hydrolysis and condensation reactions. A statistical design-of-experiments method is implemented in order to optimize the experimental conditions of CuO-ZrO2 nanoparticle preparation; the experiments follow the standard L16 orthogonal array. The crystallite size is used as the index for the analysis as the parameters vary. The effects of pH, H2O/precursor molar ratio (R), time and temperature of calcination, chelating agent and alcohol volume are particularly investigated among all other parameters. In accordance with the Taguchi results, it is found that temperature has the greatest impact on the particle size, while pH and the H2O/precursor molar ratio have low influence compared with temperature. The alcohol volume, as well as the time, has almost no effect compared with all other parameters. Temperature also influences the morphology and amorphous structure of the zirconia. The optimal conditions are determined using the Taguchi method. The nanocatalyst is studied by DTA-TG, XRD, EDS, SEM and TEM. The results of this research indicate that it is possible to vary the structure, morphology and properties of the sol-gel product by controlling the above-mentioned parameters.

Keywords: CuO-ZrO2 Nanoparticles, Sol-gel, Taguchi method.

503 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall

Authors: Sanjib Kr Pal, S. Bhattacharyya

Abstract:

Mixed convection of a Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system. The governing equations in the computational domain are solved through a pressure-correction-based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the thick wavy bottom wall and wave number (ω), at a fixed Reynolds number. The results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number, and decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.

Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.

502 New Method for Determining the Distribution of Birefringence and Linear Dichroism in Polymer Materials Based On Polarization-Holographic Grating

Authors: Barbara Kilosanidze, George Kakauridze, Levan Nadareishvili, Yuri Mshvenieradze

Abstract:

A new method for determining the distribution of birefringence and linear dichroism in optical polymer materials is presented. The method is based on the use of a polarization-holographic diffraction grating that forms an orthogonal circular basis in the process of diffraction of a probing laser beam on the grating. The intensity ratio of the diffraction orders of this grating enables the value of birefringence and linear dichroism in the sample to be determined. The distribution of birefringence in the sample is determined by scanning with a circularly polarized beam at a wavelength far from the absorption band of the material. If the scanning is carried out with a probing beam at a wavelength near a maximum of the absorption band of the chromophore, then the distribution of linear dichroism can be determined. An appropriate theoretical model of this method is presented. A laboratory setup was created for the proposed method, and its optical scheme is presented. The results of measurements on polymer films with two-dimensional gradient distributions of birefringence and linear dichroism are discussed.

Keywords: Birefringence, graded oriented polymers, linear dichroism, optical polymers, optical anisotropy, polarization-holographic grating.

501 SLM Using Riemann Sequence Combined with DCT Transform for PAPR Reduction in OFDM Communication Systems

Authors: Pepin Magnangana Zoko Goyoro, Ibrahim James Moumouni, Sroy Abouty

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is an efficient method of data transmission for high speed communication systems. However, the main drawback of OFDM systems is that they suffer from a high Peak-to-Average Power Ratio (PAPR), which causes inefficient use of the high power amplifier and can limit transmission efficiency. OFDM signals consist of a large number of independent subcarriers, as a result of which their amplitude can have high peak values. In this paper, we propose an effective reduction scheme that combines the DCT and SLM techniques. The scheme is composed of the DCT followed by SLM, using the Riemann matrix to obtain the phase sequences for the SLM technique. The simulation results show that the PAPR can be greatly reduced by applying the proposed scheme: while plain OFDM had a high PAPR of about 10.4 dB, our proposed method achieved a reduction of about 4.7 dB with low computational complexity. This approach also avoids randomness in the phase sequence selection, which makes decoding simpler at the receiver. As an added benefit, the matrices can be generated at the receiver end to recover the data signal, so no side information (SI) needs to be transmitted.

Keywords: DCT transform, OFDM, PAPR, Riemann matrix, SLM.
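
A minimal sketch of a DCT-plus-SLM chain: several phase-rotated candidates of the same symbol are generated and the one with the lowest PAPR is transmitted. Random +/-1 phase sequences stand in for the paper's Riemann-matrix-derived sequences (which are what remove the need for side information), so the phase-generation step here is a simplification.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(9)
N = 256

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# QPSK data for one OFDM symbol.
data = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))

# DCT precoding stage applied to the real and imaginary parts.
precoded = dct(data.real, norm="ortho") + 1j * dct(data.imag, norm="ortho")

# SLM: U candidate phase sequences; keep the lowest-PAPR candidate.
U = 8
phases = rng.choice([-1.0, 1.0], (U, N))
candidates = np.fft.ifft(precoded * phases, axis=1)
best = min(candidates, key=papr_db)

print(f"PAPR without SLM: {papr_db(np.fft.ifft(precoded)):.2f} dB")
print(f"PAPR with DCT+SLM (U={U}): {papr_db(best):.2f} dB")
```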

500 The Design of Axisymmetric Ducts for Incompressible Flow with a Parabolic Axial Velocity Inlet Profile

Authors: V. Pavlika

Abstract:

In this paper a numerical algorithm is described for solving the boundary value problem associated with axisymmetric, inviscid, incompressible, rotational (and irrotational) flow in order to obtain duct wall shapes from prescribed wall velocity distributions. The governing equations are formulated in terms of the stream function ψ(x,y) and the function φ(x,y) as independent variables, where for irrotational flow φ(x,y) can be recognized as the velocity potential function; for rotational flow φ(x,y) ceases to be the velocity potential function but does remain orthogonal to the stream lines. A numerical method based on a finite difference scheme on a uniform mesh is employed. The technique described is capable of tackling the so-called inverse problem, where the wall velocity distributions are prescribed and the duct wall shape is calculated, as well as the direct problem, where the velocity distribution on the duct walls is calculated from a prescribed duct geometry. The two cases as outlined in this paper are in fact boundary value problems with Neumann and Dirichlet boundary conditions, respectively. Even though both approaches are discussed, only numerical results for the case of Dirichlet boundary conditions are given. A downstream condition is prescribed such that cylindrical flow, that is, flow which is independent of the axial coordinate, exists.

Keywords: Inverse problem, irrotational incompressible flow, Boundary value problem.
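
A minimal sketch of the numerical core in its simplest setting: a finite difference solution of Laplace's equation for the stream function on a uniform mesh with Dirichlet (prescribed-ψ) boundary values, iterated with a Jacobi scheme. The geometry, boundary values, and the planar u ~ ∂ψ/∂y velocity recovery are illustrative placeholders; the paper's formulation is axisymmetric and works in the (φ, ψ) variables.

```python
import numpy as np

# Uniform mesh; psi prescribed on all boundaries (Dirichlet problem).
nx, ny = 60, 20
psi = np.zeros((ny, nx))
psi[0, :] = 0.0     # lower wall (a streamline)
psi[-1, :] = 1.0    # upper wall (a streamline)
psi[:, 0] = np.linspace(0.0, 1.0, ny)   # uniform inflow (placeholder)
psi[:, -1] = np.linspace(0.0, 1.0, ny)  # cylindrical outflow condition

# Jacobi iteration for the five-point Laplacian stencil.
for _ in range(5000):
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                              + psi[1:-1, 2:] + psi[1:-1, :-2])

# Axial velocity from the stream function (planar analogue u ~ dpsi/dy).
u = np.gradient(psi, axis=0)
print(f"mid-duct axial velocity profile: {np.round(u[:, nx // 2], 3)}")
```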

499 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models

Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour

Abstract:

Although many experiments have been done in enhanced oil recovery, experiments that consider the effects of local and global heterogeneity on the efficiency of enhanced oil recovery based on polymer-surfactant flooding are few and rarely performed. In this research, we have carried out numerous water flooding and polymer-surfactant flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. In these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. In order to incorporate the effect of heterogeneity of the porous media, the three types of pore structures were distributed randomly and in equal proportion throughout the heterogeneous micromodel network, according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases as the angle of the layers increases, and this recovery factor is strongly affected by local heterogeneity around the injection zone.

Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.
