Search results for: orthogonal regression
Paper Count: 1008


498 Design and Performance of Adaptive Polarized MIMO MC-SS-CDMA System for Downlink Mobile Communications

Authors: Joseph V. M. Halim, Hesham El-Badawy, Hadia M. El-Hennawy

Abstract:

In this paper, an adaptive polarized Multiple-Input Multiple-Output (MIMO) Multicarrier Spread Spectrum Code Division Multiple Access (MC-SS-CDMA) system is designed for downlink mobile communications. The proposed system will be examined in Frequency Division Duplex (FDD) mode for both macro urban and suburban environments. For the same transmission bandwidth, a performance comparison between non-overlapped and orthogonal Frequency Division Multiplexing (FDM) schemes will be presented. Also, the proposed system will be compared with both the closed-loop vertical MIMO MC-SS-CDMA system and the synchronous vertical STBC-MIMO MC-SS-CDMA system. As will be shown, the proposed system introduces a significant performance gain while reducing the spatial dimensions of the MIMO system and simplifying the receiver implementation. The effect of the polarization diversity characteristics on the BER performance will be discussed. Also, the impact of excluding the cross-polarization MC-SS-CDMA blocks in the base station will be investigated. In addition, the system performance will be evaluated under different Feedback Information (FBI) rates for slowly-varying channels. Finally, a performance comparison for vehicular and pedestrian environments will be presented.

Keywords: Closed loop technique, MC-SS-CDMA, Polarized MIMO systems, Transmit diversity.

497 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine for boring an over-length workpiece. In order to verify the usability of the modified tool, a design of experiments based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
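
The smaller-the-better signal-to-noise analysis mentioned in the abstract can be illustrated with a short numerical sketch. The L9 layout below is the standard three-level array, but the factor assignment and roughness readings are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs x 4 factors (e.g. spindle speed, feed rate,
# depth of cut, workpiece length), each factor at 3 levels coded 0/1/2.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Hypothetical surface-roughness replicates (Ra, micrometres) for each of the 9 runs.
ra = np.array([[1.9, 2.1], [1.6, 1.7], [1.4, 1.5],
               [2.2, 2.3], [1.8, 1.9], [1.5, 1.6],
               [2.4, 2.6], [2.0, 2.1], [1.7, 1.8]])

# Smaller-the-better S/N ratio per run: -10*log10(mean(y^2)).
sn = -10 * np.log10((ra ** 2).mean(axis=1))

# Main effect of each factor: mean S/N at each level; the level with the
# highest mean S/N is the preferred setting for that factor.
for f in range(L9.shape[1]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: S/N level means = {np.round(level_means, 2)}")
```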

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

496 Optimization of Control Parameters for EWR in Injection Flushing Type of EDM on Stainless Steel 304 Workpiece

Authors: M. S. Reza, M. Hamdi, S. H. Tomadi, A. R. Ismail

Abstract:

The operating control parameters of the injection flushing type of electrical discharge machining process on a stainless steel 304 workpiece using copper tools are optimized according to an individual machining characteristic, namely the Electrode Wear Ratio (EWR). A higher EWR would give poor dimensional precision for the EDM-machined workpiece because of high electrode wear. Hence, the quality characteristic for EWR is set to lower-the-better to achieve the optimum dimensional precision for the machined workpiece. The Taguchi method has been used for the construction, layout and analysis of the experiment for the EWR machining characteristic. The use of the Taguchi method in the experiment saves a lot of time and cost in preparing and machining the experiment samples. An L18 orthogonal array, a fundamental component in the statistical design of experiments, has been used to plan the experiments, and Analysis of Variance (ANOVA) is used to determine the optimum machining parameters for this machining characteristic. The control parameters selected for these optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter and dielectric liquid pressure. The results show that a negative-polarity machining parameter setting decreases EWR.

Keywords: ANOVA, EDM, Injection Flushing, L18 Orthogonal Array, EWR, Stainless Steel 304.

495 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimization of controllable process parameters such as solidification time, molten temperature, filling time, injection pressure and plunger velocity. Moreover, by selection of optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material, flash, etc. are also minimized. Therefore, a pressure die casting component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects and the subsequent setting of parameter levels have been accomplished by Taguchi's parameter design approach. The experiments have been performed as per the combination of levels of different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed for the mean and the signal-to-noise ratio to estimate the percent contribution of different process parameters. A confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. An overall 2.352% reduction in defects has been observed with the suggested optimum process parameters.

Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments

494 Comparative Analysis of Various Multiuser Detection Techniques in SDMA-OFDM System Over the Correlated MIMO Channel Model for IEEE 802.16n

Authors: Susmita Das, Kala Praveen Bagadi

Abstract:

SDMA (Space-Division Multiple Access) is a MIMO (Multiple-Input Multiple-Output) based wireless communication network architecture which has the potential to significantly increase the spectral efficiency and the system performance. Maximum likelihood (ML) detection provides the optimal performance, but its complexity increases exponentially with the constellation size of the modulation and the number of users. The QR decomposition (QRD) MUD can be a substitute for ML detection due to its low complexity and near-optimal performance. The minimum mean-squared-error (MMSE) multiuser detection (MUD) minimises the mean square error (MSE), which does not guarantee that the BER of the system is also minimized. The minimum bit error rate (MBER) MUD, however, performs better than the classic MMSE MUD in terms of minimum probability of error by directly minimising the BER cost function. Also, the MBER MUD is able to support more users than the number of receiving antennas, whereas the other MUDs fail in this scenario. In this paper the performance of various MUD techniques is verified for correlated MIMO channel models based on the IEEE 802.16n standard.
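
As a rough illustration of the classic linear MMSE detector that the MBER scheme is compared against, the sketch below detects four users from four receive antennas on one flat-fading subcarrier. The dimensions, QPSK symbols and channel statistics are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_users, snr_db = 4, 4, 10          # illustrative SDMA setup
noise_var = 10 ** (-snr_db / 10)

# Flat-fading MIMO channel (one OFDM subcarrier) and unit-power QPSK user symbols.
H = (rng.standard_normal((n_rx, n_users)) + 1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)
x = (rng.choice([-1, 1], n_users) + 1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + n

# Linear MMSE detector: W = (H^H H + sigma^2 I)^{-1} H^H
W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_users)) @ H.conj().T
x_hat = W @ y

# Hard QPSK decisions per user.
decisions = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print("symbol errors:", np.sum(decisions != x))
```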

Keywords: Multiple input multiple output, multiuser detection, orthogonal frequency division multiplexing, space division multiple access, Bit error rate

493 Performance Analysis in 5th Generation Massive Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, Jean-Pierre Dubois, Georges El Soury

Abstract:

Fifth generation wireless networks guarantee significant capacity enhancement to suit more clients and services at higher information rates with better reliability while consuming less power. The deployment of massive multiple-input-multiple-output technology guarantees broadband wireless networks with the use of base station antenna arrays to serve a large number of users on the same frequency and time-slot channels. In this work, we evaluate the performance of massive multiple-input-multiple-output (MIMO) systems in 5th generation cellular networks in terms of capacity and bit error rate. Several cases were considered and analyzed to compare the performance of massive MIMO systems while varying the number of antennas at both transmitting and receiving ends. We found that, unlike classical MIMO systems, reducing the number of transmit antennas while increasing the number of antennas at the receiver end provides a better solution to performance enhancement. In addition, enhanced orthogonal frequency division multiplexing and beam division multiple access schemes further improve the performance of massive MIMO systems and make them more reliable.
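
A minimal Monte Carlo sketch of the kind of capacity comparison described above, assuming an i.i.d. Rayleigh channel with equal power allocation across transmit antennas. The antenna counts and SNR are arbitrary examples rather than the configurations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ergodic_capacity(n_tx, n_rx, snr_db, trials=2000):
    """Average capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel
    with equal power allocation across the n_tx transmit antennas."""
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        # C = log2 det(I + (SNR / n_tx) * H H^H)
        caps.append(np.log2(np.linalg.det(np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T).real))
    return np.mean(caps)

# Fewer transmit antennas with many receive antennas vs. a symmetric array (illustrative).
print(ergodic_capacity(4, 64, 10), ergodic_capacity(16, 16, 10))
```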

Keywords: Beam division multiple access, D2D communication, enhanced OFDM, fifth generation broadband, massive MIMO.

492 Comparison between Turbo Code and Convolutional Product Code (CPC) for Mobile WiMAX

Authors: Ahmed Ebian, Mona Shokair, Kamal Awadalla

Abstract:

Mobile WiMAX is a broadband wireless solution that enables convergence of mobile and fixed broadband networks through a common wide-area broadband radio access technology and flexible network architecture. It adopts Orthogonal Frequency Division Multiple Access (OFDMA) for improved multi-path performance in Non-Line-Of-Sight (NLOS) environments. Scalable OFDMA (SOFDMA) is introduced in IEEE 802.16e [1]. The WiMAX system supports several types of channel coding, but the mandatory channel coding scheme is based on binary non-recursive Convolutional Coding (CC). There are several other optional channel coding schemes such as block turbo codes, convolutional turbo codes, and low density parity check (LDPC) codes. In this paper a comparison between the performance of WiMAX using turbo code and using convolutional product code (CPC) [2] is made, and a combination of the two is also examined. The CPC gives good results at different SNR values compared to both the turbo system and the combination. For example, at a BER of 10⁻² for 128 subcarriers, the SNR improvement is approximately 3 dB over the turbo code and approximately 2 dB over the combination, respectively. Several results are obtained for different modulation schemes (16QAM and 64QAM) and different numbers of sub-carriers (128 and 512).

Keywords: Turbo Code, Convolutional Product Code (CPC).

491 Foreign Direct Investment on Economic Growth by Industries in Central and Eastern European Countries

Authors: Shorena Pharjiani

Abstract:

This empirical paper investigates the relationship between FDI and economic growth in 10 selected industries in 10 Central and Eastern European countries over the period 1995 to 2012. Different estimation approaches were used to explore the connection between FDI and economic growth, for example OLS, RE, and FE with and without time dummies. The empirical results lead to several main conclusions. First, the Central and Eastern European countries (CEEC) attracted foreign direct investment, which raised the productivity of the industries it entered. It should be concluded that the linkage between FDI and output growth by industry is positive and significant enough to suggest that foreign firms’ participation enhanced the productivity of the industries they occupied. There was an endogeneity problem in the regression, and a fixed-effects estimation approach was used, which partially corrected the regression analysis in order to make the results less biased. Second, it should be stressed that the results show that time has an important role in making FDI operational for enhancing output growth by industry via total factor productivity. Third, R&D positively affected economic growth, and at the same time it takes some time for research and development to influence economic growth. Fourth, the general trends masked crucial differences at the country level: over the last 20 years, the analysis of the tables and figures at the country level shows that the main recipients of FDI among the Central and Eastern European countries were Hungary, Poland and the Czech Republic. The main reason was that these countries had more open-door policies for attracting FDI. Fifth, according to the graphical analysis, while Hungary had the highest FDI inflow in this region, it was not reflected in GDP growth as much as in other Central and Eastern European countries.
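
A minimal sketch of a fixed-effects (least-squares dummy variable) estimation of output growth on FDI and R&D with statsmodels. The panel, variable names and random numbers below are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: country, year, output growth, FDI, R&D (synthetic stand-in data).
df = pd.DataFrame({
    "country": np.repeat(["HU", "PL", "CZ", "SK"], 5),
    "year":    np.tile(range(2008, 2013), 4),
    "growth":  np.random.default_rng(2).normal(2.0, 1.0, 20),
    "fdi":     np.random.default_rng(3).normal(5.0, 2.0, 20),
    "rd":      np.random.default_rng(4).normal(1.0, 0.5, 20),
})

# Fixed effects via least-squares dummy variables: country and year dummies absorb
# time-invariant country traits and common time shocks.
X = pd.get_dummies(df[["fdi", "rd", "country", "year"]],
                   columns=["country", "year"], drop_first=True, dtype=float)
model = sm.OLS(df["growth"], sm.add_constant(X)).fit()
print(model.params[["fdi", "rd"]])
```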

Keywords: Central and East European countries (CEEC), economic growth, FDI, panel data.

490 Peak Data Rate Enhancement Using Switched Micro-Macro Diversity in Cellular Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, J. P. Dubois, Yvette Antar

Abstract:

With the exponential growth of cellular users, a new generation of cellular networks is needed to enhance the required peak data rates. The co-channel interference between neighboring base stations inhibits peak data rate increase. To overcome this interference, multi-cell cooperation known as coordinated multipoint transmission is proposed. Such a solution makes use of multiple-input-multiple-output (MIMO) systems under two different structures: Micro- and macro-diversity. In this paper, we study the capacity and bit error rate in cellular networks using MIMO technology. We analyse both micro- and macro-diversity schemes and develop a hybrid model that switches between macro- and micro-diversity in the case of hard handoff based on a cut-off range of signal-to-noise ratio values. We conclude that our hybrid switched micro-macro MIMO system outperforms classical MIMO systems at the cost of increased hardware and software complexity.

Keywords: Cooperative multipoint transmission, ergodic capacity, hard handoff, macro-diversity, micro-diversity, multiple-input-multiple-output systems, MIMO, orthogonal frequency division multiplexing, OFDM.

489 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method

Authors: Atilla Bayram

Abstract:

This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. The manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. Each VGT module, with 3 degrees of freedom (DOF), is a planar parallel manipulator, and the operational planes of the two VGT modules are arranged orthogonally to each other. In addition, the manipulator contains a twist-motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. Together, these three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of the manipulator are obtained; then, based on these equations, the inverse kinematics is solved through an optimization with joint-limit avoidance. The dynamic equations are formed by using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed by using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm, where the genetic algorithm is used only for fine tuning in the compensation of the trajectory tracking errors.

Keywords: Computed force control method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss.

488 Performance Analysis of 5G for Low Latency Transmission Based on Universal Filtered Multi-Carrier Technique and Interleave Division Multiple Access

Authors: A. Asgharzadeh, M. Maroufi

Abstract:

The 5G mobile communication system has drawn more and more attention. The 5G system needs to provide three different types of services: enhanced Mobile BroadBand (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communication (URLLC). Universal Filtered Multi-Carrier (UFMC), Filter Bank Multicarrier (FBMC), and Filtered Orthogonal Frequency Division Multiplexing (f-OFDM) are well-known candidate waveforms for the coming 5G system. Machine-to-machine (M2M) communications are one of the essential applications in 5G and involve exchanging concise messages with very low latency. In UFMC systems, the subcarriers are grouped into subbands, whereas in f-OFDM a single subband covers the entire band. Furthermore, in FBMC, a subband includes only one subcarrier, so the number of subbands equals the number of subcarriers. This paper mainly discusses the performance of UFMC with different parameters for the UFMC system. The paper also shows that UFMC is the best choice, outperforming OFDM in all cases and FBMC for very short packets, while performing similarly to FBMC for long sequences, with channel estimation techniques for Interleave Division Multiple Access (IDMA) systems.

Keywords: UFMC, IDMA, 5G, subband.

487 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers’ needs in this strong market. For customers who are looking to switch service providers, understanding their needs is especially important. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish this. Churn prediction has become a very important topic in machine learning classification for the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify factors of customers’ churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. Then, the study compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
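
A compact scikit-learn sketch of the normalization, feature selection and Gradient Boosting pipeline evaluated with F1 and ROC-AUC, as outlined above. The synthetic data, the number of selected features and the class imbalance are assumptions standing in for the Orange dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the usage-history features (the Orange data are not bundled here).
X, y = make_classification(n_samples=3000, n_features=30, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Normalization -> univariate feature selection -> gradient boosting classifier.
clf = make_pipeline(MinMaxScaler(),
                    SelectKBest(f_classif, k=15),
                    GradientBoostingClassifier(random_state=0))
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("F1 :", round(f1_score(y_te, clf.predict(X_te)), 3))
print("AUC:", round(roc_auc_score(y_te, proba), 3))
```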

Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.

486 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting-state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections is useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find the relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size; an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
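
The hub-based construction can be sketched as a small factor-style model: regress every ROI time series on a few hub time series, then rebuild the ROI covariance from the hub loadings plus idiosyncratic residual variance. All dimensions and time series below are synthetic stand-ins, and the comparison metric is only a crude proxy for the matching procedure in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_roi, n_hub = 200, 90, 6                     # time points, ROIs, hub regions (illustrative)

hubs = rng.standard_normal((T, n_hub))           # hub time series (stand-in for fMRI data)
B_true = rng.standard_normal((n_hub, n_roi))
rois = hubs @ B_true + 0.5 * rng.standard_normal((T, n_roi))

# Multivariate regression of every ROI on the hubs (least-squares hub loadings per ROI).
B, *_ = np.linalg.lstsq(hubs, rois, rcond=None)
resid = rois - hubs @ B

# Factor-style variance-covariance model: hub-driven part plus idiosyncratic noise.
cov_model = B.T @ np.cov(hubs, rowvar=False) @ B + np.diag(resid.var(axis=0))
corr_model = cov_model / np.sqrt(np.outer(np.diag(cov_model), np.diag(cov_model)))

# Crude check: how well the modelled correlations track the empirical ones.
corr_emp = np.corrcoef(rois, rowvar=False)
iu = np.triu_indices(n_roi, k=1)
print("model vs empirical correlation match:", np.corrcoef(corr_model[iu], corr_emp[iu])[0, 1])
```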

Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.

485 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism

Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le

Abstract:

This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to obtain a large range of displacement to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated by using finite element analysis in the ANSYS software. Then, the design parameters of the gripper are optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, the response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used to analyze the sensitivity so as to determine the effect of each parameter on the displacement. The results showed that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be used for high-precision positioning systems.

Keywords: Flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment.

484 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load Forecasting plays a key role in making today's and future Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables several features such as higher sustainability, better quality of service, and affordable electricity tariffs. Load Forecasting is comparatively easy yet effective at larger geographic scales; in Smart Micro Grids, however, the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on Load Forecasting is also evaluated. A new definition of Gain is introduced in this paper, which serves as an indicator of short-term prediction capability over consistent time spans. Two models, 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare these three techniques.
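
One of the three techniques, plain linear regression on lagged loads, can be sketched as below for the 1-hour-ahead and 24-hour-ahead cases. The synthetic daily-cycle load profile and the 24-lag feature window are illustrative assumptions, not the GreenCom data or model design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                    # 60 days of hourly data (synthetic)
load = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

def make_supervised(series, horizon, n_lags=24):
    """Build lagged-load features and the load 'horizon' hours ahead as target."""
    X, y = [], []
    for t in range(n_lags, series.size - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return np.array(X), np.array(y)

for horizon in (1, 24):                                       # 1-hour-ahead and 24-hour-ahead models
    X, y = make_supervised(load, horizon)
    split = int(0.8 * len(X))
    model = LinearRegression().fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{horizon:2d}h-ahead MAE: {mae:.3f}")
```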

Keywords: Short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, Gain.

483 The Impact of Socio-Economic and Type of Religion on the Behavior of Obedience among Arab-Israeli Teenagers

Authors: Sadhana Ghnayem

Abstract:

This article examines the relationship between several socio-economic and background variables of Arab-Israeli families and their effect on the forcing style of conflict management, where teenage children are expected to obey their parents without questioning. The article explores the inter-generational gap and the desire of Arab-Israeli parents to force their teenage children to obey without questioning. The independent variables include: the sex of the parent, religion (Christian or Muslim), income of the parent, years of education of the parent, and the sex of the teenage child. We use the dependent variable of “Obedience Without Questioning”, which is reported twice: by each of the parents as well as by the children. We circulated a questionnaire and collected data from a sample of 180 parents and their adolescent children living in the Galilee area during 2018. In this questionnaire we asked each parent and his/her teenage child whether the latter is expected to follow the instructions of the former without questioning. The outcomes of this article indicate, first, that Christian-Arab families are less authoritarian than Muslim families in demanding sheer obedience from their children. Second, female parents indicate more than male parents that their teenage child indeed obeys without questioning. Third, there is a negative correlation between the variable “Income” and “Obedience Without Questioning”, yet the regression coefficient of this variable is close to zero. Fourth, there is a positive correlation between years of education and obedience as reported by the children; in other words, more educated parents are more likely to demand obedience from their children. Finally, after running the regression, the study also found that the impact of the variables of religion and the sex of the child on the dependent variable of obedience is significant at above the 95% and 90% levels, respectively.

Keywords: Arab-Israeli parents, Obedience, Forcing, Inter-generational gap.

482 Analysis of Hard Turning Process of AISI D3-Thermal Aspects

Authors: B. Varaprasad, C. Srinivasa Rao

Abstract:

In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Besides its many advantages, the hard turning operation must be implemented carefully to achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) based simulation of hard turning using the commercial software DEFORM 3D has been compared to experimental results for stresses, temperatures and tool forces in machining of AISI D3 steel using mixed ceramic inserts (CC6050). In the present analysis, orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. An exhaustive friction modeling at the tool-work interfaces is carried out. Work material flow around the cutting edge is carefully modeled with an adaptive re-meshing simulation capability. In the process simulations, feed rate and cutting speed are constant (i.e., 0.075 mm/rev and 155 m/min), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and the experimental values.

Keywords: Hard-turning, computer-aided engineering, computational machining, finite element method.

481 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology

Authors: Anjian Chen, Joseph C. Chen

Abstract:

This paper studies a case where the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and improve the process capability indices Cp and Cpk for an FDM additive manufacturing process. The baseline Cp is 0.274 and Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the effectiveness of the parameters (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was produced to demonstrate that the results can reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
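
The capability indices quoted above follow the usual definitions Cp = (USL - LSL)/(6σ) and Cpk = min(USL - μ, μ - LSL)/(3σ). The sketch below computes them for hypothetical roughness readings and specification limits, not the measurements reported in the paper.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp and Cpk from measured values and the lower/upper specification limits."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical surface-roughness readings (micrometres) and spec limits for a printed surface.
ra = np.array([8.2, 8.5, 7.9, 8.1, 8.4, 8.0, 8.3, 8.2, 7.8, 8.1])
print(process_capability(ra, lsl=6.0, usl=10.0))
```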

Keywords: Additive manufacturing, fused deposition modeling, surface roughness, Six-Sigma, Taguchi method, 3D printing.

480 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each search loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with CSI SAP software. These data have then been used in the SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with coefficients R2 in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
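
A pure quadratic meta-model keeps the intercept, linear and squared terms of the design variables and omits interaction terms. The sketch below fits such a surrogate to a made-up objective; the two design variables and the cost values are placeholders for the analysis-generated data described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical design variables for 60 candidate frames (e.g. coded section sizes of two
# member groups) and an implicit objective (stand-in for outputs of the analysis/design runs).
X = rng.uniform(-1, 1, size=(60, 2))
cost = (5 + 2 * X[:, 0] + 1.5 * X[:, 1] + 3 * X[:, 0] ** 2 + 0.8 * X[:, 1] ** 2
        + rng.normal(0, 0.1, 60))

# Pure quadratic meta-model: intercept, linear terms and squared terms (no interactions).
features = np.hstack([X, X ** 2])
meta = LinearRegression().fit(features, cost)
print("explicit objective coefficients:", np.round(meta.coef_, 2),
      "R^2 =", round(meta.score(features, cost), 3))

# The fitted surrogate can now be handed to an optimizer instead of re-running the
# structural analysis inside every search iteration.
```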

Keywords: Meta-modal, objective function, steel frames, seismic analysis, design.

479 Seismic Directionality Effects on In-Structure Response Spectra in Seismic Probabilistic Risk Assessment

Authors: S. Jarernprasert, E. Bazan-Zurita, P. C. Rizzo

Abstract:

Currently, seismic probabilistic risk assessments (SPRA) for nuclear facilities use In-Structure Response Spectra (ISRS) in the calculation of fragilities for systems and components. ISRS are calculated via dynamic analyses of the host building subjected to two orthogonal components of horizontal ground motion, each component defined as the median motion in any horizontal direction. Structural engineers apply the components along selected X and Y Cartesian axes, and the ISRS at different locations in the building are also calculated in the X and Y directions. The choice of the X and Y directions is not specified by the ground motion model with respect to geographic coordinates, and is rather arbitrarily selected by the structural engineer. Normally, X and Y coincide with the “principal” axes of the building, in the understanding that this practice is generally conservative. For SPRA purposes, however, it is desirable to remove any conservatism in the estimates of median ISRS. This paper examines the effects of the direction of horizontal seismic motion on the ISRS of a typical nuclear structure. We also evaluate the variability of ISRS calculated along different horizontal directions. Our results indicate that some central measures of the ISRS provide robust estimates that are practically independent of the selection of the directions of the horizontal Cartesian axes.
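
The directionality question can be illustrated by rotating the two recorded horizontal components through a sweep of angles and comparing a simple intensity measure across directions. The synthetic accelerograms below stand in for real ground motions, and no building model or ISRS computation is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2001)                      # 20 s of synthetic ground motion at 100 Hz
acc_x = rng.standard_normal(t.size) * np.exp(-0.1 * t)
acc_y = rng.standard_normal(t.size) * np.exp(-0.1 * t)

# Rotate the two recorded horizontal components to an arbitrary direction theta and take a
# simple intensity measure (peak acceleration) in that direction.
thetas = np.radians(np.arange(0, 180, 5))
peaks = [np.max(np.abs(acc_x * np.cos(th) + acc_y * np.sin(th))) for th in thetas]

print("median over directions:", round(float(np.median(peaks)), 3))
print("max / median ratio    :", round(float(np.max(peaks) / np.median(peaks)), 3))
```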

Keywords: Seismic, Directionality, In-Structure Response Spectra, Probabilistic Risk Assessment.

478 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks

Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly

Abstract:

New-generation mobile communication networks have the ability to support triple play. To this end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge the system capability for high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, the Maximum Weighted Capacity (MWC) based resource allocation at the Physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly the highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling, and both the number of users and the number of subcarriers are investigated against system capacity. The system is optimized for different operational environments: outdoor deployment scenarios as well as indoor deployment scenarios are investigated, and also different channel models. In addition, the effective capacity approach [1] is used not only for providing QoS to different mobile users, but also to increase the total wireless network's throughput.

Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.

477 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the US small wind turbine market potential and to forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work an exponential distribution is used for modeling replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis using the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57080 (confidence interval from 49294 to 64866 at P = 0.95) under an average wind turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast, for which a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) under an average wind turbine lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) under an average lifetime of 20 years. In both cases the explained variance is 95.3%.
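
A minimal sketch of fitting the basic Bass diffusion model to cumulative annual sales with nonlinear least squares. The sales figures and starting values are hypothetical, the AWEA series is not reproduced, and the replacement-purchase and price-effect extensions are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass model with market potential m,
    innovation coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical annual cumulative small-wind-turbine sales (placeholder data).
years = np.arange(1, 13)
cum_sales = np.array([3.2, 7.1, 11.5, 16.8, 22.9, 29.4, 35.8, 41.6,
                      46.5, 50.3, 53.2, 55.4]) * 1e3

params, _ = curve_fit(bass_cumulative, years, cum_sales, p0=[60e3, 0.03, 0.4], maxfev=10000)
m, p, q = params
print(f"estimated market potential m = {m:.0f}, p = {p:.4f}, q = {q:.3f}")
```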

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.

476 Broadband PowerLine Communications: Performance Analysis

Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi

Abstract:

The power line channel is proposed as an alternative for broadband data transmission, especially in developing countries like Tanzania [1]. However, the channel is affected by stochastic attenuation and deep notches which can limit the channel capacity and achievable data rate. Various studies have characterized the channel without establishing exactly the maximum performance and the limitation in data transfer rate, possibly due to the complexity of the channel models used. In this paper the channel performance of medium-voltage, low-voltage and indoor power line channels is presented. In the investigations, orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme is considered for indoor, medium-voltage and low-voltage channels with a typical ten branches, and Golay coding is also applied for the medium-voltage channel. Deep notches are observed at various frequencies in the channel frequency responses, which can reduce the achievable data rate. Nevertheless, it is observed that data rates of up to 240 Mbps are realized at a signal-to-noise ratio of about 50 dB for indoor and low-voltage channels, whereas for medium voltage a typical link with ten branches is affected by strong multipath and coding is required for feasible broadband data transfer.

Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.

475 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication

Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti

Abstract:

In the current era of competitive machinery production, industries place more emphasis on product quality and cost reduction while abiding by pollution-prevention policy. In addressing these concerns, the industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. As such, this research studies a plausible solution to the issue in grinding titanium alloy (Ti-6Al-4V) by using nanolubrication as an alternative to flood grinding. The aim of this research is to evaluate the optimum condition of grinding force and surface roughness using a Minimum Quantity Lubrication (MQL) system to deliver nano-oil at different weight concentrations of Silicon Dioxide (SiO2) mixed with normal mineral oil. The Taguchi Design of Experiment (DoE) method is carried out using a standard Taguchi orthogonal array L16(4^3) to find the optimized combination of SiO2 weight concentration, nozzle orientation and MQL pressure. Surface roughness and grinding force are also analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. Consequently, the best combination of parameters is tested over a period of time and the results are compared with the conventional grinding methods under dry and flood conditions. The results show a positive performance of MQL nanolubrication.

Keywords: Grinding, MQL, precision grinding, Taguchi optimization, titanium alloy.

474 Full-genomic Network Inference for Non-model organisms: A Case Study for the Fungal Pathogen Candida albicans

Authors: Jörg Linde, Ekaterina Buyko, Robert Altwasser, Udo Hahn, Reinhard Guthke

Abstract:

Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is important to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression utilizing the sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources guiding the network inference to a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system which utilizes full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.
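
The core inference step, a sparse regression of each target gene on all other genes, can be sketched as follows. The soft prior-knowledge integration shown here is only a simple column re-weighting heuristic, not the authors' scheme, and the expression matrix is random placeholder data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_genes = 80, 200                       # expression compendium (synthetic stand-in)
expr = rng.standard_normal((n_samples, n_genes))

def infer_regulators(expr, target, alpha=0.1, prior_weights=None):
    """Sparse (LASSO) regression of one target gene on all other genes.
    prior_weights > 1 softly favour presumed regulators by pre-scaling their columns."""
    X = np.delete(expr, target, axis=1)
    if prior_weights is not None:
        X = X * prior_weights                      # crude soft prior-knowledge integration
    coef = Lasso(alpha=alpha, max_iter=5000).fit(X, expr[:, target]).coef_
    regulators = np.flatnonzero(coef)
    # Map column indices back to gene indices (account for the removed target column).
    return np.where(regulators >= target, regulators + 1, regulators)

print("predicted regulators of gene 0:", infer_regulators(expr, target=0))
```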

Keywords: Pathogen, network inference, text-mining, Candida albicans, LASSO, mutual information, reverse engineering, linear regression, modelling.

473 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel

Authors: Joseph C. Chen, Joshua Cox

Abstract:

This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) with S7 heat-treated steel material. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high precision in dimension and the desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology is able to optimize both dimensioning and surface roughness successfully by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches, against a desired dimension of 0.6600 inches; and (2) a surface roughness of 1.7322 microns, which is significantly improved from 2.8160 microns.

Keywords: Taguchi parameter design, surface roughness, dimensional accuracy, Wire EDM.

472 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock’s price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price – portfolio 1. Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two – portfolio 2. Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up – portfolio 3. The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy for all models. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month the return for each stock portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis stock portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market returned 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
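
A toy scikit-learn sketch of the three portfolio constructions described above (cluster-based, PCA-score-based, and logistic-regression-based). The feature matrix, labels and portfolio sizes are synthetic assumptions rather than the Australian stock data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_stocks, n_features = 300, 12                       # technical indicators per stock (synthetic)
X = rng.standard_normal((n_stocks, n_features))
went_up = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_stocks)) > 0   # stand-in label

Xs = StandardScaler().fit_transform(X)

# Portfolio 1: unsupervised - stocks in the cluster with the best average outcome.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)
best_cluster = max(range(3), key=lambda c: went_up[clusters == c].mean())

# Portfolio 2: stocks scoring highest on the first two principal components.
pcs = PCA(n_components=2).fit_transform(Xs)
portfolio2 = np.argsort(pcs[:, 0] + pcs[:, 1])[-3:]

# Portfolio 3: supervised - highest predicted probability of the price going up.
proba = LogisticRegression(max_iter=1000).fit(Xs, went_up).predict_proba(Xs)[:, 1]
portfolio3 = np.argsort(proba)[-3:]

print("cluster portfolio size:", (clusters == best_cluster).sum())
print("top-3 PCA stocks:", portfolio2, "| top-3 logistic stocks:", portfolio3)
```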

Keywords: Machine learning, stock market trading, logistic principal component analysis, automated stock investment system.

471 Experimental Study on the Variation of Young's Modulus of Hollow Clay Brick Obtained from Static and Dynamic Tests

Authors: M. Aboudalle, Le Btth, M. Sari, F. Meftah

Abstract:

In parallel with the appearance of new materials, brick masonry had, and still has, an essential share of the construction market, with new technical challenges in designing bricks to meet additional requirements. Since clay brick masonry is used in structural applications, predicting its performance allows a significant cost reduction in terms of practical experimentation. The behavior of masonry walls depends on the behavior of their elementary components, such as bricks, joints, and coatings. Therefore, it is necessary to consider it at different scales (from the scale of the intrinsic material to the real scale of the wall) and then to develop appropriate models using numerical simulations. The work presented in this paper focuses on the mechanical characterization of the terracotta material at ambient temperature. As a result, the static Young’s modulus obtained from the flexural test shows different values in comparison with the compression test, as well as with the dynamic Young’s modulus obtained from the impulse excitation of vibration test. Moreover, the Young’s modulus varies according to the direction in which samples are extracted, with the values in the extrusion direction diverging from those in the orthogonal directions. Based on these results, hollow bricks can be considered a transversely isotropic bimodulus material.
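
For the static flexural measurement, the Young's modulus of a rectangular coupon in three-point bending follows from E = F·L³/(4·b·h³·δ) in the linear range. The sketch below applies this with hypothetical coupon dimensions and load-deflection readings; the compression and impulse-excitation estimates are not modelled here.

```python
def flexural_modulus(force_n, deflection_m, span_m, width_m, thickness_m):
    """Static Young's modulus (Pa) of a rectangular beam in three-point bending,
    in the linear range: E = F * L^3 / (4 * b * h^3 * d)."""
    return force_n * span_m ** 3 / (4 * width_m * thickness_m ** 3 * deflection_m)

# Hypothetical terracotta coupon cut along the extrusion direction.
E_static = flexural_modulus(force_n=150.0, deflection_m=0.12e-3,
                            span_m=0.10, width_m=0.02, thickness_m=0.01)
print(f"static flexural modulus: {E_static / 1e9:.1f} GPa")
```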

Keywords: Bimodulus material, hollow clay brick, impulse excitation of vibration, transversely isotropic material, Young’s modulus.

470 Analysis of Codebook Based Channel Feedback Techniques for MIMO-OFDM Systems

Authors: Muhammad Rehan Khalid, Ahmed Farhan Hanif, Adnan Ahmed Khan

Abstract:

This paper investigates the performance of a Multiple-Input Multiple-Output (MIMO) feedback system combined with Orthogonal Frequency Division Multiplexing (OFDM). Two types of codebook-based channel feedback techniques are used in this work. The first feedback technique uses a combination of both the long-term and short-term channel state information (CSI) at the transmitter, whereas the second technique uses only the short-term CSI. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The effectiveness of these techniques has been demonstrated through the simulation of a MIMO-OFDM feedback system. The results have been evaluated for 4x4 MIMO channels. Simulation results indicate the benefits of the MIMO-OFDM channel feedback system over one without OFDM: a performance gain of about 3 dB is observed for the MIMO-OFDM feedback system compared to the one without employing OFDM. Hence, MIMO-OFDM becomes an attractive approach for future high-speed wireless communication systems.

Keywords: MIMO systems, OFDM, Codebooks, Channel Feedback

469 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed, and the best result of each technique was then compared. Initially, the data were coded using thermometer coding (for numerical attributes) or dummy coding (for nominal attributes). The methods were evaluated for each parameter and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false-positive classifications. All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.
