Search results for: Active linear control
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5880


1950 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard mean-variance portfolio optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. To find an efficient portfolio with this model, first a quadratic programming problem of similar size to the Markowitz model, and then a linear programming problem, have to be solved. Furthermore, the paper investigates the impact of transaction costs on the efficient frontier. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
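
As a rough illustration of the kind of mean-variance rebalancing model with asymmetric, security-specific transaction costs described above, here is a minimal convex-optimization sketch (not the authors' exact two-stage quadratic/linear formulation); the returns, covariance, cost rates and initial portfolio below are hypothetical placeholders.

```python
import numpy as np
import cvxpy as cp

# Hypothetical illustrative inputs (not data from the paper)
mu = np.array([0.08, 0.12, 0.10])          # expected returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.15, 0.03],
                  [0.01, 0.03, 0.12]])      # covariance matrix
w0 = np.array([0.4, 0.4, 0.2])              # initial portfolio weights
c_buy = np.array([0.010, 0.015, 0.012])     # per-security buying costs
c_sell = np.array([0.012, 0.010, 0.015])    # per-security selling costs
target_return = 0.10

w = cp.Variable(3)                           # rebalanced weights
b = cp.Variable(3, nonneg=True)              # amounts bought
s = cp.Variable(3, nonneg=True)              # amounts sold

constraints = [
    w == w0 + b - s,                         # rebalancing identity
    cp.sum(w) + c_buy @ b + c_sell @ s == 1, # costs are paid out of the portfolio
    mu @ w >= target_return,
    w >= 0,
]
problem = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)), constraints)
problem.solve()
print("rebalanced weights:", np.round(w.value, 4))
```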

Keywords: Croatian capital market, Fractional quadratic programming, Markowitz model, Portfolio optimization, Transaction costs.

1949 Stature Prediction Model Based On Hand Anthropometry

Authors: Arunesh Chandra, Pankaj Chandna, Surinder Deswal, Rajesh Kumar Mishra, Rajender Kumar

Abstract:

The arm length, hand length, hand breadth and middle finger length of 1540 right-handed industrial workers of Haryana state were used to assess the relationship between the upper limb dimensions and stature. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then simple and multiple linear regression models were used to estimate stature using SPSS (version 17). There was a positive correlation between upper limb measurements (hand length, hand breadth, arm length and middle finger length) and stature (p < 0.01), which was highest for hand length. The accuracy of stature prediction ranged from ± 54.897 mm to ± 58.307 mm. The use of multiple regression equations gave better results than simple regression equations. This study provides new forensic standards for stature estimation from the upper limb measurements of male industrial workers of Haryana (India). The results indicate that stature can be determined accurately from hand dimensions when only the upper limb is available, as may happen in explosions, train or plane crashes, or with mutilated bodies. The regression formulae derived in this study will be useful to anatomists, archaeologists, anthropologists, design engineers and forensic scientists for reliable prediction of stature.
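
A minimal sketch of the simple and multiple linear regression step described above, using hypothetical measurement arrays (in mm) in place of the Haryana worker data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical illustrative measurements (mm), not the study data
hand_length  = np.array([185., 192., 178., 201., 188., 195.])
hand_breadth = np.array([ 82.,  85.,  79.,  88.,  84.,  86.])
arm_length   = np.array([560., 575., 548., 590., 566., 580.])
stature      = np.array([1680., 1725., 1640., 1770., 1700., 1745.])

# Simple linear regression: stature ~ hand length
simple = LinearRegression().fit(hand_length.reshape(-1, 1), stature)
print("simple model: stature = %.2f + %.2f * hand_length"
      % (simple.intercept_, simple.coef_[0]))

# Multiple linear regression: stature ~ hand length + hand breadth + arm length
X = np.column_stack([hand_length, hand_breadth, arm_length])
multiple = LinearRegression().fit(X, stature)
residuals = stature - multiple.predict(X)
print("multiple model standard error of estimate (mm): %.2f"
      % residuals.std(ddof=X.shape[1] + 1))
```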

Keywords: Anthropometric dimensions, Forensic identification, Industrial workers, Stature prediction.

1948 A Structural Constitutive Model for Viscoelastic Rheological Behavior of Human Saphenous Vein Using Experimental Assays

Authors: Rassoli Aisa, Abrishami Movahhed Arezu, Faturaee Nasser, Seddighi Amir Saeed, Shafigh Mohammad

Abstract:

Cardiovascular diseases are among the most common causes of mortality in developed countries. Coronary artery abnormalities and carotid artery stenosis, also known as silent death, are among these diseases. One of the treatment methods for these diseases is to create a deviatory pathway that conducts blood into the heart through bypass surgery. The saphenous vein is usually used in this surgery to create the deviatory pathway. Unfortunately, a repeat surgery becomes necessary after some years if the mismatch between the mechanical properties of the graft tissue and/or applied prostheses and those of the host tissue is ignored. The objective of the present study is to clarify the viscoelastic behavior of human saphenous tissue. Stress relaxation tests in the circumferential and longitudinal directions were performed on this vein by applying 20% and 50% strains. Considering the stress relaxation curves obtained from these tests and the coefficients of the standard solid model, it was demonstrated that the saphenous vein has a non-linear viscoelastic behavior. Thereafter, a fit with Fung’s quasilinear viscoelastic (QLV) model was performed based on the stress relaxation time curves. Finally, the coefficients of Fung’s QLV model, which models the behavior of saphenous tissue very well, are presented.
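
As a hedged illustration of fitting relaxation behaviour from stress relaxation data, the sketch below fits a standard linear solid (Zener) relaxation curve with SciPy; the time/stress values and parameter names are purely illustrative, and Fung's QLV reduced relaxation function could be fitted in the same way.

```python
import numpy as np
from scipy.optimize import curve_fit

def standard_solid(t, sigma_inf, sigma_0, tau):
    """Stress relaxation of a standard linear solid under a step strain."""
    return sigma_inf + (sigma_0 - sigma_inf) * np.exp(-t / tau)

# Hypothetical relaxation data (time in s, stress in kPa), not the vein measurements
t = np.linspace(0, 300, 31)
stress = standard_solid(t, 40.0, 95.0, 60.0) \
         + np.random.default_rng(0).normal(0, 1.0, t.size)

popt, _ = curve_fit(standard_solid, t, stress, p0=[30.0, 100.0, 50.0])
print("sigma_inf=%.1f kPa, sigma_0=%.1f kPa, tau=%.1f s" % tuple(popt))
```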

Keywords: Fung’s quasilinear viscoelastic (QLV) model, strain rate, stress relaxation test, uniaxial tensile test, viscoelastic behavior.

1947 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) is used for classification of the diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming a non-linearly separable dataset into a more linearly separable one. The Pima Indians Diabetes dataset has two classes: normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, fuzzy C-means clustering is used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset is then weighted according to the ratio of each attribute's mean to its center. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-nearest neighbor (k-NN) classifiers are used for classifying the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
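
A minimal sketch of the FCM-based attribute weighting idea, assuming the "center" of an attribute is taken as the mean of its fuzzy C-means cluster centers (an interpretation, not necessarily the author's exact definition); the weighted data can then be fed to an SVM or k-NN classifier as in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal 1-D fuzzy C-means; returns the cluster centers of one attribute."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)
    return centers

def fcm_attribute_weighting(X, c=2):
    """Weight each attribute by the ratio of its mean to the mean of its FCM centers."""
    Xw = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        centers = fuzzy_cmeans_1d(X[:, j].astype(float), c=c)
        weight = X[:, j].mean() / centers.mean()
        Xw[:, j] = X[:, j] * weight
    return Xw

# Usage with a hypothetical feature matrix X and labels y (e.g. Pima Indians diabetes):
# Xw = fcm_attribute_weighting(X)
# print(cross_val_score(SVC(kernel="rbf"), Xw, y, cv=10).mean())
```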

Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.

1946 Designing Social Care Policies in the Long Term: A Study Using Regression, Clustering and Backpropagation Neural Nets

Authors: Sotirios Raptis

Abstract:

Linking social needs to social classes using different criteria may lead to misuse of social services. The paper discusses using machine learning (ML) and Neural Networks (NNs) to link public services in Scotland over the long term, and argues that this can reduce service costs by connecting the resources needed by groups requiring similar services. The paper combines typical regression models with clustering and cross-correlation as complementary constituents to predict demand. Insurance companies and public policymakers can package linked services, such as those offered to the elderly or to low-income people, over the longer term. The work is based on public data on 22 services offered by Public Health Scotland (PHS) and the Scottish Government (SG) from 1981 to 2019, broken into 110 yearly series called factors, and uses Linear Regression (LR), Autoregression (ARMA) and three types of back-propagation (BP) Neural Networks (BPNN) to link them under specific conditions. Relationships were found between smoking-related healthcare provision, mental health-related health services, and epidemiological weight as measured by Primary 1 (Education) Body Mass Index (BMI) in children. Principal component analysis (PCA) found 11 significant factors, while C-Means (CM) clustering gave 5 major factor clusters.

Keywords: Probability, cohorts, data frames, services, prediction.

1945 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Authors: Zukisa Nante, Wang Zenghui

Abstract:

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images from 40 classes with 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; the redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained using the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, the Support Vector Machine (SVM), and the k-Nearest Neighbour (kNN). During experimentation, the suggested method after 90 epochs achieved the highest performance: 99% accuracy, 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
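
A rough sketch of the PCA, K-Means, classifier pipeline on the ORL/Olivetti faces; a small dense network (scikit-learn's MLPClassifier) stands in here for the CNN, and the component and cluster counts are illustrative choices rather than the paper's settings.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# The ORL faces are distributed with scikit-learn as the Olivetti faces dataset.
faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# 1) PCA compresses each face, removing redundancy while preserving variance.
pca = PCA(n_components=80, whiten=True, random_state=0).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# 2) K-Means on the PCA-compressed data; distances to the 40 centres act as features.
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(Z_train)
F_train, F_test = km.transform(Z_train), km.transform(Z_test)

# 3) A small dense network (standing in here for the CNN) classifies the features.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
clf.fit(F_train, y_train)
print("accuracy: %.3f" % accuracy_score(y_test, clf.predict(F_test)))
```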

Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.

1944 Integrating Hedgerow into Town Planning: A Framework for Sustainable Residential Development

Authors: Siqing Chen

Abstract:

The vast rural landscape in the southern United States is conspicuously characterized by hedgerow trees or groves. The patchwork landscape of fields surrounded by high hedgerows is a traditional and familiar feature of the American countryside. Hedgerows are in effect linear strips of trees, groves, or woodlands, which are often critical habitats for wildlife and important for the visual quality of the landscape. As landscape interfaces, hedgerows define the spaces in the landscape, give the landscape life and meaning, and enrich the ecologies and cultural heritage of the American countryside. Although hedgerows were originally intended as fences and to mark property and townland boundaries, they are not merely natural or man-made additions to the landscape; they have gradually become "naturalized" into the landscape, deeply rooted in rural culture, and now form an important component of the southern American rural environment. However, due to the ever-expanding real estate industry and high demand for new residential development, substantial areas of authentic hedgerow landscape in the southern United States are being urbanized. Using Hudson Farm as an example, this study illustrates guidelines for how hedgerows can be integrated into town planning as green infrastructure and landscape interface to innovate and direct sustainable land use, and suggests ways in which such vernacular landscapes can be preserved and integrated into new development without losing their contextual inspiration.

Keywords: Hedgerow, Town planning, Sustainable design, Ecological infrastructure

1943 Some Mechanical Properties of Cement Stabilized Malaysian Soft Clay

Authors: Meei-Hoan Ho, Chee-Ming Chan

Abstract:

Soft clays are defined as cohesive soils whose water content is higher than their liquid limit. Soil-cement mixing is therefore adopted to improve the ground conditions by enhancing the strength and deformation characteristics of the soft clays. For the above-mentioned reasons, a series of laboratory tests was carried out to study some fundamental mechanical properties of cement stabilized soft clay. The test specimens were prepared by varying the proportion of ordinary Portland cement added to the soft clay sample retrieved from the test site of RECESS (Research Centre for Soft Soil). Comparisons were made between homogeneous and columnar system specimens by relating the effects of cement stabilization at 0, 5 and 10% cement content and curing for 3, 28 and 56 days. The mechanical properties examined included one-dimensional compressibility and undrained shear strength. For both properties, homogeneous and columnar system specimens were prepared to examine the effect of different cement contents and curing periods on the stabilized soil. The one-dimensional compressibility test was conducted using an oedometer, while a direct shear box was used for measuring the undrained shear strength. The higher the cement content, the greater the enhancement of the yield stress and the decrease of the compression index. The cement content of a specimen is a more influential parameter than the curing period.

Keywords: Soft soil, Oedometer, Direct shear box, Cement stabilised column.

1942 Sensor Optimisation via H∞ Applied to a MAGLEV Suspension System

Authors: Konstantinos Michail, Argyrios Zolotas, Roger Goodall, John Pearson

Abstract:

In this paper a systematic method via H∞ control design is proposed to select a sensor set that satisfies a number of input criteria for a MAGLEV suspension system. The proposed method recovers a number of optimised controllers for each possible sensor set that satisfies the performance and constraint criteria using evolutionary algorithms.

Keywords: H-infinity, Sensor optimisation, Genetic algorithms, MAGLEV vehicles

1941 Competitive Adsorption of Heavy Metals onto Natural and Activated Clay: Equilibrium, Kinetics and Modeling

Authors: L. Khalfa, M. Bagane, M. L. Cervera, S. Najjar

Abstract:

The aim of this work is to present a low cost adsorbent for removing toxic heavy metals from aqueous solutions. We therefore investigate the efficiency of natural clay minerals collected from southern Tunisia, and of their modified form obtained using sulfuric acid, in the removal of the toxic metal ions Zn(II) and Pb(II) from synthetic wastewater solutions. The obtained results indicate that metal uptake is pH-dependent and maximum removal was found to occur at pH 6. Adsorption equilibrium is reached very rapidly, after 90 min for both metal ions studied. The kinetics results show that the pseudo-second-order model describes the adsorption and that intraparticle diffusion is the rate-limiting step. Treatment of the natural clay with sulfuric acid creates more active sites and increases the surface area, so it increased the adsorbed quantities of lead and zinc in single and binary systems. The competitive adsorption study showed that the uptake of lead was inhibited in the presence of 10 mg/L of zinc, indicating an antagonistic binary adsorption mechanism. These results reveal that clay is an effective natural material for removing lead and zinc, in single and binary systems, from aqueous solution.
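
For readers unfamiliar with the kinetic model mentioned above, a minimal sketch of fitting the integrated pseudo-second-order equation q(t) = k2*qe^2*t / (1 + k2*qe*t) to hypothetical uptake data:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) for the pseudo-second-order kinetic model."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# Hypothetical contact-time data (min) and uptake (mg/g), not the Tunisian clay results
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
q = np.array([8.1, 12.9, 17.6, 19.8, 21.6, 22.4, 23.1, 23.4])

(qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])
print("qe = %.2f mg/g, k2 = %.4f g/(mg*min)" % (qe, k2))
```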

Keywords: Lead, zinc, heavy metals, activated clay, kinetic study, competitive adsorption, modeling.

1940 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims at developing an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are the recognition and management of context information. Measuring many parameters during the transportation period and properly monitoring driver work has become a problem. The number of vehicles passing a given point per unit time can be evaluated for drivers in some situations. The collected data are mainly used to establish new trips. The data flow is more complex in urban areas, where the movement of freight is reported in detail, including information at street level. When traffic density is extremely high, as in congestion, and traffic speed is very low, data transmission reaches its peak. Different data sets are generated depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks and mode-based delivery networks; the last includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above network types to another, more data may be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers that includes assessment of the multi-component infrastructure needed for freight delivery according to the network type. Such a methodology is required to evaluate data flow conditions and overloads, and to minimize time gaps in data reporting. The results obtained show that the proposed methodological approach can support management and decision-making processes by incorporating networking specifics and helping to minimize overloads in data reporting.

Keywords: Transportation networks, freight delivery, data flow, monitoring, e-services.

1939 Modeling and Analysis of Concrete Slump Using Hybrid Artificial Neural Networks

Authors: Vinay Chandwani, Vinay Agrawal, Ravindra Nagar

Abstract:

Artificial Neural Networks (ANNs) trained using the back-propagation (BP) algorithm are commonly used for modeling material behavior associated with non-linear, complex or unknown interactions among the material constituents. Despite the multidisciplinary applications of back-propagation neural networks (BPNNs), the BP algorithm possesses the inherent drawbacks of getting trapped in local minima and converging slowly to a global optimum. The paper presents a hybrid artificial neural network and genetic algorithm approach for modeling the slump of ready mix concrete based on its design mix constituents. A genetic algorithm (GA) global search is employed to evolve the initial weights and biases for training the neural networks, which are then fine-tuned using the BP algorithm. The study showed that the hybrid ANN-GA model provided consistent predictions in comparison to the commonly used BPNN model, and was able to reach the desired performance goal more quickly. Apart from modeling the slump of ready mix concrete, the synaptic weights of the neural networks were harnessed to analyze the relative importance of the concrete design mix constituents on the slump value. The sand and water constituents of the concrete design mix were found to exert the greatest influence on the concrete slump value.
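
A hedged sketch of the hybrid idea, evolutionary initialization of network weights followed by gradient-based fine-tuning; SciPy's differential evolution stands in for the GA, an L-BFGS refinement stands in for back-propagation, and the mix-design data are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
X = rng.random((60, 4))                              # hypothetical normalized mix proportions
slump = 40 + 30 * X[:, 1] + 25 * X[:, 2] + rng.normal(0, 2, 60)
y = (slump - slump.mean()) / slump.std()             # standardized synthetic slump target

H = 5                                  # hidden neurons
n_w = 4 * H + H + H + 1                # weights + biases of a 4-H-1 network

def unpack(theta):
    W1 = theta[:4 * H].reshape(4, H)
    b1 = theta[4 * H:4 * H + H]
    W2 = theta[4 * H + H:4 * H + 2 * H]
    b2 = theta[-1]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    return np.mean((pred - y) ** 2)

# 1) Evolutionary global search (standing in for the GA) evolves initial weights.
result = differential_evolution(mse, bounds=[(-5, 5)] * n_w, seed=0, maxiter=100)

# 2) Gradient-based fine-tuning from the evolved starting point (standing in for BP).
refined = minimize(mse, result.x, method="L-BFGS-B")
print("MSE after evolution: %.3f, after fine-tuning: %.3f" % (result.fun, refined.fun))
```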

Keywords: Artificial neural networks, Genetic algorithms, Back-propagation algorithm, Ready Mix Concrete, Slump value.

1938 Battery/Supercapacitor Emulator for Chargers Functionality Testing

Authors: S. Farag, A. Kupeman

Abstract:

In this paper, the design of a solid-state battery/supercapacitor emulator based on a DC-DC boost converter is described. The emulator mimics the charging behavior of any storage device according to a predefined behavior set by the user. The device is operated by a two-level control structure: a high-level emulating controller and a low-level input voltage controller. Simulation and experimental results are shown to demonstrate the emulator operation.

Keywords: Battery, Charger, Energy, Storage, Supercapacitor.

1937 MHD Boundary Layer Flow of a Nanofluid Past a Wedge Shaped Wick in Heat Pipe

Authors: Ziya Uddin

Abstract:

This paper deals with the theoretical and numerical investigation of magnetohydrodynamic boundary layer flow of a nanofluid past a wedge shaped wick in a heat pipe used for the cooling of electronic components and different types of machines. To incorporate the effects of nanoparticle diameter, concentration of nanoparticles in the pure fluid, the nanothermal layer formed around the nanoparticles, Brownian motion of nanoparticles, etc., appropriate models are used for the effective thermal and physical properties of nanofluids. To model the rotation of nanoparticles inside the base fluid, microfluidics theory is used. In this investigation, ethylene glycol (EG) based nanofluids are taken into account. The non-linear equations governing the flow and heat transfer are solved using a very effective particle swarm optimization technique along with the Runge-Kutta method. The values of the heat transfer coefficient are found for the different parameters involved in the formulation, viz. nanoparticle concentration, nanoparticle size, magnetic field and wedge angle. It is found that the wedge angle, presence of a magnetic field, nanoparticle size and nanoparticle concentration have prominent effects on the fluid flow and heat transfer characteristics for the considered configuration.

Keywords: Heat transfer, Heat pipe, numerical modeling, nanofluid applications, particle swarm optimization, wedge shaped wick.

1936 Finite Element Modelling of a 3D Woven Composite for Automotive Applications

Authors: Ahmad R. Zamani, Luigi Sanguigno, Angelo R. Maligno

Abstract:

A 3D woven composite, designed for automotive applications, is studied using Abaqus Finite Element (FE) software suite. Python scripts were developed to build FE models of the woven composite in Complete Abaqus Environment (CAE). They can read TexGen or WiseTex files and automatically generate consistent meshes of the fabric and the matrix. A user menu is provided to help define parameters for the FE models, such as type and size of the elements in fabric and matrix as well as the type of matrix-fabric interaction. Node-to-node constraints were imposed to guarantee periodicity of the deformed shapes at the boundaries of the representative volume element of the composite. Tensile loads in three axes and biaxial loads in x-y directions have been applied at different Fibre Volume Fractions (FVFs). A simple damage model was implemented via an Abaqus user material (UMAT) subroutine. Existing tools for homogenization were also used, including voxel mesh generation from TexGen as well as Abaqus Micromechanics plugin. Linear relations between homogenised elastic properties and the FVFs are given. The FE models of composite exhibited balanced behaviour with respect to warp and weft directions in terms of both stiffness and strength.

Keywords: 3D woven composite, meso-scale finite element modelling, homogenisation of elastic material properties, Abaqus Python scripting.

1935 Simulation of Hydrogenated Boron Nitride Nanotube’s Mechanical Properties for Radiation Shielding Applications

Authors: Joseph E. Estevez, Mahdi Ghazizadeh, James G. Ryan, Ajit D. Kelkar

Abstract:

Radiation shielding is an obstacle in long duration space exploration. Boron Nitride Nanotubes (BNNTs) have attracted attention as an additive to radiation shielding material due to the large neutron capture cross section of B10. B10 has an effective neutron capture cross section suitable for low energy neutrons ranging from 10^-5 to 10^4 eV, and hydrogen is effective at slowing down high energy neutrons. Hydrogenated BNNTs are therefore potentially an ideal nanofiller for radiation shielding composites. We use Molecular Dynamics (MD) simulation via Accelrys Materials Studio 6.0 to model the Young's modulus of hydrogenated BNNTs. An extrapolation technique was employed to determine the Young's modulus from the deformation of the nanostructure at its theoretical density. A linear regression was used to extrapolate the data to the theoretical density of 2.62 g/cm3. Simulation data show that hydrogenated BNNTs experience an 11% decrease in Young's modulus for (6,6) BNNTs and an 8.5% decrease for (8,8) BNNTs compared to non-hydrogenated BNNTs. Hydrogenated BNNTs are a viable option as a nanofiller for radiation shielding nanocomposite materials for long range and long duration space exploration.

Keywords: Boron Nitride Nanotube, Radiation Shielding, Young Modulus, Atomistic Modeling.

1934 A Case Study on Management of Coal Seam Gas By-Product Water

Authors: Mojibul Sajjad, Mohammad G. Rasul, Md. Sharif Imam Ibne Amir

Abstract:

The rate of natural gas dissociation from the coal matrix depends on depressurization of the reservoir through removal of the cleat water from the coal seam. These waters are similar to brine and have been in place for very many years. To improve connectivity through fracking/fracturing, high pressure liquids are pumped into the coal body. A significant quantity of accumulated water, a combined mixture of cleat water and fracking fluids (back-flow water), is pumped out through the gas well. In Queensland, Australia, the Coal Seam Gas (CSG) industry is booming, and an estimated 30,000 wells would be active for CSG production over a forecast life span of 30 years. Integrated water management, along with water softening programs, is practiced for subsequent treatment and later discharge to a nearby surface water catchment. Water treatment is an important part of the CSG industry. A case study of a CSG site and a review of the test results are discussed to assess the standards and practices for management of CSG by-product water and its subsequent disposal. This study was directed toward (i) water management and the softening process in the Spring Gully CSG field, (ii) comparative analysis of the experimental study against standards, and (iii) disposal of the treated water. The study also considered alternative usages and their impact on vegetation and living species, as well as long term effects.

Keywords: Coal Seam Gas (CSG), Cleat Water, Hydro-Fracking, Desalination, Reverse Osmosis.

1933 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component

Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo

Abstract:

Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome such a problem, there is the accelerated life-testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions. These models are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the "Maxwell Distribution Law." In this paper we apply a combined approach of sequential life testing and accelerated life testing to a low-alloy high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution is the three-parameter Inverse Weibull model. To estimate the three parameters of the Inverse Weibull model we use a maximum likelihood approach for censored failure data, assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply a sequential life testing procedure with a truncation mechanism to the expected normal failure times. An example illustrates the application of this procedure.
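
A minimal sketch of the censored maximum likelihood step for a three-parameter Inverse Weibull model, using hypothetical failure and right-censoring times (the sequential testing and truncation mechanism are not reproduced here):

```python
import numpy as np
from scipy.stats import invweibull
from scipy.optimize import minimize

# Hypothetical accelerated-test data (hours): observed failures and right-censored units
failures = np.array([212., 265., 301., 342., 398., 455., 510.])
censored = np.array([600., 600., 600.])

def neg_log_likelihood(params):
    shape, loc, scale = params
    if shape <= 0 or scale <= 0 or loc >= failures.min():
        return np.inf                         # keep the search inside the valid region
    ll = invweibull.logpdf(failures, shape, loc=loc, scale=scale).sum()
    ll += invweibull.logsf(censored, shape, loc=loc, scale=scale).sum()
    return -ll

result = minimize(neg_log_likelihood, x0=[2.0, 0.0, 300.0], method="Nelder-Mead")
print("shape=%.3f  location=%.1f  scale=%.1f" % tuple(result.x))
```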

Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.

1932 Kinematic and Dynamic Analysis of a Lower Limb Exoskeleton

Authors: Tawakal Hasnain Baluch, Adnan Masood, Javaid Iqbal, Umer Izhar, Umar Shahbaz Khan

Abstract:

This paper provides the kinematic and dynamic analysis of a lower limb exoskeleton. The forward and inverse kinematics of the proposed exoskeleton are performed using the Denavit-Hartenberg method. The torques required for the actuators are calculated using the Lagrangian formulation technique. This research can be used to design the control of the proposed exoskeleton.
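
A small sketch of Denavit-Hartenberg forward kinematics of the kind described above; the three-joint chain and link lengths are hypothetical illustrative values, not the exoskeleton's parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from classic Denavit-Hartenberg parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(dh_table):
    """Chain the per-joint transforms; dh_table rows are (theta, d, a, alpha)."""
    T = np.eye(4)
    for row in dh_table:
        T = T @ dh_transform(*row)
    return T

# Hypothetical 3-joint sagittal-plane chain (hip, knee, ankle), lengths in metres
dh_table = [(np.deg2rad(20),  0.0, 0.44, 0.0),
            (np.deg2rad(-35), 0.0, 0.43, 0.0),
            (np.deg2rad(10),  0.0, 0.07, 0.0)]
print("end-effector position:", np.round(forward_kinematics(dh_table)[:3, 3], 3))
```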

Keywords: Dynamic Analysis, Exoskeleton, Kinematic Analysis, Lower Limb, Rehabilitation Robotics

1931 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations

Authors: Belkacem Meziane

Abstract:

The original 3D Lorenz-Haken equations, which describe laser dynamics, are converted into two second-order differential equations, out of which the so-far missing mathematics is extracted. Leaning on high-order trigonometry, important outcomes are derived: a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control parameter space that governs self-pulsing.

Keywords: chaos, Lorenz-Haken equations, laser dynamics, nonlinearities

1930 Hypolipidemic and Antioxidant Effects of Black Tea Extract and Quercetin in Atherosclerotic Rats

Authors: Wahyu Widowati, Hana Ratnawati, Tjandrawati Mozefis, Dwiyati Pujimulyani, Yelliantty Yelliantty

Abstract:

Background: Atherosclerosis is the main cause of cardiovascular disease (CVD), with a complex and multifactorial process including atherogenic lipoproteins, oxidized low density lipoprotein (LDL), endothelial dysfunction, plaque stability, vascular inflammation, thrombotic and fibrinolytic disorders, exercise and genetic factors. Epidemiological studies have shown tea consumption to be inversely associated with the development and progression of atherosclerosis. The research objectives were to elucidate the hypolipidemic and antioxidant effects, as well as the ability to improve coronary artery histopathology, of black tea extract (BTE) and quercetin in atherosclerotic rats. Methods: The antioxidant activity was determined using the superoxide dismutase (SOD) activity of serum and the lipid peroxidation product (malondialdehyde, MDA) of plasma, together with the lipid profile including total cholesterol, LDL, triglycerides (TG) and high density lipoprotein (HDL) of atherosclerotic rats. To induce atherosclerosis, rats were given cholesterol and cholic acid in feed for ten weeks, until the rats showed atherosclerotic symptoms with narrowed arteries and foam cells in the artery wall. After the rats developed atherosclerosis, the high cholesterol feed and cholic acid were stopped and the rats were given BTE at 450, 300 or 150 mg/kg body weight (BW) daily, or quercetin at 15, 10 or 5 mg/kg BW daily, compared to rats given vitamin E 60 mg/kg BW, simvastatin 2.7 mg/kg BW or probucol 30 mg/kg BW daily, for 21 days (first treatment) and 42 days (second treatment), with a negative control (normal feed) and a positive control (atherosclerotic rats). Results: BTE and quercetin lowered total cholesterol, triglycerides, LDL and MDA and increased HDL and SOD, with results comparable to simvastatin and probucol for both the 21-day and 42-day treatments, and also improved coronary artery histopathology. Conclusions: BTE and quercetin have hypolipidemic and antioxidant effects, and improve coronary artery histopathology in atherosclerotic rats.

Keywords: Black tea, quercetin, atherosclerosis, antioxidant, hypolipidemic, cardiovascular disease.

1929 Non-Methane Hydrocarbons Emission during the Photocopying Process

Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana

Abstract:

The proliferation of electronic equipment in photocopying environments has not only improved work efficiency, but also changed indoor air quality. Considering the amount of photocopying performed, indoor air quality might be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to play an important role in air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in one photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days, during the 8-hour working time, in three time intervals, at three different sampling points. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389 and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the F statistic indicated that there was no statistically significant difference between the concentration levels of non-methane hydrocarbons and the microclimate parameters. The results showed that the variable could be represented by the general regression model: y = b0 + b1xi1 + b2xi2. The obtained regression equations make it possible to measure the quantitative agreement between the variables and thus gain more accurate knowledge of their mutual relations.
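
A minimal sketch of fitting the general regression model y = b0 + b1xi1 + b2xi2 by ordinary least squares; the temperature, humidity and concentration values are hypothetical placeholders, not the measured data.

```python
import numpy as np

# Hypothetical samples: NMHC concentration (mg/m3) vs. temperature (C) and humidity (%)
temperature = np.array([22.1, 23.4, 24.0, 24.8, 25.5, 26.3, 27.0, 27.8])
humidity    = np.array([41.0, 43.5, 45.2, 44.8, 47.0, 48.3, 50.1, 51.6])
nmhc        = np.array([0.82, 0.86, 0.91, 0.93, 0.97, 1.02, 1.05, 1.10])

# Fit y = b0 + b1*x1 + b2*x2 by ordinary least squares
X = np.column_stack([np.ones_like(temperature), temperature, humidity])
coef, _, _, _ = np.linalg.lstsq(X, nmhc, rcond=None)
fitted = X @ coef
r2 = 1 - ((nmhc - fitted) ** 2).sum() / ((nmhc - nmhc.mean()) ** 2).sum()
print("b0=%.3f  b1=%.3f  b2=%.3f  R^2=%.3f" % (coef[0], coef[1], coef[2], r2))
```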

Keywords: Indoor air quality, multiple regression analysis, non-methane hydrocarbons, photocopying process.

1928 Transient Thermal Modeling of an Axial Flux Permanent Magnet (AFPM) Machine Using a Hybrid Thermal Model

Authors: J. Hey, D. A. Howey, R. Martinez-Botas, M. Lamperth

Abstract:

This paper presents the development of a hybrid thermal model for the EVO Electric AFM 140 Axial Flux Permanent Magnet (AFPM) machine as used in hybrid and electric vehicles. The adopted approach is based on a hybrid lumped parameter and finite difference method. The proposed method divides each motor component into regular elements which are connected together in a thermal resistance network representing all the physical connections in all three dimensions. The element shape and size are chosen according to the component geometry to ensure consistency. The fluid domain is lumped into one region with averaged heat transfer parameters connecting it to the solid domain. Some model parameters are obtained from Computational Fluid Dynamics (CFD) simulation and empirical data. The hybrid thermal model is described by a set of coupled linear first-order differential equations which is discretised and solved iteratively to obtain the temperature profile. The computational cost is low and thus the model is suitable for transient temperature predictions. The maximum error in temperature prediction is 3.4% and the mean error is consistently lower than the uncertainty in the measurements. The details of the model development, temperature predictions and suggestions for design improvements are presented in this paper.
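
As a rough illustration of a lumped thermal resistance network solved by an implicit time-stepping scheme, the sketch below uses a hypothetical three-node network; the capacitances, conductances and losses are placeholders, not the AFM 140 model parameters.

```python
import numpy as np

# Hypothetical 3-node network (winding, stator core, housing) with an ambient link
C = np.array([120.0, 450.0, 900.0])          # thermal capacitances (J/K)
G = np.array([[ 3.0, -3.0,  0.0],            # conductance matrix (W/K):
              [-3.0,  8.0, -5.0],            # off-diagonals are -1/R between nodes,
              [ 0.0, -5.0,  7.0]])           # node 3 diagonal includes 2 W/K to ambient
g_amb = np.array([0.0, 0.0, 2.0])            # conductance to ambient from each node
q = np.array([150.0, 40.0, 0.0])             # internal heat generation (W)
T_amb, dt, steps = 25.0, 1.0, 3600           # ambient (C), time step (s), 1 hour

T = np.full(3, T_amb)
A = np.diag(C / dt) + G                      # implicit-Euler system matrix
for _ in range(steps):
    rhs = C / dt * T + q + g_amb * T_amb
    T = np.linalg.solve(A, rhs)              # solve (C/dt + G) T_next = rhs
print("temperatures after 1 h (C):", np.round(T, 1))
```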

Keywords: Electric vehicle, hybrid thermal model, transient temperature prediction, Axial Flux Permanent Magnet machine.

1927 Bifurcation Study and Parameter Analyses Boost Converter

Authors: S. Ben Said, K. Ben Saad, M. Benrejeb

Abstract:

This paper deals with bifurcation analysis in a current-programmed DC/DC boost converter and the exhibition of chaotic behavior. This phenomenon occurs due to variation of a set of the studied circuit parameters (the input voltage and a reference current). Two different types of bifurcation paths have been observed as part of another bifurcation arising from variation of a suitably chosen parameter.

Keywords: Bifurcation, Chaos, Boost converter, Current- programmed control, Initial parameters.

1926 A Novel Optimized JTAG Interface Circuit Design

Authors: Chenguang Guo, Lei Chen, Yanlong Zhang

Abstract:

This paper describes a novel optimized JTAG interface circuit between a JTAG controller and a target IC. Able to access JTAG using only one or two pins, this circuit does not change the original boundary scan test frequency of the target IC. Compared with the traditional JTAG interface based on IEEE Std. 1149.1, this reduced-pin technology is more applicable to pin-limited devices and makes it easier for the designer to control the scale of the target IC.

Keywords: Boundary scan, JTAG interface, Test frequency, Reduced pin

1925 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air

Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli

Abstract:

Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over the last years, its atmosphere has experienced an increasing anthropogenic load. A numerical modeling method is used to study Tbilisi's atmospheric air pollution. By means of a 3D non-linear, non-steady numerical model, the peculiarities of the city's atmospheric pollution are investigated during background western light air. Spatial and temporal changes of dust concentration are determined. The zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city's main roads. In the interval 06:00-09:00 there is intensive growth, from 09:00 to 15:00 constancy or a weak decrease, from 18:00 to 21:00 an increase, and from 21:00 to 06:00 a reduction of the dust concentrations. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.

Keywords: Numerical modelling, source of pollution, dust propagation, western light air.

1924 Neural Network Implementation Using FPGA: Issues and Application

Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan

Abstract:

Hardware realization of a Neural Network (NN) depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks. FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL code developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table method is used. The problems involved in using a lookup table (LUT) for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates approximate estimation of the total resource requirement and achievable speed for a given multilayer neural network. This helps the designer to choose the FPGA capacity for a given application. Using the proposed method of implementation, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.

Keywords: FPGA implementation, multi-input neuron, neural network, NN-based space vector modulator.

1923 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios

Authors: Revoti Prasad Bora, Nikita Katyal

Abstract:

Promotion is a key element in the retail business. Thus, analysis of promotions to quantify their effectiveness in terms of revenue and/or margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift relies on estimation, as the actual sales/revenue without the promotion is not observable. Further, the presence of halo and cannibalization in a scenario with multiple parallel promotions complicates the problem. Calculating the baseline by considering inter-brand/competitor items, or accounting for halo and cannibalization in revenue calculations by interpreting the baseline from an item's unit sales in neighboring non-promotional weeks individually, may not capture the overall revenue uplift in the case of multiple parallel promotions. Hence, this paper proposes a machine learning based method for calculating the revenue uplift that considers the halo and cannibalization impact on both the baseline and the revenue. In the first part of the proposed methodology, the baseline of an item is calculated by incorporating the impact of promotions on its related items. In the later part, the revenue of an item is calculated by considering both halo and cannibalization impacts. Hence, this methodology enables correct calculation of the overall revenue uplift due to a given promotion.
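
A hedged sketch of the baseline/uplift idea: a regression model (a random forest here) is trained on non-promotional weeks, with promotion indicators of related items as features so that halo and cannibalization enter the baseline; the column names are hypothetical, not the paper's schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def revenue_uplift(sales: pd.DataFrame, item: str) -> float:
    """Estimate an item's revenue uplift during its promotion weeks.

    `sales` is assumed (hypothetically) to have one row per item-week with columns
    'item', 'week', 'revenue', 'on_promo', plus indicator columns for promotions on
    related items (e.g. 'related_promo_*') so halo/cannibalization enter the baseline.
    """
    df = sales[sales["item"] == item]
    features = [c for c in df.columns if c.startswith("related_promo_")] + ["week"]

    train = df[df["on_promo"] == 0]            # non-promotional weeks
    promo = df[df["on_promo"] == 1]            # promotional weeks

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[features], train["revenue"])

    baseline = model.predict(promo[features])  # estimated revenue without the promotion
    return float((promo["revenue"].values - baseline).sum())
```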

Keywords: Halo, cannibalization, promotion, baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression.

1922 Daily Probability Model of Storm Events in Peninsular Malaysia

Authors: Mohd Aftar Abu Bakar, Noratiqah Mohd Ariff, Abdul Aziz Jemain

Abstract:

Storm Event Analysis (SEA) provides a method to define rainfall events as storms, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers from the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long) are introduced based on the length of storm duration. Daily probability models of storms are built for these four categories in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the next year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in other regions of Peninsular Malaysia experience a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
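
A minimal sketch of building a daily storm-probability model from Bernoulli (0/1) storm indicators by linear regression on the first Fourier harmonic; the indicator series below is synthetic and purely illustrative.

```python
import numpy as np

def fit_daily_storm_probability(days, storm_occurred):
    """Fit p(day) = a0 + a1*cos(2*pi*day/365.25) + b1*sin(2*pi*day/365.25)
    by least squares on daily Bernoulli (0/1) storm indicators."""
    w = 2 * np.pi * days / 365.25
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(X, storm_occurred, rcond=None)
    return coef

# Hypothetical indicator series: one entry per calendar day over ten years
rng = np.random.default_rng(1)
days = np.arange(1, 365 * 10 + 1)
true_p = 0.25 + 0.15 * np.cos(2 * np.pi * (days - 330) / 365.25)   # monsoon-like peak
storms = (rng.random(days.size) < true_p).astype(float)

a0, a1, b1 = fit_daily_storm_probability(days, storms)
print("a0=%.3f  a1=%.3f  b1=%.3f" % (a0, a1, b1))
```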

Keywords: Daily probability model, monsoon seasons, regions, storm events.

1921 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering

Authors: Sharifah Mousli, Sona Taheri, Jiayuan He

Abstract:

Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual's ability to function in social, academic, and employment settings. Although there is, to the best of our knowledge, no effective medication known to treat ASD, early intervention can significantly improve an affected individual's overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods in ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches (K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.

Keywords: Autism spectrum disorder, clustering, optimization, unsupervised machine learning.
