Search results for: Clenshaw method

4129 Viscoelastic Modeling of Brain MRE Data Using FE Method

Authors: H. Ajabi Naeeni, M. Haghpanahi

Abstract:

A dynamic shear test on a simulated phantom can be used to validate magnetic resonance elastography (MRE) measurements. Phantom gels are commonly used for the cell culture of cartilage and soft tissue and for mechanical property characterization with imaging systems. The viscoelastic properties of the phantom are important for dynamic experiments and analyses. In this study, an axisymmetric FE model is presented for determining the dynamic shear behaviour of a brain-simulating phantom using ABAQUS. The main objective of this study was to investigate the effect of excitation frequencies and boundary conditions on shear modulus and shear viscosity in viscoelastic media.

Keywords: Viscoelastic, MR Elastography, Finite Element, Brain.

4128 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr−1m−2µm−1. Moreover, in order to validate the result, a comparison of the measured radiance with the radiance predicted by a radiative transfer code over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively, is performed. The relative error of the calibration is within 6.6%.
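As a concrete illustration of the fitting step described above, the sketch below fits L = G × DN + B by least squares and derives the dynamic-range end points and the response linearity from the regression. The DN/radiance values and the bit depth are hypothetical placeholders, not the authors' data or code.

```python
import numpy as np

# Hypothetical MODTRAN-simulated at-sensor radiances (W sr^-1 m^-2 um^-1) for three
# gray-scale targets, and the mean DN of each target extracted from the SWIR image.
L_sim = np.array([4.2, 12.8, 24.5])      # assumed example values
dn = np.array([930.0, 1965.0, 3370.0])   # assumed example values
n_bits = 12                              # assumed quantization depth

# Fit the straight line L = G*DN + B by least-squares regression.
G, B = np.polyfit(dn, L_sim, 1)

# Dynamic range: low point at DN = 0, high point at DN = 2^n - 1.
L_low = B
L_high = G * (2**n_bits - 1) + B

# Response linearity: correlation coefficient of the regressed line.
linearity = np.corrcoef(dn, L_sim)[0, 1]

print(f"G = {G:.4f}, B = {B:.2f}, range = [{L_low:.2f}, {L_high:.2f}], linearity = {linearity:.3f}")
```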

Keywords: Calibration, dynamic range, radiometric resolution, SNR.

4127 Fail-safe Modeling of Discrete Event Systems using Petri Nets

Authors: P. Nazemzadeh, A. Dideban, M. Zareiee

Abstract:

In this paper, the effect of faults in the elements and parts of discrete event systems is investigated. When faults occur, some states of the system must be changed and some of them must be forbidden. To this end, the different states of these elements are examined and a model for the fail-safe behavior of each state is introduced. Replacing the target elements in the preliminary model with the new models by a systematic method leads to a fail-safe discrete event system.

Keywords: Discrete event systems, Fail-safe, Petri nets, Supervisory control.

4126 Simulating a Single-Server Queue using the Q-Simulator

Authors: Irene K. Amponsah, Bennony K. Gordor, Francis Dogbey

Abstract:

This paper introduces a technique for simulating a single-server exponential queuing system. The technique, called the Q-Simulator, is a computer program that can simulate the effect of traffic intensity on all system average quantities, given the arrival and/or service rates. The Q-Simulator has three phases, namely the formula-based method, the uncontrolled simulation, and the controlled simulation. The Q-Simulator generates graphs (crystal solutions) for all results of the simulation or calculation and can be used to estimate desirable average quantities such as waiting times, queue lengths, etc.
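The abstract does not describe the Q-Simulator's internals; as a rough illustration of what a single-server exponential (M/M/1) simulation computes, the sketch below uses the Lindley recursion to estimate the mean waiting time in queue and compares it with the formula-based result Wq = ρ/(µ − λ). The arrival and service rates are assumed example values.

```python
import random

def simulate_mm1(lam, mu, n_customers=100_000, seed=1):
    """Estimate the mean waiting time in queue of an M/M/1 system by simulation."""
    random.seed(seed)
    wait, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        interarrival = random.expovariate(lam)
        service = random.expovariate(mu)
        # Lindley recursion: next wait = max(0, previous wait + service - interarrival)
        wait = max(0.0, wait + service - interarrival)
        total_wait += wait
    return total_wait / n_customers

lam, mu = 0.8, 1.0                         # assumed arrival and service rates
rho = lam / mu
print("simulated Wq:", simulate_mm1(lam, mu))
print("formula   Wq:", rho / (mu - lam))   # analytic M/M/1 result for comparison
```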

Keywords: Automation system-Simulator, Simulation, Single-server exponential system

4125 Optimizing the Design of Radial/Axial PMSM and SRM used for Powered Wheel-Chairs

Authors: D. Fodorean, D.C. Popa, F. Jurca, M. Ruba

Abstract:

The paper presents the optimization results for several electrical machines dedicated to powered electric wheel-chairs. The optimization, using the Hooke-Jeeves algorithm, was employed based on a design approach which takes the road conditions into consideration. The analytical approach was also validated through numerical simulations based on the finite element method. The optimization approach gave satisfactory results, and the best-suited variant was chosen for the motorization of the wheel-chair.

Keywords: electrical machines, numerical validation, optimization, electric wheel chair.

4124 Solutions to Probabilistic Constrained Optimal Control Problems Using Concentration Inequalities

Authors: Tomoaki Hashimoto

Abstract:

Recently, optimal control problems subject to probabilistic constraints have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in optimization problems, several methods have been proposed to deal with them. In most methods, probabilistic constraints are transformed into deterministic constraints that are tractable in optimization problems. This paper examines a method for transforming probabilistic constraints into deterministic constraints for a class of probabilistic constrained optimal control problems.
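The abstract does not name the specific inequality used; one common example of such a transformation relies on Cantelli's (one-sided Chebyshev) inequality. For a scalar state x with mean µ and variance σ², enforcing the deterministic condition

\[
\mu + \sigma \sqrt{\frac{1-\delta}{\delta}} \le b
\]

guarantees Pr(x > b) ≤ δ, since Cantelli's inequality gives Pr(x − µ ≥ t) ≤ σ²/(σ² + t²) for t > 0, and setting t = b − µ bounds the violation probability by δ. The probabilistic constraint is thus replaced by a deterministic constraint on the mean and variance of the state.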

Keywords: Optimal control, stochastic systems, discrete-time systems, probabilistic constraints.

4123 Synchronization of Non-Identical Chaotic Systems with Different Orders Based On Vector Norms Approach

Authors: Rihab Gam, Anis Sakly, Faouzi M'sahli

Abstract:

A new control strategy is formulated for the chaos synchronization of non-identical chaotic systems with different orders, using the Borne and Gentina practical criterion associated with the Benrejeb canonical arrow form matrix to derive the stability property of complex dynamic systems. The designed controller ensures that the state variables of the controlled chaotic slave systems globally synchronize with the state variables of the master systems, respectively. Numerical simulations are performed to illustrate the efficiency of the proposed method.

Keywords: Synchronization, Non-identical chaotic systems, Different orders, Arrow form matrix.

4122 Design of Hydroxyapatite-Polyetheretherketone Fixation Plates for Diaphysis Femur Fracture

Authors: Abhishek Soni, Bhagat Singh

Abstract:

In this study, scanned data of a damaged femur diaphysis are used to generate a three-dimensional model of the bone. A customized implant of Hydroxyapatite-Polyetheretherketone (HA-PEEK) material for this damaged bone is then prepared using CAD modeling. The damaged bone and the implant have been assembled to reconstruct the intact bone. This assembled model has been analyzed to evaluate the stresses and deformation developed during static loading. The observed stresses and deformation are very small, which implies that the proposed method of preparing the implant is appropriate.

Keywords: Customized implant, deformation, femur diaphysis, stress.

4121 Bootstrap Confidence Intervals and Parameter Estimation for Zero Inflated Strict Arcsine Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

The zero-inflated strict arcsine model is a newly developed model which has been found appropriate for modeling overdispersed count data. In this study, the maximum likelihood estimation method is used to estimate the parameters of the zero-inflated strict arcsine model. Bootstrapping is then employed to compute confidence intervals for the estimated parameters.
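As a generic illustration of combining maximum likelihood estimation with a bootstrap (not the authors' implementation), the sketch below resamples the data, refits the MLE on each resample, and reports percentile intervals; the paper's keywords mention BCa intervals, which add bias and acceleration corrections. A Poisson likelihood stands in for the zero-inflated strict arcsine likelihood, which is not reproduced in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def negloglik_poisson(params, data):
    """Stand-in likelihood (Poisson); the zero-inflated strict arcsine likelihood would go here."""
    lam = abs(params[0]) + 1e-9
    return -np.sum(data * np.log(lam) - lam - gammaln(data + 1))

def mle(data, negloglik, start):
    """Maximum likelihood estimate by numerical minimization of the negative log-likelihood."""
    return minimize(negloglik, start, args=(data,), method="Nelder-Mead").x

def bootstrap_ci(data, negloglik, start, n_boot=500, alpha=0.05, seed=0):
    """Nonparametric percentile bootstrap confidence intervals for the MLE."""
    rng = np.random.default_rng(seed)
    estimates = np.array([mle(rng.choice(data, size=len(data), replace=True), negloglik, start)
                          for _ in range(n_boot)])
    return (np.percentile(estimates, 100 * alpha / 2, axis=0),
            np.percentile(estimates, 100 * (1 - alpha / 2), axis=0))

counts = np.random.default_rng(1).poisson(2.3, size=200)   # synthetic count data for the demo
print(bootstrap_ci(counts, negloglik_poisson, start=[1.0]))
```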

Keywords: overdispersed count data, maximum likelihood estimation, simulated annealing, BCa confidence intervals.

4120 Estimating Enzyme Kinetic Parameters from Apparent KMs and Vmaxs

Authors: Simon Brown, Noorzaid Muhamad, David C Simcock

Abstract:

The kinetic properties of enzymes are often reported using the apparent KM and Vmax appropriate to the standard Michaelis-Menten enzyme. However, this model is inappropriate for enzymes that have more than one substrate or where the rate expression does not apply for other reasons. Consequently, it is desirable to have a means of estimating the appropriate kinetic parameters from the apparent values of KM and Vmax reported for each substrate. We provide a means of estimating the range within which the parameters should lie and apply the method to data for glutamate dehydrogenase from Teladorsagia circumcincta, a nematode parasite of sheep.
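For reference, the standard single-substrate Michaelis-Menten rate law, in terms of which the apparent parameters are reported, is

\[
v = \frac{V_{\max}[S]}{K_M + [S]} .
\]

For a multi-substrate enzyme, the apparent KM and Vmax measured for one substrate depend on the fixed concentrations of the co-substrates, which is why the underlying kinetic parameters can generally only be bracketed within a range rather than recovered exactly from a single pair of apparent values.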

Keywords: enzyme kinetics, glutamate dehydrogenase, interval analysis, parameter estimation.

4119 Finite Element Modelling of Ground Vibrations Due to Tunnelling Activities

Authors: Muhammad E. Rahman, Trevor Orr

Abstract:

This paper presents the use of three-dimensional finite elements coupled with infinite elements to investigate the ground vibrations at the surface, in terms of the peak particle velocity (PPV), due to the construction of the first bore of the Dublin Port Tunnel. The situation is analysed using the commercially available general-purpose finite element package ABAQUS. A series of parametric studies is carried out to examine the sensitivity of the predicted vibrations to variations in the various input parameters required by the finite element method, including the stiffness and the damping of the ground. The results of this study show that stiffness has a more significant effect on the PPV than the damping of the ground.

Keywords: Finite Elements, PPV, Tunnelling, Vibration

4118 A Comparative Analysis of Activity-Based Costing and Traditional Costing

Authors: Derya Eren Akyol, Gonca Tuncel, G. Mirac Bayhan

Abstract:

Activity-Based Costing (ABC), which has become an important aspect of manufacturing/service organizations, can be defined as a methodology that measures the cost and performance of activities, resources and cost objects. It can be considered an alternative paradigm to traditional cost-based accounting systems. The objective of this paper is to illustrate an application of the ABC method and to compare its results with those of traditional costing methods. The results of the application highlight the weak points of traditional costing methods, and the S-curve obtained is used to identify the undercosted and overcosted products of the firm.
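A minimal sketch of the ABC calculation described above, with hypothetical activities, cost drivers and products (not the firm's data): each activity's cost is divided by its total driver volume to obtain an activity rate, and overhead is then charged to products according to the driver quantities they consume.

```python
# Hypothetical activity costs and cost-driver volumes.
activity_cost = {"setups": 40_000.0, "inspections": 15_000.0, "machining_hours": 90_000.0}
driver_total  = {"setups": 200,      "inspections": 500,      "machining_hours": 6_000}

# Activity rate = activity cost / total driver volume.
rate = {a: activity_cost[a] / driver_total[a] for a in activity_cost}

# Driver quantities consumed by each (hypothetical) product.
usage = {
    "product_A": {"setups": 120, "inspections": 300, "machining_hours": 2_000},
    "product_B": {"setups": 80,  "inspections": 200, "machining_hours": 4_000},
}

# Overhead assigned to each product under ABC.
abc_overhead = {p: sum(rate[a] * q for a, q in drivers.items()) for p, drivers in usage.items()}
print(abc_overhead)
```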

Keywords: Activity-based costing, cost drivers, overheads, traditional costing.

4117 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to undertake a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, natural hazards triggering technological disasters (Natech) such as fire, explosion, or toxic release). Multi-hazards have a more destructive impact on urban areas than a single hazard alone. In addition, climate change is creating links between different disasters, such as causing landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments. To assess multi-hazard risk assessments, the current multi-hazard risk assessment methods were first described. Next, the drawbacks of these multi-hazard risk assessments were outlined. Finally, the improvements made to date to current multi-hazard risk assessments were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring or only partially considering multi-hazards in risk assessment will lead to overestimation or oversight in resilience and recovery action management.

Keywords: Cascading hazards, multi-hazard, risk assessment, risk reduction.

4116 A Methodology for Definition of Road Networks in Rural Areas of Nepal

Authors: J. K. Shrestha, A. Benta, R. B. Lopes, N. Lopes

Abstract:

This work provides a practical method for the development of road networks in rural areas of developing countries. The proposed methodology makes it possible to determine obligatory points in the rural road network, maximizing the number of settlements that have access to basic services within a given maximum distance. The methodology is simple and practical and hence highly applicable to real-world scenarios, as demonstrated in the definition of the road network for the rural areas of Nepal.
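The keywords point to a minimum spanning tree; the sketch below applies Prim's algorithm to a hypothetical distance matrix of obligatory points, connecting them at minimum total link length. It illustrates only the underlying graph step, not the authors' full methodology or data.

```python
import numpy as np

def prim_mst(dist):
    """Return MST edges for a symmetric distance matrix using Prim's algorithm."""
    n = len(dist)
    in_tree = [0]                     # start from node 0
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u][v] < dist[best[0]][best[1]]):
                    best = (u, v)
        edges.append(best)
        in_tree.append(best[1])
    return edges

# Hypothetical road distances (km) between four obligatory points.
d = np.array([[0, 7, 9, 14],
              [7, 0, 10, 15],
              [9, 10, 0, 11],
              [14, 15, 11, 0]], dtype=float)
print(prim_mst(d))
```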

Keywords: Minimum spanning tree, nodal points, rural road network.

4115 A Frame Work for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The production of a plant can be measured in terms of seeds. The generation of seeds plays a critical role in our social and daily life. The fruit production which generates seeds depends on various parameters of the plant, such as shoot length, leaf number, root length, root number, etc. When the plant is growing, some leaves may be lost and some new leaves may appear, so it is very difficult to use the number of leaves to calculate the growth of the plant. It is also cumbersome to measure the number of roots and the growth in root length at several time instances continuously after a certain initial period, because roots grow deeper and deeper underground over time. In contrast, the shoot length grows over time and can be measured at different time instances. So the growth of the plant can be measured using shoot-length data recorded at different time instances after plantation. Environmental parameters like temperature, rainfall, humidity and pollution also play a role in yield production. Soil, crop and distance management are taken care of to produce the maximum yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant, and based on error analysis (calculation of the average error), the model with the minimum error has been selected and can be used for the assessment of shoot length at maturity. Finally, all these methods have been tested with other types of mustard plant, and the soft computing model with the minimum error across all types has been selected for calculating the predicted shoot-length growth data. The shoot length at maturity of all types of mustard plant has been calculated by applying the statistical method to the predicted shoot-length data.
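A schematic of the selection step described above, using purely illustrative numbers (not the survey measurements or the paper's models): each candidate model is scored by its average absolute error on the observed shoot lengths, and the model with the minimum average error is retained.

```python
# Observed shoot lengths (cm) at 7, 14, 21 and 28 days -- illustrative values only.
observed = [4.1, 8.0, 12.3, 15.9]

# Predictions from three candidate models (e.g. statistical, fuzzy, neural) -- also illustrative.
predictions = {
    "statistical": [4.0, 8.4, 12.0, 16.5],
    "fuzzy":       [4.3, 7.6, 12.8, 15.2],
    "neural":      [4.1, 8.1, 12.2, 16.0],
}

def average_error(obs, pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

errors = {name: average_error(observed, pred) for name, pred in predictions.items()}
best = min(errors, key=errors.get)
print(errors, "-> selected model:", best)
```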

Keywords: Fuzzy time series, neural network, forecasting error, average error.

4114 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network

Authors: K. Kalaikumar, E. Baburaj

Abstract:

One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and the neighbour route selection time duration for the cross layer have not been carried out. The time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Although the traffic load was reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time-slot-based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This process considers MAC layer operation with dynamic route neighbour table discovery. Then, the discovered route path for packet communication employs the Broad Route Distributed Time Slot Assignment method on the Cross-Layered Sensor Network system. Broad Route means time slotting over varying lengths of the route paths. During packet communication in this sensor network, the transmission of packets is adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to identify the performance of the sensor network communication structure. The main task of Rayleigh fading is to measure the power level of each communication under the MAC sub-layer. The minimized power level helps to reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power, packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.

Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment

4113 Positive Solutions of Initial Value Problem for the Systems of Second Order Integro-Differential Equations in Banach Space

Authors: Lv Yuhua

Abstract:

In this paper, by establishing a new comparison result, we investigate the existence of positive solutions for initial value problems of nonlinear systems of second order integro-differential equations in Banach spaces. We improve and generalize some results (see [5, 6]), and the results are new even in finite-dimensional spaces.

Keywords: Systems of integro-differential equations, monotone iterative method, comparison result, cone.

4112 Solvatochromic Shift and Estimation of Dipole Moment of Quinine Sulphate Dication

Authors: S. Joshi, D. Pant

Abstract:

Absorption and fluorescence spectra of the quinine sulphate dication (QSD) have been recorded at room temperature in a wide range of solvents of different polarities. The ground-state dipole moment of QSD was obtained from quantum mechanical calculations, and the excited-state dipole moment of QSD was estimated from Bakhshiev's and Kawski-Chamma-Viallet's equations by means of the solvatochromic shift method. A higher dipole moment is observed for the excited state than for the corresponding ground state, which is attributed to the more polar excited state of QSD.
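For context, the solvatochromic shift method rests on relations of the form used by Bakhshiev, in which the Stokes shift is regressed against a solvent polarity function F1(ε, n):

\[
\bar{\nu}_a - \bar{\nu}_f = m_1\, F_1(\varepsilon, n) + \text{const},
\qquad
m_1 = \frac{2(\mu_e - \mu_g)^2}{h c a^3},
\]

so the slope m1 of the measured shift against F1(ε, n) yields the dipole moment change (µe − µg) for an assumed Onsager cavity radius a; the Kawski-Chamma-Viallet relation treats the sum of the absorption and fluorescence wavenumbers analogously.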

Keywords: Dipole moment, Quinine sulphate dication, Solvatochromic shift

4111 Software Architectural Design Ontology

Authors: Muhammad Irfan Marwat, Sadaqat Jan, Syed Zafar Ali Shah

Abstract:

Software architecture plays a key role in software development, but the absence of a formal description of software architecture causes various impediments in software development. To cope with these difficulties, an ontology has been used as an artifact. This paper proposes an ontology for software architectural design based on the IEEE model for architecture description and the Kruchten 4+1 model for viewpoint classification. For the categorization of styles and views, ISO/IEC 42010 has been used. The corpus method has been used to evaluate the ontology. The main aim of the proposed ontology is to classify and locate software architectural design information.

Keywords: Software Architecture Ontology, Semantic based Software Architecture, Software Architecture, Ontology, Software Engineering.

4110 Ignition Time Delay in Swirling Supersonic Flow Combustion

Authors: A. M. Tahsini

Abstract:

A supersonic hydrogen-air cylindrical mixing layer is numerically analyzed to investigate the effect of inlet swirl on the ignition time delay in scramjets. Combustion is treated using detailed chemical kinetics. The one-equation turbulence model of Spalart and Allmaras is chosen to study the problem, and the advection upstream splitting method is used as the computational scheme. The results show that swirling both the fuel and oxidizer streams may drastically decrease the ignition distance in supersonic combustion, unlike swirling only the fuel stream, which has no helpful effect.

Keywords: Ignition delay, Supersonic combustion, Swirl, Numerical simulation, Turbulence.

4109 Effect of Catalyst Preparation on the Performance of CaO-ZnO Catalysts for Transesterification

Authors: Pathravut Klinklom, Apanee Luengnaruemitchai, Samai Jai-In

Abstract:

In this research, CaO-ZnO catalysts (with various Ca:Zn atomic ratios of 1:5, 1:3, 1:1, and 3:1) prepared by incipient-wetness impregnation (IWI) and co-precipitation (CP) methods were used as catalysts in the transesterification of palm oil with methanol for biodiesel production. The catalysts were characterized by several techniques, including the BET method, CO2-TPD, and the Hammett indicator. The effects of precursor concentration and calcination temperature on the catalytic performance were studied under reaction conditions of a 15:1 methanol-to-oil molar ratio, 6 wt% catalyst, a reaction temperature of 60°C, and a reaction time of 8 h. A Ca:Zn atomic ratio of 1:3 gave the highest FAME value owing to the basic properties and surface area of the prepared catalyst.

Keywords: CaO, ZnO, Biodiesel, Impregnation, Coprecipitation.

4108 Solving the Economic Dispatch Problem by Using Differential Evolution

Authors: S. Khamsawang, S. Jiriwibhakorn

Abstract:

This paper proposes an application of the differential evolution (DE) algorithm to solving the economic dispatch (ED) problem. Furthermore, a population regeneration procedure is added to the conventional DE in order to improve escape from local minimum solutions. To test the performance of the DE algorithm, a system of three thermal generating units with valve-point loading effects is used. An investigation of the DE parameters is also presented. The simulation results show that the DE algorithm, with its parameters adjusted, achieves a better convergence time than other optimization methods.
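A compact DE/rand/1/bin sketch for a dispatch cost with valve-point loading is given below. The unit data, the penalty handling and the population-regeneration step are illustrative assumptions, not the paper's test system, so this only shows the underlying algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-unit data: cost = a*P^2 + b*P + c + |e*sin(f*(Pmin - P))| (valve-point term).
a = np.array([0.0016, 0.0021, 0.0018]); b = np.array([7.9, 7.8, 7.9]); c = np.array([560., 310., 78.])
e = np.array([300., 200., 150.]);       f = np.array([0.031, 0.042, 0.063])
Pmin, Pmax, demand = np.array([100., 50., 50.]), np.array([600., 200., 140.]), 850.0

def total_cost(P):
    fuel = np.sum(a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P))))
    return fuel + 1e4 * abs(np.sum(P) - demand)   # penalty for power-balance violation

NP, F, CR, dim = 30, 0.5, 0.9, 3
pop = rng.uniform(Pmin, Pmax, size=(NP, dim))
for _ in range(2000):
    for i in range(NP):
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])                    # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        trial = np.clip(np.where(cross, mutant, pop[i]), Pmin, Pmax)  # binomial crossover + bounds
        if total_cost(trial) < total_cost(pop[i]):                    # greedy selection
            pop[i] = trial
best = min(pop, key=total_cost)
print(best, total_cost(best))
```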

Keywords: Differential evolution, Economic dispatch problem, Valve-point loading effect, Optimization method.

4107 A Comparison of Exact and Heuristic Approaches to Capital Budgeting

Authors: Jindřiška Šedová, Miloš Šeda

Abstract:

This paper summarizes and compares approaches to solving the knapsack problem and its known application in capital budgeting. The first approach uses deterministic methods and can be applied to small-size tasks with a single constraint. We can also apply commercial software systems such as the GAMS modelling system. However, because of NP-completeness of the problem, more complex problem instances must be solved by means of heuristic techniques to achieve an approximation of the exact solution in a reasonable amount of time. We show the problem representation and parameter settings for a genetic algorithm framework.
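For the exact, single-constraint case mentioned above, the capital-budgeting knapsack can be solved by dynamic programming; a small sketch with hypothetical project costs and returns (not data from the paper) follows.

```python
def knapsack(costs, values, budget):
    """0/1 knapsack by dynamic programming: maximize total value within the budget."""
    best = [0] * (budget + 1)
    for cost, value in zip(costs, values):
        for cap in range(budget, cost - 1, -1):   # iterate downwards so each project is used once
            best[cap] = max(best[cap], best[cap - cost] + value)
    return best[budget]

# Hypothetical projects: investment cost and expected return (same monetary units).
costs = [12, 7, 11, 8, 9]
values = [24, 13, 23, 15, 16]
print(knapsack(costs, values, budget=26))
```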

Keywords: Capital budgeting, knapsack problem, GAMS, heuristic method, genetic algorithm.

4106 Kalman Filter Based Adaptive Reduction of Motion Artifact from Photoplethysmographic Signal

Authors: S. Seyedtabaii, L. Seyedtabaii

Abstract:

Artifact-free photoplethysmographic (PPG) signals are necessary for the non-invasive estimation of oxygen saturation (SpO2) in arterial blood. Movement of a patient corrupts the PPGs with motion artifacts, resulting in large errors in the computation of SpO2. This paper presents a study on using the Kalman filter in an innovative way, by modeling both the arterial blood pressure (ABP) and the unwanted signal, the additive motion artifact, to reduce motion artifacts from corrupted PPG signals. Simulation results show acceptable performance compared with LMS and variable-step LMS, thus establishing the efficacy of the proposed method.
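The abstract does not give the paper's state-space model for the ABP and the artifact; the sketch below is only a generic scalar Kalman filter (predict/update) of the kind such a method builds on, applied to a synthetic noisy waveform.

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=0.05):
    """Scalar Kalman filter with a random-walk state model x_k = x_{k-1} + w_k."""
    x, p = z[0], 1.0                 # initial state estimate and error covariance
    out = []
    for meas in z:
        # Predict step (random-walk model: state unchanged, uncertainty grows by q).
        p = p + q
        # Update step: blend prediction and measurement by the Kalman gain.
        k = p / (p + r)
        x = x + k * (meas - x)
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

t = np.linspace(0, 10, 1000)
clean = np.sin(2 * np.pi * 1.2 * t)                                     # synthetic PPG-like waveform
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)  # added artifact/noise
filtered = kalman_1d(noisy)
```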

Keywords: Kalman filter, Motion artifact, PPG, Photoplethysmography.

4105 Improvement of Reaction Technology of Decalin Halogenation

Authors: Dmitriy Yu. Korulkin, Ravshan M. Nuraliev, Raissa A. Muzychkina

Abstract:

In this research paper, the main regularities of the radical bromination reaction of decalin were investigated. The effects of temperature, reaction duration, the repetition rate of the process, the ratio of the initial components, and the type and amount of initiator on the degree of decalin bromination were studied. The optimum conditions for the synthesis of perbromodecalin by decalin bromination were specified. A technological flowchart for obtaining perbromodecalin and the mass balance of the process for the first and subsequent loadings of components were developed. The results of research on the antibacterial and antifungal activity of the synthesized bromo derivatives are presented.

Keywords: Decalin, optimum technology, perbromodecalin, radical bromination.

4104 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components

Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura

Abstract:

This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain-activity were selected. Finally, the time series of the selected ICs were used to predict eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied to two different settings, namely the single-participant level and the group-level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group-level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participants. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group-level which was also significantly higher than the chance level (12.49% ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction is 70.03 ± 8.14% for single-participant and 62.63 ± 6.07% for group-level which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
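A highly simplified sketch of the two-stage pipeline described above, with an ICA unmixing learned on one dataset and reused on another, and an L1-penalised logistic regression standing in for Sparse Logistic Regression. The data shapes, labels and component selection are placeholders, not the study's recordings or protocol.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.standard_normal((4000, 32))   # placeholder EEG: samples x channels
y_train = rng.integers(0, 8, 4000)          # placeholder 8-direction labels
X_test = rng.standard_normal((1000, 32))
y_test = rng.integers(0, 8, 1000)

# Stage 1: learn the unmixing on the training EEG and keep the selected components.
ica = FastICA(n_components=16, random_state=0)
S_train = ica.fit_transform(X_train)        # independent component time series
keep = list(range(8))                       # placeholder for manually selected brain ICs
clf = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
clf.fit(S_train[:, keep], y_train)

# Stage 2: apply the same unmixing matrix to the new dataset, then classify.
S_test = ica.transform(X_test)
print("accuracy:", clf.score(S_test[:, keep], y_test))
```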

Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.

4103 Virtual Speaking Head for Hearing Impaired Students

Authors: Eva Pajorová, Ladislav Hluchý

Abstract:

The developed tool is one of a set of system tools for easier access to various scientific areas and for real-time interactive learning between the lecturer and hearing-impaired students. The lecturer is not required to know Sign Language (SL). Instead, the new software tools will perform the translation of regular speech into SL, after which it will be transferred to the student. On the other side, the questions of the student (in SL) will be translated and transferred to the lecturer as text or speech. The presented tool is one of those tools: it is a tool for developing the correct speech visemes as the root of a total communication method for hearing-impaired students.

Keywords: Impaired people, sign language, communication methods.

4102 Utilization of Industrial Byproducts in Concrete Applications by Adopting Grey Taguchi Method for Optimization

Authors: V. K. Bansal, M. Kumar, P. P. Bansal, A. Batish

Abstract:

This paper presents the results of an experimental investigation carried out to evaluate the effects of partial replacement of cement and fine aggregate with industrial waste by-products on concrete strength properties. The Grey Taguchi approach has been used to optimize the mix proportions for the desired properties. In this research work, a ternary combination of industrial waste by-products has been used. The experiments have been designed using Taguchi's L9 orthogonal array with four factors having three levels each. The cement was partially replaced by ladle furnace slag (LFS), fly ash (FA) and copper slag (CS) at the 10%, 25% and 40% levels, and the fine aggregate (sand) was partially replaced with electric arc furnace slag (EAFS), iron slag (IS) and glass powder (GP) at the 20%, 30% and 40% levels. Three water-to-binder ratios, fixed at 0.40, 0.44 and 0.48, were used, and the curing age was fixed at 7, 28 and 90 days. Thus, a series of nine experiments was conducted on the specimens for water-to-binder ratios of 0.40, 0.44 and 0.48 at 7, 28 and 90 days of the water curing regime. It is evident from the investigations that the Grey Taguchi approach for optimization helps in identifying the factors affecting the final outcomes, i.e. the compressive strength and split tensile strength of the concrete. For the materials and the range of parameters used in this research, the present study has established optimum mixes in terms of strength properties. The best possible levels of the mix proportions were determined by maximizing the compressive and split tensile strengths. To verify the results, the optimal mix was produced and tested. This mixture results in higher compressive strength and split tensile strength than the other mixes. The compressive strength and split tensile strength of the optimal mixture are also compared with those of the control concrete mixtures. The results show that the compressive strength and split tensile strength of concrete made with partial replacement of cement and fine aggregate are higher than those of the control concrete at all ages and w/c ratios. Based on the overall observations, it can be recommended that industrial waste by-products in ternary combinations can effectively be utilized as partial replacements of cement and fine aggregate in all concrete applications.
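A sketch of the grey relational step used in this kind of optimisation, with illustrative response values rather than the experimental data: larger-the-better responses are normalised, deviation sequences computed, and the grey relational grade (mean of the grey relational coefficients) ranks the trials.

```python
import numpy as np

# Illustrative responses for 9 trials: [compressive strength, split tensile strength] (MPa).
y = np.array([[32.1, 3.1], [35.4, 3.4], [30.8, 2.9],
              [38.2, 3.6], [33.5, 3.2], [36.7, 3.5],
              [31.9, 3.0], [37.1, 3.7], [34.0, 3.3]])

# Larger-the-better normalisation to [0, 1] for each response.
norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# Deviation from the ideal (normalised value of 1) and grey relational coefficients.
delta = 1.0 - norm
zeta = 0.5                                   # distinguishing coefficient, conventionally 0.5
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: mean coefficient per trial; the highest grade marks the best trial.
grade = gamma.mean(axis=1)
print("best trial:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
```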

Keywords: Analysis of variance, ANOVA, compressive strength, concrete, grey Taguchi method, industrial by-products, split tensile strength.

4101 Application of the Neural Network to the Synthesis of Multibeam Antennas Arrays

Authors: Ridha Ghayoula, Mbarek Traii, Ali Gharsallah

Abstract:

In this paper, we study the synthesis of multibeam arrays. The synthesis implementation method for this type of array makes it possible to approach the desired radiation pattern. The approach is based on neural networks that are capable of modelling the multibeam arrays, taking predetermined general criteria into account, and finally predicting the appropriate pattern from the neural model. Our main contribution in this paper is the extension of a synthesis model of these multibeam arrays.

Keywords: Multibeam, modelling, neural networks, synthesis, antennas.

4100 Application of the Neural Network to the Synthesis of Vertical Dipole Antenna over Imperfect Ground

Authors: Kais Hafsaoui

Abstract:

In this paper, we propose to study the synthesis of a vertical dipole antenna over imperfect ground. The synthesis implementation method for this type of antenna makes it possible to approach the desired radiation pattern. The approach used is based on a neural network. Our main contribution in this paper is the extension of a synthesis model of this vertical dipole antenna over imperfect ground.

Keywords: Vertical dipole antenna, imperfect ground, neural network.
