Search results for: Conservative level set method

6659 Modeling and Simulation of Delaminations in FML Using Step Pulsed Active Thermography

Authors: S. Sundaravalli, M. C. Majumder, G. K. Vijayaraghavan

Abstract:

The study investigates the thermal response of delaminations and develops mathematical models from numerical results to obtain the optimum heat requirement and time needed to identify delaminations in GLARE-type Fibre Metal Laminates (FML), in both the reflection mode and the through-transmission (TT) mode of the step pulsed active thermography (SPAT) method, a nondestructive testing and evaluation (NDTE) technique. The influence of the applied heat flux and time on delaminations of various sizes and depths in FML is analyzed through numerical simulations. The finite element method (FEM) is applied to simulate SPAT in the ANSYS software, based on 3D transient heat transfer, with the reflection mode and the TT mode of observation treated individually.

The results show that the numerical approach based on SPAT in reflection mode is more suitable for analysing smaller near-surface delaminations located on the thermal stimulator side, whereas the TT mode is more suitable for analysing smaller, deeper delaminations located far from the thermal stimulator side, i.e. near the thermal detector/infrared camera side. The mathematical models provide the optimum q and T at the required MRTD to identify the unidentified delamination 7 with 25015.0022 W/m2 at 2.531 s and delamination 8 with 16663.3356 W/m2 at 1.37857 s in reflection mode. In TT mode, delamination 1 could be identified with 34954 W/m2 at 13.0399 s, delamination 2 with 20002.67 W/m2 at 1.998 s and delamination 7 with 20010.87 W/m2 at 0.6171 s.

Keywords: Step pulsed active thermography (SPAT), NDTE, FML, Delaminations, Finite element method.

6658 Fuzzy Group Decision Making for the Assessment of Health-Care Waste Disposal Alternatives in Istanbul

Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi

Abstract:

Disposal of health-care waste (HCW) is considered as an important environmental problem especially in large cities. Multiple criteria decision making (MCDM) techniques are apt to deal with quantitative and qualitative considerations of the health-care waste management (HCWM) problems. This research proposes a fuzzy multi-criteria group decision making approach with a multilevel hierarchical structure including qualitative as well as quantitative performance attributes for evaluating HCW disposal alternatives for Istanbul. Using the entropy weighting method, objective weights as well as subjective weights are taken into account to determine the importance weighting of quantitative performance attributes. The results obtained using the proposed methodology are thoroughly analyzed.
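
The entropy weighting step described above can be illustrated with a short sketch (not the authors' exact formulation): objective weights are derived from the dispersion of each quantitative attribute across the alternatives. The decision matrix values below are hypothetical.

```python
import numpy as np

# Hypothetical decision matrix: rows = HCW disposal alternatives,
# columns = quantitative performance attributes (e.g. cost, emissions).
X = np.array([[4.0, 120.0, 0.8],
              [6.5,  90.0, 0.6],
              [5.0, 150.0, 0.9],
              [7.2,  60.0, 0.5]])

def entropy_weights(X):
    m, n = X.shape
    P = X / X.sum(axis=0)              # column-wise normalisation
    k = 1.0 / np.log(m)
    # entropy of each attribute (0*log(0) treated as 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -k * np.nansum(P * np.log(P), axis=0)
    d = 1.0 - E                        # degree of divergence
    return d / d.sum()                 # objective weights

print(entropy_weights(X))
```

In the full methodology these objective weights would then be combined with the subjective weights before ranking the alternatives.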

Keywords: Entropy weighting method, group decision making, health-care waste management, hierarchical fuzzy multi-criteria decision making

6657 Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples

Authors: Abdullah Y. Al-Hossain

Abstract:

This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtained Bayes estimators using the conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions of the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator has been obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.

Keywords: Progressive type II censoring, compound Rayleigh failure time distribution, maximum likelihood estimation, Bayes estimation, Lindley's approximation method, Monte Carlo simulation.

6656 Embedded Throughput Improving of Low-rate EDR Packets for Lower-latency

Authors: M. A. M. El-Bendary, A. E. Abu El-Azm, N. A. El-Fishawy, F. Shawky, F. E. El-Samie

Abstract:

With the increasing utilization of wireless devices in different fields such as medical devices and industrial applications, this paper presents a method for simplifying Bluetooth packets while enhancing throughput. The paper studies a vital issue in wireless communications, namely the throughput of data over wireless networks. Bluetooth and ZigBee are both Wireless Personal Area Network (WPAN) technologies. Taking the competition between these two systems into consideration, the paper proposes different schemes to improve the throughput of a Bluetooth network over a reliable channel. The proposal depends on the Channel Quality Driven Data Rate (CQDDR) rules, which determine the suitable packet for the transmission process according to the channel conditions. The proposed packets are studied over additive white Gaussian noise (AWGN) and fading channels. The experimental results reveal the capability of extending the PL length by 8, 16 and 24 bytes for classic and EDR packets, respectively. The proposed method is also suitable for low-throughput Bluetooth.

Keywords: Bluetooth, throughput, adaptive packets, EDR packets, CQDDR, low latency, channel condition.

6655 Solution of Optimal Reactive Power Flow using Biogeography-Based Optimization

Authors: Aniruddha Bhattacharya, Pranab Kumar Chattopadhyay

Abstract:

Optimal reactive power flow is an optimization problem with one or more objectives, such as minimizing the active power losses for a fixed generation schedule. The control variables are the generator bus voltages, transformer tap settings and reactive power output of the compensating devices placed on different bus bars. The Biogeography-Based Optimization (BBO) technique has been applied to solve different kinds of optimal reactive power flow problems subject to operational constraints such as power balance, line flow and bus voltage limits. BBO searches for the global optimum mainly through two steps: migration and mutation. In the present work, BBO has been applied to solve the optimal reactive power flow problem on the IEEE 30-bus and standard IEEE 57-bus power systems for minimization of active power loss. The superiority of the proposed method has been demonstrated. Considering the quality of the solutions obtained, the proposed method seems to be a promising one for solving these problems.
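
For readers unfamiliar with BBO, the following is a minimal sketch of its migration and mutation steps applied to a generic continuous test objective, not to the reactive power flow problem itself; the population size, rates and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # stand-in objective (minimisation)
    return np.sum(x ** 2)

def bbo(obj, dim=5, pop=20, iters=100, p_mut=0.05, lo=-1.0, hi=1.0):
    H = rng.uniform(lo, hi, (pop, dim))          # habitats (candidate solutions)
    for _ in range(iters):
        cost = np.array([obj(h) for h in H])
        H = H[np.argsort(cost)]                  # best habitat first
        mu = np.linspace(1.0, 0.0, pop)          # emigration rates
        lam = 1.0 - mu                           # immigration rates
        new_H = H.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:        # migration step
                    j = rng.choice(pop, p=mu / mu.sum())
                    new_H[i, d] = H[j, d]
                if rng.random() < p_mut:         # mutation step
                    new_H[i, d] = rng.uniform(lo, hi)
        new_H[0] = H[0]                          # elitism: keep best habitat
        H = new_H
    cost = np.array([obj(h) for h in H])
    return H[np.argmin(cost)], cost.min()

best, val = bbo(sphere)
print(best, val)
```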

Keywords: Active Power Loss, Biogeography-Based Optimization, Migration, Mutation, Optimal Reactive Power Flow.

6654 Crystalline Structure of Starch Based Nano Composites

Authors: Farid Amidi Fazli, Afshin Babazadeh, Farnaz Amidi Fazli

Abstract:

In contrast with the literal meaning of nano, researchers have achieved major advances in this area, and every day more nanomaterials are introduced to the market. After the long-term use of fossil-based plastics, the accumulation of their waste has become a serious environmental problem. At the same time, people pay more attention to safety and the living environment. Replacing common plastic packaging materials with degradable ones that break down faster into harmless components such as water and carbon dioxide is therefore attractive; these new materials are based on the renewable and inexpensive sources starch and cellulose. However, their functional properties are not suitable for packaging, and this is where nanotechnology plays an important role. Incorporating nanomaterials into the polymer structure improves its mechanical and physical properties; nanocrystalline cellulose (NCC) has this ability. This work employed a chemical method to produce NCC and a starch bionanocomposite containing NCC. The obtained materials were characterized by the X-ray diffraction technique. The results showed that the applied method is suitable and applicable for NCC production.

Keywords: Biofilm, cellulose, nanocomposite, starch.

6653 Fabrication of Poly(Ethylene Oxide)/Chitosan/Indocyanine Green Nanoprobe by Co-Axial Electrospinning Method for Early Detection

Authors: Zeynep R. Ege, Aydin Akan, Faik N. Oktar, Betul Karademir, Oguzhan Gunduz

Abstract:

Early detection of cancer by advanced biomedical imaging techniques could save lives and preserve quality of life in insidious cases. Designing a targeted detection system is necessary in order to protect healthy cells. Electrospun nanofibers are efficient and targetable nanocarriers with important properties such as nanometric diameter, good mechanical properties, elasticity, porosity and a high surface-area-to-volume ratio. In the present study, the organic dye indocyanine green (ICG) was stabilized and encapsulated in a polymer matrix of polyethylene oxide (PEO) and chitosan (CHI) multilayer nanofibers via the co-axial electrospinning method in a single step. The co-axially electrospun nanofibers were characterized morphologically (SEM) and molecularly (FT-IR), and the entrapment efficiency of ICG was assessed by confocal imaging. The controlled release profile of the PEO/CHI/ICG nanofiber was also evaluated over 40 hours.

Keywords: Chitosan, coaxial electrospinning, controlled releasing, indocyanine green, nanoprobe, polyethylene oxide.

6652 A New Graphical Password: Combination of Recall & Recognition Based Approach

Authors: Md. Asraful Haque, Babbar Imam

Abstract:

Information security is one of the most pressing problems of the present time. To cope with information security, passwords were introduced. Alphanumeric passwords are the most popular authentication method and are still in use today. However, text-based passwords suffer from various drawbacks: they are easy to crack through dictionary attacks, brute-force attacks, keyloggers, social engineering, etc. Graphical passwords are a good replacement for text passwords. Psychological studies show that humans can remember pictures better than text, which makes graphical passwords easy to remember. At the same time, for the same reason, most graphical passwords are prone to shoulder surfing. In this paper, we suggest a shoulder-surfing resistant graphical password authentication method. The system is a combination of recognition-based and pure recall-based techniques. The proposed scheme can be useful for smart handheld devices (such as smartphones, PDAs, iPods, iPhones, etc.), which are handier and more convenient to use than traditional desktop computer systems.

Keywords: Authentication, Graphical Password, Text Password, Information Security, Shoulder-surfing.

6651 Analysis of Web User Identification Methods

Authors: Renáta Iváncsy, Sándor Juhász

Abstract:

Web usage mining has become a popular research area, as a huge amount of data is available online. These data can be used for several purposes, such as web personalization, web structure enhancement, web navigation prediction, etc. However, the raw log files are not directly usable; they have to be preprocessed in order to transform them into a format suitable for different data mining tasks. One of the key issues in the preprocessing phase is to identify web users. Identifying users based on web log files is not a straightforward problem, thus various methods have been developed. Several difficulties have to be overcome, such as client-side caching, and changing and shared IP addresses. This paper presents three different methods for identifying web users. Two of them are the most commonly used methods in web log mining systems, whereas the third one is our novel approach that uses a complex cookie-based method to identify web users. Furthermore, we also take steps towards identifying the individuals behind the impersonal web users. To demonstrate the efficiency of the new method, we developed an implementation called the Web Activity Tracking (WAT) system that aims at a more precise distinction of web users based on log data. We present some statistical analysis created by WAT on real data about the behavior of Hungarian web users, and a comprehensive analysis and comparison of the three methods.
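
As a rough illustration of why the identification method matters (the WAT system itself is far more elaborate than this), the toy log below is counted three ways: by IP only, by IP plus user agent, and by cookie. The records and cookie values are made up.

```python
# Toy comparison of three user-identification heuristics on web log records.
log = [
    {"ip": "10.0.0.1", "agent": "Firefox", "cookie": "u1", "url": "/a"},
    {"ip": "10.0.0.1", "agent": "Chrome",  "cookie": "u2", "url": "/b"},
    {"ip": "10.0.0.1", "agent": "Firefox", "cookie": "u1", "url": "/c"},
    {"ip": "10.0.0.2", "agent": "Chrome",  "cookie": "u2", "url": "/a"},  # same user, new IP
]

def count_users(records, key):
    # number of distinct users according to a chosen identification key
    return len({key(r) for r in records})

print("IP only        :", count_users(log, lambda r: r["ip"]))
print("IP + user agent:", count_users(log, lambda r: (r["ip"], r["agent"])))
print("cookie based   :", count_users(log, lambda r: r["cookie"]))
```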

Keywords: Data preparation, Tracking individuals, Web user identification, Web usage mining

6650 Absence of Leave and Job Morality in the ICU

Authors: Li-Ping Hsiao, Feng-Chuan Pan

Abstract:

Leave of absence is important in maintaining the quality of human resources. Allowing employees to be temporarily free from routine assignments can vitalize the workers' morale and productivity. This is particularly critical for securing satisfactory service quality from healthcare professionals, whose work is typically labor-intensive and complicated. As one of the veteran hospitals founded and operated by the Veteran Department of Taiwan, the case hospital's nursing staff was squeezed to an extreme minimum level under the pressure of tight budgeting. Taking scheduled leave became extremely difficult, especially in the intensive care unit (ICU), which requires close monitoring of the cared-for patients and thus more easily makes the ICU nurses nervous. Even worse, deferred leave in the ICU exceeded 10 days at any given time because of fluctuating occupancy. As a result, this set back this particular nursing team and consequently degraded job performance and service quality. To solve this problem and strengthen morale, a project team was organized across different departments specifically for this purpose. Detailed information regarding job and position requirements, labor resources, and actual working hours was collected and analyzed in the team meetings. Several alternatives were finalized, including job rotation, job combination, impromptu leave and cross-departmental redeployment. Consequently, the deferred leave days were sharply reduced by 70%, to a level of 3 days or fewer. This improvement not only provided good relief for the ICU nurses, improving their job performance and patient safety, but also encouraged the nurses to participate actively in a project and to learn the skills of solving problems with colleagues.

Keywords: Information, job rotating, human resource, intensive care unit.

6649 Clinical Parameters Response to Low-Level Laser versus Monochromatic Near-Infrared Photo Energy in Diabetic Patients with Peripheral Neuropathy

Authors: Abeer A. Abdelhamed

Abstract:

Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and increased risk of falling. Purpose: The purpose of this study was to compare the effects of low-level laser (LLL) and monochromatic near-infrared photo energy (MIRE) on pain, cutaneous sensation, static stability, and index of lower limb blood flow in diabetic patients with peripheral neuropathy. Methods: Forty diabetic patients with peripheral neuropathy were recruited for participation in this study. They were divided into two groups: The MIRE group, which contained 20 patients, and the LLL group, which contained 20 patients. All patients who participated in the study had been subjected to various physical assessment procedures, including pain, cutaneous sensation, Doppler flow meter, and static stability assessments. The baseline measurements were followed by treatment sessions that were conducted twice a week for six successive weeks. Results: The statistical analysis of the data revealed significant improvement of pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow between the groups. Conclusion: LLL and MIRE can improve painful symptoms in patients with diabetic neuropathy. On the other hand, MIRE is also useful in improving cutaneous sensation and static stability in patients with diabetic neuropathy.

Keywords: Diabetic neuropathy, Doppler flow meter, low-level laser, monochromatic near-infrared photo energy.

6648 An Efficient 3D Animation Data Reduction Using Frame Removal

Authors: Jinsuk Yang, Choongjae Joo, Kyoungsu Oh

Abstract:

Existing methods in which the animation data of all frames are stored and reproduced, as with vertex animation, cannot be used in mobile device environments because they use large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new method as follows. First, we find and remove the frames in which motion changes are small and store only the animation data of the remaining frames (those involving large motion changes). When playing the animation, the removed frame regions are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints in individual frames and the standard deviations of these accelerations, using the joint locations in the relevant 3D model, in order to find and delete frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality that is not much lower than that of the original animations. Therefore, our method is expected to be useful in mobile device environments or other environments in which memory is limited.
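
A minimal sketch of the selection-and-interpolation idea, assuming joint positions are available per frame: frames whose joint-acceleration spread is small are dropped and later reconstructed by linear interpolation. The animation data, threshold and error metric are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical animation: frames x joints x 3 (joint positions per frame).
frames = np.cumsum(rng.normal(scale=0.01, size=(120, 15, 3)), axis=0)

# Per-frame joint accelerations (second finite difference of positions).
acc = np.diff(frames, n=2, axis=0)                 # shape (F-2, J, 3)
acc_mag = np.linalg.norm(acc, axis=2)              # shape (F-2, J)
score = acc_mag.std(axis=1)                        # spread of joint accelerations

# Keep frames whose motion change is large; always keep the end points.
keep = np.zeros(len(frames), dtype=bool)
keep[[0, -1]] = True
keep[1:-1] = score > np.median(score)              # illustrative threshold

# Reconstruct removed frames by interpolating the kept ones.
kept_idx = np.flatnonzero(keep)
recon = np.empty_like(frames)
for j in range(frames.shape[1]):
    for k in range(3):
        recon[:, j, k] = np.interp(np.arange(len(frames)),
                                   kept_idx, frames[kept_idx, j, k])

print("kept %d of %d frames, max error %.4f"
      % (keep.sum(), len(frames), np.abs(recon - frames).max()))
```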

Keywords: Data Reduction, Interpolation, Vertex Animation, 3D Animation.

6647 Stochastic Modeling and Combined Spatial Pattern Analysis of Epidemic Spreading

Authors: S. Chadsuthi, W. Triampo, C. Modchang, P. Kanthang, D. Triampo, N. Nuttavut

Abstract:

We present an analysis of the spatial patterns of generic disease spread simulated by a stochastic long-range correlation SIR model, in which individuals can be infected at long distances following a power-law distribution. We integrated various tools, namely perimeter, circularity, fractal dimension, and aggregation index, to characterize and investigate spatial pattern formation. Our primary goal was to understand, for a given model of interest, which tool has an advantage over the others and to what extent. We found that perimeter and circularity give information only in the case of strong correlation, while the fractal dimension and aggregation index exhibit the growth rule of pattern formation, depending on the degree of the correlation exponent (β). The aggregation index was used as an alternative method to describe the degree of the pathogenic ratio (α). This study may provide a useful approach to characterize and analyze the pattern formation of epidemic spreading.
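
A toy version of such a long-range stochastic SIR process on a lattice is sketched below, with infection jumps drawn from a power-law distribution; the lattice size, rates and exponent are placeholders, and the pattern-analysis tools themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta, p_inf, p_rec, steps = 100, 2.0, 0.8, 0.1, 50
S, I, R = 0, 1, 2
grid = np.zeros((L, L), dtype=int)
grid[L // 2, L // 2] = I                       # single initial infective

def power_law_jump(beta, r_max):
    # sample a jump length r in [1, r_max] with p(r) ~ r**(-beta), beta != 1
    u = rng.random()
    return int(((r_max ** (1 - beta) - 1) * u + 1) ** (1 / (1 - beta)))

for _ in range(steps):
    xs, ys = np.nonzero(grid == I)
    for x, y in zip(xs, ys):
        if rng.random() < p_inf:               # long-range infection attempt
            r = power_law_jump(beta, L // 2)
            theta = rng.uniform(0, 2 * np.pi)
            tx = (x + int(r * np.cos(theta))) % L
            ty = (y + int(r * np.sin(theta))) % L
            if grid[tx, ty] == S:
                grid[tx, ty] = I
        if rng.random() < p_rec:               # recovery
            grid[x, y] = R

print("S, I, R counts:", [(grid == s).sum() for s in (S, I, R)])
```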

Keywords: spatial pattern epidemics, aggregation index, fractal dimension, stochastic, long-range epidemics

6646 Dynamic Soil-Structure Interaction Analysis of Reinforced Concrete Buildings

Authors: Abdelhacine Gouasmia, Abdelhamid Belkhiri, Allaeddine Athmani

Abstract:

The objective of this paper is to evaluate the effects of soil-structure interaction (SSI) on the modal characteristics and dynamic response of current structures. The focus is on the overall behaviour of a real five-storey reinforced concrete (R/C) building typically encountered in Algeria. Sensitivity studies are undertaken in order to study the effects of the frequency content of the input motion, the frequency of the soil-structure system, and the rigidity and depth of the soil layer on the dynamic response of such structures. This investigation indicated that the rigidity of the soil layer is the predominant factor in soil-structure interaction, and increasing it would definitely reduce the deformation in the R/C structure. On the other hand, increasing the period of the underlying soil will cause an increase in the lateral displacements at story levels and create irregularity in the distribution of story shears. Possible resonance between the frequency content of the input motion and the soil could also play an important role in increasing the structural response.

Keywords: Direct method, finite element method, foundation, R/C frame, soil-structure interaction.

6645 Primary School Teachers’ Conceptual and Procedural Knowledge of Rational Number and Its Effects on Pupils’ Achievement in Rational Numbers

Authors: R. M. Kashim

Abstract:

The study investigated primary school teachers' conceptual and procedural knowledge of rational numbers and its effects on pupils' achievement in rational numbers. Specifically, primary school teachers' level of conceptual knowledge about rational numbers, primary school teachers' level of procedural knowledge about rational numbers, and the effects of teachers' conceptual and procedural knowledge on their pupils' understanding of rational numbers in primary schools were investigated. The study was carried out in the Bauchi metropolis in Bauchi state, Nigeria. The design of the study was a multi-stage design: the first stage was a descriptive design, and the second stage involved a pre-test, post-test only quasi-experimental design. Two instruments were used for data collection: the Conceptual and Procedural Knowledge Test (CPKT) and the Rational Number Achievement Test (RAT). The population of the study comprised three (3) mathematics teachers holding the Nigerian Certificate in Education (NCE) and teaching primary six, and the 210 pupils in their intact classes. The data collected were analyzed using the mean, standard deviation, analysis of variance, analysis of covariance and t-test. The findings indicated that pupils taught rational numbers by a teacher with high conceptual and procedural knowledge understand and perform better than pupils taught by a teacher with low conceptual and procedural knowledge of rational numbers. It is, therefore, recommended that teachers in primary schools be encouraged to enrich their conceptual knowledge of rational numbers. Also, teachers' superior performance in procedural knowledge of rational numbers should not become an obstruction to understanding. Teachers' conceptual and procedural knowledge of rational numbers should be balanced so that primary school pupils experience better teaching and learning of rational numbers in our contemporary schools.

Keywords: Achievement, conceptual knowledge, procedural knowledge, rational numbers.

6644 Effects of Roughness on Forward Facing Step in an Open Channel

Authors: S. M. Rifat, André L. Marchildon, Mark F. Tachie

Abstract:

Experiments were performed to investigate the effects of roughness on the reattachment and redevelopment regions over a 12 mm forward facing step (FFS) in an open channel flow. The experiments were performed over an upstream smooth wall and a smooth FFS, an upstream wall coated with 36 grit sandpaper and a smooth FFS, and an upstream rough wall produced from 36 grit sandpaper and a FFS coated with 36 grit sandpaper. To isolate the wall roughness effects, the Reynolds number, Froude number, aspect ratio and blockage ratio were kept constant. Upstream profiles showed reduced streamwise mean velocities close to the rough wall compared to the smooth wall, but the turbulence level was increased by upstream wall roughness. The reattachment length for the smooth-smooth wall experiment was 1.78h; however, when it was replaced with the rough-smooth wall configuration, the reattachment length decreased to 1.53h. It was observed that the upstream roughness increased the physical size of the contours of maximum turbulence level; however, the downstream roughness decreased both the size and magnitude of the contours in the vicinity of the leading edge of the step. Quadrant analysis was performed to investigate the dominant Reynolds shear stress contribution in the recirculation region. The Reynolds shear stress and turbulent kinetic energy profiles after the reattachment showed slower recovery compared to the streamwise mean velocity; however, all the profiles fairly collapse onto their corresponding upstream profiles at x/h = 60. It was concluded that several more streamwise distances would be required to obtain a complete collapse.

Keywords: Forward facing step, open channel, separated and reattached turbulent flows, wall roughness.

6643 Complexity Analysis of Some Known Graph Coloring Instances

Authors: Jeffrey L. Duffany

Abstract:

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs for which the optimal solution is already known. This investigation analyzes a set of 50 well known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) that are theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
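
Two of the complexity measures named above, density and deviation from uniform color class size, are straightforward to compute; the sketch below does so for a small example graph using a simple greedy coloring (not the adjacency-matrix-squaring solver used in the paper).

```python
import itertools
import numpy as np

def density(adj):
    # fraction of possible edges actually present
    n = len(adj)
    return adj.sum() / (n * (n - 1))

def greedy_coloring(adj):
    # assign each vertex the smallest color not used by its neighbours
    n = len(adj)
    color = [-1] * n
    for v in range(n):
        used = {color[u] for u in range(n) if adj[v, u] and color[u] >= 0}
        color[v] = next(c for c in itertools.count() if c not in used)
    return color

def class_size_deviation(color):
    sizes = np.bincount(color)
    return sizes.std() / sizes.mean()     # 0 means perfectly uniform classes

# Small example graph: a 5-cycle plus one chord.
n = 5
adj = np.zeros((n, n), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    adj[u, v] = adj[v, u] = 1

col = greedy_coloring(adj)
print("density:", density(adj))
print("colors :", col, "| class-size deviation:", class_size_deviation(col))
```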

Keywords: graph coloring, complexity, algorithm.

6642 Mathematical Model and Control Strategy on DQ Frame for Shunt Active Power Filters

Authors: P. Santiprapan, K-L. Areerak, K-N. Areerak

Abstract:

This paper presents the mathematical model and control strategy of a shunt active power filter on the DQ frame. The structure of the shunt active power filter is the voltage source inverter (VSI). Pulse width modulation (PWM) with a PI controller is used in the paper. The concept of applying the DQ frame to the shunt active power filter is described. Moreover, the details of the PI controller design for the two current loops and one voltage loop are fully explained. The DQ axis with Fourier (DQF) method is applied to calculate the reference currents on the DQ frame. The simulation results show that the control strategy and the design method presented in the paper provide good performance of the shunt active power filter. Moreover, the %THD of the source currents after compensation complies with IEEE Std. 519-1992.
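
Central to any DQ-frame strategy is the abc-to-dq (Park) transformation of the measured currents; a minimal amplitude-invariant version is sketched below on an illustrative balanced three-phase set, not the paper's simulation model.

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transformation of three-phase currents."""
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta),  np.cos(theta - 2*np.pi/3),  np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
    ])
    return T @ np.array([ia, ib, ic])

# Balanced 10 A three-phase set sampled at one instant (50 Hz system).
w, t = 2 * np.pi * 50, 0.004
theta = w * t
ia = 10 * np.cos(theta)
ib = 10 * np.cos(theta - 2 * np.pi / 3)
ic = 10 * np.cos(theta + 2 * np.pi / 3)
print(abc_to_dq(ia, ib, ic, theta))   # ~[10, 0]: constant DQ values for a balanced set
```

On the DQ frame a balanced fundamental maps to DC quantities, which is what lets ordinary PI controllers regulate the current loops.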

Keywords: shunt active power filter, mathematical model, DQ control strategy, DQ axis with Fourier, pulse width modulation control.

6641 A New Fuzzy DSS/ES for Stock Portfolio Selection using Technical and Fundamental Approaches in Parallel

Authors: H. Zarei, M. H. Fazel Zarandi, M. Karbasian

Abstract:

A Decision Support System/Expert System for stock portfolio selection is presented in which, in the first step, both technical and fundamental data are used to estimate technical and fundamental return and risk (1st phase); the estimated values are then aggregated with the investor's preferences (2nd phase) to produce a convenient stock portfolio. In the 1st phase, there are two expert systems, each of which is responsible for the technical or the fundamental estimation. In the technical expert system, twenty-seven candidates are identified for each stock, and the effective variables are selected using a rough-sets-based clustering method (RC). Next, for each stock, two fuzzy rule-bases are developed with the fuzzy C-means method and the Takagi-Sugeno-Kang (TSK) approach, one for return estimation and the other for risk. Thereafter, the parameters of the rule-bases are tuned with the backpropagation method. In parallel, for the fundamental expert systems, fuzzy rule-bases have been identified in the form of "IF-THEN" rules through brainstorming with stock market experts, and the input data have been derived from financial statements; as a result, two fuzzy rule-bases have been generated for all the stocks, one for return and the other for risk. In the 2nd phase, user preferences are represented by four criteria and obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective values of user preference. Finally, a fuzzy rule-base with four rules treats these values and produces a ranking score for each stock, which leads to a satisfactory portfolio for the user. The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.

Keywords: Stock Portfolio Selection, Fuzzy Rule-Base Expert Systems, Financial Decision Support Systems, Technical Analysis, Fundamental Analysis.

6640 Pyrolysis Characteristics and Kinetics of Macroalgae Biomass Using Thermogravimetric Analyzer

Authors: Zhao Hui, Yan Huaxiao, Zhang Mengmeng, Qin Song

Abstract:

The pyrolysis characteristics and kinetics of seven marine biomass samples, namely fixed Enteromorpha clathrata, floating Enteromorpha clathrata, Ulva lactuca L., Zosterae Marinae L., Thallus Laminariae, Asparagus schoberioides kunth and Undaria pinnatifida (Harv.), were studied with the thermogravimetric analysis method. Cornstalk, a grass biomass, and sawdust, a lignocellulosic biomass, were used as references. The basic pyrolysis characteristics were studied using TG-DTG-DTA curves. The results showed that there are three stages (dehydration, dramatic weight loss and slow weight loss) during the whole pyrolysis process of the samples. The Tmax of the marine biomass was significantly lower than that of the two kinds of terrestrial biomass. Zosterae Marinae L. had a relatively high pyrolysis stability, whereas floating Enteromorpha clathrata had the lowest pyrolysis stability and good combustion characteristics. The corresponding activation energy E and frequency factor A were obtained by the Coats-Redfern method. It was found that the pyrolysis reaction mechanism functions of the three kinds of biomass are different.
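
The Coats-Redfern step can be illustrated with synthetic TG data: for a first-order mechanism, plotting ln[-ln(1-α)/T²] against 1/T gives a straight line whose slope and intercept yield E and A. The heating rate and kinetic constants below are invented for the sketch, not the measured values.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
beta = 10.0 / 60.0   # heating rate, K/s (10 K/min)

# Synthetic TG data generated from a first-order model with E = 80 kJ/mol.
E_true, A_true = 80e3, 1e6
T = np.linspace(450.0, 650.0, 60)                            # temperature, K
k = A_true * np.exp(-E_true / (R * T))
alpha = 1.0 - np.exp(-np.cumsum(k) * (T[1] - T[0]) / beta)   # crude conversion curve
mask = (alpha > 0.05) & (alpha < 0.95)                       # usable conversion range

# Coats-Redfern linearisation for a first-order mechanism g(a) = -ln(1 - a):
#   ln[g(a)/T^2] = ln(A*R/(beta*E)) - E/(R*T)
y = np.log(-np.log(1.0 - alpha[mask]) / T[mask] ** 2)
x = 1.0 / T[mask]
slope, intercept = np.polyfit(x, y, 1)

E = -slope * R
A = beta * E * np.exp(intercept) / R
print("E = %.1f kJ/mol, A = %.2e 1/s" % (E / 1e3, A))
```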

Keywords: macroalgae biomass, pyrolysis, thermogravimetric analysis, thermolysis kinetics.

6639 N-Grams: A Tool for Repairing Word Order Errors in Ill-formed Texts

Authors: Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, Konstantinos Mamouras

Abstract:

This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. A possible way of reordering the words is to use all the permutations. The problem is that for a sentence of length N words, the number of permutations is N!. The novelty of this method concerns the use of an efficient confusion matrix technique for reordering the words. The confusion matrix technique has been designed in order to reduce the search space among permuted sentences. The limitation of the search space is achieved using the statistical inference of N-grams. The results of this technique are very interesting and prove that the number of permuted sentences can be reduced by 98.16%. For experimental purposes, a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
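
The trigram-hit scoring idea can be shown on a toy example: every permutation of a scrambled sentence is scored against a small set of known trigrams, and the best-scoring order is returned. The confusion-matrix pruning that makes the real method tractable is omitted here, and the trigram set is made up.

```python
from itertools import permutations

# Toy "language model": a set of known trigrams (in practice, counts from a corpus).
known_trigrams = {
    ("i", "would", "like"), ("would", "like", "to"),
    ("like", "to", "go"), ("to", "go", "home"),
}

def trigram_hits(words):
    # number of consecutive word triples found in the language model
    return sum(tuple(words[i:i + 3]) in known_trigrams
               for i in range(len(words) - 2))

def best_reordering(words):
    # Exhaustive search over N! candidates; the paper prunes this space
    # with a confusion-matrix technique, which is not reproduced here.
    return max(permutations(words), key=trigram_hits)

scrambled = ["to", "like", "i", "would", "go", "home"]
print(" ".join(best_reordering(scrambled)))
```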

Keywords: Permutations filtering, Statistical language model N-grams, Word order errors, TOEFL

6638 Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA

Authors: Fumito Yoshikawa, Takumi Kobayashi, Kenji Watanabe, Nobuyuki Otsu

Abstract:

Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of the sport activities. Game analysis of badminton, as with the other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of shift-invariance and additivity, and necessitates no prior knowledge. The specific scenes, such as the serve, are then detected from the CHLAC features by linear regression (MRA). To demonstrate the effectiveness of our method, an experiment was conducted on video sequences of five badminton matches captured by a single ceiling camera. The averaged precision and recall rates for serve scene detection were 95.1% and 96.3%, respectively.

Keywords: Badminton, CHLAC, MRA, Video-based motion detection

6637 Short Time Identification of Feed Drive Systems using Nonlinear Least Squares Method

Authors: M.G.A. Nassef, Linghan Li, C. Schenck, B. Kuhfuss

Abstract:

The design and modeling of nonlinear systems require knowledge of all the internally acting parameters and effects. An empirical alternative is to identify the system's transfer function from input and output data as a black box model. This paper presents a procedure using the least squares algorithm for identifying the coefficients of a feed drive system in the time domain, using a reduced model based on windowed input and output data. The command and response of the axis are first measured over the first 4 ms, and then least squares is applied to predict the transfer function coefficients for this displacement segment. From the identified coefficients, the subsequent command response segments are estimated. The obtained results reveal the considerable potential of the least squares method to identify the system's time-based coefficients and to accurately predict the command response as compared to measurements.
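
A minimal sketch of the identification step, assuming a second-order discrete (ARX-style) model: the coefficients are recovered by least squares from a window of synthetic input/output samples and then used to predict the response. The model order, data and noise level are illustrative, not the actual feed-drive measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "true" second-order discrete system:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
a1, a2, b1, b2 = 1.6, -0.64, 0.04, 0.03
N = 200                                   # samples in the measurement window
u = rng.standard_normal(N)                # command signal
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 1e-4*rng.standard_normal()

# Build the regression problem y[k] = phi[k] . theta and solve by least squares.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("identified [a1, a2, b1, b2]:", np.round(theta, 3))

# One-step-ahead prediction of the command response from the identified model.
y_hat = Phi @ theta
print("max prediction error:", np.abs(y_hat - y[2:]).max())
```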

Keywords: feed drive systems, least squares algorithm, online parameter identification, short time window

6636 Comparative Analysis of DTC Based Switched Reluctance Motor Drive Using Torque Equation and FEA Models

Authors: P. Srinivas, P. V. N. Prasad

Abstract:

Since torque ripple is the main cause of noise and vibrations, the performance of the Switched Reluctance Motor (SRM) can be improved by minimizing its torque ripple using a novel control technique called Direct Torque Control (DTC). In the DTC technique, torque is controlled directly through control of the magnitude of the flux and the change in speed of the stator flux vector. The flux and torque are maintained within set hysteresis bands.

The DTC of the SRM is analyzed by two methods. In one method, the actual torque is computed by conducting Finite Element Analysis (FEA) on the design specifications of the motor. In the other method, the torque is computed by the Simplified Torque Equation. The variation of peak current, average current, torque ripple and speed settling time obtained with the Simplified Torque Equation model is compared with that of the FEA-based model.

Keywords: Direct Toque Control, Simplified Torque Equation, Finite Element Analysis, Torque Ripple.

6635 Refined Buckling Analysis of Rectangular Plates Under Uniaxial and Biaxial Compression

Authors: V. Piscopo

Abstract:

In the traditional buckling analysis of rectangular plates, the classical thin plate theory is generally applied, thus neglecting the plate shear deformation. It seems quite clear that this method is not entirely appropriate for the analysis of thick plates, so in the following the two-variable refined plate theory proposed by Shimpi (2006), which takes the transverse shear effects into account, is applied to the buckling analysis of simply supported isotropic rectangular plates compressed in one and two orthogonal directions. The relevant results are compared with the classical ones and, for rectangular plates under uniaxial compression, a new direct expression, similar to the classical Bryan's formula, is proposed for the Euler buckling stress. As buckling analysis is a widely studied topic for a variety of structures, such as ship structures, some applications for plates uniformly compressed in one and two orthogonal directions are presented, and the relevant theoretical results are compared with those obtained by an FEM analysis, carried out in ANSYS, to show the feasibility of the presented method.
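
For comparison, the classical thin-plate result referred to above (Bryan's formula for a simply supported plate under uniaxial compression) can be evaluated directly; the sketch below uses illustrative steel-plate dimensions and is not the refined expression proposed in the paper.

```python
import math

def bryan_buckling_stress(E, nu, t, a, b):
    """Classical elastic buckling stress of a simply supported thin plate under
    uniaxial compression (Bryan's formula), minimised over the number of
    half-waves m along the loaded direction a."""
    k = min(((m * b / a) + (a / (m * b))) ** 2 for m in range(1, 20))
    return k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t / b) ** 2

# Illustrative steel plate: 2000 x 800 x 10 mm, E in MPa.
sigma_cr = bryan_buckling_stress(E=206e3, nu=0.3, t=10.0, a=2000.0, b=800.0)
print("critical stress ~ %.1f MPa" % sigma_cr)
```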

Keywords: Buckling analysis, Thick plates, Biaxial stresses

6634 Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length

Authors: Mecheri Zeid Belmecheri, Maamar Ahfir, Izzet Kale

Abstract:

Automatic cardiac auscultation is still a subject of research aimed at establishing an objective diagnosis. Recorded heart sounds, as Phonocardiogram (PCG) signals, can be used for automatic segmentation into components that have clinical meanings: the first sound, S1, the second sound, S2, and the systolic and diastolic components, respectively. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. This method is based on calculating an intermediate sawtooth-shaped signal from the curve length variation of the recorded PCG signal in the time domain and using its positive derivative, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of PCGs recorded simultaneously with Electrocardiograms (ECGs) from different patients in clinical settings, including normal and abnormal subjects, show an average segmentation testing performance of 76% sensitivity and 94% specificity.
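
A rough sketch of the curve-length idea, on a synthetic burst signal standing in for a PCG: a running curve length is computed over a sliding window, and its positive derivative gives a binary signal of the kind used as the RNN training target. The sampling rate, window and signal are assumptions, and no RNN training is shown.

```python
import numpy as np

fs = 1000                                      # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic stand-in for a PCG: bursts of oscillation ("sounds") on a quiet baseline.
pcg = 0.02 * np.random.default_rng(4).standard_normal(t.size)
for onset in (0.1, 0.5, 1.0, 1.4):             # pretend S1/S2 locations
    idx = (t > onset) & (t < onset + 0.08)
    pcg[idx] += np.sin(2 * np.pi * 60 * t[idx])

# Running curve length of the signal over a short sliding window.
win = int(0.05 * fs)
seg_len = np.sqrt(1.0 + np.diff(pcg) ** 2)     # length of each tiny segment
curve_len = np.convolve(seg_len, np.ones(win), mode="same")

# Binary signal from the sign of the curve-length derivative
# (1 while the curve length is increasing, i.e. inside a heart sound).
binary = (np.diff(curve_len) > 0).astype(int)
print("fraction of samples flagged as 'inside a sound':", round(binary.mean(), 3))
```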

Keywords: Heart sounds, PCG segmentation, event detection, Recurrent Neural Networks, PCG curve length.

6633 Synthesis of New Bio-Based Solid Polymer Electrolyte Polyurethane-LiClO4 via Prepolymerization Method: Effect of NCO/OH Ratio on Their Chemical, Thermal Properties and Ionic Conductivity

Authors: C. S. Wong, K. H. Badri, N. Ataollahi, K. P. Law, M. S. Su’ait, N. I. Hassan

Abstract:

A novel bio-based polymer electrolyte was synthesized with LiClO4 as the main source of charge carriers. Polyurethane-LiClO4 polymer electrolytes were synthesized via the prepolymerization method with different NCO/OH ratios and labelled PU1, PU2, PU3 and PU4. Fourier transform infrared (FTIR) analysis indicates coordination between the Li+ ion and the polyurethane in PU1. Differential scanning calorimetry (DSC) analysis indicates that PU1 has the highest glass transition temperature (Tg), corresponding to the most abundant urethane groups, which form the hard segment in PU1. Scanning electron microscopy (SEM) shows good miscibility between the lithium salt and the polymer. The study found that PU1 possessed the greatest ionic conductivity and the lowest activation energy, Ea. All the polyurethanes exhibited linear Arrhenius variations, indicating ion transport via simple lithium ion hopping in the polyurethane. This research shows that the NCO content of the polyurethane plays an important role in the ionic conductivity of this polymer electrolyte.
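
The linear Arrhenius behaviour mentioned above implies that the activation energy can be extracted from a straight-line fit of ln σ versus 1/T; the sketch below does this for hypothetical conductivity values, not the measured PU data.

```python
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant, eV/K

# Hypothetical ionic conductivity of a PU-LiClO4 film at several temperatures.
T = np.array([303.0, 313.0, 323.0, 333.0, 343.0])            # K
sigma = np.array([1.2e-6, 2.4e-6, 4.5e-6, 8.1e-6, 1.4e-5])   # S/cm

# Linear Arrhenius behaviour: ln(sigma) = ln(sigma0) - Ea / (kB * T)
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * kB
print("activation energy Ea ~ %.2f eV, sigma0 ~ %.2e S/cm" % (Ea, np.exp(intercept)))
```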

Keywords: Ionic conductivity, Palm kernel oil-based monoester polyol, polyurethane, solid polymer electrolyte.

6632 Appraisal of Trace Elements in Scalp Hair of School Children in Kandal Province, Cambodia

Authors: A. Yavar, S. Sarmani, K. S. Khoo

Abstract:

The analysis of trace elements in human hair provides crucial insights into an individual's nutritional status and environmental exposure. This research aimed to examine the levels of toxic and essential elements in the scalp hair of school children aged 12-17 from three villages (Anglong Romiot (AR), Svay Romiot (SR), and Kampong Kong (KK)) in Cambodia's Kandal province, a region where residents are especially vulnerable to toxic elements, notably arsenic (As), due to their dietary habits, lifestyle, and environmental conditions. The scalp hair samples were analyzed using the k0-Instrumental Neutron Activation Analysis method (k0-INAA), with a six-hour irradiation period in the Malaysian Nuclear Agency (MNA) research reactor, followed by the use of a High Purity Germanium (HPGe) detector to identify the gamma peaks of the radionuclides. The analysis identified 31 elements in the human hair from the study area, including As, Au, Br, Ca, Ce, Co, Dy, Eu-152m, Hg-197, Hg-203, Ho, Ir, K, La, Lu, Mn, Na, Pa, Pt-195m, Pt-197, Sb, Sc-46, Sc-47, Sm, Sn-117m, W-181, W-187, Yb-169, Yb-175, Zn, and Zn-69m. The accuracy of the method was verified through the analysis of ERM-DB001 human hair as a Certified Reference Material (CRM), with the results demonstrating consistency with the certified values. Given the prevalent arsenic pollution in the research area, the study also examined the relationship between the concentration of As and that of the other elements using Pearson's correlation test. The outcomes offer a comprehensive resource for future investigations into the presence of toxic and essential elements in the region. In the main body of the paper, a more extensive discussion of the implications of arsenic pollution and the correlations observed is provided to enhance understanding and inform future research directions.

Keywords: Human scalp hair, toxic and essential elements, Kandal Province, Cambodia, k0-Instrumental Neutron Activation Method.

6631 Robot Movement Using the Trust Region Policy Optimization

Authors: Romisaa Ali

Abstract:

The policy gradient approach is a subset of Deep Reinforcement Learning (DRL), which combines Deep Neural Networks (DNNs) with Reinforcement Learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which were unable to handle the two types of error, variance and bias, introduced by the DNN model due to over- or underestimation, this algorithm is capable of handling both. This article discusses the state-of-the-art (SOTA) policy gradient technique, Trust Region Policy Optimization (TRPO), by applying it in various environments and comparing it with another policy gradient method, Proximal Policy Optimization (PPO), to explain their robust optimization, using this SOTA method to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance.
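
To make the contrast concrete, the sketch below compares PPO's clipped surrogate objective with a TRPO-style surrogate plus trust-region (KL) check on made-up probability ratios and advantages; it is a conceptual illustration, not a full training loop or the article's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Made-up data for one policy update: probability ratios r = pi_new/pi_old,
# advantage estimates A, and per-state KL divergence between old and new policy.
ratio = rng.uniform(0.7, 1.3, size=256)
adv = rng.standard_normal(256)
kl = rng.uniform(0.0, 0.03, size=256)

def ppo_clipped_surrogate(ratio, adv, eps=0.2):
    # PPO: clip the ratio so a single update cannot move the policy too far.
    return np.mean(np.minimum(ratio * adv,
                              np.clip(ratio, 1 - eps, 1 + eps) * adv))

def trpo_surrogate(ratio, adv, kl, max_kl=0.01):
    # TRPO: maximise the unclipped surrogate, but only accept the step if the
    # average KL divergence stays inside the trust region.
    surrogate = np.mean(ratio * adv)
    return surrogate, bool(np.mean(kl) <= max_kl)

print("PPO surrogate :", ppo_clipped_surrogate(ratio, adv))
surr, ok = trpo_surrogate(ratio, adv, kl)
print("TRPO surrogate:", surr, "| step accepted:", ok)
```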

Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.

6630 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method

Authors: Mamidi Ramakrishna Rao

Abstract:

The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short circuit fault conditions. Further, many countries such as Canada, Germany, the UK and Scotland have distinct grid codes relating to wind turbines. Accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy the requirements, including the reactive current capability, an optimum electrical design becomes mandatory for the DFIG to function. This paper intends to optimize the equivalent circuit parameters of an electrical design for satisfactory DFIG performance. The direct search method has been used for the optimization of the parameters. The variables selected include the electromagnetic core dimensions (diameters and stack length), slot dimensions, the radial air gap between stator and rotor, and the winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II) and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips of -0.3 to +0.3 have been considered. The optimum designs obtained for the objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, such as efficiency, reactive power capability or weight minimization.
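
A minimal compass-style direct search is sketched below on a placeholder objective standing in for one of the design goals (e.g. loss or weight); the bounds, variables and objective are illustrative, not the actual DFIG design model.

```python
import numpy as np

def compass_search(obj, x0, lo, hi, step=0.25, shrink=0.5, tol=1e-4, max_iter=500):
    """Simple compass (direct) search: probe +/- step along each coordinate,
    move if the objective improves, otherwise shrink the step."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    fx = obj(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + s, lo[i], hi[i])
                ft = obj(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Placeholder objective standing in for, e.g., total active loss or weight
# of a candidate design expressed in normalised design variables.
def design_objective(x):
    return np.sum((x - np.array([0.3, 0.7, 0.5])) ** 2)

lo, hi = np.zeros(3), np.ones(3)
x_best, f_best = compass_search(design_objective, x0=[0.5, 0.5, 0.5], lo=lo, hi=hi)
print("optimum design variables:", np.round(x_best, 3), "objective:", round(f_best, 6))
```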

Keywords: Direct search, DFIG, equivalent circuit parameters, optimization.
