Search results for: level set method.
6707 Transient Combined Conduction and Radiation in a Two-Dimensional Participating Cylinder in Presence of Heat Generation
Authors: Raoudha Chaabane, Faouzi Askri, Sassi Ben Nasrallah
Abstract:
Simultaneous transient conduction and radiation heat transfer with heat generation is investigated. Analysis is carried out for both steady and unsteady situations. A two-dimensional gray cylindrical enclosure with an absorbing, emitting, and isotropically scattering medium is considered. The enclosure boundaries are assumed to be at specified temperatures. The heat generation rate is considered uniform and constant throughout the medium. The lattice Boltzmann method (LBM) was used to solve the energy equation of a transient conduction-radiation heat transfer problem. The control volume finite element method (CVFEM) was used to compute the radiative information. To study the compatibility of the LBM for the energy equation and the CVFEM for the radiative transfer equation, transient conduction and radiation heat transfer problems in 2-D cylindrical geometries were considered. In order to establish the suitability of the LBM, the energy equation of the present problem was also solved using the finite difference method (FDM) of computational fluid dynamics. The CVFEM was employed to compute the radiative information required for the solution of the energy equation using the LBM or the FDM. Results were analyzed for the effects of various parameters such as the boundary emissivity. The results of the LBM-CVFEM combination were found to be in excellent agreement with those of the FDM-CVFEM combination. The number of iterations and the steady-state temperature in both combinations were found comparable. Results are reported for situations with and without heat generation. Heat generation is found to have a significant bearing on the temperature distribution.
Keywords: Heat generation, cylindrical coordinates, RTE, transient, coupled conduction-radiation, heat transfer, CVFEM, LBM.
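As an illustration of the LBM energy-equation step described above, a one-dimensional pure-conduction D1Q2 sketch is given below; the grid size, relaxation time, and boundary temperatures are illustrative assumptions, and the 2-D cylindrical geometry and CVFEM radiation coupling of the paper are omitted.

```python
def lbm_conduction_1d(nx=51, steps=5000, tau=1.0, t_left=1.0, t_right=0.0):
    """D1Q2 lattice Boltzmann sketch for 1-D transient conduction only.
    Temperature is the sum of two opposite-moving distributions; tau sets
    the diffusivity, and tau=1 reduces to simple neighbour averaging."""
    f1 = [0.25] * nx   # distribution moving right
    f2 = [0.25] * nx   # distribution moving left
    for _ in range(steps):
        # collision: relax toward the local equilibrium T/2
        for i in range(nx):
            T = f1[i] + f2[i]
            f1[i] += (T / 2 - f1[i]) / tau
            f2[i] += (T / 2 - f2[i]) / tau
        # streaming
        f1 = [f1[0]] + f1[:-1]
        f2 = f2[1:] + [f2[-1]]
        # isothermal boundaries: set the incoming unknown distribution
        f1[0] = t_left - f2[0]
        f2[-1] = t_right - f1[-1]
    return [f1[i] + f2[i] for i in range(nx)]

T = lbm_conduction_1d()
print(round(T[25], 3))  # steady mid-plane value of the 1-to-0 profile, about 0.5
```

At steady state the scheme reproduces the expected linear conduction profile between the two fixed-temperature walls.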
6706 Mathematical Expression for Machining Performance
Authors: Md. Ashikur Rahman Khan, M. M. Rahman
Abstract:
In electrical discharge machining (EDM), a complete and clear theory has not yet been established. The developed theories (physical models) yield results far from reality due to the complexity of the physics. It is difficult to select proper parameter settings in order to achieve better EDM performance. However, modelling can solve this critical problem concerning the parameter settings. Therefore, the purpose of the present work is to develop mathematical models to predict the performance characteristics of EDM on Ti-5Al-2.5Sn titanium alloy. The response surface method (RSM) and an artificial neural network (ANN) are employed to develop the mathematical models. The developed models are verified through analysis of variance (ANOVA). The ANN models are trained, tested, and validated utilizing a set of data. It is found that the developed ANN and mathematical models can predict the performance of EDM effectively. Thus, the models provide a precise tool for making the EDM process more cost-effective and efficient.
Keywords: Analysis of variance, artificial neural network, material removal rate, modelling, response surface method, surface finish.
6705 New Hybrid Method to Model Extreme Rainfalls
Authors: Y. Laaroussi, Z. Guennoun, A. Amar
Abstract:
Modeling and forecasting the dynamics of rainfall occurrences constitute a major topic that has been treated at length by statisticians, hydrologists, climatologists and many other groups of scientists. In this context, we propose in the present paper a new hybrid method which combines extreme value and fractal theories. We illustrate the use of our methodology on transformed Emberger Index series, constructed based on data recorded in Oujda (Morocco). The index is first treated by the Peaks Over Threshold (POT) approach to identify excess observations over an optimal threshold u. In the second step, we consider the resulting excesses as a fractal object embedded in the one-dimensional space of time. We identify the fractal dimension by box counting. We discuss the descriptions of the rainfall data sets under the Generalized Pareto Distribution, as assured by Extreme Value Theory (EVT). We show that, despite the appropriateness of the return periods given by the POT approach, the introduction of the fractal dimension provides accurate interpretation results, which can improve the understanding of rainfall occurrences.
Keywords: Extreme value theory, fractal dimensions, Peaks Over Threshold, rainfall occurrences.
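The two-stage procedure the abstract describes (POT exceedance extraction, then box counting on the excess times) can be sketched as follows; the synthetic series, threshold, and box sizes are illustrative assumptions, not the Oujda data.

```python
import math

def pot_excess_times(series, threshold):
    """Peaks-Over-Threshold step: the times at which the series exceeds u."""
    return [t for t, x in enumerate(series) if x > threshold]

def box_counting_dimension(times, box_sizes):
    """Estimate the box-counting dimension of a set of time points via a
    least-squares fit of log N(eps) against log(1/eps)."""
    logs = []
    for eps in box_sizes:
        occupied = {int(t // eps) for t in times}   # boxes containing a point
        logs.append((math.log(1.0 / eps), math.log(len(occupied))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# toy example: exceedances of a synthetic "rainfall index" series
series = [abs(math.sin(0.7 * t)) * 10 for t in range(1000)]
times = pot_excess_times(series, threshold=9.0)
dim = box_counting_dimension(times, box_sizes=[1, 2, 4, 8, 16])
print(round(dim, 3))
```

A dimension between 0 and 1 quantifies how densely the exceedance times fill the time axis, which is the interpretation the method adds on top of POT return periods.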
6704 Modeling and Simulation of Delaminations in FML Using Step Pulsed Active Thermography
Authors: S. Sundaravalli, M. C. Majumder, G. K. Vijayaraghavan
Abstract:
The study investigates the thermal response of delaminations and develops mathematical models from numerical results to obtain the optimum heat requirement and time to identify delaminations in GLARE-type Fibre Metal Laminates (FML), in both the reflection mode and the through-transmission (TT) mode of the step pulsed active thermography (SPAT) method, a nondestructive testing and evaluation (NDTE) technique. The influence of the applied heat flux and time on various sizes and depths of delaminations in FML is analyzed to investigate the thermal response through numerical simulations. The finite element method (FEM) is applied to simulate SPAT in ANSYS software, based on the 3D transient heat transfer principle, with the reflection mode and the TT mode of observation considered individually.
The results conclude that the numerical approach based on SPAT in reflection mode is more suitable for analysing smaller near-surface delaminations located on the thermal stimulator side, while the TT mode is more suitable for analysing smaller, deeper delaminations located far from the thermal stimulator side, i.e. near the thermal detector/infrared camera side. The mathematical models provide the optimum heat flux q and time T at the required MRTD to identify delamination 7 with 25015.0022 W/m2 at 2.531 s and delamination 8 with 16663.3356 W/m2 at 1.37857 s in reflection mode. In TT mode, delamination 1 with 34954 W/m2 at 13.0399 s, delamination 2 with 20002.67 W/m2 at 1.998 s, and delamination 7 with 20010.87 W/m2 at 0.6171 s could be identified.
Keywords: Step pulsed active thermography (SPAT), NDTE, FML, Delaminations, Finite element method.
6703 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze.
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability, does not depend on human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: Affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, Signal Detection Theory, student engagement.
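The leave-one-out protocol used in the study above can be sketched generically; the tiny feature table and the 1-nearest-neighbour stand-in classifier below are illustrative assumptions (the study itself used random forests over nine multimodal features).

```python
def nearest_neighbour_predict(train, query):
    """Predict the label of `query` from the closest training feature vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda row: dist(row[0], query))[1]

def leave_one_out_accuracy(dataset):
    """Hold each sample out once, train on the rest, score the held-out sample."""
    correct = 0
    for i, (features, label) in enumerate(dataset):
        train = dataset[:i] + dataset[i + 1:]
        if nearest_neighbour_predict(train, features) == label:
            correct += 1
    return correct / len(dataset)

# toy data: (feature vector, engagement label) pairs
data = [
    ([0.9, 0.8], "engaged"), ([0.85, 0.9], "engaged"),
    ([0.2, 0.1], "disengaged"), ([0.15, 0.2], "disengaged"),
]
print(leave_one_out_accuracy(data))  # → 1.0
```

Leave-one-out is the natural choice when, as here, the labelled dataset is small: every sample is used for testing exactly once without sacrificing training data.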
6702 Military Court’s Jurisdiction over Military Members Who Commit General Crimes under Indonesian Military Judiciary System in Comparison with Other Countries
Authors: Dini Dewi Heniarti
Abstract:
The importance of this study is in understanding how the Indonesian military court asserts its jurisdiction over military members who commit general crimes within the Indonesian military judiciary system, in comparison to other countries. This research employs a normative-juridical approach in combination with historical and comparative-juridical approaches. The research specification is analytical-descriptive in nature, i.e. it describes or outlines the principles, basic concepts, and norms related to the military judiciary system, which are further analyzed within the context of implementation and as inputs for military justice regulation under the Indonesian legal system. The main data used in this research are secondary data, including primary, secondary and tertiary legal sources. The research focuses on secondary data, while primary data are supplementary in nature. The validity of the data is checked using multi-methods, commonly known as triangulation, reflecting the effort to gain an in-depth understanding of the phenomena being studied. Here, the military element is kept intact in the judiciary process, with due observance of the Military Criminal Justice System and the Military Command Development Principle. The Indonesian military judiciary jurisdiction over military members committing general crimes is based on the national legal system and global developments, while taking into account the structure, composition and position of military forces within the state structure. Jurisdiction is formulated by setting forth the substantive norm of crimes that are military in nature. At the level of adjudication jurisdiction, the military court has jurisdiction to adjudicate military personnel who commit general offences. At the level of execution jurisdiction, the military court has jurisdiction to execute the sentence against military members who have been convicted with a final and binding judgement.
The military court's jurisdiction needs to be expanded when the country is in a state of war.
Keywords: Military courts, Jurisdiction, Military members, Military justice system.
6701 Fuzzy Group Decision Making for the Assessment of Health-Care Waste Disposal Alternatives in Istanbul
Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi
Abstract:
Disposal of health-care waste (HCW) is considered an important environmental problem, especially in large cities. Multiple criteria decision making (MCDM) techniques are apt to deal with the quantitative and qualitative considerations of health-care waste management (HCWM) problems. This research proposes a fuzzy multi-criteria group decision making approach with a multilevel hierarchical structure, including qualitative as well as quantitative performance attributes, for evaluating HCW disposal alternatives for Istanbul. Objective weights obtained using the entropy weighting method, as well as subjective weights, are taken into account to determine the importance weighting of the quantitative performance attributes. The results obtained using the proposed methodology are thoroughly analyzed.
Keywords: Entropy weighting method, group decision making, health-care waste management, hierarchical fuzzy multi-criteria decision making.
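The entropy weighting step mentioned above can be sketched for a small decision matrix; the alternatives, criteria, and values below are illustrative assumptions, not the Istanbul data.

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from the Shannon entropy of each column:
    the more an attribute varies across alternatives, the larger its weight."""
    m = len(matrix)            # number of alternatives
    n = len(matrix[0])         # number of criteria
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        raw.append(1.0 - e)                                    # degree of divergence
    s = sum(raw)
    return [w / s for w in raw]

# 3 hypothetical disposal alternatives x 2 quantitative criteria (e.g. cost, capacity)
matrix = [[100, 8], [120, 10], [300, 30]]
w = entropy_weights(matrix)
print([round(x, 3) for x in w])
```

The weights sum to one and can then be combined with subjective weights, as the abstract describes, to score the disposal alternatives.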
6700 Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples
Authors: Abdullah Y. Al-Hossain
Abstract:
This paper considers inference under progressive type-II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using the conjugate priors for the two shape and scale parameters. When the two parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained. We use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator has been obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we make comparisons between these estimators and the maximum likelihood estimators using a Monte Carlo simulation study.
Keywords: Progressive type II censoring, compound Rayleigh failure time distribution, maximum likelihood estimation, Bayes estimation, Lindley's approximation method, Monte Carlo simulation.
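The progressive type-II censoring scheme underlying the paper can be sketched in a Monte Carlo step; the parameter values, removal scheme, and the parameterization S(t) = (beta / (beta + t^2))^alpha assumed for the compound Rayleigh draw are illustrative assumptions.

```python
import math, random

def rcompound_rayleigh(alpha, beta, rng):
    """One compound Rayleigh lifetime by inverse transform, assuming the
    survival function S(t) = (beta / (beta + t**2)) ** alpha."""
    u = 1.0 - rng.random()     # uniform in (0, 1]
    return math.sqrt(beta * (u ** (-1.0 / alpha) - 1.0))

def progressive_type2_sample(lifetimes, removals, rng):
    """Progressive type-II censoring: at the i-th observed failure, withdraw
    removals[i] surviving units at random; len(lifetimes) must equal
    len(removals) + sum(removals)."""
    alive = list(lifetimes)
    observed = []
    for r in removals:
        t = min(alive)                    # next failure among units on test
        observed.append(t)
        alive.remove(t)
        for _ in range(r):                # withdraw r random survivors
            alive.remove(rng.choice(alive))
    return observed

rng = random.Random(42)
lifetimes = [rcompound_rayleigh(alpha=2.0, beta=1.5, rng=rng) for _ in range(10)]
sample = progressive_type2_sample(lifetimes, removals=[1, 1, 1, 1, 1], rng=rng)
print(len(sample), sample == sorted(sample))  # → 5 True
```

Samples generated this way feed the Monte Carlo comparison of ML and Bayes estimators that the abstract mentions.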
6699 Embedded Throughput Improving of Low-rate EDR Packets for Lower-latency
Authors: M. A. M. El-Bendary, A. E. Abu El-Azm, N. A. El-Fishawy, F. Shawky, F. E. El-Samie
Abstract:
With the increasing use of wireless devices in different fields such as medical devices and industrial applications, this paper presents a method for simplifying Bluetooth packets while enhancing throughput. The paper studies a vital issue in wireless communications, namely the throughput of data over wireless networks. Bluetooth and ZigBee are Wireless Personal Area Network (WPAN) technologies. Taking the competition between these two systems into consideration, the paper proposes different schemes to improve the throughput of a Bluetooth network over a reliable channel. The proposal depends on the Channel Quality Driven Data Rate (CQDDR) rules, which determine the suitable packet for the transmission process according to the channel conditions. The proposed packet is studied over additive white Gaussian noise (AWGN) and fading channels. The experimental results reveal the capability of extending the PL length by 8, 16, and 24 bytes for classic and EDR packets, respectively. Also, the proposed method is suitable for low-throughput Bluetooth.
Keywords: Bluetooth, throughput, adaptive packets, EDR packets, CQDDR, low latency, channel condition.
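The CQDDR idea referred to above (pick the packet type from the measured channel quality) can be sketched as a simple lookup; the BER thresholds are illustrative assumptions, not values from the Bluetooth specification, while the payload capacities are the standard EDR ACL figures.

```python
def choose_packet(ber):
    """Channel-quality-driven packet choice: a long, high-rate packet on a
    clean channel, a short robust one on a noisy channel. The BER cut-offs
    here are illustrative, not specification values."""
    table = [            # (name, payload bytes) from the Bluetooth EDR ACL types
        ("3-DH5", 1021),
        ("2-DH5", 679),
        ("2-DH3", 367),
        ("2-DH1", 54),
    ]
    if ber < 1e-5:
        return table[0]
    if ber < 1e-4:
        return table[1]
    if ber < 1e-3:
        return table[2]
    return table[3]

print(choose_packet(1e-6)[0], choose_packet(5e-3)[0])  # → 3-DH5 2-DH1
```

The trade-off is the one the abstract exploits: longer payloads raise throughput on good channels, but shorter packets are less likely to be corrupted and retransmitted on bad ones.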
6698 Solution of Optimal Reactive Power Flow using Biogeography-Based Optimization
Authors: Aniruddha Bhattacharya, Pranab Kumar Chattopadhyay
Abstract:
Optimal reactive power flow is an optimization problem with one or more objectives, such as minimizing the active power losses for a fixed generation schedule. The control variables are generator bus voltages, transformer tap settings, and the reactive power output of the compensating devices placed on different bus bars. The Biogeography-Based Optimization (BBO) technique has been applied to solve different kinds of optimal reactive power flow problems subject to operational constraints like the power balance constraint, line flow and bus voltage limits, etc. BBO searches for the global optimum mainly through two steps: migration and mutation. In the present work, BBO has been applied to solve the optimal reactive power flow problems on the IEEE 30-bus and standard IEEE 57-bus power systems for minimization of active power loss. The superiority of the proposed method has been demonstrated. Considering the quality of the solution obtained, the proposed method seems to be a promising one for solving these problems.
Keywords: Active power loss, Biogeography-Based Optimization, migration, mutation, optimal reactive power flow.
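The migration and mutation steps named above can be sketched in a minimal BBO loop; the population size, rates, and the sphere objective standing in for a power-loss function are illustrative assumptions, not the paper's IEEE test-system setup.

```python
import random

def bbo_minimize(objective, bounds, pop_size=20, generations=100, pmutate=0.05, seed=1):
    """Minimal Biogeography-Based Optimization sketch: habitats share features
    via migration (good solutions emigrate features, poor ones immigrate them)
    plus random mutation, with elitism on the best habitat."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                    # rank habitats by suitability
        lam = [(i + 1) / pop_size for i in range(pop_size)]   # immigration rates
        mu = [1 - l for l in lam]                             # emigration rates
        new_pop = [pop[0][:]]                      # elitism: keep the best habitat
        for i in range(1, pop_size):
            h = pop[i][:]
            for d in range(dim):
                if rng.random() < lam[i]:          # migration: immigrate a feature
                    src = rng.choices(range(pop_size), weights=mu)[0]
                    h[d] = pop[src][d]
                if rng.random() < pmutate:         # mutation: random replacement
                    lo, hi = bounds[d]
                    h[d] = rng.uniform(lo, hi)
            new_pop.append(h)
        pop = new_pop
    return min(pop, key=objective)

# toy stand-in for an active-power-loss objective: the sphere function
best = bbo_minimize(lambda x: sum(v * v for v in x), bounds=[(-5, 5)] * 3)
print(round(sum(v * v for v in best), 4))
```

In the actual problem the objective would be the network loss returned by a power-flow solver, with bounds on voltages, tap settings, and compensator outputs.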
6697 Crystalline Structure of Starch Based Nano Composites
Authors: Farid Amidi Fazli, Afshin Babazadeh, Farnaz Amidi Fazli
Abstract:
In contrast with the literal meaning of nano, researchers have achieved great advances in this area, and every day more nanomaterials are being introduced to the market. After a long period of application of fossil-based plastics, the accumulation of their waste now seems to be a big problem for the environment. On the other hand, mankind pays more attention to safety and the living environment. Replacing common plastic packaging materials with degradable ones that degrade faster and convert to non-dangerous components like water and carbon dioxide is therefore more attractive; these new materials are based on the renewable and inexpensive sources of starch and cellulose. However, their functional properties are not suitable for packaging as such. At this point, nanotechnology has an important role. Utilizing nanomaterials in the polymer structure will improve its mechanical and physical properties; nanocrystalline cellulose (NCC) has this ability. This work employed a chemical method to produce NCC and a starch bionanocomposite containing NCC. The X-ray diffraction technique was used to characterize the obtained materials. Results showed that the applied method is suitable and applicable for NCC production.
Keywords: Biofilm, cellulose, nanocomposite, starch.
6696 Ratio-Dependent Food Chain Models with Three Trophic Levels
Abstract:
In this paper we study a food chain model with three trophic levels and a Michaelis-Menten type ratio-dependent functional response. A distinctive feature of this model is the sensitive dependence of the dynamical behavior on the initial populations and the real-world parameters. The stability of the equilibrium points is also investigated.
Keywords: Food chain, Ratio dependent models, Three level models.
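A minimal numerical sketch of such a three-level chain is given below, under an assumed generic parameterization with ratio-dependent Michaelis-Menten responses of the form x/(a*y + x) (the parameter values and exact model form are illustrative assumptions, not taken from the paper), integrated with a forward Euler step.

```python
def food_chain_step(x, y, z, dt, p):
    """One Euler step for prey x, predator y, top predator z with
    Michaelis-Menten ratio-dependent responses; parameters are illustrative."""
    fxy = x / (p["a1"] * y + x) if (x or y) else 0.0   # prey -> predator response
    fyz = y / (p["a2"] * z + y) if (y or z) else 0.0   # predator -> top response
    dx = x * (1 - x) - p["c1"] * y * fxy
    dy = -p["d1"] * y + p["e1"] * y * fxy - p["c2"] * z * fyz
    dz = -p["d2"] * z + p["e2"] * z * fyz
    return x + dt * dx, y + dt * dy, z + dt * dz

p = {"a1": 0.5, "c1": 0.6, "d1": 0.3, "e1": 0.8,
     "a2": 0.5, "c2": 0.4, "d2": 0.2, "e2": 0.5}
state = (0.8, 0.3, 0.1)   # initial prey, predator, top-predator densities
for _ in range(2000):
    state = food_chain_step(*state, dt=0.01, p=p)
print(tuple(round(s, 4) for s in state))
```

Rerunning with slightly different initial densities illustrates the sensitive dependence on initial populations that the abstract highlights.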
6695 Fabrication of Poly(Ethylene Oxide)/Chitosan/Indocyanine Green Nanoprobe by Co-Axial Electrospinning Method for Early Detection
Authors: Zeynep R. Ege, Aydin Akan, Faik N. Oktar, Betul Karademir, Oguzhan Gunduz
Abstract:
Early detection of cancer could preserve human life and its quality in insidious cases through advanced biomedical imaging techniques. Designing a targeted detection system is necessary in order to protect healthy cells. Electrospun nanofibers are efficient and targetable nanocarriers with important properties such as nanometric diameter, mechanical strength, elasticity, porosity, and a high surface-area-to-volume ratio. In the present study, the organic dye indocyanine green (ICG) was stabilized and encapsulated in a polymer matrix of polyethylene oxide (PEO) and chitosan (CHI) multilayer nanofibers via a co-axial electrospinning method in one step. The co-axial electrospun nanofibers were characterized morphologically (SEM) and at the molecular level (FT-IR), and the entrapment efficiency of ICG was assessed (confocal imaging). The controlled release profile of the PEO/CHI/ICG nanofiber was also evaluated for up to 40 hours.
Keywords: Chitosan, coaxial electrospinning, controlled releasing, indocyanine green, nanoprobe, polyethylene oxide.
6694 A New Graphical Password: Combination of Recall & Recognition Based Approach
Authors: Md. Asraful Haque, Babbar Imam
Abstract:
Information security is one of the most pressing problems of present times. To cope with the security of information, passwords were introduced. Alphanumeric passwords are the most popular authentication method and are still in use today. However, text-based passwords suffer from various drawbacks: they are easy to crack through dictionary attacks, brute-force attacks, keyloggers, social engineering, etc. The graphical password is a good replacement for the text password. Psychological studies show that humans can remember pictures better than text, so graphical passwords are easy to remember. At the same time, for this very reason, most graphical passwords are prone to shoulder surfing. In this paper, we suggest a shoulder-surfing-resistant graphical password authentication method. The system is a combination of recognition-based and pure recall-based techniques. The proposed scheme can be useful for smart handheld devices (like smartphones, PDAs, iPods, iPhones, etc.), which are handier and more convenient to use than traditional desktop computer systems.
Keywords: Authentication, Graphical Password, Text Password, Information Security, Shoulder-surfing.
6693 Analysis of Web User Identification Methods
Authors: Renáta Iváncsy, Sándor Juhász
Abstract:
Web usage mining has become a popular research area, as a huge amount of data is available online. These data can be used for several purposes, such as web personalization, web structure enhancement, web navigation prediction, etc. However, the raw log files are not directly usable; they have to be preprocessed in order to transform them into a suitable format for different data mining tasks. One of the key issues in the preprocessing phase is to identify web users. Identifying users based on web log files is not a straightforward problem; thus various methods have been developed. There are several difficulties that have to be overcome, such as client-side caching, changing and shared IP addresses, and so on. This paper presents three different methods for identifying web users. Two of them are the most commonly used methods in web log mining systems, whereas the third one is our novel approach that uses a complex cookie-based method to identify web users. Furthermore, we also take steps towards identifying the individuals behind the impersonal web users. To demonstrate the efficiency of the new method we developed an implementation called the Web Activity Tracking (WAT) system that aims at a more precise distinction of web users based on log data. We present some statistical analysis created by the WAT on real data about the behavior of Hungarian web users, and a comprehensive analysis and comparison of the three methods.
Keywords: Data preparation, tracking individuals, web user identification, web usage mining.
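The contrast between cookie-based identification and the weaker IP/user-agent heuristic discussed above can be sketched as follows; the class and its interface are illustrative assumptions, not the WAT system's actual design.

```python
import itertools

class UserIdentifier:
    """Assign user ids to preprocessed log entries. A tracking cookie, when
    present, is the strongest signal; otherwise fall back to the weaker
    (IP address, user agent) heuristic, which shared IPs can confuse."""
    def __init__(self):
        self._ids = {}
        self._counter = itertools.count(1)

    def identify(self, ip, user_agent, cookie=None):
        key = ("cookie", cookie) if cookie else ("heuristic", ip, user_agent)
        if key not in self._ids:
            self._ids[key] = next(self._counter)
        return self._ids[key]

ident = UserIdentifier()
a = ident.identify("10.0.0.1", "Firefox", cookie="abc123")
b = ident.identify("10.0.0.2", "Firefox", cookie="abc123")  # same cookie, new IP
c = ident.identify("10.0.0.1", "Chrome")                    # no cookie: IP + agent
print(a, b, c)  # → 1 1 2: the cookie keeps a == b even though the IP changed
```

This is exactly the failure mode the abstract lists: without the cookie, the changed IP address in the second request would have split one user into two.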
6692 An Efficient 3D Animation Data Reduction Using Frame Removal
Authors: Jinsuk Yang, Choongjae Joo, Kyoungsu Oh
Abstract:
Existing methods, in which the animation data of all frames are stored and reproduced as with vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been extensively studied, and we propose a new method as follows. First, we find and remove frames in which motion changes are small, and store only the animation data of the remaining frames (those involving large motion changes). When playing the animation, the removed frame areas are reconstructed using interpolation of the remaining frames. Our key contribution is to calculate the accelerations of the joints in individual frames and the standard deviations of those accelerations, using the joint location information of the relevant 3D model, in order to find and delete frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality that is not much lower than that of the original animations. Therefore, our method is expected to be useful in mobile device environments or other environments in which memory sizes are limited.
Keywords: Data Reduction, Interpolation, Vertex Animation, 3D Animation.
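The acceleration-and-threshold pipeline described above can be sketched for a single-joint toy animation; the standard-deviation cutoff and linear interpolation follow the abstract, while the data and threshold scale are illustrative assumptions.

```python
from statistics import pstdev

def reduce_frames(frames, threshold_scale=1.0):
    """Keep only frames whose joint accelerations are large.
    frames: list of per-frame joint positions, each a list of (x, y, z)."""
    def accel(i):
        # sum of second finite differences over all joints and coordinates
        total = 0.0
        for j in range(len(frames[0])):
            for k in range(3):
                total += abs(frames[i - 1][j][k] - 2 * frames[i][j][k]
                             + frames[i + 1][j][k])
        return total
    acc = [accel(i) for i in range(1, len(frames) - 1)]
    cutoff = threshold_scale * pstdev(acc)
    keep = [0] + [i for i, a in zip(range(1, len(frames) - 1), acc)
                  if a >= cutoff] + [len(frames) - 1]
    return keep

def reconstruct(frames, keep):
    """Rebuild removed frames by linear interpolation between kept frames."""
    out = []
    for i in range(len(frames)):
        if i in keep:
            out.append(frames[i])
            continue
        lo = max(k for k in keep if k < i)
        hi = min(k for k in keep if k > i)
        t = (i - lo) / (hi - lo)
        out.append([tuple((1 - t) * a + t * b for a, b in zip(p, q))
                    for p, q in zip(frames[lo], frames[hi])])
    return out

# one joint moving linearly with a sudden jump at frame 5
frames = [[(float(p), 0.0, 0.0)] for p in [0, 1, 2, 3, 4, 10, 16, 17, 18, 19]]
keep = reduce_frames(frames)
restored = reconstruct(frames, keep)
print(keep, len(keep) / len(frames))  # kept frames and compression ratio
```

Because this toy motion is piecewise linear, only the frames around the kink survive the cut, and interpolation reconstructs the removed frames almost exactly.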
6691 Stochastic Modeling and Combined Spatial Pattern Analysis of Epidemic Spreading
Authors: S. Chadsuthi, W. Triampo, C. Modchang, P. Kanthang, D. Triampo, N. Nuttavut
Abstract:
We present an analysis of the spatial patterns of generic disease spread simulated by a stochastic long-range correlation SIR model, where individuals can be infected at long distance with a power-law distribution. We integrated various tools, namely perimeter, circularity, fractal dimension, and aggregation index, to characterize and investigate spatial pattern formation. Our primary goal was to understand, for a given model of interest, which tool has an advantage over the others and to what extent. We found that perimeter and circularity give information only in the case of strong correlation, while the fractal dimension and aggregation index exhibit the growth rule of pattern formation, depending on the degree of the correlation exponent (β). The aggregation index is used as an alternative method to describe the degree of the pathogenic ratio (α). This study may provide a useful approach to characterizing and analyzing the pattern formation of epidemic spreading.
Keywords: Spatial pattern epidemics, aggregation index, fractal dimension, stochastic, long-range epidemics.
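The long-range infection mechanism described above can be sketched on a one-dimensional ring; the lattice size, rates, and correlation exponent are illustrative assumptions (the study itself analyzes two-dimensional patterns).

```python
import random

def simulate_sir(n=200, beta_exp=2.0, p_infect=0.8, p_recover=0.1,
                 steps=30, seed=3):
    """Stochastic SIR on a 1-D ring where an infective reaches a site at
    distance d with probability proportional to d**(-beta_exp)."""
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                         # a single initial infective
    dists = list(range(1, n // 2))
    weights = [d ** (-beta_exp) for d in dists]   # power-law jump kernel
    for _ in range(steps):
        infectives = [i for i, s in enumerate(state) if s == "I"]
        for i in infectives:
            d = rng.choices(dists, weights=weights)[0]
            j = (i + rng.choice((-d, d))) % n     # long-range contact
            if state[j] == "S" and rng.random() < p_infect:
                state[j] = "I"
            if rng.random() < p_recover:
                state[i] = "R"
        if not any(s == "I" for s in state):
            break
    return state.count("S"), state.count("I"), state.count("R")

s, i, r = simulate_sir()
print(s, i, r)
```

Raising beta_exp concentrates jumps at short range and produces the compact, strongly correlated clusters for which, per the abstract, perimeter and circularity become informative.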
6690 Dynamic Soil-Structure Interaction Analysis of Reinforced Concrete Buildings
Authors: Abdelhacine Gouasmia, Abdelhamid Belkhiri, Allaeddine Athmani
Abstract:
The objective of this paper is to evaluate the effects of soil-structure interaction (SSI) on the modal characteristics and on the dynamic response of current structures. The focus is on the overall behaviour of a real five-storey reinforced concrete (R/C) building typically encountered in Algeria. Sensitivity studies are undertaken in order to examine the effects of the frequency content of the input motion, the frequency of the soil-structure system, and the rigidity and depth of the soil layer on the dynamic response of such structures. This investigation indicates that the rigidity of the soil layer is the predominant factor in soil-structure interaction, and increasing it would definitely reduce the deformation in the R/C structure. On the other hand, increasing the period of the underlying soil will cause an increase in the lateral displacements at story levels and create irregularity in the distribution of story shears. Possible resonance between the frequency content of the input motion and the soil could also play an important role in increasing the structural response.
Keywords: Direct method, finite element method, foundation, R/C frame, soil-structure interaction.
6689 Substitution of Phosphate with Liquid Smoke as a Binder on the Quality of Chicken Nugget
Authors: E. Abustam, M. Yusuf, M. I. Said
Abstract:
One of the functional properties of meat is the decrease of water holding capacity (WHC) during rigor mortis. At pre-rigor, WHC is higher than at post-rigor. The decline of WHC has implications for other functional properties, such as increased cooking loss and reduced yield, resulting in lower elasticity and compactness of processed meat products. In many cases, the addition of phosphate to meat will improve its functional properties, such as WHC. Furthermore, liquid smoke has also been known to increase the WHC of fresh meat. For food safety reasons, liquid smoke was used in the present study as a substitute for phosphate in the production of chicken nuggets. This study aimed to determine the effect of substituting phosphate with liquid smoke on the quality of nuggets made from post-rigor chicken thigh and breast. The study was arranged in a completely randomized design with a 2x3 factorial pattern and three replications. Factor 1 was the thigh and breast parts of the chicken, and factor 2 was the level of liquid smoke substituted for phosphate (0%, 50%, and 100%). The thigh and breast of post-rigor broilers aged 40 days were used as the main raw materials in making the nuggets. Auxiliary materials were phosphate, liquid smoke at a concentration of 10%, tapioca flour, salt, eggs and ice. The variables measured were flexibility, shear force value, cooking loss, elasticity level, and preference. The results of this study showed that the substitution of phosphate with 100% liquid smoke resulted in high-quality nuggets. Likewise, the breast part of the meat produced higher-quality nuggets than the thigh part. This is indicated by high elasticity, low shear force value, low cooking loss, and a high level of preference for the nuggets. It can be concluded that liquid smoke can be used as a binder in making nuggets from post-rigor chicken.
Keywords: Liquid smoke, nugget quality, phosphate, post-rigor.
6688 Complexity Analysis of Some Known Graph Coloring Instances
Authors: Jeffrey L. Duffany
Abstract:
Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
Keywords: Graph coloring, complexity, algorithm.
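The density measure named above, together with a simple greedy coloring baseline (a stand-in for illustration, not the paper's adjacency-matrix-squaring solver), can be sketched as:

```python
def density(n, edges):
    """Edge density: fraction of the n*(n-1)/2 possible edges present."""
    return 2 * len(edges) / (n * (n - 1))

def greedy_coloring(n, edges):
    """Assign each vertex the smallest color unused by its neighbours."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 5-cycle: density 0.5, chromatic number 3 (odd cycles need three colors)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cols = greedy_coloring(5, edges)
print(density(5, edges), max(cols.values()) + 1)  # → 0.5 3
```

Running such a baseline over a benchmark set and recording the color counts alongside measures like density is the kind of instance-by-instance comparison the abstract performs.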
6687 Mathematical Model and Control Strategy on DQ Frame for Shunt Active Power Filters
Authors: P. Santiprapan, K-L. Areerak, K-N. Areerak
Abstract:
This paper presents the mathematical model and control strategy on the DQ frame of a shunt active power filter. The structure of the shunt active power filter is a voltage source inverter (VSI). Pulse width modulation (PWM) with a PI controller is used. The concept of applying the DQ frame to the shunt active power filter is described. Moreover, the design of the PI controllers for the two current loops and the one voltage loop is fully explained. The DQ axis with Fourier (DQF) method is applied to calculate the reference currents on the DQ frame. The simulation results show that the control strategy and the design method presented in the paper provide good performance of the shunt active power filter. Moreover, the %THD of the source currents after compensation complies with IEEE Std. 519-1992.
Keywords: Shunt active power filter, mathematical model, DQ control strategy, DQ axis with Fourier, pulse width modulation control.
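For context, mapping measured three-phase quantities into the rotating DQ frame uses the Park transformation. A minimal sketch in one common amplitude-invariant convention (a textbook form, not code from the paper) is:

```python
import math

def abc_to_dq(ia, ib, ic, theta):
    # amplitude-invariant Park transformation (one common convention):
    # projects the three phase currents onto the rotating d and q axes
    d = (2.0 / 3.0) * (ia * math.cos(theta)
                       + ib * math.cos(theta - 2.0 * math.pi / 3.0)
                       + ic * math.cos(theta + 2.0 * math.pi / 3.0))
    q = -(2.0 / 3.0) * (ia * math.sin(theta)
                        + ib * math.sin(theta - 2.0 * math.pi / 3.0)
                        + ic * math.sin(theta + 2.0 * math.pi / 3.0))
    return d, q
```

A balanced fundamental maps to constant d and zero q, which is what makes PI control of the current loops practical on this frame.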
6686 Investigating Transformations in the Cartesian Plane Using Spreadsheets
Authors: D. Allison, A. Didenko, G. Miller
Abstract:
The link between coordinate transformations in the plane and their effects on the graph of a function can be difficult for students studying college-level mathematics to comprehend. To solidify this conceptual link in the mind of a student, Microsoft Excel can serve as a convenient graphing tool and pedagogical aid. The authors of this paper describe how various transformations and their related functional symmetry properties can be graphically displayed with an Excel spreadsheet.
Keywords: Mathematics education, Microsoft Excel spreadsheet, technology.
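The spreadsheet idea can be sketched outside Excel as well. Here is a hypothetical column-by-column tabulation (our own example function f(x) = x^2, not one from the paper) of a horizontal shift, a vertical shift, and a reflection:

```python
def f(x):
    # example function; any single-variable function works here
    return x ** 2

# columns: x, f(x), f(x - 1) (shift right by 1),
# f(x) + 2 (shift up by 2), f(-x) (reflection in the y-axis)
table = [(x, f(x), f(x - 1), f(x) + 2, f(-x)) for x in range(-3, 4)]
```

Plotting each column against x, as the spreadsheet does, makes the geometric effect of each transformation visible; for this even function the reflection column coincides with f(x) itself.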
6685 Greek Compounds: A Challenging Case for the Parsing Techniques of PC-KIMMO v.2
Authors: Angela Ralli, Eleni Galiotou
Abstract:
In this paper we describe the recognition process of Greek compound words using the PC-KIMMO software. We try to show certain limitations of the system with respect to the principles of compound formation in Greek. Moreover, we discuss the computational processing of phenomena such as stress and syllabification which are indispensable for the analysis of such constructions and we try to propose linguistically-acceptable solutions within the particular system.
Keywords: Morpho-phonological parsing, compound words, two-level morphology, natural language processing.
6684 A New Fuzzy DSS/ES for Stock Portfolio Selection using Technical and Fundamental Approaches in Parallel
Authors: H. Zarei, M. H. Fazel Zarandi, M. Karbasian
Abstract:
A Decision Support System/Expert System for stock portfolio selection is presented in which, in the first phase, both technical and fundamental data are used to estimate technical and fundamental return and risk; in the second phase, the estimated values are aggregated with the investor's preferences to produce a suitable stock portfolio. In the first phase, there are two expert systems, each responsible for technical or fundamental estimation. In the technical expert system, twenty-seven candidate variables are identified for each stock, and the effective variables are selected using a rough-sets-based clustering method (RC). Next, for each stock, two fuzzy rule bases are developed with the fuzzy C-means method and the Takagi-Sugeno-Kang (TSK) approach: one for return estimation and the other for risk. The parameters of the rule bases are then tuned with the backpropagation method. In parallel, for the fundamental expert system, fuzzy rule bases in the form of "IF-THEN" rules were identified through brainstorming with stock market experts, with input data derived from financial statements; as a result, two fuzzy rule bases were generated for all the stocks, one for return and the other for risk. In the second phase, user preferences are represented by four criteria obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective user preference values. Finally, a fuzzy rule base with four rules processes these values and produces a ranking score for each stock, leading to a satisfactory portfolio for the user. The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.
Keywords: Stock portfolio selection, fuzzy rule-base expert systems, financial decision support systems, technical analysis, fundamental analysis.
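The final aggregation step can be illustrated with a deliberately simplified crisp scoring rule; the paper itself uses a four-rule fuzzy rule base, and the weight vector below is a hypothetical stand-in for the questionnaire-derived user preferences:

```python
def ranking_score(tech_ret, tech_risk, fund_ret, fund_risk, w):
    # w: four user-preference weights; estimated returns raise the
    # score, estimated risks lower it (crisp simplification)
    return (w[0] * tech_ret - w[1] * tech_risk
            + w[2] * fund_ret - w[3] * fund_risk)

def rank_stocks(estimates, w):
    # estimates: {stock: (tech_ret, tech_risk, fund_ret, fund_risk)}
    # returns stock names sorted from best to worst score
    return sorted(estimates, key=lambda s: ranking_score(*estimates[s], w),
                  reverse=True)
```

In the actual system, fuzzy rules replace this linear combination, but the input/output shape of the ranking stage is the same.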
6683 Pyrolysis Characteristics and Kinetics of Macroalgae Biomass Using Thermogravimetric Analyzer
Authors: Zhao Hui, Yan Huaxiao, Zhang Mengmeng, Qin Song
Abstract:
The pyrolysis characteristics and kinetics of seven marine biomass samples, namely fixed Enteromorpha clathrata, floating Enteromorpha clathrata, Ulva lactuca L., Zosterae Marinae L., Thallus Laminariae, Asparagus schoberioides Kunth and Undaria pinnatifida (Harv.), were studied by thermogravimetric analysis. Cornstalk, a grass biomass, and sawdust, a lignocellulosic biomass, were used as references. The basic pyrolysis characteristics were studied using TG-DTG-DTA curves. The results showed three stages (dehydration, rapid weight loss, and slow weight loss) during the pyrolysis of the samples. The Tmax of the marine biomass was significantly lower than that of the two terrestrial biomasses. Zosterae Marinae L. had relatively high pyrolysis stability, whereas floating Enteromorpha clathrata had the lowest pyrolysis stability and good combustion characteristics. The corresponding activation energy E and frequency factor A were obtained by the Coats-Redfern method. It was found that the pyrolysis reaction mechanism functions of the three kinds of biomass differ.
Keywords: macroalgae biomass, pyrolysis, thermogravimetric analysis, thermolysis kinetics.
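The Coats-Redfern evaluation can be sketched for a first-order reaction model, fitting ln[-ln(1-α)/T²] against 1/T so that the slope of the line gives -E/R. This is the generic textbook form of the method, not the paper's own fitting code:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_fit(T, alpha):
    # first-order model: y = ln[-ln(1 - a)/T^2] plotted against x = 1/T
    # is a straight line whose slope is -E/R
    x = [1.0 / t for t in T]
    y = [math.log(-math.log(1.0 - a) / (t * t)) for t, a in zip(T, alpha)]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope * R  # activation energy E in J/mol
```

Given conversion fractions α measured at a series of temperatures T from the TG curve, the fit returns E directly; the intercept would similarly yield the frequency factor A.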
6682 N-Grams: A Tool for Repairing Word Order Errors in Ill-formed Texts
Authors: Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, Konstantinos Mamouras
Abstract:
This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. One possible way to reorder the words is to generate all permutations; the problem is that for a sentence of N words the number of permutations is N!. The novelty of this method is the use of an efficient confusion matrix technique for reordering the words. The confusion matrix technique is designed to reduce the search space of permuted sentences, and this reduction is achieved using the statistical inference of N-grams. The results of this technique are very promising and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes, a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
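The core scoring idea can be sketched as a brute-force search over permutations of a short sentence, counting hits against a known trigram set. This is only the naive baseline the paper improves upon; the confusion-matrix pruning itself is not reproduced here, and the trigram set below is a toy example:

```python
from itertools import permutations

def best_order(words, trigrams):
    # brute-force reordering: among all N! permutations, return the one
    # with the most trigrams found in the language model's trigram set
    # (feasible only for short sentences; the paper prunes this search)
    def hits(seq):
        return sum(1 for i in range(len(seq) - 2)
                   if tuple(seq[i:i + 3]) in trigrams)
    return max(permutations(words), key=hits)
```

With real language-model counts instead of a set, the same maximization picks the most fluent ordering; the 98.16% reduction reported above comes from shrinking the candidate set before this scoring step.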
Keywords: Permutations filtering, statistical language model, N-grams, word order errors, TOEFL.
6681 Distribution of Macrobenthic Polychaete Families in Relation to Environmental Parameters in North West Penang, Malaysia
Authors: Mohammad Gholizadeh, Khairun Yahya, Anita Talib, Omar Ahmad
Abstract:
The distribution of macrobenthic polychaetes along the coastal waters of Penang National Park was surveyed to estimate the effect of various environmental parameters at three stations (200 m, 600 m and 1200 m from the shoreline) during six sampling months, from June 2010 to April 2011. The use of polychaetes in descriptive ecology is examined in the light of this investigation, particularly with regard to soft-bottom habitats. Polychaetes, often associated with the notion of opportunistic species able to proliferate after an increase in organic matter, play a significant role in assessing affected soft-bottom habitats. The objective of this survey was to investigate environmental stress on the soft-bottom polychaete community along Teluk Ketapang and Pantai Acheh (Penang National Park) over a one-year period. Variations in the polychaete community were evaluated using univariate and multivariate methods. The results of PCA analysis showed a positive relation between macrobenthic community structure and environmental parameters such as sediment particle size and organic matter in the coastal water. A total of 604 individuals were examined, grouped into 23 families. The family Nereidae was the most abundant (22.68%), followed by Spionidae (22.02%), Hesionidae (12.58%), Nephtyidae (9.27%) and Orbiniidae (8.61%). It is noticeable that good results can only be obtained on the basis of good taxonomic resolution. We propose that, in monitoring surveys, operative time can be optimized not only by working at a higher taxonomic level on the entire macrobenthic data set, but also by choosing an especially indicative group and working at a lower taxonomic level.
Keywords: Polychaete families, environmental parameters, bioindicators, Pantai Acheh, Teluk Ketapang.
6680 Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA
Authors: Fumito Yoshikawa, Takumi Kobayashi, Kenji Watanabe, Nobuyuki Otsu
Abstract:
Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of sporting activities. Game analysis of badminton, as with other racket sports, requires detecting the start and end of each rally in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of shift-invariance and additivity, and requires no prior knowledge. Specific scenes, such as the serve, are then detected by linear regression (MRA) from the CHLAC features. To demonstrate the effectiveness of our method, experiments were conducted on video sequences of five badminton matches captured by a single ceiling camera. The average precision and recall rates for serve scene detection were 95.1% and 96.3%, respectively.
Keywords: Badminton, CHLAC, MRA, video-based motion detection.
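For reference, the reported precision and recall follow the standard definitions over true positives, false positives and false negatives; the counts in the usage example are hypothetical, not figures from the paper:

```python
def precision_recall(tp, fp, fn):
    # precision: fraction of detected serve scenes that are real serves
    # recall: fraction of real serve scenes that were detected
    return tp / (tp + fp), tp / (tp + fn)
```

For example, 9 correct detections with 1 false alarm and 3 missed serves give a precision of 0.9 and a recall of 0.75.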
6679 Short Time Identification of Feed Drive Systems using Nonlinear Least Squares Method
Authors: M.G.A. Nassef, Linghan Li, C. Schenck, B. Kuhfuss
Abstract:
The design and modeling of nonlinear systems require knowledge of all internal parameters and effects. An empirical alternative is to identify the system's transfer function from input and output data as a black-box model. This paper presents a procedure using a least squares algorithm for the identification of the coefficients of a feed drive system in the time domain, using a reduced model based on windowed input and output data. The command and response of the axis are first measured over the first 4 ms, and least squares is then applied to estimate the transfer function coefficients for this displacement segment. From the identified coefficients, the subsequent command response segments are estimated. The obtained results reveal a considerable potential of the least squares method to identify the system's time-based coefficients and to predict the command response accurately, as compared to measurements.
Keywords: Feed drive systems, least squares algorithm, online parameter identification, short time window.
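A hedged sketch of the idea for a first-order discrete-time model fitted to a windowed input/output record: the paper identifies a reduced feed drive model of unspecified order, so the model structure and names below are our own illustration:

```python
import math

def identify_arx(u, y):
    # least squares fit of the first-order ARX model
    #   y[k] = a * y[k-1] + b * u[k-1]
    # over the whole record; the 2x2 normal equations are
    # solved in closed form
    Syy = Syu = Suu = Sy = Su = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        Syy += y1 * y1
        Syu += y1 * u1
        Suu += u1 * u1
        Sy += y1 * y[k]
        Su += u1 * y[k]
    det = Syy * Suu - Syu * Syu
    a = (Sy * Suu - Su * Syu) / det
    b = (Su * Syy - Sy * Syu) / det
    return a, b
```

Applied to a short measured window of command u and response y, the fitted coefficients can then simulate the next response segments, mirroring the prediction step described above.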
6678 Comparative Analysis of DTC Based Switched Reluctance Motor Drive Using Torque Equation and FEA Models
Authors: P. Srinivas, P. V. N. Prasad
Abstract:
Since torque ripple is the main cause of noise and vibrations, the performance of Switched Reluctance Motor (SRM) can be improved by minimizing its torque ripple using a novel control technique called Direct Torque Control (DTC). In DTC technique, torque is controlled directly through control of magnitude of the flux and change in speed of the stator flux vector. The flux and torque are maintained within set hysteresis bands.
The DTC of SRM is analyzed by two methods. In one method, the actual torque is computed by conducting Finite Element Analysis (FEA) on the design specifications of the motor. In the other method, the torque is computed by Simplified Torque Equation. The variation of peak current, average current, torque ripple and speed settling time with Simplified Torque Equation model is compared with FEA based model.
Keywords: Direct Torque Control, Simplified Torque Equation, Finite Element Analysis, Torque Ripple.
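The hysteresis-band mechanism mentioned above can be sketched as a three-level torque comparator, a generic DTC-style element; the band value and names in the example are illustrative, not taken from the paper:

```python
def torque_comparator(torque_error, band, prev_out):
    # three-level hysteresis comparator: demand more torque above
    # the band, less torque below it, and hold the previous output
    # while the error stays inside the band
    if torque_error > band:
        return 1
    if torque_error < -band:
        return -1
    return prev_out
```

A similar two-level comparator keeps the stator flux magnitude inside its own band, and the two comparator outputs together select the inverter switching state.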