Search results for: direct vs. indirect values.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3223

583 Production of Pre-Reduced Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal

Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Boiler coals with low fixed carbon and high ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets by using syngas. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents such as CO, CO2, H2, and CH4 is generated. Iron ore nuggets are reduced by this syngas, so there is no direct contact between the iron ore nuggets and coal ash, which helps to minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace with a reduction facility is evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to find the volatile fraction present in boiler grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature gradient measurements to minimize the furnace cost by applying one heating coil. The nuggets are reduced in the devolatisation furnace at three different temperatures and for three different durations. The pre-reduced nuggets are subjected to analytical weight loss calculations to evaluate the extent of reduction. The phase and surface morphology of the pre-reduced samples are characterized using X-ray diffractometry (XRD), energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer and chemical analysis. The degree of metallization of the reduced nuggets is 78.9% using boiler grade coal. The pre-reduced nuggets, with their lower sulphur content, could be used in the blast furnace as raw material or coolant, which would reduce the furnace's consumption of high-quality coke owing to their pre-reduced character. They can also be used as coolant in a Basic Oxygen Furnace (BOF).
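The weight-loss bookkeeping behind "extent of reduction" and "degree of metallization" can be sketched as below. The formulas are the standard definitions; the sample masses are invented for illustration and are not data from the paper.

```python
def extent_of_reduction(initial_mass_g, reduced_mass_g, removable_oxygen_g):
    """Fraction of removable oxygen actually expelled during reduction."""
    return (initial_mass_g - reduced_mass_g) / removable_oxygen_g

def degree_of_metallization(metallic_fe_g, total_fe_g):
    """Metallic iron as a fraction of total iron in the reduced nugget."""
    return metallic_fe_g / total_fe_g

# Illustrative nugget: a 100 g charge containing 27 g removable oxygen,
# weighing 81.1 g after reduction; 55.2 g metallic Fe out of 70 g total Fe.
print(round(extent_of_reduction(100.0, 81.1, 27.0), 3))
print(round(degree_of_metallization(55.2, 70.0) * 100, 1))  # percent
```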

Keywords: Alternative ironmaking, coal devolatisation, extent of reduction, nugget making, syngas based DRI, solid state reduction.

PDF Downloads: 1487
582 Using Artificial Neural Network to Predict Collisions on Horizontal Tangents of 3D Two-Lane Highways

Authors: Omer F. Cansiz, Said M. Easa

Abstract:

The purpose of this study is to predict collision frequency on horizontal tangents combined with vertical curves using artificial neural network (ANN) methods. The proposed ANN models are compared with existing regression models. First, the variables that affect collision frequency were investigated. It was found that only the annual average daily traffic, section length, access density, the rate of vertical curvature, and the smaller curve radius before and after the tangent were statistically significant for the related combinations. Second, three statistical models (negative binomial, zero-inflated Poisson and zero-inflated negative binomial) were developed using the significant variables for three alignment combinations. Third, ANN models were developed by applying the same variables for each combination. The results clearly show that the ANN models have lower mean square error values than the statistical models. Similarly, the AIC values of the ANN models are smaller than those of the regression models for all the combinations. Consequently, the ANN models have better statistical performance than the regression models for estimating collision frequency. The ANN models presented in this paper are recommended for evaluating the safety impacts of 3D alignment elements on horizontal tangents.
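The abstract compares models by mean square error and AIC. A minimal sketch of one common AIC form for least-squares fits is below; the paper does not state which AIC variant it used, and the residuals here are made up, not the study's collision data.

```python
import math

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares model: n*ln(SSE/n) + 2k (a common variant)."""
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    return n * math.log(sse / n) + 2 * n_params

# Hypothetical residuals from two collision-frequency models on the same sites:
ann_residuals = [0.4, -0.2, 0.1, -0.3, 0.2, -0.1]
nb_residuals  = [0.9, -0.7, 0.5, -0.8, 0.6, -0.4]

aic_ann = aic_least_squares(ann_residuals, n_params=4)
aic_nb  = aic_least_squares(nb_residuals, n_params=3)
print(aic_ann < aic_nb)  # the smaller AIC identifies the preferred model
```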

Keywords: Collision frequency, horizontal tangent, 3D two-lane highway, negative binomial, zero inflated Poisson, artificial neural network.

PDF Downloads: 1636
581 Recovery of Metals from Electronic Waste by Physical and Chemical Recycling Processes

Authors: Muammer Kaya

Abstract:

The main purpose of this article is to provide a comprehensive review of various physical and chemical processes for electronic waste (e-waste) recycling, their advantages and shortfalls towards achieving a cleaner process of waste utilization, with special attention to the extraction of metallic values. The current status and future perspectives of waste printed circuit board (PCB) recycling are described. E-waste characterization, dismantling/disassembly methods, liberation and classification processes, and composition determination techniques are covered. Manual selective dismantling and metal-nonmetal liberation at −150 µm with two-step crushing are found to be the best. After size reduction, the physical separation/concentration processes employing gravity, electrostatic and magnetic separators, froth flotation, etc., which are commonly used in mineral processing, are critically reviewed for the separation of metals and non-metals, along with useful applications of the non-metallic materials. The recovery of metals from e-waste material after physical separation through pyrometallurgical, hydrometallurgical or biohydrometallurgical routes is also discussed, along with purification and refining, and some suitable flowsheets are given. It appears that the hydrometallurgical route will be a key player in the recovery of base and precious metals from e-waste. E-waste recycling will be a very important sector in the near future from both economic and environmental perspectives.

Keywords: E-waste, WEEE, PCB, recycling, metal recovery, hydrometallurgy, pyrometallurgy, biohydrometallurgy.

PDF Downloads: 8342
580 Application of Extreme Learning Machine Method for Time Series Analysis

Authors: Rampal Singh, S. Balasundaram

Abstract:

In this paper, we study the application of the Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural networks to non-linear chaotic time series problems. In this algorithm the input weights and the hidden layer biases are randomly chosen. The ELM formulation leads to solving a system of linear equations in terms of the unknown weights connecting the hidden layer to the output layer. The solution of this general system of linear equations is obtained using the Moore-Penrose generalized pseudoinverse. For the study of the application of the method we consider the time series generated by the Mackey-Glass delay differential equation with different time delays, the Santa Fe A series and the UCR heart beat rate ECG time series. For sigmoid, sine and hardlim activation functions, the optimal values of the memory order and the number of hidden neurons which give the best prediction performance in terms of root mean square error are determined. It is observed that the results obtained are in close agreement with the exact solutions of the problems considered, which clearly shows that ELM is a very promising alternative method for time series prediction.
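The ELM training scheme described above (random input weights and biases, output weights from a pseudoinverse) can be sketched in a few lines of NumPy. The network size, activation, and toy series below are illustrative assumptions, not the paper's Mackey-Glass or ECG setups.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100, activation=np.tanh):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = activation(X @ W + b)                    # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # solve H @ beta = y via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta

# Fit a toy quasi-periodic series using a memory order of 4 lags.
t = np.arange(300)
series = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
order = 4
X = np.array([series[i:i + order] for i in range(len(series) - order)])
y = series[order:]
W, b, beta = elm_train(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
print(round(rmse, 4))  # training RMSE of the one-shot least-squares fit
```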

Keywords: Chaotic time series, Extreme learning machine, Generalization performance.

PDF Downloads: 3519
579 Control-Oriented Enhanced Zero-Dimensional Two-Zone Combustion Modelling of Internal Combustion Engines

Authors: Razieh Arian, Hadi Adibi-Asl

Abstract:

This paper investigates efficient combustion modeling for cycle simulation studies of internal combustion engines (ICE). The term “efficient model” means that the model must generate the desired simulation results while having a fast simulation time; in other words, the efficient model is defined based on the application of the model. The objective of this study is to develop math-based models for control applications or, in short, control-oriented models. This study compares different modeling approaches used to model ICEs, such as mean-value, zero-dimensional, quasi-dimensional, and multi-dimensional models, for control applications. Mean-value models have been widely used for model-based control applications, but recently, with the development of advanced simulation tools (e.g. Maple/MapleSim), higher-order (more complex) models can also be considered as control-oriented models. This paper presents enhanced zero-dimensional cycle-by-cycle modeling and simulation of a spark ignition engine with a two-zone combustion model. The simulation results are cross-validated against simulation results from the GT-Power package and show good agreement in terms of trends and values.

Keywords: Two-zone combustion, control-oriented model, Wiebe function, internal combustion engine.

PDF Downloads: 1095
578 Evaluation of Torsional Efforts on Thermal Machine Shafts with Gas Turbine Resulting from Automatic Reclosing

Authors: Alvaro J. P. Ramos, Wellington S. Mota, Yendys S. Dantas

Abstract:

This paper analyses the torsional efforts in gas turbine-generator shafts caused by high-speed automatic reclosing of transmission lines. This issue is especially important for cases of three-phase short circuit and unsuccessful reclosing of lines in the vicinity of the thermal plant. The analysis was carried out for the thermal plant TERMOPERNAMBUCO, located in the Northeast region of Brazil. It is shown that the stress level caused by unsuccessful line reclosing can be several times higher than that of a terminal three-phase short circuit. Simulations were carried out with a detailed shaft torsional model provided by the machine manufacturer and with the “Alternative Transient Program – ATP" [1]. Unsuccessful three-phase reclosing for selected lines in the area close to the plant indicated the most critical cases. Reclosing first at the terminal next to the gas turbine generator also leads to the most critical condition. Considering that the values of transient torques are very sensitive to the instant of reclosing, simulations of unsuccessful reclosing with the ATP statistical switch were carried out to determine the most critical transient torques for each section of the generator-turbine shaft.

Keywords: Torsional efforts, thermal machine, gas turbine, automatic reclosing.

PDF Downloads: 2147
577 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have become popular. Many companies have introduced new QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, we should grasp the differences in the demographic information, usage information, and value of users between services. In this study, we analyze real-world data provided by Nomura Research Institute, including demographic data of users and usage information for two services, LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used for analyzing and interpreting such data; however, the target data contain missing values. We apply EM-algorithm NMF (EMNMF), which completes the unknown values, to understand the features of the data represented in matrix form. Moreover, for comparing the NMF analysis results of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the matrices. In this study, we combine EMNMF and DNMF and analyze the target data. As an interpretation, we show the differences in user features between LINE Pay and PayPay.

Keywords: Data science, non-negative matrix factorization, missing data, quality of services.

PDF Downloads: 453
576 Control of an Asymmetrical Design of a Pneumatically Actuated Ambidextrous Robot Hand

Authors: Emre Akyürek, Anthony Huynh, Tatiana Kalganova

Abstract:

The Ambidextrous Robot Hand is a robotic device whose purpose is to mimic the gestures of either a right or a left hand. The symmetrical behavior of its fingers allows them to bend in one way or another while keeping a compliant and anthropomorphic shape. However, in addition to the gestures they can reproduce on both sides, an asymmetrical mechanical design with a three-tendon routing has been engineered to reduce the number of actuators. As a consequence, control algorithms must be adapted to drive the ambidextrous fingers efficiently from one position to another and to include grasping features. These movements are controlled by pneumatic muscles, which are nonlinear actuators. As their elasticity constantly varies when they are actuated, the length of the pneumatic muscles and the force they provide may differ for the same value of pressurized air. The control algorithms introduced in this paper take both the fingers' asymmetrical design and the pneumatic muscles' nonlinearity into account to permit an accurate control of the Ambidextrous Robot Hand. The finger motion is achieved by combining a classic PID controller with a phase-plane switching control that turns the gain constants into dynamic values. The grasping ability is made possible by a sliding mode control that makes the fingers adapt to the shape of an object before strengthening their positions.
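The idea of a PID controller whose gains become dynamic values through phase-plane switching can be sketched as below. The single-integrator "finger" plant and all gain values are invented for illustration; they are not the robot hand's identified model or tuning.

```python
def gains(error, d_error):
    """Pick a gain set from the (error, error-rate) phase-plane region."""
    if error * d_error > 0:          # error is growing: act aggressively
        return 8.0, 0.5, 0.2
    return 3.0, 0.2, 0.1             # error is shrinking: act gently

def simulate(target=1.0, dt=0.001, steps=20000):
    pos, integ, prev_err = 0.0, 0.0, target
    for _ in range(steps):
        err = target - pos
        d_err = (err - prev_err) / dt
        kp, ki, kd = gains(err, d_err)           # dynamic PID gains
        integ += err * dt
        u = kp * err + ki * integ + kd * d_err   # PID law
        pos += u * dt                            # crude first-order actuator model
        prev_err = err
    return pos

final = simulate()
print(round(final, 3))  # position after the run, near the target
```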

Keywords: Ambidextrous hand, intelligent algorithms, nonlinear actuators, pneumatic muscles, robotics, sliding control.

PDF Downloads: 2242
575 Software Maintenance Severity Prediction for Object Oriented Systems

Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh

Abstract:

Since the majority of faults are found in a few modules, there is a need to identify the modules that are severely affected compared to others, so that proper maintenance can be done in time, especially for critical applications. Neural networks, which have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics, are sophisticated non-linear modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of the input-output relationship is not known; a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored and a comparative analysis is performed for predicting the required level of maintenance by predicting the severity level of faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
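The three comparison metrics named above (MAE, RMSE, accuracy) can be sketched as follows; the severity labels are made up for illustration, not values from the NASA dataset.

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error over paired observations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error over paired observations."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def accuracy(actual, predicted):
    """Fraction of exactly matching severity classifications."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical fault-severity levels (1-4) for eight modules:
actual    = [1, 2, 4, 3, 1, 2, 3, 4]
predicted = [1, 2, 3, 3, 1, 3, 3, 4]
print(mae(actual, predicted))       # 0.25
print(rmse(actual, predicted))      # 0.5
print(accuracy(actual, predicted))  # 0.75
```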

Keywords: Neural Network, Software faults, Software Metric.

PDF Downloads: 1575
574 Thermogravimetry Study on Pyrolysis of Various Lignocellulosic Biomass for Potential Hydrogen Production

Authors: S.S. Abdullah, S. Yusup, M.M. Ahmad, A. Ramli, L. Ismail

Abstract:

This paper aims to study the decomposition behavior in a pyrolytic environment of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace purged with 100 cc (STP) min⁻¹ of nitrogen (N2) as inert gas. The heating rate was set at 20 °C min⁻¹ and the temperature ranged from 50 to 900 °C. Hydrogen gas production during the pyrolysis was observed using an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be reactive enough in a pyrolytic environment of up to 900 °C, since pyrolysis of these biomasses starts at temperatures as low as 200 °C and the maximum weight loss is achieved at about 500 °C. Since there was not much difference in the cellulose, hemicellulose and lignin fractions among oil palm shell, oil palm frond, paddy straw and rice husk, the T-50 and R-50 values obtained are almost similar. H2 production also started rapidly at this temperature due to the decomposition of the biomass inside the TGA. Biomass with higher lignin content, such as oil palm shell, was found to have a longer duration of H2 production than materials with high cellulose and hemicellulose contents.

Keywords: biomass, decomposition, hydrogen, lignocellulosic, thermogravimetry

PDF Downloads: 2268
573 A Six-Year Case Study Evaluating the Stakeholders’ Requirements and Satisfaction in Higher Educational Establishments

Authors: Ioannis I. Angeli

Abstract:

Worldwide, and mainly in the European Union, many standards, regulations, models and systems exist for the evaluation and identification of stakeholders' requirements of individual universities and of higher education (HE) in general. All of these systems aim to measure or evaluate the universities' quality assurance systems and the services offered to the recipients of HE, mainly the students. Numerous surveys were conducted in the past, either by individual universities or by organized bodies, to identify students' satisfaction or to evaluate to what extent these requirements are fulfilled. In this paper, the main results of an ongoing six-year joint research project are presented briefly. This research is an in-depth investigation of students' satisfaction and students' personal requirements, a gap analysis between these two parameters, and a comparison of different universities. Through this research, an attempt is made to address four very important questions in higher educational establishments (HEE): (1) Are there any common requirements, parameters, good practices or questions that apply to a large number of universities and will assure that students' requirements are fulfilled? (2) To what extent do the individual programs of HEE fulfil the requirements of the stakeholders? (3) Are there any similarities in specific programs among European HEE? (4) To what extent is the knowledge acquired in a specific course program utilized in a specific country? For the execution of the research, internationally accepted questionnaires were used to evaluate to what extent students' requirements and satisfaction were fulfilled in 2012 and five years later (2017). Samples of students and universities were taken from many European universities.
The questionnaires used, the sampling method and methodology adopted, as well as the comparison tables and results, will be very valuable to any university that is willing to follow the same route and methodology or to compare the results with its own HEE. Apart from the unique methodology, valuable results are demonstrated by the four case studies. There is a great difference between students' expectations and what they are actually getting from their universities (on all parameters they are getting less). When there is a crisis or a budget cut in HEE, there is a direct impact on students. There are many differences in the subjects taught across European universities.

Keywords: Quality in higher education, students’ requirements, education standards, student’s survey, stakeholder’s requirements, Mechanical Engineering courses.

PDF Downloads: 782
572 Laser Transmission through Vegetative Material

Authors: Juliana A. Fracarolli, Adilson M. Enes, Inácio M. Dal Fabbro, Silvestre Rodrigues

Abstract:

The dynamic speckle or biospeckle is an interference phenomenon generated by the reflection of coherent light from an active surface or even from a particulate or living body surface. This phenomenon gave scientific support to a method named biospeckle, which has been employed to study seed viability, biological activity, tissue senescence, tissue water content, fruit bruising, etc. Since the method is non-invasive and yields numerical values, it can be considered for the possible automation of several processes, including selection and sorting. Based on these preliminary considerations, this research work proposed to study the interaction of a laser beam with vegetative samples by measuring the incident light intensity and the transmitted light intensity through vegetative slabs of varying thickness. Tests were carried out on fifteen slices of apple tissue in thickness groups of 4 mm, 5 mm, 18 mm and 22 mm. A 10 mW diode laser beam of 632 nm wavelength and a Samsung digital camera were employed to carry out the tests. Outgoing images were analyzed by comparing the gray gradient of a fixed image column of each image to obtain a laser penetration scale into the tissue, according to the slice thickness.

Keywords: Fruit, laser, laser transmission, vegetative tissue.

PDF Downloads: 1575
571 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data have some uncertainties due to errors by human beings/machines or other sources. These uncertainty factors limit the understanding of system component failure because of incomplete data. In such situations, we need to generalize classical methods to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS) each element is associated with a point value selected from the unit interval [0, 1], which is termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS. Instead of the point-based membership used in FS, interval-based membership is used in VS; interval-based membership is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree, because PN allow the efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.
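The interval-based membership of a vague set, and how it propagates through a simple system structure, can be sketched as below. The [t, 1 − f] representation is the standard vague-set form; the series combination by interval multiplication and the component values are illustrative assumptions, not the press unit's data or the paper's full Lambda-Tau workflow.

```python
def vague(t, f):
    """Vague membership interval [t, 1 - f]; requires t + f <= 1."""
    assert t + f <= 1.0
    return (t, 1.0 - f)

def series(*intervals):
    """Interval reliability of components in series (product of bounds)."""
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

r1 = vague(t=0.90, f=0.05)   # component 1: membership in [0.90, 0.95]
r2 = vague(t=0.85, f=0.10)   # component 2: membership in [0.85, 0.90]
lo, hi = series(r1, r2)
print(round(lo, 4), round(hi, 4))  # 0.765 0.855
```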

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

PDF Downloads: 1527
570 Discrete-time Phase and Delay Locked Loops Analyses in Tracking Mode

Authors: Jiri Sebesta

Abstract:

Phase locked loops (PLL) and delay locked loops (DLL) play an important role in establishing coherent references (the phase of the carrier and symbol timing) in digital communication systems. A fully digital receiver, including a digital carrier synchronizer and a symbol timing synchronizer, fulfils the conditions for a universal multi-mode communication receiver with the symbol rate settable over several orders of magnitude and long-term stability of the required parameters. It is therefore necessary to realize the PLL and DLL of the synchronizer in digital form and to approach these subsystems as discrete representations of an analog template. This paper performs an analysis of the discrete phase locked loop (DPLL) and the discrete delay locked loop (DDLL) and presents a technique to determine their characteristics based on the analog (continuous-time) template. The transfer response and error function are derived for a first-order discrete locked loop, and the resulting equations and graphical representations are given for a second-order one. It is shown that the spectrum translation due to sampling takes effect in the computation of the frequency characteristics for specific values of the loop parameters.
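The first-order discrete locked loop analyzed above can be sketched as a one-line update: each sample, the estimate moves toward the input by a gain-scaled phase error. The loop gain and phase offset below are illustrative values, not parameters from the paper.

```python
def track(phase_offset, loop_gain=0.1, steps=200):
    """First-order discrete locked loop tracking a constant phase offset."""
    est = 0.0
    for _ in range(steps):
        error = phase_offset - est   # linearized phase-detector output
        est += loop_gain * error     # first-order loop update: error decays
    return est                       # geometrically as (1 - loop_gain)**k

est = track(phase_offset=0.7)
print(abs(est - 0.7) < 1e-6)  # the loop locks onto the constant phase
```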

Keywords: Carrier synchronization, coherent demodulation, software defined receiver, symbol timing.

PDF Downloads: 2627
569 Development of Piezoelectric Gas Micro Pumps with the PDMS Check Valve Design

Authors: Chiang-Ho Cheng, An-Shik Yang, Hong-Yih Cheng, Ming-Yu Lai

Abstract:

This paper presents the design and fabrication of a novel piezoelectric actuator for a gas micro pump with check valves, having the advantages of miniature size, light weight and low power consumption. The micro pump is designed with eight major components, namely a stainless steel upper cover layer, a piezoelectric actuator, a stainless steel diaphragm, a PDMS chamber layer, two stainless steel channel layers with two valve seats, a PDMS check valve layer with two cantilever-type check valves and an acrylic substrate. A prototype of the gas micro pump, with a size of 52 mm × 50 mm × 5.0 mm, is fabricated by precision manufacturing. The device is designed to pump gases with the capability of performing in self-priming and bubble-tolerant work modes by maximizing the stroke volume of the membrane as well as the compression ratio via minimization of the dead volume of the micro pump chamber and channel. With the experimental apparatus, the flow rate of the micro pump and the displacement of the piezoelectric actuator were obtained simultaneously in real time. The gas micro pump achieved its highest output performance under a sinusoidal driving waveform of 250 Vpp, reaching a maximum pumping rate of 1185 ml/min and a back pressure of 7.14 kPa at frequencies of 120 Hz and 50 Hz, respectively.

Keywords: PDMS, Check valve, Micro pump, Piezoelectric.

PDF Downloads: 2026
568 Fuzzy Uncertainty Theory for Stealth Fighter Aircraft Selection in Entropic Fuzzy TOPSIS Decision Analysis Process

Authors: C. Ardil

Abstract:

The purpose of this paper is to present fuzzy TOPSIS in an entropic fuzzy environment. Due to the ambiguous concepts often represented in decision data, exact values are insufficient to model real-life situations. In this paper, the rating of each alternative is defined in fuzzy linguistic terms, which can be expressed with triangular fuzzy numbers. The weight of each criterion is then derived from the decision matrix using the entropy weighting method. Next, a vertex method is proposed to calculate the distance between two triangular fuzzy numbers. According to the TOPSIS concept, a closeness coefficient is defined to determine the ranking order of all alternatives by simultaneously calculating the distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS). Finally, an illustrative example of selecting stealth fighter aircraft is shown at the end of this article to highlight the procedure of the proposed method. Correlation analysis and validation analysis using TOPSIS, WSM, and WPM methods were performed to compare the ranking order of the alternatives.
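The vertex distance between triangular fuzzy numbers and the closeness coefficient described above can be sketched as below. These are the commonly used forms for fuzzy TOPSIS; the paper's exact variant may differ, and the alternative's rating is a made-up example.

```python
import math

def vertex_distance(m, n):
    """Vertex distance between two triangular fuzzy numbers:
    d(m, n) = sqrt((1/3) * sum of squared differences of the three vertices)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, n)) / 3.0)

def closeness(d_to_fpis, d_to_fnis):
    """Closeness coefficient CC = d- / (d+ + d-); nearer 1 means nearer the ideal."""
    return d_to_fnis / (d_to_fpis + d_to_fnis)

a = (0.6, 0.8, 1.0)          # a rated alternative (triangular fuzzy number)
fpis = (1.0, 1.0, 1.0)       # fuzzy positive-ideal solution
fnis = (0.0, 0.0, 0.0)       # fuzzy negative-ideal solution
cc = closeness(vertex_distance(a, fpis), vertex_distance(a, fnis))
print(round(cc, 3))
```

Ranking all alternatives by their closeness coefficients, largest first, yields the TOPSIS preference order.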

Keywords: stealth fighter aircraft selection, fuzzy uncertainty theory (FUT), fuzzy entropic decision (FED), fuzzy linguistic variables, triangular fuzzy numbers, multiple criteria decision making analysis, MCDMA, TOPSIS, WSM, WPM

PDF Downloads: 601
567 Laboratory Evaluation of Bacillus subtilis Bioactivity on Musca domestica (Linn) (Diptera: Muscidae) Larvae from Poultry Farms in South Western Nigeria

Authors: Funmilola O. Omoya

Abstract:

Muscid flies are known to be vectors of disease agents and include species that annoy humans and domesticated animals. An example is Musca domestica (the house fly), whose adult and immature stages occur in a variety of filthy organic substances, including household garbage and animal manures, and which contributes to the microbial contamination of foods. It is therefore imperative to control these flies because of their role in public health. The second and third instars of Musca domestica (Linn) were infected with varying cell loads of Bacillus subtilis in vitro for a period of 48 hours to evaluate its larvicidal activity. Mortality of the larvae increased with incubation period after treatment with the varying cell loads. The investigation revealed that the second instar larvae were more susceptible to treatment than the third instars. Values obtained from the third instar group were significantly different (P<0.05) from those obtained from the second instar group in all the treatments. The lethal concentration (LC50) at 24 hours for second instars was 2.35, while the LC50 at 48 hours was 4.31. This study revealed that Bacillus subtilis possesses good larvicidal potential for use in the control of Musca domestica in poultry farms.

Keywords: Bacillus subtilis, larvicidal activities, Musca domestica, poultry farms.

PDF Downloads: 2237
566 A Systems Approach to Gene Ranking from DNA Microarray Data of Cervical Cancer

Authors: Frank Emmert-Streib, Matthias Dehmer, Jing Liu, Max Mühlhäuser

Abstract:

In this paper we present a method for gene ranking from DNA microarray data. More precisely, we calculate correlation networks, which are unweighted and undirected graphs, from microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level contains the n-nearest neighbor genes, measured by the Dijkstra distance, and, hence, the tree gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to the progression of the tumor. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes the local neighborhood in the correlation networks changes most between normal and cancerous tissues. We find that the top-ranked genes are candidates suspected to be involved in tumor growth, which indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
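The local decomposition described above can be sketched as follows: threshold pairwise correlations into an unweighted graph, then read off the n-nearest neighbor genes as BFS levels (on an unweighted graph, the Dijkstra distance reduces to BFS depth). The toy gene names, correlations, and threshold are invented for illustration.

```python
from collections import deque

def bfs_levels(adj, root):
    """Map each reachable node to its shortest-path level from root."""
    levels, queue = {root: 0}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in levels:
                levels[v] = levels[u] + 1
                queue.append(v)
    return levels

genes = ["g1", "g2", "g3", "g4", "g5"]
corr = {("g1", "g2"): 0.9, ("g2", "g3"): 0.8, ("g1", "g4"): 0.2, ("g3", "g5"): 0.85}
threshold = 0.5
adj = {g: set() for g in genes}
for (a, b), c in corr.items():
    if abs(c) >= threshold:          # keep only strong correlations as edges
        adj[a].add(b)
        adj[b].add(a)
print(bfs_levels(adj, "g1"))  # g1 at level 0, g2 at 1, g3 at 2, g5 at 3; g1-g4 dropped
```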

Keywords: Graph similarity, DNA microarray data, cancer.

PDF Downloads: 1756
565 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1, 2, 4 Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting the activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) analysis method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets. The compounds were evaluated by various CoMSIA parameters to predict the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model, which was then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR models is consistent with the observation that more than half of the binding site area is occupied by steric regions.

Keywords: 3D QSAR, CoMSIA, Triazoles.

564 An Efficient Architecture for Interleaved Modular Multiplication

Authors: Ahmad M. Abdel Fattah, Ayman M. Bahaa El-Din, Hossam M.A. Fahmy

Abstract:

Modular multiplication is the basic operation in most public-key cryptosystems, such as RSA, DSA, ECC, and DH key exchange. Unfortunately, very large operands (on the order of 1024 or 2048 bits) must be used to provide sufficient security strength. The use of such big numbers dramatically slows down the whole cipher system, especially when running on embedded processors. So far, customized hardware accelerators, developed on FPGAs or ASICs, have been the best choice for accelerating modular multiplication in embedded environments. On the other hand, many algorithms have been developed to speed up such operations; examples are the Montgomery modular multiplication and the interleaved modular multiplication algorithms. Combining customized hardware with an efficient algorithm is expected to provide a much faster cipher system. This paper introduces an enhanced architecture for computing the modular multiplication of two large numbers X and Y modulo a given modulus M. The proposed design is compared with three previous architectures that depend on carry-save adders and look-up tables; the look-up tables must be loaded with a set of pre-computed values. Our proposed architecture uses the same carry-save addition, but replaces both the look-up tables and the pre-computations with an enhanced version of sign-detection techniques. The proposed architecture supports higher frequencies than the other architectures, and also achieves a better overall absolute time for a single operation.
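The interleaved algorithm that such architectures accelerate can be stated in a few lines of software. The sketch below is the textbook bit-serial form, given here as a reference model rather than the proposed carry-save/sign-detection hardware: scan X from the most significant bit, double the partial result, conditionally add Y, and reduce so the partial result always stays below M.

```python
def interleaved_modmul(x, y, m):
    """Interleaved modular multiplication of x and y modulo m.
    Invariant: the partial result is < m at the start of each step, so
    after the shift it is < 2m and after the add it is < 3m; at most two
    subtractions of m restore the invariant."""
    assert 0 <= x < m and 0 <= y < m
    result = 0
    for i in range(x.bit_length() - 1, -1, -1):
        result <<= 1                  # double (shift left by one bit)
        if (x >> i) & 1:
            result += y               # add Y when the current bit of X is set
        if result >= m:               # conditional subtraction #1
            result -= m
        if result >= m:               # conditional subtraction #2
            result -= m
    return result
```

In hardware, the additions become carry-save additions and the `result >= m` comparisons are exactly what the sign-detection logic replaces.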

Keywords: Montgomery multiplication, modular multiplication, efficient architecture, FPGA, RSA

563 Disaggregation of the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rainfall data are very important as inputs to hydrological models. Among the models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily records covering a period of around 20 years. The process was done with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rainfall datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), the usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rainfall is concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of the generated high-resolution rainfall data.
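The two-sample Kolmogorov-Smirnov comparison used to judge a disaggregated series can be sketched with SciPy. The gamma-distributed samples below are synthetic stand-ins for the observed and generated 4-hourly rainfall; the distribution and its parameters are illustrative only, not fitted to any station record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-ins for observed and disaggregated 4-hourly rainfall depths [mm]:
observed = rng.gamma(shape=0.6, scale=5.0, size=500)
simulated = rng.gamma(shape=0.6, scale=5.0, size=500)

# H0: both samples come from the same distribution. A small K-S statistic
# (the maximum distance between the empirical CDFs) together with a
# p-value above the chosen alpha means the difference is not significant.
statistic, p_value = stats.ks_2samp(observed, simulated)
print(statistic, p_value)
```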

Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov Test, rainfall.

562 Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

Authors: Elena Ezhova, Vadim Mottl, Olga Krasotkina

Abstract:

The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by a factor equal to the length of the time series. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
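For reference, the classical fixed-dimension criterion that the paper generalizes can be written, for Gaussian residuals and up to an additive constant, as AIC = n ln(RSS/n) + 2k, where k is the integer number of estimated coefficients. A minimal sketch (the data and the nested polynomial model family are illustrative, not the paper's time-varying regression setting):

```python
import numpy as np

def aic(y, y_hat, k):
    """Gaussian AIC up to an additive constant: n * ln(RSS / n) + 2k,
    where k is the number of estimated coefficients."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

# Nested model classes of growing integer dimension: polynomial fits.
# AIC trades residual fit against the 2k penalty on dimension.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
y = 2.0 * t + rng.normal(scale=0.1, size=100)

for degree in (1, 2, 8):
    coef = np.polyfit(t, y, degree)
    print(degree, round(aic(y, np.polyval(coef, t), k=degree + 1), 2))
```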

Keywords: Time varying regression, time-volatility of regression coefficients, Akaike Information Criterion (AIC), Kullback information maximization principle.

561 Study on Plasma Creation and Propagation in a Pulsed Magnetoplasmadynamic Thruster

Authors: Tony Schönherr, Kimiya Komurasaki, Georg Herdrich

Abstract:

The performance of, and the plasma created by, a pulsed magnetoplasmadynamic thruster for small-satellite applications are studied to better understand the ablation and plasma propagation processes occurring during the short-time discharge. The results can be applied to improve the quality of the thruster in terms of efficiency, and to tune the propulsion system to the needs of the satellite mission. Therefore, plasma measurements with a high-speed camera and induction probes, and performance measurements of mass bit and impulse bit, were conducted. Values for the current sheet propagation speed, mean exhaust velocity and thrust efficiency were derived from these experimental data. A maximum in current sheet propagation speed was found by the high-speed camera measurements for a medium energy input and confirmed by the induction probes. A quasi-linear tendency between the mass bit and the energy input (or, equivalently, the current action integral) was found, as well as a linear tendency between the created impulse and the discharge energy. The highest mean exhaust velocity and thrust efficiency were found for the highest energy input.
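The derived quantities follow from the measured impulse bit and mass bit by the standard pulsed-thruster relations: mean exhaust velocity c_e = I_bit / m_bit and thrust efficiency η = I_bit² / (2 m_bit E). A sketch with illustrative magnitudes (the numbers below are placeholders for a small pulsed device, not the paper's measurements):

```python
def mean_exhaust_velocity(impulse_bit, mass_bit):
    """c_e = I_bit / m_bit, in m/s."""
    return impulse_bit / mass_bit

def thrust_efficiency(impulse_bit, mass_bit, discharge_energy):
    """eta = I_bit^2 / (2 * m_bit * E): kinetic energy of the exhaust
    at the mean velocity divided by the discharge energy."""
    return impulse_bit ** 2 / (2.0 * mass_bit * discharge_energy)

# Illustrative magnitudes only (not measured data):
I_bit = 90e-6      # impulse bit [N*s]
m_bit = 6e-9       # mass bit [kg]
E = 8.0            # discharge energy [J]
print(mean_exhaust_velocity(I_bit, m_bit))   # ~15 km/s
print(thrust_efficiency(I_bit, m_bit, E))
```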

Keywords: electric propulsion, low-density plasma, pulsed magnetoplasmadynamic thruster, space engineering.

560 Acceptance and Commitment Therapy for Work Stress: Variation in Perceived Group Process and Outcomes

Authors: William H. O'Brien, Erin Bannon, M.A., Heather McCarren, Eileen Delaney

Abstract:

Employees commonly encounter unpredictable and unavoidable work-related stressors. Exposure to such stressors can evoke negative appraisals and associated adverse mental, physical, and behavioral responses. Because Acceptance and Commitment Therapy (ACT) emphasizes acceptance of unavoidable stressors and defusion from negative appraisals, it may be particularly beneficial for work stress. Forty-five workers were randomly assigned to an ACT intervention for work stress (n = 21) or a waitlist control group (n = 24). The intervention consisted of two 3-hour sessions spaced one week apart. An examination of group process and outcomes was conducted using the Revised Sessions Rating Scale. Results indicated that the ACT participants perceived the intervention to be supportive, task focused, and free of adverse therapist behaviors (e.g., feelings of being criticized or discounted). Additionally, the second session (values clarification and commitment to action) was perceived to be more supportive and task focused than the first session (mindfulness, defusion). Process ratings were correlated with outcomes: perceptions of therapy supportiveness and task focus were associated with reduced psychological distress and improved perceived physical health.

Keywords: Work stress, Acceptance and Commitment Therapy, therapy process.

559 CFD Simulations to Validate Two- and Three-Phase Up-Flow in Bubble Columns

Authors: Shyam Kumar, Nannuri Srinivasulu, Ashok Khanna

Abstract:

Bubble columns have a variety of applications in absorption, bio-reactions, catalytic slurry reactions, and coal liquefaction, because they are simple to operate, provide good heat and mass transfer, and have low operational cost. The use of Computational Fluid Dynamics (CFD) for bubble columns is important, since it can describe the fluid hydrodynamics on both local and global scales. An Euler-Euler two-phase fluid model has been used to simulate two-phase (air and water) transient up-flow in a bubble column (15 cm diameter) using FLUENT 6.3. These simulations and experiments were operated over a range of superficial gas velocities in the bubbly flow and churn-turbulent regimes (1 to 16 cm/s) at ambient conditions. The liquid velocity was varied from 0 to 16 cm/s. The turbulence in the liquid phase is described using the standard k-ε model. The interactions between the two phases are described through a drag coefficient formulation (Schiller-Naumann). The objectives are to validate the CFD simulations with experimental data and to obtain grid-independent numerical solutions. Quantitatively good agreement is obtained between the experimental hold-up data and the simulated values. Axial liquid velocity and gas holdup profiles were also obtained from the simulation.
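The Schiller-Naumann drag law used for the interphase coupling has a simple closed form; a sketch of its standard published formulation (the Re = 1000 crossover and the constants 0.15, 0.687 and 0.44 are the usual textbook values):

```python
def schiller_naumann_cd(re):
    """Schiller-Naumann drag coefficient for a sphere:
    Cd = (24 / Re) * (1 + 0.15 * Re^0.687) for Re <= 1000, else 0.44.
    Recovers the Stokes law Cd = 24 / Re as Re -> 0."""
    if re <= 0:
        raise ValueError("Reynolds number must be positive")
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.44

# Drag coefficient across the bubbly-flow range of particle Reynolds numbers:
for re in (0.1, 1.0, 100.0, 5000.0):
    print(re, schiller_naumann_cd(re))
```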

Keywords: Bubble column, Computational fluid dynamics, Gas holdup profile, k-ε model.

558 Development of EN 338 (2009) Strength Classes for Some Common Nigerian Timber Species Using the Three-Point Bending Test

Authors: Abubakar Idris, Nabade Abdullahi Muhammad

Abstract:

This work develops EN 338 strength classes for the Nigerian timber species Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii and Entandrophragma cylindricum. The specimens for the experimental measurements were obtained from the timber shed at the famous Panteka market in Kaduna, in the northern part of Nigeria. Laboratory experiments were conducted to determine the physical and mechanical properties of the selected timber species in accordance with EN 13183-1 and ASTM D193. The mechanical properties were determined using the three-point bending test. The generated properties were used to obtain the characteristic values of the material properties in accordance with EN 384. The selected timber species were then classified according to EN 338: Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii and Entandrophragma cylindricum were assigned to strength classes D40, C14, D40 and D24, respectively. Other properties, such as the tensile and compressive strengths parallel and perpendicular to the grain, the shear strength and the shear modulus, were obtained in accordance with EN 338.
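The modulus of rupture from a three-point bending test, and a 5-percentile characteristic value in the spirit of EN 384, can be sketched as follows. This is a simplified illustration: EN 384 additionally applies adjustment factors (size, moisture content, number of samples) that are omitted here, and the specimen numbers are placeholders.

```python
import numpy as np

def bending_strength(load_n, span_mm, width_mm, depth_mm):
    """Modulus of rupture in three-point bending:
    f_m = 3 * F * L / (2 * b * h^2), in N/mm^2 (MPa)."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

def characteristic_value(strengths_mpa):
    """5-percentile of the test sample; EN 384 bases the characteristic
    strength on this percentile before applying correction factors."""
    return float(np.percentile(strengths_mpa, 5))

# Illustrative specimen: 1 kN failure load, 300 mm span, 20 x 20 mm section.
f_m = bending_strength(1000.0, 300.0, 20.0, 20.0)
print(f_m)   # 56.25 MPa
```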

Keywords: Mechanical properties, Nigerian timber, strength classes, three-point bending test.

557 Molecular Dynamics Simulation for Buckling Analysis of Nanocomposite Beams

Authors: Babak Safaei, A. M. Fattahi

Abstract:

In the present study we investigate the axial buckling characteristics of nanocomposite beams reinforced with single-walled carbon nanotubes (SWCNTs). Various beam theories, including the Euler-Bernoulli, Timoshenko and Reddy beam theories, were used to analyze the buckling behavior of the carbon nanotube-reinforced composite beams. The generalized differential quadrature (GDQ) method was utilized to discretize the governing differential equations along with four commonly used boundary conditions. The material properties of the nanocomposite beams were obtained using molecular dynamics (MD) simulations corresponding to both short-(10,10)-SWCNT and long-(10,10)-SWCNT composites embedded in an amorphous polyethylene matrix. The results obtained directly from the MD simulations were then matched with those calculated by the rule of mixtures to extract appropriate values of the carbon nanotube efficiency parameters, accounting for the scale-dependent material properties. Selected numerical results are presented to indicate the influences of the nanotube volume fraction and the end supports on the critical axial buckling loads of nanocomposite beams with long- and short-nanotube composites.
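As a closed-form reference point for the Euler-Bernoulli case, the critical axial load of a uniform column is P_cr = π²EI/(kL)², with k the effective-length factor for the end supports. The sketch below uses illustrative macroscopic numbers, not the nanocomposite beam's MD-derived properties, and does not reproduce the paper's GDQ solutions for the Timoshenko and Reddy theories.

```python
import math

def euler_buckling_load(E, I, L, k=1.0):
    """Classical Euler critical axial load P_cr = pi^2 * E * I / (k * L)^2,
    with effective-length factor k (1.0 for pinned-pinned supports)."""
    return math.pi ** 2 * E * I / (k * L) ** 2

# Illustrative values: E in Pa, I in m^4, L in m (not the paper's data).
P_pinned = euler_buckling_load(E=200e9, I=1.0e-8, L=2.0)             # pinned-pinned
P_cantilever = euler_buckling_load(E=200e9, I=1.0e-8, L=2.0, k=2.0)  # fixed-free
print(P_pinned, P_cantilever)
```

The end-support influence reported in the abstract appears here as the factor k: a cantilever (k = 2) carries a quarter of the pinned-pinned critical load.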

Keywords: Nanocomposites, molecular dynamics simulation, axial buckling, generalized differential quadrature (GDQ).

556 A Numerical Study on the Seismic Performance of Built-Up Battened Columns

Authors: Sophia C. Alih, Mohammadreza Vafaei, Farnoud Rahimi Mansour, Nur Hajarul Falahi Abdul Halim

Abstract:

Built-up columns have been widely employed by practicing engineers in the design and construction of buildings and bridges. However, failures have been observed in this type of column in previous seismic events. This study analyses the performance of built-up columns with different batten configurations when they are subjected to seismic loads. Four columns with different batten sizes were simulated and subjected to three different intensities of axial load along with a lateral cyclic load. The results indicate that the size of the battens significantly influences the seismic behavior of the columns: a lower shear capacity of the battens results in a higher ultimate strength and ductility of the built-up column. It is observed that the intensity of the axial load has a significant effect on the ultimate strength of the columns, but is less influential on the yield strength. For a given drift value, the stress level at the centroid of the smaller battens is significantly higher than that of the larger battens, signifying damage concentration in the battens rather than the chords. It is concluded that designing the battens for a shear demand lower than the code-specified values only slightly reduces the initial stiffness of the columns; however, it improves the seismic performance of battened columns.

Keywords: Battened column, built-up column, cyclic behavior, seismic design, steel column.

555 Effects of Knitting Variables on Pressure Control of Tubular Compression Fabrics

Authors: Yu Shi, Rong Liu, Jingyun Lv

Abstract:

Compression textiles with an ergonomic fit and controllable pressure performance have demonstrated a positive effect on the prevention and treatment of chronic venous insufficiency (CVI). Well-designed compression textile products contribute to improving user compliance in daily application. This study explored the effects of multiple knitting variables (yarn-machinery settings) on the physical-mechanical properties and the produced pressure magnitudes of tubular compression fabrics (TCFs) through experimental testing and multiple regression modeling. The results indicated that the fabric's physical properties (stitch densities and circumference) and mechanical (tensile) properties were affected by the linear density of the inlay yarns, which, to some extent, influenced the pressure magnitudes of the TCFs. Knitting variables (e.g., the feeding velocity of the inlay yarns and the loop size settings) can alter the circumferences and tensile properties of the tubular fabrics, respectively, and significantly varied the pressure values of the TCFs. This study enhances the understanding of the effects of knitting factors on the pressure control of TCFs, thus facilitating the dimension and pressure design of compression textiles in future development.
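The multiple regression step can be sketched with ordinary least squares. The predictors, coefficients and pressure values below are synthetic placeholders chosen only to illustrate the fitting machinery, not the paper's measured yarn-machinery settings or pressures.

```python
import numpy as np

# Hypothetical predictors: inlay-yarn linear density, inlay feeding
# velocity, and loop size setting (all scaled to [0, 1] for illustration).
rng = np.random.default_rng(3)
X = rng.uniform(size=(30, 3))

# Synthetic response standing in for interface pressure [mmHg]:
true_beta = np.array([12.0, -4.0, 6.0])
pressure = 5.0 + X @ true_beta + rng.normal(scale=0.2, size=30)

# Ordinary least squares on a design matrix with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, pressure, rcond=None)
print(coef)   # intercept followed by the three regression coefficients
```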

Keywords: Laid-in knitted fabric, yarn-machinery settings, pressure magnitudes, quantitative analysis, compression textiles.

554 A Critical Study of Neural Networks Applied to the Ion-Exchange Process

Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle

Abstract:

This paper presents a critical study of the application of neural networks to the ion-exchange process. Ion exchange is a complex non-linear process involving many factors that influence the ion uptake mechanisms from the pregnant solution; the following step is the elution. Published data present empirical isotherm equations with definite shortcomings, resulting in unreliable predictions. Although the neural network simulation technique suffers from a number of disadvantages, including its "black box" nature and a limited ability to explicitly identify possible causal relationships, it has the advantage of implicitly handling complex non-linear relationships between dependent and independent variables. In the present paper, a neural network model based on the Levenberg-Marquardt back-propagation algorithm was developed using a three-layer approach, with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. This approach was used to test the effectiveness of simulating ion-exchange processes. The modeling results showed an excellent agreement between the experimental data and the predicted values of copper ions removed from aqueous solutions.
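The forward pass of the described network (one hidden layer of 11 tansig neurons, a linear purelin output) can be sketched in NumPy. The weights below are random placeholders and the input dimension is assumed; in the paper the weights are trained with Levenberg-Marquardt, a step not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network shape from the abstract: 11 tansig hidden neurons, linear output.
# Weights are random placeholders, NOT trained values; the number of
# inputs (4) is an assumption for illustration.
n_inputs, n_hidden = 4, 11
W1 = rng.normal(size=(n_hidden, n_inputs))   # input -> hidden weights
b1 = rng.normal(size=n_hidden)               # hidden biases
W2 = rng.normal(size=n_hidden)               # hidden -> output weights
b2 = 0.0                                     # output bias

def predict(x):
    hidden = np.tanh(W1 @ x + b1)    # tansig hidden layer
    return float(W2 @ hidden + b2)   # purelin (linear) output layer

print(predict(np.array([0.5, 1.0, 0.2, 0.8])))
```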

Keywords: Copper, ion-exchange process, neural networks, simulation
