Search results for: parameter identification and validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2405

215 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures

Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen

Abstract:

Superhydrophobic surfaces are abundant in nature. Several surfaces, such as butterfly wings, the legs of the water strider, the feet of the gecko and the lotus leaf, show extreme water-repellent behaviour. Self-cleaning and stain-free fabrics, spill-resistant protective wear, and drag reduction in microfluidic devices are a few applications of superhydrophobic surfaces. In order to design a robust superhydrophobic surface, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. A water contact angle as high as 159±1° and a contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing their dynamics through high-speed imaging. During drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as the contact time and the normalized spread diameter were obtained. In contrast to earlier literature reports, we observed the contact time to be dependent on the impact velocity on the superhydrophobic surface. The total contact time was split into two components, spread time and recoil time. The recoil time was found to depend on the impact velocity, while the spread time did not show much variation with the impact velocity. Further, the normalized spreading parameter was found to increase with increasing impact velocity.

Keywords: Contact angle, contact angle hysteresis, contact time, superhydrophobic.

214 Trend Analysis of Annual Total Precipitation Data in Konya

Authors: Naci Büyükkaracığan

Abstract:

Hydroclimatic observations are used in the planning of water resources projects, and climate variables are among the first values considered in such planning. The climate system is a complex and interactive system involving the atmosphere, land surfaces, snow and ice, the oceans and other bodies of water. The amount and distribution of precipitation, an important climate parameter, is a limiting environmental factor for dispersed living things. Trend analysis is applied to detect the presence of a pattern or trend in a data set, and many trend studies in different parts of the world are carried out to determine climate change. The detection and attribution of past trends and variability in climatic variables is essential for explaining potential future alteration resulting from anthropogenic activities. Parametric and non-parametric tests are used for determining trends in climatic variables. In this study, trend tests were applied to annual total precipitation data obtained for the period 1972-2012 in the Konya Basin. Non-parametric trend tests (Sen's T, Spearman's Rho, Mann-Kendall, Sen's T trend, Wald-Wolfowitz) and a parametric test (mean square) were applied to the annual total precipitation of 15 stations for trend analysis. The linear slopes (change per unit time) of the trends are calculated by using the non-parametric estimator developed by Sen, and the beginning of each trend is determined by using the Mann-Kendall rank correlation test. In addition, the homogeneity of precipitation trends is tested by using the method developed by Van Belle and Hughes. As a result of the tests, negative linear slopes were found in the annual total precipitation in Konya.
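
A minimal sketch of the Mann-Kendall test and Sen's slope estimator mentioned above is given below in Python. The precipitation series is synthetic and purely illustrative; the station data of the study are not reproduced here, and the tie correction of the Mann-Kendall variance is omitted.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic, Z score and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance without tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (change per unit time)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

# Hypothetical annual total precipitation series (mm) for 1972-2012
rng = np.random.default_rng(0)
precip = 350 - 0.8 * np.arange(41) + rng.normal(0, 40, 41)
print(mann_kendall(precip), sens_slope(precip))
```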

Keywords: Trend analysis, precipitation, hydroclimatology, Konya, Turkey.

213 Hot Deformability of Si-Steel Strips Containing Al

Authors: Mohamed Yousef, Magdy Samuel, Maha El-Meligy, Taher El-Bitar

Abstract:

The present work deals with a 2% Si-steel alloy. The alloy contains 0.05% C as well as 0.85% Al, and is intended for electrical transformer applications. A heating (expansion) - cooling (contraction) dilation investigation was executed to detect the α, α+γ and γ transformation temperatures at the inflection points of the dilation curve. On heating, primary α was detected in the temperature range between room temperature and 687 °C. The α+γ domain was detected in the range between 687 °C and 746 °C. The γ phase exists in the closed γ region between 746 °C and 1043 °C. The domain of the α phase appears again in the temperature range between 1043 °C and 1105 °C, followed by secondary α at temperatures higher than 1105 °C. A physical simulation of thermo-mechanical processing of the as-cast alloy was carried out. The simulation took the parameters of a hot flat rolling pilot plant into consideration and was executed on a thermo-mechanical simulator (Gleeble 3500). The process was designed to include seven consecutive passes: the first pass represents the roughing stage, while the remaining six passes represent the finish rolling stage. The whole process was executed in the temperature range from 1100 °C to 900 °C. The amount of strain starts at 23.5% in the roughing pass and decreases continuously to reach 7.5% at the last finishing pass. The flow curve of the alloy can be abstracted from the stress-strain curves representing the simulated passes. It shows hardening of the alloy from one pass to the next up to pass no. 6, as a result of the decreasing deformation temperature and the increasing cumulative strain. After pass no. 6, the deformation process allows dynamic recrystallization to appear, where the Zener-Hollomon parameter (Z) would be high.

Keywords: Si-steel, hot deformability, critical transformation temperature, physical simulation, thermo-mechanical processing, flow curve, dynamic softening.

212 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been recognized by most application developers because of its open-source nature and compatibility, which greatly enriches the categories of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which leads to the rapid growth of malware and brings serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of the application to a large extent. Since the current Android system does not place any restriction on the number of permissions an application can request, developers tend to apply for more permissions than are actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods consider only the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, a machine learning detection method based on actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate the methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaboostM1 (J48) classification algorithm combined with an information gain feature selection algorithm gives the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and the lowest false positive rate of 0%.
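
As a rough illustration of the classification stage described above, the sketch below trains a boosted decision-tree classifier on binary features marking actually used permissions and API calls. The feature matrix is synthetic, scikit-learn's AdaBoost over decision trees stands in for Weka's AdaboostM1 (J48), and mutual information stands in for the information gain feature selection; none of these substitutions come from the paper itself.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical binary feature matrix: one column per actually-used permission or
# sensitive API call, one row per application; y = 1 marks malware.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 120))
y = (X[:, :5].sum(axis=1) + rng.integers(0, 2, 500) > 3).astype(int)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=40),   # information-gain-style feature ranking
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),  # scikit-learn >= 1.2
                       n_estimators=100, random_state=0),
)
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```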

Keywords: Android, permissions combination, API calls, machine learning.

211 Durian Marker Kit for Durian (Durio zibethinus Murr.) Identity

Authors: Emma K. Sales

Abstract:

Durian is the flagship fruit of Mindanao, and there is an abundance of cultivars with many confusing identities/names. The project was conducted to develop a procedure for reliable and rapid detection and sorting of durian planting materials. Moreover, it also aimed to establish specific genetic or DNA markers for routine testing and authentication of durian cultivars in question. The project developed molecular procedures for routine testing, and SSR primers were screened and identified for their utility in discriminating the durian cultivars collected. The study yielded the following accomplishments: 1. twenty-nine (29) SSR primers were selected and identified based on their ability to discriminate durian cultivars; 2. a standard procedure for the identification and authentication of durian cultivars was optimized and established; 3. the genetic profile of durian is now available at the Biotech Unit. Our results demonstrate the relevance of using molecular techniques in evaluating and identifying durian clones. The most polymorphic primers tested in this study could be useful tools for detecting variation even at an early stage of the plant, especially for commercial purposes. The process developed combines the efficiency of the microsatellite development process with the optimization of a non-radioactive detection process, resulting in a user-friendly protocol that can be performed in two (2) weeks and easily incorporated into laboratories about to start microsatellite development projects. This can be of great importance for extending microsatellite analyses to other crop species where minimal genetic information is currently available. With this, the University can now serve as a service laboratory for routine testing and authentication of durian clones.

Keywords: DNA, SSR Analysis, genotype, genetic diversity, cultivars.

210 Effect of Fire Retardant Painting Product on Smoke Optical Density of Burning Natural Wood Samples

Authors: Abdullah N. Olimat, Ahmad S. Awad, Faisal M. AL-Ghathian

Abstract:

Natural wood is used in many applications in Jordan, such as furniture, partition construction and cupboards. Experimental work on the smoke produced by the combustion of certain wood samples was carried out. Smoke generated from the burning of natural wood is considered a major cause of death in furniture fires. The critical parameter for life safety in fires is the time available for escape, so the visual obscuration due to smoke released during a fire is taken into consideration. The effect of smoke produced by burning wood depends on the amount of smoke released in case of fire, and the amount of smoke production apparently affects the time available for the occupants to escape. To help protect the lives of building occupants during fire growth, fire retardant painting products were tested. The tested samples of natural wood include Beech, Ash, Beech Pine, and White Beech Pine. A smoke density chamber manufactured by Fire Testing Technology was used to measure the smoke properties, and the test procedure was carried out according to ISO 5659. The wood samples, in a horizontal orientation, were exposed to a radiant heat flux of 25 kW/m² under non-flaming conditions. The main objective of the current study is to carry out experimental tests on samples of natural wood to evaluate the capability to escape in case of fire and the fire safety requirements. Specific optical density, transmittance, thermal conductivity and mass loss are the main measured parameters. Comparisons between painted and unpainted samples are also carried out for the selected woods.

Keywords: Optical density, specific optical density, transmittance, visibility.

209 Examining the Perceived Usefulness of ICTs for Learning about Indigenous Foods

Authors: K. M. Ngcobo, S. D. Eyono Obono

Abstract:

Science and technology have a major impact on many societal domains such as communication, medicine, food and transportation. However, this dominance of modern technology can have a negative, unintended impact on indigenous systems, and in particular on indigenous foods. This problem serves as the motivation for this study, whose aim is to examine the perceptions of learners on the usefulness of Information and Communication Technologies (ICTs) for learning about indigenous foods. This aim is subdivided into two types of research objectives. The design and identification of theories and models is achieved using literature content analysis. The empirical testing of such theories and models is achieved through a survey of Hospitality Studies learners from different schools in the iLembe and Umgungundlovu Districts of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyze the data collected by the questionnaire of this survey, using descriptive statistics and Pearson correlations, after assessment of the validity and reliability of the data. The main hypothesis behind this study is that there is a connection between the demographics of learners, their perceptions on the usefulness of ICTs for learning about indigenous foods, and the following personality and e-learning related theory constructs: computer self-efficacy, trust in ICT systems, and conscientiousness, as suggested by existing studies on learning theories. This hypothesis was fully confirmed by the survey conducted in this study, except for the demographic factors, where gender and age were not found to be determinant factors of learners' perceptions on the usefulness of ICTs for learning about indigenous foods.

Keywords: E-learning, Indigenous Foods, Information and Communication Technologies, Learning Theories, Personality.

208 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, but classifying and identifying objects manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Although many machine-learning algorithms exist, classification is done here using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing purposes; the attributes were generated by considering pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques on image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
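
The sketch below illustrates, under stated assumptions, the idea of combining a pixel's intensity with the mean of its neighborhood as features for an SVM classifier. The single-band image, labels and window size are hypothetical placeholders; the paper's actual attribute set and training data are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def pixel_features(band, window=3):
    """Stack each pixel's intensity with the mean of its neighborhood."""
    local_mean = uniform_filter(band.astype(float), size=window)
    return np.column_stack([band.ravel(), local_mean.ravel()])

# Hypothetical single-band reflectance image and sparse training labels
rng = np.random.default_rng(1)
band = rng.random((100, 100))
labels = np.full(band.size, -1)                   # -1 = unlabeled pixel
train_idx = rng.choice(band.size, 200, replace=False)
labels[train_idx] = (band.ravel()[train_idx] > 0.5).astype(int)  # e.g. 1 = water, 0 = land

X = pixel_features(band)
clf = SVC(kernel="rbf", gamma="scale").fit(X[train_idx], labels[train_idx])
classified = clf.predict(X).reshape(band.shape)   # full-scene classification map
print(classified.sum(), "pixels labeled as class 1")
```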

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.

207 Linear Prediction System in Measuring Glucose Level in Blood

Authors: Intan Maisarah Abd Rahim, Herlina Abdul Rahim, Rashidah Ghazali

Abstract:

Diabetes is a medical condition that can lead to various diseases such as stroke, heart disease, blindness and obesity. In clinical practice, the concern of diabetic patients towards blood glucose examination is rather alarming, as some individuals describe it as painful, with pinpricks and pinches. For patients with a high glucose level, pricking the fingers multiple times a day with a conventional glucose meter for close monitoring can be tiresome, time consuming and painful. With these concerns, several non-invasive techniques have been used by researchers to measure the blood glucose level, including ultrasonic sensors, multisensory systems, absorbance or transmittance measurements, bio-impedance, voltage intensity, and thermography. This paper discusses the application of near-infrared (NIR) spectroscopy as a non-invasive method of measuring the glucose level, and the implementation of a linear system identification model in predicting the output data of the NIR measurement. In this study, the wavelengths considered are 1450 nm and 1950 nm; both wavelengths showed the most reliable information on the presence of glucose in blood. A linear Autoregressive Moving Average with Exogenous input (ARMAX) model, with both unregularized and regularized methods, was then implemented to predict the output of the NIR measurement in order to investigate the practicality of a linear system in this study. However, the result showed only 50.11% accuracy, which is far from satisfactory.
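
To make the system identification step concrete, the sketch below fits a plain ARX model (a simplification of the ARMAX structure mentioned above, without the moving-average noise term) by regularized least squares. The input/output series are synthetic stand-ins for the NIR absorbance and glucose readings, and the model orders and ridge weight are arbitrary assumptions.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2, ridge=0.0):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] (optional ridge term)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    Phi, Y = np.array(rows), np.array(targets)
    # Regularized normal equations: (Phi^T Phi + ridge*I) theta = Phi^T Y
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y)

# Hypothetical input/output records standing in for NIR absorbance vs. glucose level
rng = np.random.default_rng(0)
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.8 * u[k - 1] + 0.05 * rng.normal()

print(fit_arx(y, u, ridge=1e-3))  # estimated [a1, a2, b1, b2]
```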

Keywords: Diabetes, glucose level, linear, near-infrared (NIR), non-invasive, prediction system.

206 Assessment of Conventional Drinking Water Treatment Plants as Removal Systems of Virulent Microsporidia

Authors: M. A. Gad, A. Z. Al-Herrawy

Abstract:

Microsporidia comprise various pathogenic species that can infect humans by means of water; moreover, chlorine disinfection of drinking water has limitations against this protozoan pathogen. A total of 48 water samples were collected from two drinking water treatment plants with two different filtration systems (a slow sand filter and a rapid sand filter) over a one-year period. Samples were collected from the inlet and outlet of each plant and separately filtered through a nitrocellulose membrane (142 mm, 0.45 µm), then eluted and centrifuged. The pellet obtained from each sample was subjected to DNA extraction and then amplification using a genus-specific primer for microsporidia. Each microsporidia-PCR-positive sample was analyzed with two species-specific primers for Enterocytozoon bieneusi and Encephalitozoon intestinalis. The results of the present study showed that the percentage removal of microsporidia through the different treatment processes reached its highest rate in the plant using slow sand filters (100%), while the removal by the rapid sand filter system was 81.8%. Statistically, the two drinking water treatment plants (slow and rapid) had a significant effect on the removal of microsporidia. Molecular identification of the microsporidia-PCR-positive samples using the two primers for Enterocytozoon bieneusi and Encephalitozoon intestinalis showed the presence of both species in the inlet water of the two plants, while Encephalitozoon intestinalis was detected in the outlet water only. In conclusion, the appearance of virulent microsporidia in treated drinking water may pose a potential health threat.

Keywords: Removal, efficacy, microsporidia, drinking water treatment plants, PCR.

205 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered in a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results, and the best of each technique was then compared. Initially the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for the non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest false positive rate (0.07%). All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
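
The comparison described above can be sketched as follows. The data set is synthetic (the bank records are confidential and not reproduced), the RBF network has no direct scikit-learn equivalent and is omitted, and the parameter settings are placeholders; the sketch only shows the one-against-all evaluation loop over MLP, logistic regression and SVM.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Hypothetical stand-in for the 5,432-client data set with 15 attributes and 3 classes
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}
for name, base in models.items():
    clf = make_pipeline(StandardScaler(), OneVsRestClassifier(base)).fit(Xtr, ytr)
    print(name, f"accuracy={clf.score(Xte, yte):.3f}")
    print(confusion_matrix(yte, clf.predict(Xte)))  # per-class TP/FP/FN/TN can be read off here
```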

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

204 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format for the expert's institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all researched formats. The goal is to suggest a particular file format, based on this vocabulary, for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
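
A minimal naive Bayes sketch of the recommendation step is shown below. The format descriptions, labels and query are invented placeholders for the aggregated knowledge base, and a bag-of-words multinomial model is assumed; the paper's actual vocabulary and probability calculation are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical snippets from a format-specification knowledge base (labels are format names)
descriptions = [
    "lossless raster image, open specification, wide tool support",
    "compressed raster image, lossy, patent free, web delivery",
    "page description, fixed layout, embedded fonts, archival profile",
    "plain text, line oriented, character encoded, human readable",
]
formats = ["PNG", "JPEG", "PDF/A", "TXT"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, formats)

# An expert's unstructured requirement text; the system suggests a format with probabilities
query = ["fixed layout document with embedded fonts for long term archival"]
print(model.predict(query)[0])
print(dict(zip(model.classes_, model.predict_proba(query)[0].round(3))))
```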

Keywords: Data mining, digital libraries, digital preservation, file format.

203 Pollution and Water Quality of the Beshar River

Authors: Fardin Boustani, Mohammah Hosein Hojati

Abstract:

The Beshar River is an aquatic ecosystem affected by pollutants. This study was conducted to evaluate the effects of human activities on the water quality of the Beshar River. The river is approximately 190 km in length, situated between the geographical positions 51° 20' to 51° 48' E and 30° 18' to 30° 52' N, and is one of the most important aquatic ecosystems of Kohkiloye and Boyerahmad province, next to the city of Yasuj in southern Iran. The Beshar River has been contaminated by industrial, agricultural and other activities in this region, such as factories, hospitals, agricultural farms, urban surface runoff and the effluent of wastewater treatment plants. In order to evaluate the effects of these pollutants on the quality of the Beshar River, five monitoring stations were selected along its course. The first station is located upstream of Yasuj near the Dehnow village; stations 2 to 4 are located east, south and west of the city; and the fifth station is located downstream of Yasuj. Several water quality parameters were sampled, including pH, dissolved oxygen (DO), biological oxygen demand (BOD), temperature, conductivity, turbidity, total dissolved solids (TDS) and discharge (flow). Water samples from the five stations were collected and analysed to determine the following physicochemical parameters during 2008 to 2009: EC, pH, TDS, TH, NO2, DO, BOD5 and COD. The study shows that the BOD5 value is at a minimum at station 1 (1.5 ppm), increases downstream through stations 2 to 4 to a maximum (7.2 ppm), and then decreases at station 5. The DO value is at a maximum at station 1 (9.55 ppm), decreases downstream through stations 2 to 4 to a minimum (3.4 ppm), and increases again at station 5. The amounts of BOD and TDS are highest and the amount of DO is lowest at the fourth station, marking it as more heavily polluted than the other stations. The physicochemical parameters improve at the fifth station due to pollutant degradation and dilution. Finally, the point and non-point pollutant sources of the Beshar River were determined and compared with the monitoring results.

Keywords: Beshar River, physicochemical parameters, water pollution, Yasuj.

202 sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

The surface electromyographic (sEMG) signal has the potential to identify human activities and intention, and this potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a cost-efficient multichannel sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower-limb muscles for three locomotion modes, namely plane walking (PW), stair ascending (SA) and stair descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to other existing feature vectors. The SVM classifier outperformed the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.

Keywords: Classifiers, feature selection, locomotion, sEMG.

201 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality compared to the cork provided by other Quercus trees. This study aims to optimize the alkaline-catalysis liquefaction conditions with regard to several parameters. To better understand the chemical characteristics of Quercus cerris bark, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacketed reactor heated with oil, using glycerol and a glycerol/ethylene glycol mixture as solvents and potassium hydroxide as the catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency of the first experimental runs, a study was made of different washing techniques after the filtration step, using methanol and methanol/water. The chemical analysis showed that Quercus cerris bark is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the highest yields were obtained using a mixture of methanol/ethylene glycol as reagents with a time and temperature of 120 minutes and 200 °C, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even though this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which proves that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.

Keywords: Liquefaction, alkaline catalysis, optimization, Quercus cerris bark.

200 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronic world demand novel, compact, simple-in-design, low-cost and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered are radial basis, generalized regression, linear layer, cascade forward backpropagation, feed forward backpropagation, feed forward distributed time delay, layer recurrent and Elman backpropagation networks. Linear, logistic sigmoid, tangent sigmoid and radial basis (Gaussian) functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with a MARD in the range of ±1.81%.
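
A generalized regression neural network of the kind named above can be sketched as Gaussian-kernel (Nadaraya-Watson) regression whose kernel width plays the role of the spread constant, with MARD as the error measure. The filling-ratio/heat-input data below are synthetic placeholders; the experimental CLPHP data used in the paper are not reproduced, and the lack of input scaling is a simplification.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=4.8):
    """Generalized regression NN: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return (w @ y_train) / w.sum(axis=1)

def mard(y_true, y_pred):
    """Mean Absolute Relative Deviation, in percent."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# Hypothetical data: inputs = [filling ratio (%), heat input (W)], target = thermal resistance (K/W)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(20, 80, 60), rng.uniform(10, 100, 60)])
R = 2.0 / np.sqrt(X[:, 1]) + 0.01 * np.abs(X[:, 0] - 50) + 0.02 * rng.normal(size=60)

pred = grnn_predict(X, R, X, spread=4.8)
print(f"MARD = {mard(R, pred):.2f} %")
```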

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.

199 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, and the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer were used to characterize the modified surface layer.
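
A second-order response surface of the kind used above can be fitted as shown in the sketch below. The coded design points and synthetic hardness response are placeholders under stated assumptions; the paper's actual regression coefficients and central composite design runs are not reproduced.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical coded design points: [rolling force, ball diameter, initial roughness, passes]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))                   # coded factor levels of a CCD-style run
hardness = (250 + 25 * X[:, 0] + 15 * X[:, 1] - 5 * X[:, 0] * X[:, 1]
            - 8 * X[:, 0] ** 2 + rng.normal(0, 2, 30))  # synthetic response (HV)

# Full quadratic response surface: linear, interaction and squared terms fitted by least squares
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, hardness)
print("R^2 =", round(rsm.score(X, hardness), 3))

# Predict the response at a chosen parameter combination (coded units)
print(rsm.predict(np.array([[1.0, 0.5, 0.0, -0.5]])))
```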

Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.

198 Analyzing Microblogs: Exploring the Psychology of Political Leanings

Authors: Meaghan Bowman

Abstract:

Microblogging has become increasingly popular for commenting on current events, spreading gossip and encouraging individualism, which favors its low-context communication channel. These social media (SM) platforms allow users to express opinions while interacting with a wide range of populations. Hashtags allow immediate identification of like-minded individuals worldwide on a vast array of topics. The output of the analytic tool Linguistic Inquiry and Word Count (LIWC), a program that associates psychological meaning with the frequency of use of specific words, may suggest the nature of individuals' internal states and general sentiments. When applied to groupings of SM posts unified by a hashtag, such information can be helpful to community leaders during periods in which the forming of public opinion happens in parallel with the unfolding of political, economic or social events. This is especially true when the outcome stands to impact the well-being of the group. Here, we applied the online tools Google Translate and the University of Texas's LIWC to a 90-posting sample from a corpus of Colombian Spanish microblogs. On translated disjoint sets, identified by hashtag as being authored by advocates of voting "No", advocates of voting "Yes", and entities refraining from hashtag use, we observed the value of LIWC's Tone feature in distinguishing among the categories, and found the word "peace" to carry particular significance due to its frequency of use in the data.

Keywords: Colombia peace referendum, FARC, hashtags, linguistics, microblogging, social media.

197 Structural Damage Detection via Incomplete Modal Data Using Output Data Only

Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures, and many researchers focus on obtaining efficient tools to detect such damage at an early stage. In the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using a 'Two Points Condensation (TPC) technique', which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices; large percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.

Keywords: Damage detection, two points–condensation, structural health monitoring, signals processing, optimization.

196 Identification of Critical Success Factors in Non-Formal Service Sector Using Delphi Technique

Authors: Amol A. Talankar, Prakash Verma, Nitin Seth

Abstract:

The purpose of this study is to identify the critical success factors (CSFs) for the effective implementation of Six Sigma in non-formal service sectors.

Based on a survey of the literature, the critical success factors (CSFs) for Six Sigma have been identified and assessed for their importance in the non-formal service sector using the Delphi technique. These selected CSFs were put forth to a panel of experts to cluster them and prepare a cognitive map to establish their relationships.

All the critical success factors examined and obtained from the review of the literature have been assessed for their importance with respect to their contribution to Six Sigma effectiveness in the non-formal service sector.

The study is limited to non-formal service sectors involved in the organization of religious festivals only. However, a similar exercise can be conducted for a broader sample of other non-formal service sectors, such as temple/ashram management, religious tour management, etc.

The research suggests an approach to identify the CSFs of Six Sigma for the non-formal service sector. Not all CSFs of the formal service sector are applicable to non-formal services, hence the opinion of experts was sought to add or delete CSFs. In the first round of Delphi, the panel of experts suggested two new CSFs, "competitive benchmarking (F19)" and "residents' involvement (F28)", which were added for assessment in the next round of Delphi. One of the CSFs, "full-time Six Sigma personnel (F15)", has been omitted from the proposed clusters of CSFs for non-formal organizations, as it is practically impossible to deploy full-time trained Six Sigma recruits.

Keywords: Critical success factors (CSFs), Quality assurance, non-formal service sectors, Six Sigma.

195 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element model updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm based on the movement of ideal gas molecules, which disperse rapidly in different directions and cover all the available space; this behaviour is embedded in the high speed of the molecules and in the collisions between them and with the surrounding barriers. In the IGMM algorithm, to reach the optimal solutions, the initial population of gas molecules is randomly generated and the governing equations related to the velocity of the gas molecules and the collisions between them are utilized. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.

Keywords: Enhanced ideal gas molecular movement, ideal gas molecular movement, model updating method, probability-based damage detection, uncertainty quantification.

194 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination

Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini

Abstract:

This research work aims to develop a system that will analyze and identify students who indulge in malpractice or suspicious activities during an academic offline examination. Automated video surveillance provides an optimal solution that helps in monitoring the students and identifying a malpractice event immediately. The work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method, cross-checking the student's profile against the database. The presence or absence of the student is also determined in this module by implementing an image registration technique, wherein a grid is formed by considering all the images registered by the frontal camera at the determined positions. The second module detects facial malpractices in which a student engages in conversation with another or tries to obtain unauthorized information, based on a threshold range evaluated from the state of his/her mouth, whether open or closed. The third module deals with the identification of unauthorized material or gadgets used in the examination hall by training positive samples of the object through various stages. Here, a top-view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activity is identified, thereby reducing the error rate caused by manual monitoring. This work is an improvement over our previously published work on identifying suspicious activities of examinees in an offline examination.
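
The impersonation check in the first module can be sketched as an eigenfaces-style pipeline: PCA projection of enrolled face images followed by nearest-neighbour matching against the claimed identity. The gallery images and identities below are synthetic placeholders; the paper's actual database, camera setup and PCA configuration are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical enrolled face images (flattened grayscale), several per registered student
rng = np.random.default_rng(0)
n_students, img_size = 30, 64 * 64
gallery = rng.random((n_students * 3, img_size))          # 3 enrolment images per student
student_ids = np.repeat(np.arange(n_students), 3)

# Eigenfaces: project into a low-dimensional PCA subspace, then nearest-neighbour matching
recognizer = make_pipeline(PCA(n_components=40, whiten=True),
                           KNeighborsClassifier(n_neighbors=1))
recognizer.fit(gallery, student_ids)

probe = gallery[10] + 0.05 * rng.random(img_size)         # frame captured during the exam
claimed_id = student_ids[10]
predicted_id = recognizer.predict(probe.reshape(1, -1))[0]
print("impersonation suspected" if predicted_id != claimed_id else "identity verified")
```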

Keywords: Impersonation, image registration, incrimination, object detection, threshold evaluation.

193 Method of Estimating Absolute Entropy of Municipal Solid Waste

Authors: Francis Chinweuba Eboh, Peter Ahlström, Tobias Richards

Abstract:

Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process, and the identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of a pure substance, known as absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur and chlorine on a dry ash-free basis (daf) is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies, which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation obtained for the standard entropy of the complete waste mixture is s°MSW = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl (kJ K⁻¹ kg⁻¹), and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1% and 0.0% ≤ Cl ≤ 89.7%. The model is also applicable to the efficient modelling of a combustion system in a waste-to-energy plant.
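
Expressed as code, the reported correlation is a one-line function of the daf elemental mass percentages; the example composition below is hypothetical.

```python
def msw_absolute_entropy(C, H, O, N, S, Cl):
    """Absolute entropy of MSW, kJ/(K*kg) on a dry ash-free basis, from the reported correlation."""
    return (0.0101 * C + 0.0630 * H + 0.0106 * O +
            0.0108 * N + 0.0155 * S + 0.0084 * Cl)

# Hypothetical daf composition (mass %) of a mixed waste sample
print(round(msw_absolute_entropy(C=48.0, H=6.5, O=40.0, N=1.2, S=0.3, Cl=0.5), 3))
```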

Keywords: Absolute entropy, irreversibility, municipal solid waste, waste-to-energy.

192 ICT for Smart Appliances: Current Technology and Identification of Future ICT Trend

Authors: Abubakar Uba Ibrahim, Ibrahim Haruna Shanono

Abstract:

Smart metering and demand response are gaining ground in industrial and residential applications, and smart appliances have received attention as a way of achieving the smart home. The success of smart grid development relies on the successful implementation of Information and Communication Technology (ICT) in the power sector. Smart appliances have been under development, and many new contributions to their realization have been reported in the last few years. The role of ICT here is to capture data in real time, thereby allowing a bi-directional flow of information between the producing and utilization points; this leads the way to smart appliances, where home appliances can communicate among themselves and control themselves (switch on and off) using the signal (information) obtained from the grid. This paper describes the background of ICT for smart appliances, paying particular attention to current technology and identifying future ICT trends for load monitoring, through which smart appliances can be achieved to facilitate an efficient smart home system that promotes demand response programs. The paper groups and reviews the recent contributions in order to establish the current state of the art and trends of the technology, so that the reader is provided with a comprehensive and insightful review of where ICT for smart appliances stands and is heading. The paper also presents a brief overview of communication types, and then narrows the discussion to load monitoring (Non-Intrusive Appliance Load Monitoring, NALM). Finally, some future trends and challenges in the further development of the ICT framework are discussed to motivate future contributions that address open problems and explore new possibilities.

Keywords: Communication technology between appliances, demand response, load monitoring, smart appliances and smart grid.

191 Estimation of Individual Power of Noise Sources Operating Simultaneously

Authors: Pankaj Chandna, Surinder Deswal, Arunesh Chandra, SK Sharma

Abstract:

Noise has adverse effects on human health and comfort. Noise not only causes hearing impairment, but also acts as a causal factor for stress and raised systolic pressure. Additionally, it can be a causal factor in work accidents, both by masking hazards and warning signals and by impeding concentration. Industry workers also suffer psychological and physical stress as well as hearing loss due to industrial noise. This paper proposes an approach that enables engineers to point out quantitatively the noisiest source for modification while multiple machines are operating simultaneously. A model with point sources and spherical radiation in a free field was adopted to formulate the problem. The procedure works very well in ideal cases (point source and free field). However, most industrial noise problems are complicated by the fact that the noise is confined in a room: reflections from the walls, floor, ceiling and equipment create a reverberant sound field that alters the sound wave characteristics from those of the free field. The model was therefore validated for a relatively low-absorption room at the NIT Kurukshetra Central Workshop. The validation results showed that the estimated sound powers of the noise sources under simultaneous operation were on the lower side, within error limits of 3.56-6.35%, suggesting that the methodology is suitable for practical implementation in industry. To demonstrate the application of the analytical procedure for estimating the sound power of noise sources under simultaneous operating conditions, a manufacturing facility (Railway Workshop at Yamunanagar, India) with five sound sources (machines) on its workshop floor is considered in this study. The findings of the case study identified the two most effective candidates (noise sources) for noise control in the Railway Workshop, Yamunanagar. The study suggests that modification of the design and/or replacement of these two identified noisiest sources (machines) would be necessary to achieve an effective reduction in noise levels. Further, the estimated data allow engineers to better understand the noise situation of the workplace and to revise the noise map when changes in noise level occur due to a workplace re-layout.
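
The abstract does not give the estimation procedure in detail, so the sketch below is only one plausible reading: assuming point sources with spherical free-field radiation and known source-microphone distances, the individual sound powers are recovered from simultaneous level measurements by non-negative least squares. All distances, levels and the free-field assumption are placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import nnls

I_REF = 1e-12  # reference intensity, W/m^2

def estimate_source_powers(distances, levels_db):
    """Estimate individual sound powers W_i (watts) from simultaneous level readings.

    Assumes point sources with spherical free-field radiation, so the intensity at
    microphone j is sum_i W_i / (4*pi*r_ij^2); solved by non-negative least squares.
    """
    A = 1.0 / (4.0 * np.pi * distances ** 2)       # intensity per unit source power
    I_meas = I_REF * 10 ** (np.asarray(levels_db) / 10.0)
    powers, _ = nnls(A, I_meas)
    return powers

# Hypothetical layout: 8 microphone positions, 5 machines, distances in metres
rng = np.random.default_rng(0)
r = rng.uniform(2.0, 15.0, size=(8, 5))
true_W = np.array([5e-4, 2e-3, 8e-4, 1e-2, 3e-4])  # watts
levels = 10 * np.log10((1.0 / (4 * np.pi * r ** 2)) @ true_W / I_REF)

print(estimate_source_powers(r, levels))           # compare with true_W
```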

Keywords: Industrial noise, sound power level, multiple noise sources, sources contribution.

190 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to the increase in the mortality rate in Malaysia each year. There are two main categories of leukaemia, acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably; therefore, if the identification of acute leukaemia cells could be done quickly and effectively, proper treatment and medicine could be delivered. Due to the requirement of prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. In order to obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means and moving k-means, were applied to the saturation component image. Then, a median filter and a seeded region growing area extraction algorithm were applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms are made in order to measure the performance of each on segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produced the fully segmented blast region in acute leukaemia images. Hence, the resulting images could be helpful to haematologists for further analysis of acute leukaemia.
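
A compact sketch of the clustering step is shown below using standard k-means on the saturation channel (the moving k-means variant that performed best in the study has no off-the-shelf implementation and is not reproduced); the input image, cluster count and filter size are illustrative assumptions.

```python
import numpy as np
from skimage.color import rgb2hsv
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def segment_blast(rgb_image, n_clusters=3):
    """Cluster the saturation channel with k-means and return the most saturated cluster as the blast mask."""
    saturation = rgb2hsv(rgb_image)[:, :, 1]
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(saturation.reshape(-1, 1)).reshape(saturation.shape)
    # Pick the cluster with the highest mean saturation as the candidate blast region
    blast_cluster = max(range(n_clusters), key=lambda c: saturation[labels == c].mean())
    mask = (labels == blast_cluster).astype(np.uint8)
    return median_filter(mask, size=5)      # smooth the segmented region

# Hypothetical stained blood-smear image (replace with a real acute leukaemia image)
rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))
print(segment_blast(image).sum(), "pixels flagged as blast")
```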

Keywords: Acute Leukaemia Images, Clustering Algorithms, Image Segmentation, Moving k-Means.

189 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Geryes Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be considered as an operation in two phases: (1) observation/measuring, that is, the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to some processing center (e.g. fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform some critical functions, such as the localization of underwater moving objects, and these functions can be orchestrated with other services (e.g. military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step, via models and a simulation approach, to consolidate the system design.

Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.

188 Household Indebtedness Risks in the Czech Republic

Authors: Jindřiška Šedová

Abstract:

In the past 20 years the economy of the Czech Republic has experienced substantial changes. In the 1990s the development was affected by the transformation which sought to establish the right conditions for privatization and the creation of elementary market relations. In the last decade the characteristic elements, such as private ownership and the corresponding institutional framework, have been strengthened. This development was marked by the accession of the Czech Republic to the EU. The Czech Republic is striving to reduce the difference between its level of economic development and the quality of its institutional framework in comparison with other developed countries. The process of finding adequate solutions has been hampered by the negative impact of the world financial crisis on the Czech Republic and the standard of living of its inhabitants. This contribution seeks to address the question of whether, and to what extent, the economic development of the transitive Czech economy is affected by changes in the behaviour of households and their propensity to consume, i.e. by a reduction or increase in demand for goods and services. It aims to verify whether the increasing trend of household indebtedness and the decreasing trend of saving pose a significant risk in the Czech Republic. At a general level, the analysis aims to contribute to answering the question of whether the debt increase of Czech households is connected with the risk of "eating through" the borrowed money and whether Czech households risk falling into a debt trap. In addition to household indebtedness risks in the Czech Republic, the analysis focuses on the identification of specifics of the transformation phase of the Czech economy in comparison with the EU countries, or selected OECD countries.

Keywords: household indebtedness, household consumption, credits, financial literacy

187 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas

Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi

Abstract:

In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER daytime and nighttime scenes were used to calculate land surface temperature. We employed the emissivity normalization algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using the land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database, and the YNP hot springs and geysers were located within the areas identified as anomalous. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
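
The anomaly-flagging rule described above reduces to a few lines of array arithmetic; in the sketch below the land surface temperature and insolation-modeled temperature grids are synthetic placeholders rather than ASTER-derived values.

```python
import numpy as np

def thermal_anomalies(lst, modeled):
    """Flag statistical thermal anomalies from LST and insolation-modeled temperature.

    Returns an integer map: 2 where the residual exceeds 2 sigma, 1 where it lies
    between 1 and 2 sigma, 0 elsewhere.
    """
    residual = lst - modeled                     # temperature not explained by solar heating
    z = (residual - np.nanmean(residual)) / np.nanstd(residual)
    classes = np.zeros_like(z, dtype=int)
    classes[(z > 1) & (z <= 2)] = 1
    classes[z > 2] = 2
    return classes

# Hypothetical 200x200 scene (kelvin): background plus a small geothermal hot spot
rng = np.random.default_rng(0)
modeled = 280 + rng.normal(0, 1.5, (200, 200))
lst = modeled + rng.normal(0, 1.0, (200, 200))
lst[90:100, 90:100] += 12.0                      # injected geothermal anomaly
print((thermal_anomalies(lst, modeled) == 2).sum(), "pixels above 2 sigma")
```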

Keywords: Thermal remote sensing, insolation model, land surface temperature, geothermal anomalies.

186 Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph

Authors: A. Badoud, M. Khemliche, S. Latreche

Abstract:

The main objective of this paper is to find a graphical technique for the modeling, simulation and diagnosis of industrial systems. Its importance is most apparent for a complex system such as a pressurized water nuclear reactor, with its several forms of non-linearity and its multiple time scales. In such a case the analytical approach is heavy and does not give a quick idea of the evolution of the system. The bond graph tool enabled us to transform the analytical model into a graphical model, and the simulation software SYMBOLS 2000, specific to bond graphs, made it possible to validate the model and obtain the results given by the technical specifications. We introduce the analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and we show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems prove to be very complex as soon as the systems considered are no longer elementary. Faced with this complexity, we chose to resort to the Fault Detection and Isolation (FDI) method, through the analysis of the control problem, and to design a reliable diagnosis system that makes it possible to apprehend complex, spatially distributed dynamic systems, applied to the standard pressurized water nuclear reactor.

Keywords: Bond Graph, Modeling, Simulation, Monitoring, Analytical Redundancy Relations, Pressurized Water Reactor, Directed Graph.
