Search results for: work values.
1104 Assertion-Driven Test Repair Based on Priority Criteria
Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang
Abstract:
Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated repair technique with intent preservation has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserving test repair approach, and that our intent-oriented priority criteria work well.
Keywords: Test repair, test intent, software test, test case evolution.
1103 Reconsidering the Legitimacy of Capital Punishment in the Interpretation of the Human Right to Life in the Two Traditional Approaches
Authors: Yujie Zhang
Abstract:
There are debates around the legitimacy of capital punishment, i.e., whether death can serve as a proper punishment in our legal system or not. Different arguments have been raised. However, none of them seems able to provide a determinate answer to the issue, which results in a lack of instruction in legal practice. This article therefore devotes itself to the effort to find such an answer. It takes the perspective of rights, interpreting the concept of the right to life, with which capital punishment appears to be in conflict, in the two traditional approaches, to reveal a possibly best account of the right and its conclusion on capital punishment. However, this effort is not a normative one focused on what ought to be. That is, the article does not try to work out which argument we should choose, nor to settle the heated debate on whether capital punishment should be allowed. It also does not propose which perspective we should take to approach this issue or, more generally, which account of the right must be better; rather, it is more of a thought experiment. It attempts to raise a new perspective on the issue of the legitimacy of capital punishment. Both its perspective and conclusion are therefore tentative: what if we view this issue in a way we have never tried before, for example through the different accounts of the right to life? In this sense, the perspective could be challenged, and the conclusion could be rejected. Other perspectives and conclusions are also possible. Notwithstanding, this tentative perspective and account of the right can still serve as a potential approach, since it does have the ability to provide us with a determinate attitude toward capital punishment that is hard to achieve through existing arguments.
Keywords: Capital punishment, right to life, theories of rights, the choice theory.
1102 The Acceptance of E-Assessment Considering Security Perspective: Work in Progress
Authors: Kavitha Thamadharan, Nurazean Maarop
Abstract:
The implementation of e-assessment as a tool to support the process of teaching and learning has become a popular technological means in universities. E-assessment provides many advantages to users, especially flexibility in teaching and learning, and an e-assessment system has the capability to improve the quality of delivering education. However, there still exists a drawback in terms of security, which limits user acceptance of online learning systems. Even though there are studies providing solutions for identified security threats in e-learning usage, there is no particular model that addresses the factors that influence the acceptance of an e-assessment system by lecturers from a security perspective. The aim of this study is to explore the security aspects of e-assessment in regard to the acceptance of the technology. As a result, a conceptual model of secure acceptance of e-assessment is proposed. Both human and security factors are considered in the formulation of this conceptual model. In order to increase understanding of the critical issues related to the subject of this study, an interpretive approach involving a convergent mixed-methods design is proposed for executing the research. This study will be useful in providing a more insightful understanding of the factors that influence user acceptance of e-assessment systems from a security perspective.
Keywords: Secure Technology Acceptance, E-Assessment Security, E-Assessment, Education Technology.
1101 Ligand-Depended Adsorption Characteristics of Silver Nanoparticles on Activated Carbon
Authors: Hamza Simsir, Nurettin Eltugral, Selhan Karagoz
Abstract:
Surface modification and functionalization have been important tools for scientists in opening new frontiers in nanoscience and nanotechnology. Desired surface characteristics for the intended applications can be achieved with surface functionalization. In this work, the effect of water-soluble ligands on the adsorption capabilities of silver nanoparticles onto activated carbon (AC), which was synthesized from German beech wood, was investigated. Sodium borohydride (NaBH4) and polyvinyl alcohol (PVA) were used as the ligands. Silver nanoparticles with different surface coatings had average sizes ranging from 10 to 13 nm. They were synthesized in aqueous media by reducing Ag(I) ions in the presence of the ligands. These particles displayed adsorption tendencies towards AC when they were mixed together and shaken in distilled water. Silver nanoparticles reduced and stabilized by NaBH4 (NaBH4-Ag NPs) adsorbed onto AC as a homogeneous dispersion of aggregates with sizes in the range of 100-400 nm. Silver nanoparticles prepared in the presence of both NaBH4 and PVA (NaBH4/PVA-Ag NPs) also adsorbed and dispersed homogeneously, but they aggregated with larger sizes on the AC surface (ranging from 300 to 600 nm). In addition, the desorption resistance of the Ag nanoparticles was investigated in distilled water. According to the results, the Ag NPs did not desorb from the AC surface in distilled water.
Keywords: Activated carbon, adsorption, ligand, silver nanoparticles.
1100 Multiple Model and Neural based Adaptive Multi-loop PID Controller for a CSTR Process
Authors: R. Vinodha, S. Abraham Lincoln, J. Prakash
Abstract:
Multi-loop (decentralized) Proportional-Integral-Derivative (PID) controllers have been used extensively in the process industries due to their simple structure for the control of multivariable processes. The objective of this work is to design a multiple-model adaptive multi-loop PID strategy (Multiple Model Adaptive-PID) and a neural network based multi-loop PID strategy (Neural Net Adaptive-PID) for the control of a multivariable system. The first method combines the output of multiple linear PID controllers, each describing the process dynamics at a specific level of operation. The global output is an interpolation of the individual multi-loop PID controller outputs, weighted based on the current value of the measured process variable. In the second method, a neural network is used to calculate the PID controller parameters based on the scheduling variable that corresponds to a major shift in the process dynamics. The proposed control schemes are simple in structure with low computational complexity. The effectiveness of the proposed control schemes has been demonstrated on the CSTR process, which exhibits dynamic non-linearity.
Keywords: Multiple-model adaptive PID controller, multivariable process, CSTR process.
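A minimal Python sketch of the interpolation idea in the first method: several locally tuned PID controllers are evaluated and their outputs blended according to how close the measured process variable is to each controller's nominal operating level. The Gaussian weighting, the operating levels and all gains are illustrative assumptions, not values from the paper.

```python
import numpy as np

class PID:
    """Discrete PID controller tuned for one operating level."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def output(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def blended_control(error, measured_pv, controllers, levels, width=0.5):
    """Interpolate the local PID outputs, weighting each controller by how close
    the measured process variable is to its nominal operating level."""
    outputs = np.array([c.output(error) for c in controllers])
    weights = np.exp(-0.5 * ((measured_pv - np.array(levels)) / width) ** 2)
    weights /= weights.sum()
    return float(np.dot(weights, outputs))

# Three local controllers tuned at (hypothetical) operating levels of the
# measured process variable; gains and levels are illustrative only.
levels = [1.0, 2.0, 3.0]
ctrls = [PID(1.2, 0.8, 0.0, 0.1), PID(0.9, 0.5, 0.0, 0.1), PID(0.6, 0.3, 0.0, 0.1)]
u = blended_control(error=0.1, measured_pv=1.6, controllers=ctrls, levels=levels)
print("blended control action:", round(u, 4))
```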
1099 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (%G) for Gene Silencing
Authors: Reena Murali, David Peter S.
Abstract:
The small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression by translational inhibition. Recent studies show that upregulation of mRNA causes serious diseases like cancer, so designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed earlier. In this work, we analyze the existing well-scoring second-generation siRNA prediction tools and optimize the efficiency of siRNA prediction by designing a computational model using an Artificial Neural Network and whole stacking energy (%G), which may help in gene silencing and drug design in cancer therapy. Our model is trained and tested against a large data set of siRNA sequences. Validation of our results is done by finding the correlation coefficient of experimental versus predicted inhibition efficacy of siRNA. We achieved a correlation coefficient of 0.727 in our previous computational model, and we could improve the correlation coefficient up to 0.753 when the threshold of the whole stacking energy is greater than or equal to -32.5 kcal/mol.
Keywords: Artificial Neural Network, Double Stranded RNA, RNA Interference, Short Interfering RNA.
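The validation and thresholding steps lend themselves to a short sketch: Pearson correlation between experimental and predicted inhibition efficacies, plus a filter on whole stacking energy at the -32.5 kcal/mol threshold quoted above. The candidate records, sequences and field names are hypothetical; the trained ANN itself is not reproduced.

```python
import numpy as np

def pearson_r(predicted, observed):
    """Correlation coefficient between predicted and experimental inhibition."""
    return float(np.corrcoef(predicted, observed)[0, 1])

def filter_by_stacking_energy(candidates, threshold=-32.5):
    """Keep siRNA candidates whose whole stacking energy (kcal/mol) meets the
    threshold mentioned in the abstract."""
    return [c for c in candidates if c["stacking_energy_kcal_mol"] >= threshold]

# Hypothetical candidate records and scores (values are illustrative only)
candidates = [
    {"seq": "GGAUCACUCGGAUUACG", "stacking_energy_kcal_mol": -30.1, "predicted": 0.82},
    {"seq": "CCUAGGAACGUUCAGAU", "stacking_energy_kcal_mol": -35.4, "predicted": 0.40},
    {"seq": "AUGGCUUCAGGCAAUCC", "stacking_energy_kcal_mol": -31.8, "predicted": 0.66},
]
kept = filter_by_stacking_energy(candidates)
print("candidates kept:", len(kept))

# Validation step: correlate predicted efficacy with (hypothetical) lab measurements
observed = [0.78, 0.45, 0.70]
print("r =", round(pearson_r([c["predicted"] for c in candidates], observed), 3))
```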
1098 Order Statistics-based "Anti-Bayesian" Parametric Classification for Asymmetric Distributions in the Exponential Family
Authors: A. Thomas, B. John Oommen
Abstract:
Although the field of parametric Pattern Recognition (PR) has been thoroughly studied for over five decades, the use of the Order Statistics (OS) of the distributions to achieve this has not been reported. The pioneering work on using OS for classification was presented in [1] for the Uniform distribution, where it was shown that optimal PR can be achieved in a counter-intuitive manner, diametrically opposed to the Bayesian paradigm, i.e., by comparing the testing sample to a few samples distant from the mean. This must be contrasted with the Bayesian paradigm in which, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal strategy would be to achieve this based on the (Mahalanobis) distance from the corresponding central points, for example, the means. In [2], we showed that the results could be extended for a few symmetric distributions within the exponential family. In this paper, we attempt to extend these results significantly by considering asymmetric distributions within the exponential family, for some of which even the closed form expressions of the cumulative distribution functions are not available. These distributions include the Rayleigh, Gamma and certain Beta distributions. As in [1] and [2], the new scheme, referred to as Classification by Moments of Order Statistics (CMOS), attains an accuracy very close to the optimal Bayes’ bound, as has been shown both theoretically and by rigorous experimental testing.
Keywords: Classification using Order Statistics (OS), Exponential family, Moments of OS.
1097 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks
Authors: K. Indra Gandhi
Abstract:
Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying and collection of information for further analysis. However, software development has not been considered as a separate entity in this process of data collection, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven and application-driven in nature. This implies that there is a tremendous need for separation of concerns from the software development perspective. A layered approach for developing the data acquisition design based on Model Driven Development (MDD) has been proposed, as the sensed data collection process itself varies depending upon the application taken into consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel has been proposed that enables reusability and realization of the software development as an adaptable component for WSN systems. Further, observation of users' perception indicates that the proposed model helps in improving the programmer's productivity by realizing the collaborative system involved.
Keywords: Model-driven development, wireless sensor networks, data acquisition, separation of concern, layered design.
1096 Using Blockchain Technology to Extend the Vendor Managed Inventory for Sustainability
Authors: Elham Ahmadi, Roshaali Khaturia, Pardis Sahraei, Mohammad Niyayesh, Omid Fatahi Valilai
Abstract:
Nowadays, Information Technology (IT) is changing the way traditional enterprise management concepts work. One of the most prominent IT achievements is Blockchain technology. This technology enables the distributed collaboration of stakeholders in their interactions while fulfilling the security and consensus rules among them. This paper focuses on the application of Blockchain technology to enhance one of the traditional inventory management models. Vendor Managed Inventory (VMI) has been considered one of the most efficient mechanisms for vendor inventory planning by suppliers. While VMI has brought competitive advantages to many industries, its centralized mechanism limits the simultaneous collaboration of a pool of suppliers and vendors. This paper studies recent research on VMI applications in industry and also investigates applications of Blockchain technology for the decentralized collaboration of stakeholders. Focusing on the sustainability of the total supply chain consisting of suppliers and vendors, it proposes a Blockchain-based VMI conceptual model. The different capabilities of this model for enabling the collaboration of stakeholders while maintaining competitive advantages and addressing sustainability issues are discussed.
Keywords: Vendor Managed Inventory, Blockchain technology, supply chain planning, sustainability.
1095 Physics of Decision for Polling Place Management: A Case Study from the 2020 USA Presidential Election
Authors: Nafe Moradkhani, Frederick Benaben, Benoit Montreuil, Ali Vatankhah Barenji, Dima Nazzal
Abstract:
In the context of the global pandemic, the practical management of the 2020 presidential election in the USA was a strong concern. To anticipate and prepare for this election accurately, one of the main challenges was to confront (i) forecasts of voter turnout, (ii) capacities of the facilities, and (iii) potential configuration options of resources. The approach chosen to conduct this anticipative study consists of collecting data about forecasts and using simulation models to work simultaneously on resource allocation and facility configuration of polling places in Fulton County, Georgia’s largest county. This article presents the results of simulations of such places facing pre-identified potential risks. These results are oriented towards the efficiency of these places according to different criteria (health, trust, comfort). Then a dynamic framework is introduced to describe risks as physical forces perturbing the efficiency of the observed system. Finally, the main benefits and contributions resulting from this simulation campaign are presented.
Keywords: Performance, decision support, simulation, artificial intelligence, risk management, election, pandemics, information system.
1094 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capability. WSNs may suffer from multiple node failures when they are exposed to harsh environments such as military zones or disaster locations, and lose connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a longer transmission range, enforcing a minimum number of them to be used. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
Keywords: Connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks.
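A compact sketch of the GA idea under stated assumptions: chromosomes hold up to an upper-bound number of candidate relay positions with on/off genes, and the fitness penalizes layouts that fail to reconnect the segment representatives while rewarding fewer active relays otherwise. The coordinates, communication range and GA parameters are illustrative, not taken from the paper.

```python
import math
import random

random.seed(1)
RANGE = 35.0                                   # assumed communication range
AREA = 100.0
MAX_RN = 6                                     # upper bound on relay count = chromosome length
segments = [(10.0, 10.0), (80.0, 20.0), (45.0, 80.0)]   # representatives of disjoint segments

def connected(relays):
    """True if every segment representative ends up in one component when any
    two nodes (segments or relays) within RANGE of each other are linked."""
    nodes = segments + relays
    parent = list(range(len(nodes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= RANGE:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(segments))}) == 1

def fitness(chrom):
    relays = [(x, y) for x, y, on in chrom if on]
    if not connected(relays):
        return -1000.0                         # infeasible layouts are heavily penalized
    return -float(len(relays))                 # feasible: fewer active relays is better

def random_chrom():
    return [(random.uniform(0, AREA), random.uniform(0, AREA), random.random() < 0.7)
            for _ in range(MAX_RN)]

def mutate(chrom, rate=0.2):
    out = []
    for x, y, on in chrom:
        if random.random() < rate:
            x, y, on = random.uniform(0, AREA), random.uniform(0, AREA), not on
        out.append((x, y, on))
    return out

pop = [random_chrom() for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)        # elitist selection
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
best = max(pop, key=fitness)
print(fitness(best), [(round(x), round(y)) for x, y, on in best if on])
```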
1093 Secure Socket Layer in the Network and Web Security
Authors: Roza Dastres, Mohsen Soori
Abstract:
In order to electronically exchange information between network users on the web of data, different software packages such as Outlook have been presented. Thus, the traffic of users on a site, or even across the floors of a building, can be decreased as a result of applying secure and reliable data sharing software. It is essential to provide a fast, secure and reliable network system for data sharing webs in order to create advanced communication systems among network users. In the present research work, different encoding methods and algorithms in data sharing systems are studied in order to increase the security of data sharing systems by preventing hackers from accessing the transferred data. To increase security in networks, the possibility of textual conversation between customers of a local network is studied. Application of encryption and decryption algorithms is studied in order to increase security in networks by preventing hackers from infiltrating them. As a result, a reliable and secure communication system between members of a network can be provided by preventing additional traffic in the website environment, in order to increase speed, accuracy and security in the network and web systems of data sharing.
Keywords: Secure Socket Layer, Security of networks.
1092 Adomian Decomposition Method Associated with Boole's Integration Rule for Goursat Problem
Authors: Mohd Agos Salim Nasir, Ros Fadilah Deraman, Siti Salmah Yasiran
Abstract:
The Goursat partial differential equation arises in linear and nonlinear partial differential equations with mixed derivatives. This equation is a second-order hyperbolic partial differential equation which occurs in various fields of study, such as engineering, physics, and applied mathematics. Many approaches have been suggested to approximate the solution of the Goursat partial differential equation. However, all of the suggested methods traditionally focused on numerical differentiation approaches, including forward and central differences, in deriving the scheme. An innovation has been made in deriving the Goursat partial differential equation scheme which involves numerical integration techniques. In this paper we have developed a new scheme to solve the Goursat partial differential equation based on the Adomian decomposition method (ADM) associated with Boole's integration rule to approximate the integration terms. The new scheme can easily be applied to many linear and nonlinear Goursat partial differential equations and is capable of reducing the size of computational work. The accuracy of the results reveals the advantage of this new scheme over existing numerical methods.
Keywords: Goursat problem, partial differential equation, Adomian decomposition method, Boole's integration rule.
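Boole's rule, the integration ingredient named above, is concrete enough to sketch; the composite version below would approximate the integral terms in the ADM recursion. The check against a known integral is only a sanity test, not part of the Goursat scheme itself.

```python
import math

def booles_rule(f, a, b):
    """Boole's five-point closed Newton-Cotes rule on [a, b]:
    integral ~ (2h/45) * (7f0 + 32f1 + 12f2 + 32f3 + 7f4), with h = (b - a)/4."""
    h = (b - a) / 4.0
    x = [a + i * h for i in range(5)]
    wts = [7, 32, 12, 32, 7]
    return (2.0 * h / 45.0) * sum(wi * f(xi) for wi, xi in zip(wts, x))

def composite_booles(f, a, b, panels=8):
    """Apply Boole's rule on each of 'panels' sub-intervals and sum the pieces."""
    edges = [a + i * (b - a) / panels for i in range(panels + 1)]
    return sum(booles_rule(f, lo, hi) for lo, hi in zip(edges[:-1], edges[1:]))

# Sanity check on a known integral: the integral of e^x over [0, 1] equals e - 1
print(composite_booles(math.exp, 0.0, 1.0), math.e - 1.0)
```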
1091 Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios
Authors: Omar F. Hamad, T. Marwala
Abstract:
With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping the max-heap overlay. The NGSs - determined as the respective bandwidth-latency products - govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting - increasing packet delivery which could otherwise be hindered by induced packet loss occurring in other schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters - the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 - α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the BDR and the LDR. A max-heap-form tree is constructed with the assumption that all nodes possess an NGS less than that of the source node. To maintain load balance, children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all its siblings able to do so have already acquired the same number of children; this is done logically from left to right in a conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme with others - Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) - have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delay.
Keywords: Overlay multicast, available bandwidth, max-heap-form overlay, induced packet loss, bandwidth-latency product, Node Gain Score (NGS).
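The abstract names the four inputs (Ba, Br, Lp, Lb) and the α / (1,000 - α) weights but not the exact combination formula, so the sketch below assumes simple ratio definitions and a weighted-sum NGS; the max-heap ordering by NGS is then shown with Python's heapq.

```python
import heapq

ALPHA = 600  # weight on the bandwidth term; (1000 - ALPHA) weights the latency term

def node_gain_score(Ba, Br, Lp, Lb, alpha=ALPHA):
    """Node Gain Score from the four inputs named in the abstract.
    Assumed ratio definitions (not given explicitly in the abstract):
      BDR = estimated available bandwidth / requested bandwidth
      LDR = source-advised best latency / proposed latency to parent
    Assumed combination: weighted sum of the two ratios."""
    bdr = Ba / Br
    ldr = Lb / Lp
    return alpha * bdr + (1000 - alpha) * ldr

# Hypothetical joining nodes: name -> (Ba, Br, Lp, Lb)
nodes = {"n1": (8.0, 2.0, 40.0, 25.0),
         "n2": (4.0, 2.0, 20.0, 25.0),
         "n3": (6.0, 3.0, 60.0, 25.0)}
scores = {name: node_gain_score(*vals) for name, vals in nodes.items()}

# heapq is a min-heap, so push negated scores to pop nodes in decreasing NGS
# order, which is the order used to fill the max-heap-form overlay.
heap = [(-s, name) for name, s in scores.items()]
heapq.heapify(heap)
while heap:
    neg_s, name = heapq.heappop(heap)
    print(name, round(-neg_s, 1))
```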
1090 Uncertainty Propagation and Sensitivity Analysis During Calibration of an Integrated Land Use and Transport Model
Authors: Parikshit Dutta, Mathieu Saujot, Elise Arnaud, Benoit Lefevre, Emmanuel Prados
Abstract:
In this work, the propagation of uncertainty during the calibration process of TRANUS, an integrated land use and transport model (ILUTM), has been investigated. It has also been examined, through a sensitivity analysis, which input parameters affect the variation of the outputs the most. Moreover, a probabilistic verification methodology for the calibration process, which equates the observed and calculated production, has been proposed. The model chosen as an application is the model of the city of Grenoble, France. For sensitivity analysis and uncertainty propagation, the Monte Carlo method was employed, and a statistical hypothesis test was used for verification. The parameters of the induced demand function in TRANUS were assumed to be uncertain in the present case. It was found that, if TRANUS converges during calibration, then with high probability the calibration process is verified. Moreover, a weak correlation was found between the inputs and the outputs of the calibration process. The total effect of the inputs on the outputs was investigated, and the output variation was found to be dictated by only a few input parameters.
Keywords: Uncertainty propagation, sensitivity analysis, calibration under uncertainty, hypothesis testing, integrated land use and transport models, TRANUS, Grenoble.
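A generic Monte Carlo sketch of the propagation and sensitivity steps: uncertain inputs are sampled, pushed through a model run, and correlation-based indices indicate which inputs drive the output. The placeholder model and the assumed input distributions stand in for TRANUS, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params):
    """Placeholder for one run of the calibrated model (TRANUS itself is not
    reproduced here); returns a scalar output of interest."""
    a, b, c = params
    return 3.0 * a + 0.1 * b ** 2 + rng.normal(0.0, 0.05)   # c is intentionally inert

# Uncertain inputs sampled from assumed distributions
N = 2000
samples = np.column_stack([
    rng.normal(1.0, 0.2, N),     # parameter a
    rng.uniform(0.0, 2.0, N),    # parameter b
    rng.normal(5.0, 1.0, N),     # parameter c
])
outputs = np.array([model(p) for p in samples])

# Propagated uncertainty of the output
print("output mean = %.3f, std = %.3f" % (outputs.mean(), outputs.std()))

# Simple correlation-based sensitivity: which inputs drive the output variation?
for name, col in zip("abc", samples.T):
    print("corr(output, %s) = %+.2f" % (name, np.corrcoef(col, outputs)[0, 1]))
```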
1089 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018
Authors: M. Sitoe, O. Zacarias
Abstract:
University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the academic performance of the students themselves. This work uses data mining techniques to develop a predictive model to identify students with a tendency towards evasion and retention. To this end, a database of real students' data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely K-nearest neighbor, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of Bagging and Stacking were used. After comparing the results obtained by the three classifiers, logistic regression using Bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, rate of true positives, and precision. Retention is the most common tendency.
Keywords: Evasion and retention, cross-validation, bagging, stacking.
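The best configuration reported above (logistic regression with Bagging, seven folds) can be mirrored roughly in Python with scikit-learn instead of Weka; the student features below are synthetic placeholders, so the printed accuracy is not comparable to the reported 90%.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder student records: admission grade, first-year average, credits passed
X = rng.normal(size=(388, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=388) > 0).astype(int)  # 1 = retained

# Bagged logistic regression evaluated with 7-fold cross-validation, mirroring
# the best-performing configuration reported in the abstract (the paper used Weka).
clf = BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=25, random_state=0)
scores = cross_val_score(clf, X, y, cv=7, scoring="accuracy")
print("7-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```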
1088 Firm Performance of Thai Cuisines in Bangkok, Thailand: Contribution to the Tourism Industry
Authors: Prateep Wajeetongratana
Abstract:
This study is descriptive-normative research. It attempted to investigate the restaurants' firm performance in terms of the customers' and restaurant personnel's degree of satisfaction. A total of 12 restaurants in Bangkok, Thailand, that offer Thai cuisine were included in this study. It involved 24 stockholders/managers, 120 subordinates and 360 customers. The general managers and restaurant stockholders, 10 staff members, and 30 customers for each restaurant were chosen by random sampling. This study found that respondents are slightly satisfied with their work environment but are generally satisfied with accessibility to transportation and malls, convenience, safety, recreation, freedom from noise, and attractions; customers find the quality of food in most Thai cuisine restaurants, as well as services, food prices, sales promotion, and capital and length of service, satisfactory. Therefore, both stockholder-related and personnel-related factors, which are influenced by restaurant, personnel, and customer-related factors, are partially accepted, whereas customer-related factors, which are influenced by restaurant, personnel and customer-related factors, are rejected.
Keywords: Firm performance, Thai Cuisine, Tourism industry.
1087 Copper Content in Daily Food Rations Planned and Served to Students from Selected Military Academies and Soldiers Doing Compulsory Military Service in the Polish Army
Authors: J. Bertrandt, A. Kłos, R. Waszkowski, T. Nowicki, R. Pytlak, E. Stęzycka, A. Gazdzinska
Abstract:
The aim of the work was the estimation of copper intake from the daily food rations used for the alimentation of students of military high schools and soldiers doing compulsory military service in the Polish Army. The average planned copper content in the daily food rations used for the alimentation of students and soldiers amounted to 2.49±0.35 mg and 2.44±0.25 mg, respectively. The copper content in the daily food rations given for consumption to students ranged from 1.81±0.14 mg to 2.58±0.44 mg, while the daily food rations served to soldiers delivered from 2.06±0.45 mg to 2.13±0.33 mg. The copper content in the rations planned for students' and soldiers' alimentation was within the limits of the norms obligatory in Poland. The daily food rations given for consumption, except the rations served to students, were within the limits of the recommended norms, but the food rations actually eaten by the examined men did not cover the requirements for copper.
Keywords: Copper, daily food ration, military service.
1086 Flow Analysis of Viscous Nanofluid Due to Rotating Rigid Disk with Navier’s Slip: A Numerical Study
Authors: Khalil Ur Rehman, M. Y. Malik, Usman Ali
Abstract:
In this paper, the problem proposed by Von Karman is treated in the presence of additional flow field effects when the liquid is spaced above the rotating rigid disk. To be more specific, a purely viscous fluid flow yielded by a rotating rigid disk with Navier’s condition is considered in both magnetohydrodynamic and hydrodynamic frames. The rotating flow regime is manifested with a heat source/sink and chemically reactive species. Moreover, the features of thermophoresis and Brownian motion are reported by considering a nanofluid model. The flow field formulation is obtained mathematically in terms of high-order differential equations. The reduced system of equations is solved numerically through a self-coded computational algorithm. The pertinent outcomes are discussed systematically and provided through graphical and tabular presentations. A simultaneous way of study makes this attempt attractive in the sense that the article contains a dual framework, and validation of the results against existing work confirms the execution of the self-coded algorithm for the fluid flow regime over a rotating rigid disk.
Keywords: Nanoparticles, Newtonian fluid model, chemical reaction, heat source/sink.
1085 New Features for Specific JPEG Steganalysis
Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura
Abstract:
We present in this paper a new approach for specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we pointed out new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the Avalanche Criterion of the JPEG lossless compression step. This criterion makes possible the design of detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.
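Two of the ingredients above, the entropy of the compressed stream and a Fisher linear discriminant over the resulting features, can be sketched directly; the embedding step and the synthetic feature values are placeholders, not data from the paper.

```python
import numpy as np
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte) of a compressed byte stream."""
    counts = Counter(data)
    p = np.array(list(counts.values()), dtype=float) / len(data)
    return float(-(p * np.log2(p)).sum())

def fisher_direction(X0, X1):
    """Fisher linear discriminant direction for two feature matrices
    (rows = images, columns = statistics such as the entropy deviation)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.atleast_2d(np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
    return np.linalg.solve(Sw, m1 - m0)

# Feature idea from the abstract: entropy deviation after a further embedding,
#   deviation = byte_entropy(embed(jpeg_bytes)) - byte_entropy(jpeg_bytes)
# where embed() would call the steganographic tool under test (not shown here).
rng = np.random.default_rng(0)
cover_devs = rng.normal(0.05, 0.02, size=(50, 1))   # synthetic: covers deviate more
stego_devs = rng.normal(0.01, 0.02, size=(50, 1))   # synthetic: stego images deviate less
print("discriminant direction:", fisher_direction(stego_devs, cover_devs))
print("entropy of a toy stream:", round(byte_entropy(b"compressed JPEG byte stream"), 3))
```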
1084 Thai Halal Products Brand Tips
Authors: Pibool Waijittragum
Abstract:
The purpose of this research is to analyze the marketing strategies of Thai Halal products, which are related to the way of life of Thai Muslims. The expected benefit is a marketing strategy for the brand building process for Halal products in Thailand. Four elements of marketing strategies necessary for brand identity creation form the research framework: Attributes, Benefits, Values and Personality. The research methodology applied both qualitative and quantitative approaches; 19 marketing experts with dynamic roles in Thai consumer products were interviewed. In addition, a field survey of 122 Thai Muslims selected from 175 Muslim communities in Bangkok was studied. Data analysis was carried out according to 5 categories of Thai Halal products: 1) Meat, 2) Vegetables and Fruits, 3) Instant foods and Garnishing ingredients, 4) Beverages, Desserts and Snacks, 5) Hygienic daily products, such as soap, shampoo and body lotion.
Keywords: Marketing strategies, Product identity, Branding, Thai Halal products.
1083 Tracing Quality Cost in a Luggage Manufacturing Industry
Authors: S. B. Jaju, R. R. Lakhe
Abstract:
Quality costs are the costs associated with preventing, finding, and correcting defective work. Since the main language of corporate management is money, quality-related costs act as a means of communication between the staff of quality engineering departments and the company managers. The objective of quality engineering is to minimize the total quality cost across the life of a product. Quality costs provide a benchmark against which improvement can be measured over time. They provide a rupee-based report on quality improvement efforts and are an effective tool to identify, prioritize and select quality improvement projects. After reviewing the literature, it was noticed that a simplified methodology for the collection of quality cost data in a manufacturing industry was required. A quantified standard methodology is proposed for collecting data on the various elements of the quality cost categories for the manufacturing industry. Also, in the light of the research carried out so far, it is felt necessary to standardise the cost elements in each of the prevention, appraisal, internal failure and external failure costs. Here an attempt is made to standardise the various cost elements applicable to the manufacturing industry, and data are collected by using the proposed quantified methodology. This paper discusses a case study carried out in a luggage manufacturing industry.
Keywords: Quality costs, PAF model, quantified methodology, case study.
1082 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the Loess is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.
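A sketch of AIC-driven backward elimination, using an ordinary least-squares fit as a stand-in for the additive or nonparametric estimator so the example stays short; the variable names and data are synthetic.

```python
import numpy as np

def aic_linear(X, y):
    """AIC of an ordinary least-squares fit: n*log(RSS/n) + 2*(p + 1)."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(((y - Xd @ beta) ** 2).sum())
    return n * np.log(rss / n) + 2 * (p + 1)

def backward_eliminate(X, y, names):
    """Repeatedly drop the variable whose removal most lowers the AIC."""
    keep = list(range(X.shape[1]))
    best = aic_linear(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            a = aic_linear(X[:, trial], y)
            if a < best:
                best, keep, improved = a, trial, True
    return [names[k] for k in keep], best

# Toy data: x2 is pure noise and should be eliminated
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(backward_eliminate(X, y, ["x0", "x1", "x2"]))
```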
1081 Mathematical Approach towards Fault Detection and Isolation of Linear Dynamical Systems
Authors: V. Manikandan, N. Devarajan
Abstract:
The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and a neural network for fault classification. The diagnostic approach is accomplished in two steps. In step 1, the system is identified using a series of input/output variables through an identification algorithm. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and non-faulty systems. An Artificial Neural Network trained using predetermined faulty conditions serves to classify the unknown fault. In step 1, the identification is done by first formulating a Hankel matrix out of the input/output variables and then decomposing the matrix via the singular value decomposition technique. For identifying the system online, a sliding window approach is adopted wherein a window slides over a subset of 'n' input/output variables. The faults are introduced at arbitrary instances and the identification is carried out online. Fault residues are extracted by comparing the first five Markov parameters of the faulty and non-faulty systems. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
Keywords: Artificial neural network, Fault Diagnosis, Identification, Markov parameters.
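The residual-generation idea can be sketched as follows: Markov parameters (impulse-response samples) are estimated from input/output data by a least-squares FIR fit, a Hankel matrix of them is factored by SVD as in the identification step, and the difference of the first five parameters between healthy and faulty runs forms the residue. The first-order test plant and the simulated fault are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a, b, u):
    """First-order test plant y[t] = a*y[t-1] + b*u[t-1] (illustrative only)."""
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = a * y[t - 1] + b * u[t - 1]
    return y

def markov_parameters(u, y, n=5):
    """Least-squares FIR fit; the coefficients approximate the first n Markov
    parameters (impulse-response samples) of the system."""
    H = np.array([u[t - n:t][::-1] for t in range(n, len(u))])
    theta, *_ = np.linalg.lstsq(H, y[n:], rcond=None)
    return theta

u = rng.normal(size=500)
healthy = markov_parameters(u, simulate(0.8, 1.0, u))
faulty = markov_parameters(u, simulate(0.6, 0.7, u))    # the fault changes the dynamics

residue = healthy - faulty        # fault signature that would feed the neural classifier
print("residue:", np.round(residue, 3))

# Hankel matrix of the estimated Markov parameters; its singular values indicate
# the system order during the identification step.
Hk = np.array([[healthy[i + j] for j in range(3)] for i in range(3)])
print("singular values:", np.round(np.linalg.svd(Hk, compute_uv=False), 3))
```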
1080 Linear Phase High Pass FIR Filter Design using Improved Particle Swarm Optimization
Authors: Sangeeta Mondal, Vasundhara, Rajib Kar, Durbadal Mandal, S. P. Ghoshal
Abstract:
This paper presents an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Improved Particle Swarm Optimization (IPSO). In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem. An iterative method is introduced to find the optimal solution of the FIR filter design problem. Evolutionary algorithms such as the real-coded genetic algorithm (RGA), particle swarm optimization (PSO), and improved particle swarm optimization (IPSO) have been used in this work for the design of the linear phase high pass FIR filter. IPSO is an improved PSO that proposes a new definition for the velocity vector and swarm updating, and hence the solution quality is improved. A comparison of simulation results reveals the optimization efficacy of the algorithm over the prevailing optimization techniques for the solution of the multimodal, non-differentiable, highly non-linear, and constrained FIR filter design problem.
Keywords: FIR filter, IPSO, GA, PSO, Parks and McClellan algorithm, evolutionary optimization, high pass filter.
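A bare-bones particle swarm sketch for a linear-phase high-pass design: a symmetric (Type-I) coefficient vector is optimized against a weighted ripple cost on a frequency grid. It uses the standard PSO velocity update, not the paper's IPSO modification, and the filter length, band edges and swarm settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 21                                   # filter length (odd => Type-I linear phase)
half = (N + 1) // 2                      # independent coefficients of a symmetric filter
w = np.linspace(0.0, np.pi, 256)
desired = (w >= 0.55 * np.pi).astype(float)                 # ideal high-pass response
weight = np.where((w < 0.45 * np.pi) | (w > 0.55 * np.pi), 1.0, 0.0)  # skip transition band

def magnitude(a):
    """Zero-phase magnitude of a symmetric (linear-phase) FIR filter."""
    return a[0] + 2.0 * sum(a[k] * np.cos(k * w) for k in range(1, half))

def cost(a):
    return float(np.max(weight * np.abs(magnitude(a) - desired)))   # weighted peak ripple

# Standard PSO loop (the paper's IPSO adds a modified velocity/swarm update).
P, iters = 30, 300
pos = rng.uniform(-0.5, 0.5, size=(P, half))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((P, half)), rng.random((P, half))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best weighted ripple:", round(float(pbest_cost.min()), 4))
```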
1079 Development and Characterization of Bio-Tribological, Nano-Multilayer Coatings for Medical Tools Application
Authors: L. Major, J. M. Lackner, M. Dyner, B. Major
Abstract:
The development of a new generation of bio-tribological, multilayer coatings opens an avenue for the fabrication of future high-tech functional surfaces. In the presented work, nano-composite, Cr/CrN+[Cr/a-C:H implanted by metallic nanocrystals] multilayer coatings have been developed for the surface protection of medical tools. The thin films were fabricated by a hybrid Pulsed Laser Deposition technique. Complex microstructure analysis of the nano-multilayer coatings, subjected to mechanical and biological tests, was performed by means of transmission electron microscopy (TEM). Microstructure characterization revealed the layered arrangement of Cr23C6 nanoparticles in the multilayer structure. The influence of deposition conditions on the bio-tribological properties of the coatings was studied. The bio-tests were used as a screening tool for the analyzed nano-multilayer coatings before they could be deposited on medical tools. Bio-medical tests were done using fibroblasts. The mechanical properties of the coatings were investigated by means of a ball-on-disc mechanical test. The micro-hardness was measured using a Berkovich indenter. The scratch adhesion test was done using a Rockwell indenter. From the bio-tribological point of view, the C106_1 material had the optimal properties.
Keywords: Bio-tribological coatings, cell-material interaction, hybrid PLD, tribology.
1078 Nonlinear Static Analysis of Laminated Composite Hollow Beams with Super-Elliptic Cross-Sections
Authors: G. Akgun, I. Algul, H. Kurtaran
Abstract:
In this paper, the geometrically nonlinear static behavior of laminated composite hollow super-elliptic beams is investigated using the generalized differential quadrature method. A super-elliptic beam can have both oval and elliptic cross-sections by adjusting parameters in the super-ellipse formulation (also known as Lamé curves). The equilibrium equations of the super-elliptic beam are obtained using the virtual work principle. Geometric nonlinearity is taken into account using von Kármán nonlinear strain-displacement relations. Spatial derivatives in the strains are expressed with the generalized differential quadrature method. The transverse shear effect is considered through the first-order shear deformation theory. The static equilibrium equations are solved using the Newton-Raphson method. Several composite super-elliptic beam problems are solved with the proposed method. The effects of the layer orientations of the composite material, boundary conditions, ovality and ellipticity on the bending behavior are investigated.
Keywords: Generalized differential quadrature, geometric nonlinearity, laminated composite, super-elliptic cross-section.
1077 In Vitro Study of Coded Transmission in Synthetic Aperture Ultrasound Imaging Systems
Authors: Ihor Trots, Yuriy Tasinkevych, Andrzej Nowicki, Marcin Lewandowski
Abstract:
In this paper, a study of the synthetic transmit aperture method applying Golay coded transmission for medical ultrasound imaging is presented. Longer coded excitation allows the total energy of the transmitted signal to be increased without increasing the peak pressure. Moreover, the signal-to-noise ratio and penetration depth are improved while maintaining high ultrasound image resolution. In the work, a 128-element linear transducer array with 0.3 mm inter-element spacing, excited by one cycle and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz, was used. To generate a spherical wave covering the full image region, a single-element transmission aperture was used and all the elements received the echo signals. A comparison of 2D ultrasound images of a tissue-mimicking phantom and in vitro measurements of beef liver is presented to illustrate the benefits of the coded transmission. The results were obtained using the synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
Keywords: Golay coded sequences, radiation pattern, signal processing, synthetic aperture, ultrasound imaging.
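The Golay codes themselves are easy to illustrate: the recursive construction below builds an 8- or 16-bit complementary pair and verifies that the summed autocorrelations cancel everywhere except at zero lag, which is the property that lets coded excitation raise the transmitted energy without range-sidelobe artifacts. Beamforming and matched filtering of the echoes are not shown.

```python
import numpy as np

def golay_pair(n_doublings):
    """Recursively build a Golay complementary pair of length 2**n_doublings."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    return np.correlate(x, x, mode="full")

a, b = golay_pair(4)              # 16-bit pair; golay_pair(3) gives the 8-bit codes
s = autocorr(a) + autocorr(b)     # complementary property: range sidelobes cancel
print(np.round(s).astype(int))    # zeros everywhere except 2*len(a) at zero lag
```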
1076 Artificial Intelligence Techniques applied to Biomedical Patterns
Authors: Giovanni Luca Masala
Abstract:
Pattern recognition is the research area of Artificial Intelligence that studies the operation and design of systems that recognize patterns in data. Important application areas are image analysis, character recognition, fingerprint classification, speech analysis, DNA sequence identification, man and machine diagnostics, person identification and industrial inspection. The interest in improving the classification systems of data analysis is independent of the context of application. In fact, in many studies it is often the case that one has to recognize and distinguish groups of various objects, which requires valid instruments capable of performing this task. The objective of this article is to show several methodologies of Artificial Intelligence for data classification applied to biomedical patterns. In particular, this work deals with the realization of a Computer-Aided Detection system (CADe) that is able to assist the radiologist in identifying types of mammary tumor lesions. As an additional biomedical application of the classification systems, we present a study conducted on blood samples which shows how these methods may help to distinguish between carriers of Thalassemia (or Mediterranean Anaemia) and healthy subjects.
Keywords: Computer Aided Detection, mammary tumor, pattern recognition, thalassemia.
1075 Steel Dust as a Coating Agent for Iron Ore Pellets at Ironmaking
Authors: M. Bahgat, H. Hanafy, H. Al-Tassan
Abstract:
Cluster formation is an essential phenomenon during direct reduction processes in shaft furnaces. Decreasing the reducing temperature to avoid this problem can cause a significant drop in throughput. In order to prevent the sticking of pellets, a coating material that is basically inactive under the reducing conditions prevailing in the shaft furnace should be applied to cover the outer layer of the pellets. In the present work, steel dust is used as a coating material for iron ore pellets to explore the effectiveness of dust coating and determine the best coating conditions. Steel dust coating was applied to iron ore pellets in various concentrations. Dust slurry concentrations of 5.0-30% were used to obtain coated steel dust amounts of 1.0-5.0 kg per ton of iron ore. Coated pellets with various concentrations were reduced isothermally using a weight loss technique with a gas mixture simulating the composition of reducing gases in shaft furnaces. The influences of the various coating conditions on the reduction behavior and the morphology were studied. The optimally reduced samples were then comparatively subjected to sticking index measurement. It was found that the optimized steel dust coating condition that achieved higher reducibility with a lower sticking index was a 30% steel dust slurry concentration with 3.0 kg steel dust per ton of ore.
Keywords: Ironmaking, coating, steel dust, reduction.