Search results for: Quantum computing
546 RANFIS: Rough Adaptive Neuro-Fuzzy Inference System
Authors: Sandeep Chandana, Rene V. Mayorga
Abstract:
The paper presents a new hybridization methodology involving Neural, Fuzzy and Rough Computing. A Rough Sets based approximation technique is proposed on top of a Neuro-Fuzzy architecture. A new Rough Neuron composition, consisting of a combination of a Lower Bound neuron and a Boundary neuron, is also described. The conventional error-convergence criterion of back propagation is replaced by a new framework based on an 'Output Excitation Factor' and an inverse input transfer function. The paper also presents a brief comparison of the performance of existing Rough Neural Networks and the ANFIS architecture against the proposed methodology, showing that the rough approximation based neuro-fuzzy architecture is superior to its counterparts.
Keywords: Boundary neuron, neuro-fuzzy, output excitation factor, RANFIS, rough approximation, rough neural computing.
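The Rough Neuron pairing described above (a Lower Bound neuron combined with a Boundary neuron) is not fully specified in the abstract. The following is a minimal illustrative sketch in Python, assuming each member of the pair is an ordinary sigmoid unit and that the pair exposes a lower-approximation output plus a crude upper approximation formed by adding the boundary output; the weights and the combination rule are placeholders, not the authors' design.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RoughNeuron:
    # Toy rough neuron: a lower-bound unit paired with a boundary unit.
    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w_lower = rng.normal(size=n_inputs)      # weights of the lower-bound unit (placeholder)
        self.w_boundary = rng.normal(size=n_inputs)   # weights of the boundary unit (placeholder)

    def forward(self, x):
        lower = sigmoid(self.w_lower @ x)             # lower-approximation response
        boundary = sigmoid(self.w_boundary @ x)       # boundary-region response
        upper = min(1.0, lower + boundary)            # crude upper approximation (assumption)
        return lower, upper

neuron = RoughNeuron(n_inputs=3)
print(neuron.forward(np.array([0.2, -0.5, 1.0])))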
545 Software Effort Estimation Using Soft Computing Techniques
Authors: Parvinder S. Sandhu, Porush Bassi, Amanpreet Singh Brar
Abstract:
Various models have been derived by studying large numbers of completed software projects from various organizations and applications to explore how project size maps into project effort. Still, there is a need to improve the prediction accuracy of these models. Because a neuro-fuzzy based system can approximate non-linear functions with greater precision, a Neuro-Fuzzy system is used as a soft computing approach to generate a model by formulating the relationship from its training data. In this paper, the Neuro-Fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models from the literature.
Keywords: Effort Estimation, Neural-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model.
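For readers unfamiliar with the comparison baselines named in the abstract above, the sketch below evaluates the commonly cited single-variable forms of the Halstead, Walston-Felix, Bailey-Basili and Doty effort models (effort in person-months as a function of size in KLOC). The coefficients are the usual textbook values, not figures taken from this paper, and the neuro-fuzzy model itself is not reproduced here.

# Commonly cited single-variable effort models (E in person-months, size in KLOC).
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # form usually quoted for KLOC > 9

for kloc in (10, 50, 100):
    print(f"{kloc:>4} KLOC: "
          f"Halstead={halstead(kloc):7.1f}  "
          f"Walston-Felix={walston_felix(kloc):7.1f}  "
          f"Bailey-Basili={bailey_basili(kloc):7.1f}  "
          f"Doty={doty(kloc):7.1f}")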
544 Prediction of Binding Free Energies for Dyes Removal Using Computational Chemistry
Authors: R. Chanajaree, D. Luanwiset, K. Pongpratea
Abstract:
Dye removal is an environmental concern because the textile industries have been expanding with world population growth and industrialization. Adsorption is a low-cost and effective technique for removing dyes from wastewater, and the key task is finding suitable adsorbents. This work aims to develop effective adsorbents using a computational approach, which can predict the suitability of an adsorbent for a specific dye in terms of binding free energies. The computational approach is faster and cheaper than the experimental approach when searching for the best adsorbents. All starting structures of dyes and adsorbents are optimized by quantum calculation, and the complexes between dyes and adsorbents are generated by the docking method. The binding free energies obtained from docking are compared to binding free energies from experimental data, and the calculated energies rank in the same order as the experimental results. In addition, this work also shows the possible orientations of the complexes. Two experimental groups of dye-adsorbent complexes were used. In the first group, the adsorbent is chitosan and the dyes are reactive red (RR) and direct sun yellow (DY). In the second group, the adsorbent is poly(1,2-epoxy-3-phenoxy) propane (PEPP) and the dyes are bromocresol green (BCG) and alizarin yellow (AY).
Keywords: Dye removal, binding free energies, quantum calculation, docking.
543 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems
Authors: Nyeng P. Gyang
Abstract:
Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems remains underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performance of actual/physical and virtual/cloud multicore systems at executing algorithms that implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm was evaluated. T-tests were run on the data collected in order to determine whether the differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo superlinear speedup values, computed from the raw data collected, are not true superlinear speedups. These pseudo superlinear values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups observed in the experiments performed.
Keywords: Cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation.
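As a companion to the metrics discussed in the abstract above, the sketch below shows one conventional way to compute speedup and efficiency from timing runs and to t-test the difference between two machines. The timing values are invented for illustration, SciPy is assumed to be available, and this is not the authors' experimental pipeline.

from statistics import mean
from scipy import stats

actual  = [12.1, 11.8, 12.4, 12.0, 11.9]   # hypothetical runtimes (s) on the physical machine
virtual = [24.3, 23.9, 24.8, 24.1, 24.5]   # hypothetical runtimes (s) on the cloud/virtual machine
serial_time = 88.0                          # hypothetical single-core baseline (s)
n_cores = 8

speedup = serial_time / mean(actual)
efficiency = speedup / n_cores
print(f"speedup = {speedup:.2f}, efficiency = {efficiency:.2f}")

# Two-sample Welch t-test: is the actual-vs-virtual difference statistically significant?
t_stat, p_value = stats.ttest_ind(actual, virtual, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")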
542 A Consideration of the Achievement of Productive Level Parallel Programming Skills
Authors: Tadayoshi Horita, Masakazu Akiba, Mina Terauchi, Tsuneo Kanno
Abstract:
This paper considers the achievement of productive-level parallel programming skills, based on data from graduation studies at the Polytechnic University of Japan. The data show that most students can acquire basic parallel programming skills during the graduation study (about 600 to 700 hours) if the programming environment is limited to GPGPUs. However, the data also show that achieving productive-level parallel programming skills within the graduation study alone is a very demanding task. In addition, they suggest that parallel programming environments for GPGPU, such as CUDA and OpenCL, may be more suitable for parallel computing education than other environments such as MPI on a cluster system or the Cell.B.E. These results should be useful not only for software development, but also for hardware product development using computer technologies.
Keywords: Parallel computing, programming education, GPU, GPGPU, CUDA, OpenCL, MPI, Cell.B.E.
541 Cloud Computing Security for Multi-Cloud Service Providers: Controls and Techniques in our Modern Threat Landscape
Authors: Sandesh Achar
Abstract:
Cloud computing security is a broad term that covers a variety of security concerns for organizations that use cloud services. Multi-cloud service providers must consider several factors when addressing security for their customers, including identity and access management, data at rest and in transit, egress and ingress traffic control, vulnerability and threat management, and auditing. This paper explores each of these aspects of cloud security in detail and provides recommendations for best practices for multi-cloud service providers. It also discusses the challenges inherent in securing a multi-cloud environment and offers solutions for overcoming these challenges. By the end of this paper, readers should have a good understanding of the various security concerns associated with multi-cloud environments in the context of today’s modern cyber threats and how to address them.
Keywords: Multi-cloud service, SOC, system organization control, data loss prevention, DLP, identity and access management, IAM.
540 The Effect of Increment in Simulation Samples on a Combined Selection Procedure
Authors: Mohammad H. Almomani, Rosmanjawati Abdul Rahman
Abstract:
Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure consists of a combination of Ranking and Selection and Ordinal Optimization procedures. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also examine the effect of increasing the number of simulation samples on the combined procedure. The results of a numerical illustration clearly show the effect of this increase on the proposed combined selection procedure.
Keywords: Indifference-Zone, Optimal Computing Budget Allocation, Ordinal Optimization, Ranking and Selection, Subset Selection.
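The Optimal Computing Budget Allocation step mentioned above is commonly implemented with the asymptotic allocation rule of Chen et al.; the sketch below applies that rule to hypothetical sample means and standard deviations, with a smaller mean taken as better. It is a generic OCBA illustration, not the exact combined procedure of the paper.

import numpy as np

def ocba_allocation(means, stds, total_budget):
    # Asymptotic OCBA allocation (smaller mean = better design).
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                   # current best design
    delta = means - means[b]                    # optimality gaps (zero for the best)
    others = [i for i in range(len(means)) if i != b]
    ref = others[0]                             # express ratios relative to one non-best design
    ratio = np.zeros_like(means)
    for i in others:
        ratio[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    return np.round(total_budget * ratio / ratio.sum()).astype(int)

# Hypothetical simulated systems: estimated mean cost and standard deviation of each.
print(ocba_allocation(means=[1.0, 1.2, 1.5, 2.0], stds=[0.4, 0.5, 0.6, 0.8], total_budget=1000))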
539 Parallel Double Splicing on Iso-Arrays
Authors: V. Masilamani, D.K. Sheena Christy, D.G. Thomas
Abstract:
Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images and, to study the generating power of the system, we define the class of languages generated by it. An image is treated as an array, and a primitive called an iso-array is used to synthesize the image/array. The operation is double splicing on iso-arrays; the double splicing operation is used in DNA computing, and we use it here to synthesize images. A comparison of the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays with the existing family of local iso-picture languages is made. Certain closure properties such as union, concatenation and rotation are studied for the family of languages generated by the proposed model.
Keywords: DNA computing, splicing system, iso-picture languages, iso-array double splicing system, iso-array self splicing.
538 Trust Management for Pervasive Computing Environments
Authors: Denis Trcek
Abstract:
Trust is essential for further and wider acceptance of contemporary e-services. It was first addressed almost thirty years ago in the Trusted Computer System Evaluation Criteria standard by the US DoD, but this and other proposed approaches of that period were actually addressing security. Roughly ten years ago, methodologies followed that addressed the trust phenomenon at its core; they were based on Bayesian statistics and its derivatives, while some approaches were based on game theory. However, trust is a manifestation of judgment and reasoning processes. It has to be dealt with in accordance with this fact and adequately supported in the cyber environment. On the basis of results in the field of psychology and our own findings, a methodology called qualitative algebra has been developed, which deals with so far overlooked elements of the trust phenomenon. It complements existing methodologies and provides a basis for a practical technical solution that supports management of trust in contemporary computing environments. Such a solution is also presented at the end of this paper.
Keywords: Internet security, trust management, multi-agent systems, reasoning and judgment, modeling and simulation, qualitative algebra.
537 A Distributed Approach to Extract High Utility Itemsets from XML Data
Authors: S. Kannimuthu, K. Premalatha
Abstract:
This paper investigates a new data mining capability that entails mining of High Utility Itemsets (HUI) in a distributed environment. Much existing research in data mining deals only with the presence or absence of items and does not consider semantic measures such as the weight or cost of the items; HUI mining algorithms have evolved to address this. HUI mining is a kind of utility mining that aims to identify itemsets whose utility satisfies a given threshold. However, mining HUIs in a distributed environment, and mining them from XML data, have not been explored yet. In this work, a novel approach is proposed to mine HUIs from XML-based data in a distributed environment. The work utilizes the Service Oriented Computing (SOC) paradigm, which provides Knowledge as a Service (KaaS). The interesting patterns are provided via web services, with the help of a knowledge server, to answer the queries of the consumers. The performance of the approach is evaluated on various databases using execution time and memory consumption.
Keywords: Data mining, Knowledge as a Service, service oriented computing, utility mining.
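As a reference point for the utility-mining notion used above, the sketch below computes itemset utilities in the standard way (purchased quantity times external unit profit, summed over the transactions that contain the itemset) and keeps itemsets whose utility meets a threshold. It is a plain single-machine illustration with invented data, not the distributed XML/KaaS approach of the paper.

from itertools import combinations

# Hypothetical transactions: item -> purchased quantity.
transactions = [
    {"A": 1, "B": 2, "C": 1},
    {"B": 4, "C": 3},
    {"A": 2, "C": 1, "D": 1},
]
unit_profit = {"A": 5, "B": 2, "C": 1, "D": 10}   # external utility per item
min_utility = 12

def utility(itemset, tx):
    # Utility of an itemset inside one transaction (zero if not fully contained).
    if not set(itemset) <= tx.keys():
        return 0
    return sum(tx[i] * unit_profit[i] for i in itemset)

items = sorted({i for tx in transactions for i in tx})
high_utility_itemsets = {}
for size in range(1, len(items) + 1):
    for itemset in combinations(items, size):
        total = sum(utility(itemset, tx) for tx in transactions)
        if total >= min_utility:
            high_utility_itemsets[itemset] = total

print(high_utility_itemsets)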
536 Design and Implementation of Shared Memory based Parallel File System Logging Method for High Performance Computing
Authors: Hyeyoung Cho, Sungho Kim, SangDong Lee
Abstract:
I/O workload is a critical and important factor in analyzing I/O patterns and file system performance. However, tracing I/O operations on the fly in a distributed parallel file system is non-trivial due to collection overhead and the large volume of data. In this paper, we design and implement a parallel file system logging method for high performance computing using a shared memory-based multi-layer scheme. It minimizes overhead with reduced logging-operation response time and provides an efficient post-processing scheme through shared memory. A separate logging server can collect sequential logs from multiple clients in a cluster through packet communication. Implementation and evaluation results show the low overhead and high scalability of this architecture for high-performance parallel logging analysis.
Keywords: I/O workload, PVFS, I/O trace.
535 A Timed and Colored Petri Nets for Modeling and Verifying Cloud System Elasticity
Authors: W. Louhichi, M.Berrima, N. Ben Rajeb Robbana
Abstract:
Elasticity is an essential property of cloud computing. As the name suggests, it constitutes the ability of a cloud system to adjust resource provisioning in relation to fluctuating workloads. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, scaling in is applied in the event of over-supply and scaling out in the event of under-supply. In this paper, we propose a formal model, based on timed and colored Petri nets (TdCPNs), for modeling the duplication and removal of a virtual machine from a server. This model is based on the formal Petri net (PN) modeling language. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed PNs.
Keywords: Cloud computing, elasticity, elasticity controller, petri nets, scaling in, scaling out.
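Outside the Petri net formalism, the scaling-in/scaling-out decisions that the TdCPN model captures amount to a threshold controller: duplicate a virtual machine when the system is under-supplied and remove one when it is over-supplied. The sketch below is only that plain control loop with invented thresholds and loads; it is not the Petri net model itself.

def elasticity_controller(load_per_vm, vms, scale_out_at=0.8, scale_in_at=0.3, min_vms=1, max_vms=10):
    # Horizontal elasticity decision: duplicate or remove a VM based on average load per VM.
    if load_per_vm > scale_out_at and vms < max_vms:
        return vms + 1, "scale out (under-supply: duplicate a VM)"
    if load_per_vm < scale_in_at and vms > min_vms:
        return vms - 1, "scale in (over-supply: remove a VM)"
    return vms, "no action"

vms = 2
for load in [0.9, 0.85, 0.5, 0.2, 0.1]:           # hypothetical average load per VM
    vms, action = elasticity_controller(load, vms)
    print(f"load={load:.2f} -> {vms} VM(s): {action}")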
534 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Moses Noel Dogonyaro
Abstract:
This paper addresses the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby follows the aspiration of computational encryption models that could enhance the security of big data with respect to privacy, confidentiality, and availability for users. The cryptographic model applied for computation over the encrypted data is the Fully Homomorphic Encryption scheme. We contribute a theoretical presentation of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, together with detailed theoretical mathematical concepts for fully homomorphic encryption models. This contribution supports the full implementation of a big data analytics based cryptographic security algorithm.
Keywords: Data Analytics, Security, Privacy, Bootstrapping, and Fully Homomorphic Encryption Scheme.
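To make the idea of computing on encrypted data without decrypting it concrete, the toy below demonstrates a homomorphic property with textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This is only an illustration of the property; it is neither secure nor a fully homomorphic scheme such as the bootstrapped constructions the paper refers to.

# Toy textbook RSA (tiny primes, no padding) used only to show the homomorphic property.
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # modular inverse of e (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiplying ciphertexts corresponds to multiplying plaintexts (mod n).
c_product = (c1 * c2) % n
print(decrypt(c_product), "==", (m1 * m2) % n)    # 77 == 77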
533 Automatic Light Control in Domotics using Artificial Neural Networks
Authors: Carlos Machado, José A. Mendes
Abstract:
Home automation is a field that, among other subjects, is concerned with the comfort, security and energy requirements of private homes. The configuration of automatic functions in this type of house is not always simple for its inhabitants, requiring an initial setup and regular adjustments. In this work, the ubiquitous computing system vision is used, where the users' action patterns are captured, recorded and used to create the context-awareness that allows the self-configuration of the home automation system. The system tries to free the users from setup adjustments as the home adapts to its inhabitants' real habits. This paper describes a completely automated process to determine the light state and act on the lights, taking into account the users' daily habits. An Artificial Neural Network (ANN) is used as a pattern recognition method, classifying the light state for each moment. The work presented uses data from a real house where a family is actually living.
Keywords: ANN, home automation, neural systems, pattern recognition, ubiquitous computing.
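The pattern-recognition step described above, classifying the light state for each moment from the inhabitants' habits, could be prototyped along the lines below, assuming scikit-learn is available. The hour-of-day/occupancy features and the tiny synthetic data set are placeholders, not the features or data used in the paper.

from sklearn.neural_network import MLPClassifier

# Hypothetical training samples: (hour of day / 24, room occupied 0/1) -> light on 0/1.
X = [[0.30, 1], [0.35, 1], [0.50, 0], [0.55, 0],
     [0.80, 1], [0.85, 1], [0.90, 0], [0.10, 0]]
y = [0, 0, 0, 0, 1, 1, 0, 0]   # lights on only when occupied in the evening

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict([[0.82, 1], [0.40, 1]]))   # expected: evening+occupied -> 1, midday -> 0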
532 Cloud Computing Support for Diagnosing Researches
Authors: A. Amirov, O. Gerget, V. Kochegurov
Abstract:
One of the main biomedical problems lies in detecting dependencies in semi-structured data. The solution includes a biomedical portal and algorithms (integral rating health criteria, multidimensional data visualization methods). The biomedical portal allows diagnostic and research data to be processed in parallel mode using Microsoft System Center 2012 and Windows HPC Server cloud technologies. The service does not expose internal calculations to the user; instead it provides a practical interface. When data are sent for processing, the user may track the status of the task and will receive results as soon as the computation is completed. The service includes its own algorithms and allows diagnosing and predicting medical cases. The methods applied are based on complex-system entropy methods, algorithms for determining the energy patterns of development and trajectory models of biological systems, and a logical-probabilistic approach with the blurring of images.
Keywords: Biomedical portal, cloud computing, diagnostic and prognostic research, mathematical data analysis.
531 Chlorophyll Fluorescence as Criterion for the Diagnosis of Salt Stress in Wheat (Triticum aestivum) Plants
Authors: M. Abdeshahian, M. Nabipour, M. Meskarbashee
Abstract:
To investigate the effect of salt stress on chlorophyll fluorescence, four cultivars (Fong, Star, Chamran and Kharchia) of wheat (Triticum aestivum) plants were subjected to salinity levels (control, 8, 12 and 16 dS m-1) from one week after emergence to the end of stem elongation under greenhouse conditions. Results showed that the quantum yield of photosystem II from light-adapted leaves (ΦPSII), photochemical quenching (qP), the quantum yield of dark-adapted leaves (Fv/Fm) and non-photochemical quenching (NPQ) were affected by salt stress. Salinity levels affected photosynthetic rate: the Star and Fong cultivars showed the minimum and maximum photosynthetic rates, respectively, and the smallest differences in photosynthetic rate between salinity levels were shown by Kharchia. Shoot dry matter of all cultivars decreased with increasing salinity. The results indicate that the non-photochemical quenching induced by salinity contributes to the decrease in shoot dry matter.
Keywords: Salt stress, wheat, chlorophyll fluorescence, photosynthesis, shoot dry matter.
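The fluorescence parameters named above have standard definitions (see, e.g., Maxwell and Johnson, 2000) that are worth keeping at hand; the short sketch below evaluates them from invented fluorometer readings and is not derived from this paper's data.

def fluorescence_parameters(F0, Fm, F0p, Fmp, Fs):
    # Standard chlorophyll fluorescence parameters (dark-adapted: F0, Fm; light-adapted: F0', Fm', Fs).
    return {
        "Fv/Fm":   (Fm - F0) / Fm,            # maximum quantum yield of PSII (dark-adapted)
        "PhiPSII": (Fmp - Fs) / Fmp,          # effective quantum yield of PSII (light-adapted)
        "qP":      (Fmp - Fs) / (Fmp - F0p),  # photochemical quenching
        "NPQ":     (Fm - Fmp) / Fmp,          # non-photochemical quenching
    }

# Hypothetical readings.
print(fluorescence_parameters(F0=0.20, Fm=1.00, F0p=0.18, Fmp=0.70, Fs=0.45))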
530 Design and Implementation of Security Middleware for Data Warehouse Signature Framework
Authors: Mayada AlMeghari
Abstract:
Recently, grid middleware has enabled large-scale integrated use of network resources, such as shared data and CPUs, to act as a virtual supercomputer. In this work, we present the design and implementation of the middleware for the Data Warehouse Signature (DWS) framework. The aim of using middleware in the proposed DWS framework is to achieve high performance through parallel computing. This middleware is developed on the Alchemi.Net framework to increase security among the network nodes through an authentication and group-key distribution model. This model secures the keys and prevents intermediate attacks on the middleware. The paper presents the flow process structures of the middleware design. In addition, it covers the security implementation for the DWS middleware, enhanced with the authentication and group-key distribution model. Finally, from an analysis of other middleware approaches, the developed middleware of the DWS framework is the optimal solution, providing complete coverage of the security issues considered.
Keywords: Middleware, parallel computing, data warehouse, security, group-key, high performance.
529 Parallel and Distributed Mining of Association Rule on Knowledge Grid
Authors: U. Sakthi, R. Hemalatha, R. S. Bhuvaneswaran
Abstract:
In a virtual organization, a Knowledge Discovery (KD) service spans distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent itemsets. The experiments were conducted on different configurations of the grid network, and computation time was recorded for each operation. We analyzed the results with various grid configurations, and they show that the speedup in computation time is almost superlinear.
Keywords: Association rule, grid computing, knowledge grid, mobility prediction.
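The local/global frequent-itemset step described above reduces to each grid node counting candidate supports on its own partition and the nodes then summing those counts. A minimal mpi4py version of that reduction is sketched below; the candidate list, partitions and threshold are invented, and the Globus/Knowledge Grid services layer is not reproduced.

# Run with e.g.: mpiexec -n 4 python count_supports.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

candidates = [frozenset("AB"), frozenset("BC"), frozenset("AC")]   # candidate itemsets
# Each node holds its own partition of the transactions (hypothetical data).
local_transactions = [set("ABC"), set("AB"), set("BC")] if rank % 2 == 0 else [set("AC"), set("ABD")]

local_counts = np.array([sum(c <= t for t in local_transactions) for c in candidates], dtype="i")
global_counts = np.zeros_like(local_counts)
comm.Allreduce(local_counts, global_counts, op=MPI.SUM)   # global support = sum of local supports

if rank == 0:
    min_support = 3
    frequent = [set(c) for c, n in zip(candidates, global_counts) if n >= min_support]
    print("global counts:", global_counts, "frequent:", frequent)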
528 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications
Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan
Abstract:
High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from EM simulations is often cumbersome, leading to large storage requirements. In this paper, we propose a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least-squares problem with a special sparsity constraint. We solve this problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
Keywords: RADAR, RCS, high performance computing, point scatterer model.
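The modified Orthogonal Matching Pursuit algorithm is not spelled out in the abstract, but the baseline it modifies is standard: greedily pick the dictionary column most correlated with the residual, re-fit by least squares on the chosen columns, and repeat. A plain NumPy version of that baseline is sketched below with a random dictionary; it is not the authors' modified algorithm or their spherical-harmonic dictionary.

import numpy as np

def omp(A, y, n_nonzero):
    # Baseline Orthogonal Matching Pursuit: approximate y ~ A x with at most n_nonzero coefficients.
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))                 # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # least-squares re-fit on the support
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 57, 80]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, n_nonzero=3)
print(np.nonzero(x_hat)[0], x_hat[np.nonzero(x_hat)])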
527 Replicating Data Objects in Large-scale Distributed Computing Systems using Extended Vickrey Auction
Authors: Samee Ullah Khan, Ishfaq Ahmad
Abstract:
This paper proposes a novel game-theoretical technique to address the problem of data object replication in large-scale distributed computing systems. The proposed technique draws inspiration from computational economic theory and employs the extended Vickrey auction. Specifically, players in a non-cooperative environment compete for server-side scarce memory space to replicate data objects so as to minimize the total network object transfer cost, while maintaining object concurrency. Optimization of such a cost in turn leads to load balancing, fault tolerance and reduced user access time. The method is experimentally evaluated against four well-known techniques from the literature: branch and bound, greedy, bin-packing and genetic algorithms. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.
Keywords: Auctions, data replication, pricing, static allocation.
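For readers unfamiliar with the mechanism named above, a standard single-item Vickrey auction awards the item to the highest bidder at the second-highest price, which is what gives bidders the incentive to bid truthfully. The sketch below implements just that textbook rule with invented bids; the extended multi-object version used for replica placement in the paper is not reproduced.

def vickrey_auction(bids):
    # Single-item second-price (Vickrey) auction: bids maps bidder -> bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0   # winner pays the second-highest bid
    return winner, price

# Hypothetical servers bidding for the right to host a data object replica.
print(vickrey_auction({"server_A": 9.0, "server_B": 12.5, "server_C": 7.0}))   # ('server_B', 9.0)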
526 High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes
Authors: Caspar von Seckendorff, Eldar Sultanow
Abstract:
Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for these decrements; this paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity), because unknown heterogeneous input data result in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example makes use of the cloud computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up which can explain this effect.
Keywords: Process delay, speedup, efficiency, parallel computing, data integration, E-Commerce, Amazon Elastic Compute Cloud (EC2), Hadoop, Nutch.
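The process-delay effect discussed above, with the slowest node setting the pace, can be seen with a few lines of arithmetic: under heterogeneous task lengths the total time is the maximum of the per-node sums, so adding nodes eventually stops helping in proportion. The task lengths below are invented and Hadoop is not involved; this is only the makespan calculation.

import random

random.seed(1)
tasks = [random.uniform(1, 20) for _ in range(40)]   # heterogeneous task lengths (seconds)

def makespan(tasks, n_nodes):
    # Round-robin the tasks over n_nodes; total time is set by the slowest node.
    per_node = [0.0] * n_nodes
    for i, t in enumerate(tasks):
        per_node[i % n_nodes] += t
    return max(per_node)

serial = sum(tasks)
for n in (1, 2, 4, 8, 16):
    m = makespan(tasks, n)
    print(f"{n:>2} nodes: makespan = {m:6.1f} s, speedup = {serial / m:4.2f}")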
525 Isospectral Hulthén Potential
Authors: Anil Kumar
Abstract:
Supersymmetric Quantum Mechanics is an interesting framework to analyze nonrelativistic quantal problems. Using these techniques, we construct a family of strictly isospectral Hulthén potentials. Isospectral wave functions are generated and plotted for different values of the deformation parameter.
Keywords: Hulthén potential, isospectral Hamiltonian.
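For context, the supersymmetric quantum mechanics construction the abstract alludes to is standard (see, e.g., the review by Khare and Sukhatme): given a potential with normalised ground state psi_0 and the ground-state energy shifted to zero, a one-parameter family of strictly isospectral partners is obtained as below. The notation is generic, assuming one common form of the Hulthén potential, and is not copied from the paper.

\[
  V(r) = -\,V_0\,\frac{e^{-r/a}}{1 - e^{-r/a}}, \qquad
  \hat{V}(x;\lambda) = V(x) - 2\,\frac{d^{2}}{dx^{2}}\,\ln\!\left[ I(x) + \lambda \right], \qquad
  I(x) = \int_{-\infty}^{x} \psi_{0}^{2}(t)\,dt ,
\]
with \(\lambda > 0\) or \(\lambda < -1\); every member of the family shares the spectrum of \(V\), and \(\lambda\) plays the role of the deformation parameter mentioned above.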
524 Digital Homeostasis: Tangible Computing as a Multi-Sensory Installation
Authors: Andrea Macruz
Abstract:
This paper explores computation as a process for design by examining how computers can become more than an operative strategy in a designer's toolkit. It documents this, building upon concepts of neuroscience and Antonio Damasio's Homeostasis Theory, which is the control of bodily states through feedback intended to keep conditions favorable for life. To do this, it follows a methodology through algorithmic drawing and discusses the outcomes of three multi-sensory design installations, which culminated from a course in an academic setting. It explains both the studio process that took place to create the installations and the computational process that was developed, related to the fields of algorithmic design and tangible computing. It discusses how designers can use computational range to achieve homeostasis related to sensory data in a multi-sensory installation. The outcomes show clearly how people and computers interact with different sensory modalities and affordances. They propose using computers as meta-physical stabilizers rather than tools.
Keywords: Antonio Damasio, emotional feedback, algorithmic drawing, homeostasis, multi-sensory installation, neuroscience.
523 Residual Dipolar Couplings in NMR Spectroscopy Using Lanthanide Tags
Authors: Elias Akoury
Abstract:
Nuclear Magnetic Resonance (NMR) spectroscopy is an indispensable technique used in structure determination of small and macromolecules to study their physical properties, elucidation of characteristic interactions, dynamics and thermodynamic processes. Quantum mechanics defines the theoretical description of NMR spectroscopy and treatment of the dynamics of nuclear spin systems. The phenomenon of residual dipolar coupling (RDCs) has become a routine tool for accurate structure determination by providing global orientation information of magnetic dipole-dipole interaction vectors within a common reference frame. This offers accessibility of distance-independent angular information and insights to local relaxation. The measurement of RDCs requires an anisotropic orientation medium for the molecules to partially align along the magnetic field. This can be achieved by introduction of liquid crystals or attaching a paramagnetic center. Although anisotropic paramagnetic tags continue to mark achievements in the biomolecular NMR of large proteins, its application in small organic molecules remains unspread. Here, we propose a strategy for the synthesis of a lanthanide tag and the measurement of RDCs in organic molecules using paramagnetic lanthanide complexes.
Keywords: Lanthanide Tags, NMR spectroscopy, residual dipolar coupling, quantum mechanics of spin dynamics.
522 Secure Resource Selection in Computational Grid Based on Quantitative Execution Trust
Authors: G.Kavitha, V.Sankaranarayanan
Abstract:
Grid computing provides a virtual framework for controlled sharing of resources across institutional boundaries. Recently, trust has been recognised as an important factor in the selection of optimal resources in a grid. We introduce a new method that provides a quantitative trust value based on past interactions and present environment characteristics. This quantitative trust value is used to select a suitable resource for a job and eliminates run-time failures arising from incompatible user-resource pairs. The proposed work acts as a tool to calculate the trust values of the various components of the grid and thereby improves the success rate of the jobs submitted to resources on the grid. Access to a resource depends not only on the identity and behaviour of the resource but also on its context of transaction, time of transaction, connectivity bandwidth, availability, and load. The quality of the recommender is also evaluated, based on the accuracy of the feedback provided about a resource. Jobs are submitted for execution to the selected resource after finding the overall trust value of the resource, which is computed with respect to both subjective and objective parameters.
Keywords: Access control, feedback, grid computing, reputation, security, trust, trust parameter.
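The overall trust value mentioned above combines past-interaction feedback with present environment characteristics (context and time of transaction, bandwidth, availability, load). The abstract does not give the formula, so the sketch below uses a generic weighted aggregation of normalised parameters purely to illustrate the idea; the weights and parameter names are placeholders.

def overall_trust(subjective, objective, w_subjective=0.5):
    # Toy quantitative trust: weighted blend of feedback-based and measured parameters, all in [0, 1].
    s = sum(subjective.values()) / len(subjective)   # e.g., past-interaction feedback scores
    o = sum(objective.values()) / len(objective)     # e.g., availability, bandwidth, 1 - load
    return w_subjective * s + (1 - w_subjective) * o

subjective = {"feedback_accuracy": 0.9, "past_success_rate": 0.8}
objective = {"availability": 0.95, "bandwidth_norm": 0.7, "inverse_load": 0.6}
print(round(overall_trust(subjective, objective), 3))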
521 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study
Authors: Almudena Konrad, Tomás Galguera
Abstract:
Lack of motivation and interest is a serious obstacle to students' learning of computing skills. A need exists for a knowledge base on effective pedagogy and curricula to teach computer programming. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students in continuing to develop their computational thinking and related coding skills individually. Utilizing a quasi-experimental, mixed-methods design, the pedagogical approaches and methods were assessed in two contrasting groups of students with different socioeconomic status, gender, and age composition. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project's success at both teaching and inspiring students.
Keywords: Computational thinking, computing education, computer programming curriculum, logic, teaching methods.
520 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Authors: Mert Bal, Hayri Sever, Oya Kalıpsız
Abstract:
Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference under missing information and uncertainty. In such systems, various soft computing methods are used to model the uncertainty, such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are represented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce attributes and decrease the complexity of computation.
Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
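The discernibility machinery mentioned above is easy to state concretely: objects are indiscernible with respect to an attribute subset if they agree on every attribute in it, and an attribute is dispensable if removing it leaves the indiscernibility classes unchanged. The toy symptom table below (diseases as objects, symptoms as attributes) is invented for illustration and is not the paper's data.

from collections import defaultdict

# Hypothetical table: disease (object) -> symptom (attribute) values.
table = {
    "flu":     {"fever": "high", "cough": "yes", "fatigue": "yes"},
    "cold":    {"fever": "low",  "cough": "yes", "fatigue": "no"},
    "covid":   {"fever": "high", "cough": "yes", "fatigue": "yes"},
    "allergy": {"fever": "none", "cough": "yes", "fatigue": "no"},
}
attributes = ["fever", "cough", "fatigue"]

def indiscernibility_classes(attrs):
    # Partition objects into classes that agree on every attribute in attrs.
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[tuple(values[a] for a in attrs)].add(obj)
    return sorted(classes.values(), key=sorted)

full = indiscernibility_classes(attributes)
print("IND over all symptoms:", full)
for attr in attributes:
    reduced = indiscernibility_classes([a for a in attributes if a != attr])
    print(f"drop {attr}: partition unchanged (attribute dispensable) = {reduced == full}")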
519 Tipover Stability Enhancement of Wheeled Mobile Manipulators Using an Adaptive Neuro-Fuzzy Inference Controller System
Authors: A. Ghaffari, A. Meghdari, D. Naderi, S. Eslami
Abstract:
In this paper, an algorithm based on an adaptive neuro-fuzzy controller is provided to enhance the tipover stability of mobile manipulators when they are subjected to predefined trajectories for the end-effector and the vehicle. The controller creates proper configurations for the manipulator to prevent the robot from being overturned. The optimal configuration, and thus the most favorable control, is obtained through soft computing approaches including a combination of genetic algorithms, neural networks, and fuzzy logic. In the proposed algorithm, a look-up table is designed by employing the values obtained from the genetic algorithm in order to minimize the performance index; using this database, rule bases are designed for the ANFIS controller and exerted on the actuators to enhance the tipover stability of the mobile manipulator. A numerical example is presented to demonstrate the effectiveness of the proposed algorithm.
Keywords: Mobile manipulator, tipover stability enhancement, adaptive neuro-fuzzy inference controller system, soft computing.
518 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which increases the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.
Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
517 Employee Motivation Factors That Affect Job Performance of Suan Sunandha Rajabhat University Employee
Authors: Orawan Boriban, Phatthanan Chaiyabut
Abstract:
The purpose of this research is to study motivation factors and their relation to job performance; to compare motivation factors across personal factors such as gender, age, income, educational level, marital status, and working duration; and to study the relationship between motivation factors and job performance with job satisfaction. The sample group comprised 400 Suan Sunandha Rajabhat University employees. This is quantitative research using questionnaires as the research instrument. The statistics applied for data analysis include percentage, mean, and standard deviation. In addition, difference analysis was conducted by computing t-values, one-way analysis of variance, and Pearson's correlation coefficient. The findings showed that the aspects of job promotion and salary were at moderate levels, and that the motivations that affected the revenue branch chiefs' job performance were job security, job accomplishment, policy and management, job promotion, and interpersonal relations.
Keywords: Motivation Factors, Job Performance, Suan Sunandha Rajabhat University Employee.