Search results for: Augmented Augmented Chemical Reactions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 150


30 Design Parameters Selection and Optimization of Weld Zone Development in Resistance Spot Welding

Authors: Norasiah Muhammad, Yupiter HP Manurung

Abstract:

This paper investigates the development of the weld zone in Resistance Spot Welding (RSW), focusing on the weld nugget and the Heat Affected Zone (HAZ). The effects of four factors, namely weld current, weld time, electrode force and hold time, were studied using a general 2⁴ factorial design augmented by five centre points. The analysis showed that all selected factors except hold time have a significant effect on weld nugget radius and HAZ size. Optimization of the welding parameters (weld current, weld time and electrode force) to normalize the weld nugget and minimize the HAZ size was then conducted using a Central Composite Design (CCD) in Response Surface Methodology (RSM), and the optimum parameters were determined. A regression model for the radius of the weld nugget and the HAZ size was developed and its adequacy was evaluated. The experimental results obtained under optimum operating conditions were then compared with the predicted values and were found to agree satisfactorily with each other.
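
As an illustration of the experimental design described above, the following sketch builds a 2⁴ factorial design in coded units, augments it with five centre points, and fits a first-order regression model by least squares. The response values are made-up numbers, not the authors' measurements.

```python
import itertools
import numpy as np

# Coded levels (-1/+1) for weld current, weld time, electrode force, hold time.
factorial = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)
centre = np.zeros((5, 4))                      # five centre points at coded level 0
X = np.vstack([factorial, centre])

# Hypothetical nugget-radius responses (mm); replace with measured values.
rng = np.random.default_rng(0)
y = 3.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, len(X))

# First-order model y = b0 + b1*x1 + ... + b4*x4, fitted by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and main effects:", np.round(coef, 3))
```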

Keywords: Factorial design, Optimization, Resistance Spot Welding (RSW), Response Surface Methodology (RSM).

29 Role of Sodium Concentration, Waiting Time and Constituents’ Temperature on the Rheological Behavior of Alkali Activated Slag Concrete

Authors: Muhammet M. Erdem, Erdoğan Özbay, Ibrahim H. Durmuş, Mustafa Erdemir, Murat Bikçe, Müzeyyen Balçıkanlı

Abstract:

In this paper, the rheological behavior of alkali activated slag concretes was investigated depending on the sodium concentration (SC), waiting time (WT) after production, and constituents’ temperature (CT) parameters. For this purpose, an experimental program was conducted with four different SCs of 1.85, 3.0, 4.15, and 5.30%, three different WTs of 0 (just after production), 15, and 30 minutes, and three different CTs of 18, 30, and 40 °C. Solid precursors were activated by water glass and sodium hydroxide solutions with a silicate modulus (Ms = SiO2/Na2O) of 1. The slag content and the (water + activator solution)/slag ratio were kept constant in all mixtures. Yield stress and plastic viscosity values were determined for each mixture by using the ICAR rheometer. Test results demonstrated that all three studied parameters have a tremendous effect on the yield stress and plastic viscosity values of the alkali activated slag concretes. Increasing the SC, WT, and CT drastically augmented the rheological parameters. At the 15 and 30 minute WTs after production, most of the alkali activated slag concretes set instantaneously, and rheological measurements could not be performed.
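
Concrete rheometers such as the ICAR typically report yield stress and plastic viscosity by fitting the Bingham model, τ = τ₀ + μ_p·γ̇, to flow-curve data. The sketch below shows the idea with a simple linear fit; the shear-rate and stress values are made up for illustration and are not the authors' measurements.

```python
import numpy as np

# Hypothetical shear rates (1/s) and shear stresses (Pa) from a rheometer sweep.
shear_rate = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
shear_stress = np.array([210.0, 265.0, 380.0, 500.0, 610.0])

# Bingham model: tau = tau0 + mu_p * gamma_dot  ->  linear fit in gamma_dot.
mu_p, tau0 = np.polyfit(shear_rate, shear_stress, 1)
print(f"yield stress tau0 ≈ {tau0:.1f} Pa, plastic viscosity mu_p ≈ {mu_p:.1f} Pa·s")
```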

Keywords: Alkali activation, slag, rheology, yield stress, plastic viscosity.

28 Fast 3D Collision Detection Algorithm using 2D Intersection Area

Authors: Taehyun Yoon, Keechul Jung

Abstract:

Many studies have addressed the detection of collisions between real and virtual objects in 3D space. In general, these techniques require huge computing power, so much of this research relies on cloud computing, network computing, and distributed computing. For this reason, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses four cameras and a coarse-and-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes from all camera views. The result of this step is the set of indices of virtual sensors that may be involved in a collision in 3D space. To decide on a collision accurately, in the fine step, the system examines collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can help many other research and application fields such as HCI, augmented reality, intelligent space, and so on.
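
A minimal sketch of the coarse step, assuming binary silhouette masks per camera view (the masks and thresholds below are hypothetical, not the authors' implementation): a virtual object remains a collision candidate only if its projected silhouette overlaps the real object's silhouette in every view.

```python
import numpy as np

def intersection_area(mask_a: np.ndarray, mask_b: np.ndarray) -> int:
    """Number of overlapping pixels between two binary silhouette masks."""
    return int(np.logical_and(mask_a, mask_b).sum())

def coarse_candidates(real_masks, virtual_masks_per_object, min_area=1):
    """Return indices of virtual objects whose silhouettes overlap the real
    object's silhouette in all camera views (coarse pruning step)."""
    candidates = []
    for idx, virtual_masks in enumerate(virtual_masks_per_object):
        if all(intersection_area(r, v) >= min_area
               for r, v in zip(real_masks, virtual_masks)):
            candidates.append(idx)
    return candidates

# Toy example with 4 views of 8x8 masks.
real = [np.zeros((8, 8), bool) for _ in range(4)]
for m in real:
    m[2:5, 2:5] = True
virt0 = [m.copy() for m in real]                     # overlaps in every view -> candidate
virt1 = [np.zeros((8, 8), bool) for _ in range(4)]   # no overlap -> pruned
print(coarse_candidates(real, [virt0, virt1]))       # -> [0]
```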

Keywords: Collision Detection, Computer Vision, Human Computer Interaction, Visual Hull

27 Research on IBR-Driven Distributed Collaborative Visualization System

Authors: Yin Runmin, Song Changfeng

Abstract:

Image-Based Rendering (IBR) techniques have recently reached broad fields, which poses a critical challenge: building an IBR-driven visualization platform that meets the requirements of high performance, aggregation and concentration of large amounts of distributed visualization resources, deployment by multiple operators, and CSCW design. This paper presents a unique IBR-based visualization dataflow model that reflects the specific characteristics of IBR techniques, discusses the prominent features of an IBR-driven distributed collaborative visualization (DCV) system, and finally proposes a novel prototype. The prototype is organized into three well-defined levels of modules, namely the Central Visualization Server, the Local Proxy Server and the Visualization Aid Environment, through which data and collaboration control move following the dataflow model above. With the aid of this triple-hierarchy architecture, constructing IBR-oriented applications becomes easy. The employed augmented collaboration strategy not only achieves convenient synchronous control by multiple users and stable processing management, but is also extendable and scalable.

Keywords: Image-Based Rendering, Distributed Collaborative Visualization, Computer Supported Cooperative Work, Model and Simulation, Modular Visualization Environment.

26 Improving Decision Support for Organ Transplant

Authors: I. McCulloh, A. Placona, D. Stewart, D. Gause, K. Kiernan, M. Stuart, C. Zinner, L. Cartwright

Abstract:

We find in our data that an alarming number of viable deceased-donor kidneys are discarded every year in the US, while waitlisted candidates are dying every day. We observe that as many as 85% of transplanted organs were refused at least once for a patient who scored higher on the match list. There are hundreds of clinical variables involved in making a clinical transplant decision, and there is rarely an ideal match. Decision makers exhibit an optimism bias whereby they may refuse an organ offer assuming a better match is imminent. We propose a semi-parametric Cox proportional hazards model, augmented by an accelerated failure time model based on patient-specific suitable organ supply and demand, to estimate a time-to-next-offer. Performance is assessed with Cox-Snell residuals and decision curve analysis, demonstrating improved decision support for up to a 5-year outlook. Providing clinical decision-makers with quantitative evidence of likely patient outcomes (e.g., time to next offer and the mortality associated with waiting) may improve decisions and reduce optimism bias, thus reducing discarded organs and matching more patients on the waitlist.
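
The paper's Cox/AFT model is not specified here in detail; as a simplified stand-in only, the sketch below treats organ offers to a given candidate as a Poisson process whose rate reflects patient-specific supply and demand, so the time to the next offer is exponential and its median is ln(2)/rate. All quantities and numbers are hypothetical assumptions, not the authors' model or data.

```python
import math

def expected_offer_rate(suitable_donors_per_year: float,
                        candidates_ahead_on_list: int) -> float:
    """Crude patient-specific offer rate (offers/year): local supply of
    suitable organs divided by the number of higher-priority candidates."""
    return suitable_donors_per_year / max(1, candidates_ahead_on_list)

def median_time_to_next_offer(rate_per_year: float) -> float:
    """Median waiting time (years) for an exponential inter-offer time."""
    return math.log(2) / rate_per_year

rate = expected_offer_rate(suitable_donors_per_year=40, candidates_ahead_on_list=25)
print(f"median time to next offer ≈ {median_time_to_next_offer(rate):.2f} years")
```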

Keywords: Decision science, KDPI, optimism bias, organ transplant.

25 Double Diffusive Convection in a Partially Porous Cavity under Suction/Injection Effects

Authors: Y. Outaleb, K. Bouhadef, O. Rahli

Abstract:

Double-diffusive steady convection in a partially porous cavity with partially permeable walls, under the combined buoyancy effects of thermal and mass diffusion, was analysed numerically using the finite volume method. The top wall is well insulated and impermeable, while the bottom surface is partially well insulated and impermeable and partially submitted to a constant temperature T1 and concentration C1. Constant, equal temperature T2 and concentration C2 are imposed along the vertical surfaces of the enclosure. Mass suction/injection and injection/suction are considered at the bottom of the centred porous partition and at one of the vertical walls, respectively. Heat and mass transfer characteristics such as streamlines and average Nusselt and Sherwood numbers were discussed for different values of the buoyancy ratio, the Rayleigh number, and the injection/suction coefficient. It is especially noted that increasing the injection factor penalizes the exchanges in the case of injection, while the transfer is augmented in the case of suction. On the other hand, a critical value of the buoyancy ratio was highlighted for which heat and mass transfers are minimized.
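
As a rough illustration of how wall transfer rates are typically post-processed in such finite volume studies (this is not the authors' code), the sketch below evaluates an average Nusselt number from the dimensionless temperature gradient at a heated wall, using a stand-in temperature field whose analytical wall gradient is known.

```python
import numpy as np

# Stand-in dimensionless temperature field theta(x, y) on a uniform grid:
# heated bottom wall at y = 0 (theta = 1), cold top wall at y = 1 (theta = 0).
nx, ny = 64, 64
y = np.linspace(0.0, 1.0, ny)
theta = np.tile((1.0 - y) ** 1.5, (nx, 1))      # theta[i, j] = theta(x_i, y_j)

dy = y[1] - y[0]
# Local Nusselt number at the bottom wall: Nu(x) = -d(theta)/dy at y = 0.
nu_local = -(theta[:, 1] - theta[:, 0]) / dy
print(f"average Nusselt number ≈ {nu_local.mean():.2f}")   # analytical value: 1.5
```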

Keywords: Double diffusive convection, Injection/Extraction, Partially porous cavity

24 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Authors: Megha Gupta, Nupur Prakash

Abstract:

Identification of plant diseases has been performed using machine learning and deep learning models on the datasets containing images of healthy and diseased plant leaves. The current study carries out an evaluation of some of the deep learning models based on convolutional neural network architectures for identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of PlantVillage dataset, available on Kaggle platform, containing 87,900 images has been used. The dataset contained images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score of 99.59% and 0.996, respectively, were achieved by using GoogLeNet with Mini-batch momentum based gradient descent learning algorithm.
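
A minimal PyTorch sketch of the kind of training setup described above (illustrative only: the dataset path, image size, epoch count and hyperparameters are assumptions, not the authors' exact configuration; 38 classes corresponds to the 26 disease and 12 healthy categories mentioned).

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed local copy of the New Plant Diseases Dataset in ImageFolder layout.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("new_plant_diseases/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)

# GoogLeNet trained from scratch on 38 classes.
model = models.googlenet(num_classes=38, aux_logits=False, init_weights=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = nn.CrossEntropyLoss()
# Mini-batch gradient descent with momentum, as in the reported best result.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(5):                       # epoch count is illustrative
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```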

Keywords: Comparative analysis, convolutional neural networks, deep learning, plant disease identification.

23 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio R. R. Sanches, Bianca C. Barbosa, Beatriz R. Brum, Cléber G. Corrêa

Abstract:

In order to evaluate the quality of a segmentation algorithm, the researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm. Objective metrics require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconference and augmented reality. If the audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify if the audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.

Keywords: Background substitution, influence of audio, segmentation evaluation, segmentation quality.

22 Ab initio Study of Co2ZrGe and Co2NbB Full Heusler Compounds

Authors: Abada Ahmed, Hiadsi Said, Ouahrani Tarik, Amrani Bouhalouane, Amara Kadda

Abstract:

Using the first-principles full-potential linearized augmented plane wave plus local orbital (FP-LAPW+lo) method based on density functional theory (DFT), we have investigated the electronic structure and magnetism of the full Heusler alloys Co2ZrGe and Co2NbB. These compounds are predicted to be half-metallic ferromagnets (HMFs) with a total magnetic moment of 2.000 μB per formula unit, well consistent with the Slater-Pauling rule. Calculations show that both alloys have an indirect band gap in the minority-spin channel of the density of states (DOS), with values of 0.58 eV and 0.47 eV for Co2ZrGe and Co2NbB, respectively. Analysis of the DOS and magnetic moments indicates that their magnetism is mainly related to the d-d hybridization between the Co and Zr (or Nb) atoms. The half-metallicity is found to be relatively robust against volume changes. In addition, the atoms-in-molecules (AIM) formalism and the electron localization function (ELF) were also adopted to study the bonding properties of these compounds, building a bridge between their electronic and bonding behavior. As they have good crystallographic compatibility with the lattices of industrially used semiconductors, and negative calculated cohesive energies with considerable absolute values, these two alloys could be promising magnetic materials in the spintronics field.
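
The quoted moment of 2.000 μB per formula unit follows the Slater-Pauling rule for full Heusler compounds, M_t = Z_t − 24, where Z_t is the number of valence electrons per formula unit. A quick check, using the valence electron counts usually assumed for these elements:

```python
# Valence electrons per atom commonly assumed in Slater-Pauling counting.
valence = {"Co": 9, "Zr": 4, "Nb": 5, "Ge": 4, "B": 3}

def slater_pauling_moment(formula: dict) -> int:
    """Total spin moment (in Bohr magnetons) of a full Heusler X2YZ:
    M_t = Z_t - 24, with Z_t the valence electrons per formula unit."""
    z_total = sum(valence[el] * n for el, n in formula.items())
    return z_total - 24

print(slater_pauling_moment({"Co": 2, "Zr": 1, "Ge": 1}))  # Co2ZrGe -> 2
print(slater_pauling_moment({"Co": 2, "Nb": 1, "B": 1}))   # Co2NbB  -> 2
```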

Keywords: Electronic properties, full Heusler alloys, half-metallic ferromagnets, magnetic properties.

21 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow

Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary

Abstract:

An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex OPF problem, which has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function. The latter model presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The variables to be optimized are the generator real power outputs and voltage magnitudes, the discrete transformer tap settings, and the discrete reactive power injections due to capacitor banks. The equality constraints taken into account are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate the consistency of detecting an optimal or near-optimal solution for each objective. Results are compared to solutions obtained using sequential quadratic programming and genetic algorithms.
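
A bare-bones PSO loop on a small continuous test function, for readers unfamiliar with the method (illustrative only; the paper's hybrid PSO/Newton-Raphson OPF formulation with discrete variables and network constraints is far richer than this sketch, and the toy objective below is not an actual OPF model).

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a continuous cost function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Quadratic "fuel cost"-style toy objective over six generator outputs.
def quad(p):
    return float(np.sum(0.01 * p**2 + 2.0 * p + 10.0))

best_x, best_f = pso(quad, dim=6)
print(best_x.round(3), round(best_f, 3))
```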

Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.

20 Accurate Visualization of Graphs of Functions of Two Real Variables

Authors: Zeitoun D. G., Thierry Dana-Picard

Abstract:

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.

Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's Algorithm, Preconditioned Conjugate Gradient.

19 Design, Fabrication and Performance Evaluation of Mobile Engine-Driven Pneumatic Paddy Collector

Authors: Sony P. Aquino, Helen F. Gavino, Victorino T. Taylan, Teresito G. Aguinaldo

Abstract:

A simple mobile engine-driven pneumatic paddy collector made of locally available materials using local manufacturing technology was designed, fabricated, and tested for collecting and bagging paddy dried on concrete pavement. The pneumatic paddy collector had the following major components: a radial flat-bladed centrifugal fan, a power transmission system, a bagging area, the frame and the conveyance system. Results showed significant differences in collecting capacity, noise level, and fuel consumption when the rotational speed of the air mover shaft was varied. Other parameters such as collecting efficiency, air velocity, augmented cracked grain percentage, and germination rate were not significantly affected by varying the rotational speed of the air mover shaft. The pneumatic paddy collector had a collecting efficiency of 99.33% with a collecting capacity of 2685.00 kg/h at the maximum rotational speed of the centrifugal fan shaft of about 4200 rpm. The machine entailed an investment cost of P 62,829.25. The break-even weight of paddy was 510,606.75 kg/yr at a collecting cost of 0.11 P/kg of paddy. Utilizing the machine for 400 hours per year generated an income of P 23,887.73. The projected time needed to recover the cost of the machine, based on the 2685 kg/h collecting capacity, was 2.63 years.
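
The quoted payback period follows directly from the investment cost and the annual income reported above; a quick check of the arithmetic (currency symbol P as in the abstract):

```python
investment_cost = 62_829.25      # P, machine investment cost
annual_income = 23_887.73        # P per year, at 400 h of use per year

payback_years = investment_cost / annual_income
print(f"payback period ≈ {payback_years:.2f} years")   # ≈ 2.63 years
```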

Keywords: Mobile engine-driven pneumatic paddy collector, collecting capacity and efficiency, simple cost analysis.

18 Applying Theory of Inventive Problem Solving to Develop Innovative Solutions: A Case Study

Authors: Y. H. Wang, C. C. Hsieh

Abstract:

Good service design can increase organization revenue and consumer satisfaction while reducing labor and time costs. The problems facing consumers in the original service model for the eyewear and optical industry include the following issues: 1. insufficient information on eyewear products; 2. passive dependence on recommendations and insufficient selection; 3. incomplete records on the progression of vision conditions; 4. lack of complete customer records. This study investigates the case of Kobayashi Optical, applying the Theory of Inventive Problem Solving (TRIZ) to develop innovative solutions for the eyewear and optical industry. The analysis results lead to the following conclusions and management implications: in order to provide customers with improved professional information and recommendations, Kobayashi Optical is advised to establish customer purchasing records. Overall service efficiency can be enhanced by applying data mining techniques to analyze past consumer preferences and purchase histories. Furthermore, Kobayashi Optical should continue to develop a 3D virtual trial service which allows customers to easily browse different frame styles and colors. This 3D virtual trial service will save customers waiting time during peak service times at stores.

Keywords: Theory of inventive problem solving, service design, augmented reality, eyewear and optical industry.

17 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz

Abstract:

Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The metric-set selection has a vital role in software cost estimation studies; its importance has been ignored, especially in neural network based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, k-fold cross validation is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
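
A small scikit-learn sketch of the modelling approach described (an MLP regressor evaluated with k-fold cross-validation). The feature matrix here is random stand-in data, not COCOMO'81 or the company data set, and the network size and fold count are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                                   # stand-in project metrics
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=60)    # stand-in effort values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
print("mean absolute error per fold:", np.round(-scores, 3))
```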

Keywords: Software Metrics, Software Cost Estimation, Neural Network.

16 Full Potential Study of Electronic and Optical Properties of NdF3

Authors: Sapan Mohan Saini

Abstract:

We report the electronic structure and optical properties of the NdF3 compound. Our calculations are based on density functional theory (DFT) using the full potential linearized augmented plane wave (FPLAPW) method with the inclusion of spin-orbit coupling. We employed the local spin density approximation (LSDA) and the Coulomb-corrected local spin density approximation (LSDA + U). We find that the standard LSDA approach is incapable of correctly describing the electronic properties of such materials, since it positions the f-bands incorrectly, resulting in an incorrect metallic ground state. On the other hand, the LSDA + U approximation, known for treating the highly correlated 4f electrons properly, is able to reproduce the correct insulating ground state. Interestingly, however, we do not find any significant differences between the optical properties calculated using LSDA and LSDA + U, suggesting that the 4f electrons do not play a decisive role in the optical properties of these compounds. The reflectivity of the NdF3 compound stays low up to 7 eV, which is consistent with its large energy gap. The calculated energy gaps are in good agreement with experiments. Our calculated reflectivity compares well with the experimental data, and the results are analyzed in the light of band-to-band transitions.

Keywords: FPLAPW method, optical properties, rare earth trifluorides, LSDA+U.

15 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rules (PRs) in the form If P Then D. The PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (Censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
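
A minimal sketch of how a CPR of the form "If P Then D Unless C" might be represented and applied: a simplified illustration of the rule semantics only, not the chromosome encoding or genetic operators used in the paper, and the bird/penguin example is a textbook-style stand-in.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

Example = Dict[str, object]

@dataclass
class CensoredProductionRule:
    """If premise Then decision Unless censor."""
    premise: Callable[[Example], bool]
    decision: str
    censor: Callable[[Example], bool]

    def apply(self, example: Example, check_censor: bool = True) -> Optional[str]:
        """Return the (possibly negated) decision, or None if the rule does not fire."""
        if not self.premise(example):
            return None
        if check_censor and self.censor(example):
            return f"not {self.decision}"     # censor flips the polarity of D
        return self.decision                   # censor skipped when resources are tight

# "If bird Then flies Unless penguin"
rule = CensoredProductionRule(
    premise=lambda e: e.get("bird", False),
    decision="flies",
    censor=lambda e: e.get("penguin", False),
)
print(rule.apply({"bird": True, "penguin": False}))          # flies
print(rule.apply({"bird": True, "penguin": True}))           # not flies
print(rule.apply({"bird": True, "penguin": True}, False))    # flies (censor ignored)
```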

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

14 Hacking the Spatial Limitations in Bridging Virtual and Traditional Teaching Methodologies in Sri Lanka

Authors: Manuela Nayantara Jeyaraj

Abstract:

Having moved into the 21st century, it is well past being arguable that innovative technology needs to be incorporated into conventional classroom teaching. Though the Western world has found presumable success in achieving this, it is still a concept under battle in developing countries such as Sri Lanka. Reaching the acme of implementing interactive virtual learning within classrooms remains a struggling idealistic fascination within the island. In order to overcome this problem, this study sets out to reveal the factors that limit the implementation of virtual, interactive learning within school classrooms and to provide hacks that could promote the augmented use of the virtual world to enhance teaching and learning experiences. As each classroom moves along with the use of technology to fulfill its functions, a few intense hacks provided will place the administrative onus on a virtual system. These hacks may divulge barriers based on social conventions, financial boundaries, digital literacy, and the intellectual capacity of the staff, highlight the impediments in introducing students to an interactive virtual learning environment, and thereby provide the necessary actions or changes to be made to succeed and march along in creating an intellectual society built on virtual learning and lifestyle. This digital learning environment will be composed of multimedia presentations, trivia and pop quizzes conducted on a GUI, assessments conducted via a virtual system, records maintained on a database, etc. The ultimate objective of this study is to enhance every child's basic learning environment and hence diminish the digital divide that exists in certain communities.

Keywords: Digital divide, digital learning, digitization, Sri Lanka, teaching methodologies.

13 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules(HPRs)

Authors: Kamal K.Bharadwaj, Rekha Kandwal

Abstract:

An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that groups similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rules (HPRs) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: Decision If <condition> Generality <general information> Specificity <specific information>. HPR systems are capable of handling the taxonomic structures inherent in knowledge about the real world. In this paper, a set of related HPRs is called a cluster and is represented by an HPR-tree. This paper discusses an algorithm, based on a cumulative learning scenario, for the dynamic structuring of clusters. The proposed scheme incrementally incorporates new knowledge into the set of clusters from previous episodes and also maintains a summary of the clusters as a Synopsis to be used in future episodes. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested incremental structuring of clusters would be useful in mining data streams.

Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.

12 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. A sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
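
For readers unfamiliar with the augmented ε-constraint idea (assuming that is the method meant by "augmented-constraint" above), the sketch below scalarizes a toy bi-objective linear problem: optimize f1 while constraining f2 ≤ ε and rewarding the slack of the ε-constraint slightly, so weakly dominated points are avoided. The model and numbers are illustrative, not the paper's inventory model.

```python
import numpy as np
from scipy.optimize import linprog

def augmented_eps_constraint(eps: float, delta: float = 1e-3):
    """Toy bi-objective LP: maximize f1 = 3*x1 + x2 while keeping
    f2 = x1 + 2*x2 <= eps, with a small reward on the slack s of the
    epsilon constraint (the 'augmented' part), subject to x1 + x2 <= 10."""
    # Decision vector z = [x1, x2, s]; linprog minimizes, so negate the objective.
    c = [-3.0, -1.0, -delta]
    A_eq = [[1.0, 2.0, 1.0]]          # f2 + s = eps
    b_eq = [eps]
    A_ub = [[1.0, 1.0, 0.0]]          # x1 + x2 <= 10
    b_ub = [10.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3)
    x1, x2, s = res.x
    return 3 * x1 + x2, x1 + 2 * x2    # (f1, f2) at this epsilon

# Sweep epsilon to trace an approximate Pareto front.
for eps in np.linspace(2, 12, 6):
    f1, f2 = augmented_eps_constraint(eps)
    print(f"eps={eps:5.1f}  f1={f1:6.2f}  f2={f2:6.2f}")
```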

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

11 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rules (HPRs) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: Decision If <condition> Generality <general information> Specificity <specific information>. HPR systems are capable of handling the taxonomic structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy, using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

10 An Educational Application of Online Games for Learning Difficulties

Authors: M. Margoudi, Z. Smyrnaiou

Abstract:

The current paper presents the results of a conducted case study. During the past few years, the number of children diagnosed with Learning Difficulties has drastically augmented, especially cases of ADHD (Attention Deficit Hyperactivity Disorder). One of the core characteristics of ADHD is a deficit in working memory functions. The review of the literature indicates a plethora of educational software that aims at training and enhancing working memory. Nevertheless, in the current paper, the possibility of using free, online games for the same purpose is explored. Another issue of interest is the potential effect of working memory training on the core symptoms of ADHD. In order to explore the abovementioned research questions, three digital tests are employed, all of which were developed on the E-slate platform by the author, in order to check the levels of ADHD's symptoms and to be used as diagnostic tools, both at the beginning and at the end of the case study. The tools used during the main intervention of the research are free online games for the training of working memory. The research and the data analysis focus on the following axes: a) the presence of, and possible change in, two of the core symptoms of ADHD, attention and impulsivity, and b) a possible change in the general cognitive abilities of the individual. The case study was conducted with the participation of a thirteen-year-old female student, diagnosed with ADHD, during after-school hours. The results of the study indicate positive changes both in the levels of attention and in impulsivity. Therefore, we conclude that the training of working memory through the use of free, online games has a positive impact on the characteristics of ADHD. Finally, concerning the second research question, the change in general cognitive abilities, no significant changes were noted.

Keywords: ADHD, attention, impulsivity, online games.

9 Validity of Universe Structure Conception as Nested Vortexes

Authors: Khaled M. Nabil

Abstract:

This paper introduces the Nested Vortexes conception of the universe structure and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, on both the microscopic and macroscopic scales, to collect evidence that space is not empty. However, these theories describe the properties of the space medium without determining its structure. Determining the structure of the space medium is essential to understanding the mechanism that leads to its properties. Without determining the space medium structure, many phenomena, such as electric and magnetic fields, gravity, or wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles which are still undiscovered. As in any medium with certain movements, vortexes have occurred, possibly because of a great asymmetric explosion. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, could be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the constancy-of-the-speed-of-light postulate does not hold. This conception shows that vortex motion dynamics agree with the motion of all the universe's particles at any level. An experiment has been carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity force of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum mechanics characteristics, are interpreted. All of the aforementioned physics phenomena are thereby accounted for.

Keywords: Astrophysics, cosmology, particles’ structure model, particles’ forces, vortex dynamics.

8 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin C. Agwah, Paulinus C. Eze

Abstract:

Antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve very fast, accurate wheel slip tracking under hard braking conditions and to eliminate chattering, with improved transient and steady state performance, while shortening the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 ms⁻¹ on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady state error and providing improved bandwidth to eliminate the effect of high frequency noise such as chattering during braking. The simulation results showed that FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surfaces, respectively. Generally, the proposed system used an effective braking torque less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.
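
Wheel slip in ABS work is conventionally defined as λ = (v − ωR)/v for a braking wheel. The sketch below evaluates it for hypothetical wheel-speed values (not taken from the paper) and checks the dry-road stopping-distance reduction quoted above against the 135.2 m uncontrolled baseline.

```python
def wheel_slip(vehicle_speed: float, wheel_speed: float, wheel_radius: float) -> float:
    """Longitudinal wheel slip during braking: lambda = (v - omega*R) / v."""
    return (vehicle_speed - wheel_speed * wheel_radius) / vehicle_speed

def reduction_percent(baseline: float, improved: float) -> float:
    """Percentage reduction of a quantity relative to a baseline."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical braking instant: v = 30 m/s, omega = 80 rad/s, R = 0.30 m.
print(f"wheel slip = {wheel_slip(30.0, 80.0, 0.30):.2f}")          # 0.20
# Dry-road stopping distance from the abstract: 135.2 m -> 39.92 m.
print(f"reduction = {reduction_percent(135.2, 39.92):.1f} %")       # ~70.5 %
```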

Keywords: ABS, Fuzzy Logic Controller, Variable Zero Lag Compensator, Wheel Slip Tracking.

7 A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)

Authors: Rekha Kandwal, Kamal K.Bharadwaj

Abstract:

Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for the data mining activity is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to the original data. Traditional data mining pruning activities such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, thereby making knowledge voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rules (PRs) in the form If P Then D. Michalski and Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (Censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules results in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
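
The paper builds on a Dempster-Shafer interpretation of CPRs. As background only, the sketch below implements Dempster's rule of combination for two basic probability assignments over a small frame of discernment; the mass values are illustrative and this is a generic DST example, not the paper's specific CPR quantification scheme.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame {D, ~D}: evidence from the premise P (supports D) and from the censor C.
D, notD = frozenset({"D"}), frozenset({"~D"})
theta = D | notD
m_rule = {D: 0.8, theta: 0.2}                # 'If P Then D' holds frequently
m_censor = {notD: 0.3, theta: 0.7}           # censor C holds rarely
print(dempster_combine(m_rule, m_censor))
```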

Keywords: Censored production rules, cumulative learning, data mining, machine learning.

6 Buckling Optimization of Radially-Graded, Thin-Walled, Long Cylinders under External Pressure

Authors: Karam Y. Maalawi

Abstract:

This paper presents a generalized formulation for the problem of buckling optimization of anisotropic, radially graded, thin-walled, long cylinders subject to external hydrostatic pressure. The main structure to be analyzed is built of multi-angle fibrous laminated composite lay-ups having different volume fractions of the constituent materials within the individual plies. This yields a piecewise grading of the material in the radial direction; that is, the physical and mechanical properties of the composite material are allowed to vary radially. The objective is to maximize the critical buckling pressure while preserving the total structural mass at a constant value equal to that of a baseline reference design. In the selection of the significant optimization variables, the fiber volume fractions are added to the standard design variables, which include fiber orientation angles and ply thicknesses. The mathematical formulation employs classical lamination theory, and an analytical solution is presented that accounts for the effective axial and flexural stiffnesses separately as well as for the coupling stiffness terms. The proposed model deals with dimensionless quantities in order to be valid for thin shells having arbitrary thickness-to-radius ratios. The critical buckling pressure level curves, augmented with the mass equality constraint, are given for several types of cylinders, showing the functional dependence of the constrained objective function on the selected design variables. It is shown that material grading can contribute significantly to the overall optimization process in achieving the required structural designs with enhanced stability limits.

Keywords: Buckling instability, structural optimization, functionally graded material, laminated cylindrical shells, external hydrostatic pressure.

5 Government of Ghana’s Budget: An Assessment of Its Compliance with Fundamental Budgeting Principles

Authors: Mohammed Sani Abdulai

Abstract:

Public sector budgeting, all over the world, is underpinned by some universally accepted principles of sound budget management such as budget unity, universality, annuality, and a balanced budget. These traditional principles, though fundamental, had, in recent years, been augmented by the more modern principles of budgeting within fiscal objective, alignment with medium-term strategic plans as well as the observance of such related concepts as transparency, openness and accessibility. In this paper, we have endeavored to shed light, from literature and practice, on the meaning and purposes of such fundamental budgeting principles. We have also assessed the extent to which the Government of Ghana’s budget complies with the four traditional principles of budget unity, universality, annuality, and a balanced budget and the three out of the ten modern principles of budgetary governance of Organisation for Economic Co-operation and Development (OECD). We did so by using a qualitative method of review and analysis of existing documents and the performance assessment reports on Ghana’s Public Financial Management (PFM) measured using such frameworks as the Public Expenditure and Financial Accountability (PEFA), the Open Budget Survey (OBS) and its Index (OBI), the reports and action plans of Open Government Partnership (OGP) and the Global Initiative for Fiscal Transparency (GIFT). Other performance assessment reports that were relied on included, but not limited to, the Joint Evaluation Report of PFM in Ghana, 2001-2010, and the Joint Evaluation of Budget Support to Ghana, 2005-2015. We have, through this paper, brought to the fore the lessons that could be learned on how those budgetary principles undergird the Government of Ghana’s budget formulation, execution, accounting, control, and oversight. These lessons include, but are not limited to, the need for both scholars and practitioners in the PFM space to be aware of the impact of those principles on public sector budgeting.

Keywords: Annuality, Balanced Budget, Budget Unity, Budgetary Principles, OECD’s Principles on Budgetary Governance, Open Budget Index, Public Expenditure and Financial Accountability, Universality.

4 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Interest in human motion recognition has increased extensively in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use Discrete Hidden Markov Models (DHMMs) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification allows avoiding the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most of the existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
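
A minimal sketch of the classification idea: for each motion class, score the observation sequence under a forward-direction discrete HMM and the reversed sequence under a backward-direction HMM, then pick the class with the highest combined log-likelihood. The model parameters below are toy values and the scoring uses the standard forward algorithm; this is not the authors' trained system.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm). obs: list of symbol indices."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy 2-state, 3-symbol models for one motion class: a forward-direction HMM
# and a backward-direction HMM (in practice trained on reversed sequences).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit_fwd = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
emit_bwd = np.array([[0.2, 0.3, 0.5], [0.6, 0.3, 0.1]])

sequence = [0, 1, 1, 2, 2]
score = (forward_loglik(sequence, start, trans, emit_fwd)
         + forward_loglik(sequence[::-1], start, trans, emit_bwd))
print(f"combined log-likelihood for this class: {score:.3f}")
```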

Keywords: Human Motion Recognition, Motion representation, Laban Movement Analysis, Discrete Hidden Markov Model.

3 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity, using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective path planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path planning approach, inspired by the animal mode of foraging and using a Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through the avoidance of redundant search. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results show their effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that data from a small agent mission can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work focuses on simplifying the architecture.
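
A minimal sketch of Lévy-flight waypoint generation for a single searching agent: step lengths drawn from a heavy-tailed (Pareto-type) distribution and uniformly random headings, clipped to a square search area. The exponent, step scale and area size are illustrative assumptions, not tuned to the paper's UAV scenario, and the Bayesian augmentation and k-previous waypoints assessment are not included.

```python
import numpy as np

def levy_flight_path(n_steps=200, mu=1.5, min_step=1.0, area=500.0, seed=0):
    """Generate a 2D Lévy-flight trajectory: heavy-tailed step lengths
    (P(l) ~ l^-mu) with uniformly random headings, clipped to [0, area]^2."""
    rng = np.random.default_rng(seed)
    pos = np.array([area / 2, area / 2])
    path = [pos.copy()]
    for _ in range(n_steps):
        # Inverse-transform sample from a Pareto-type tail with exponent mu.
        u = 1.0 - rng.random()                       # uniform in (0, 1]
        step = min_step * u ** (-1.0 / (mu - 1.0))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = np.clip(pos + step * np.array([np.cos(theta), np.sin(theta)]),
                      0.0, area)
        path.append(pos.copy())
    return np.array(path)

path = levy_flight_path()
steps = np.linalg.norm(np.diff(path, axis=0), axis=1)
print(f"median step {np.median(steps):.1f} m, longest step {steps.max():.1f} m")
```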

Keywords: Lévy flight, situation awareness, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence.

2 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel

Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray

Abstract:

The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles are dependent on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.
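
For reference, the Womersley number used above is commonly defined as Wo = Lc·√(ω/ν), with Lc a characteristic length (e.g., the channel half-height), ω the angular pulsation frequency and ν the kinematic viscosity. The sketch below evaluates it for hypothetical channel dimensions, frequencies and water-like properties; these are assumed values, not the authors' experimental parameters.

```python
import math

def womersley(char_length_m: float, freq_hz: float, kin_visc_m2_s: float) -> float:
    """Womersley number Wo = Lc * sqrt(omega / nu) for pulsatile channel flow."""
    omega = 2.0 * math.pi * freq_hz
    return char_length_m * math.sqrt(omega / kin_visc_m2_s)

# Hypothetical minichannel: half-height 1 mm, water at ~20 C (nu ~ 1e-6 m^2/s).
for f in (0.9, 9.0):                       # illustrative pulsation frequencies
    print(f"f = {f:4.1f} Hz -> Wo = {womersley(1.0e-3, f, 1.0e-6):.2f}")
```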

Keywords: Heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect.

1 Learning Classifier Systems Approach for Automated Discovery of Censored Production Rules

Authors: Suraiya Jabin, Kamal K. Bharadwaj

Abstract:

In the recent past, Learning Classifier Systems have been successfully used for data mining. A Learning Classifier System (LCS) is basically a machine learning technique which combines evolutionary computing, reinforcement learning, supervised or unsupervised learning and heuristics to produce adaptive systems. An LCS learns by interacting with an environment from which it receives feedback in the form of numerical reward, and learning is achieved by trying to maximize the amount of reward received. All LCS models, more or less, comprise four main components: a finite population of condition-action rules, called classifiers; the performance component, which governs the interaction with the environment; the credit assignment component, which distributes the reward received from the environment to the classifiers accountable for the rewards obtained; and the discovery component, which is responsible for discovering better rules and improving existing ones through a genetic algorithm. The concatenation of the production rules in the LCS forms the genotype, and therefore the GA operates on a population of classifier systems; this approach is known as the 'Pittsburgh' approach to classifier systems. Other LCSs, which apply their GA at the rule level within a single population, are known as 'Michigan' classifier systems. The most predominant representation of the discovered knowledge is the standard production rules (PRs) in the form IF P THEN D. The PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: IF P THEN D UNLESS C, where the Censor C is an exception to the rule. Such rules are employed in situations in which the conditional statement IF P THEN D holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the IF P THEN D part of the CPR expresses important information, while the UNLESS C part acts only as a switch that changes the polarity of D to ~D. In this paper, a Pittsburgh-style LCS approach is used for the automated discovery of CPRs. An appropriate encoding scheme is suggested to represent a chromosome consisting of a fixed-size set of CPRs. Suitable genetic operators are designed for the set of CPRs and for individual CPRs, and an appropriate fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed learning classifier system.

Keywords: Censored Production Rule, Data Mining, Genetic Algorithm, Learning Classifier System, Machine Learning, Pittsburgh Approach, Reinforcement Learning.
