Search results for: initial input

4047 The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

Authors: Yurii Bloshko, Oksana Olar

Abstract:

This paper presents an analysis of six different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), T2GFPN, flexible generalized fuzzy Petri nets (FGFPN), and binary Petri nets (BPN). These classes were simulated in the dedicated software PNeS® to analyze their pros and cons on example models devoted to the decision-making process of passenger transport logistics. The paper analyzes two approaches: one in which the input values are filled with the experts’ knowledge, and one in which fuzzy expectations, represented by output values, are added as well. Both approaches exercise the triples of functions, which are instantiated with different combinations of t-/s-norms.
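
To make the role of the interchangeable t-/s-norm combinations concrete, the sketch below shows one possible way to fire a single weighted fuzzy Petri net transition with different operator triples. It is a minimal illustration only, not the PNeS® implementation; the rule structure, weights, certainty factor and threshold are hypothetical placeholder values.

```python
# Minimal sketch: firing one weighted fuzzy Petri net transition with
# interchangeable t-/s-norms. All numeric values below are hypothetical.

t_norms = {
    "min": lambda a, b: min(a, b),
    "product": lambda a, b: a * b,
    "lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
}
s_norms = {
    "max": lambda a, b: max(a, b),
    "prob_sum": lambda a, b: a + b - a * b,
    "lukasiewicz": lambda a, b: min(1.0, a + b),
}

def fire_transition(inputs, weights, certainty, t, s, prior=0.0, threshold=0.1):
    """Conjunction of weighted antecedent tokens with a t-norm, gated by the
    rule's certainty factor; the result is fused with any value already in the
    output place using an s-norm. One of many possible compositions."""
    act = 1.0
    for x, w in zip(inputs, weights):
        act = t_norms[t](act, t_norms[t](x, w))
    if act < threshold:
        return prior
    return s_norms[s](prior, t_norms[t](act, certainty))

# Expert-filled input truth degrees for one decision rule (hypothetical).
inputs, weights, cf = [0.8, 0.6, 0.9], [1.0, 0.7, 0.9], 0.85
for t in t_norms:
    for s in s_norms:
        print(f"t={t:12s} s={s:9s} -> {fire_transition(inputs, weights, cf, t, s):.3f}")
```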

Keywords: fuzzy petri net, intelligent computational techniques, knowledge representation, triangular norms

Procedia PDF Downloads 141
4046 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating the fine suspended sediment in surface water systems like streams and creeks. Fine suspended sediment is highly mobile, and many contaminants that may have been released by any sort of catchment disturbance attach themselves to these sediments. Therefore, a knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood Landform Evolution Model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess the sediment movement in tropical streams on account of a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek through the years by simulating the model for future years. The accuracy of future simulations depends on the calibration and validation of the model to the past and present events. Calibration and validation of the model involve finding a combination of parameters of the model, which, when applied and simulated, gives model outputs similar to those observed for the real site scenario for corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it for studying the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, to analyze how similar the model output would behave when compared with the discharge and sediment data for the output point of the catchment. The model parameters were then adjusted until the model closely approximated the real site values of the catchment. It was then validated by running the model for a different set of events and checking that the model gave similar results to the real site values. The outcomes demonstrated that while the model can be calibrated to a greater extent for hydrology (discharge output) throughout the year, the sediment output calibration may be slightly improved by having the ability to change parameters to take into account the seasonal vegetation growth during the start and end of the wet season. This study is important to assess hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
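
As a rough illustration of the calibrate-then-validate workflow described above, a parameter sweep can be scored against the observed outlet data with an objective such as the Nash–Sutcliffe efficiency. The sketch below is not the CAESAR-Lisflood interface: the model is replaced by a toy storage/rating-curve surrogate, and the parameter names, values and rainfall event are placeholder assumptions.

```python
import itertools
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency; 1.0 means the simulation matches the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_model(m, k, rain):
    """Stand-in for a CAESAR-Lisflood run: a crude storage model giving a
    discharge series and a sediment series that depends on discharge."""
    q, store = [], 0.0
    for r in rain:
        store += r
        out = m * store
        store -= out
        q.append(out)
    q = np.array(q)
    return q, k * q ** 1.5          # sediment via a rating-curve style response

rain = np.concatenate([np.zeros(5), np.full(10, 20.0), np.zeros(15)])  # one event
q_obs, s_obs = toy_model(0.3, 0.05, rain)                              # "observed" outlet data

best = None
for m, k in itertools.product([0.1, 0.2, 0.3, 0.4], [0.01, 0.05, 0.1]):
    q_sim, s_sim = toy_model(m, k, rain)
    score = 0.5 * nash_sutcliffe(q_obs, q_sim) + 0.5 * nash_sutcliffe(s_obs, s_sim)
    if best is None or score > best[0]:
        best = (score, m, k)
print("best (score, m, k):", best)
```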

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 84
4045 The Relationship between Energy Consumption and Economic Growth in Turkey: A Time Series Analysis

Authors: Burcu Guvenek, Volkan Alptekin

Abstract:

Turkey is a country in the process of development, and its economy has undergone structural reforms in order to achieve sustainable development; energy plays a vital role as a basic input for this aim. Turkey has been in the process of economic growth and development and, because of this, has an increasing energy need. This paper investigates the relationship between economic growth and electricity consumption using annual data for Turkey over 1970-2008 by means of the bounds test. Because the economic growth and energy consumption variables used in the empirical analysis are integrated of different orders, I(0) and I(1), we employed the bounds test approach. We did not find a co-integration relationship between the variables.
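
A minimal sketch of the bounds-testing idea on synthetic data follows; it does not reproduce the paper's actual ARDL specification for the 1970-2008 Turkish series, and the one-lag structure and variable names are simplifying assumptions. The F-statistic for the joint significance of the lagged levels in an unrestricted error-correction regression is what would then be compared with the Pesaran critical bounds (not included here).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 39                                     # e.g. annual observations, 1970-2008
gdp = np.cumsum(rng.normal(size=n))        # synthetic I(1) log GDP
energy = 0.8 * gdp + rng.normal(size=n)    # synthetic energy consumption series

dy = np.diff(gdp)                          # dependent variable: delta gdp_t
X = np.column_stack([
    np.ones(n - 1),
    gdp[:-1],                              # lagged level of gdp
    energy[:-1],                           # lagged level of energy
    np.r_[0.0, np.diff(energy)[:-1]],      # lagged short-run term, delta energy
])
res = sm.OLS(dy, X).fit()

# Joint F-test that both lagged-level coefficients are zero (no long-run relation).
R = np.zeros((2, X.shape[1]))
R[0, 1] = 1.0
R[1, 2] = 1.0
f_res = res.f_test(R)
print("bounds-test style F statistic:", f_res.fvalue, "p-value:", f_res.pvalue)
# An F statistic below the lower I(0) critical bound would indicate no
# cointegration, which is the paper's finding for the Turkish data.
```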

Keywords: bounds test, economic growth, energy consumption, Turkey

Procedia PDF Downloads 363
4044 Predicting the Uniaxial Strength Distribution of Brittle Materials Based on a Uniaxial Test

Authors: Benjamin Sonnenreich

Abstract:

Brittle fracture failure probability is best described using a stochastic approach which is based on the 'weakest link concept' and the connection between a microstructure and macroscopic fracture scale. A general theoretical and experimental framework is presented to predict the uniaxial strength distribution according to independent uniaxial test data. The framework takes as input the applied stresses, the geometry, the materials, the defect distributions and the relevant random variables from uniaxial test results and gives as output an overall failure probability that can be used to improve the reliability of practical designs. Additionally, the method facilitates comparisons of strength data from several sources, uniaxial tests, and sample geometries.
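
A common concrete realization of the weakest-link concept is the two-parameter Weibull distribution, in which the failure probability of a specimen of volume V under uniform uniaxial stress σ is P_f = 1 − exp[−(V/V₀)(σ/σ₀)^m]. The sketch below is illustrative only: the sample strengths, Weibull modulus and volume ratio are hypothetical, and the paper's framework is more general than this simple fit-and-rescale example.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength s0 from uniaxial
    fracture strengths using the median-rank / least-squares method."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    pf = (np.arange(1, n + 1) - 0.3) / (n + 0.4)       # median-rank estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - pf))
    m, c = np.polyfit(x, y, 1)
    return m, np.exp(-c / m)

def scaled_failure_probability(sigma, m, s0, volume_ratio):
    """Weakest-link volume scaling: P_f for a specimen volume_ratio times larger."""
    return 1.0 - np.exp(-volume_ratio * (sigma / s0) ** m)

# Hypothetical uniaxial strength data (MPa) for a brittle material.
data = [312, 348, 290, 365, 330, 305, 355, 322, 341, 298]
m, s0 = weibull_fit(data)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {s0:.0f} MPa")
print("P_f at 300 MPa for a 10x larger specimen:",
      round(scaled_failure_probability(300.0, m, s0, volume_ratio=10.0), 3))
```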

Keywords: brittle fracture, strength distribution, uniaxial, weakest link concept

Procedia PDF Downloads 325
4043 Removal of Methylene Blue from Aqueous Solution by Adsorption onto Untreated Coffee Grounds

Authors: N. Azouaou, H. Mokaddem, D. Senadjki, K. Kedjit, Z. Sadaoui

Abstract:

Introduction: Water contamination caused by dye industries, including food, leather, textile, plastic, cosmetics, paper-making, printing and dye synthesis, has attracted more and more attention, since most dyes are harmful to human beings and the environment. Untreated coffee grounds were used as a high-efficiency adsorbent for the removal of a cationic dye (methylene blue, MB) from aqueous solution. Characterization of the adsorbent was performed using several techniques such as SEM, surface area (BET), FTIR and pH of zero charge. The effects of contact time, adsorbent dose, initial solution pH and initial concentration were systematically investigated. Results showed that the adsorption kinetics followed the pseudo-second-order kinetic model. The Langmuir isotherm model is in good agreement with the experimental data as compared to the Freundlich and D–R models. The maximum adsorption capacity was found to be 52.63 mg/g. In addition, a possible adsorption mechanism was proposed based on the experimental results. Experimental: The adsorption experiments were carried out in batch mode at room temperature. A given mass of adsorbent was added to the methylene blue (MB) solution and the mixture was agitated for a set time. Samples were withdrawn at regular time intervals. The concentrations of MB left in the supernatant solutions after the different time intervals were determined using a UV–vis spectrophotometer. The amount of MB adsorbed per unit mass of coffee grounds (qt) and the dye removal efficiency (R%) were evaluated. Results and Discussion: Some chemical and physical characteristics of the coffee grounds are presented, and the morphological analysis of the adsorbent was also studied. Conclusions: The good capacity of untreated coffee grounds to remove MB from aqueous solution was demonstrated in this study, highlighting its potential for effluent treatment processes. The kinetic experiments show that the adsorption is rapid, with the maximum adsorption capacity qmax = 52.63 mg/g reached within 30 min. The adsorption process is a function of the adsorbent dose, pH and initial dye concentration. The optimal parameters found are an adsorbent dose of m = 5 g, pH = 5 and ambient temperature. FTIR spectra showed that the principal functional sites taking part in the sorption process included carboxyl and hydroxyl groups.
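
For readers who want to reproduce the kinetic and isotherm analysis, the sketch below fits the pseudo-second-order kinetic model and the Langmuir isotherm with non-linear least squares. The numerical data points are invented placeholders; only the model forms and the reported qmax ≈ 52.63 mg/g come from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """qt = k2*qe^2*t / (1 + k2*qe*t): pseudo-second-order kinetics."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def langmuir(ce, qmax, kl):
    """qe = qmax*KL*Ce / (1 + KL*Ce): Langmuir isotherm."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Placeholder batch data (time in min, uptake in mg/g; equilibrium conc. in mg/L).
t = np.array([2, 5, 10, 15, 20, 30, 45, 60], dtype=float)
qt = np.array([18, 30, 40, 45, 48, 51, 52, 52.5])
ce = np.array([5, 10, 20, 40, 80, 120], dtype=float)
qe = np.array([20, 30, 40, 47, 51, 52.5])

(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[50.0, 0.01])
(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.05])

print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
print(f"Langmuir: qmax = {qmax:.1f} mg/g (abstract reports 52.63 mg/g), KL = {kl:.3f} L/mg")
```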

Keywords: adsorption, methylene blue, coffee grounds, kinetic study

Procedia PDF Downloads 230
4042 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate to identify both Protein-coding and Non-coding regions. Alignment-free techniques can overcome the limitation of identifying both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
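
The training loop described above is, at its core, logistic regression fitted by gradient ascent on the log-likelihood of six features through a sigmoid activation. The sketch below is a generic illustration of that core update on random placeholder data; it does not reproduce the PNRI feature set, its dynamic thresholding rule, or the reported coefficients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n, d = 5000, 6                                   # samples x six sequence-derived features
X = rng.normal(size=(n, d))
true_w = np.array([0.5, -0.8, 0.7, 0.9, 1.2, 2.5])   # arbitrary ground truth
y = (sigmoid(X @ true_w + rng.normal(scale=0.5, size=n)) > 0.5).astype(float)

w, lr = np.zeros(d), 0.1
for epoch in range(400):
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / n                     # gradient of the mean log-likelihood
    w += lr * grad                               # gradient-ascent parameter update
    if np.linalg.norm(lr * grad) < 1e-6:         # stop when the parameter vector converges
        break

# Simple fixed 0.5 threshold for illustration; the paper uses dynamic thresholding.
pred = (sigmoid(X @ w) >= 0.5).astype(float)
tp = np.sum((pred == 1) & (y == 1))
fp = np.sum((pred == 1) & (y == 0))
fn = np.sum((pred == 0) & (y == 1))
precision, recall = tp / (tp + fp), tp / (tp + fn)
print("coefficients:", np.round(w, 3))
print("F1:", round(2 * precision * recall / (precision + recall), 3))
```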

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
4041 An Improved Cooperative Communication Scheme for IoT System

Authors: Eui-Hak Lee, Jae-Hyun Ro, Hyoung-Kyu Song

Abstract:

In an Internet of Things (IoT) system, a communication scheme with high reliability and low power consumption is required to connect terminals. Cooperative communication can achieve higher reliability and lower power consumption than a multiple-input multiple-output (MIMO) system. However, cooperative communication gains this reliability at low power at the cost of throughput: the communication throughput is decreased. In this paper, a novel scheme is proposed to increase the communication throughput. The novel scheme is a transmission structure that increases the transmission rate, and a decoding scheme matched to this transmission structure is also proposed. Simulation results show that the proposed scheme increases the throughput without bit error rate (BER) performance degradation.
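
Since the proposal builds on space-time block coding (see the STBC keyword), the sketch below shows the textbook Alamouti 2x1 STBC encode-and-combine step over a flat fading channel. It is a generic illustration only; the paper's modified transmission structure and its decoder are not reproduced here, and the symbols, channel and noise level are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def alamouti_pair(s1, s2, h1, h2, noise_std=0.1):
    """Transmit one Alamouti block over two antennas / two symbol periods and
    combine at a single-antenna receiver (classic 2x1 STBC)."""
    n1 = noise_std * (rng.normal() + 1j * rng.normal())
    n2 = noise_std * (rng.normal() + 1j * rng.normal())
    r1 = h1 * s1 + h2 * s2 + n1                            # period 1: send (s1, s2)
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2         # period 2: send (-s2*, s1*)
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)           # combining recovers s1
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)           # and s2 with full diversity
    return s1_hat, s2_hat

# QPSK symbols and a random flat-fading channel (hypothetical parameters).
bits = rng.integers(0, 2, size=(4, 2))
symbols = ((bits[:, 0] * 2 - 1) + 1j * (bits[:, 1] * 2 - 1)) / np.sqrt(2)
h1 = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h2 = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
print(alamouti_pair(symbols[0], symbols[1], h1, h2))
```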

Keywords: cooperative communication, IoT, STBC, transmission rate

Procedia PDF Downloads 396
4040 Tourism Satellite Account: Approach and Information System Development

Authors: Pappas Theodoros, Mihail Diakomihalis

Abstract:

Measuring the economic impact of tourism in a benchmark economy is a global concern, and previous measurements have been partial and not fully integrated. Tourism is a phenomenon driven by the individual consumption of visitors, which should be observed and measured to reveal the overall contribution of tourism to an economy. The Tourism Satellite Account (TSA) is a critical tool for assessing the annual growth of tourism, providing reliable measurements. This article introduces a TSA information system that encompasses all the work of the TSA, including the input, storage, management, and analysis of data, as well as additional future functions, and enhances the efficiency of tourism data management and the utility of TSA compilation. The methodology and results presented offer insights into the development and implementation of the TSA.

Keywords: tourism satellite account, information system, data-based tourist account, relation database

Procedia PDF Downloads 84
4039 Rail-To-Rail Output Op-Amp Design with Negative Miller Capacitance Compensation

Authors: Muhaned Zaidi, Ian Grout, Abu Khari bin A’ain

Abstract:

In this paper, a two-stage op-amp design is considered using both Miller and negative Miller compensation techniques. The first op-amp design uses Miller compensation around the second amplification stage, whilst the second op-amp design uses negative Miller compensation around the first stage and Miller compensation around the second amplification stage. The aims of this work were to compare the gain and phase margins obtained using the different compensation techniques and to identify how either compensation technique can be chosen for a particular set of design requirements. The two op-amp designs created are based on the same two-stage rail-to-rail output CMOS op-amp architecture, in which the first stage of the op-amp consists of differential input and cascode circuits, and the second stage is a class AB amplifier. The op-amps have been designed using a 0.35 µm CMOS fabrication process.
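
The Miller theorem behind both techniques: a capacitance C connected across a stage of voltage gain A appears at the stage input as C(1 − A), so an inverting stage multiplies the compensation capacitor (ordinary Miller compensation), while a non-inverting path with gain greater than one yields a negative effective input capacitance (negative Miller compensation). The numeric values in the sketch below are assumed for illustration and are not taken from the paper.

```python
def miller_input_capacitance(c_comp, stage_gain):
    """Effective capacitance seen at the stage input per the Miller theorem:
    C_in = C * (1 - A)."""
    return c_comp * (1.0 - stage_gain)

# Hypothetical values: 2 pF across an inverting gain of -40 (classic Miller),
# and 1 pF across a non-inverting path of gain +3 (negative Miller).
print(miller_input_capacitance(2e-12, -40.0))   #  8.2e-11 F -> large positive capacitance
print(miller_input_capacitance(1e-12, 3.0))     # -2.0e-12 F -> negative capacitance
```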

Keywords: op-amp, rail-to-rail output, Miller compensation, Negative Miller capacitance

Procedia PDF Downloads 338
4038 Digital Recording System Identification Based on Audio File

Authors: Michel Kulhandjian, Dimitris A. Pados

Abstract:

The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system with an unknown transfer function. We expect microphone-sound card combinations of the same manufacturer and model to have very similar or near-identical transfer functions, barring any unique manufacturing defect. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs, as it manifests itself in the specific application of digital audio recording equipment classification.

Keywords: blind system identification, audio fingerprinting, blind deconvolution, blind dereverberation

Procedia PDF Downloads 304
4037 Artificial Neural Network Speed Controller for Excited DC Motor

Authors: Elabed Saud

Abstract:

This paper introduces the ability of Artificial Neural Networks (ANNs) to estimate the speed of, and control, a separately excited DC motor. The neural control scheme consists of two parts. One is the neural estimator, which is used to estimate the motor speed. The other is the neural controller, which is used to generate a control signal for a converter. These two networks are trained with the Levenberg-Marquardt back-propagation algorithm. The ANNs are standard three-layer feed-forward neural networks with sigmoid activation functions in the input and hidden layers and a purelin (linear) activation in the output layer. Simulation results are presented to demonstrate the effectiveness of the neural scheme and the advantage of the ANN-based DC motor control system in comparison with the conventional scheme without ANNs.
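
The network topology described (feed-forward, sigmoid hidden layer, linear "purelin" output) can be written down directly. The sketch below shows a forward pass for a speed-estimator-style network with made-up input signals, layer sizes and untrained weights; Levenberg-Marquardt training itself is left to a toolbox and is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SpeedEstimatorANN:
    """Feed-forward net: sigmoid hidden layer, linear (purelin) output layer."""
    def __init__(self, n_in=3, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(1, n_hidden))
        self.b2 = np.zeros(1)

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)      # hidden layer, sigmoid activation
        return self.W2 @ h + self.b2            # output layer, purelin (linear)

# Hypothetical normalised inputs: armature voltage, armature current, field current.
net = SpeedEstimatorANN()
print("estimated speed (untrained net, arbitrary units):",
      net.forward(np.array([0.8, 0.4, 0.6])))
```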

Keywords: artificial neural network (ANN), excited DC motor, conventional controller, speed controller

Procedia PDF Downloads 726
4036 Experimental Study of Iron Metal Powder Compacting by Controlled Impact

Authors: Todor N. Penchev, Dimitar N. Karastoianov, Stanislav D. Gyoshev

Abstract:

Hydraulic presses and high-velocity hammers are used for the compacting of iron powder. This paper presents initial research on the application of an innovative powder compacting method, which uses a hammer working with controlled impact. The results show that this method reduces rebound and improves the efficiency of the impact compared with high-speed compacting. Depending on the power of the engine (an industrial rocket engine), this effect may be amplified to such an extent as to obtain an impact without rebound (a sticking impact) and a long-lasting action of the impact force.

Keywords: powder metallurgy, impact, iron powder compacting, rocket engine

Procedia PDF Downloads 521
4035 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce a satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results compared to the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition for the flow inlet boundary, and a pressure condition for the boundaries in contact with the air provide the best possible results. The standard wall function is chosen for the wall treatment, and the k-ε (Standard) turbulence model gives the results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between the numerical and experimental results for the upper and lower nappe profiles. In the study of the water level over the crest and the discharge, the numerical results agree well with the experiments at low water levels, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, the difference between the numerical and experimental results increases as the P/R ratio decreases.

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 86
4034 Multiphysics Coupling between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for the TPS of Space Launchers

Authors: Margarita Dufresne

Abstract:

This study is devoted to the development of a TPS for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling between the hypersonic reactive flow and the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics and by FASTRAN and ACE+. The flow around hypersonic flight vehicles is governed by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas treatment implies a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection and radiation, driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model with five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism, together with the calculation of their probabilities. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with AUSM+ flux-vector splitting for the coupled inviscid flux. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks where, due to limiting, the spatial accuracy is reduced to second order, and provides reduced dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using this pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is k-ω SST with a1 = 0.355 and the quadratic constitutive relation (QCR) option; k and ω are specified explicitly in the initial conditions and in regions as k = 1e-6 · Uinf² and ω = 5 · Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling practices for hypersonic flow such as an automatically coupled solver, adaptive mesh refinement to capture and refine the shock front, and the use of the advancing-layer mesher with a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for the hot hypersonic reactive flow and the conjugate heat transfer. The results of both approaches match the CIRCA wind tunnel results.
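
The modified Arrhenius form used for forward rates in this kind of five-species air chemistry is k_f = A·T^n·exp(−T_a/T). The sketch below evaluates it at a few temperatures; the coefficients shown are illustrative placeholders and are not the Dunn/Kang values used in the paper.

```python
import math

def modified_arrhenius(T, A, n, Ta):
    """Forward rate coefficient k_f = A * T**n * exp(-Ta / T), with Ta = Ea/R in K."""
    return A * T**n * math.exp(-Ta / T)

# Illustrative coefficients for an O2 dissociation-type reaction (placeholders only).
for T in (2000.0, 6000.0, 10000.0):
    print(f"T = {T:7.0f} K  ->  k_f = {modified_arrhenius(T, A=2.0e21, n=-1.5, Ta=59500.0):.3e}")
```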

Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel

Procedia PDF Downloads 71
4033 EMG Based Orthosis for Upper Limb Rehabilitation in Hemiparesis Patients

Authors: Nancy N. Sharmila, Aparna Mishra

Abstract:

Hemiparesis affects almost 80% of stroke patients each year. It is marked by paralysis or weakness on one half of the body. Our model provides both assistance and physical therapy for hemiparesis patients for swift recovery. In order to accomplish this, a force is provided that pulls the forearm up (as in flexing the arm) and pushes the forearm down (as in extending the arm), which also assists the user during ADL (Activities of Daily Living). The model consists of a mechanical component placed around the patient's bicep and an EMG control circuit to assist patients in daily activities, which makes it affordable and easy to use. In order to enhance the neuromuscular system's effectiveness in synchronizing the movement, the proprioceptive neuromuscular facilitation (PNF) concept is used. The EMG signals are acquired from the unaffected arm as the input to drive the orthosis. This way, the patient is encouraged to use the orthosis for regular exercise.

Keywords: EMG, hemiparesis, orthosis, rehabilitation

Procedia PDF Downloads 445
4032 Language Activation Theory: Unlocking Bilingual Language Processing

Authors: Leorisyl D. Siarot

Abstract:

It is common to see and hear Filipinos, in general, speak two or more languages. This phenomenon invites a closer look at how our minds process the input and produce an output in a specific chosen language. This study aimed to generate a theoretical model which explains the interaction of the first and second languages in the human mind. After a careful analysis of the gathered data, a theoretical prototype called the Language Activation Model was generated. For every language string, there are three specialized banks: lexico-semantics, morphono-syntax, and pragmatics. These banks are interrelated with the banks of other language strings. As the bilingual learns more languages, a new string is replicated and is filled with the information of the newly learned language. The principles of the interaction between the first and second languages are drawn and expressed in laws, namely: the law of dominance, the law of availability, the law of usuality, and the law of preference. Furthermore, difficulties encountered in the learning of second languages were also determined.

Keywords: bilingualism, psycholinguistics, second language learning, languages

Procedia PDF Downloads 512
4031 Gut Microbiota in Patients with Opioid Use Disorder: A 12-week Follow up Study

Authors: Sheng-Yu Lee

Abstract:

Aim: Opioid use disorder is often characterized by repetitive drug-seeking and drug-taking behaviors with severe public health consequences. Animal models have shown that opioid-induced perturbations in the gut microbiota relate causally to neuroinflammation, deficits in reward responding, and opioid tolerance. Therefore, we propose that dysbiosis of the gut microbiota may be associated with the pathogenesis of opioid dependence. In the current study, we explored the differences in gut microbiota between patients and normal controls, and in patients before and after initiation of a methadone treatment program for 12 weeks. Methods: Patients with opioid use disorder aged between 20 and 65 years were recruited from the methadone maintenance outpatient clinics of 2 medical centers in Southern Taiwan. Healthy controls without any family history of major psychiatric disorders (schizophrenia, bipolar disorder and major depressive disorder) were recruited from the community. After initial screening, 15 patients with opioid use disorder joined the study for the initial evaluation (Week 0); 12 of them completed the 12-week follow-up while receiving methadone treatment and ceasing heroin use (Week 12). Fecal samples were collected from the patients at baseline and at the end of the 12th week. A one-time fecal sample was collected from the healthy controls. The microbiota of the fecal samples were investigated using 16S rRNA V3V4 amplicon sequencing, followed by bioinformatic and statistical analyses. Results: We found no significant differences in species diversity in opioid-dependent patients between Week 0 and Week 12, nor between patients at either time point and controls. For beta diversity, using principal component analysis, we found no significant differences between patients at Week 0 and Week 12; however, both patient groups showed significant differences compared to controls (P=0.011). Furthermore, linear discriminant analysis effect size (LEfSe) was used to identify differentially enriched bacteria between opioid use patients and healthy controls. Compared to controls, the relative abundances of Lactobacillus (family Lactobacillaceae), Megasphaera hexanoica and Caecibacter massiliensis were increased in patients at Week 0, while Atopobiaceae (order Coriobacteriales), Acidaminococcus intestini and Tractidigestivibacter scatoligenes were increased in patients at Week 12. Conclusion: We suggest that the gut microbiome community may be linked to opioid use disorder, and such differences may not be altered even after 12 weeks of cessation of heroin use under methadone treatment.

Keywords: opioid use disorder, gut microbiota, methadone treatment, follow up study

Procedia PDF Downloads 106
4030 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is at least made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs to eliminate errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of the CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
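
A minimal sketch of the regression-versus-classification comparison on synthetic data follows. The leakage-flow feature set, the pass/fail specification limit and the models used are placeholder assumptions (the Bosch Rexroth production data are obviously not reproduced): the same features are used once to regress the leakage volume flow and once to classify the inspection decision.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                           # process features along the value chain
leakage = 2.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.6, size=n)
passed = (leakage < 2.5).astype(int)                  # inspection decision from a spec limit

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(X, leakage, passed, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression R^2 on leakage flow      :", round(r2_score(y_te, reg.predict(X_te)), 3))
print("classification accuracy (pass/fail) :", round(accuracy_score(c_te, clf.predict(X_te)), 3))
```

With low-variance production data, the classification target (inside/outside the specification limit) is often much easier to learn than the full regression target, which is consistent with the comparison reported above.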

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
4029 Novel Approach to Design of a Class-EJ Power Amplifier Using High Power Technology

Authors: F. Rahmani, F. Razaghian, A. R. Kashaninia

Abstract:

This article proposes a new method for application in communication circuit systems that increase efficiency, PAE, output power and gain in the circuit. The proposed method is based on a combination of switching class-E and class-J and has been termed class-EJ. This method was investigated using both theory and simulation to confirm ~72% PAE and output power of > 39 dBm. The combination and design of the proposed power amplifier accrues gain of over 15dB in the 2.9 to 3.5 GHz frequency bandwidth. This circuit was designed using MOSFET and high power transistors. The load- and source-pull method achieved the best input and output networks using lumped elements. The proposed technique was investigated for fundamental and second harmonics having desirable amplitudes for the output signal.

Keywords: power amplifier (PA), high power, class-J and class-E, high efficiency

Procedia PDF Downloads 491
4028 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and in gas turbines for power generation confront a variety of atmospheric contaminants at various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count and the corresponding surface area in order to verify the filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, which were investigated for their initial pressure drop response and fractional efficiencies. The pleating densities used for this study are 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured for ten different flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. This experimental work has highlighted the underlying reasons behind the reduction in filter permeability caused by the increase in face velocity and pleat density. The reasons that led to surface area losses of the filtration medium are one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat and/or compression of the filtration medium. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the MPPS is reached. Beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count satisfying both the initial pressure drop and the efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action and not on the total surface area that the pleat density provides.

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 135
4027 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by the friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed or friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), where abundant literature exists and addresses many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and finally to identify a process window that seems relevant. The deposition of material through the AFSM process takes place in several phases; in chronological order these phases are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, which produces the temperature rise of the system composed of the tool, the filler material, and the substrate, and is due to pure friction. Analytic modeling of the friction-based heat generation considers the rotational speed and the contact pressure as the main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system with the rise in temperature or the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool as well as fluctuations of the input parameters, such as axial force and rotational speed, are very influential on the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
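
For the analytic heat-generation model mentioned above, a standard friction expression for a flat circular contact of radius R under uniform pressure p and angular speed ω is Q = (2/3)·π·μ·p·ω·R³. The sketch below evaluates it for placeholder tool and process values; these are not the SIRIUS prototype parameters and the formula is only one common simplification of the contact condition.

```python
import math

def friction_heat(mu, pressure, omega, radius):
    """Heat generated by sliding friction over a flat circular contact:
    Q = (2/3) * pi * mu * p * omega * R^3, in watts."""
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius**3

# Placeholder dwell-phase values: 6 kN axial force on a 10 mm radius tool at 1200 rpm.
radius = 0.010                                    # m
pressure = 6000.0 / (math.pi * radius**2)         # Pa, axial force / contact area
omega = 1200.0 * 2.0 * math.pi / 60.0             # rad/s
for mu in (0.2, 0.3, 0.4):                        # friction coefficient varies with temperature
    print(f"mu = {mu:.1f}: Q = {friction_heat(mu, pressure, omega, radius):.0f} W")
```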

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 147
4026 Generalized Central Paths for Convex Programming

Authors: Li-Zhi Liao

Abstract:

The central path has played a key role in the interior point method. However, convergence of the central path may fail even in some convex programming problems with linear constraints. In this paper, generalized central paths are introduced for convex programming. One advantage of the generalized central paths is that the paths always converge to some optimal solution of the convex programming problem for any initial interior point. Some additional theoretical properties of the generalized central paths will also be reported.

Keywords: central path, convex programming, generalized central path, interior point method

Procedia PDF Downloads 327
4025 Design of Reconfigurable Fixed-Point LMS Adaptive FIR Filter

Authors: S. Padmapriya, V. Lakshmi Prabha

Abstract:

In this paper, an efficient reconfigurable fixed-point Least Mean Square (LMS) adaptive FIR filter is proposed. The proposed architecture has two modes of operation: one is an area-efficient design and the other is power-optimized. Pipelining of the adder blocks and of the partial product generator is used to achieve low area, and reversible logic is used to obtain a low-power design. Depending upon the input samples and filter coefficients, one of the techniques is chosen. Least-Mean-Square adaptation is performed to update the weights. The architecture is coded in Verilog and synthesized with Cadence Encounter in 0.18 µm technology. The synthesis results show that the area reduction ratio of the proposed design, when compared with the conventional technique, is about 1.2%.
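
For reference, the core LMS recursion being implemented is w(n+1) = w(n) + μ·e(n)·x(n). The floating-point sketch below is a behavioural model only, not the fixed-point Verilog architecture; the filter length, step size and test signal are placeholder assumptions. It shows the weight-update loop that the hardware pipelines.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Adaptive FIR identification: adjust the weights so the filter output tracks d."""
    w = np.zeros(n_taps)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ x_vec                        # filter output
        e[n] = d[n] - y[n]                      # error against the desired signal
        w += mu * e[n] * x_vec                  # LMS weight update
    return w, e

rng = np.random.default_rng(0)
x = rng.normal(size=4000)
unknown = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # system to identify
d = np.convolve(x, unknown, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
w, e = lms_filter(x, d)
print("identified taps:", np.round(w, 3))
```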

Keywords: adaptive filter, carry select adder, least mean square algorithm, reversible logic

Procedia PDF Downloads 330
4024 Boundedness and Asymptotic Behavior of Solutions for Gierer-Meinhardt Systems

Authors: S. Henine, A. Youkana

Abstract:

This work is devoted to studying the global existence and asymptotic behavior of solutions of Gierer-Meinhardt systems arising in biological phenomena. We prove that the solutions are global and uniformly bounded by a positive constant independent of time. Our technique is based on a Lyapunov functional argument. Under suitable conditions, we establish a result on the asymptotic behavior of solutions. These results are valid for any positive continuous initial data and improve some recently established results.

Keywords: asymptotic behavior, Gierer-Meinhardt systems, global existence, Lyapunov functional

Procedia PDF Downloads 388
4023 Zero-Dissipative Explicit Runge-Kutta Method for Periodic Initial Value Problems

Authors: N. Senu, I. A. Kasim, F. Ismail, N. Bachok

Abstract:

In this paper, a zero-dissipative explicit Runge-Kutta method is derived for solving second-order ordinary differential equations with periodic solutions. The phase-lag and dissipation properties of Runge-Kutta (RK) methods are also discussed. The new method has algebraic order three with dissipation of order infinity. The numerical results for the new method are compared with an existing method when solving second-order differential equations with periodic solutions using a constant step size.
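
To see what dissipation means in practice, the sketch below integrates the standard periodic test problem y'' = −λ²y with a classical explicit Runge-Kutta scheme (RK4 is used as a stand-in; the paper's third-order zero-dissipative method is not reproduced) and checks how well the amplitude of the numerical oscillation is preserved. The step size and λ are arbitrary choices.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

lam = 2.0
f = lambda t, y: np.array([y[1], -lam**2 * y[0]])     # y'' = -lam^2 * y as a first-order system

h, t_end = 0.05, 20.0
y = np.array([1.0, 0.0])                              # y(0) = 1, y'(0) = 0 -> y = cos(lam*t)
for i in range(int(t_end / h)):
    y = rk4_step(f, i * h, y, h)

amplitude = np.hypot(y[0], y[1] / lam)                # stays 1 exactly only if dissipation is zero
print("numerical amplitude after t = 20:", round(float(amplitude), 6))
print("exact cos(lam*t):", round(np.cos(lam * t_end), 6), " numerical:", round(float(y[0]), 6))
```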

Keywords: dissipation, oscillatory solutions, phase-lag, Runge-Kutta methods

Procedia PDF Downloads 411
4022 Production of Rhamnolipids from Different Resources and Estimating the Kinetic Parameters for Bioreactor Design

Authors: Olfat A. Mohamed

Abstract:

Rhamnolipid biosurfactants have distinct properties that give them importance in many industrial applications, especially promising future applications in the cosmetic and pharmaceutical industries. These applications have encouraged the search for diverse and renewable resources to control the cost of production. The experimental results were then applied to find a suitable mathematical model for obtaining the design criteria of a batch bioreactor. This research aims to produce rhamnolipids from different oily wastewater sources, such as petroleum crude oil (PO) and vegetable oil (VO), using Pseudomonas aeruginosa ATCC 9027. Different concentrations of PO (0.5, 1, 1.5, 2 and 2.5% v/v) and of VO (2, 4, 6, 8 and 10% v/v) were added to the broth separately. The effect of the initial concentration of the oil residues and of the addition of glycerol and palmitic acid as inducers was investigated with respect to rhamnolipid production and the surface tension of the broth. It was found that 2% of the PO waste and 6% of the VO waste were the best initial substrate concentrations for the production of rhamnolipids (2.71 and 5.01 g rhamnolipid/l, respectively). Addition of glycerol (10-20% v glycerol/v PO) to the 2% PO fermentation broth increased rhamnolipid production about 1.8-2-fold. However, the addition of palmitic acid (5 and 10 g/l) to fermentation broth containing 6% VO barely enhanced the production rate. The experimental data for 2% initial PO were used to estimate the various kinetic parameters. The following results were obtained: maximum rate of reaction (Vmax) = 0.06417 g/l.hr, yield of cell weight per unit weight of substrate utilized (Yx/s) = 0.324 g Cx/g Cs, maximum specific growth rate (μmax) = 0.05791 hr⁻¹, yield of rhamnolipid weight per unit weight of substrate utilized (Yp/s) = 0.2571 g Cp/g Cs, maintenance coefficient (Ms) = 0.002419, Michaelis-Menten constant (Km) = 6.1237 gmol/l, and endogenous decay coefficient (Kd) = 0.002375 hr⁻¹. Predictive parameters and mathematical models were applied to evaluate the batch time of the bioreactor. The results were as follows: 123.37, 129 and 139.3 hours with respect to the microbial biomass, substrate and product concentrations, respectively, compared with an experimental batch time of 120 hours in all cases. The proposed mathematical models are compatible with the laboratory results and can, therefore, be considered tools for describing the actual system.
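
The reported parameters can be plugged into simple Monod-type batch balances for biomass, substrate and product to estimate batch times. The sketch below does this with scipy; the initial biomass and substrate concentrations, the exact model structure, and the treatment of the reported Michaelis-Menten constant as a Monod saturation constant are simplifying assumptions on top of the abstract's values, so the computed times are indicative only and do not reproduce the paper's bioreactor design calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values reported in the abstract (2% PO substrate).
mu_max, Ks, Kd = 0.05791, 6.1237, 0.002375      # 1/h, saturation const., 1/h
Yxs, Yps, Ms = 0.324, 0.2571, 0.002419          # g/g, g/g, maintenance coefficient

def batch(t, z):
    X, S, P = z
    S = max(S, 0.0)
    mu = mu_max * S / (Ks + S)                  # Monod specific growth rate
    dX = (mu - Kd) * X                          # biomass growth minus decay
    dS = -(mu / Yxs + Ms) * X if S > 0 else 0.0 # substrate consumption + maintenance
    dP = Yps * (mu / Yxs) * X                   # rhamnolipid formation
    return [dX, dS, dP]

# Assumed initial conditions (not stated in the abstract): 0.1 g/L biomass, 20 g/L substrate.
sol = solve_ivp(batch, (0.0, 200.0), [0.1, 20.0, 0.0], dense_output=True, max_step=0.5)
t = np.linspace(0.0, 200.0, 2001)
X, S, P = sol.sol(t)
print("time until substrate drops below 1 g/L:", round(float(t[np.argmax(S < 1.0)]), 1), "h")
print("final rhamnolipid concentration:", round(float(P[-1]), 2), "g/L")
```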

Keywords: batch bioreactor design, glycerol, kinetic parameters, petroleum crude oil, Pseudomonas aeruginosa, rhamnolipids biosurfactants, vegetable oil

Procedia PDF Downloads 131
4021 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information are important because of their practical uses in various applications, including deforestation, damage assessment, disasters monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detection of change through optical remote sensing images is not a trivial task due to many factors including the vagueness between the boundaries of changed and unchanged regions and spatial dependence of the pixels to its neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most of the optical remote sensing images, the transition between the two clusters (change and no change) is overlapping and the existing methods are incapable of providing the accurate cluster boundaries. In this regard, a methodology has been proposed which uses the fuzzy c-means clustering to tackle the problem of vagueness in the changed and unchanged class by formulating the soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, the input patterns are generated corresponding to each pixel from bi-temporal images using 3×3, 5×5 and 7×7 window. The between images and within image spatial dependence of the pixels to its neighborhood is quantified by using Pearson product moment correlation and Moran’s I statistics, respectively. The proposed technique consists of two phases. At first, between images and within image spatial correlation is calculated to utilize the information that the pixels at different locations may not be independent. Second, fuzzy c-means technique is used to produce two clusters from input feature by not only taking care of vagueness between the changed and unchanged class but also by exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments are conducted on multispectral and bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 thematic mapper optical image of Los Angeles, California, is used in this study which shows a long period of the forest fire continued from July until October 2009. Early forest fire and later forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on earth’s surface) and compared with the existing K-means clustering technique. Experimental results showed that proposed technique performs better than the already existing technique. The proposed technique can be easily extendable for optical hyperspectral images and is suitable for many practical applications.
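
The clustering step itself is standard fuzzy c-means. The sketch below implements the membership and centre updates for a two-cluster (change / no-change) problem on a toy feature matrix; the neighbourhood-window correlation features described above are replaced here by random placeholders, and the fuzzifier and tolerances are generic defaults rather than the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy c-means: returns (memberships U, cluster centres V)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                  # centre update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)                 # membership update
        converged = np.max(np.abs(U_new - U)) < tol
        U = U_new
        if converged:
            break
    return U, V

# Toy per-pixel feature vectors (e.g. spectral difference plus neighbourhood correlation).
rng = np.random.default_rng(1)
no_change = rng.normal(0.0, 0.3, size=(500, 3))
change = rng.normal(2.0, 0.4, size=(120, 3))
X = np.vstack([no_change, change])
U, V = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print("cluster centres:\n", np.round(V, 2))
print("pixels assigned to each cluster:", np.bincount(labels))
```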

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 169
4020 Corporate Law and Its View Point of Locking in Capital

Authors: Saad Saeed Althiabi

Abstract:

This paper discusses the corporate form and how it became popular as a way to organize production because of the unique manner in which incorporation allowed organizers to secure financial capital by locking it in. The power to lock in capital comes from the fact that a corporation exists as a separate legal entity, whose survival and governance are separated from any of its participants. The law essentially creates a distinct legal person when a corporation is formed. Although this idea has been played down in the legal scholarship of recent decades in favor of the view that a corporation is purely a vehicle through which natural persons interact, recent legal research has begun to reassess the importance of entity status. Entity status under the law, and the related separation of governance from the input of financial capital through the configuration of a corporation, allowed corporate participants to do considerably more than engage in a series of business transactions.

Keywords: corporate law, entity status, locking in capital, financial capital

Procedia PDF Downloads 555
4019 On the Approximate Solution of Continuous Coefficients for Solving Third Order Ordinary Differential Equations

Authors: A. M. Sagir

Abstract:

This paper derives four new schemes, which are combined to form an accurate and efficient block method for the parallel or sequential solution of third-order ordinary differential equations of the form y''' = f(x, y, y', y''), y(α) = y_0, y'(α) = β, y''(α) = μ, with associated initial or boundary conditions. The implementation of the derived method shows that the block method is consistent, zero-stable and hence convergent. The derived schemes were tested on stiff and non-stiff ordinary differential equations, and the numerical results obtained compared favorably with the exact solution.

Keywords: block method, hybrid, linear multistep, self-starting, third order ordinary differential equations

Procedia PDF Downloads 271
4018 Implementation and Modeling of a Quadrotor

Authors: Ersan Aktas, Eren Turanoğuz

Abstract:

In this study, a quadrotor unmanned aerial vehicle system with four electrically driven rotors is designed and modeled using the fundamental dynamic equations. After that, the mechanical, electronic and control systems of the air vehicle are designed and implemented. Brushless motor speeds are adjusted via electronic speed controllers in order to achieve the desired controllability. The vehicle's fundamental Euler angles (i.e., roll angle, pitch angle, and yaw angle) are obtained via an AHRS sensor. These angles are provided as inputs to the control algorithm that runs on the soft processor on the electronic card, where the vehicle control algorithm is implemented. A controller is designed and tuned for each Euler angle. Finally, flight tests have been performed to observe and improve the flight characteristics.
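
As a concrete illustration of the per-angle control loop (the PID keyword suggests one such controller per Euler angle), the sketch below runs a discrete PID on a toy single-axis roll model. The gains, setpoint and unit-inertia plant are placeholder assumptions, not the tuned values from the flight tests.

```python
class PID:
    """Discrete PID controller; one instance per Euler angle (roll, pitch, yaw)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy roll-axis simulation: angular acceleration proportional to controller output.
dt, roll, roll_rate = 0.01, 0.0, 0.0
pid_roll = PID(kp=6.0, ki=0.5, kd=2.0, dt=dt)                    # hypothetical gains
for step in range(500):
    torque = pid_roll.update(setpoint=10.0, measurement=roll)    # target 10 deg roll
    roll_rate += torque * dt                                     # unit inertia assumed
    roll += roll_rate * dt
print("roll angle after 5 s:", round(roll, 2), "deg")
```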

Keywords: quadrotor, UAS applications, control architectures, PID

Procedia PDF Downloads 365