Search results for: STEP fault
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3437

2687 Inulinase Immobilization on Functionalized Magnetic Nanoparticles Prepared with Soy Protein Isolate Conjugated Bovine Serum Albumin for High Fructose Syrup Production

Authors: Homa Torabizadeh, Mohaddeseh Mikani

Abstract:

Inulinase from Aspergillus niger was covalently immobilized on magnetic nanoparticles (MNPs/Fe3O4) coated with soy protein isolate (SPI/Fe3O4) functionalized with bovine serum albumin (BSA) nanoparticles. MNPs are promising enzyme carriers because they separate easily under external magnetic fields, which enhances the reusability of the immobilized enzyme. Because bare MNPs aggregate easily, a surface-coating strategy was employed. SPI functionalized with BSA was a suitable candidate for nanomagnetite coating owing to its superior biocompatibility and hydrophilicity. Fe3O4@SPI-BSA nanoparticles were synthesized as a novel carrier with a narrow particle size distribution. The fabrication of the Fe3O4@SPI-BSA nanoparticles was monitored step by step using field emission scanning electron microscopy and dynamic light scattering. The results showed that the spherical nanomagnetite was well monodispersed, with a diameter of about 35 nm. The average size of the SPI-BSA nanoparticles was 80 to 90 nm, and their zeta potential was around −34 mV. Finally, the mean diameter of the fabricated Fe3O4@SPI-BSA NPs was less than 120 nm. Inulinase from Aspergillus niger was then successfully immobilized covalently on the Fe3O4@SPI-BSA nanoparticles via glutaraldehyde. Fourier transform infrared spectra and field emission scanning electron microscopy images provided sufficient proof of enzyme immobilization on the nanoparticles, with 80% enzyme loading.

Keywords: high fructose syrup, inulinase immobilization, functionalized magnetic nanoparticles, soy protein isolate

Procedia PDF Downloads 281
2686 A Two-Stage Process for the Sustainable Production of Aliphatic Polyesters

Authors: A. Douka, S. Vouyiouka, L. M. Papaspyridi, D. Korres, C. Papaspyrides

Abstract:

A "green" process was studied for the preparation of partially renewable aliphatic polyesters based on 1,4-butanediol and 1,8-octanediol with various diacids and derivatives, namely diethyl succinate, adipic acid, sebacic acid, 1,12-dodecanedioic acid and 1,14-tetradecanedioic acid. A first step of enzymatic prepolymerization was carried out in the presence of two different solvents, toluene and diphenyl ether, applying molecular sieves and vacuum, respectively, to remove the polycondensation by-products. Poly(octylene adipate) (PE 8.6), poly(octylene dodecanate) (PE 8.12) and poly(octylene tetradecanate) (PE 8.14) were first produced enzymatically in toluene using molecular sieves, giving, however, low-molecular-weight products. Thereafter, the synthesis of PE 8.12 and PE 8.14 was examined under optimized conditions using diphenyl ether as the solvent and a more vigorous by-product removal step, namely the application of vacuum. Apart from these polyesters, the optimized process was also implemented for the production of another long-chain polyester, poly(octylene sebacate) (PE 8.10), and a short-chain polyester, poly(butylene succinate) (PE 4.4). Subsequently, bulk post-polymerization in the melt or solid state was performed. The SSP runs involved no biocatalyst and reaction temperatures (T) in the vicinity of the prepolymer melting point (Tm − T varied between 15.5 and 4 °C). Focusing on PE 4.4 and PE 8.12, SSP took place under vacuum or flowing nitrogen, leading to an increase in molecular weight and improvements in the end product's physical appearance and thermal properties.

Keywords: aliphatic polyester, enzymatic polymerization, solid state polymerization, Novozym 435

Procedia PDF Downloads 313
2685 Variation of Inductance in a Switched-Reluctance Motor under Various Rotor Faults

Authors: Muhammad Asghar Saqib, Saad Saleem Khan, Syed Abdul Rahman Kashif

Abstract:

In order to achieve higher efficiency, performance and reliability, regular monitoring of an electrical motor is required. This article presents a novel view of the air-gap magnetic field analysis of a switched reluctance motor under rotor cracks and rotor tilt along its shaft axis. The fault diagnosis is illustrated on the basis of a 3-D model of the motor using finite element analysis (FEA). The analytical equations of flux linkages have been used to determine the inductance. The results of the 3-D finite element analysis on a 6/4 switched reluctance motor (SRM) show the variation of mutual inductance with the tilting of the rotor shaft and with cracked-rotor conditions. These results provide useful information regarding the detection of shaft tilting and cracked rotors.

Keywords: switched reluctance motor, finite element analysis, cracked rotor, 3-D modelling of an SRM

Procedia PDF Downloads 647
2684 Effects of Soaking of Maize on the Viscosity of Masa and Tortilla Physical Properties at Different Nixtamalization Times

Authors: Jorge Martínez-Rodríguez, Esther Pérez-Carrillo, Diana Laura Anchondo Álvarez, Julia Lucía Leal Villarreal, Mariana Juárez Dominguez, Luisa Fernanda Torres Hernández, Daniela Salinas Morales, Erick Heredia-Olea

Abstract:

Maize tortillas are a staple food in Mexico, mostly made by nixtamalization, which involves cooking and steeping maize kernels under alkaline conditions. The cooking step of nixtamalization demands a great deal of energy and also generates nejayote, a water pollutant, at the end of the process. The aim of this study was to reduce the cooking time by adding a maize-soaking step before nixtamalization while maintaining the quality properties of the masa and tortillas. Maize kernels were soaked for 36 h to raise their moisture content to 36%. Then, the effect of different cooking times (0, 5, 10, 15, 20, 25, 30, 35, 45 (control) and 50 minutes) was evaluated on the viscosity profile (RVA) of the masa to select the treatments with a profile similar or equal to the control. All treatments were left steeping overnight and milled under the same conditions. The treatments selected were the 20- and 25-min cooking times, which had values of pasting temperature (79.23 °C and 80.23 °C), maximum viscosity (105.88 cP and 96.25 cP) and final viscosity (188.5 cP and 174 cP) similar to those of the 45-min control (77.65 °C, 110.08 cP, and 186.70 cP, respectively). Afterward, tortillas were produced with the chosen treatments (20 and 25 min) and the control, and then analyzed for texture, damaged starch, colorimetry, thickness, and average diameter. Colorimetric analysis of the tortillas showed significant differences only in the yellow/blue coordinate (b* parameter) at 20 min (0.885), unlike the 25-minute treatment (1.122). Luminosity (L*) and the red/green coordinate (a*) showed no significant differences between the treatments and the control (69.912 and 1.072, respectively); however, the 25-minute treatment was closer in both parameters (73.390 and 1.122) than the 20-minute one (74.08 and 0.884). For the color difference (ΔE), the 25-min value (3.84) was the most similar to the control.
However, for tortilla thickness and diameter, the 20-minute treatment, at 1.57 mm and 13.12 cm respectively, was closer to the control (1.69 mm and 13.86 cm), although smaller. The 25-min tortillas, in turn, were smaller than both the 20-min and control tortillas, at 1.51 mm thickness and 13.590 cm diameter. According to the texture analyses, there was no difference in stretchability (8.803-10.308 gf) or distance to break (95.70-126.46 mm) among the treatments. However, for the breaking point, all treatments (317.1 gf and 276.5 gf for the 25- and 20-min treatments, respectively) were significantly different from the control tortilla (392.2 gf). The results suggest that, by adding a soaking step and reducing the cooking time by 25 minutes, the masa and tortillas obtained had functional and textural properties similar to those of the traditional nixtamalization process.
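
The color difference reported above is presumably the standard CIE76 ΔE, the Euclidean distance between two points in L*a*b* space. A minimal sketch (the readings below are illustrative, not the paper's measurements):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical control vs. treatment tortilla readings.
control = (70.0, 1.1, 0.9)
treatment = (73.4, 1.1, 1.1)
print(round(delta_e_ab(control, treatment), 2))  # 3.41
```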

Keywords: tortilla, nixtamalization, corn, lime cooking, RVA, colorimetry, texture, masa rheology

Procedia PDF Downloads 158
2683 Simulation-Based Parametric Study for the Hybrid Superplastic Forming of AZ31

Authors: Fatima Ghassan Al-Abtah, Naser Al-Huniti, Elsadig Mahdi

Abstract:

As the lightest structural metal on earth, magnesium alloys offer excellent potential for weight reduction in the transportation industry, and some magnesium alloys have been observed to exhibit superior ductility and superplastic behavior at high temperatures. The main limitation of superplastic forming (SPF) is its low production rate, since each part needs a long forming time. In this study, an SPF process that starts with a mechanical pre-forming stage is developed to promote formability and reduce forming time. A two-dimensional finite element model is used to simulate the process. The forming process consists of two steps. In the pre-forming step (deep drawing), the sheet is drawn into the die to a preselected level using a mechanical punch, and in the second step (SPF) a pressurized gas is applied at a controlled rate. It is shown that a significant reduction in forming time and improved final thickness uniformity can be achieved with the hybrid forming technique, which produced a fully formed part at 400 °C. The impact of the different forming process parameters was investigated by comparing the forming times and final thickness distributions obtained from the simulation analysis. Maximum thinning decreased from over 67% to less than 55%, forming time decreased significantly, by more than 6 minutes, and the gas pressure profile required for the optimum forming process parameters was predicted based on a target constant strain rate of 0.001/s within the sheet.

Keywords: magnesium, plasticity, superplastic forming, finite element analysis

Procedia PDF Downloads 144
2682 Mechanical Qualification Test Campaign on the Demise Observation Capsule

Authors: B. Tiseo, V. Quaranta, G. Bruno, R. Gardi, T. Watts, S. Dussy

Abstract:

This paper describes the qualification test campaign performed on the Demise Observation Capsule DOC-EQM as part of the Future Launch Preparatory Program FLPP3. The mechanical environment experienced during the launch ascent and separation phases was first identified and then replicated in terms of sine, random and shock vibration. The load identification is derived by selecting the worst possible case. The vibration and shock qualification tests performed at the CIRA Space Qualification laboratory are described herein. The design and validation of the mechanical fixtures, carried out by means of FEM, is also addressed because of its fundamental role in the vibration test campaign. The Demise Observation Capsule (DOC) successfully passed the qualification test campaign: the functional tests and resonance searches did not reveal any fault in, or damage to, the capsule.

Keywords: capsule, demise, demise observation capsule, DOC, launch environment, re-entry, qualification

Procedia PDF Downloads 135
2681 Direct Transient Stability Assessment of Stressed Power Systems

Authors: E. Popov, N. Yorino, Y. Zoka, Y. Sasaki, H. Sugihara

Abstract:

This paper discusses the performance of the critical trajectory method (CTrj) for power system transient stability analysis under various loading settings and heavy fault conditions. The method obtains the controlling unstable equilibrium point (CUEP), which is essential for the estimation of power system stability margins. The CUEP is computed by applying the CTrj to the boundary controlling unstable equilibrium point (BCU) method. The proposed method computes a trajectory on the stability boundary that starts from the exit point and reaches the CUEP under certain assumptions. The robustness and effectiveness of the method are demonstrated on six power system models under five loading conditions. A conventional simulation method is used as the benchmark, and the performance is compared with that of the BCU Shadowing method.
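
The role of the controlling UEP can be illustrated with the classical energy function of a single-machine-infinite-bus system, a far simpler setting than the multi-machine models used in the paper; the function name and parameter values below are illustrative only:

```python
import math

def smib_critical_energy(pm, pmax):
    """Critical energy for a single-machine-infinite-bus system.

    Stable equilibrium: delta_s = asin(pm/pmax); the controlling unstable
    equilibrium (the SMIB analogue of the CUEP) is pi - delta_s. The energy
    function is V(delta, omega) = 0.5*M*omega^2 - pm*(delta - delta_s)
    - pmax*(cos(delta) - cos(delta_s)); the stability margin is V at the
    controlling UEP minus the system energy at fault clearing.
    """
    delta_s = math.asin(pm / pmax)
    delta_u = math.pi - delta_s  # controlling UEP
    # Critical energy: potential energy evaluated at the UEP (omega = 0).
    v_cr = -pm * (delta_u - delta_s) - pmax * (math.cos(delta_u) - math.cos(delta_s))
    return delta_s, delta_u, v_cr

ds, du, vcr = smib_critical_energy(pm=0.8, pmax=2.0)
print(round(ds, 3), round(du, 3), round(vcr, 3))
```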

Keywords: power system, transient stability, critical trajectory method, energy function method

Procedia PDF Downloads 371
2680 Formulation Policy of Criminal Sanction in Indonesian Criminal Justice System

Authors: Dini Dewi Heniarti

Abstract:

One of the criminal sanctions most often imposed by judges is imprisonment. The imposition of imprisonment has been the subject of contentious debate and criticism among various groups for a long time. In practice, imprisonment leads to complicated problems. The impacts of the reckless imposition of imprisonment include, among others, overcapacity of correctional institutions and increasing crime within correctional facilities. Therefore, there is a need to renew the existing condemnation paradigm, considering the developing phenomena associated with penal imposition. Imprisonment, as one element of the Indonesian penal system, is an important and integral part of the other elements. The philosophy of the current penal system, which still refers to the Criminal Code, still carries the values of retaliation and fault-finding toward the offender. Therefore, it is important to construct a new line of thought in order to realize a penal system that is represented in the formulation of a more humanistic criminal sanction.

Keywords: criminal code, criminal sanction, Indonesian legal system, reconstruction of thought

Procedia PDF Downloads 218
2679 Comparative Evaluation of Pharmacologically Guided Approaches (PGA) to Determine Maximum Recommended Starting Dose (MRSD) of Monoclonal Antibodies for First Clinical Trial

Authors: Ibraheem Husain, Abul Kalam Najmi, Karishma Chester

Abstract:

First-in-human (FIH) studies are a critical step in the clinical development of any molecule that has shown therapeutic promise in preclinical evaluations, since the translation of preclinical research and safety studies into clinical development is crucial to the successful development of monoclonal antibodies for the treatment of human diseases. Therefore, comparisons between the USFDA approach and nine pharmacologically guided approaches (PGA) (simple allometry, maximum life span potential, brain weight, rule of exponent (ROE), two-species methods and one-species methods) were made to determine the maximum recommended starting dose (MRSD) for first-in-human clinical trials using four drugs, namely denosumab, bevacizumab, anakinra and omalizumab. In our study, the predicted pharmacokinetic (PK) parameters and the estimated first-in-human doses of the antibodies were compared with the observed human values. The study indicated that the clearance and volume of distribution of antibodies can be predicted in humans with reasonable accuracy, and that a good estimate of the first human dose can be obtained from the predicted human clearance and volume of distribution. A pictorial method-evaluation chart based on fold errors was also developed for the simultaneous evaluation of the various methods.
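
Of the approaches listed, simple allometry is the easiest to sketch: a pharmacokinetic parameter such as clearance is scaled by the body-weight ratio raised to a fixed exponent (0.75 is the conventional default for clearance; the ROE adjusts the exponent using maximum life span potential or brain weight). The species weight and clearance value below are illustrative, not the paper's data:

```python
def allometric_clearance(cl_animal, w_animal, w_human=70.0, exponent=0.75):
    """Simple allometry: scale a clearance value by the body-weight
    ratio raised to a fixed exponent (0.75 is conventional for CL)."""
    return cl_animal * (w_human / w_animal) ** exponent

# Hypothetical clearance of 1.0 mL/min in a 0.25 kg rat, scaled to a
# 70 kg human; the result is roughly 68 mL/min.
cl_human = allometric_clearance(cl_animal=1.0, w_animal=0.25)
print(round(cl_human, 1))
```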

Keywords: clinical pharmacology (CPH), clinical research (CRE), clinical trials (CTR), maximum recommended starting dose (MRSD), clearance and volume of distribution

Procedia PDF Downloads 364
2678 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories to identify errors arising from the existing system, and it proposes modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to assign vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve the classification accuracy. The results of this study include an updated vehicle classification table with a 5.1% reduction in total error, a step-by-step procedure for evaluating vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations for the FHWA 13-category rule set indicate that the vehicle classification definitions in this scheme need to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments and to the consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design and maintenance of transportation infrastructure.
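
The comparison step described above (matching AVC output against video ground truth and tallying the error sources) can be sketched as follows; the classes and counts are hypothetical:

```python
def misclassification_summary(ground_truth, avc_output):
    """Compare AVC-assigned FHWA classes against video ground truth,
    returning the overall error rate and a tally of error sources
    keyed by (true class, assigned class)."""
    errors = {}
    wrong = 0
    for true_cls, avc_cls in zip(ground_truth, avc_output):
        if true_cls != avc_cls:
            wrong += 1
            errors[(true_cls, avc_cls)] = errors.get((true_cls, avc_cls), 0) + 1
    return wrong / len(ground_truth), errors

# Hypothetical sample: FHWA classes from video vs. the AVC.
truth = [2, 2, 3, 5, 9, 9, 2, 3]
avc   = [2, 3, 3, 5, 9, 8, 2, 3]
rate, errs = misclassification_summary(truth, avc)
print(rate)   # 0.25
print(errs)   # {(2, 3): 1, (9, 8): 1}
```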

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 170
2677 Analysis of Transformer by Gas and Moisture Sensor during Laboratory Time Monitoring

Authors: Miroslav Gutten, Daniel Korenciak, Milan Simko, Milan Chupac

Abstract:

Ensuring the reliable and correct functioning of transformers is the main purpose of on-line non-destructive diagnostic tools, which allow status parameters to be tracked accurately. Devices for on-line diagnostics are very costly. However, there are devices whose price is relatively low and which, when used correctly, can perform complex diagnostics. One of these devices is the HYDRAN M2 sensor, which is used to detect the moisture and gas content of the insulation oil. Using the HYDRAN M2 sensor in combination with temperature and load measurements and physicochemical analysis, an economically inexpensive diagnostic system can be built whose use is not restricted to distribution transformers. This system was tested in an educational laboratory environment on a measured 22/0.4 kV oil transformer. From the conclusions presented in this article, it is possible to determine which kind of fault occurred in the transformer and what its impact was on the temperature, the evolution of gases and the water content.

Keywords: transformer, diagnostics, gas and moisture sensor, monitoring

Procedia PDF Downloads 371
2676 Design and Simulation of Low Cost Boost-Half- Bridge Microinverter with Grid Connection

Authors: P. Bhavya, P. R. Jayasree

Abstract:

This paper presents a low-cost, transformer-isolated boost half-bridge micro-inverter for a single-phase grid-connected PV system. Since the output voltage of a single PV panel is as low as 20-50 V, a high-voltage-gain inverter is required for the PV panel to connect to the single-phase grid. The micro-inverter has two stages: an isolated dc-dc converter stage and an inverter stage with a dc link. To achieve MPPT and to step up the PV voltage to the dc-link voltage, a transformer-isolated boost half-bridge dc-dc converter is used. To feed a synchronised sinusoidal current with unity power factor into the grid, a pulse-width-modulated full-bridge inverter with an LCL filter is used. A variable-step-size maximum power point tracking (MPPT) method is adopted so that fast tracking and high MPPT efficiency are both obtained. An AC voltage meeting the grid requirements is obtained at the output of the inverter. A high power factor (>0.99) is obtained at both heavy and light loads. The paper gives the results of a computer simulation of the grid-connected solar PV system using MATLAB/Simulink and the SimPowerSystems tool.
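
A variable-step perturb-and-observe scheme is one common way to realize the variable-step-size MPPT mentioned above; the abstract does not specify the exact rule used, so the following is a sketch under that assumption, with a toy PV power curve:

```python
def pv_power(v):
    """Toy PV curve with its maximum power point at 30 V / 90 W."""
    return -0.1 * (v - 30.0) ** 2 + 90.0

def variable_step_po(v=20.0, k=0.5, iters=40):
    """Variable-step perturb-and-observe: the perturbation size is
    proportional to |dP/dV|, so steps are large far from the MPP (fast
    tracking) and shrink near it (low steady-state oscillation)."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        # |dP/dV|: known analytically for the toy curve; in a real
        # converter it is estimated from voltage/current samples.
        dp_dv_mag = abs(-0.2 * (v - 30.0))
        v_new = v + direction * k * dp_dv_mag
        p_new = pv_power(v_new)
        if p_new < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        v, p_prev = v_new, p_new
    return v

v_final = variable_step_po()
print(round(v_final, 2))  # close to the 30 V MPP
```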

Keywords: boost-half-bridge, micro-inverter, maximum power point tracking, grid connection, MATLAB/Simulink

Procedia PDF Downloads 329
2675 Identification and Characterization of Antimicrobial Peptides Isolated from Entophytic Bacteria and Their Activity against Multidrug-Resistance Gram-Negative Bacteria in South Korea

Authors: Maryam Beiranvand

Abstract:

Multi-drug resistance in various microorganisms has increased globally in many healthcare facilities, and the reduced effectiveness of antimicrobial drug therapies is making infection control troublesome. Since 1980, no new class of antimicrobial drug has been identified, even though combinations of antibiotic drugs have been discovered almost every decade. Between 1981 and 2006, over 70% of novel pharmaceuticals and chemical agents came from natural sources, and microorganisms have yielded almost 22,000 natural compounds. The identification of antimicrobial components from endophytic bacteria could therefore help overcome the threat posed by multi-drug-resistant strains. This project aims to identify and characterize antimicrobial peptides isolated from endophytic bacteria and to assess their activity against multidrug-resistant Gram-negative bacteria in South Korea. Endophytic Paenibacillus polymyxa 4G3, isolated from the plant Gynura procumbens, exhibited considerable antimicrobial activity against methicillin-resistant Staphylococcus aureus and Escherichia coli. Annotation with Rapid Annotations using Subsystems Technology showed that the total size of the draft genome was 5,739,603 bp, containing 5,178 genes with 45.8% G+C content. Genome annotation using antiSMASH version 6.0.0 predicted the most common types of non-ribosomal peptide synthetase (NRPS) and polyketide synthase (PKS) clusters. In this study, diethylaminoethyl cellulose (DEAEC) resin was used as the first step in purifying the unknown peptides, and the target protein was then identified using hydrophilic and hydrophobic solutions, optimal pH, and step-by-step tests of antimicrobial activity. The crude extract was subjected to C18 chromatography and eluted with 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100% methanol, respectively. Only the fractions eluted with 20%-60% methanol demonstrated good antimicrobial activity against MDR E. coli.
The concentration of the active fraction was measured by the Bradford assay and by A280 protein quantification (Thermo Fisher Scientific), and its purity was confirmed on a 10% acrylamide SDS-PAGE resolving gel. Based on the combined results of the analysis and purification, our study showed that P. polymyxa 4G3 has a high potential for producing polymyxin E and bacitracin with novel functions against bacterial pathogens.

Keywords: endophytic bacteria, antimicrobial activity, antimicrobial peptide, whole genome sequencing analysis, multi-drug resistant gram-negative bacteria

Procedia PDF Downloads 58
2674 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control, and the latest advances in computer vision are a powerful tool for facing the problem. The analysis of RGB (red, green, blue) images with artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial fields, and the classification system has been tested: the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreement higher than 95%. The system will provide the input for an online spraying system; thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
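
The mAP metric is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes: a detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Predicted vs. ground-truth weed bounding boxes (pixels, illustrative):
# they share half of each box, so IoU = 50 / 150 = 1/3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```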

Keywords: deep learning, object detection, cnn, tomato, weeds

Procedia PDF Downloads 92
2673 Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and it proceeds in two steps, called the expectation and maximization steps. First, in the expectation step, using Bayes' formula and the LA technique, we derive an approximation of the posterior distribution of the latent function, which indicates the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimators for the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps repeat iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. The experimental results reveal that the proposed algorithm performs well on this data set.
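
The heart of the expectation step is finding the mode of the log posterior by Newton's method. The sketch below reduces this to a single observation with a logistic (binary) likelihood and a scalar Gaussian prior, which is far simpler than the paper's multi-class GP setting but shows the same Newton iteration pattern:

```python
import math

def sigmoid(f):
    return 1.0 / (1.0 + math.exp(-f))

def laplace_mode(y=1, prior_var=4.0, iters=20):
    """Newton iteration for the mode of log p(y|f) + log N(f; 0, prior_var),
    the core computation of the Laplace-approximation E step, reduced
    here to one observation with a logistic likelihood (y in {0, 1})."""
    f = 0.0
    for _ in range(iters):
        s = sigmoid(f)
        grad = (y - s) - f / prior_var            # d/df of the log posterior
        hess = -s * (1.0 - s) - 1.0 / prior_var   # always negative
        f -= grad / hess                          # Newton update
    return f

f_hat = laplace_mode()
print(round(f_hat, 4))
```

At the mode, the likelihood gradient balances the prior's pull toward zero, i.e. 1 − sigmoid(f) = f / prior_var; the Laplace approximation then uses the negative Hessian at this point as the posterior precision.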

Keywords: bayesian rule, gaussian process classification model with multiclass, gaussian process prior, human action classification, laplace approximation, variational EM algorithm

Procedia PDF Downloads 323
2672 Novel Formal Verification Based Coverage Augmentation Technique

Authors: Surinder Sood, Debajyoti Mukherjee

Abstract:

Formal verification techniques have become widely popular in pre-silicon verification as an alternative to constrained-random simulation-based techniques. This paper proposes a novel formal-verification-based coverage augmentation technique for verifying complex RTL functionality faster. The proposed approach relies on combining the coverage analysis coming from simulation with that from formal verification. Besides this, the functional qualification framework not only helps in improving coverage at a faster pace but also aids in maturing and qualifying the formal verification infrastructure. The proposed technique has helped to achieve faster verification sign-off, resulting in faster time-to-market. The design chosen had complex control and data paths and many configurable options to meet multiple specification needs. The flow is generic and tool-independent, so leveraging it across projects and designs will be much easier.

Keywords: COI (cone of influence), coverage, formal verification, fault injection

Procedia PDF Downloads 107
2671 Sync Consensus Algorithm: Trying to Reach an Agreement at Full Speed

Authors: Yuri Zinchenko

Abstract:

Recently, distributed storage systems have come to be used more and more in various aspects of everyday life. They provide such necessary properties as scalability, fault tolerance, durability, and others. At the same time, not only reliable but also fast data storage remains one of the most pressing issues in this area. That brings us to the consensus algorithm, one of the most important components, with a great impact on the functionality of a distributed system. This paper is the result of an analysis of several well-known consensus algorithms, such as Paxos and Raft. The algorithm it offers, called Sync, promotes, but does not insist on, simultaneous writing to the nodes (which positively affects the overall writing speed) and tries to minimize the system's inactive time. This allows the nodes to reach agreement on the system state in a shorter period, which is a critical factor for distributed systems. When developing Sync, a lot of attention was also paid to such criteria as simplicity and intuitiveness, the importance of which is difficult to overestimate.
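
Sync itself is not specified in the abstract, but the majority-quorum principle underlying Paxos, Raft, and related algorithms is easy to sketch: any two majorities of the same cluster intersect, so two conflicting writes can never both be acknowledged by a quorum.

```python
def quorum_size(n):
    """Smallest majority of n nodes; any two majorities must overlap,
    which lets a Paxos/Raft-style system stay consistent while up to
    (n - 1) // 2 nodes are down."""
    return n // 2 + 1

def committed(acks, n):
    """A write is durable once a majority has acknowledged it."""
    return len(acks) >= quorum_size(n)

# 5-node cluster: 3 acknowledgements commit a write; 2 do not.
print(committed({"n1", "n2", "n3"}, n=5))  # True
print(committed({"n1", "n2"}, n=5))        # False
```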

Keywords: sync, consensus algorithm, distributed system, leader-based, synchronization

Procedia PDF Downloads 49
2670 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into the potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a mixture of experts, we are able to simultaneously ask the experts about the user's most probable future actions.
We show that for those participants that learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
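
A minimal sketch of how a fitted HMM yields next-action predictions: filter a belief over the hidden states with the forward recursion, propagate one step through the transition matrix, and mix the emission distributions. The two "strategy" states and all probabilities below are hypothetical, and the paper's model additionally uses hierarchically related states and a mixture-of-experts layer:

```python
def forward_predict(obs_seq, pi, trans, emit):
    """HMM forward pass, then one-step-ahead prediction of the next
    observation: p(o_next) = sum over states of the propagated belief
    times the emission probabilities."""
    n = len(pi)
    # Filtering: belief over hidden strategies given observations so far.
    belief = [pi[s] * emit[s][obs_seq[0]] for s in range(n)]
    z = sum(belief)
    belief = [b / z for b in belief]
    for o in obs_seq[1:]:
        belief = [sum(belief[s] * trans[s][t] for s in range(n)) * emit[t][o]
                  for t in range(n)]
        z = sum(belief)
        belief = [b / z for b in belief]
    # Propagate one step, then mix emissions to predict the next action.
    prop = [sum(belief[s] * trans[s][t] for s in range(n)) for t in range(n)]
    return [sum(prop[s] * emit[s][o] for s in range(n))
            for o in range(len(emit[0]))]

# Two hypothetical strategies ("explore", "exploit") and two actions.
pi = [0.5, 0.5]
trans = [[0.7, 0.3], [0.1, 0.9]]   # explorers tend to switch to exploiting
emit = [[0.5, 0.5], [0.1, 0.9]]    # exploiters mostly pick action 1
pred = forward_predict([1, 1, 1], pi, trans, emit)
print([round(p, 3) for p in pred])  # action 1 is the likely next choice
```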

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 243
2669 Second Generation Biofuels: A Futuristic Green Deal for Lignocellulosic Waste

Authors: Nivedita Sharma

Abstract:

The global demand for fossil fuels is very high, but their use is not sustainable, since reserves are declining. Additionally, fossil fuels are responsible for the accumulation of greenhouse gases. The emission of greenhouse gases from the transport sector can be reduced by substituting biofuels for fossil fuels; thus, renewable fuels capable of sequestering carbon dioxide are in high demand. Second-generation biofuels, which require lignocellulosic biomass as a substrate and ultimately produce ethanol, fall largely into this category. Bioethanol is a favorable and near carbon-neutral renewable biofuel that reduces tailpipe pollutant emissions and improves ambient air quality. Lignocellulose consists of three main components, cellulose, hemicellulose and lignin, of which the first two can be converted to ethanol with the help of microbial enzymes. Enzymatic hydrolysis of lignocellulosic biomass, the first step, is considered one of the most efficient and least polluting methods for generating fermentable hexose and pentose sugars, which are subsequently fermented to power alcohol by yeasts in the second step of the process. In the present work, a complete bioconversion process is presented in detail: microorganisms producing the key hydrolytic enzymes, cellulase and xylanase, were isolated from different niches, screened for enzyme production, and identified by phenotyping and genotyping; the enzymes were then produced, purified, and applied to the saccharification of different lignocellulosic biomasses, followed by fermentation of the hydrolysate to ethanol with high yield.
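
The two-step process (enzymatic saccharification, then yeast fermentation) has a simple stoichiometric ceiling that is often used for quick yield estimates: hydrolysis adds one water per anhydroglucose unit (162 → 180 g/mol), and fermentation gives at most 0.511 g ethanol per g glucose (2 ethanol + 2 CO2 per glucose). A sketch with illustrative numbers, not the paper's data:

```python
def theoretical_ethanol(biomass_kg, glucan_frac, hydrolysis_eff, ferment_eff):
    """Stoichiometric ethanol from the cellulose (glucan) fraction:
    hydrolysis adds a water per glucose unit (162 -> 180 g/mol),
    and fermentation yields at most 0.511 g ethanol per g glucose."""
    glucose = biomass_kg * glucan_frac * (180.0 / 162.0) * hydrolysis_eff
    return glucose * 0.511 * ferment_eff

# 1000 kg of a hypothetical feedstock, 40 % cellulose, 90 % efficiencies
# at each step: roughly 184 kg of ethanol at most.
print(round(theoretical_ethanol(1000, 0.40, 0.90, 0.90), 1))
```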

Keywords: cellulase, xylanase, lignocellulose, bioethanol, microbial enzymes

Procedia PDF Downloads 85
2668 Implementation of a Paraconsistent-Fuzzy Digital PID Controller in a Level Control Process

Authors: H. M. Côrtes, J. I. Da Silva Filho, M. F. Blos, B. S. Zanon

Abstract:

In modern society, rising quality standards in industrial production demand new techniques of control and machinery automation. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies in both Paraconsistent Logic and Fuzzy Logic. Paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The paraconsistent analysis yields two values, Dc (Degree of Certainty) and Dct (Degree of Contradiction), which are treated according to Fuzzy Logic theory; the resulting output of the logic actions is a single crisp value, which is used to control the dynamic system. The application of the proposed model was demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and compared to this software's equivalent Digital PID function under standard step excitation. A level control plant was then modeled so that the controller could be executed on a physical model, bringing the tests closer to actual operating conditions. For this, the control parameters (proportional, integral, and derivative) were determined for the configurations of both the conventional Digital PID controller and the Paraconsistent-Fuzzy Digital PID controller, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of the comparison between the Paraconsistent-Fuzzy Digital PID controller and the conventional Digital PID controller in the level control process were presented.
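The paraconsistent analysis step can be sketched in a few lines, assuming the standard PAL2v definitions Dc = mu - lambda and Dct = mu + lambda - 1 for favorable evidence mu and unfavorable evidence lambda; the abstract does not spell out the formulas, so these are taken from the usual PAL2v literature.

```python
def pal2v(mu, lam):
    """Paraconsistent analysis of an annotation (mu, lam), both in [0, 1]:
    mu  = degree of favorable evidence
    lam = degree of unfavorable evidence
    Returns (Dc, Dct) per the usual PAL2v definitions (an assumption here,
    not quoted from the paper)."""
    Dc = mu - lam          # degree of certainty
    Dct = mu + lam - 1.0   # degree of contradiction
    return Dc, Dct

# Fully favorable, no unfavorable evidence: certain 'true', no contradiction
print(pal2v(1.0, 0.0))  # (1.0, 0.0)
# Both evidence degrees high: maximally contradictory annotation
print(pal2v(1.0, 1.0))  # (0.0, 1.0)
```

In the controller, these two outputs would feed the fuzzy inference stage, which defuzzifies them into the single crisp control value mentioned in the abstract.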

Keywords: fuzzy logic, paraconsistent annotated logic, level control, digital PID

Procedia PDF Downloads 271
2667 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow

Authors: Alex Fedoseyev

Abstract:

This study investigates a simplified generalized hydrodynamic equation (GHE) model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE was derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. Compared to the Navier-Stokes equations (NSE), the GHE has additional terms for temporal and spatial fluctuations. These terms carry a timescale multiplier τ, and the GHE reduces to the NSE when τ is zero. The nondimensional τ is the product of the Reynolds number and the squared length-scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re>450; one or two additional equations must be added to the NSE as a turbulence model, typically with two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, while its turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions of the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field.
The mean NSE velocity was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L=5.33 that has no backward flow. A standard k-ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial 20-25%, so a more sophisticated turbulence model would be needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS; no turbulence model was required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and a single Reynolds number; further work will extend it to 3D flow and higher Re.
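The fluctuation timescale multiplier defined above is a one-line computation; the length scales in the example below are illustrative values only, since the abstract does not report the paper's l.

```python
def ghe_timescale(Re, l, L):
    """Nondimensional fluctuation timescale multiplier in the GHE:
    tau = Re * (l / L)**2, with l the apparent Kolmogorov length scale
    and L a hydrodynamic length scale. tau -> 0 recovers the NSE."""
    return Re * (l / L) ** 2

# Illustrative values only (the paper's l is not stated in the abstract)
print(ghe_timescale(132000, 1e-3, 1.0))  # ~ 0.132
```

The key point the formula encodes is that τ grows with Re, so the fluctuation terms become significant exactly in the regime where the plain 2D NSE stops matching experiment.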

Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow

Procedia PDF Downloads 47
2666 Generation of Quasi-Measurement Data for On-Line Process Data Analysis

Authors: Hyun-Woo Cho

Abstract:

To ensure the safety of a manufacturing process, one should quickly identify the assignable cause of a fault on an on-line basis. To this end, many statistical techniques, including linear and nonlinear methods, have been utilized. However, such methods suffer from a major problem of small sample size, mostly attributable to the characteristics of the empirical models used as reference models. This work presents a new method to overcome the insufficiency of measurement data in monitoring and diagnosis tasks. Quasi-measurement data are generated from existing data based on two indices, similarity and importance. The performance of the method is demonstrated using a real data set. The results show that the presented method handles the insufficiency problem successfully. In addition, it is quite efficient in terms of computational speed and memory usage, so on-line implementation for monitoring and diagnosis purposes is straightforward.
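The abstract does not define the similarity and importance indices, so the sketch below substitutes a simple stand-in: each quasi-sample is generated by interpolating an existing sample toward its nearest neighbour (nearest-neighbour distance playing the role of the similarity index). It illustrates the idea of augmenting a small data set from itself, not the paper's actual method.

```python
import numpy as np

def quasi_samples(X, n_new, alpha=0.5, rng=None):
    """Generate quasi-measurement rows by interpolating each picked sample
    toward its most similar existing sample. Nearest-neighbour distance is
    a stand-in for the paper's (unspecified) similarity index."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        j = int(np.argmin(d))              # most similar existing sample
        w = rng.uniform(0, alpha)          # interpolation weight
        out.append((1 - w) * X[i] + w * X[j])
    return np.array(out)

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]])
Q = quasi_samples(X, 5, rng=0)
print(Q.shape)  # (5, 2)
```

Because each quasi-sample lies on a segment between two real samples, the augmented set stays inside the region spanned by the measurements, which is the behaviour one wants from reference-model augmentation.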

Keywords: data analysis, diagnosis, monitoring, process data, quality control

Procedia PDF Downloads 469
2665 Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

Authors: Hadi Farhadian, Homayoon Katibeh

Abstract:

In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections, and the results of all the methods were compared. The study shows reasonable agreement among the methods except for two sections of the tunnel, where significant discrepancies between the numerical and analytical results arise mainly from the model geometry and high overburden. SGR and the analytical and numerical calculations confirm the high concentration of seepage inflow in fault zones. The maximum seepage flow into the tunnel, occurring in the crushed zone, was estimated at 0.425 L/s per meter by the analytical method and 0.628 L/s per meter by the numerical method. Based on the SGR method, six of the 14 sections along the Amirkabir tunnel axis fall into the "No Risk" class, which is supported by analytical and numerical seepage values of less than 0.04 L/s per meter.
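For readers unfamiliar with analytical seepage estimates, the sketch below evaluates one common closed-form expression, Goodman's steady-state inflow formula; the abstract does not state which analytical formula the authors used, and the parameter values here are illustrative, not the Amirkabir data.

```python
import math

def goodman_inflow(k, h, r):
    """Steady-state groundwater inflow per unit tunnel length (m^2/s) by
    Goodman's classical formula Q = 2*pi*k*h / ln(2h/r), with hydraulic
    conductivity k (m/s), water head above the tunnel h (m), and tunnel
    radius r (m). One common analytical estimate, not necessarily the
    paper's."""
    return 2 * math.pi * k * h / math.log(2 * h / r)

# Illustrative parameters only
q = goodman_inflow(k=1e-7, h=100.0, r=3.0)
print(round(q * 1000, 4), "L/s per m of tunnel")  # ~ 0.015
```

The formula makes the paper's qualitative finding plausible: inflow scales linearly with conductivity k, so fault and crushed zones with locally high k dominate the total seepage.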

Keywords: water seepage, Amirkabir tunnel, analytical method, DEM, SGR

Procedia PDF Downloads 463
2664 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI

Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova

Abstract:

The article presents a Matlab GUI tool designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed mechatronic system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic feedback control form is proposed using the Naslin method, the modulus optimum criterion, and the inverse dynamics method. The cascade control form is proposed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. Sensitivity is analysed as the absolute and relative sensitivity of a system function to a change in the value of a chosen parameter of the mechatronic system or of the control subsystem. Tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of torsion stiffness, torsion damping, the inertia moments of the motor and the load, and the controller parameters. Sensitivity and tolerance are monitored in terms of the impact of a parameter change on the system step response and on the logarithmic frequency-response characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. Sensitivity and tolerance are graphically represented as 2D graphs of the sensitivity or tolerance of the system function and as 3D/2D static/interactive graphs of the step/frequency response.
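The relative sensitivity the tool plots can be illustrated generically: S = (p / f(p)) * df/dp measures the percentage change in a system function per percentage change in a parameter. The sketch below evaluates it numerically for a scalar function; it is a Python illustration of the quantity, not the article's Matlab code, and the example plant is invented.

```python
def relative_sensitivity(f, p, eps=1e-6):
    """Relative sensitivity S = (p / f(p)) * df/dp of a scalar system
    function f with respect to parameter p, via central differences."""
    df = (f(p * (1 + eps)) - f(p * (1 - eps))) / (2 * p * eps)
    return p * df / f(p)

# Example: DC gain of a first-order plant G(s) = K/(T*s + 1) w.r.t. K.
# The DC gain equals K itself, so the relative sensitivity must be 1:
# a 1% change in K gives a 1% change in gain.
K = 2.0
print(relative_sensitivity(lambda K: K, K))  # ~ 1.0
```

Tolerance analysis then inverts the question: instead of asking how much the response moves for a given parameter change, it asks how far a parameter may move before the response (or stability) leaves an allowed band.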

Keywords: mechatronic systems, Matlab GUI, sensitivity, tolerance

Procedia PDF Downloads 423
2663 Probabilistic Safety Assessment of Koeberg Spent Fuel Pool

Authors: Sibongiseni Thabethe, Ian Korir

Abstract:

The effective management of spent fuel pool (SFP) safety has been raised as one of the emerging issues for further enhancing nuclear installation safety after the Fukushima accident of March 11, 2011. Before then, SFP safety-related issues had mainly focused on (a) controlling the configuration of the fuel assemblies in the pool with no loss of pool coolant and (b) ensuring adequate pool storage space to prevent fuel criticality owing to chain reactions of the fission products, while maintaining the ability for neutron absorption to keep the fuel cool. A probabilistic safety assessment (PSA) was performed using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) computer code. Event tree and fault tree analyses were carried out to develop a PSA model for the Koeberg SFP. We present preliminary PSA results for events that lead to boiling and cause fuel uncovering, resulting in possible fuel damage in the Koeberg SFP.
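The quantification behind a fault tree reduces, for independent basic events, to combining probabilities through AND and OR gates. The sketch below shows that arithmetic on a hypothetical mini-tree; the event names and probabilities are invented for illustration and are not the Koeberg model.

```python
def p_or(*probs):
    """Top-event probability of an OR gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Top-event probability of an AND gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical mini-tree: pool boiling requires loss of cooling AND failure
# of the makeup system, where loss of cooling is pump failure OR loss of
# offsite power (all probabilities illustrative).
loss_of_cooling = p_or(1e-3, 5e-4)
boiling = p_and(loss_of_cooling, 1e-2)
print(f"{boiling:.3e}")
```

Codes such as SAPHIRE automate exactly this propagation over trees with hundreds of gates, plus cut-set generation and uncertainty analysis.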

Keywords: computer code, fuel assemblies, probabilistic risk assessment, spent fuel pool

Procedia PDF Downloads 159
2662 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method builds on Taguchi's design-of-experiments method and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters, considering weighted output response characteristics, using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal-array design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters giving minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
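The GRA step that collapses the three responses into a single grade can be sketched as follows, using the standard smaller-the-better normalisation and the usual distinguishing coefficient ζ = 0.5; the run data below are invented for illustration, and equal response weights are assumed.

```python
import numpy as np

def grey_relational_grade(Y, weights=None, zeta=0.5):
    """Grey relational analysis for smaller-the-better responses.
    Y: (runs x responses) matrix, e.g. [roughness, burr height, dia. error].
    Returns one grade per run; the best run has the highest grade."""
    Y = np.asarray(Y, float)
    # Normalise each response: 1 = best (minimum), 0 = worst (maximum)
    x = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))
    delta = 1.0 - x                         # deviation from the ideal
    # Grey relational coefficient with distinguishing coefficient zeta
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    w = np.full(Y.shape[1], 1.0 / Y.shape[1]) if weights is None \
        else np.asarray(weights, float)
    return xi @ w                           # weighted grade per run

# Three hypothetical runs: run 0 is best on every response
Y = [[1.0, 0.10, 0.01],
     [2.0, 0.20, 0.02],
     [4.0, 0.40, 0.05]]
grades = grey_relational_grade(Y)
print(int(np.argmax(grades)))  # prints 0: run 0 dominates every response
```

With an L27 array, one grade per run turns the 27-run multi-response table into a single-response Taguchi problem, which is exactly the transformation the abstract describes.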

Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error

Procedia PDF Downloads 304
2661 Designing ZIF67 Derivatives Using Ammonia-Based Fluorine Complex as Structure-Directing Agent for Energy Storage Applications

Authors: Lu-Yin Lin

Abstract:

The morphology of an electroactive material is closely related to its energy storage ability, and a structure-directing agent (SDA) can be used to design electroactive materials with favorable surface properties. Zeolitic imidazolate framework 67 (ZIF67) is a promising electroactive material for energy storage devices, but the SDA concept has rarely been applied to designing ZIF67 derivatives in previous studies. An in-situ technique with ammonium fluoride (NH₄F) as SDA is proposed to produce a ZIF67 derivative with greatly improved energy storage ability. Motivated by this effective in-situ technique, NH₄F, ammonium bifluoride (NH₄HF₂), and ammonium tetrafluoroborate (NH₄BF₄) are used here for the first time as SDAs to synthesize ZIF67 derivatives in a one-step solution process as electroactive materials for energy storage devices. The mechanisms of forming the ZIF67 derivatives synthesized with the different SDAs are discussed to explain the SDA effects on physical and electrochemical properties. The largest specific capacitance (CF) of 1527.0 F g⁻¹ and capacity of 296.9 mAh g⁻¹ are obtained for the ZIF67 derivative prepared using NH₄BF₄ as SDA. The energy storage device composed of the optimal ZIF67 derivative and carbon electrodes delivers a maximum energy density of 15.1 Wh kg⁻¹ at a power density of 857 W kg⁻¹. A CF retention of 90% and a Coulombic efficiency larger than 98% are also obtained after 5000 cycles.

Keywords: ammonium bifluoride, ammonium tetrafluoroborate, energy storage device, one-step solution process, structure-directing agent, zeolitic imidazolate framework 67

Procedia PDF Downloads 63
2660 Electromagnetic Simulation of Underground Cable Perforation by Nail

Authors: Ahmed Nour El Islam Ayad, Tahar Rouibah, Wafa Krika, Houari Boudjella, Larab Moulay, Farid Benhamida, Selma Benmoussa

Abstract:

The purpose of this study is to evaluate the electromagnetic field of a very-high-voltage underground cable perforated by a nail. This work presents a numerical simulation of the electromagnetic field of a 400 kV line after perforation by a ferrous nail, for four positions of the nail tip at different distances. From the results for a longitudinal section, we observe and evaluate the distribution and variation of the electromagnetic field in the cable and the earth. As the nail approaches the underground power cable, the distribution of the magnetic field changes and takes several forms; the magnetic field increases and becomes very large when the nail breaks through the metal screen, producing significant leakage of the electric field, characterized by a large electric arc and/or electric discharge to earth, and subsequently a fault in the electrical network. These electromagnetic analysis results help to detect defects in underground cables.

Keywords: underground, electromagnetic, nail, defect

Procedia PDF Downloads 214
2659 Design of Replication System for Computer-Generated Hologram in Optical Component Application

Authors: Chih-Hung Chen, Yih-Shyang Cheng, Yu-Hsin Tu

Abstract:

Holographic optical elements (HOEs) have recently become some of the most suitable components in optoelectronic technology owing to the demand for compact product systems. Computer-generated holography (CGH) is a well-known technology for HOE production. In some cases, a well-designed diffractive optical element with multifunctional components is also needed for an advanced optoelectronic system. A spatial light modulator (SLM) is one of the key components capable of displaying CGH patterns and is widely used in various applications, such as image projection systems. As for multifunctional components, such as phase and amplitude modulation of light, a high-resolution hologram produced with a multiple-exposure procedure is also a suitable candidate. However, under multiple-exposure holographic recording, the diffraction efficiency of the final hologram is inevitably lower than with a single-exposure process. In this study, a two-step holographic recording method is designed, comprising master hologram fabrication and replicated hologram production. Since the diffraction efficiency of multiple-exposure holograms carries a reduction factor of M² (for M exposures), single exposure is more efficient for hologram replication. In the second step, holographic replication, a stable optical system with one-shot copying is introduced. For commercial application, this concept of holographic copying can be used to obtain duplicates of HOEs with higher optical performance.
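The M² reduction factor cited above makes the motivation for one-shot copying easy to quantify; the single-exposure efficiency value below is illustrative, not measured data from the study.

```python
def multiplexed_efficiency(eta_single, M):
    """Diffraction efficiency of one of M multiplexed holograms, assuming
    the 1/M^2 scaling cited in the abstract for M exposures."""
    return eta_single / M**2

# Four exposures cut the per-hologram efficiency 16-fold
print(multiplexed_efficiency(0.80, 4))  # 0.05
```

This is why the proposed scheme confines the multiple exposures to the master hologram and replicates it in a single shot: the copy avoids the 1/M² penalty entirely.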

Keywords: holographic replication, holography, one-shot copying, optical element

Procedia PDF Downloads 147
2658 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and the consequent increase in water demand in cities, the amount of contaminants entering water resources is increasing, which can harm the quality of downstream water. Predicting the concentration of discharged pollutants at different times and distances within the area of interest is therefore highly important for carrying out preventive and control measures, as well as for avoiding consumption of contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain by the Backward Time, Backward Space (BTBS) scheme. The discretized equations for each node are then derived according to the initial condition, boundary conditions, and point contaminant sources. Finally, with appropriate time and space steps, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. Simulation results show that the BTBS differencing scheme together with the FVM is an appropriate numerical approach for solving the partial differential equation of transport in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
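A one-dimensional analogue of the scheme described above can be sketched compactly: backward (implicit) time stepping with a backward space difference for advection, giving an unconditionally stable upwind-implicit update. This is a Python illustration under simplifying assumptions (1-D, uniform grid, constant u ≥ 0 and D, zero Dirichlet boundaries), not the paper's 2-D MATLAB code.

```python
import numpy as np

def btbs_ade(c0, u, D, dx, dt, steps):
    """1-D advection-diffusion, Backward-Time Backward-Space (implicit
    upwind for u >= 0), Dirichlet c = 0 at both ends. Each step solves
    (1 + Cr + 2Df) c_i - (Cr + Df) c_{i-1} - Df c_{i+1} = c_i^old."""
    n = len(c0)
    Cr, Df = u * dt / dx, D * dt / dx**2    # Courant and diffusion numbers
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -Cr - Df
        A[i, i]     = 1.0 + Cr + 2.0 * Df
        A[i, i + 1] = -Df
    A[0, 0] = A[-1, -1] = 1.0               # boundary rows enforce c = 0
    c = np.array(c0, float)
    for _ in range(steps):
        b = c.copy()
        b[0] = b[-1] = 0.0
        c = np.linalg.solve(A, b)
    return c

c0 = np.zeros(51); c0[10] = 1.0             # unit pulse released upstream
c = btbs_ade(c0, u=1.0, D=0.01, dx=0.1, dt=0.05, steps=20)
print(int(np.argmax(c)))                    # pulse centre advects downstream
```

The implicit system matrix is diagonally dominant with non-positive off-diagonal entries, so the solution stays non-negative and the scheme is stable for any time step, which is the main practical appeal of BTBS over explicit alternatives.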

Keywords: BTBS differencing scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 125