Search results for: combination rule
2443 Equilibrium Modeling of a Two Stage Downdraft Gasifier Using Different Gasification Fluids
Authors: F. R. M. Nascimento, E. E. S. Lora, J. C. E. Palácio
Abstract:
A mathematical model to investigate the performance of a two-stage fixed bed downdraft gasifier operating with air, steam and oxygen mixtures as the gasifying fluid has been developed. Various mixture conditions for the double-stage fluid entry have been simulated. The model has been validated through a series of experimental tests performed by NEST – The Excellence Group in Thermal and Distributed Generation of the Federal University of Itajubá. The influence of the mixtures is analyzed through the Steam-to-Biomass ratio (SB), the Equivalence Ratio (ER) and the Oxygen Concentration (OP) parameters in order to predict the best operating conditions for adequate output gas quality, since gas quality is a key parameter for subsequent gas processing in the synthesis of biofuels and in heat and electricity generation. Results show that there is an optimal combination of steam and oxygen content in the gasifying fluid, which allows the user to find the best conditions to design and operate the equipment according to the desired application.
Keywords: air, equilibrium, downdraft, fixed bed gasification, mathematical modeling, mixtures, oxygen, steam
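The operating parameters named in the abstract are simple mass-flow ratios; as a minimal sketch (all flow values and the stoichiometric air requirement below are hypothetical, not taken from the paper):

```python
def equivalence_ratio(air_flow, biomass_flow, stoich_air_per_biomass):
    """ER: actual air-to-biomass ratio over the stoichiometric ratio."""
    return (air_flow / biomass_flow) / stoich_air_per_biomass

def steam_to_biomass(steam_flow, biomass_flow):
    """SB: mass flow of steam over mass flow of biomass."""
    return steam_flow / biomass_flow

# Hypothetical operating point: flows in kg/h, assuming 6.0 kg of air
# is needed per kg of biomass for stoichiometric combustion
er = equivalence_ratio(air_flow=12.0, biomass_flow=5.0, stoich_air_per_biomass=6.0)
sb = steam_to_biomass(steam_flow=2.0, biomass_flow=5.0)
```

Sweeping ER and SB over a grid of such values is how an equilibrium model maps out the operating envelope the abstract refers to.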
Procedia PDF Downloads 481
2442 High Efficiency Achievement by a New Heterojunction n-ZnO:Al/p-Si Solar Cell
Authors: A. Bouloufa, F. Khaled, K. Djessas
Abstract:
This paper presents a new structure of solar cell based on p-type microcrystalline silicon as an absorber and n-type aluminum-doped zinc oxide (ZnO:Al) transparent conductive oxide as an optical window. The ZnO:Al layer deposited by rf-magnetron sputtering at room temperature yields a low resistivity of about 7.64×10⁻² Ω·cm and more than 85% mean optical transmittance in the VIS–NIR range, with an optical band gap of 3.3 eV. These excellent optical properties, in combination with an optimal contact at the front surface, result in superior light trapping, yielding efficiencies of about 20%. In order to improve efficiency, we have used a highly doped p⁺-µc-Si thin layer as a back surface field, which significantly minimizes the impact of rear surface recombination velocity on voltage and current, leading to a high efficiency of 24%. Optoelectronic parameters were determined from the current density-voltage (J-V) curve by means of numerical simulation with the Analysis of Microelectronic and Photonic Structures (AMPS-1D) device simulator.
Keywords: optical window, thin film, solar cell, efficiency
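The optoelectronic parameters extracted from a J-V curve can be computed directly; the sketch below uses a hypothetical ideal-diode curve (the Voc, Jsc and thermal-voltage values are illustrative, not the paper's AMPS-1D results):

```python
import numpy as np

def jv_metrics(v, j, p_in=100.0):
    """Voc, Jsc, fill factor and efficiency from a J-V curve.
    v [V], j [mA/cm^2], p_in [mW/cm^2] (AM1.5 is ~100 mW/cm^2)."""
    j_sc = j[0]                       # current density at V = 0
    v_oc = np.interp(0.0, -j, v)      # voltage where j crosses zero
    p_max = (v * j).max()             # maximum power point
    ff = p_max / (v_oc * j_sc)
    eff = 100.0 * p_max / p_in        # efficiency in percent
    return v_oc, j_sc, ff, eff

# Hypothetical ideal-diode curve: Voc = 0.60 V, Jsc = 35 mA/cm^2
vt, voc, jsc = 0.0259, 0.60, 35.0
v = np.linspace(0.0, voc, 601)
j = jsc * (1.0 - np.expm1(v / vt) / np.expm1(voc / vt))
v_oc, j_sc, ff, eff = jv_metrics(v, j)
```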
2441 Discrimination Between Bacillus and Alicyclobacillus Isolates in Apple Juice by Fourier Transform Infrared Spectroscopy and Multivariate Analysis
Authors: Murada Alholy, Mengshi Lin, Omar Alhaj, Mahmoud Abugoush
Abstract:
Alicyclobacillus is a causative agent of spoilage in pasteurized and heat-treated apple juice products. Differentiating between this genus and the closely related Bacillus is crucially important. In this study, Fourier transform infrared spectroscopy (FT-IR) was used to identify and discriminate between four Alicyclobacillus strains and four Bacillus isolates inoculated individually into apple juice. Loading plots over the range of 1350–1700 cm⁻¹ reflected the most distinctive biochemical features of Bacillus and Alicyclobacillus. Multivariate statistical methods (e.g., principal component analysis (PCA) and soft independent modeling of class analogy (SIMCA)) were used to analyze the spectral data. Distinctive separation of the spectral samples was observed. This study demonstrates that FT-IR spectroscopy in combination with multivariate analysis could serve as a rapid and effective tool for the fruit juice industry to differentiate between Bacillus and Alicyclobacillus and to distinguish between species belonging to these two genera.
Keywords: Alicyclobacillus, Bacillus, FT-IR, spectroscopy, PCA
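PCA of spectra reduces each spectrum to a few scores along directions of maximal variance; a minimal numpy sketch on synthetic two-group "spectra" (the band positions and noise level are invented for illustration, not FT-IR data):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spectrum(peak, n_points=100):
    """Synthetic 'spectrum': one Gaussian band plus measurement noise."""
    x = np.arange(n_points)
    return np.exp(-0.01 * (x - peak) ** 2) + 0.02 * rng.standard_normal(n_points)

# Two groups differing in band position (stand-ins for the two genera)
X = np.array([make_spectrum(40) for _ in range(8)] +
             [make_spectrum(60) for _ in range(8)])

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                        # sample coordinates (PC scores)
explained = S**2 / np.sum(S**2)       # variance ratio per component
```

The rows of `Vt` play the role of the loading plots mentioned in the abstract: they show which wavenumber regions drive the separation.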
2440 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems
Authors: Nermin Sökmen
Abstract:
An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques to measure the functional complexity of a computer system and investigates its impact on system development effort. Later, it examines the effects of technical difficulty and design team capability factors in order to construct the best effort estimation model. Using a traditional regression analysis technique, the study develops a system development effort estimation model that takes functional complexity, technical difficulty and design team capability factors as input parameters. Finally, the assumptions of the model are tested.
Keywords: functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis
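The estimation model described is an ordinary least-squares fit of effort against the three factors; a sketch on synthetic project data (all coefficients, ranges and noise levels below are hypothetical, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Hypothetical project data: functional complexity, technical difficulty,
# design-team capability (higher capability should reduce effort)
fc = rng.uniform(10, 100, n)
td = rng.uniform(1, 5, n)
tc = rng.uniform(1, 5, n)
effort = 50 + 8.0 * fc + 30.0 * td - 25.0 * tc + rng.normal(0, 10, n)

# Ordinary least squares: effort = b0 + b1*FC + b2*TD + b3*TC
X = np.column_stack([np.ones(n), fc, td, tc])
coef, *_ = np.linalg.lstsq(X, effort, rcond=None)
```

With enough projects, `coef` recovers the underlying sensitivities, which is exactly the calibration step such an effort model needs.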
2439 Use of Predictive Food Microbiology to Determine the Shelf-Life of Foods
Authors: Fatih Tarlak
Abstract:
Predictive microbiology can be considered an important field in food microbiology that uses predictive models to describe microbial growth in different food products. Predictive models estimate the growth of microorganisms quickly, efficiently, and in a cost-effective way compared to traditional enumeration methods, which are laborious, expensive, and time-consuming. The mathematical models used in predictive microbiology are mainly categorised as primary and secondary models. The primary models are the mathematical equations that define the growth data as a function of time under a constant environmental condition. The secondary models describe the effects of environmental factors, such as temperature, pH, and water activity (aw), on the parameters of the primary models, including the maximum specific growth rate and lag phase duration, which are the most critical growth kinetic parameters. The combination of primary and secondary models provides valuable information to set limits for the quantitative detection of microbial spoilage and to assess product shelf-life.
Keywords: shelf-life, growth model, predictive microbiology, simulation
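A common concrete choice is the modified Gompertz equation as the primary model and the Ratkowsky square-root model as the secondary model; the sketch below uses that pairing with illustrative parameter values (the abstract does not commit to specific models, so this is one plausible instance):

```python
import numpy as np

def gompertz_log_count(t, logN0, A, mu_max, lag):
    """Modified Gompertz primary model (Zwietering form): log10 count vs
    time, with asymptotic increase A, maximum specific growth rate
    mu_max (log10 units/h) and lag time (h)."""
    e = np.e
    return logN0 + A * np.exp(-np.exp(mu_max * e / A * (lag - t) + 1.0))

def ratkowsky_mu(T, b=0.03, T_min=-2.0):
    """Square-root (Ratkowsky) secondary model: mu_max as a function of
    temperature. Parameters b and T_min are illustrative, not fitted."""
    return (b * (T - T_min)) ** 2

# Predicted growth curve at a 10 degC storage temperature
mu = ratkowsky_mu(10.0)
t = np.linspace(0, 48, 5)
counts = gompertz_log_count(t, logN0=3.0, A=6.0, mu_max=mu, lag=5.0)
```

Shelf-life then follows by solving for the time at which `counts` crosses a spoilage limit.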
2438 The Combination of the Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), Jitter and Shimmer Coefficients for the Improvement of an Automatic Recognition System for Dysarthric Speech
Authors: Brahim-Fares Zaidi, Malika Boudraa, Sid-Ahmed Selouani
Abstract:
Our work aims to improve our Automatic Recognition System for Dysarthric Speech (ARSDS), based on Hidden Markov Models (HMM) and the Hidden Markov Model Toolkit (HTK), to help people with pronunciation problems. We applied two speech parameterization techniques based on Mel Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction (PLPs) and concatenated them with jitter and shimmer coefficients in order to increase the recognition rate of dysarthric speech. For our tests, we used the NEMOURS database, which contains speakers with dysarthria and normal speakers.
Keywords: hidden Markov model toolkit (HTK), hidden Markov models (HMM), Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP)
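Local jitter and shimmer are relative perturbation measures over consecutive pitch periods and peak amplitudes, and the concatenation step is a plain vector join; a minimal sketch (all period and amplitude values are invented, and zero vectors stand in for real MFCC/PLP frames):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amps):
    """Local shimmer: the same measure applied to peak amplitudes."""
    amps = np.asarray(amps, dtype=float)
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps)

# Invented pitch periods (s) and peak amplitudes for one analysis window
periods = [0.0100, 0.0110, 0.0094, 0.0112, 0.0096]
amps = [1.00, 0.90, 1.10, 0.95, 1.05]
jit, shim = jitter_local(periods), shimmer_local(amps)

# Feature concatenation as the abstract describes: MFCC + PLP + perturbation
mfcc, plp = np.zeros(13), np.zeros(9)   # placeholders for real frames
features = np.concatenate([mfcc, plp, [jit, shim]])
```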
2437 Screening of Congenital Heart Diseases with Fetal Phonocardiography
Authors: F. Kovács, K. Kádár, G. Hosszú, Á. T. Balogh, T. Zsedrovits, N. Kersner, A. Nagy, Gy. Jeney
Abstract:
The paper presents a novel screening method to indicate congenital heart diseases (CHD), which could otherwise remain undetected because of their low level: not belonging to the high-risk population, the pregnancies are not subject to regular fetal monitoring with ultrasound echocardiography. Based on the fact that a CHD is a morphological defect of the heart causing turbulent blood flow, the turbulence appears as a murmur, which can be detected by fetal phonocardiography (fPCG). The proposed method applies measurements on the maternal abdomen, and sophisticated processing of the recorded sound signal determines the fetal heart murmur. The paper describes the problems and the additional advantages of the fPCG method, including the possibility of measurements at home and its combination with the prescribed regular cardiotocographic (CTG) monitoring. The proposed screening process, implemented on a telemedicine system, provides enhanced safety against hidden cardiac diseases.
Keywords: cardiac murmurs, fetal phonocardiography, screening of CHDs, telemedicine system
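Extracting heart-sound components from an abdominal recording starts with band-pass filtering; a crude zero-phase FFT band-pass sketch (the sampling rate, band edges and the synthetic signal are all illustrative assumptions, not the paper's processing chain):

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Zero-phase band-pass by zeroing FFT bins outside [f_lo, f_hi] Hz.
    A crude sketch of the kind of front-end filtering fPCG needs."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

rng = np.random.default_rng(2)
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Hypothetical abdominal signal: slow maternal component at 1.5 Hz,
# a 50 Hz heart-sound component, and broadband sensor noise
x = (np.sin(2 * np.pi * 1.5 * t)
     + 0.3 * np.sin(2 * np.pi * 50 * t)
     + 0.05 * rng.standard_normal(len(t)))
y = fft_bandpass(x, fs, 20.0, 200.0)
```

After filtering, the slow maternal component is gone while the heart-sound component survives, which is the separation the murmur detector relies on.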
2436 Political Views and ICT in Tertiary Institutions in Achieving the Millennium Development Goals (MDGs)
Authors: Ibe Perpetual Nwakaego
Abstract:
The Millennium Development Goals (MDGs) were an integrated project formed to eradicate many unnatural situations the citizens of third world countries may find themselves in. For the MDGs to be a sustainable project for the future, they depend entirely on the actions of governments, multilateral institutions and civil society. This paper first looks at political views on the MDGs and relates them to the current electoral situation around the country, underlining the drastic changes over the past few months. The second part of the paper presents ICT in tertiary institutions as one of the solutions for the success of the MDGs. ICT is vital in all phases of the educational process, and the development of cloud connectivity is an added advantage of Information and Communication Technology (ICT) for sharing a common data bank for research purposes among UNICEF, RED CROSS, NPS, INEC, NMIC, and WHO. Finally, the paper concludes with areas that need tweaking and recommendations for tertiary institutions committed to delivering an ambitious set of goals. A combination of observation and document materials for data gathering was employed as the methodology for carrying out this research.
Keywords: MDGs, ICT, database, politics
2435 Convergence of Sinc Methods Applied to Kuramoto-Sivashinsky Equation
Authors: Kamel Al-Khaled
Abstract:
A comparative study of the Sinc-Galerkin and Sinc-Collocation methods for solving the Kuramoto-Sivashinsky equation is given. Both approaches depend on using Sinc basis functions. Firstly, a numerical scheme using the Sinc-Galerkin method is developed to approximate the solution of the Kuramoto-Sivashinsky equation. Sinc approximations to both derivatives and indefinite integrals reduce the solution to an explicit system of algebraic equations. The approximate solution is shown to converge to the exact solution at an exponential rate. The convergence proof of the solution for the discrete system is given using fixed-point iteration. Secondly, a combination of a Crank-Nicolson formula in the time direction with Sinc-collocation in the space direction is presented, where the derivatives in the space variable are replaced by the necessary matrices to produce a system of algebraic equations. The methods are tested on two examples. The demonstrated results show that both of the presented methods have comparable accuracy.
Keywords: Sinc-collocation, nonlinear PDEs, numerical methods, fixed-point
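The exponential convergence claimed for Sinc approximation can be observed numerically with the Whittaker cardinal expansion that underlies both methods; a sketch on a smooth test function (the step sizes, truncation and test point are arbitrary choices):

```python
import numpy as np

def sinc_approx(f, h, N, x):
    """Whittaker cardinal (Sinc) approximation on a uniform grid:
    C(f, h)(x) = sum_{k=-N}^{N} f(k*h) * sinc((x - k*h)/h),
    where numpy's sinc(u) is sin(pi*u)/(pi*u)."""
    nodes = np.arange(-N, N + 1) * h
    return np.sum(f(nodes) * np.sinc((x - nodes) / h))

f = lambda s: np.exp(-s**2)        # analytic, rapidly decaying test function
x0 = 0.3
# Halving h (while keeping the grid extent fixed) should shrink the
# error at a roughly exponential rate in 1/h
errors = [abs(sinc_approx(f, h, int(10 / h), x0) - f(x0))
          for h in (1.0, 0.5, 0.25)]
```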
2434 Developing a Multiagent-Based Decision Support System for Realtime Multi-Risk Disaster Management
Authors: D. Moser, D. Pinto, A. Cipriano
Abstract:
A Disaster Management System (DMS) is very important for countries facing different disasters. Around the world, disasters like earthquakes, tsunamis, volcanic eruptions, fires and other natural or man-made disasters occur and affect the population. It is also possible that two or more disasters arise at the same time, which means handling multi-risk situations. To handle such a situation, a Decision Support System (DSS) based on multiagents is a suitable architecture. The best-known DMSs deal with one disaster (in the case of an earthquake-tsunami combination, with two) and often with one particular disaster. Nevertheless, a DSS helps to achieve a better real-time response. Our work proposes to analyze the existing systems in the literature and expand them to multi-risk disasters in order to construct a well-organized system. The work shown here is an approach to a multi-risk system, which needs an architecture and well-defined aims. At this stage, our study is a case study analyzing the way we have to follow to create our proposed system in the future.
Keywords: decision support system, disaster management system, multi-risk, multiagent system
2433 Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images
Authors: A. Biran, P. Sobhe Bidari, A. Almazroe, V. Lakshminarayanan, K. Raahemifar
Abstract:
Diabetic Retinopathy (DR) is a severe retinal disease which is caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for the detection of DR has been proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal cases, the latter including non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method reliably detects DR. The sensitivity, specificity and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
Keywords: diabetic retinopathy, fundus images, STARE, Gabor filter, support vector machine
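The reported rates follow from the usual confusion-matrix definitions; a tiny sketch (the counts below are hypothetical, chosen only to reproduce a 90% sensitivity and 87.5% specificity, not the paper's actual test set):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 30 abnormal and 40 normal images
sens, spec, acc = classification_metrics(tp=27, fp=5, tn=35, fn=3)
```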
2432 A Photoredox (C)sp³-(C)sp² Coupling Method Comparison Study
Authors: Shasline Gedeon, Tiffany W. Ardley, Ying Wang, Nathan J. Gesmundo, Katarina A. Sarris, Ana L. Aguirre
Abstract:
Drug discovery and delivery involve drug targeting, an approach that helps find a drug against a chosen target through high-throughput screening and other methods, by way of identifying the physical properties of the potential lead compound. Physical properties of potential drug candidates have been an imperative focus since the unveiling of Lipinski's Rule of 5 for oral drugs. Throughout a compound's journey from discovery, through clinical phase trials, to becoming a classified drug on the market, the desirable properties are optimized while minimizing or eliminating toxicity and undesirable properties. In the pharmaceutical industry, the ability to generate molecules in parallel with maximum efficiency is a substantial factor, achieved through sp²-sp² carbon coupling reactions, e.g., Suzuki coupling reactions. These reaction types allow aromatic fragments to be added onto a compound. More recent literature has found benefits to decreasing aromaticity, calling for more sp³-sp² carbon coupling reactions instead. The objective of this project is to provide a comparison between various sp³-sp² carbon coupling methods and reaction conditions, collecting data on the production of the desired product. Four different coupling methods were tested across three cores and 4-5 installation groups per method; each method was run under three distinct reaction conditions. The tested methods include the Photoredox Decarboxylative Coupling, the Photoredox Potassium Alkyl Trifluoroborate (BF3K) Coupling, the Photoredox Cross-Electrophile (PCE) Coupling, and the Weix Cross-Electrophile (WCE) Coupling. The results concluded that the Decarboxylative method rarely yielded product despite the several literature conditions chosen. The BF3K and PCE methods produced competitive results. Between the two cross-electrophile coupling methods, the Photoredox method surpassed the Weix method on numerous accounts.
The results will be used to build future libraries.
Keywords: drug discovery, high throughput chemistry, photoredox chemistry, sp³-sp² carbon coupling methods
2431 Random Vertical Seismic Vibrations of the Long Span Cantilever Beams
Authors: Sergo Esadze
Abstract:
Seismic resistance norms require the calculation of cantilevers for the vertical components of the base seismic acceleration. Long-span cantilevers, as a rule, must be calculated as a separate construction element. According to the architectural-planning solution, functional purpose and environmental conditions of the building/structure being designed, a long-span cantilever construction may be of very different types: both by main bearing element (beam, truss, slab) and by material (reinforced concrete, steel). The choice among these is always linked with the bearing construction system of the building. Research on the vertical seismic vibration of these constructions requires an individual approach for each (which is not specified in the norms) in correlation with the model of the seismic load. The latter may be given either as a deterministic load or as a random process. A loading model given as a random process is more adequate for this problem. In the presented paper, two types of long-span (from 6 m up to 12 m) reinforced concrete cantilever beams have been considered: a) cantilevers whose bearing elements, i.e., the elements in which they are fixed, have cross-sections with large dimensions, the cantilevers being made with a haunch; b) cantilever beams with a load-bearing rod element. Calculation models are suggested separately for types a) and b). They are presented as systems with a finite number of degrees of freedom (concentrated masses). The end-fixing conditions correspond to these types. The vertical acceleration and the vertical component of the angular acceleration act on the masses. The model is based on the assumption of translational-rotational motion of the building in the vertical plane, caused by the vertical seismic acceleration. The seismic accelerations are considered as random processes and represented as the product of a deterministic envelope function and a stationary random process. The problem is solved within the framework of the correlation theory of random processes. Solved numerical examples are given.
The method is effective for solving such specific problems.
Keywords: cantilever, random process, seismic load, vertical acceleration
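The excitation model described, a deterministic envelope multiplying a stationary process, can be sketched directly (the envelope shape, its parameters and the white-noise stand-in for the stationary process are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def nonstationary_excitation(t, t_rise=2.0, t_flat=8.0, decay=0.4, sigma=1.0):
    """Seismic acceleration model: deterministic envelope (quadratic rise,
    flat plateau, exponential decay) multiplied by a stationary Gaussian
    process (white noise here, for illustration)."""
    env = np.where(t < t_rise, (t / t_rise) ** 2,
          np.where(t < t_flat, 1.0, np.exp(-decay * (t - t_flat))))
    return env * sigma * rng.standard_normal(len(t))

t = np.linspace(0, 20, 2001)
a = nonstationary_excitation(t)
```

Sample records like `a` are what the concentrated-mass model is driven with when the correlation-theory results are checked by simulation.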
2430 Camel Thorn Has Hepatoprotective Activity Against Carbon Tetrachloride- or Acetaminophen-Induced Hepatotoxicity but Enhances the Cardiac Toxicity of Adriamycin in Rodents
Authors: Awad G. Abdellatif, Huda M. Gargoum, Abdelkader A. Debani, Mudafara Bengleil, Salmin Alshalmani, N. El Zuki, Omran El Fitouri
Abstract:
In this study, the administration of 660 mg/kg of the ethanolic extract of Alhagi graecorum (camel thorn) to mice showed a significant decrease in the level of transaminases in animals treated with a combination of the camel thorn extract (CTE) plus carbon tetrachloride (CCl4) or acetaminophen, as compared to animals receiving CCl4 or acetaminophen alone. The histopathological investigation also confirmed that camel thorn extract protects the liver against damage induced either by carbon tetrachloride or by acetaminophen. On the other hand, the cardiac toxicity produced by adriamycin was significantly increased in the presence of the ethanolic extract of camel thorn. Our study suggests that camel thorn can protect the liver against the injury produced by carbon tetrachloride or acetaminophen, with an unexpected increase in the cardiac toxicity induced by adriamycin in rodents.
Keywords: ethanolic, Alhagi graecorum, tetrachloride, acetaminophen
2429 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field
Authors: Nastaran Moosavi, Mohammad Mokhtari
Abstract:
Seismic inversion is a technique which has been in use for years, and its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it is a combination of seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion methods on real data in one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock physics properties such as P-impedance, S-impedance and density, while post-stack seismic inversion can estimate only P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). By comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Keywords: density, P-impedance, S-impedance, post-stack seismic inversion, pre-stack seismic inversion
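Post-stack inversion targets the acoustic (P-) impedance, from which normal-incidence reflectivity follows; a minimal forward-model sketch on a hypothetical 1-D layered model (the velocities and densities are invented, with a low-impedance layer standing in for a gas-bearing interval):

```python
import numpy as np

# Hypothetical 1-D layered model: P-velocity [m/s] and density [kg/m^3];
# layer 3 is a low-impedance (gas-sand-like) interval
vp = np.array([2500.0, 3000.0, 2200.0, 3500.0])
rho = np.array([2100.0, 2300.0, 2050.0, 2450.0])

# Acoustic (P-) impedance of each layer: Z = vp * rho
z = vp * rho

# Normal-incidence reflection coefficient at each interface:
# r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)
r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
```

The negative coefficient at the top of the low-impedance layer is the kind of impedance signature inversion is used to detect; inversion runs this relation in reverse, recovering `z` from the reflectivity embedded in the stacked trace.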
2428 Surface Sterilization of Aquatic Plant, Cryptocoryne affinis, by Using Clorox and Mercury Chloride
Authors: Sridevi Devadas
Abstract:
This study aimed to examine the combined efficiency of Clorox (5.25% sodium hypochlorite) and mercury chloride (HgCl2) as reagents for the surface sterilization of the aquatic plant Cryptocoryne affinis (C. affinis). The treatment applied 10% Clorox and 0.1 ppm mercury chloride. The maximum exposure times for Clorox and mercury chloride were 10 min and 60 sec, respectively. After exposure to the treatment protocols (T1-T15), the explants were transferred to a culture room under a controlled temperature of 25°C ± 2°C and subjected to 16 hours of fluorescent light (2000 lumens) for 30 days. Neither sterilizing agent was applied to the control specimens. Upon analysis, the results indicate that all of the treatment protocols produced sterile explants, ranging from a minimum of 1.5 ± 0.7 (30%) to a maximum of 5.0 ± 0.0 (100%). Meanwhile, a maximum of 1.0 ± 0.7 leaves and 1.4 ± 0.6 roots were produced. The optimized exposure time was 0 to 15 min for Clorox and 30 sec for HgCl2, whereby 90% to 100% sterilization was achieved under these conditions.
Keywords: Cryptocoryne affinis, surface sterilization, tissue culture, Clorox, mercury chloride
2427 A Teaching Learning Based Optimization for Optimal Design of a Hybrid Energy System
Authors: Ahmad Rouhani, Masood Jabbari, Sima Honarmand
Abstract:
This paper introduces a method for the optimal design of a hybrid wind/photovoltaic/fuel cell generation system for a typical domestic load that is not located near the electricity grid. In this configuration, the combination of a battery, an electrolyser, and a hydrogen storage tank is used as the energy storage system. The aim of this design is the minimization of the overall cost of the generation scheme over 20 years of operation. Matlab/Simulink is applied for choosing the appropriate structure and for the optimization of system sizing. A teaching-learning-based optimization is used to optimize the cost function. An overall power management strategy is designed for the proposed system to manage power flows among the different energy sources and the storage unit in the system. The results have been analyzed in technical and economic terms. The simulation results indicate that the proposed hybrid system would be a feasible solution for stand-alone applications at remote locations.
Keywords: hybrid energy system, optimum sizing, power management, TLBO
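TLBO itself is straightforward to sketch: a teacher phase pulls learners toward the current best solution, and a learner phase lets random pairs of learners interact, with greedy acceptance in both. The version below is a generic minimal implementation (the population size, iteration count and the toy quadratic cost are illustrative stand-ins, not the paper's 20-year cost model):

```python
import numpy as np

def tlbo(f, bounds, pop=20, iters=150, seed=4):
    """Minimal Teaching-Learning-Based Optimization sketch, minimising f
    over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(lo)))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution
        teacher, mean = X[F.argmin()], X.mean(axis=0)
        TF = rng.integers(1, 3)              # teaching factor, 1 or 2
        for i in range(pop):
            cand = np.clip(X[i] + rng.random(len(lo)) * (teacher - TF * mean),
                           lo, hi)
            fc = f(cand)
            if fc < F[i]:                    # greedy acceptance
                X[i], F[i] = cand, fc
        # Learner phase: pairwise interaction with a random partner
        for i in range(pop):
            j = rng.choice([k for k in range(pop) if k != i])
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return X[F.argmin()], F.min()

# Toy convex "cost" stand-in for the sizing problem, optimum at (1.5, 1.5, 1.5)
best_x, best_f = tlbo(lambda x: np.sum((x - 1.5) ** 2), [(-5, 5)] * 3)
```

A notable design property of TLBO is that, apart from population size and iteration count, it has no algorithm-specific tuning parameters.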
2426 A Temporal QoS Ontology for ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly in complex domains. For example, they can be used for representing and sharing information from the System Requirement Specification (SRS) of complex systems, such as the SRS of ERTMS/ETCS, written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL, and generic ERTMS ontologies provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology with a minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques to reason over and query the temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint on existing planning constraints, can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning over and querying temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers can clarify the distinction between the non-QoS entities and the QoS entities in an ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation.
To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, is given.
Keywords: system requirement specification, ERTMS/ETCS, temporal ontologies, domain ontologies
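A temporal reasoner of the kind described typically checks interval relations between constraints; a minimal sketch of a few Allen-style checks (the relation subset and the consistency test are simplified illustrations, not the paper's ontology rules):

```python
def allen_relation(a, b):
    """Classify a few of Allen's interval relations between intervals
    a = (start, end) and b = (start, end); the kind of primitive check a
    temporal-QoS reasoner performs on planning constraints."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    if s2 <= s1 and e1 <= e2:
        return "during-or-equal"
    return "other"

def consistent(intervals):
    """Detect one simple temporal inconsistency: an interval that ends
    before it starts (e.g., introduced by a bad user constraint)."""
    return all(s <= e for s, e in intervals)
```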
2425 The Combination of the Aortic Dissection Detection Risk Score (ADD-RS) with D-Dimer as a Diagnostic Tool to Exclude the Diagnosis of Acute Aortic Syndrome (AAS)
Authors: Mohamed Hamada Abdelkader Fayed
Abstract:
Background: To evaluate the diagnostic accuracy of the ADD-RS combined with D-dimer as a screening test to exclude AAS. Methods: We searched for studies examining the diagnostic accuracy of the ADD-RS plus D-dimer for excluding the diagnosis of AAS in MEDLINE, Embase, and the Cochrane trials register up to 31 December 2020. Results: We identified 3 studies using the ADD-RS with D-dimer as a diagnostic tool for AAS, involving 3261 patients, among whom AAS was diagnosed in 559 (17.1%) patients. Overall, the pooled sensitivities were 97.6% (95% CI 95.6-99.6) for ADD-RS ≤ 1 (low-risk group) with D-dimer and 97.4% (95% CI 95.4-99.4) for ADD-RS > 1 (high-risk group) with D-dimer; the failure rate was 0.48% in the low-risk group and 4.3% in the high-risk group, respectively. Conclusions: The ADD-RS with D-dimer was a useful screening test with high sensitivity to exclude acute aortic syndrome.
Keywords: aortic dissection detection risk score, D-dimer, acute aortic syndrome, diagnostic accuracy
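The headline figures are simple ratios over diagnostic counts; a sketch with hypothetical stratum counts (chosen only so the outputs land near the reported 97.6% sensitivity and 0.48% failure rate, not extracted from the included studies):

```python
def sensitivity(tp, fn):
    """Sensitivity = TP / (TP + FN)."""
    return tp / (tp + fn)

def failure_rate(fn, screened_negative):
    """Failure rate of a rule-out strategy: missed AAS cases among
    patients classified as negative by ADD-RS + D-dimer."""
    return fn / screened_negative

# Hypothetical counts for one low-risk stratum (illustrative only)
sens = sensitivity(tp=205, fn=5)
fail = failure_rate(fn=5, screened_negative=1040)
```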
2424 Antioxidant Activities, Chemical Components, Physicochemical, and Sensory Characteristics of Kecombrang Tea (Etlingera elatior)
Authors: Rifda Naufalin, Nurul Latifasari, Siti Nuryanti, Muna Ridha Hanifah
Abstract:
Kecombrang (Etlingera elatior) is a Zingiberaceae plant which has antioxidant properties. The high antioxidant content of kecombrang flowers gives them the potential to be processed as a functional beverage raw material, so they can be used as an ingredient in herbal teas. The purpose of this study was to determine the chemical components, physicochemical properties, antioxidant activity and sensory characteristics of kecombrang tea. The research was carried out using a completely randomized design with the processing factors of kecombrang tea, namely blanching versus non-blanching, fermentation versus non-fermentation, and the optimal drying time of kecombrang tea. The best treatment combination based on the effectiveness index method is blanching followed by drying at a temperature of 50°C to a moisture content of 2%, which can produce kecombrang tea with a total phenol content of 5.95 mg Tannic Acid Equivalent (TAE)/gram db, total flavonoids of 3%, pH 4.5, antioxidant activity of 82.95%, a red color, a distinctive tea aroma, and a fresh taste preferred by panelists.
Keywords: kecombrang tea, blanching, fermentation, total phenol, antioxidant activity
2423 A Spatial Perspective on the Metallized Combustion Aspect of Rockets
Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra
Abstract:
A Solid Propellant Rocket is a rocket that utilises a combination of a solid Oxidizer and a solid Fuel. Success in Solid Rocket Motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most Solid Motor Rockets consist of the Main Engine, along with multiple Boosters that provide an additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by Liquid Propellant Rockets because of the latter's better performance characteristics. The addition of a catalyst such as Iron Oxide, on the other hand, can drastically enhance the performance of a Solid Rocket. This scientific investigation tries to emulate the working of a Solid Rocket using Sparklers and Energized Candles, with a central Energized Candle acting as the Main Engine and surrounding Sparklers acting as the Boosters. The Energized Candle is made of Paraffin Wax, with Magnesium filings embedded in its wick. The Sparkler is made up of 45% Barium Nitrate, 35% Iron, 9% Aluminium and 10% Dextrin, with the remaining composition consisting of Boric Acid. The Magnesium in the Energized Candle, and the combination of Iron and Aluminium in the Sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of Metallized Propellants has an influence on the regression rate of the subject candle. The experimental parameters explored here are Separation Distance, Systematically Varying Configuration and Layout Symmetry. The major performance parameter under observation is the Regression Rate of the Energized Candle. The rate of regression is significantly affected by the orientation and configuration of the Sparklers, which usually act as heat sources for the Energized Candle. The Overall Efficiency of any engine is factorised by the thermal and propulsive efficiencies. Numerous efforts have been made to improve one or the other.
This investigation focuses on the orientation of the Rocket Motor Design to maximize Overall Efficiency. The primary objective is to analyse the Flame Spread Rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the Specific Impulse values of a Rocket, which in turn have a deciding impact on its Time of Flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the Solid Rocket through concurrent ignition of the Energized Candle and Sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle which is capable of providing incessant propulsion along with better efficiency for vast durations. The main motivation of this study is to enhance Rocket performance and Overall Efficiency through better design and optimization techniques, which will play a crucial role in this human conquest for knowledge.
Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations
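The regression rate reported on is simply the slope of burned length versus time; a minimal least-squares sketch (the measurement values below are invented, not the study's data):

```python
import numpy as np

# Hypothetical burned-length measurements of the energized candle,
# taken every 30 s; the regression rate is the slope of length vs time
t = np.array([0, 30, 60, 90, 120, 150], dtype=float)   # time [s]
burned = np.array([0.0, 2.9, 6.1, 9.0, 12.2, 15.1])    # burned length [mm]

# First-degree polynomial fit: slope is the regression rate in mm/s
slope, intercept = np.polyfit(t, burned, 1)
regression_rate = slope
```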
Procedia PDF Downloads 178
2422 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques for reducing the secondary flow loss. In this paper, the construction and optimization of non-axisymmetric endwall profiles for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA FINE/Design3D coupled with FINE/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations used steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves, with each cut, having multiple control points, constructed along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to approach the global optimum; successive design iterations were evaluated with the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Besides these two classes of optimization method, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized starting from the optimized hub endwall geometry. The hub optimization increased the efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization also increased the efficiency, while the total pressure loss and entropy were reduced. The combined hub-and-shroud optimization did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
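The stochastic route described above can be illustrated with a minimal genetic algorithm. This is a sketch only: the quadratic `efficiency` function below is a toy stand-in for the CFD-evaluated isentropic efficiency (the actual study uses an ANN surrogate plus a RANS solver), and the two genes play the role of endwall control-point amplitudes.

```python
import random

random.seed(0)  # reproducible toy run

def efficiency(genes):
    # Toy objective with a known optimum of 0.92 at (0.3, -0.2);
    # purely illustrative, not the paper's objective function.
    x, y = genes
    return 0.92 - (x - 0.3) ** 2 - (y + 0.2) ** 2

def evolve(pop_size=30, generations=60, mutation=0.1):
    # Random initial "endwall designs" with genes in [-1, 1].
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=efficiency, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ga + gb) / 2 + random.gauss(0, mutation)
                     for ga, gb in zip(a, b)]   # blend crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=efficiency)

best = evolve()
```

In the real workflow each `efficiency` call would be the expensive surrogate/CFD evaluation, which is why the gradient-based alternative that reuses derivative information can be faster.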
Procedia PDF Downloads 225
2421 The Spherical Geometric Model of Absorbed Particles: Application to the Electron Transport Study
Authors: A. Bentabet, A. Aydin, N. Fenineche
Abstract:
The mean penetration depth plays a most important role in absorption transport phenomena. An analytical model of light ion backscattering coefficients from solid targets was developed by Vicanek and Urbassek. In the present work, we present a mathematical expression (deterministic model) for Z1/2. Moreover, to the best of our knowledge, only one analytical model exists for the electron or positron mean penetration depth in solid targets. In this work, we present a simple spherical geometric model of absorbed particles based on the CSDA scheme, and we derive an analytical expression for the mean penetration depth by combining our model with the Vicanek and Urbassek theory. To this end, we used the Relativistic Partial Wave Expansion Method (RPWEM) to calculate the elastic cross sections and the optical dielectric model to calculate the ranges. Good agreement was found with the experimental and theoretical data.
Keywords: Bentabet spherical geometric model, continuous slowing down approximation, stopping powers, ranges, mean penetration depth
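Under the CSDA scheme invoked above, a particle's range is obtained by integrating the reciprocal of the stopping power over energy, R = ∫₀^E₀ dE / S(E). The sketch below shows that quadrature numerically; the power-law stopping power is a hypothetical placeholder, not the optical dielectric model used in the paper.

```python
# Numerical sketch of a continuous-slowing-down-approximation (CSDA) range,
# R = integral of dE / S(E) from 0 to the initial energy E0.

def csda_range(e0_kev, stopping_power, steps=10000):
    """Midpoint-rule integration of dE / S(E); the midpoint avoids S(0)."""
    de = e0_kev / steps
    total = 0.0
    for i in range(steps):
        e_mid = (i + 0.5) * de
        total += de / stopping_power(e_mid)
    return total

# Hypothetical power-law stopping power S(E) = 50 * E**-0.8
# (illustrative units of keV per nm; not a real material model).
toy_s = lambda e: 50.0 * e ** -0.8

r = csda_range(10.0, toy_s)  # toy range of a 10 keV electron
```

For this power law the integral has the closed form E0**1.8 / (1.8 * 50), which the numerical result can be checked against.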
Procedia PDF Downloads 642
2420 Management Practices in Holding Pens in Pig’s Slaughterhouses in the Valle De Aburrá, Antioquia and Animal Welfare
Authors: Natalia Uribe Corrales, Santiago Henao Villegas
Abstract:
Introduction: The management of pigs in the holding pens at slaughterhouses is a key point for minimizing levels of stress and fear, improving efficiency, maintaining good meat quality and avoiding economic losses. Holding pens should guarantee continuous access to drinking water and a minimum space of 1.2 m2/animal, as well as adequate handling when moving the animals towards stunning. Objective: To characterize the management practices in holding pens in slaughterhouses in the Valle de Aburrá. Methods: A descriptive cross-sectional study was carried out in Valle de Aburrá slaughterhouses authorized by the National Institute for Food and Medicine Surveillance (INVIMA). Variables such as the means of moving animals to the pens, lairage time, water supply, load density, vocalization, slips and falls of the animals in the pens, and the means of conduction towards desensitization were analyzed. Results: 225 pigs were analyzed. 35.6% were driven with slaps from the trucks to the waiting pens; the lairage time was greater than 10 hours for 16% of the animals; 12.9% of pigs did not have permanent access to water; 40.9% were subjected to a high load density, while 19.6% had a low load density. Regarding aspects of animal welfare, 37.3% presented high vocalization, 29.3% presented slips and 14.2% presented falls. Regarding the means of conduction towards desensitization, slapping was used in 56% of cases and an electric prod in 4%. Conclusions: It is necessary to continue promoting learning about load densities, since both high and low densities create animal welfare problems, favoring the appearance of lesions and stress in the animals, and to promote the rules of permanent water in the pens and a lairage time of less than 10 hours.
In relation to the driving mechanisms, it is necessary to continue animal husbandry campaigns, encouraging the use of alternatives such as boards or panels to assist the movement of pigs.
Keywords: animal welfare, quality of meat, swine, waiting pens
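The pen-level rules highlighted in this abstract (at least 1.2 m² per animal, permanent water, lairage under 10 hours) lend themselves to a simple compliance check. A minimal sketch follows; the thresholds come from the abstract, while the pen records are hypothetical examples.

```python
# Compliance check against the holding-pen rules stated in the abstract.
MIN_AREA_PER_PIG_M2 = 1.2
MAX_LAIRAGE_HOURS = 10

def pen_issues(area_m2, n_pigs, water_available, lairage_hours):
    """Return the list of welfare rules violated by a pen record."""
    issues = []
    if area_m2 / n_pigs < MIN_AREA_PER_PIG_M2:
        issues.append("load density too high")
    if not water_available:
        issues.append("no permanent water")
    if lairage_hours > MAX_LAIRAGE_HOURS:
        issues.append("lairage time over 10 h")
    return issues

# Hypothetical pen: 12 pigs in 12 m2 gives 1.0 m2/pig, below the minimum.
crowded_pen = pen_issues(12.0, 12, True, 8)
```

A field checklist built on rules like these would make the audits described above repeatable across slaughterhouses.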
Procedia PDF Downloads 196
2419 Providing a Secure Hybrid Method for Graphical Password Authentication to Prevent Shoulder Surfing, Smudge and Brute Force Attack
Authors: Faraji Sepideh
Abstract:
Nowadays, the purchase rate of smart devices is increasing, and user authentication is one of the important issues in information security. Strong alphanumeric passwords are difficult to memorize, so owners write them down on paper or save them in a computer file. In addition, text passwords have their own flaws and are vulnerable to attacks. Graphical passwords, in which users choose images as a password, can be used as an alternative to alphanumeric passwords. This type of password is easier to use and memorize and is also more secure than previous password types. In this paper, we have designed a more secure graphical password system to prevent shoulder surfing, smudge and brute force attacks. The scheme is a combination of two types of graphical password: recognition based and cued recall based. An evaluation of the usability and security of the proposed scheme is presented in the conclusion.
Keywords: brute force attack, graphical password, shoulder surfing attack, smudge attack
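A hybrid of the two graphical-password families mentioned above can be sketched as a two-stage check: a recognition-based stage (pick your registered images out of a grid) followed by a cued recall-based stage (click your secret points on an image, within a tolerance). The data layout, tolerance, and plaintext storage below are illustrative assumptions, not the paper's actual design (a real system would store salted hashes, not plaintext secrets).

```python
# Sketch of a two-stage hybrid graphical password verification.
CLICK_TOLERANCE_PX = 15  # hypothetical tolerance for cued-recall clicks

def verify(registered, attempt):
    # Stage 1 (recognition based): chosen image set must match exactly.
    if set(attempt["images"]) != set(registered["images"]):
        return False
    # Stage 2 (cued recall based): each click must fall near its
    # registered point, in order.
    if len(attempt["clicks"]) != len(registered["clicks"]):
        return False
    for (rx, ry), (ax, ay) in zip(registered["clicks"], attempt["clicks"]):
        if abs(rx - ax) > CLICK_TOLERANCE_PX or abs(ry - ay) > CLICK_TOLERANCE_PX:
            return False
    return True

# Hypothetical registered secret: three images plus two click points.
secret = {"images": ["cat", "bridge", "lamp"], "clicks": [(40, 80), (200, 35)]}
```

Because the clicked points vary within the tolerance on every login, smudge traces and a single shoulder-surfed observation reveal less than a fixed keypad pattern would.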
Procedia PDF Downloads 161
2418 An Investigation Into an Essential Property of Creativity, Which Is the First-Person Experience
Authors: Ukpaka Paschal
Abstract:
Margaret Boden argues that a creative product is one that is new, surprising, and valuable as a result of the combination, exploration, or transformation involved in producing it. Boden uses examples of artificial intelligence systems that fit all of these criteria and argues that real creativity involves autonomy, intentionality, valuation, emotion, and consciousness. This paper analyses all of these elements in order to understand whether they are sufficient to account for creativity, especially human creativity. The paper focuses on Generative Adversarial Networks (GANs), a class of artificial intelligence algorithms said to have disproved the common perception that creativity is something only humans possess. It then argues that Boden’s listed properties of creativity, which capture the creativity exhibited by GANs, are not sufficient to account for human creativity, and it identifies “first-person phenomenological experience” as an essential property of human creativity. The rationale behind the proposed essential property is that if creativity involves comprehending our experience of the world around us into a form of self-expression, then our experience of the world really matters with regard to creativity.
Keywords: artificial intelligence, creativity, GANs, first-person experience
Procedia PDF Downloads 136
2417 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
An important tool for the enhancement of cultural sites is the serious game (SG), i.e., a game designed for educational purposes; SGs are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more real and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS-based AR and SGs for cultural heritage revitalization. The present research therefore focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; its most important architectures are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information derives from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three figures who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de' Medici, Grand Duke of Tuscany, to build the village, his son Ippolito, and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes, providing gestural animations and lip movement during speech.
The Unity Rhubarb Lip Syncer plugin was used for the lip sync animation, and the historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine. The AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates, and the ARFoundation library was used to display AR content. The SG is available in two versions, for children and for adults. The children's version consists of finding a digital treasure of valuable items and historical rarities. Players must find 9 village locations where 3D AR models of historical figures explaining the history of the village provide clues. To stimulate players, there are 3 levels of rewards, one for every 3 clues discovered; the rewards consist of AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children; test subjects will be surveyed on a Likert scale to find out their perceptions of using the app and to compare the learning experience of the guided tour with that of interacting with the app.
Keywords: augmented reality, cultural heritage, GPS, serious game
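The core geolocation mechanic described above (a 3D character appears when the player reaches a landmark) amounts to a proximity test between the device's GPS fix and stored coordinates. A minimal sketch follows; the coordinates and trigger radius are illustrative assumptions, not data from the app, which implements this inside Unity via the AR+GPS Location plugin.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

TRIGGER_RADIUS_M = 20.0  # hypothetical unlock radius around a landmark

def clue_unlocked(player, landmark):
    """Reveal the AR character when the player is within the radius."""
    return haversine_m(*player, *landmark) <= TRIGGER_RADIUS_M

# Hypothetical coordinates standing in for one of the village landmarks.
church = (42.9285, 10.9405)
```

Each of the 9 (children) or 16 (adult) locations would carry its own coordinate pair, with the reward counter advancing on every unlock.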
Procedia PDF Downloads 95
2416 Capability of a Single Antigen to Induce Both Protective and Disease Enhancing Antibody: An Obstacle in the Creation of Vaccines and Passive Immunotherapies
Authors: Parul Kulshreshtha, Subrata Sinha, Rakesh Bhatnagar
Abstract:
This study was conducted using B. anthracis as a model pathogen. On infecting a host, B. anthracis secretes three proteins, namely protective antigen (PA, 83 kDa), edema factor (EF, 89 kDa) and lethal factor (LF, 90 kDa). These three proteins are the components of the two anthrax toxins. PA binds to the cell surface receptors, namely tumor endothelial marker (TEM) 8 and capillary morphogenesis protein (CMG) 2. TEM8 and CMG2 interact with LDL-receptor related protein (LRP) 6 for endocytosis of EF and LF. On entering the cell, EF acts as a calmodulin-dependent adenylate cyclase that causes a prolonged increase of cytosolic cyclic adenosine monophosphate (cAMP). LF is a metalloprotease that cleaves most isoforms of mitogen-activated protein kinase kinases (MAPKK/MEK) close to their N-terminus. By secreting these two toxins, B. anthracis ensures the death of the host. Once the systemic levels of the toxins rise, antibiotics alone cannot save the host; therefore, toxin-specific inhibitors have to be developed. To this end, monoclonal antibodies have been developed for the neutralization of the toxic effects of the anthrax toxins. We created hybridomas using the spleens of mice actively immunized with rLFn (recombinant N-terminal domain of lethal factor of B. anthracis) to obtain anti-toxin antibodies. Later on, a separate group of mice was immunized with rLFn to obtain a polyclonal control for passive immunization studies of the monoclonal antibodies. This led to the identification of one cohort of rLFn-immunized mice that harboured disease-enhancing polyclonal antibodies. At the same time, the monoclonal antibodies from all the hybridomas were being tested. Two hybridomas secreted monoclonal antibodies (H8 and H10) that were cross-reactive with EF (edema factor) and LF (lethal factor), while the other two hybridomas secreted LF-specific antibodies (H7 and H11). The protective efficacy of H7, H8, H10 and H11 was investigated, and H7, H8 and H10 were found to be protective.
H11 was found to have disease-enhancing characteristics in vitro and in a mouse model of challenge with B. anthracis. In this study, the disease-enhancing character of the H11 monoclonal antibody and of the anti-rLFn polyclonal sera was investigated. Combining H11 with the protective monoclonal antibodies (H8 and H10) reduced its disease-enhancing nature both in vitro and in vivo, but combining H11 with LETscFv (an scFv with VH and VL identical to H10 but lacking the Fc region) could not abrogate the disease-enhancing character of the H11 mAb. It was therefore concluded that the Fc portion is absolutely essential for the interaction of H10 with H11 that suppresses disease enhancement. Our study indicates that the protective potential of an antibody depends equally on its idiotype/antigen specificity and its isotype. A number of monoclonal and engineered antibodies are being explored as immunotherapeutics, but it is absolutely essential to characterize each one for its individual and combined protective potential. Although the phenomenon is new in the sphere of toxin-based diseases, it is extremely important to characterize the disease-enhancing nature of polyclonal as well as monoclonal antibodies, because several anti-viral therapeutics and vaccines have failed in the face of it. Passive immunotherapy thus needs to be well formulated to avoid any contraindications.
Keywords: immunotherapy, polyclonal, monoclonal, antibody-dependent disease enhancement
Procedia PDF Downloads 386
2415 Analysis of Q-Learning on Artificial Neural Networks for Robot Control Using Live Video Feed
Authors: Nihal Murali, Kunal Gupta, Surekha Bhanot
Abstract:
Training of artificial neural networks (ANNs) using reinforcement learning (RL) techniques is widely discussed in the robot learning literature. The high model complexity of ANNs along with the model-free nature of RL algorithms provides a desirable combination for many robotics applications. There is a huge need for algorithms that generalize from raw sensory inputs, such as vision, without any hand-engineered features or domain heuristics. In this paper, the standard control problem of a line-following robot was used as a test-bed, and an ANN controller for the robot was trained on images from a live video feed using Q-learning. A virtual agent was first trained in a simulation environment and then deployed onto the robot's hardware. The robot successfully learns to traverse a wide range of curves and displays excellent generalization ability. Qualitative analysis of the evolution of policies, performance and the weights of the network provides insights into the nature and convergence of the learning algorithm.
Keywords: artificial neural networks, q-learning, reinforcement learning, robot learning
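The learning rule at the heart of the approach above is the Q-learning update, Q(s,a) += α·(r + γ·max_a′ Q(s′,a′) − Q(s,a)). The sketch below reduces line following to a 1-D toy so the update can be shown in tabular form: the state is the line's (discretized) offset from image centre and the actions steer left, straight, or right. The paper itself trains an ANN on raw video frames; this tabular stand-in only illustrates the update rule, and all environment details are assumptions.

```python
import random

random.seed(1)
STATES = range(-2, 3)          # discretized line offset from image centre
ACTIONS = (-1, 0, 1)           # steer left / go straight / steer right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(s, a):
    """Toy dynamics: steering shifts the offset; reward favours staying centred."""
    s2 = max(-2, min(2, s + a))
    reward = 1.0 if s2 == 0 else -abs(s2)
    return s2, reward

for _ in range(2000):                      # short training episodes
    s = random.choice(list(STATES))
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda x: Q[(s, x)]))  # epsilon-greedy
        s2, r = step(s, a)
        # The Q-learning temporal-difference update.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy policy after training: steer back towards the line centre.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

In the paper the table is replaced by an ANN mapping camera frames to action values, but the same temporal-difference target drives the weight updates.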
Procedia PDF Downloads 372
2414 The Application of Extended Spectrum-Based Pushover Analysis for Seismic Evaluation of Reinforced Concrete Wall Structures
Authors: Yang Liu
Abstract:
Reinforced concrete (RC) shear wall structures are one of the most popular and efficient structural forms for medium- and high-rise buildings to resist earthquake loading. Thus, it is of great significance to evaluate the seismic demands of RC shear walls. In this paper, the application of the extended spectrum-based pushover analysis (ESPA) method to the seismic evaluation of shear wall structures is presented. The ESPA method combines a nonlinear consecutive pushover analysis procedure with a linear elastic modal response analysis procedure to consider the combination of modes in both the elastic and inelastic ranges. The results of the case study show that the ESPA method predicts the seismic performance of shear wall structures, including internal forces and deformations, very well.
Keywords: reinforced concrete shear wall, seismic performance, high mode effect, nonlinear analysis
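The modal combination step referred to above is commonly carried out with a rule such as SRSS (square root of the sum of squares) over the per-mode peak responses. The sketch below shows SRSS only as a generic example of such a rule; the abstract does not state which combination rule ESPA uses, and the modal responses are hypothetical numbers, not results from the paper's case study.

```python
import math

def srss(modal_peaks):
    """Combine per-mode peak responses R_i by SRSS: R = sqrt(sum R_i**2)."""
    return math.sqrt(sum(r * r for r in modal_peaks))

# Hypothetical peak base shears (kN) for the first three modes of a wall.
combined = srss([1200.0, 450.0, 180.0])
```

Because higher-mode peaks enter quadratically, the first mode dominates the combined demand here, which is why high-mode effects matter most when the higher modal peaks grow comparable to the first.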
Procedia PDF Downloads 157