Search results for: Combinatorial Optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3332


692 Thermodynamic Modeling and Exergoeconomic Analysis of an Isobaric Adiabatic Compressed Air Energy Storage System

Authors: Youssef Mazloum, Haytham Sayah, Maroun Nemer

Abstract:

The penetration of renewable energy sources into the electric grid is increasing significantly. However, the intermittence of these sources breaks the balance between electricity supply and demand. Hence the importance of energy storage technologies: they restore this balance and reduce the drawbacks of the intermittence of renewable energies. This paper discusses the modeling and cost-effectiveness of an isobaric adiabatic compressed air energy storage (IA-CAES) system. The proposed system combines a compressed air energy storage (CAES) system with a pumped hydro storage system and a thermal energy storage system. The aim of this combination is to overcome the disadvantages of the conventional CAES system, such as the losses due to storage pressure variation, the loss of the compression heat and the use of fossil fuel sources. A steady-state model is developed to perform energy and exergy analyses of the IA-CAES system and to calculate the distribution of the exergy losses in this system. A sensitivity analysis is also carried out to estimate the effects of some key parameters on the system’s efficiency, such as the pinch of the heat exchangers, the isentropic efficiency of the rotating machinery and the pressure losses. The conducted sensitivity analysis is a local analysis, since the sensitivity to each parameter changes with the variation of the other parameters. Therefore, an exergoeconomic study is carried out, together with a cost optimization, in order to reduce the cost of the electricity produced during the production phase. The optimizer used is OmOptim, a genetic-algorithm-based optimizer.
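For orientation, the generic figures of merit that such a steady-state model yields are the round-trip efficiency and the overall exergy efficiency; the expressions below are the standard textbook forms, not values or definitions taken from this paper:

```latex
\eta_{\mathrm{RT}} = \frac{W_{\mathrm{el,out}}}{W_{\mathrm{el,in}}},
\qquad
\eta_{\mathrm{ex}} = 1 - \frac{\sum_{k}\dot{E}x_{\mathrm{dest},k}}{\dot{E}x_{\mathrm{in}}}
```

where W_el,out and W_el,in are the electrical work recovered during discharge and consumed during charge, and Ėx_dest,k is the exergy destroyed in component k; the exergoeconomic step then attaches a cost to each destruction term.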

Keywords: cost-effectiveness, Exergoeconomic analysis, isobaric adiabatic compressed air energy storage (IA-CAES) system, thermodynamic modeling

Procedia PDF Downloads 249
691 The Effect of Electrical Discharge Plasma on Inactivation of Escherichia Coli MG 1655 in Pure Culture

Authors: Zoran Herceg, Višnja Stulić, Anet Režek Jambrak, Tomislava Vukušić

Abstract:

Electrical discharge plasma is a new non-thermal processing technique used for the inactivation of contaminating and hazardous microbes in liquids. Plasma is a source of different antimicrobial species, including UV photons, charged particles, and reactive species such as superoxide, hydroxyl radicals, nitric oxide and ozone. Escherichia coli was studied as a foodborne pathogen. The aim of this work was to examine the inactivation effects of electrical discharge plasma treatment on Escherichia coli MG 1655 in pure culture. Two types of plasma configuration and polarity were used. The first configuration used a titanium wire as the high-voltage needle; the second used a medical stainless steel needle to form bubbles in the treated volume together with a titanium wire as the high-voltage needle. Model solution samples were inoculated with Escherichia coli MG 1655 and treated by electrical discharge plasma for treatment times of 5 and 10 min and frequencies of 60, 90 and 120 Hz. With the first configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation was a 1.3 log₁₀ reduction, and after 10 minutes of treatment it was a 3.0 log₁₀ reduction. At a frequency of 90 Hz, after 10 minutes the inactivation was a 1.3 log₁₀ reduction. With the second configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation was a 1.2 log₁₀ reduction, and after 10 minutes of treatment it was also a 3.0 log₁₀ reduction. This work also examined biofilm formation, nucleotide and protein leakage at 260/280 nm before and after treatment, and the recovery of treated samples. Further optimization of the method is needed to understand the mechanism of inactivation.

Keywords: electrical discharge plasma, escherichia coli MG 1655, inactivation, point-to-plate electrode configuration

Procedia PDF Downloads 435
690 Biomass and Lipid Enhancement by Response Surface Methodology in High Lipid Accumulating Indigenous Strain Rhodococcus opacus and Biodiesel Study

Authors: Kulvinder Bajwa, Narsi R. Bishnoi

Abstract:

Finding a sustainable alternative to today’s petrochemical industry is a major challenge faced by researchers, scientists, chemical engineers, and society at the global level. Microorganisms are considered a sustainable feedstock for third-generation biofuel production. In this study, we investigated the potential of a native bacterial strain isolated from a petrol-contaminated site for the production of biodiesel. The bacterium was identified as Rhodococcus opacus by biochemical tests and 16S rRNA sequencing. Compositional analysis of the bacterial biomass was carried out by Fourier transform infrared spectroscopy (FTIR) in order to confirm the lipid profile. Lipid and biomass production were optimized using the Box-Behnken design (BBD) of response surface methodology. The factors selected for the optimization of growth conditions were glucose, yeast extract, and ammonium nitrate concentration. The experimental model developed through RSM in terms of the effective operational factors (BBD) was found suitable to describe lipid and biomass production; it indicated higher lipid and biomass yields with minimum concentrations of ammonium nitrate and yeast extract and a relatively high dose of glucose supplementation. The optimum results were 2.88 gL⁻¹ biomass and 38.75% lipid content at 20 gL⁻¹ glucose, 0.5 gL⁻¹ ammonium nitrate and 1.25 gL⁻¹ yeast extract. Furthermore, GC-MS analysis revealed that Rhodococcus opacus has a favorable fatty acid profile for biodiesel production.
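A minimal sketch of the second-order response-surface fit underlying a Box-Behnken analysis is given below, using only numpy; the run levels and responses are illustrative placeholders, not the authors' measurements.

```python
import numpy as np

# Hypothetical 3-factor Box-Behnken runs: [glucose g/L, NH4NO3 g/L, yeast extract g/L]
X = np.array([[10, 0.5, 1.25], [20, 0.5, 1.25], [10, 1.5, 1.25], [20, 1.5, 1.25],
              [10, 1.0, 0.50], [20, 1.0, 0.50], [10, 1.0, 2.00], [20, 1.0, 2.00],
              [15, 0.5, 0.50], [15, 1.5, 0.50], [15, 0.5, 2.00], [15, 1.5, 2.00],
              [15, 1.0, 1.25]], dtype=float)
y = np.array([2.0, 2.9, 1.5, 2.1, 1.8, 2.6, 2.1, 2.8,
              2.2, 1.6, 2.4, 1.9, 2.3])            # biomass g/L (illustrative)

def design_matrix(X):
    """Second-order RSM model: intercept, linear, interaction and quadratic terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # least-squares fit
optimum = np.array([[20.0, 0.5, 1.25]])                       # reported optimum levels
print("predicted biomass at optimum (g/L):", float(design_matrix(optimum) @ beta))
```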

Keywords: biofuel, oleaginous bacteria, Rhodococcus opacus, FTIR, BBD, free fatty acids

Procedia PDF Downloads 138
689 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase transition and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) NaZnAs compound in this study. Calculations are carried out by employing the full potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as the other structural parameters for each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found in our study to be: Cu2Sb type → β → α phase. Our calculated electronic energy band structures for all structural phases, at the level of PBE-GGA as well as the mBJ potential, indicate that NaZnAs is a direct (Γ–Γ) band gap semiconductor material. However, compared to PBE-GGA, the mBJ potential approximation yields higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient and energy loss-function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 437
688 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the use of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and used for predictive modelling is processed through edge computing, with resources collectively stored within a data lake. Involvement in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 114
687 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered. The process consists of a single manufacturing stage that can handle different types of products, where a changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeovers, the plan should process similar types of products successively so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between set-ups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and different customers’ orders are generally assigned one of two priorities: “normal” or “priority”. The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping the customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
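To make the capacity-utilization view concrete, the sketch below routes orders through set-ups as a plain maximum-flow problem with networkx; it is not the authors' MAN algorithm with priority arcs, and the order quantities, capacities and compatibilities are assumed for illustration.

```python
import networkx as nx

# Illustrative single-stage line: two set-ups, each able to process compatible
# product orders up to a capacity limit; orders must be routed through a set-up.
orders = {"order_A": 40, "order_B": 25, "order_C": 30}     # demanded quantities (assumed)
setup_capacity = {"setup_1": 60, "setup_2": 50}             # per-set-up capacity (assumed)
compatible = {"setup_1": ["order_A", "order_B"], "setup_2": ["order_B", "order_C"]}

G = nx.DiGraph()
for s, cap in setup_capacity.items():
    G.add_edge("source", s, capacity=cap)
    for o in compatible[s]:
        G.add_edge(s, o, capacity=orders[o])
for o, qty in orders.items():
    G.add_edge(o, "sink", capacity=qty)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print("total quantity scheduled:", flow_value)               # capacity utilisation
print("allocation per set-up:", {s: flow["source"][s] for s in setup_capacity})
```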

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 405
686 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The use of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practical use in medicine. This research proposes a method for incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
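A minimal PyTorch sketch of the feature-enrichment idea (concatenating word2vec token embeddings with lexical semantic vectors before a 1-D convolution) is shown below; the layer sizes, dimensions and class count are hypothetical and the semantic-decision-guide component is omitted.

```python
import torch
import torch.nn as nn

class LexicalSemanticCNN(nn.Module):
    """Toy text classifier: word2vec embeddings concatenated with lexical
    semantic vectors (LSV) per token, followed by a 1-D convolution.
    Dimensions are illustrative, not those of the original LSV-SDG-CNN."""
    def __init__(self, w2v_dim=100, lsv_dim=30, n_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(w2v_dim + lsv_dim, 128, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, w2v, lsv):                  # both: (batch, seq_len, dim)
        x = torch.cat([w2v, lsv], dim=-1)         # enriched token representation
        x = x.transpose(1, 2)                     # Conv1d expects (batch, channels, seq)
        x = torch.relu(self.conv(x))
        return self.fc(self.pool(x).squeeze(-1))  # class logits

w2v = torch.randn(8, 50, 100)    # batch of 8 records, 50 tokens each
lsv = torch.randn(8, 50, 30)
print(LexicalSemanticCNN()(w2v, lsv).shape)       # torch.Size([8, 4])
```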

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 127
685 Creation of Ultrafast Ultra-Broadband High Energy Laser Pulses

Authors: Walid Tawfik

Abstract:

The interaction of high-intensity ultrashort laser pulses with plasma enables many significant applications, including soft X-ray lasers, time-resolved laser-induced plasma spectroscopy (LIPS), and laser-driven accelerators. The development of femtosecond, down to few-femtosecond, optical pulses has provided scientists with a vital tool for studying a variety of ultrashort phenomena, such as high-field physics, femtochemistry and high harmonic generation (HHG). In this research, we generate two-octave-wide ultrashort supercontinuum pulses with an optical spectrum extending from 3.5 eV (ultraviolet) to 1.3 eV (near-infrared) using a capillary fiber filled with neon gas. These pulses are formed through nonlinear self-phase modulation in the neon gas acting as a nonlinear medium. The created pulses were investigated using spectral phase interferometry for direct electric-field reconstruction (SPIDER). A complete description of the output pulses was obtained, including the beam profile, the pulse width, and the spectral bandwidth. After reaching the optimization conditions, the intensity autocorrelation of the reconstructed pulse was used to determine the shortest pulse duration, yielding near-transform-limited ultrashort pulses with durations below 6 fs and energies up to 600 μJ. Moreover, the effect of neon pressure variation on the pulse width was examined: the nonlinear self-phase modulation was found to increase with the pressure of the neon gas. The observed results may lead to an advanced method to control and monitor ultrafast transient interactions in femtochemistry.
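For context, the textbook self-phase-modulation relations below explain qualitatively why the spectral broadening grows with neon pressure (the nonlinear index of a gas scales roughly linearly with pressure); they are generic expressions, not quantities reported in the paper:

```latex
\phi_{\mathrm{NL}}(t) = \frac{2\pi}{\lambda}\, n_2(p)\, I(t)\, L_{\mathrm{eff}},
\qquad n_2(p) \propto p,
\qquad \delta\omega(t) = -\frac{\partial \phi_{\mathrm{NL}}}{\partial t}
```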

Keywords: supercontinuum, ultrafast, SPIDER, ultra-broadband

Procedia PDF Downloads 225
684 Determining Factors for Successful Blended Learning in Higher Education: A Qualitative Study

Authors: Pia Wetzl

Abstract:

The learning process of students can be optimized by combining online teaching with face-to-face sessions. So-called blended learning offers extensive flexibility as well as contact opportunities with fellow students and teachers. Furthermore, learning can be individualized and self-regulated. The aim of this article is to investigate which factors are necessary for blended learning to be successful. Semi-structured interviews were conducted with students (N = 60) and lecturers (N = 21) from different disciplines at two German universities. The questions focused on the perception of online, face-to-face and blended learning courses. In addition, questions focused on possible optimization potential and obstacles to practical implementation. The results show that on-site presence is very important for blended learning to be successful. If students do not get to know each other on-site, there is a risk of loneliness during the self-learning phases. This has a negative impact on motivation. From the perspective of the lecturers, the willingness of the students to participate in the sessions on-site is low. Especially when there is no obligation to attend, group work is difficult to implement because the number of students attending is too low. Lecturers would like to see more opportunities from the university and its administration to enforce attendance. In their view, this is the only way to ensure the success of blended learning. In addition, they see the conception of blended learning courses as requiring a great deal of time, which they are not always willing to invest. More incentives are necessary to keep the lecturers motivated to develop engaging teaching material. The study identifies factors that can help teachers conceptualize blended learning. It also provides specific implementation advice and identifies potential impacts. This catalogue has great value for the future-oriented development of courses at universities. Future studies could test its practical use.

Keywords: blended learning, higher education, teachers, student learning, qualitative research

Procedia PDF Downloads 70
683 3D Geomechanical Model the Best Solution of the 21st Century for Perforation's Problems

Authors: Luis Guiliana, Andrea Osorio

Abstract:

A lack of understanding of reservoir geomechanical conditions may cause operational problems that cost the industry billions of dollars per year. The drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to drilling-related problems. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was first built following the MEM (Mechanical Earth Model) workflow, which consists of the following steps: 1) data auditing, 2) analysis of drilling events and the structural model, 3) mechanical stratigraphy, 4) overburden stress, 5) pore pressure, 6) rock mechanical properties, 7) horizontal stresses, 8) direction of the horizontal stresses, 9) wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM, and the geomechanical grid was populated with these data. The analysis of the results showed that the problems in the wells examined were mainly due to wellbore stability issues. It was determined that the stress regime changes as the stratigraphic column deepens: it is normal to strike-slip in the Middle Miocene and Lower Miocene, and strike-slip to reverse in the Eocene. Accordingly, at the level of the Eocene, the most advantageous direction to drill is parallel to the maximum horizontal stress (157º). The 3D MEM provides a three-dimensional visualization of the variations in rock mechanical properties, stresses and operational windows (mud weight and pressures). This will facilitate the optimization of future drilling in the area, including those zones without any geomechanical information.

Keywords: geomechanics, MEM, drilling, stress

Procedia PDF Downloads 275
682 Magnetic Cellulase/Halloysite Nanotubes as Biocatalytic System for Converting Agro-Waste into Value-Added Product

Authors: Devendra Sillu, Shekhar Agnihotri

Abstract:

A 'nano-biocatalyst' uses the ordered assembly of enzymes onto nanomaterial carriers to catalyze desirable biochemical kinetics and substrate selectivity. The current study describes an interdisciplinary approach for converting an agricultural waste, sugarcane bagasse, into D-glucose by exploiting halloysite nanotubes (HNTs) decorated with cellulase enzyme as a nano-biocatalytic system. Cellulase was successfully immobilized on HNTs employing polydopamine as an eco-friendly crosslinker, while iron oxide nanoparticles were attached to facilitate magnetic recovery of the material. The characterization studies (UV-Vis, TEM, SEM, and XRD) displayed the characteristic features of both cellulase and magnetic HNTs in the resulting nanocomposite. Various factors (working pH, temperature, crosslinker concentration, enzyme concentration) which may influence the activity of the biocatalytic system were investigated. The experimental design was performed using Response Surface Methodology (RSM) for process optimization. The analyzed data demonstrated that the nanobiocatalyst retained 80.30% activity even at elevated temperature (55°C) and showed excellent storage stability after 10 days. Repeated use of the system revealed remarkably consistent relative activity over several cycles. The immobilized cellulase was employed to decompose the agro-waste, and a maximum decomposition rate of 67.2% was achieved. In conclusion, magnetic HNTs can serve as a potential support for enzyme immobilization with long-term usability, good efficacy, reusability and easy recovery from solution.

Keywords: halloysite nanotubes, enzyme immobilization, cellulase, response surface methodology, magnetic recovery

Procedia PDF Downloads 136
681 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploring log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings the 95th latency percentile down from 30 to 4 seconds.
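The sketch below is a generic illustration of the accuracy-for-speed trade: estimate per-group counts from a uniform sample and flag which estimates look trustworthy. It is not PowerDrill's actual sketching or compression machinery, and the data, sampling rate and threshold are assumed.

```python
import random
from collections import Counter

def approximate_group_count(records, key, rate=0.01, min_hits=2000, seed=7):
    """Estimate per-group counts from a uniform sample and mark estimates with
    enough sample support as 'accurate' (a crude stand-in for such heuristics)."""
    rng = random.Random(seed)
    sample = [r for r in records if rng.random() < rate]
    hits = Counter(key(r) for r in sample)
    return {g: {"estimate": int(c / rate),      # scale the sample count back up
                "accurate": c >= min_hits}      # enough sampled rows behind it?
            for g, c in hits.items()}

# Toy log: one million records spread unevenly over a few endpoints (assumed data).
logs = [{"endpoint": random.choice(["/search"] * 6 + ["/img"] * 3 + ["/admin"])}
        for _ in range(1_000_000)]
print(approximate_group_count(logs, key=lambda r: r["endpoint"]))
```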

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 261
680 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material

Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

The purpose of this study was to prepare time- and pH-dependent release tablets of an Ayurvedic Churna formulation and evaluate their advantages as a colon targeted drug delivery system. Vidangadi Churna, which contains embelin and gallic acid, was selected for this study. Embelin is used as a therapeutic agent in helminthiasis. Embelin is insoluble in water and unstable in the gastric environment, so it was formulated into time- and pH-dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets of 150 mg containing dried extract and lactose were prepared by the wet granulation method. Compression coating with 150 mg of polymer for each of the upper and lower coating layers was investigated. The results showed no release in 0.1 N HCl and pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer over a total of 17 hours. The in vitro release profile of the drug from the formulation was best described by first-order kinetics, which showed the highest linearity (r² = 0.9943). The results of the present study demonstrate that the time- and pH-dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of embelin and gallic acid for the treatment of helminthiasis.
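A minimal sketch of fitting a first-order release model to dissolution data with scipy is given below; the time points and cumulative-release values are illustrative placeholders mimicking a lag followed by colonic release, not the measured profile from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative release after the ~5 h lag phase (hours, % released).
t = np.array([7, 9, 11, 13, 15, 17], dtype=float) - 5.0
release = np.array([35, 62, 79, 90, 96, 99], dtype=float)

def first_order(t, q_inf, k):
    """First-order release: Q(t) = Q_inf * (1 - exp(-k t))."""
    return q_inf * (1.0 - np.exp(-k * t))

(q_inf, k), _ = curve_fit(first_order, t, release, p0=[100.0, 0.2])
print(f"Q_inf ~ {q_inf:.1f} %, k ~ {k:.3f} per hour")
```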

Keywords: embelin, gallic acid, Vidangadi Churna, colon targeted drug delivery

Procedia PDF Downloads 362
679 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of the SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with samples ranging from low to high soil water content among those evaluated in this study.
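A minimal sketch of fitting the Brooks and Corey SWCC form to suction-saturation data with scipy is shown below; the suction and effective-saturation values are assumed sand-like numbers, not the laboratory data of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, psi_b, lam):
    """Effective saturation: Se = (psi_b / psi)**lam for psi > psi_b, else 1."""
    psi = np.asarray(psi, dtype=float)
    return np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)

# Illustrative sand data: matric suction (kPa) vs. effective saturation (assumed).
suction = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)
se      = np.array([1.0, 1.0, 0.95, 0.62, 0.38, 0.22, 0.13, 0.08])

(psi_b, lam), _ = curve_fit(brooks_corey, suction, se, p0=[2.0, 0.7])
rmse = np.sqrt(np.mean((brooks_corey(suction, psi_b, lam) - se) ** 2))
print(f"air-entry value psi_b ~ {psi_b:.2f} kPa, pore-size index lambda ~ {lam:.2f}, RMSE = {rmse:.3f}")
```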

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 340
678 The Importance of Visual Communication in Artificial Intelligence

Authors: Manjitsingh Rajput

Abstract:

Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and emphasizes applications such as computer vision, object recognition, image classification and autonomous systems, together with the deep learning techniques and neural networks that underpin visual understanding. It also discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and considers the integration of visual communication with other modalities such as natural language processing and speech recognition. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems to make them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.

Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication

Procedia PDF Downloads 98
677 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental issues, the human body is recognized as one of the promising candidates for converting wasted heat into electricity (the Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict harvesting an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the thermoelectric generator (TEG) to the arbitrary surface of the body and increase the temperature difference across the thermoelectric legs. To this end, a computational study using COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated-shaped heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at an indoor thermal level and with the wearer stationary. The full thermoelectric characterization of the proposed TEG fitted with a wavy-shaped heat sink has been computed, yielding a maximum power output of 25 µW/cm² at a temperature gradient of nearly 13°C. It is noteworthy that, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on the curved surfaces of the body. Consequently, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated without a heat sink and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated from the body.
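For a rough sense of scale, the matched-load output of a thermoelectric module follows the standard relation below; it is quoted only to show why enlarging the effective temperature difference via the heat sink pays off quadratically, and the symbols are generic rather than values from this study:

```latex
P_{\max} = \frac{(S\,\Delta T)^{2}}{4\,R_{\mathrm{int}}},
\qquad \Delta T = T_{\mathrm{skin}} - T_{\mathrm{sink}}
```

with S the module Seebeck coefficient and R_int its internal electrical resistance.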

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 153
676 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff’s quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters, and it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before application to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
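The sketch below shows a bare Gauss-Newton parameter-estimation loop with a numerical Jacobian, standing in for the output-error stage of such methods (the FFNN part of NGN is omitted); the model form, angle-of-attack range and noise level are toy assumptions, not the Hansa-3 data.

```python
import numpy as np

def gauss_newton(model, theta0, x, y_meas, n_iter=20):
    """Minimise the sum of squared residuals ||y_meas - model(x, theta)||^2."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y_meas - model(x, theta)                        # residuals
        J = np.empty((len(x), len(theta)))                  # numerical Jacobian of model
        for j in range(len(theta)):
            d = np.zeros_like(theta); d[j] = 1e-6
            J[:, j] = (model(x, theta + d) - model(x, theta - d)) / 2e-6
        theta += np.linalg.solve(J.T @ J, J.T @ r)          # normal-equations step
    return theta

# Toy longitudinal coefficient: y = a*alpha + b*alpha**2 (assumed form, not the paper's model).
true = np.array([0.08, -1.5])
alpha = np.linspace(0.0, 0.4, 50)                           # angle of attack, rad
y = true[0]*alpha + true[1]*alpha**2 + 0.001*np.random.default_rng(0).standard_normal(50)
print(gauss_newton(lambda a, th: th[0]*a + th[1]*a**2, [0.0, 0.0], alpha, y))
```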

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 451
675 Integrated Two Stage Processing of Biomass Conversion to Hydroxymethylfurfural Esters Using Ionic Liquid as Green Solvent and Catalyst: Synthesis of Mono Esters

Authors: Komal Kumar, Sreedevi Upadhyayula

Abstract:

In this study, a two-stage process was established for the synthesis of HMF esters using an ionic liquid acid catalyst. Ionic liquid catalysts with different strengths of Brønsted acidity were prepared in the laboratory and characterized using ¹H NMR, FT-IR, and ¹³C NMR spectroscopy. A solid acid catalyst was prepared from the ionic liquid catalyst using the immobilization method. The acidity of the synthesized acid catalysts was measured using the Hammett function and the titration method. Catalytic performance was evaluated for the conversion of biomass to 5-hydroxymethylfurfural (5-HMF) and levulinic acid (LA) in a methyl isobutyl ketone (MIBK)-water biphasic system. Good yields of 5-HMF and LA were found at different MIBK:water compositions. For the MIBK:water ratio of 10:1, a good yield of 5-HMF was observed at a temperature of 150˚C. Upgrading of 5-HMF into monoesters through the reaction of 5-HMF with biomass-derived monoacids was then performed. The ionic liquid catalyst with the -SO₃H functional group was found to be more efficient than the solid acid catalyst for both the esterification reaction and the biomass conversion. A good yield of 5-HMF esters with high 5-HMF conversion was obtained at 105˚C using the most active catalyst. In this scheme, process A was the hydrothermal conversion of cellulose and its monomer into 5-HMF and LA using the acid catalyst, and process B was the subsequent esterification using a similar acid catalyst. All of the 5-HMF monoesters synthesized here can be used in the chemical and pharmaceutical industries and as crosslinkers for adhesives or coatings. A theoretical density functional theory (DFT) study for the optimization of the ionic liquid structure was performed using the Gaussian 09 program to find the minimum-energy configuration of the ionic liquid catalyst.

Keywords: biomass conversion, 5-HMF, Ionic liquid, HMF ester

Procedia PDF Downloads 254
674 Dynamic Programming Based Algorithm for the Unit Commitment of the Transmission-Constrained Multi-Site Combined Heat and Power System

Authors: A. Rong, P. B. Luh, R. Lahdelma

Abstract:

High penetration of intermittent renewable energy sources (RES) such as solar power and wind power into the energy system has caused temporal and spatial imbalances between electric power supply and demand in some countries and regions. This brings about a critical need for coordinating power production and power exchange between different regions. Compared with power-only systems, combined heat and power (CHP) systems can provide additional flexibility for utilizing RES by exploiting the interdependence of power and heat production in the CHP plant. In a CHP system, power production can be influenced by adjusting the heat production level, and electric power can be used to satisfy heat demand via an electric boiler or heat pump in conjunction with heat storage, which is much cheaper than electric storage. This paper addresses multi-site CHP systems without considering RES, which lays the foundation for handling the penetration of RES. The problem under study is the unit commitment (UC) of transmission-constrained multi-site CHP systems. We solve the problem by combining linear relaxation of ON/OFF states and sequential dynamic programming (DP) techniques, where the relaxed states are used to reduce the dimension of the UC problem and DP is used to improve the solution quality. Numerical results for daily scheduling with realistic models and data show that the DP-based algorithm is from a few to a few hundred times faster than CPLEX (standard commercial optimization software) with good solution accuracy (less than 1% relative gap from the optimal solution on average).
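To illustrate the state-by-stage recursion that DP-based unit commitment relies on, the toy sketch below commits a single unit hour by hour with a start-up cost, buying any shortfall from the market; it is not the multi-site, transmission-constrained, heat-coupled algorithm of the paper, and all demands, prices and costs are assumed.

```python
import math

hours    = 6
demand   = [30, 50, 80, 90, 60, 40]    # MW per hour (assumed)
price    = [35, 35, 55, 60, 45, 35]    # market purchase price per MWh (assumed)
gen_cost = 40                          # own generation cost per MWh (assumed)
capacity = 70                          # unit capacity, MW
startup  = 500                         # start-up cost

best = {"OFF": 0.0, "ON": math.inf}    # cheapest cost-to-here per state before hour 0
for h in range(hours):
    nxt = {"OFF": math.inf, "ON": math.inf}
    for prev, c_prev in best.items():
        if math.isinf(c_prev):
            continue
        for state in ("OFF", "ON"):
            c = c_prev + (startup if (prev, state) == ("OFF", "ON") else 0.0)
            own = min(capacity, demand[h]) if state == "ON" else 0.0
            c += own * gen_cost + (demand[h] - own) * price[h]   # rest bought from market
            nxt[state] = min(nxt[state], c)                      # Bellman update
    best = nxt
print("minimum total cost over the horizon:", round(min(best.values()), 1))
```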

Keywords: dynamic programming, multi-site combined heat and power system, relaxed states, transmission-constrained generation unit commitment

Procedia PDF Downloads 367
673 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define the clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters by including both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing potentially significant factors into the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. The results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
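A bare-bones alternating heuristic for clusterwise linear regression is sketched below: assign each segment to the cluster whose regression currently fits it best, then refit each cluster's model. It is not the mathematical program of the study, and the "pavement" data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def clusterwise_regression(X, y, k=2, n_iter=20):
    """Alternate between assigning observations to clusters by absolute residual
    and refitting an OLS model per cluster (a simple CLR heuristic)."""
    n = len(y)
    labels = rng.integers(k, size=n)
    Xb = np.column_stack([np.ones(n), X])                 # add intercept column
    for _ in range(n_iter):
        betas = []
        for c in range(k):
            idx = labels == c
            if idx.sum() < Xb.shape[1]:                   # guard against a starved cluster
                idx = rng.random(n) < 0.5
            betas.append(np.linalg.lstsq(Xb[idx], y[idx], rcond=None)[0])
        resid = np.column_stack([np.abs(y - Xb @ b) for b in betas])
        labels = resid.argmin(axis=1)                     # reassign each segment
    return betas, labels

# Synthetic "segments": roughness grows with age at two different rates.
age = rng.uniform(0, 20, 200)
iri = np.where(age < 10, 1.0 + 0.05 * age, 0.2 + 0.13 * age) + rng.normal(0, 0.05, 200)
betas, labels = clusterwise_regression(age.reshape(-1, 1), iri, k=2)
print([np.round(b, 3) for b in betas])
```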

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 253
672 Optimizing Recycling and Reuse Strategies for Circular Construction Materials with Life Cycle Assessment

Authors: Zhongnan Ye, Xiaoyi Liu, Shu-Chien Hsu

Abstract:

Rapid urbanization has led to a significant increase in construction and demolition waste (C&D waste), underscoring the need for sustainable waste management strategies in the construction industry. Aiming to enhance the sustainability of urban construction practices, this study develops an optimization model to effectively suggest the optimal recycling and reuse strategies for C&D waste, including concrete and steel. By employing Life Cycle Assessment (LCA), the model evaluates the environmental impacts of adopted construction materials throughout their lifecycle. The model optimizes the quantity of materials to recycle or reuse, the selection of specific recycling and reuse processes, and logistics decisions related to the transportation and storage of recycled materials with the objective of minimizing the overall environmental impact, quantified in terms of carbon emissions, energy consumption, and associated costs, while adhering to a range of constraints. These constraints include capacity limitations, quality standards for recycled materials, compliance with environmental regulations, budgetary limits, and temporal considerations such as project deadlines and material availability. The strategies are expected to be both cost-effective and environmentally beneficial, promoting a circular economy within the construction sector, aligning with global sustainability goals, and providing a scalable framework for managing construction waste in densely populated urban environments. The model is helpful in reducing the carbon footprint of construction projects, conserving valuable resources, and supporting the industry’s transition towards a more sustainable future.
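A toy linear program in the spirit of such a model is sketched below with scipy: decide how much concrete and steel waste to recycle, reuse or landfill so as to minimise emissions under capacity and mass-balance constraints. All emission factors, capacities and waste quantities are assumed, and the real model would add cost, logistics and quality constraints.

```python
from scipy.optimize import linprog

# Decision variables (tonnes): [concrete_recycle, concrete_reuse, concrete_landfill,
#                               steel_recycle,    steel_reuse,    steel_landfill]
emis = [12, 4, 60, 35, 8, 90]          # kgCO2e per tonne handled (assumed factors)

A_ub = [[1, 0, 0, 1, 0, 0],            # shared recycling plant capacity
        [0, 1, 0, 0, 1, 0]]            # shared reuse/storage capacity
b_ub = [800, 300]

A_eq = [[1, 1, 1, 0, 0, 0],            # all concrete waste must be routed somewhere
        [0, 0, 0, 1, 1, 1]]            # all steel waste must be routed somewhere
b_eq = [1000, 200]

res = linprog(c=emis, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print("routing (t):", [round(x) for x in res.x], "| total kgCO2e:", round(res.fun))
```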

Keywords: circular construction, construction and demolition waste, life cycle assessment, material recycling

Procedia PDF Downloads 84
671 Intelligent Control of Bioprocesses: A Software Application

Authors: Mihai Caramihai, Dan Vasilescu

Abstract:

The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in a qualitative and subjective manner as perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human expert rules versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling may capture only a small part of the overall biosystem behavior, while a fuzzy control system (FCS) can handle incomplete and uncertain information about the process, assuring high control performance, and provides an alternative to nonlinear control as it is closer to the real world. Due to the high degree of nonlinearity and time variance of bioprocesses, the need for such a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this nonlinear, time-varying system compared with the classical control method based on an a priori model.
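To show the flavour of such a fuzzy controller, the sketch below hand-rolls a two-rule Mamdani-style evaluation (triangular memberships, min inference, centroid defuzzification); the variables, rules and ranges are purely illustrative and are not those implemented in BIOSIM.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def feed_rate(do_percent, biomass_gl):
    """Two toy rules: IF DO is low AND biomass is high THEN feed is low;
                      IF DO is high THEN feed is high. (Illustrative only.)"""
    u = np.linspace(0, 10, 201)                              # candidate feed rates, L/h
    low_do, high_do = tri(do_percent, 0, 10, 40), tri(do_percent, 30, 70, 100)
    high_x = tri(biomass_gl, 5, 15, 25)
    r1 = np.minimum(min(low_do, high_x), tri(u, 0, 1, 4))    # clip the "feed low" set
    r2 = np.minimum(high_do, tri(u, 5, 8, 10))               # clip the "feed high" set
    agg = np.maximum(r1, r2)                                 # aggregate rule outputs
    return float((u * agg).sum() / (agg.sum() + 1e-9))       # centroid defuzzification

print(f"suggested feed rate: {feed_rate(do_percent=20, biomass_gl=18):.2f} L/h")
```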

Keywords: intelligent, control, fuzzy model, bioprocess optimization

Procedia PDF Downloads 328
670 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces

Authors: Shweta Singh, Sudaman Katti

Abstract:

The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate on a memory are used. In this research work, results were obtained for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in the visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed both for visual tasks based on CNNs and for non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique, used popularly in reinforcement learning for stability and better policy updates during training, especially for the continuous action spaces used in this work. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities such as storing experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to perform better than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
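A minimal PyTorch sketch of the architectural idea is given below: a transformer encoder attends over a memory window of past observations and maps the most recent token to a continuous action mean. The PPO update loop is omitted, and all sizes (observation dimension, memory length, model width, action dimension) are assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class MemoryTransformerPolicy(nn.Module):
    """Encode a window of past observations with self-attention and map the
    most recent token to a continuous action mean (PPO update not shown)."""
    def __init__(self, obs_dim=12, d_model=64, action_dim=4, mem_len=32):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, mem_len, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mu = nn.Linear(d_model, action_dim)

    def forward(self, obs_window):                    # (batch, mem_len, obs_dim)
        h = self.encoder(self.embed(obs_window) + self.pos)
        return torch.tanh(self.mu(h[:, -1]))          # action mean in [-1, 1]

policy = MemoryTransformerPolicy()
print(policy(torch.randn(5, 32, 12)).shape)           # torch.Size([5, 4])
```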

Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity

Procedia PDF Downloads 138
669 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. Without implementing subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with the PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This facilitates the design problem of the two-channel QMF banks by finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on the minimax phase approximation for the 1-D DAFs. A novel objective function is then derived to obtain an optimization for 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be simply solved by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple and the designed 2-D QMF bank achieves perfect magnitude response and possesses satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity as compared with the existing techniques.

Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm

Procedia PDF Downloads 228
668 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect

Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar

Abstract:

In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking in wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that under identical WIG effect conditions the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case and, as a result, the airfoil exhibited earlier external flow choking than the corresponding wing, which is corroborated by the exact solution. We concluded that, in lieu of the conventional 2D numerical simulation, it is invariably beneficial to carry out a realistic 3D simulation of the wing in ground effect, which is analogous to and captures the aspects of a real-time parametric flow. We inferred that under identical flying conditions the chances of external flow choking in WIG effect are higher for a conventional aircraft than for an aircraft incorporating a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We concluded that integrated geometry optimization of the fuselage and wings can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive the zone of danger a priori, due to the anticipated external flow choking in WIG effect craft, for safe flying in close proximity to the terrain and the dynamic marine surface.

Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect

Procedia PDF Downloads 273
667 Software-Defined Architecture and Front-End Optimization for DO-178B Compliant Distance Measuring Equipment

Authors: Farzan Farhangian, Behnam Shakibafar, Bobda Cedric, Rene Jr. Landry

Abstract:

Many air navigation technologies are capable of increasing aviation sustainability as well as improving accuracy in Alternative Positioning, Navigation, and Timing (APNT), especially avionics Distance Measuring Equipment (DME), Very high-frequency Omni-directional Range (VOR), etc. The integration of these air navigation solutions could provide robust and efficient accuracy for air mobility, air traffic management and autonomous operations. Designing a proper RF front-end, power amplifier and software-defined transponder could pave the way to an optimized avionics navigation solution. In this article, the possibility of reaching an optimum front-end to be used with a single low-cost Software-Defined Radio (SDR) has been investigated in order to reach a software-defined DME architecture. Our software-defined approach uses the firmware possibilities to design a real-time software architecture compatible with a Multi-Input Multi-Output (MIMO) BladeRF, estimating an accurate time delay between the transmission (Tx) and reception (Rx) channels using synchronous scheduled communication. We designed a novel power amplifier for the transmission channel of the DME to meet the minimum transmission power. This article also investigates the design of proper pulse pairs based on the DO-178B avionics standard. Various guidelines have been tested, and the possibility of passing the certification process for each standard term has been analyzed. Finally, the performance of the DME was tested in the laboratory environment using an IFR6000, which showed that the proposed architecture reached an accuracy of less than 0.23 nautical mile (Nmi) with 98% probability.

Keywords: avionics, DME, software defined radio, navigation

Procedia PDF Downloads 83
666 Image Quality and Dose Optimisations in Digital and Computed Radiography X-ray Radiography Using Lumbar Spine Phantom

Authors: Elhussaien Elshiekh

Abstract:

A study was performed to manage and compare radiation doses and image quality during lumbar spine PA and lumbar spine LAT X-ray radiography using computed radiography (CR) and digital radiography (DR). The standard exposure factors (kV, mAs and FFD) used for imaging the lumbar spine anthropomorphic phantom were obtained from the average exposure factors used with CR in five radiology centres. The lumbar spine phantom was imaged using the CR and DR systems. Entrance surface air kerma (ESAK) was calculated from the X-ray tube output and the patient exposure factors. Images were evaluated using a visual grading system based on the European Guidelines on Quality Criteria for diagnostic radiographic images. The ESAK corresponding to each image was measured at the surface of the phantom. Six experienced specialists evaluated hard copies of all the images; the image score (IS) was calculated for each image as the average score of the six evaluators. The IS value was also used to determine whether an image was diagnostically acceptable. The optimum recommended exposure factors found here for lumbar spine PA and lumbar spine LAT were, respectively, 80 kVp and 25 mAs at 100 cm FFD and 75 kVp and 15 mAs at 100 cm FFD for the CR system, and 80 kVp and 15 mAs at 100 cm FFD and 75 kVp and 10 mAs at 100 cm FFD for the DR system. For lumbar spine PA, the lowest ESAK values required to obtain a diagnostically acceptable image were 0.80 mGy for the DR and 1.20 mGy for the CR system. Similarly, for the lumbar spine LAT projection, the lowest ESAK values required to obtain a diagnostically acceptable image were 0.62 mGy for the DR and 0.76 mGy for the CR system. At standard kVp and mAs values, the image quality did not vary significantly between the CR and DR systems, but at higher kVp and mAs values, the DR images were found to be of better quality than the CR images. In addition, the lower limit of entrance skin dose consistent with diagnostically acceptable DR images was 40% lower than that for CR images.

Keywords: image quality, dosimetry, radiation protection, optimization, digital radiography, computed radiography

Procedia PDF Downloads 54
665 Optimal Continuous Scheduled Time for a Cumulative Damage System with Age-Dependent Imperfect Maintenance

Authors: Chin-Chih Chang

Abstract:

Many manufacturing systems suffer failures due to complex degradation processes and various environmental conditions such as random shocks. Consider an operating system that is subject to random shocks and works at random times on successive jobs. When successive jobs often result in production losses and performance deterioration, it is better to carry out maintenance or replacement at a planned time. A preventive replacement (PR) policy is presented in which the system is replaced before a failure occurs, at a continuous time T. In such a policy, the failure characteristics of the system are modeled as follows. Each job causes a random amount of additive damage to the system, and the system fails when the cumulative damage exceeds a failure threshold. Suppose that the deteriorating system suffers one of two types of shocks with age-dependent probabilities: a type-I (minor) shock is rectified by a minimal repair, while a type-II (catastrophic) shock causes the system to fail. A corrective replacement (CR) is performed immediately when the system fails. In summary, a generalized maintenance model for scheduling a replacement plan for an operating system is presented: PR is carried out at time T, whereas CR is carried out when a type-II shock occurs or the total damage exceeds the failure level. The main objective is to determine the optimal continuous scheduled time of preventive replacement by minimizing the mean cost rate function. The existence and uniqueness of the optimal replacement policy are derived analytically. It can be seen that the present model is a generalization of previous models, and the policy with preventive replacement outperforms the one without preventive replacement.
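In the simplest age-replacement setting (no shocks or cumulative damage), the mean cost rate minimised over the scheduled time T takes the classical form below; the paper's model generalises this with age-dependent shocks, minimal repair and a damage threshold, so the expression is shown only for orientation:

```latex
C(T) = \frac{c_p\,\bar F(T) + c_f\,F(T)}{\displaystyle\int_{0}^{T}\bar F(t)\,dt},
\qquad \bar F = 1 - F,\quad c_p < c_f
```

where F is the lifetime distribution and c_p, c_f are the preventive and corrective replacement costs; the optimal T* is obtained from dC/dT = 0.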

Keywords: preventive replacement, working time, cumulative damage model, minimal repair, imperfect maintenance, optimization

Procedia PDF Downloads 367
664 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was carried out for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths of R ≈ 10 m for the millimetre wavebands, while for longer distances an optimum of the revenue can be observed at R ≈ 550 m for 5.62 GHz. It can be observed that, for the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R, and that it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, reaching a maximum for values of R approximately equal to 550 m.
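For reference, a modified-Friis-type model of the kind used above 24 GHz can be written in the generic close-in form below; the exact reference distance, exponent and shadowing term adopted in the paper may differ:

```latex
PL(d)\,[\mathrm{dB}] = 20\log_{10}\!\left(\frac{4\pi d_{0} f}{c}\right)
+ 10\,n\,\log_{10}\!\left(\frac{d}{d_{0}}\right) + X_{\sigma}
```

with path-loss exponent n, reference distance d₀ and shadowing term X_σ; the two-slope UMiLoS model instead switches to a steeper slope beyond the break-point distance.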

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 146
663 Solar PV System for Automatic Guideway Transit (AGT) System in BPSU Main Campus

Authors: Nelson S. Andres, Robert O. Aguilar, Mar O. Tapia, Meeko C. Masangcap, John Denver Catapang, Greg C. Mallari

Abstract:

This study explores the possibility of using solar PV as an alternative to the power grid for generating the electricity needed to run the AGT system installed at the BPSU Main Campus. The output of this study gives BPSU the option to invest in a solar PV system and thereby respond pro-actively to one of the UN's Sustainable Development Goals of having reliable, sustainable and modern energy sources, reducing energy pollution and climate change impact in the long run. The study therefore covers both the technical and the financial aspects, which BPSU can also use to source funding from different government agencies. For this study, the electrical design and requirements of the on-going DOST AGT system project are carefully considered. In the proposed design, the AGT station is fitted with a rechargeable battery system to which the energy harnessed by the solar PV panels installed on the rooftop of the station/NCEA building is directed. The solar energy is supplied directly to the electric double-layer capacitor (EDLC) batteries and then transmitted to the equipment that needs it. When the AGT is not in use, the harnessed energy may be used by the NCEA building, thus lessening the building's energy consumption from the grid. The use of a solar PV system with EDLC is compared with the use of the electric grid for electrifying the AGT or the NCEA building (when the AGT is not in use). This is done to determine how much solar energy is accumulated by the solar PV system to meet the needs of the coaches' motors, lighting, air-conditioning units, door sensors, panel displays, etc. The proposed solar PV design, as well as the data regarding the charging and discharging of the batteries and the power consumption of all AGT components, are simulated for optimization, analysis and validation using the PVSyst software.

Keywords: AGT, Solar PV, railway, EDLC

Procedia PDF Downloads 86