Search results for: matrix model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18526

16666 Two-Stage Launch Vehicle Trajectory Modeling for Low Earth Orbit Applications

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents a study of the trajectory of a two-stage launch vehicle. The study includes the dynamic responses of the motion parameters as well as the variation of the angles affecting the orientation of the launch vehicle (LV). LV dynamic characteristics, including the state vector variation with altitude and velocity at the separation of the different LV stages, as well as the angle of attack and flight path angle, are also discussed. The drop zone of the first stage and the jettisoning of the fairing are introduced into the mathematical model to study their effect on the flight trajectory. To increase the accuracy of the LV model, an atmospheric model is used that takes into account the geographical location and the solar flux values for the date and time of launch; a more accurate atmospheric model improves the calculation of the Mach number, which affects the drag force on the LV. The mathematical model is implemented in MATLAB-based software (Simulink). The available experimental data are compared with the results obtained from the theoretical computation model. The comparison shows good agreement, which supports the validity of the developed simulation model; the maximum error observed was generally less than 10%, and future work will aim to reduce this level of error.
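
To give a feel for the kind of trajectory computation described above, the following is a minimal point-mass ascent sketch with an exponential atmosphere and a simple gravity-turn law; all vehicle data, atmospheric constants, and the burnout criterion are illustrative assumptions and are not taken from the paper or its Simulink model.

```python
import numpy as np

# Minimal 2-DOF point-mass ascent sketch (vertical plane, gravity turn, no lift).
# Vehicle data, the exponential atmosphere, and the burnout criterion are
# placeholder assumptions, not values from the paper or its Simulink model.
g0, Re = 9.80665, 6.371e6          # m/s^2, m
rho0, H = 1.225, 7200.0            # sea-level density (kg/m^3), scale height (m)
a_sound = 340.0                    # speed of sound, taken roughly constant here
Cd, S = 0.3, 10.0                  # drag coefficient, reference area (m^2)
thrust, mdot = 2.5e6, 800.0        # N, kg/s (stage-1 placeholders)
m, v, gamma, h, t, dt = 2.0e5, 10.0, np.radians(89.0), 0.0, 0.0, 0.1

while t < 120.0 and m > 5.0e4:     # crude stage-1 burn duration
    rho = rho0 * np.exp(-h / H)            # exponential atmosphere
    mach = v / a_sound                     # Mach number drives drag in a full model
    drag = 0.5 * rho * v**2 * Cd * S
    g = g0 * (Re / (Re + h))**2
    v += dt * ((thrust - drag) / m - g * np.sin(gamma))
    gamma += dt * (-(g / v) * np.cos(gamma))   # flight path angle slowly tips over
    h += dt * (v * np.sin(gamma))
    m -= dt * mdot
    t += dt

print(f"t={t:.0f} s  h={h/1e3:.1f} km  v={v:.0f} m/s  "
      f"Mach~{mach:.1f}  gamma={np.degrees(gamma):.1f} deg")
```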

Keywords: launch vehicle modeling, launch vehicle trajectory, mathematical modeling, MATLAB-Simulink

Procedia PDF Downloads 277
16665 Spatial Organization of Cells over the Process of Pellicle Formation by Pseudomonas alkylphenolica KL28

Authors: Kyoung Lee

Abstract:

Numerous aerobic bacteria are able to form multicellular communities, called pellicles, as biofilms at the air-liquid (A-L) interface. Pellicles at the A-L interface benefit from access to oxygen from the air and nutrients from the liquid. Buoyancy of the cells is provided by the high surface tension at the A-L interface. Formation of pellicles is therefore an adaptive advantage for utilizing excess nutrients in standing cultures, where oxygen depletion sets in easily due to rapid cell growth. In natural environments, pellicles are commonly observed on the surfaces of lakes or ponds contaminated with pollutants. Previously, we have shown that, when cultured in standing LB media, the alkylphenol-degrading bacterium Pseudomonas alkylphenolica KL28 forms pellicles with a diameter of 0.3-0.5 mm and a thickness of ca. 40 µm. The pellicles are notable for their flatness and unusual rigidity. In this study, the biogenesis of these circular pellicles was investigated by observing the spatial organization of cells at early stages of pellicle formation and the cell arrangements within the mature pellicle, providing clues to how this highly organized cellular arrangement is adapted to the air-liquid niche. We first monitored the developmental pattern of the pellicle from a monolayer to a multicellular organization. Pellicles were shaped by the controlled growth of constituent cells, which accumulate extracellular polymeric substance. The initial two-dimensional growth transitioned to multilayers under the constraining force of the accumulated self-produced extracellular polymeric substance. Experiments showed that pellicles are formed by clonal growth, even in knock-out mutants of the genes for flagella and pilus formation. In contrast, mutants in the epm gene cluster for alginate-like polymer biosynthesis were incompetent in the cell alignment required for the initial two-dimensional growth of pellicles. Electron microscopy and confocal laser scanning microscopy showed that the fully matured structures are densely packed with matrix-encased cells in specific arrangements: cells on the surface of the pellicle lie relatively flat, while cells inside are longitudinally cross-packed. HPLC analysis of the extrapolysaccharide (EPS) hydrolysate from colonies grown on LB agar showed a composition of L-fucose, L-rhamnose, D-galactosamine, D-glucosamine, D-galactose, D-glucose, and D-mannose. The hydrolysate from pellicles showed a similar neutral and amino sugar profile but lacked galactose. Furthermore, uronic acid analysis of the EPS hydrolysates by HPLC showed that mannuronic acid was detected in pellicles but not in colonies, indicating that the epm-derived polymer is critical for pellicle formation, as confirmed by the epm mutants. This study verified that, for the circular pellicle architecture, P. alkylphenolica KL28 cells use EPS building blocks different from those used for colony construction. These results indicate that P. alkylphenolica KL28 is a skilled architect that dictates unique cell arrangements with selected EPS matrix material to construct a sophisticated structure: circular biofilm pellicles.

Keywords: biofilm, matrix, pellicle, pseudomonas

Procedia PDF Downloads 153
16664 Calibration and Validation of the AquaCrop Model for Simulating Growth and Yield of Rain-Fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray, Ethiopia

Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu

Abstract:

Sesame is an important oilseed crop in Ethiopia and the country's second most exported agricultural commodity after coffee. However, soil fertility management is poor and a research-led farming system for the crop is lacking. The AquaCrop model was applied as a decision-support tool; it uses a semi-quantitative approach to simulate crop yield under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons, combining four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha) with three improved varieties (Setit-1, Setit-2, and Humera-1). Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results for canopy cover also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and yield, and it adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model may therefore be an important tool for improved soil fertility management and yield enhancement strategies for sesame and might be applied as a decision-support tool in soil fertility management in sesame production.
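
The performance statistics named in the abstract can be computed as in the short sketch below, which follows the usual definitions (Nash-Sutcliffe efficiency for E, Willmott's index of agreement for D); the paper may use slightly different variants, and the sample soil water content values are made up.

```python
import numpy as np

# Goodness-of-fit statistics used above: R2, RMSE, N-RMSE, model efficiency E
# (Nash-Sutcliffe) and degree of agreement D (Willmott). Sample data are synthetic.
def fit_stats(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    rmse = np.sqrt(np.mean(resid**2))
    nrmse = 100.0 * rmse / obs.mean()                          # % of observed mean
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    e = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)  # Nash-Sutcliffe
    d = 1.0 - np.sum(resid**2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)  # Willmott
    return dict(R2=r2, RMSE=rmse, NRMSE=nrmse, E=e, D=d)

obs_swc = [120, 135, 150, 142, 128, 118]   # hypothetical soil water content (mm)
sim_swc = [118, 138, 147, 145, 125, 121]
print(fit_stats(obs_swc, sim_swc))
```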

Keywords: aquacrop model, normalized water productivity, nitrogen fertilizer, canopy cover, sesame

Procedia PDF Downloads 79
16663 Physical Characterization of a Watershed for Correlation with Parameters of the Thomas Hydrological Model and Its Application in the Iber Hydrodynamic Model

Authors: Carlos Caro, Ernest Blade, Nestor Rojas

Abstract:

This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration approach applicable to distributed models such as Iber, a distributed simulation model for unsteady free-surface flow. Soil samples were collected at 25 points over 15 sub-basins of the Rio Piedras basin (Boy.) and characterized geotechnically through laboratory tests. The Thomas model physically characterizes the input area with only four parameters (a, b, c, d). Establishing a measurable relationship between the geotechnical parameters and these four hydrological parameters helps to determine subsurface, groundwater, and surface flow in a more agile manner. The intention is to constrain the initial model parameters on the basis of the geotechnical characterization. In hydrogeological models of rural watersheds, calibration is an important step in the characterization of the study area. This step can require significant computational cost and time, especially if the initial parameter values before calibration are far from the geotechnical reality. A better approach to these initial values, obtained through the geotechnical characterization of the area, therefore provides an important approximation for the study, including the starting range of variation for the calibration parameters.
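
As background, the classical Thomas abcd monthly water-balance recurrence that the four parameters (a, b, c, d) enter is sketched below; the formulation follows the standard literature version, and the forcing series and parameter values are illustrative, not results of this study.

```python
import numpy as np

def thomas_abcd(P, PET, a=0.98, b=250.0, c=0.35, d=0.15, S0=50.0, G0=10.0):
    """Classical Thomas abcd monthly water balance (illustrative parameter values)."""
    S, G, Q = S0, G0, []
    for p, pet in zip(P, PET):
        W = p + S                                   # available water
        term = (W + b) / (2.0 * a)
        Y = term - np.sqrt(term**2 - W * b / a)     # evapotranspiration opportunity
        S = Y * np.exp(-pet / b)                    # end-of-month soil moisture
        avail = W - Y                               # water left for runoff/recharge
        G = (G + c * avail) / (1.0 + d)             # groundwater storage
        Q.append((1.0 - c) * avail + d * G)         # direct runoff + baseflow
    return np.array(Q)

P   = [120, 90, 60, 30, 10, 5, 15, 40, 80, 110, 130, 125]   # mm/month (synthetic)
PET = [60, 70, 90, 110, 130, 140, 135, 120, 100, 80, 65, 60]
print(thomas_abcd(P, PET).round(1))
```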

Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model

Procedia PDF Downloads 522
16662 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on predetermined criteria, the experts' preferences are elicited in order to rank and choose the most desirable suppliers. In a real business environment, attitudes and choices are formed in uncertain and indecisive situations and cannot be expressed within a crisp framework. Intuitionistic fuzzy sets (IFSs) can handle such situations well. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which determines a compromise feasible solution with respect to the conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem in an IF GDM environment. The present study develops an IF VIKOR method for a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the aggregated decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
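
For orientation, a crisp (non-fuzzy) VIKOR ranking sketch is given below; the intuitionistic fuzzy extension developed in the paper replaces these crisp ratings and aggregations with IFS operators, and the decision matrix, weights, and strategy weight v used here are illustrative assumptions.

```python
import numpy as np

# Crisp VIKOR ranking sketch (benefit criteria only); illustrative data.
X = np.array([[7.0, 8.0, 6.5],      # rows: vendors, columns: criteria scores
              [6.0, 9.0, 7.0],
              [8.0, 6.5, 8.5]])
w = np.array([0.4, 0.35, 0.25])     # criterion weights (assumed)
v = 0.5                             # weight of the "group utility" strategy

f_star, f_minus = X.max(axis=0), X.min(axis=0)            # ideal / anti-ideal values
S = ((f_star - X) / (f_star - f_minus) * w).sum(axis=1)   # group utility
R = ((f_star - X) / (f_star - f_minus) * w).max(axis=1)   # individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))      # VIKOR index

for rank, i in enumerate(np.argsort(Q), start=1):
    print(f"rank {rank}: vendor {i + 1}  Q={Q[i]:.3f}")
```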

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 156
16661 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

A class of implicit systems is known to be a more general class than explicit systems. To establish a control method for such a generalized class of systems, we adopt model predictive control, a form of optimal feedback control with a performance index that has a moving initial time and terminal time. However, model predictive control is inapplicable to systems whose state variables are not all exactly known, in other words, systems with limited measurable states. In practice, the state variables of a system are measured through its outputs, so only limited parts of them can be used directly, and the output signals are disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes process noise and sensor noise into consideration. To this purpose, we apply the model predictive control method and the unscented Kalman filter to solve the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish model predictive control with an unscented Kalman filter for nonlinear implicit systems.
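
A minimal unscented Kalman filter predict/update sketch for an explicit nonlinear system is shown below, using the standard scaled sigma points; the toy system, noise covariances, and tuning parameters are assumptions for illustration, and the implicit-system and model predictive control parts of the paper are not reproduced.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + U.T, x - U.T])              # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam)); wc = wm.copy()
    wm[0] = lam / (n + lam); wc[0] = wm[0] + 1 - alpha**2 + beta
    return pts, wm, wc

def ukf_step(x, P, z, fx, hx, Q, R):
    # predict: propagate sigma points through the process model
    pts, wm, wc = sigma_points(x, P)
    Xp = np.array([fx(p) for p in pts])
    x_pred = wm @ Xp
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(wc, Xp - x_pred))
    # update: map predicted sigma points through the measurement model
    pts, wm, wc = sigma_points(x_pred, P_pred)
    Zp = np.array([hx(p) for p in pts])
    z_pred = wm @ Zp
    S = R + sum(w * np.outer(d, d) for w, d in zip(wc, Zp - z_pred))
    Pxz = sum(w * np.outer(dx, dz)
              for w, dx, dz in zip(wc, pts - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(S)
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

# toy pendulum-like system, used only to exercise the filter (assumed, not the paper's)
dt = 0.05
fx = lambda s: np.array([s[0] + dt * s[1], s[1] - dt * 9.81 * np.sin(s[0])])
hx = lambda s: np.array([s[0]])                          # only the angle is measured
x, P = np.array([0.3, 0.0]), np.eye(2) * 0.1
Q, R = np.eye(2) * 1e-4, np.eye(1) * 1e-2
x, P = ukf_step(x, P, z=np.array([0.28]), fx=fx, hx=hx, Q=Q, R=R)
print(x, np.diag(P))
```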

Keywords: optimal control, nonlinear systems, state estimation, Kalman filter

Procedia PDF Downloads 202
16660 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flows, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, owing to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms the benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flows.

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 110
16659 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints acting on reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was carried out to investigate (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending mode and (II) crack initiation as a function of the diameter and spacing of the transverse reinforcement. The numerical investigations of crack initiation and propagation were carried out with a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of the concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is also given. The aim of this research was to demonstrate a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in finite element analysis.
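
As an illustration of the covariance matrix decomposition method mentioned above, the sketch below generates a one-dimensional lognormal random field of tensile strength with an exponential correlation function via Cholesky decomposition; the mean, coefficient of variation, and correlation length are illustrative assumptions, not the values recommended in the paper.

```python
import numpy as np

# Covariance matrix decomposition sketch: 1D lognormal random field of tensile
# strength with an exponential correlation function. Mean, COV, and correlation
# length are illustrative assumptions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 101)                 # element midpoints along the member (m)
mean_ft, cov_ft, l_c = 3.0, 0.15, 0.5          # MPa, coefficient of variation, corr. length (m)

# parameters of the underlying normal field for a lognormal target
sigma_ln = np.sqrt(np.log(1.0 + cov_ft**2))
mu_ln = np.log(mean_ft) - 0.5 * sigma_ln**2

# exponential covariance matrix and its Cholesky factor
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l_c) * sigma_ln**2
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))   # small jitter for stability

ft = np.exp(mu_ln + L @ rng.standard_normal(x.size)) # one realization of the field (MPa)
print(ft.min().round(2), ft.mean().round(2), ft.max().round(2))
```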

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 157
16658 Modeling and Optimization of a Microfluidic Electrochemical Cell for the Electro-Reduction of CO₂ to CH₃OH

Authors: Barzin Rajabloo, Martin Desilets

Abstract:

First, an electrochemical model for the reduction of CO₂ into CH₃OH is developed in which mass and charge transfer, reactions at the surface of the electrodes, and the fluid flow of the electrolyte are considered. This mathematical model is developed in COMSOL Multiphysics®, where both secondary and tertiary current distribution interfaces are coupled to consider concentrations and potentials inside different parts of the cell. Constant reaction rates are used as the fitted parameters to minimize the error between experimental data and modeling results. The model is validated through a comparison with experimental data in terms of the faradaic efficiency for the production of CH₃OH, the current density at different applied cathode potentials, and the current density at different electrolyte flow rates. The comparison between model outputs and experimental measurements shows good agreement. The model indicates higher hydrogen evolution compared with CH₃OH production, as well as a mass transfer limitation caused by the CO₂ concentration, both of which are consistent with findings in the literature. After validating the model, in the second part of the study, some design parameters of the cell, such as the cathode geometry and the catholyte/anolyte channel widths, are modified to reach better performance and higher faradaic efficiency of methanol production.
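
For reference, the faradaic efficiency for methanol is typically computed from the charge passed and the amount of product detected, CO₂ to CH₃OH being a six-electron reduction; the current, time, and product amount in the sketch below are illustrative, not measurements from this study.

```python
# Faradaic efficiency sketch for CO2 -> CH3OH (a 6-electron reduction).
# Current, time, and measured product are illustrative values only.
F = 96485.0            # C/mol, Faraday constant
n_e = 6                # electrons per CH3OH molecule
I = 0.020              # A, cell current (assumed constant)
t = 3600.0             # s, electrolysis time
n_meoh = 1.5e-5        # mol CH3OH detected in the catholyte (hypothetical)

charge_total = I * t                       # total charge passed (C)
charge_meoh = n_e * F * n_meoh             # charge that went into methanol
fe = 100.0 * charge_meoh / charge_total
print(f"Faradaic efficiency for CH3OH: {fe:.1f} %")
```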

Keywords: carbon dioxide, electrochemical reduction, methanol, modeling

Procedia PDF Downloads 109
16657 A Dynamic Neural Network Model for Accurate Detection of Masked Faces

Authors: Oladapo Tolulope Ibitoye

Abstract:

Neural networks have become prominent and widely engaged in algorithmic-based machine learning networks. They are effective in solving many day-to-day problems. Neural networks are computing systems with several interconnected nodes. One of the numerous areas of application of neural networks is object detection. This area has become prominent due to the coronavirus disease pandemic and the post-pandemic phases. Wearing a face mask in public slows the spread of the virus, according to experts. This calls for the development of a reliable and effective model for detecting face masks on people's faces during compliance checks. Existing neural network models for face mask detection are characterized by their black-box nature and large dataset requirements, and these challenges have compromised their performance. The proposed model utilizes a Faster R-CNN architecture on an Inception V3 backbone to reduce system complexity and dataset requirements. The model was trained and validated with very few datasets, and evaluation results show an overall accuracy of 96% regardless of skin tone.

Keywords: convolutional neural network, face detection, face mask, masked faces

Procedia PDF Downloads 68
16656 A Comparative Analysis of ARIMA and Threshold Autoregressive Models on Exchange Rate

Authors: Diteboho Xaba, Kolentino Mpeta, Tlotliso Qejoe

Abstract:

This paper assesses the in-sample forecasting of South African exchange rates, comparing a linear ARIMA model and a SETAR model. The study uses monthly adjusted South African exchange rate data with 420 observations. The Akaike information criterion (AIC) and the Schwarz information criterion (SIC) are used for model selection. Mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) are the error metrics used to evaluate the forecasting capability of the models. The Diebold-Mariano (DM) test is employed to check forecast accuracy in order to distinguish the forecasting performance of the two models (ARIMA and SETAR). The results indicate that both models perform well when modelling and forecasting the exchange rates, but SETAR appears to outperform ARIMA.
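
The error metrics and a one-step Diebold-Mariano comparison can be sketched as below; the "actual" series and the two forecast series are synthetic stand-ins, not the South African exchange rate data or the fitted ARIMA/SETAR forecasts.

```python
import numpy as np
from scipy import stats

# Forecast error metrics and a one-step Diebold-Mariano test; data are synthetic.
def metrics(actual, forecast):
    e = np.asarray(actual) - np.asarray(forecast)
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e**2))
    mape = 100.0 * np.mean(np.abs(e / np.asarray(actual)))
    return mae, rmse, mape

def diebold_mariano(actual, f1, f2):
    # loss differential with squared-error loss, one-step-ahead forecasts
    d = (np.asarray(actual) - f1)**2 - (np.asarray(actual) - f2)**2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / d.size)
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))      # statistic, two-sided p-value

rng = np.random.default_rng(1)
actual = 14.0 + np.cumsum(rng.normal(0, 0.1, 120))    # synthetic exchange rate
f_arima = actual + rng.normal(0, 0.12, 120)           # pretend ARIMA forecasts
f_setar = actual + rng.normal(0, 0.10, 120)           # pretend SETAR forecasts
print("ARIMA:", metrics(actual, f_arima))
print("SETAR:", metrics(actual, f_setar))
print("DM   :", diebold_mariano(actual, f_arima, f_setar))
```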

Keywords: ARIMA, error metrics, model selection, SETAR

Procedia PDF Downloads 244
16655 The Quality of Management: A Leadership Maturity Model to Leverage Complexity

Authors: Marlene Kuhn, Franziska Schäfer, Heiner Otten

Abstract:

Today's production processes experience a constant increase in complexity, paving the way for progressive forms of leadership. In customized production, individual customer requirements drive companies to adapt their manufacturing processes constantly, while the pressure for smaller lot sizes, lower costs, and faster lead times grows simultaneously. As production processes become more dynamic and complex, conventional quality management approaches show certain limitations. This paper gives an introduction to complexity science from a quality management perspective. By analyzing and evaluating different characteristics of complexity, the critical complexity parameters are identified and assessed. We found that the quality of leadership plays a crucial role when dealing with increasing complexity. Therefore, we developed a concept for qualitative leadership, customized for management within complex processes and based on a maturity model. The maturity model was then applied in industry to assess the leadership quality of several shop floor managers, with positive evaluation feedback. As a result, the maturity model proved to be a sustainable approach for leveraging the rising complexity in production processes more effectively.

Keywords: maturity model, process complexity, quality of leadership, quality management

Procedia PDF Downloads 370
16654 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic on which business has primarily been based in the past. The business model canvas is one of the most cited and used tools for defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited to product-based business. However, the basic differences between goods and services necessitate changes in business model representations when proceeding with servitization. Therefore, new knowledge is needed on how the conception of the business model and the business model canvas as its representation should be altered in servitized firms in order to better serve business developers and inter-firm co-creation. Compared to products, services are intangible and are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well this interaction succeeds. The role of service experience is even stronger in service business than in product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts. This study utilizes qualitative data gathered in workshops with ten companies from various industries. In particular, key differences between goods-dominant logic (GDL) and SDL-based business models are identified when an industrial firm proceeds with servitization. As the result of the study, an updated version of the business model canvas based on service-dominant logic is provided. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company's current service business model or for designing a new business model. It facilitates customer-focused new service design and service development, aids in the identification of development needs, and facilitates the creation of a common view of the business model. Therefore, the service business model canvas can be regarded as a boundary object, which facilitates the creation of a common understanding of the business model between the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies' business models.

Keywords: boundary object, business model canvas, managerial tool, service-dominant logic

Procedia PDF Downloads 367
16653 Simulation of a Fluid Catalytic Cracking Process

Authors: Sungho Kim, Dae Shik Kim, Jong Min Lee

Abstract:

The fluid catalytic cracking (FCC) process is one of the most important processes in the modern refinery industry, and it is the focus of this paper. Because the FCC process is difficult to model well, due to its nonlinearities and the various interactions between its process variables, rigorous process modeling of the whole FCC plant is needed for control and plant-wide optimization. In this study, a process design for the FCC plant, including the riser reactor, main fractionator, and gas processing unit, was developed. A reactor model was described based on a four-lump kinetic scheme. The main fractionator, gas processing unit, and other process units were designed to simulate real plant data using a process flowsheet simulator, Aspen PLUS. The custom reactor model was integrated with the process flowsheet simulator to develop an integrated process model.
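
A minimal sketch of a four-lump riser kinetic scheme (gas oil cracking to gasoline, light gas, and coke) is given below; the second-order gas-oil cracking assumption and the rate constants follow common literature formulations and are not the kinetics or values used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Four-lump FCC kinetics sketch: y = [gas oil, gasoline, light gas, coke] mass fractions.
# Gas oil cracks with second-order kinetics and gasoline with first-order, as is common
# in the literature; rate constants are illustrative, not from the paper.
k1, k2, k3 = 0.30, 0.06, 0.02     # gas oil -> gasoline, light gas, coke
k4, k5 = 0.01, 0.005              # gasoline -> light gas, coke

def four_lump(t, y):
    y1, y2, y3, y4 = y
    r1 = (k1 + k2 + k3) * y1**2
    return [-r1,
            k1 * y1**2 - (k4 + k5) * y2,
            k2 * y1**2 + k4 * y2,
            k3 * y1**2 + k5 * y2]

sol = solve_ivp(four_lump, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0])   # 5 s residence time
y_out = sol.y[:, -1]
print("riser outlet lumps:", dict(zip(["gas_oil", "gasoline", "light_gas", "coke"],
                                      y_out.round(3))))
```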

Keywords: fluid catalytic cracking, simulation, plant data, process design

Procedia PDF Downloads 457
16652 Neuron Dynamics of Single-Compartment Traub Model for Hardware Implementations

Authors: J. C. Moctezuma, V. Breña-Medina, Jose Luis Nunez-Yanez, Joseph P. McGeehan

Abstract:

In this work, we perform a bifurcation analysis for a single-compartment representation of the Traub model, one of the most important conductance-based models. The analysis focuses on two principal parameters: injected current and leakage conductance. Stable and unstable solutions are explored; Hopf bifurcations and the interpretation of firing frequency as the current varies are also examined. This analysis allows control of the neuron dynamics and of the neuron response when these parameters change. Such analysis is particularly important for several applications, such as tuning parameters in learning processes, neuron excitability tests, and measuring the bursting properties of the neuron. Finally, hardware implementation results were developed to corroborate the analysis.

Keywords: Traub model, Pinsky-Rinzel model, Hopf bifurcation, single-compartment models, bifurcation analysis, neuron modeling

Procedia PDF Downloads 323
16651 Education of Purchasing Professionals in Austria: Competence Based View

Authors: Volker Koch

Abstract:

This paper deals with the education of purchasing professionals in Austria. For this education, equivalent and measurable criteria are collected in order to enable a comparison, and the comparison reveals the problem. To make the aforementioned comparison possible, methodologies such as the KODE Competence Atlas or presentations in matrix form are used. The result shows the content taught and whether there are any similarities or interesting differences among current Austrian purchasing training programmes. The competencies learned by purchasing professionals are also illustrated in the study results.

Keywords: competencies, education, purchasing professional, technological-oriented

Procedia PDF Downloads 297
16650 Assessment of the Thermal and Mechanical Properties of Bio-based Composite Materials for Thermal Insulation

Authors: Nega Tesfie Asfaw, Rafik Absi, Labouda B. A, Ikram El Abbassi

Abstract:

Composite materials came to the fore a few decades ago because of their superior insulation performance. Recycling natural fiber composites and reinforcing waste materials with natural fibers are further steps towards conserving resources and the environment. This paper reviews the thermal properties (thermal conductivity, effusivity, and diffusivity) and mechanical properties (compressive strength, flexural strength, and tensile strength) of bio-composite materials for thermal insulation in the construction industry. For several years, the development of the building materials industry has placed special emphasis on bio-sourced materials. According to recent studies, most natural fibers have good thermal insulating qualities and good mechanical properties. Most research has used experimental methods to determine the thermal and mechanical performance of bio-composite materials in construction. The results of the reviewed studies show that these natural fibers allow energy consumption in a building to be optimized, and indicate that density, porosity, fiber percentage, the direction of heat flow relative to the fiber orientation, and the shape of the specimen are the main elements that limit thermal performance, while density, porosity, fiber type, fiber length, orientation and weight percentage loading, fiber-matrix adhesion, the choice of polymer matrix, and the presence of voids are the main elements that limit the mechanical performance of the insulation material. Based on the reviewed results, moss fibers (0.034 W/(m·K)), wood fiber (0.043 W/(m·K)), wheat straw (0.046 W/(m·K)), and corn husk fibers (0.046 W/(m·K)) are the most promising solutions for energy efficiency in the construction industry, with interesting insulation properties and acceptable mechanical properties. Finally, the thermal performance rates of the various fibers reviewed in this article are analyzed for insulation applications in the construction sector; due to its high porosity, Typha australis fiber combined with clay showed the best thermal performance rate, at 89.03%.
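
The three thermal properties reviewed above are linked through density and specific heat capacity by the standard relations a = k/(ρ·cp) and e = √(k·ρ·cp); the sketch below applies them to the conductivities quoted above, with assumed (not measured) density and specific heat values.

```python
import math

# Standard relations between the thermal properties discussed above:
#   diffusivity a = k / (rho * cp),  effusivity e = sqrt(k * rho * cp).
# Density and specific heat values are illustrative placeholders, not measured data.
materials = {
    # name: (k [W/(m.K)], rho [kg/m^3], cp [J/(kg.K)])
    "moss fiber (assumed rho, cp)":  (0.034, 120.0, 1500.0),
    "wood fiber (assumed rho, cp)":  (0.043, 160.0, 1600.0),
    "wheat straw (assumed rho, cp)": (0.046, 150.0, 1400.0),
}
for name, (k, rho, cp) in materials.items():
    diffusivity = k / (rho * cp)              # m^2/s
    effusivity = math.sqrt(k * rho * cp)      # J/(m^2.K.s^0.5)
    print(f"{name}: a = {diffusivity:.2e} m^2/s, e = {effusivity:.1f} J/(m^2.K.s^0.5)")
```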

Keywords: bio-based materials, thermal conductivity, compressive strength, thermal performance

Procedia PDF Downloads 29
16649 Polymer Mediated Interaction between Grafted Nanosheets

Authors: Supriya Gupta, Paresh Chokshi

Abstract:

Polymer-particle interactions can be effectively utilized to produce composites that possess physicochemical properties superior to those of the neat polymer. The incorporation of fillers with dimensions comparable to the polymer chain size produces composites with extraordinary properties owing to a very high surface to volume ratio. The dispersion of nanoparticles is achieved by inducing steric repulsion, realized by grafting the particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With the focus on the incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer-mediated interparticle potential for two nanosheets grafted with polymeric chains. The self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field under equilibrium. Unlike continuum models, SCFT is built from a microscopic description, taking into account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in a matrix of polymeric chains of dissimilar chemistry to that of the grafted chains. The interaction potential is repulsive at short separation and shows depletion attraction for moderate separations, induced by high grafting density. It is found that the strength of the attraction well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers, with one of the blocks being chemically identical to the free polymeric chains. The interplay between the enthalpic interaction between the dissimilar species and the entropy of the free chains gives rise to a rich behavior in the interaction potential curve, obtained for two separate cases of the free chains being chemically similar to either the grafted block or the free block of the grafted diblock chains.

Keywords: clay nanosheets, polymer brush, polymer nanocomposites, self-consistent field theory

Procedia PDF Downloads 252
16648 Development of Groundwater Management Model Using Groundwater Sustainability Index

Authors: S. S. Rwanga, J. M. Ndambuki, Y. Woyessa

Abstract:

Development of a groundwater management model is an important step in the exploitation and management of any groundwater aquifer, as it assists in the long-term sustainable planning of the resource. The current study was conducted in the central Limpopo province of South Africa with the overall objective of determining how much water can be withdrawn from the aquifer without producing irreversible impacts on the groundwater quantity, hence developing a model which can sustainably protect the aquifer. The development was done through the computation of a Groundwater Sustainability Index (GSI). Values of GSI close to unity and above indicate overexploitation; in this study, an index of 0.8 was considered to indicate overexploitation. The results indicated that there is potential for higher abstraction rates compared to the current abstraction rates. The GSI approach can be used in the management of a groundwater aquifer to develop the resource sustainably, and it also provides water managers and policy makers with fundamental information on where future water developments can be carried out.

Keywords: development, groundwater, groundwater sustainability index, model

Procedia PDF Downloads 170
16647 Residual Life Estimation Based on Multi-Phase Nonlinear Wiener Process

Authors: Hao Chen, Bo Guo, Ping Jiang

Abstract:

Residual life (RL) estimation based on a multi-phase nonlinear Wiener process is studied in this paper; it is significant for complicated products with small sample sizes. Firstly, a nonlinear Wiener model with a random parameter is introduced, and a multi-phase nonlinear Wiener model is proposed to model the degradation processes of products that are nonlinear and separated into different phases. Then, the multi-phase RL probability density function based on the presented model is derived approximately in closed form, and parameter estimation is achieved with the method of maximum likelihood estimation (MLE). Finally, the method is applied to estimate the RL of high voltage pulse capacitors. Compared with three other models in terms of the log-likelihood function (Log-LF) and the Akaike information criterion (AIC), the results show that the proposed degradation model captures the degradation process of high voltage pulse capacitors in a better way and provides a more reliable result.
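
A Monte Carlo sketch of residual life for a single-phase nonlinear Wiener degradation model is given below to illustrate the first-passage idea; the drift, diffusion, nonlinearity, threshold, and current state are assumptions, and the paper's closed-form multi-phase RL density and MLE procedure are not reproduced.

```python
import numpy as np

# Monte Carlo sketch of residual life (RL) for a single-phase nonlinear Wiener
# degradation model X(t) = x0 + mu * t**b + sigma * B(t), with failure at X >= D.
# Parameters and the current degradation level are illustrative assumptions.
rng = np.random.default_rng(42)
mu, sigma, b = 0.8, 0.3, 1.2        # drift, diffusion, time-transformation exponent
x_now, t_now, D = 3.0, 2.0, 10.0    # current level, current time, failure threshold
dt, t_max, n_paths = 0.02, 60.0, 2000

steps = int(t_max / dt)
rl = np.full(n_paths, np.nan)
for i in range(n_paths):
    x, t = x_now, t_now
    for _ in range(steps):
        x += mu * ((t + dt)**b - t**b) + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= D:
            rl[i] = t - t_now                   # first passage time past the threshold
            break

rl = rl[~np.isnan(rl)]
print(f"mean RL ~ {rl.mean():.2f}, 10th/90th percentiles ~ "
      f"{np.percentile(rl, 10):.2f}/{np.percentile(rl, 90):.2f}")
```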

Keywords: multi-phase nonlinear Wiener process, residual life estimation, maximum likelihood estimation, high voltage pulse capacitor

Procedia PDF Downloads 453
16646 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments

Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing

Abstract:

Simulation modeling can be used to solve real-world problems, as it provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required to be able to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin hypercube sampling (LHS) was proposed to be combined with existing central composite design (CCD) samples to improve the performance of the CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to the CCD samples can help capture surface curvature characteristics. A suitable number of LHS sample points should be considered in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
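
A small sketch of augmenting a two-factor central composite design with Latin hypercube samples in coded units is shown below; the factor count, axial distance, and number of LHS points are illustrative choices rather than those used for the CO2 liquefaction case study.

```python
import numpy as np
from scipy.stats import qmc

# Augmenting a two-factor Central Composite Design with Latin Hypercube samples
# in coded units [-1, 1]; factor count, alpha, and LHS size are illustrative.
alpha = np.sqrt(2.0)                            # rotatable axial distance for 2 factors
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
center = np.zeros((1, 2))
ccd = np.vstack([factorial, axial, center])

lhs = qmc.LatinHypercube(d=2, seed=0).random(n=8)   # 8 extra space-filling points
lhs_coded = qmc.scale(lhs, [-1, -1], [1, 1])        # rescale [0,1]^2 -> [-1,1]^2

design = np.vstack([ccd, lhs_coded])
print(f"{len(ccd)} CCD points + {len(lhs_coded)} LHS points = {len(design)} runs")
```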

Keywords: central composite design, CO2 liquefaction, latin hypercube sampling, simulation-based optimization

Procedia PDF Downloads 166
16645 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and the micro-climate in urban areas, by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection and Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bicycle. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. These variables fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or the proximity and density of various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings, and ground, several buffer sizes around the measurement points were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlation with air temperature are 5 m for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (screened with a Pearson correlation matrix, |r| < 0.7, and VIF < 5), integrating a stepwise selection algorithm. Moreover, holdout cross-validation is performed (80% training, 20% testing), due to its ability to detect over-fitting of the multiple regression, although multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include the distance to subway stations, the distance to water areas, NDVI, the digital elevation model, the sky view factor, the average vegetation density, and the building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale in order to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
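
The screening and holdout workflow described above can be sketched as follows; the predictor table is synthetic stand-in data rather than the Lyon dataset, and the stepwise selection step is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Screening (|r| < 0.7, VIF < 5) and 80/20 holdout validation; synthetic data only.
rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "surface_temp": rng.normal(30, 4, n),          # e.g. satellite-derived LST
    "ndvi": rng.uniform(0.0, 0.8, n),
    "building_density": rng.uniform(0.0, 1.0, n),
    "dist_water": rng.uniform(0.0, 2000.0, n),
})
y = 10 + 0.5 * X["surface_temp"] - 3 * X["ndvi"] + 2 * X["building_density"] \
    + rng.normal(0, 0.3, n)

# drop one variable of any pair with |r| >= 0.7, then any variable with VIF >= 5
corr = X.corr().abs()
keep = [c for i, c in enumerate(X.columns) if not (corr[c].iloc[:i] >= 0.7).any()]
Xc = sm.add_constant(X[keep])
vif = {c: variance_inflation_factor(Xc.values, i)
       for i, c in enumerate(Xc.columns) if c != "const"}
Xk = X[[c for c, v in vif.items() if v < 5]]

X_tr, X_te, y_tr, y_te = train_test_split(Xk, y, test_size=0.2, random_state=1)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"holdout R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f} degC")
```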

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 137
16644 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method

Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili

Abstract:

The demands of communication and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are the simplicity of fabrication, the low cost of manufacturing, and the combination of free-space power combining and scanning without a phase shifter. Modeling an integrated active antenna involves coupling the electromagnetic model and the transport model, which interact at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MoM-GEC). The Method of Moments is one of the most common and powerful numerical techniques for resolving electromagnetic problems; within this class of numerical techniques, MoM is the dominant technique for solving the Maxwell and transport integral equations of an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral-method formulation based on transposing the field problem into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent the integral equations by circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structures for describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM modeling of the GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows an equivalent EM scheme translating the considered electromagnetic equations to be defined. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation in which the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution, and the electric field distribution. From the electromagnetic calculations, we present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping the variation of the current evaluated by the MoM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements while providing a sufficient global model of the MESFET transistor.

Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method

Procedia PDF Downloads 198
16643 An Assessment of the Temperature Change Scenarios Using RS and GIS Techniques: A Case Study of Sindh

Authors: Jan Muhammad, Saad Malik, Fadia W. Al-Azawi, Ali Imran

Abstract:

In the era of climate variability, rising temperatures are the most significant aspect. In this study, PRECIS model data and observed data are used to assess the temperature change scenarios of Sindh province during the first half of the present century. Observed data from various meteorological stations of Sindh are the primary source for temperature change detection. The current scenario (1961-1990) and the future one (2010-2050) are simulated by the PRECIS regional climate model at a spatial resolution of 25 km × 25 km. A regional climate model (RCM) can yield reasonably suitable projections for climate scenarios. The main objective of the study is to map the simulated temperatures obtained from the PRECIS climate model and to compare them with observed temperatures. The analysis covers all districts of Sindh in order to obtain a more precise picture of the temperature change scenarios. According to the results, the temperature is likely to increase by 1.5-2.1°C by 2050, compared to the baseline temperature of 1961-1990. The model gives more accurate values in the northern districts of Sindh than in the coastal belt. All districts of Sindh province exhibit an increasing trend in the mean temperature scenarios, and each decade appears to be warmer than the previous one. An understanding of the change in temperatures is vital for various sectors such as weather forecasting, water, agriculture, and health.

Keywords: PRECIS model, real observed data, ArcGIS, interpolation techniques

Procedia PDF Downloads 249
16642 Hydrodynamics of Dual Hybrid Impeller of Stirred Reactor Using Radiotracer

Authors: Noraishah Othman, Siti K. Kamarudin, Norinsan K. Othman, Mohd S. Takriff, Masli I. Rosli, Engku M. Fahmi, Mior A. Khusaini

Abstract:

The present work describes the hydrodynamic mixing characteristics of two dual hybrid impellers, each consisting of a radial and an axial impeller, using a radiotracer technique. In the Type A mixer, a Rushton turbine is mounted above a pitched blade turbine (PBT) on a common shaft; in the Type B mixer, the Rushton turbine is mounted below the PBT. The objectives of this paper are to investigate the residence time distribution (RTD) of the two hybrid mixers and to represent the respective mixers by RTD models. Five radiotracer experiments were carried out for each type of mixer using Tc-99m as the tracer, with NaI(Tl) scintillation detectors used for tracer detection. The results showed that both the mixers-in-parallel model and the mixers-in-series-with-exchange model can represent the flow in the Type A mixer, whereas only the mixers-in-parallel model can represent the Type B mixer better than the other models. In conclusion, the Type A configuration, with the Rushton impeller above the PBT, reduced the presence of dead zones in the mixer significantly more than Type B.

Keywords: hybrid impeller, residence time distribution (RTD), radiotracer experiments, RTD model

Procedia PDF Downloads 358
16641 A Mathematical Agent-Based Model to Examine Two Patterns of Language Change

Authors: Gareth Baxter

Abstract:

We use a mathematical model of language change to examine two recently observed patterns of language change: one in which most speakers change gradually, following the mean of the community change, and one in which most individuals use predominantly one variant or another, and change rapidly if they change at all. The model is based on Croft’s Utterance Selection account of language change, which views language change as an evolutionary process, in which different variants (different ‘ways of saying the same thing’) compete for usage in a population of speakers. Language change occurs when a new variant replaces an older one as the convention within a given population. The present model extends a previous simpler model to include effects related to speaker aging and interspeaker variation in behaviour. The two patterns of individual change (one more centralized and the other more polarized) were recently observed in historical language changes, and it was further observed that slower changes were more associated with the centralized pattern, while quicker changes were more polarized. Our model suggests that the two patterns of change can be explained by different balances between the preference of speakers to use one variant over another and the degree of accommodation to (propensity to adapt towards) other speakers. The correlation with the rate of change appears naturally in our model, and results from the fact that both differential weighting of variants and the degree of accommodation affect the time for change to occur, while also determining the patterns of change. This work represents part of an ongoing effort to examine phenomena in language change through the use of mathematical models. This offers another way to evaluate qualitative explanations that cannot be practically tested (or cannot be tested at all) in a real-world, large-scale speech community.
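
A toy agent-based sketch of two competing variants under interactional accommodation is given below; the update rule, population size, bias, and accommodation parameters are deliberately much simpler than the Utterance Selection model used in the paper and are illustrative only.

```python
import numpy as np

# Toy agent-based sketch of two competing variants: each speaker holds a usage
# probability for variant B, produces tokens in a conversation, and accommodates
# towards what they hear. Bias and accommodation values are illustrative.
rng = np.random.default_rng(3)
n_speakers, n_rounds, tokens = 50, 400, 10
bias = 0.02          # differential weighting in favour of variant B
accom = 0.05         # degree of accommodation towards the interlocutor
p = rng.uniform(0.0, 0.1, n_speakers)    # initial probability of using variant B

trajectory = []
for _ in range(n_rounds):
    i, j = rng.choice(n_speakers, size=2, replace=False)   # a conversation pair
    heard_by_i = rng.binomial(tokens, min(p[j] + bias, 1.0)) / tokens
    heard_by_j = rng.binomial(tokens, min(p[i] + bias, 1.0)) / tokens
    p[i] += accom * (heard_by_i - p[i])    # accommodate towards perceived usage
    p[j] += accom * (heard_by_j - p[j])
    trajectory.append(p.mean())

print(f"community mean usage of B: start={trajectory[0]:.2f}, end={trajectory[-1]:.2f}")
```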

Keywords: agent based modeling, cultural evolution, language change, social behavior modeling, social influence

Procedia PDF Downloads 235
16640 Effects of Screen Time on Children from a Systems Engineering Perspective

Authors: Misagh Faezipour

Abstract:

This paper explores the effects of screen time on children from a systems engineering perspective. We reviewed literature from several related works on the effects of screen time on children to explore all factors and interrelationships that would impact children who are subjected to long screen times. Factors such as the child's age, parent attitudes, the influence of parents' own screen time, the amount of time children spend with technology, psychosocial and physical health outcomes, reduced mental imagery, problem-solving and adaptive thinking skills, obesity, unhealthy diet, depressive symptoms, health problems, disruption in sleep behavior, decrease in physical activities, problematic relationships with mothers, and language, social, and emotional delays are examples of factors that could be either a cause or an effect of screen time. A systems engineering perspective is used to explore all the factors and factor relationships that were discovered through the literature. A causal model is used to provide a graphical representation of these factors and their relationships. Through the causal model, the factors with the highest impacts can be identified. Future work would be to develop a system dynamics model to view the dynamic behavior of the relationships and observe the impact of changes in different factors in the model. Different changes to the inputs of the model, such as a healthier diet or the obesity rate, would depict the effect of screen time in the model and portray the effect on children's health and other important factors, so that the model also works as a decision-support tool.

Keywords: children, causal model, screen time, systems engineering, system dynamics

Procedia PDF Downloads 144
16639 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating fine suspended sediment in surface water systems like streams and creeks. Fine suspended sediment is highly mobile, and many contaminants that may have been released by any sort of catchment disturbance attach themselves to these sediments. Therefore, knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood landform evolution model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess sediment movement in tropical streams on account of a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek through the years by simulating the model for future years. The accuracy of future simulations depends on the calibration and validation of the model against past and present events. Calibration and validation of the model involve finding a combination of model parameters which, when applied and simulated, gives model outputs similar to those observed at the real site for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it to study the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, to analyse how closely the model output matched the discharge and sediment data at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the real site values of the catchment. It was then validated by running the model for a different set of events and checking that the model gave results similar to the real site values. The outcomes demonstrated that, while the model can be calibrated to a great extent for hydrology (discharge output) throughout the year, the sediment output calibration could be slightly improved by having the ability to change parameters to take into account the seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 84
16638 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro-machining (LMM) of a silicon carbide (SiCp) reinforced aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, because of the intense heat generated, a layer forms on the workpiece surface; this layer is called the recast layer and is detrimental to the surface quality of the component. The recast layer needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered as the significant manufacturing criteria that determine the performance of the machining process in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer in relation to the machining parameters using response surface methodology (RSM), and the formulated mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer are conflicting, the problem was formulated as a multi-objective optimization problem, and an evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the model established by RSM. Subsequently, this algorithm was also used to obtain the Pareto-optimal set of solutions, which provides a detailed basis for selecting the optimal solution. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.

Keywords: laser micro machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 345
16637 Modeling and Optimization of Micro-Grid Using Genetic Algorithm

Authors: Mehrdad Rezaei, Reza Haghmaram, Nima Amjadi

Abstract:

This paper proposes an operating and cost optimization model for a micro-grid (MG). The model takes into account the emission costs of NOx, SO2, and CO2, together with operation and maintenance costs. Wind turbines (WT), photovoltaic (PV) arrays, micro turbines (MT), fuel cells (FC), and diesel engine generators (DEG) with different capacities are considered in this model. The aim of the optimization is to minimize the operating cost subject to constraints on supply-demand balance and the safety of the system. The proposed genetic algorithm (GA), with the ability to fine-tune its own settings, is used to optimize the micro-grid operation.
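
A toy genetic-algorithm dispatch sketch in the spirit of the model is shown below; the unit capacities, cost coefficients (with an emission surcharge folded in), demand, and GA settings are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Toy GA dispatch sketch: choose power set-points for a few MG units to meet a
# demand at minimum fuel + emission cost. All numbers are illustrative assumptions.
rng = np.random.default_rng(7)
p_max = np.array([30.0, 25.0, 40.0, 20.0])        # MT, FC, DEG, PV available (kW)
cost = np.array([0.45, 0.30, 0.60, 0.05])         # $/kWh incl. emission surcharge
demand = 80.0                                     # kW

def fitness(pop):
    supply = pop.sum(axis=1)
    return pop @ cost + 10.0 * np.abs(supply - demand)   # cost + imbalance penalty

pop = rng.uniform(0.0, p_max, size=(60, 4))        # initial population of set-points
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:30]]              # truncation selection
    kids = (parents[rng.integers(0, 30, 60)] + parents[rng.integers(0, 30, 60)]) / 2
    kids += rng.normal(0.0, 1.0, kids.shape)       # Gaussian mutation
    pop = np.clip(kids, 0.0, p_max)                # respect unit limits

best = pop[np.argmin(fitness(pop))]
print("best dispatch (kW):", best.round(1), "cost ($/h):", (best @ cost).round(2))
```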

Keywords: micro-grid, optimization, genetic algorithm, MG

Procedia PDF Downloads 512