Search results for: reduced order macro models
21915 Field Deployment of Corrosion Inhibitor Developed for Sour Oil and Gas Carbon Steel Pipelines
Authors: Jeremy Moloney
Abstract:
A major oil and gas operator in western Canada producing approximately 50,000 BOE per day of sour fluids was experiencing increased water production along with decreased oil production over several years. The higher water volumes being produced meant an increase in the operator's incumbent corrosion inhibitor (CI) chemical requirements but with reduced oil production revenues. Thus, a cost-effective corrosion inhibitor solution was sought to deliver enhanced corrosion mitigation of the carbon steel pipeline infrastructure but at reduced chemical injection dose rates. This paper presents the laboratory work conducted on the development of a corrosion inhibitor under the operator's simulated sour operating conditions and then subsequent field testing of the product. The new CI not only provided extremely good levels of general and localized corrosion inhibition and outperformed the incumbent CI under the laboratory test conditions but did so at vastly lower concentrations. In turn, the novel CI product facilitated field chemical injection rates to be optimized and reduced by 40% compared with the incumbent whilst maintaining superior corrosion protection, resulting in significant cost savings and associated sustainability benefits for the operator.
Keywords: carbon steel, sour gas, hydrogen sulphide, localized corrosion, pitting, corrosion inhibitor
Procedia PDF Downloads 86

21914 Efficient Deep Neural Networks for Real-Time Strawberry Freshness Monitoring: A Transfer Learning Approach
Authors: Mst. Tuhin Akter, Sharun Akter Khushbu, S. M. Shaqib
Abstract:
A real-time system architecture is highly effective for monitoring and detecting various damaged products or fruits that may deteriorate over time or become infected with diseases. Deep learning models have proven to be effective in building such architectures. However, building a deep learning model from scratch is a time-consuming and costly process. A more efficient solution is to utilize deep neural network (DNN) based transfer learning models in the real-time monitoring architecture. This study focuses on using a novel strawberry dataset to develop effective transfer learning models for the proposed real-time monitoring system architecture, specifically for evaluating and detecting strawberry freshness. Several state-of-the-art transfer learning models were employed, and the best-performing model was found to be Xception, demonstrating higher performance across evaluation metrics such as accuracy, recall, precision, and F1-score.
Keywords: strawberry freshness evaluation, deep neural network, transfer learning, image augmentation
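The evaluation metrics the abstract compares (accuracy, precision, recall, F1-score) reduce to confusion-matrix arithmetic. A minimal sketch with hypothetical counts, not results from the strawberry dataset:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1-score from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a fresh/rotten split (illustrative only, not the paper's data)
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=5, tn=95)
```

Multi-class versions average these per-class scores (macro or weighted), which is how such comparisons across transfer-learning backbones are usually reported.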
Procedia PDF Downloads 91

21913 A Reduced Distributed State Space for Modular Petri Nets
Authors: Sawsen Khlifa, Chiheb Ameur Abid, Belhassan Zouari
Abstract:
Modular verification approaches have been widely attempted to cope with the well-known state explosion problem. This paper deals with the modular verification of modular Petri nets. We propose a reduced version of the modular state space of a given modular Petri net. The new structure allows the creation of smaller modular graphs. Each one describes the behavior of the corresponding module and outlines some global information. Hence, this version helps to overcome the explosion problem and to use less memory space. In this condensed structure, the verification of some generic properties concerning one module is limited to the exploration of its associated graph.
Keywords: distributed systems, modular verification, Petri nets, state space explosion
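The state-explosion problem the abstract targets can be made concrete with a toy sketch of plain (non-modular) reachability-set construction, the structure whose size the modular version is designed to reduce. The two-place net below is an invented example, not one from the paper:

```python
from collections import deque

def reachable_markings(initial, transitions):
    """Breadth-first enumeration of the reachability set of a place/transition net.
    `initial` is a tuple of token counts per place; each transition is a
    (consume, produce) pair of per-place token vectors."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions:
            # A transition is enabled if every place holds enough tokens
            if all(m[i] >= consume[i] for i in range(len(m))):
                nxt = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Toy two-place net: t1 moves a token p0 -> p1, t2 moves it back
net = [((1, 0), (0, 1)), ((0, 1), (1, 0))]
states = reachable_markings((1, 0), net)
```

For composed modules the full product of such sets blows up multiplicatively, which is why keeping one reduced graph per module plus shared global information saves memory.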
Procedia PDF Downloads 117

21912 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck
Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu
Abstract:
In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, which is a force that opposes a body's motion through the air. Aerodynamic drag and thus fuel consumption increase rapidly at speeds above 90 km/h. It is desirable to minimize fuel consumption. Aerodynamic drag reduction in highway driving is the best approach to minimize fuel consumption and to reduce the negative impacts of greenhouse gas emissions on the natural environment. Fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely, cab roof and rear (trailer tail) fairings. The aerodynamic effects of adding these add-on devices were subsequently investigated. To accomplish this, two 3D CAD models of the Mazda truck were designed using the Design Modeler: one with these add-on devices and the other without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. In order to simulate highway driving conditions, the tests were simulated at a speed of 100 km/h. The effects of these devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. By adding the cab roof and rear (trailer tail) fairings, the simulations show a significant reduction in aerodynamic drag at higher speed. The results show that the greatest drag reduction is obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer when moving at highway speed.
The rear fairing achieved this by streamlining the turbulent airflow, thereby delaying airflow separation. For lower speeds, there were no significant differences in the drag coefficients of the two models (original and modified). The results show that these devices can be adopted to improve the aerodynamic efficiency of the Mazda T3500 truck at highway speeds.
Keywords: aerodynamic drag, computational fluid dynamics, Fluent, fuel consumption
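The drag-power relationship behind these figures can be sketched numerically. The air density, drag coefficients, and frontal area below are rough illustrative values for a medium truck, not the study's computed coefficients:

```python
def drag_force(rho, cd, area, v):
    """Aerodynamic drag force F = 1/2 * rho * Cd * A * v^2 (SI units)."""
    return 0.5 * rho * cd * area * v ** 2

def drag_power(rho, cd, area, v):
    """Engine power needed just to overcome drag at steady speed v: P = F * v."""
    return drag_force(rho, cd, area, v) * v

# Hypothetical figures for a medium truck (illustrative, not the paper's values):
v = 100 / 3.6                                   # 100 km/h in m/s
p_base = drag_power(1.225, 0.70, 6.0, v)        # without fairings
p_mod = drag_power(1.225, 0.60, 6.0, v)         # with cab-roof and tail fairings
saving = 1 - p_mod / p_base                     # fractional drag-power saving
```

Because power scales with v cubed, the same Cd reduction that matters at 100 km/h is nearly invisible at city speeds, consistent with the abstract's low-speed finding.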
Procedia PDF Downloads 140

21911 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain
Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki
Abstract:
The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. 
Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy
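The retailer mechanism described (autoregressive demand, a moving-average forecast, and order-up-to replenishment) can be sketched as a small simulation. All parameter values here are illustrative assumptions; the variance ratio merely demonstrates the amplification being measured, not the paper's results or its absorption strategy:

```python
import random

def simulate_bullwhip(n=20000, mu=100.0, rho=0.7, sigma=10.0, lead=2, window=5, seed=1):
    """Single-echelon order-up-to policy with a moving-average demand forecast.
    Returns Var(orders)/Var(demand); a ratio above 1 indicates the bullwhip effect."""
    rng = random.Random(seed)
    demand, orders = [], []
    d = mu
    prev_level = None
    for _ in range(n):
        d = mu + rho * (d - mu) + rng.gauss(0.0, sigma)   # AR(1) demand process
        demand.append(d)
        if len(demand) >= window:
            forecast = sum(demand[-window:]) / window
            level = forecast * (lead + 1)                  # order-up-to level
            if prev_level is not None:
                orders.append(level - prev_level + d)      # order = S_t - S_{t-1} + d_t
            prev_level = level

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(orders) / var(demand)

ratio = simulate_bullwhip()
```

A bullwhip-absorption strategy in this framing would smooth the `orders` series at the retailer, trading retailer inventory variance for lower upstream order variance.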
Procedia PDF Downloads 76

21910 Energy Efficient Refrigerator
Authors: Jagannath Koravadi, Archith Gupta
Abstract:
In a world with constantly growing energy prices and growing concerns about the global climate changes caused by increased energy consumption, it is becoming more and more essential to save energy wherever possible. Refrigeration systems are among the major bulk energy-consuming systems nowadays in the industrial, residential, and household sectors. Refrigeration systems with considerable cooling requirements consume a large amount of electricity and thereby contribute greatly to running costs. Therefore, a great deal of attention is being paid throughout the world to improving the performance of refrigeration systems. The Coefficient of Performance (COP) of a refrigeration system is used for determining the system's overall efficiency. The operating cost to the consumer and the overall environmental impact of a refrigeration system in turn depend on the COP or efficiency of the system. The COP of a refrigeration system should therefore be as high as possible. Slight modifications in the technical elements of modern refrigeration systems have the potential to reduce energy consumption, and improvements in simple operational practices with minimal expenses can have a beneficial impact on the COP of the system. Thus, the challenge is to determine the changes that can be made in a refrigeration system in order to improve its performance, reduce operating costs and power requirements, improve environmental outcomes, and achieve a higher COP. The opportunity here, and a better solution to this challenge, is to incorporate energy-saving modifications into conventional refrigeration systems.
Energy efficiency, in addition to improvement of the COP, can deliver a range of savings such as reduced operation and maintenance costs, improved system reliability, improved safety, increased productivity, better matching of refrigeration load and equipment capacity, reduced resource consumption and greenhouse gas emissions, a better working environment, and reduced energy costs. The present work aims at fabricating a working model of a refrigerator that provides effective heat recovery from superheated refrigerant with the help of an efficient de-superheater. The temperatures of the refrigerant and water in the de-superheater at different intervals of time are measured to determine the quantity of waste heat recovered. It is found that the COP of the system improves by about 6% with the de-superheater, the power input to the compressor decreases by 4%, and the refrigeration capacity increases by 4%.
Keywords: coefficient of performance, de-superheater, refrigerant, refrigeration capacity, heat recovery
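A minimal sketch of the COP arithmetic involved: COP is the ratio of refrigeration capacity to compressor power input, so a capacity gain and a power reduction compound. The baseline figures below are hypothetical, not measurements from the fabricated model:

```python
def cop(q_refrigeration_w, w_compressor_w):
    """Coefficient of performance of a refrigeration cycle: COP = Q_evap / W_comp."""
    return q_refrigeration_w / w_compressor_w

# Hypothetical baseline (illustrative only): 300 W refrigeration capacity, 120 W input
base = cop(300.0, 120.0)
# Combined effect of a de-superheater-style change: +4% capacity, -4% compressor power
improved = cop(300.0 * 1.04, 120.0 * 0.96)
gain = improved / base - 1   # fractional COP gain
```

Note that the fractional COP gain depends only on the two percentage changes, (1 + 0.04) / (1 - 0.04) - 1, not on the baseline values chosen.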
Procedia PDF Downloads 320

21909 Multiscale Analysis of Shale Heterogeneity in Silurian Longmaxi Formation from South China
Authors: Xianglu Tang, Zhenxue Jiang, Zhuo Li
Abstract:
Characterization of shale multiscale heterogeneity is an important part of evaluating the size and spatial distribution of shale gas reservoirs in sedimentary basins. The origin of shale heterogeneity has always been a hot research topic, for it determines the description of shale micro-scale characteristics and the prediction of macro-scale reservoir quality. Shale multiscale heterogeneity was discussed based on thin section observation, FIB-SEM, QEMSCAN, TOC, XRD, mercury intrusion porosimetry (MIP), and nitrogen adsorption analysis of 30 core samples from the Silurian Longmaxi formation. Results show that shale heterogeneity can be characterized by pore structure and mineral composition. The heterogeneity of shale pores is shown by pores of different sizes at the nm–μm scale. Macropores (pore diameter > 50 nm) account for a larger percentage of pore volume than mesopores (pore diameter between 2 and 50 nm) and micropores (pore diameter < 2 nm). However, they have a lower specific surface area than mesopores and micropores. Fractal dimensions of the pores derived from nitrogen adsorption data are higher than 2.7, while those from MIP data are higher than 2.8, showing an extremely complex pore structure. This complexity in pore structure is mainly due to the organic matter and clay minerals with complex pore network structures, and diagenesis makes it more complicated. The heterogeneity of shale minerals is shown by mineral grains, laminae, and different lithologies at the nm–km scale under continuously changing horizons. Through analyzing the change of mineral composition at each scale, the random arrangement of minerals in equal proportions, seasonal climate changes, large changes of sedimentary environment, and provenance supply are considered to be the main causes of shale mineral heterogeneity from the microscopic to the macroscopic scale. Due to the scale effect, the change of shale multiscale heterogeneity is a discontinuous process, and there is a transformation boundary between homogeneous and inhomogeneous.
Therefore, a shale multiscale heterogeneity changing model is established by defining four types of homogeneous units at different scales, which can be used to guide the prediction of shale gas distribution from the micro scale to the macro scale.
Keywords: heterogeneity, homogeneous unit, multiscale, shale
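One common way to obtain pore fractal dimensions from nitrogen adsorption data, as cited in the abstract, is the Frenkel-Halsey-Hill (FHH) plot: regress ln(V) against ln(ln(1/p_rel)) and read D from the slope. The sketch below uses a synthetic isotherm with a known slope, not the Longmaxi measurements:

```python
import math

def fhh_fractal_dimension(p_rel, volumes):
    """Fractal dimension from an FHH plot of a nitrogen adsorption isotherm.
    In the capillary-condensation regime, D = 3 + slope of ln(V) vs ln(ln(1/p_rel))."""
    xs = [math.log(math.log(1.0 / p)) for p in p_rel]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 3.0 + slope

# Synthetic isotherm constructed with slope -0.25, i.e. D = 2.75
p_rel = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
vols = [math.exp(-0.25 * math.log(math.log(1.0 / p))) for p in p_rel]
d = fhh_fractal_dimension(p_rel, vols)
```

Values of D approaching 3, such as the > 2.7 and > 2.8 figures quoted, correspond to highly irregular pore surfaces.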
Procedia PDF Downloads 454

21908 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites
Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby
Abstract:
In space, electronic devices are constantly bombarded with radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, a new approach is needed: a thermal control system that is smaller than that on conventional satellites and that demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites due to their smaller size. With a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving microsatellites' thermal difficulty. In other words, PCMs can absorb or release heat in the form of latent heat, changing their phase and minimizing the temperature fluctuation around the phase change point. The main restriction of these systems is the weak thermal conductivity of common PCMs. Low thermal conductivity increases the melting and solidification times, which is not suitable for specific applications like electronic cooling. In order to increase the thermal conductivity, nanoparticles are introduced: adding nanoparticles to the base PCM increases its thermal conductivity, and the increase grows with the weight concentration. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material. Silver nanostructures have improved the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8, and 10%) of silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of nanoparticle-enhanced phase change material at different heat loads. Results showed that in steady state, the temperature near the front panel reduced and the temperature on the NePCM panel increased as the weight concentration increased. With the increase in thermal conductivity, more heat was absorbed into the NePCM panel.
In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system was reduced, as the melting point of the material decreases with increasing weight concentration. But for a maximum heat load of 20 W, the model with NePCM did not attain the melting point temperature, showing that the model with NePCM is capable of holding a larger heat load. In order to study the heat load capacity, double the load was applied: a maximum of 40 W during the first half of the cycle and a constant 0 W during the other half. A higher temperature was obtained compared with the lower heat load. The panel maintained a constant temperature for a long duration according to the NePCM melting point. In both analyses, the uniformity of temperature of the TESP was shown. Using Ag-NePCM allows maintaining a constant peak temperature near the melting point. Therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components.
Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage
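The latent-heat storage principle behind the panel can be illustrated with a simple energy balance: sensible heating up to the melting point, latent heat of fusion, then sensible heating of the liquid. The eicosane-like property values below are approximate handbook figures used only for illustration, not the paper's simulation inputs:

```python
def pcm_energy_j(mass_kg, cp_solid, cp_liquid, latent_j_per_kg, t_start, t_melt, t_end):
    """Heat absorbed by a PCM heated through its melting point:
    sensible heat below the melt + latent heat of fusion + sensible heat above it."""
    q_solid = mass_kg * cp_solid * (t_melt - t_start)
    q_latent = mass_kg * latent_j_per_kg
    q_liquid = mass_kg * cp_liquid * (t_end - t_melt)
    return q_solid + q_latent + q_liquid

# Illustrative eicosane-like properties (approximate handbook values):
# cp ~1900/2200 J/(kg K), latent heat ~241 kJ/kg, melting point ~309.6 K
q = pcm_energy_j(mass_kg=0.1, cp_solid=1900.0, cp_liquid=2200.0,
                 latent_j_per_kg=241_000.0, t_start=290.0, t_melt=309.6, t_end=320.0)
```

With these numbers the latent term dominates the total, which is exactly why a PCM panel can clamp peak temperature near its melting point while absorbing a transient heat load.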
Procedia PDF Downloads 204

21907 Japanese Quail Breeding: The Second in Poultry Industry
Authors: A. Smaï, H. Idouhar-Saadi, S. Zenia, F. Haddadj, A. Aboun, S. Doumandji
Abstract:
The quail is the smallest member of the fowl order. Its captive breeding has been practiced for centuries by the Japanese. While the literature mentions that the end of lay occurs at the age of 6 months, our work revealed good egg production by females aged up to 35 weeks. In the same vein, our study focused on various parameters such as weight, diet, and the number of eggs laid, in order to better know the production and reproduction potential of domestic quail. Egg production started from the 8th week of age of the breeding stock; eggs were collected and counted daily until the age of 35 weeks. Biometric parameters such as weight, length, largest diameter, shape index, and shell index were studied in order to analyze the physical condition of the eggs laid by females of each age. Until the age of 22 weeks, the eggs maintained good biometric features. Japanese quail are excellent egg producers. Hatchability was also considered. They give excellent poultry yields, since they begin laying eggs at two months, and in our study females over 8 months of age could still provide abundant laying. Results from other farms lead to similar conclusions. Indeed, one aspect remains to be developed: the analysis of the nutritional and therapeutic values of eggs as a function of the females' age. Given their richness, quail eggs are a dietary supplement of animal origin with dietary value (they are reported to contain no cholesterol). Raising quail requires minimal space compared with other domestic birds; in terms of importance, it is the second form of poultry breeding after the chicken. Therefore, a farm that works exclusively in egg production requires minimal work and free space, as well as reduced costs.
Keywords: Japanese quail, reproduction, eggs, biometrics, reproductive age
Procedia PDF Downloads 285

21906 Rethinking Urban Green Space Quality and Planning Models from Users and Experts' Perspective for Sustainable Development: The Case of Debre Berhan and Debre Markos Cities, Ethiopia
Authors: Alemaw Kefale, Aramde Fetene, Hayal Desta
Abstract:
This study analyzed users' and experts' views on green space quality and planning models in Debre Berhan (DB) and Debre Markos (DM) cities in Ethiopia. A questionnaire survey was conducted on 350 park users (148 from DB and 202 from DM) to rate the accessibility, size, shape, vegetation cover, social and cultural context, conservation and heritage, community participation, attractiveness, comfort, safety, inclusiveness, and maintenance of green spaces using a Likert scale. Key informant interviews were held with 13 experts in DB and 12 in DM. Descriptive statistics and tests of independence of variables using the chi-square test were done. A statistically significant association existed between the perception of green space quality attributes and users' occupation (χ² (160, N = 350) = 224.463, p < 0.001), age (χ² (128, N = 350) = 212.812, p < 0.001), gender (χ² (32, N = 350) = 68.443, p < 0.001), and education level (χ² (192, N = 350) = 293.396, p < 0.001). 61.7% of park users were unsatisfied with the quality of urban green spaces. Users perceived dense vegetation cover as "good" (mean value 3.41), while the remaining attributes were perceived as "medium" (mean values 2.62–3.32). Only quantitative space standards are practiced as a green space planning model, while other models are unfamiliar and never used in either city. Therefore, experts need to be aware of and practice urban green models during urban planning to ensure that new developments include green spaces that accommodate the community's and the environment's needs.
Keywords: urban green space, quality, users and experts, green space planning models, Ethiopia
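The chi-square independence tests reported above can be reproduced in outline with a hand-rolled Pearson statistic. The 2x2 table below is hypothetical, not the survey's raw data:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency
    table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical 2x2 gender-by-satisfaction table (illustrative counts only):
stat, dof = chi_square_statistic([[30, 70], [60, 40]])
```

The statistic is then compared against the chi-square critical value for the computed degrees of freedom (or converted to a p-value) to decide whether perception is independent of the demographic attribute.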
Procedia PDF Downloads 59

21905 Decision Support System for the Management of the Shandong Peninsula, China
Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle
Abstract:
A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed. Emphasis has been given to coastal protection, coastal cage aquaculture and harbors. The investigations were done in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea are set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were realized based on the measurements of moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed. It enhances information processing, data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during the last decades were carried out leading to an improved understanding of hydrodynamics and morphodynamics. Results helped in the identification of coastal stretches subjected to higher levels of energy and improved support for coastal protection measures.
Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling
Procedia PDF Downloads 195

21904 Elastic and Plastic Collision Comparison Using Finite Element Method
Authors: Gustavo Rodrigues, Hans Weber, Larissa Driemeier
Abstract:
The prediction of post-impact conditions and of the behavior of the bodies during impact has been the object of several collision models. The formulation from Hertz's theory, dating from the 19th century, is generally used. These models consider the repulsive force as proportional to the deformation of the bodies under contact and may also consider it proportional to the rate of deformation. The objective of the present work is to analyze the behavior of the bodies during impact using the Finite Element Method (FEM) with elastic and plastic material models. The main parameters to evaluate are the contact force, the time of contact, and the deformation of the bodies. An advantage of the FEM approach is the possibility of applying a plastic deformation to the model according to the material definition: the Johnson–Cook plasticity model is used, whose parameters are obtained through empirical tests on real materials. This model allows analyzing the permanent deformation caused by impact, a phenomenon observed in the real world depending on the forces applied to the body. These results are compared with each other and with the Hertz-theory-based model.
Keywords: collision, impact models, finite element method, Hertz theory
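The Hertzian baseline the FEM results are compared against relates contact force to indentation depth as F = (4/3) E* sqrt(R) delta^(3/2) for a sphere on a half-space. A minimal sketch with illustrative steel-on-steel values, not the paper's FEM setup:

```python
def effective_modulus(e1, nu1, e2, nu2):
    """Hertz effective modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2."""
    return 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)

def hertz_contact_force(e_star, r_eff, delta):
    """Hertz elastic contact force F = (4/3) * E* * sqrt(R) * delta^(3/2)."""
    return (4.0 / 3.0) * e_star * r_eff ** 0.5 * delta ** 1.5

# Illustrative case: a steel sphere (R = 10 mm) pressed 1 micron into a steel half-space
e_star = effective_modulus(210e9, 0.3, 210e9, 0.3)
f = hertz_contact_force(e_star, 0.010, 1e-6)   # force in newtons
```

The force is purely elastic and fully recovered on unloading; a plasticity model such as Johnson-Cook is what allows the FEM simulation to retain the permanent deformation this formula cannot represent.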
Procedia PDF Downloads 175

21903 Internal Methane Dry Reforming Kinetic Models in Solid Oxide Fuel Cells
Authors: Saeed Moarrefi, Shou-Han Zhou, Liyuan Fan
Abstract:
Coupled with solid oxide fuel cells, methane dry reforming is a promising pathway for energy production while mitigating carbon emissions. However, the influence of carbon dioxide and electrochemical reactions on the internal dry reforming reaction within the fuel cells remains debatable, requiring accurate kinetic models to describe the internal reforming behavior. We employed the Power-Law and Langmuir–Hinshelwood–Hougen–Watson models in an electrolyte-supported solid oxide fuel cell with a NiO-GDC-YSZ anode. The current density used in this study ranges from 0 to 1000 A/m² at 973 K to 1173 K to estimate various kinetic parameters. The influence of the electrochemical reactions on the adsorption terms, the equilibrium of the reactions, the activation energy, the pre-exponential factor of the rate constant, and the adsorption equilibrium constant was studied. This study provides essential parameters for future simulations and highlights the need for a more detailed examination of reforming kinetic models.
Keywords: dry reforming kinetics, Langmuir–Hinshelwood–Hougen–Watson, power-law, SOFC
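The kinetic quantities listed (activation energy, pre-exponential factor, reaction orders) combine in a Power-Law rate expression of the general form r = k p_CH4^alpha p_CO2^beta with an Arrhenius rate constant. The parameter values below are placeholders for illustration, not the fitted values from the study:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_k(a_pre, e_act_j_mol, temp_k):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return a_pre * math.exp(-e_act_j_mol / (R_GAS * temp_k))

def power_law_rate(k, p_ch4, p_co2, alpha, beta):
    """Power-law dry-reforming rate r = k * p_CH4^alpha * p_CO2^beta."""
    return k * p_ch4 ** alpha * p_co2 ** beta

# Hypothetical kinetic parameters (illustration only, not the paper's fitted values):
k973 = arrhenius_k(a_pre=1.0e5, e_act_j_mol=100_000.0, temp_k=973.0)
k1173 = arrhenius_k(a_pre=1.0e5, e_act_j_mol=100_000.0, temp_k=1173.0)
rate = power_law_rate(k973, p_ch4=0.5, p_co2=0.5, alpha=1.0, beta=0.5)
```

The Langmuir-Hinshelwood-Hougen-Watson form the paper also fits adds adsorption terms in the denominator, which is where the abstract's question about the electrochemical influence on adsorption enters.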
Procedia PDF Downloads 29

21902 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics
Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich
Abstract:
Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high – on the order of several hours of compute time for a few seconds of real time – thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. This data is stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior at high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles by observing mixing and transport of the gas tracers, gaining insights into their convective and diffusive patterns, and moving further towards heat and mass transfer methods.
Finally, we run rCFD simulations and calibrate them with numerical and physical parameters, comparing against conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes
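The gas-tracer diffusion step mentioned above can be illustrated with a minimal 1-D explicit finite-difference update. This is a sketch of the underlying transport physics only, under assumed grid and boundary choices, not the rCFD algorithm or its face-swap machinery:

```python
def diffuse_1d(conc, d_coef, dx, dt, steps):
    """Explicit FTCS diffusion of a 1-D tracer field with zero-flux boundaries.
    Stable for r = D*dt/dx^2 <= 0.5; conserves total tracer mass exactly."""
    r = d_coef * dt / dx ** 2
    c = list(conc)
    for _ in range(steps):
        nxt = c[:]
        nxt[0] = c[0] + r * (c[1] - c[0])          # one-sided, zero-flux boundary
        nxt[-1] = c[-1] + r * (c[-2] - c[-1])
        for i in range(1, len(c) - 1):
            nxt[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        c = nxt
    return c

# Unit tracer pulse in the middle of a 21-cell domain
field = [0.0] * 21
field[10] = 1.0
out = diffuse_1d(field, d_coef=1.0, dx=1.0, dt=0.25, steps=50)
```

Checking that the total tracer mass is conserved while the pulse spreads is exactly the kind of concentration-profile diagnostic the abstract describes for assessing mixing behavior.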
Procedia PDF Downloads 75

21901 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. Throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, have become abilities that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the emergence of technologies and technological iterations that may affect urban research in the future, help discover urban problems, and support targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc.
These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 177

21900 Beneficial Effects of Curcumin against Oxidative Stress and Mitochondrial Dysfunction Induced by Trinitrobenzene Sulphonic Acid in Colon
Authors: Souad Mouzaoui, Bahia Djerdjouri
Abstract:
Oxidative stress is one of the main factors involved in the onset and chronicity of inflammatory bowel disease (IBD). In this study, we investigated the beneficial effects of a potent natural antioxidant, curcumin (Cur), on colitis and mitochondrial dysfunction in trinitrobenzene sulfonic acid (TNBS)-induced colitis in mice. Rectal instillation of the chemical irritant TNBS (30 mg kg-1) induced the disruption of distal colonic architecture and a massive influx of inflammatory cells into the mucosa and submucosa layers. Under these conditions, daily administration of Cur (25 mg kg-1) efficiently decreased colitis scores in the inflamed distal colon by reducing the leukocyte infiltrate, as attested by reduced myeloperoxidase (MPO) activity. Moreover, the levels of nitrite, an end product of inducible NO synthase (iNOS) activity, and malondialdehyde (MDA), a marker of lipid peroxidation, increased in a time-dependent manner in response to the TNBS challenge. Conversely, the markers of the antioxidant pool, reduced glutathione (GSH) and catalase (CAT) activity, were drastically reduced. Cur attenuated oxidative stress markers and partially restored CAT and GSH levels. Moreover, our results extended the effect of Cur to TNBS-induced colonic mitochondrial dysfunction. In fact, TNBS induced mitochondrial swelling and lipid peroxidation. These events reflect the opening of the mitochondrial transition pore and could be an initial step in the cascade leading to cell death. TNBS also inhibited mitochondrial respiratory activity, caused overproduction of the mitochondrial superoxide anion (O2-.), and reduced the level of mitochondrial GSH. Nevertheless, Cur reduced the extent of mitochondrial oxidative stress induced by TNBS and restored colonic mitochondrial function. In conclusion, our results showed the critical role of oxidative stress in TNBS-induced colitis. They highlight the role of colonic mitochondrial dysfunction induced by TNBS as a potential source of oxidative damage.
Due to its potent antioxidant properties, Cur offers a promising therapeutic approach against oxidative inflammation in IBD.Keywords: colitis, curcumin, mitochondria, oxidative stress, TNBS
Procedia PDF Downloads 253
21899 Inverse Matrix in the Theory of Dynamical Systems
Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova
Abstract:
In dynamical system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper fuses the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in Matlab.Keywords: dynamic system, transfer matrix, inverse matrix, modeling
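The transfer matrix of a state-space model is G(s) = C (sI − A)⁻¹ B, which is where the inverse matrix enters. The abstract does not give its procedure, so the following is only an illustrative sketch in Python (rather than the paper's Matlab), with the 2×2 inverse written out explicitly and an example system chosen for the illustration.

```python
# Sketch (not from the paper): evaluating G(s) = C (sI - A)^(-1) B for a
# small state-space model, taking the inverse of (sI - A) explicitly.

def transfer_value(A, B, C, s):
    """Evaluate G(s) = C (sI - A)^(-1) B for a 2-state SISO system."""
    # M = sI - A
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    # Inverse of a 2x2 matrix: adj(M) / det(M)
    inv = [[ m22 / det, -m12 / det],
           [-m21 / det,  m11 / det]]
    # C (row vector) * inv * B (column vector)
    v0 = inv[0][0] * B[0] + inv[0][1] * B[1]
    v1 = inv[1][0] * B[0] + inv[1][1] * B[1]
    return C[0] * v0 + C[1] * v1

# Example system: x1' = x2, x2' = -2 x1 - 3 x2 + u, y = x1,
# whose transfer function is analytically 1 / (s^2 + 3 s + 2).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
print(transfer_value(A, B, C, 1.0))  # 1/(1 + 3 + 2) = 1/6
```

For larger systems the explicit adjugate is replaced by a numerical solver, but the structure of the computation is the same.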
Procedia PDF Downloads 516
21898 Flexible Capacitive Sensors Based on Paper Sheets
Authors: Mojtaba Farzaneh, Majid Baghaei Nejad
Abstract:
This article proposes a new flexible capacitive tactile sensor based on paper sheets. The method combines the parameters of the sensor's material and dielectric to form a new model of flexible capacitive sensor, and the article offers a practical explanation of the method's application and advantages. With this new method, it is possible to make a more flexible and accurate sensor than the current models. To assess its performance, a common capacitive sensor is simulated, and the proposed model is evaluated against one of the existing models. The results indicate that the proposed model improves the speed and accuracy of the tactile sensor and has a smaller error than the current models. Based on these results, it can be claimed that, compared with the current models, the proposed model offers more flexibility and more accurate output when the sensor is touched, especially in abnormal situations and on uneven surfaces, and increases accuracy and practicality.Keywords: capacitive sensor, paper sheets, flexible, tactile, uneven
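The underlying sensing principle of any parallel-plate capacitive cell is C = ε₀εᵣA/d: pressing the compliant paper dielectric reduces the gap d and raises C. The sketch below illustrates only this baseline relation; the permittivity, area, and gap values are hypothetical, not the authors' measurements.

```python
# Illustrative sketch (not the authors' model): the baseline response of a
# parallel-plate capacitive cell, C = eps0 * eps_r * A / d.  Pressing the
# paper dielectric reduces the gap d and raises C; all values are invented.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, gap_m):
    return EPS0 * eps_r * area_m2 / gap_m

c_rest = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=100e-6)     # unpressed
c_pressed = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=80e-6)   # compressed
print(c_pressed > c_rest)  # a touch increases the capacitance
```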
Procedia PDF Downloads 353
21897 Design of Low Latency Multiport Network Router on Chip
Authors: P. G. Kaviya, B. Muthupandian, R. Ganesan
Abstract:
On-chip routers typically use buffers at the input or output ports to temporarily store packets, and these buffers consume router area and power, as do the multiple parallel queues of a virtual-channel (VC) router. While running a traffic trace, not all input ports have incoming packets to transfer, so large numbers of queues sit empty while others are busy, and time consumption becomes high under heavy traffic. A RoShaQ architecture is therefore used to minimize buffer area and time: at low traffic, input packets travel through the shared queues, while at high load they bypass the shared queues, reducing power and area consumption. A parallel crossbar architecture is proposed in this project in order to reduce power consumption further, and a new adaptive weighted routing algorithm for an 8-port router architecture is proposed in order to decrease the delay of the network-on-chip router. The proposed system is simulated using ModelSim and synthesized using Xilinx Project Navigator.Keywords: buffer, RoShaQ architecture, shared queue, VC router, weighted routing algorithm
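The low-traffic/high-load behaviour described above can be sketched in a few lines. This is only a hedged software illustration of the shared-queue-with-bypass idea, not the hardware design: the capacity, threshold, and queue discipline are invented for the example.

```python
# Hedged sketch of the shared-queue idea described above: at low load a
# packet is parked in a shared queue; once occupancy crosses a threshold,
# packets bypass the shared queues toward the crossbar.  The threshold and
# capacity are placeholders, not values from the paper.
from collections import deque

SHARED_CAPACITY = 4
HIGH_LOAD_THRESHOLD = 3  # occupied shared slots that trigger bypass

shared_queue = deque()

def route_packet(packet):
    """Return 'bypass' under high load, else buffer in the shared queue."""
    if len(shared_queue) >= HIGH_LOAD_THRESHOLD:
        return "bypass"
    shared_queue.append(packet)
    return "queued"

results = [route_packet(p) for p in range(5)]
print(results)  # early packets queue, later ones bypass
```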
Procedia PDF Downloads 542
21896 Optimizing a Hybrid Inventory System with Random Demand and Lead Time
Authors: Benga Ebouele, Thomas Tengen
Abstract:
Implementing either a periodic or a continuous inventory review model within most manufacturing companies' supply chains as a management tool may incur higher costs. These high costs reduce the system's flexibility, which in turn affects the level of service required to satisfy customers. However, these effects are not clearly understood because the parameters of both inventory review policies (protection demand interval, order quantity, etc.) are not designed to be fully utilized under different and uncertain conditions such as poor manufacturing, supply, and delivery performance. A hybrid model that combines, in some sense, the features of both the continuous and the periodic inventory review models should therefore be useful. There is thus a need to build such a hybrid model and evaluate it on annual total cost, stock-out probability, and system flexibility in order to find the most cost-effective inventory review model. This work also seeks the optimal sets of inventory management parameters under stochastic conditions so as to optimize each policy independently. The results reveal that a continuous inventory system always incurs a lower cost than a periodic (R, S) inventory system, but this difference tends to decrease over time. Although the hybrid inventory model is the only one that can yield a lower cost over time, it is not always desirable, but it is natural to use it to help the system meet high performance specifications.Keywords: demand and lead time randomness, hybrid inventory model, optimization, supply chain
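The two policies being compared can be made concrete with a toy Monte Carlo simulation. This is only an illustrative sketch, not the paper's model: the demand distribution, cost coefficients, and policy parameters (reorder point s, quantity Q, review interval R, order-up-to level S) are all hypothetical.

```python
# Illustrative sketch only: a toy simulation of a continuous (s, Q) policy
# versus a periodic (R, S) policy under random demand.  All parameters are
# invented; lead time is taken as zero for brevity.
import random

random.seed(0)
HOLD, ORDER_COST, PERIODS = 1.0, 25.0, 200

def simulate(policy):
    inv, cost = 50.0, 0.0
    for t in range(PERIODS):
        inv -= random.uniform(5, 15)                # random demand
        if policy == "continuous" and inv < 20:     # reorder point s = 20
            inv += 60                               # order quantity Q = 60
            cost += ORDER_COST
        elif policy == "periodic" and t % 5 == 0:   # review interval R = 5
            cost += ORDER_COST
            inv = 80                                # order-up-to level S = 80
        cost += HOLD * max(inv, 0.0)                # holding cost
    return cost / PERIODS

print(simulate("continuous"), simulate("periodic"))
```

A real study would add stochastic lead times and stock-out penalties, which is where the hybrid policy's trade-off appears.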
Procedia PDF Downloads 314
21895 Effect of Minerals in Middlings on the Reactivity of Gasification-Coke by Blending a Large Proportion of Long Flame Coal
Authors: Jianjun Wu, Fanhui Guo, Yixin Zhang
Abstract:
In this study, gasification-cokes were produced by blending middlings (MC) and coking coal (CC) with a large proportion of long flame coal (Shenfu coal, SC), and the effects of the blending ratio were investigated. Mineral evolution and crystalline order obtained by XRD methods were reproduced with reasonable accuracy. Structural characteristics of the partially gasified coke, such as surface area and porosity, were determined using N₂ adsorption and mercury porosimetry. The gasification behaviour followed the trend of the TGA results, and reactivity differences between the gasification-cokes are discussed in terms of structural characteristics, crystallinity, and alkali index (AI). A first-order reaction equation was suitable for the gasification kinetics in a CO₂ atmosphere, represented by the volumetric reaction model with a linear correlation coefficient above 0.985. The differences in the microporous structure of the gasification-coke and the catalysis caused by the minerals in the parent coals are supposed to be the main factors affecting its reactivity. The addition of MC enriched the samples with a large amount of ash, giving the gasification-coke a higher surface area and a lower crystalline order, which was beneficial to the gasification reaction. Higher SiO₂ and Al₂O₃ contents caused a decreasing AI value and increasing activation energy, which reduced the gasification reactivity. It was found that increasing the amount of MC improved the coke gasification reactivity when blending > 30% SC in this coking process.Keywords: low-rank coal, middlings, structure characteristic, mineral evolution, alkali index, gasification-coke, gasification kinetics
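The volumetric (first-order) reaction model referred to above is dx/dt = k(1 − x), so −ln(1 − x) is linear in time and the rate constant is the slope. The sketch below fits that slope to synthetic conversion data standing in for TGA measurements; the rate constant and time grid are invented for illustration.

```python
# Minimal sketch of the volumetric first-order model: dx/dt = k (1 - x),
# hence x(t) = 1 - exp(-k t) and y = -ln(1 - x) = k t.  Synthetic data
# replace the study's TGA measurements; k_true is a placeholder value.
import math

k_true = 0.05                      # 1/min, hypothetical rate constant
times = [0, 10, 20, 30, 40, 50]    # min
conversions = [1 - math.exp(-k_true * t) for t in times]

# Least-squares slope of y = -ln(1 - x) against t, through the origin
ys = [-math.log(1 - x) for x in conversions]
k_fit = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

print(round(k_fit, 4))  # recovers the rate constant, 0.05
```

On real TGA data the linear correlation coefficient of this fit is the quantity the abstract reports as being above 0.985.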
Procedia PDF Downloads 175
21894 Early Warning System of Financial Distress Based On Credit Cycle Index
Authors: Bi-Huei Tsai
Abstract:
Previous studies on financial distress prediction adopt the conventional failing/non-failing dichotomy; however, the extent of distress differs substantially among financial distress events. To address this, “non-distressed”, “slightly-distressed”, and “reorganization and bankruptcy” are used in our article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, the investigation adopts firm-specific financial ratios, corporate governance, and market factors to measure the probability of various financial distress events based on multinomial logit models; specifically, a bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work applies macroeconomic factors to establish a credit cycle index and uses it to determine the distressed cut-off indicator of the two-stage models. Two models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and their results are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance, and market factors has the lowest misclassification error rate. The two-stage model is more accurate than the one-stage model because its distressed cut-off indicators are adjusted according to the macroeconomic-based credit cycle index.Keywords: multinomial logit model, corporate governance, company failure, reorganization, bankruptcy
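A multinomial logit scores each of the three distress states as P(class j | x) = exp(βⱼ·x) / Σₖ exp(βₖ·x). The sketch below shows only the scoring step; the coefficient vectors and the two features (leverage, ROA) are invented placeholders, not estimates from the paper.

```python
# Hedged sketch of multinomial-logit scoring for the three distress states.
# BETAS are hypothetical coefficients over (intercept, leverage, ROA); a
# real model would estimate them by maximum likelihood.
import math

CLASSES = ["non-distressed", "slightly-distressed", "reorganization/bankruptcy"]
BETAS = [[0.5, -2.0, 4.0],
         [0.0,  1.0, -1.0],
         [-0.5, 3.0, -4.0]]

def predict(features):
    x = [1.0] + features                      # prepend the intercept term
    scores = [math.exp(sum(b * v for b, v in zip(beta, x))) for beta in BETAS]
    total = sum(scores)
    probs = [s / total for s in scores]       # softmax normalization
    return CLASSES[probs.index(max(probs))], probs

label, probs = predict([0.2, 0.15])           # low leverage, positive ROA
print(label)
```

The second stage described above would then shift the cut-off applied to these probabilities according to the credit cycle index.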
Procedia PDF Downloads 378
21893 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets or collections are becoming important assets in themselves and can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process that, in addition, requires domain experts, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, features have been selected, and preliminary exploratory data analysis has been performed to illustrate its properties and usefulness. Finally, the collection has been benchmarked using nine widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared using several well-known measures (accuracy, Hamming loss, micro-F, and macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared with the other multi-label classification methods, while the Classifier Chains method showed the worst performance.
To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research, and providing a baseline for future studies.Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining
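The evaluation measures named above can be computed directly from 0/1 label vectors. The sketch below implements Hamming loss and micro-F on toy predictions, not on the ABET collection itself; macro-F follows the same pattern computed per label and then averaged.

```python
# Self-contained sketch of two of the multi-label measures named above,
# on toy 0/1 label matrices (rows = instances, columns = labels).

def hamming_loss(y_true, y_pred):
    """Fraction of instance-label pairs that are misclassified."""
    n = sum(len(row) for row in y_true)
    wrong = sum(t != p for rt, rp in zip(y_true, y_pred)
                for t, p in zip(rt, rp))
    return wrong / n

def micro_f1(y_true, y_pred):
    """F-measure with TP/FP/FN pooled over all labels."""
    pairs = [(t, p) for rt, rp in zip(y_true, y_pred)
             for t, p in zip(rt, rp)]
    tp = sum(1 for t, p in pairs if t and p)
    fp = sum(1 for t, p in pairs if not t and p)
    fn = sum(1 for t, p in pairs if t and not p)
    return 2 * tp / (2 * tp + fp + fn)

y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0]]
print(hamming_loss(y_true, y_pred), micro_f1(y_true, y_pred))  # 1/6, 0.8
```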
Procedia PDF Downloads 173
21892 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction
Authors: Kudzanayi Chiteka, Wellington Makondo
Abstract:
The whole world is driving to go green owing to the negative effects of burning fossil fuels, so there is an immediate need to identify and utilise alternative renewable energy sources. Among these, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity are entirely dependent on solar radiation. For planning purposes, solar radiation values should be known in advance so that arrangements can be made to minimise the negative effects of the absence of solar radiation due to cloud cover and other naturally occurring phenomena. This research focused on predicting Global Horizontal Irradiation (GHI) values for the sixth day given the values of the past five days. Artificial intelligence techniques were used: three models were developed, based on Support Vector Machines, Radial Basis Function networks, and a Feed-Forward Back-Propagation artificial neural network. Results revealed that Support Vector Machines give the best results, with a mean absolute percentage error (MAPE) of 2%, a mean absolute error (MAE) of 0.05 kWh/m²/day, a root mean square error (RMSE) of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other two predictive models had MAPEs of 4.5% and 6% for the Radial Basis Function and Feed-Forward Back-Propagation networks, respectively, with coefficients of determination of 0.975 and 0.970. It was found that predicting GHI values for future days is possible using artificial-intelligence-based predictive models.Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models
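The error measures quoted above (MAPE, MAE, RMSE, R²) are standard and easy to compute. The sketch below evaluates them on toy irradiation values in kWh/m²/day; the numbers are illustrative, not the study's data.

```python
# Sketch of the reported error measures on invented GHI values.
import math

def metrics(actual, predicted):
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot           # coefficient of determination
    return mae, rmse, mape, r2

actual = [5.1, 6.0, 4.8, 5.5]          # kWh/m^2/day, toy observations
predicted = [5.0, 6.2, 4.7, 5.6]       # toy model forecasts
mae, rmse, mape, r2 = metrics(actual, predicted)
print(round(mae, 3), round(r2, 3))
```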
Procedia PDF Downloads 274
21891 Analysis of Evolution of Higher Order Solitons by Numerical Simulation
Authors: K. Khadidja
Abstract:
Solitons are stable solutions of the nonlinear Schrödinger equation. Their stability is due to the exact balance between nonlinearity and the dispersion that causes pulse broadening. Higher-order solitons arise when the dispersive length is a multiple of the nonlinear length, and the soliton order is determined by the number N itself. In this paper, the evolution of higher-order solitons is illustrated by simulation in Matlab. Results show that higher-order solitons change their shape periodically, which is why they are worse for transmission than fundamental solitons, whose shape is constant. A partial analysis of a higher-order soliton shows that the periodic shape is due to the interplay between nonlinearity and dispersion, which are not balanced throughout a period. This class of solitons has many applications, such as supercontinuum generation and pulse compression on the femtosecond scale. In conclusion, the periodicity that is harmful to transmission can be beneficial in other applications.Keywords: dispersion, nonlinearity, optical fiber, soliton
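The soliton order referred to above follows the textbook relation N² = L_D/L_NL, with dispersive length L_D = T₀²/|β₂| and nonlinear length L_NL = 1/(γP₀). The sketch below evaluates it for typical textbook fiber parameters, which are assumptions, not values from the paper.

```python
# Hedged numerical sketch of the soliton-order relation N^2 = L_D / L_NL.
# The fiber parameters are generic textbook values, not the paper's.
import math

beta2 = -20e-27   # s^2/m, anomalous group-velocity dispersion
gamma = 1.3e-3    # 1/(W*m), nonlinear coefficient
T0 = 1e-12        # s, pulse width

def soliton_order(P0):
    L_D = T0 ** 2 / abs(beta2)     # dispersive length
    L_NL = 1.0 / (gamma * P0)      # nonlinear length
    return math.sqrt(L_D / L_NL)

P1 = abs(beta2) / (gamma * T0 ** 2)   # peak power of the fundamental soliton
print(round(soliton_order(P1), 6))      # N = 1: fundamental soliton
print(round(soliton_order(4 * P1), 6))  # N = 2: higher-order soliton
```

Quadrupling the peak power doubles the order, which is why higher-order solitons appear at well-defined power steps.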
Procedia PDF Downloads 168
21890 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.Keywords: calibration, data modeling, industrial processes, machine learning
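One common way to combine several trained regressors into an ensemble is to weight their predictions inversely to each model's validation error. The sketch below shows only that combination step, not the authors' KRLS/SVR implementations: the two stand-in models, their validation errors, and the weighting rule are all assumptions for illustration.

```python
# Not the authors' implementation: a minimal sketch of combining several
# trained models' predictions with weights inversely proportional to each
# model's validation RMSE.  The models here are stand-in linear functions.

def make_model(slope, intercept):
    return lambda x: slope * x + intercept

models = [make_model(2.0, 0.1), make_model(1.8, 0.5)]   # e.g. KRLS-/SVR-like fits
val_errors = [0.05, 0.10]                               # hypothetical validation RMSEs

weights = [1.0 / e for e in val_errors]                 # better model -> more weight
total = sum(weights)
weights = [w / total for w in weights]                  # normalize to sum to 1

def ensemble_predict(x):
    return sum(w * m(x) for w, m in zip(weights, models))

print(round(ensemble_predict(1.0), 4))
```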
Procedia PDF Downloads 300
21889 Investigating the performance of machine learning models on PM2.5 forecasts: A case study in the city of Thessaloniki
Authors: Alexandros Pournaras, Anastasia Papadopoulou, Serafim Kontos, Anastasios Karakostas
Abstract:
The air quality of modern cities is an important concern, as poor air quality contributes to human health and environmental problems. Reliable air quality forecasting has thus gained scientific and governmental attention as an essential tool that enables authorities to take proactive measures for public safety. In this study, the potential of machine learning (ML) models to forecast PM2.5 at the local scale is investigated in Thessaloniki, the second largest city in Greece, which has been struggling with the persistent issue of air pollution. ML models with a proven ability to address time-series forecasting are employed to predict PM2.5 concentrations and the respective Air Quality Index (AQI) five days ahead, learning from daily historical air quality and meteorological data from 2014 to 2016 gathered from two stations with different land-use characteristics in the urban fabric of Thessaloniki. The performance of the ML models on PM2.5 concentrations is evaluated with common statistical measures, such as R squared (r²) and root mean squared error (RMSE), using a portion of the stations' measurements as the test set. A multi-categorical evaluation is used to assess performance on the respective AQIs. Several conclusions were drawn from the experiments. Experimenting with the ML configurations revealed a moderate effect of the various parameters and training schemes on the predictions. All models produced satisfactory results on PM2.5 concentrations. In addition, applying them to untrained stations showed that the models can still perform well, indicating generalized behaviour. Moreover, their performance on the AQI was even better, showing that the ML models can be used as predictors of the AQI, which is the direct information provided to the general public.Keywords: air quality, AQ forecasting, AQI, machine learning, PM2.5
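The multi-categorical AQI evaluation mentioned above amounts to binning forecast and observed PM2.5 concentrations into index categories and comparing the categories. The sketch below illustrates the idea only: the breakpoints are simplified placeholders, not an official AQI scale or the one used in the study.

```python
# Hedged sketch of a multi-categorical AQI evaluation: concentrations are
# binned into categories, and the score is the fraction of days where the
# forecast lands in the observed category.  Breakpoints are invented.

BINS = [(0, 10, "good"), (10, 20, "fair"), (20, 25, "moderate"),
        (25, 50, "poor"), (50, float("inf"), "very poor")]

def aqi_category(pm25):
    for low, high, label in BINS:
        if low <= pm25 < high:
            return label

observed = [8.0, 14.0, 22.0, 30.0]   # ug/m^3, toy measurements
forecast = [9.5, 19.0, 26.0, 28.0]   # toy model forecasts
hits = sum(aqi_category(o) == aqi_category(f)
           for o, f in zip(observed, forecast))
print(hits / len(observed))  # fraction of days with the correct category
```

Note that a forecast can be close in concentration yet land in the wrong category near a breakpoint, which is why the categorical score complements RMSE rather than replacing it.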
Procedia PDF Downloads 79
21888 Quantitative Structure-Activity Relationship Study of Some Quinoline Derivatives as Antimalarial Agents
Authors: M. Ouassaf, S. Belaid
Abstract:
A series of quinoline derivatives with antimalarial activity was subjected to two-dimensional quantitative structure-activity relationship (2D-QSAR) studies. Three models were implemented, using multiple linear regression (MLR), partial least squares regression (PLS), and multiple nonlinear regression (MNLR), to see which descriptors are closely related to the biological activity. We relied on a principal component analysis (PCA). Based on our results, a comparison of the quality of the MLR, PLS, and MNLR models shows that the MNLR model (R = 0.914, R² = 0.835, Rcv = 0.853) has substantially better predictive capability, giving better results than MLR (R = 0.835, R² = 0.752, Rcv = 0.601) and PLS (R = 0.742, R² = 0.552, Rcv = 0.550). The MNLR model gave statistically significant results and showed good stability to data variation in leave-one-out cross-validation. The obtained results suggest that our proposed MNLR model may be useful for predicting the biological activity of quinoline derivatives.Keywords: antimalarial, quinoline, QSAR, PCA, MLR, PLS, MNLR
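The leave-one-out cross-validation behind the Rcv values above refits the model with each compound held out in turn and predicts the held-out activity. The sketch below does this for a one-descriptor linear model on synthetic data, not the quinoline set; the PRESS statistic it accumulates is what Rcv is derived from.

```python
# Illustrative sketch of leave-one-out cross-validation for a simple
# one-descriptor linear QSAR model.  Data are synthetic placeholders.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loo_press(xs, ys):
    """Predictive residual sum of squares over leave-one-out folds."""
    press = 0.0
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        slope, intercept = fit_line(xs_i, ys_i)
        press += (ys[i] - (slope * xs[i] + intercept)) ** 2
    return press

descriptor = [1.0, 2.0, 3.0, 4.0, 5.0]     # toy molecular descriptor
activity = [2.1, 3.9, 6.2, 7.8, 10.1]      # toy biological activity
print(round(loo_press(descriptor, activity), 4))
```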
Procedia PDF Downloads 157
21887 Modal Analysis of Small Frames using High Order Timoshenko Beams
Authors: Chadi Azoury, Assad Kallassy, Pierre Rahme
Abstract:
In this paper, we consider the modal analysis of small frames. First, we construct the 3D model using H8 elements and find the natural frequencies of the frame, focusing our attention on the modes in the XY plane. Second, we construct the 2D (plane stress) model using Q4 elements; the results of both models are very close to each other. We then formulate the stiffness and mass matrices of a 3-noded Timoshenko beam, which is well suited to thick and short beams such as ours. Finally, we model the corners where the horizontal and vertical bars meet with a special matrix. The results of our new model (3-noded Timoshenko beams for the horizontal and vertical bars and a special corner element based on Q4 elements) are very satisfactory in the modal analysis.Keywords: corner element, high-order Timoshenko beam, Guyan reduction, modal analysis of frames, rigid link, shear locking, short beams
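The Guyan reduction named in the keywords condenses slave degrees of freedom out of the stiffness matrix via K_red = K_mm − K_ms K_ss⁻¹ K_sm. The sketch below applies it to a toy 3×3 matrix, which is an invented example, not one of the paper's frame matrices.

```python
# Hedged sketch of Guyan (static) condensation on a toy stiffness matrix:
# K_red = K_mm - K_ms * K_ss^(-1) * K_sm, with m = master (retained) DOFs
# and s = slave (condensed) DOFs.
import numpy as np

K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  4.0]])
master = [0, 2]   # retained degrees of freedom
slave = [1]       # condensed degrees of freedom

K_mm = K[np.ix_(master, master)]
K_ms = K[np.ix_(master, slave)]
K_sm = K[np.ix_(slave, master)]
K_ss = K[np.ix_(slave, slave)]

# Solve K_ss X = K_sm instead of forming the inverse explicitly
K_red = K_mm - K_ms @ np.linalg.solve(K_ss, K_sm)
print(K_red)  # [[ 3. -1.] [-1.  3.]]
```

The reduced matrix stays symmetric, and for dynamics the mass matrix is condensed with the same transformation, which is what makes Guyan reduction an approximation at higher frequencies.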
Procedia PDF Downloads 320
21886 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization
Authors: Xiongxiong You, Zhanwen Niu
Abstract:
Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many model types and different kernel functions. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. First, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, the best individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Second, two methods with a dynamic number of models and selection strategies are designed to show the influence of the number of individual models and of the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as on a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.Keywords: adaptive selection, expensive optimization, rotor system, surrogate-assisted evolutionary algorithms
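The dynamic-selection step described above reduces to scoring each candidate surrogate on validation points and keeping the one with the lowest error. The sketch below illustrates only that step with placeholder functions standing in for trained surrogates; the candidates and the true objective are assumptions, not the paper's models.

```python
# Minimal sketch of adaptive surrogate selection: score each candidate on
# validation points and keep the one with the lowest RMSE for the next
# generation.  Candidate surrogates are placeholder functions here.
import math

def rmse(model, xs, ys):
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

def true_f(x):
    return x ** 2       # stands in for an expensive objective evaluation

candidates = {
    "linear-ish": lambda x: 2 * x - 1,
    "quadratic-ish": lambda x: x ** 2 + 0.1,
}
xs = [0.0, 0.5, 1.0, 1.5, 2.0]       # validation points
ys = [true_f(x) for x in xs]         # their (expensive) true values

best = min(candidates, key=lambda name: rmse(candidates[name], xs, ys))
print(best)  # quadratic-ish
```

In a full SAEA this selection is repeated each generation, so the surrogate in use can change as new expensive evaluations accumulate.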
Procedia PDF Downloads 141