Search results for: optimization algorithms
1040 A Cross-Cultural Approach for Communication with Biological and Non-Biological Intelligences
Authors: Thomas Schalow
Abstract:
This paper posits the need to take a cross-cultural approach to communication with non-human cultures and intelligences in order to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with a discussion of how intelligence emerges. It disputes some common assumptions we maintain about consciousness, intention, and language. The paper next explores cross-cultural communication among humans, including non-sapiens species. The next argument made is that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. There is an urgent need to broaden our definition of communication and reach out to the other sentient life forms that inhabit our world. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it has proven useful, even in the absence of contact with alien life. However, CETI’s assumptions and methodology need to be revised and based on the cross-cultural approach to communication proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences using a cross-cultural communication approach. This will present a serious challenge for humanity, as we have never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other human cultures can provide us with a framework for this communication. 
The basic assumptions behind intercultural communication can be applied to the many types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will prepare us to face the challenges posed by a future dominated by artificial intelligence.
Keywords: artificial intelligence, CETI, communication, culture, language
Procedia PDF Downloads 362
1039 Technology Valuation of Unconventional Gas R&D Project Using Real Option Approach
Authors: Young Yoon, Jinsoo Kim
Abstract:
The adoption of information and communication technologies (ICT) across all industries is growing under Industry 4.0. Many oil companies are increasingly adopting ICT to improve the efficiency of existing operations, make more accurate and quicker decisions, and reduce overall costs through optimization. ICT is playing an important role in the process of unconventional oil and gas development, and companies must take advantage of it to gain a competitive advantage. In this study, a real option approach has been applied to an unconventional gas R&D project to evaluate the value of ICT within it. Many unconventional gas reserves, such as shale gas and coal-bed methane (CBM), have been developed thanks to technological improvement and high energy prices. There are many uncertainties in unconventional development across its three stages (exploration, development, production). Traditional quantitative benefit-cost methods, such as net present value (NPV), are not sufficient for capturing ICT value. We evaluated the ICT contribution by applying the compound option model; the model is applied to a real CBM project case, showing how it accounts for uncertainties. Variables are treated as uncertain, and a Monte Carlo simulation is performed to assess their effects. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880) and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
Keywords: information and communication technologies, R&D, real option, unconventional gas
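The compound-option idea described in the abstract can be sketched numerically. The following is a minimal illustration, not the study's actual model: a two-stage investment (pay i1 at t1 to keep alive the right to pay i2 at t2 for a project worth V), valued by Monte Carlo over a risk-neutral geometric Brownian motion, with the inner stage priced in closed form. All parameter values are hypothetical.

```python
import math, random

def bs_call(s, k, r, sigma, tau):
    """Black-Scholes price of a European call (values the inner option)."""
    if tau <= 0:
        return max(s - k, 0.0)
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return s * cdf(d1) - k * math.exp(-r * tau) * cdf(d2)

def compound_option_mc(v0, sigma, r, t1, t2, i1, i2, n_paths=20000, seed=7):
    """Pay i1 at t1 to keep the right to pay i2 at t2 for project value V.

    V follows risk-neutral geometric Brownian motion; the stage-2 option is
    priced in closed form at t1, the stage-1 decision by Monte Carlo.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        v_t1 = v0 * math.exp((r - 0.5 * sigma ** 2) * t1
                             + sigma * math.sqrt(t1) * z)
        inner = bs_call(v_t1, i2, r, sigma, t2 - t1)  # stage-2 option at t1
        total += math.exp(-r * t1) * max(inner - i1, 0.0)
    return total / n_paths
```

The compound value is always below the plain option on the project (the stage-1 fee i1 can only reduce value), which gives a quick sanity check on the simulation.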
Procedia PDF Downloads 232
1038 Physical Characterization of a Watershed for Correlation with Parameters of Thomas Hydrological Model and Its Application in Iber Hidrodinamic Model
Authors: Carlos Caro, Ernest Blade, Nestor Rojas
Abstract:
This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration application applicable in distributed models such as the IBER model, a distributed simulation model for unsteady free-surface flow. Soil samples were taken at 25 points (across 15 sub-basins) of the Rio Piedras (Boy.) basin and characterized geotechnically by laboratory tests. The Thomas model characterizes the input area physically through only four parameters (a, b, c, d). A measurable relationship between the geotechnical parameters and these four hydrological parameters helps determine subsurface, groundwater, and surface flow more rapidly. The aim is thus to constrain the initial model parameter ranges on the basis of geotechnical characterization. In hydrogeological models of rural watersheds, calibration is an important step in the characterization of the study area and can require significant computational cost and time, especially if the initial parameter values are far from the geotechnical reality. Better initial values optimize this process: the geotechnical properties of the materials in the area provide a sound starting range of variation for the calibration parameters.
Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model
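For readers unfamiliar with the Thomas model's four parameters, a common formulation (the monthly "abcd" water balance) can be sketched as follows. The parameter values and forcing series here are hypothetical placeholders, and the paper's own calibration may differ.

```python
import math

def thomas_abcd(precip, pet, a=0.98, b=400.0, c=0.35, d=0.10,
                s0=100.0, g0=50.0):
    """Monthly 'abcd' water balance (Thomas model) -- illustrative only.

    a : runoff propensity before saturation (0 < a <= 1)
    b : upper bound on soil moisture plus evapotranspiration (mm)
    c : fraction of excess water recharging groundwater
    d : groundwater recession constant (1/month)
    """
    s, g, flows = s0, g0, []
    for p, e in zip(precip, pet):        # p: rainfall, e: potential ET (mm)
        w = p + s                        # available water
        wb = (w + b) / (2.0 * a)
        y = wb - math.sqrt(wb * wb - w * b / a)  # "ET opportunity"
        s = y * math.exp(-e / b)         # soil moisture carried over
        excess = w - y                   # water leaving the soil zone
        g = (g + c * excess) / (1.0 + d)          # groundwater storage
        flows.append((1.0 - c) * excess + d * g)  # direct runoff + baseflow
    return flows
```

Because a ≤ 1 guarantees y ≤ w, the excess term and the simulated streamflow are always non-negative, which is the property a calibration routine relies on when it searches the (a, b, c, d) space.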
Procedia PDF Downloads 525
1037 Removal of Heavy Metals by KOH Activated Diplotaxis harra Biomass: Experimental Design Optimization
Authors: H. Tounsadi, A. Khalidi, M. Abdennouri, N. Barka
Abstract:
The objective of this study was to produce high-quality activated carbons from Diplotaxis harra biomass by potassium hydroxide activation and to apply them to heavy metal removal. To reduce the number of experiments, a two-level full factorial design was used to identify optimal preparation conditions and the best conditions for the removal of cadmium and cobalt ions from aqueous solutions. The influence of different variables during the activation process, such as carbonization temperature, activation temperature, activation time, and impregnation ratio (g KOH/g carbon), was investigated, and the best production conditions were determined. The experimental results showed that removal of cadmium and cobalt ions onto the activated carbons was more sensitive to the methylene blue index than to the iodine number. The removal of cadmium and cobalt ions was most influenced by activation temperature, with a negative effect, followed by the impregnation ratio, with a positive one. Based on the statistical data, the best conditions for the removal of cadmium and cobalt by the prepared activated carbons were established. The maximum iodine number and methylene blue index obtained under these conditions and the corresponding sorption capacities for cadmium and cobalt were determined; these sorption capacities were greater than those of a commercial activated carbon used in water treatment.
Keywords: activated carbon, cadmium, cobalt, Diplotaxis harra, experimental design, potassium hydroxide
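In a two-level full factorial design, each factor's main effect is the difference between the mean response at its high and low coded levels. A generic sketch, with a made-up response function standing in for the measured uptake (the signs mirror the abstract's negative temperature effect and positive ratio effect, but the numbers are invented):

```python
from itertools import product

def main_effects(factor_names, response):
    """Main effects from a two-level (coded -1/+1) full factorial design."""
    runs = list(product([-1, 1], repeat=len(factor_names)))
    ys = [response(run) for run in runs]
    effects = {}
    for i, name in enumerate(factor_names):
        hi = [y for run, y in zip(runs, ys) if run[i] == +1]
        lo = [y for run, y in zip(runs, ys) if run[i] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Hypothetical response surface: uptake falls with activation temperature
# and rises with impregnation ratio.
factors = ["carbonization_T", "activation_T", "time", "KOH_ratio"]
effects = main_effects(factors, lambda run: 100 - 8 * run[1] + 5 * run[3])
```

Because the design is balanced, factors absent from the response come out with an effect of exactly zero, and each true coefficient appears doubled in its effect (high minus low spans two coded units).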
Procedia PDF Downloads 204
1036 The Design of Fire Tube Boiler
Authors: Yoftahe Nigussie
Abstract:
This report presents a final-year project on the design of a fire tube boiler for producing saturated steam. The objective is to produce saturated steam for various purposes, with a capacity of 2000 kg/h at a design pressure of 12 bar, by designing a high-performance fire tube boiler that considers cost minimization and parameter improvement. This is mainly achieved through the selection of appropriate materials for component parts, construction materials, and production methods in the different steps of the analysis. Most design parameters are obtained iteratively from the relevant formulas, for example selecting the tube diameter by optimizing the overall heat transfer coefficient. The number of passes is two because of the size and area of the tubes and shell. The analysis assumes heavy fuel oil no. 6 with a higher heating value of 44000 kJ/kg and a lower heating value of 41300 kJ/kg; the fuel consumption is 140.37 kg/h, releasing 1610 kW of heat at an efficiency of 85.25%. Cross flow is chosen for its advantages; the tubes inside the shell are welded to the tube sheet, and the tube sheet is attached to the shell and the ends using gaskets and welds. The shell is designed to the European Standard pressure vessel code, considering the weight, including contents, and supplementary accessories such as lifting lugs, openings, ends, manhole, and supports, with detail and assembly drawings.
Keywords: steam generation, external treatment, internal treatment, steam velocity
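The quoted figures can be cross-checked: 140.37 kg/h of fuel at the stated lower heating value releases about 1610 kW, matching the abstract's heat figure on an LHV basis. The split of that input by the stated efficiency is our interpretation, not the report's stated breakdown:

```python
# Cross-check of the abstract's energy figures (LHV basis).
fuel_rate = 140.37      # kg/h, stated fuel consumption
lhv = 41300.0           # kJ/kg, stated lower heating value
efficiency = 0.8525     # stated boiler efficiency

heat_input_kw = fuel_rate * lhv / 3600.0       # fuel heat release, ~1610 kW
heat_to_steam_kw = efficiency * heat_input_kw  # ~1373 kW absorbed by the steam
```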
Procedia PDF Downloads 102
1035 Estimation of Bio-Kinetic Coefficients for Treatment of Brewery Wastewater
Authors: Abimbola M. Enitan, J. Adeyemo
Abstract:
Anaerobic modeling is a useful tool to describe and simulate the condition and behaviour of anaerobic treatment units for better effluent quality and biogas generation. The present investigation deals with the anaerobic treatment of brewery wastewater at varying organic loads. The chemical oxygen demand (COD) and total suspended solids (TSS) of the influent and effluent of the bioreactor were determined at various retention times to generate data for the kinetic coefficients. The bio-kinetic coefficients of the modified Stover–Kincannon kinetic and methane generation models were determined to study the performance of the anaerobic digestion process. At steady state, the kinetic coefficient (K), the endogenous decay coefficient (Kd), the maximum growth rate of microorganisms (µmax), the growth yield coefficient (Y), the ultimate methane yield (Bo), the maximum utilization rate constant (Umax) and the saturation constant (KB) were calculated to be 0.046 g/g COD, 0.083 d⁻¹, 0.117 d⁻¹, 0.357 g/g, 0.516 L CH4/g CODadded, 18.51 g/L/day and 13.64 g/L/day, respectively. The outcome of this study will help in the simulation of anaerobic models to predict usable methane yield and good effluent quality during the treatment of industrial wastewater, protecting the environment, conserving natural resources, and saving the time and costs incurred by industries for the discharge of untreated or partially treated wastewater. It will also contribute to a sustainable long-term clean development mechanism for optimizing the methane produced from anaerobic degradation of waste in a closed system.
Keywords: brewery wastewater, methane generation model, environment, anaerobic modeling
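The modified Stover–Kincannon model predicts effluent substrate from the organic loading rate: (S0 − Se)/HRT = Umax·OLR/(KB + OLR), with OLR = S0/HRT. A sketch using the paper's Umax and KB estimates; the feed COD and retention time are hypothetical, and the prediction is clamped at zero since the model is only meaningful above a minimum loading:

```python
def effluent_cod(s0, hrt, umax=18.51, kb=13.64):
    """Steady-state effluent COD via the modified Stover-Kincannon model.

    umax, kb default to the paper's estimates (g/L/day); s0 is feed COD
    (g/L) and hrt the hydraulic retention time (days), both hypothetical.
    """
    olr = s0 / hrt                            # organic loading rate, g/L/day
    removed = umax * olr / (kb + olr) * hrt   # g COD/L removed over one HRT
    return max(s0 - removed, 0.0)             # clamp at zero below min. load

se = effluent_cod(5.0, 0.5)                # 5 g/L feed, 12-hour retention
removal_pct = 100.0 * (5.0 - se) / 5.0     # ~78% COD removal
```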
Procedia PDF Downloads 274
1034 Optimization of Friction Stir Welding Parameters for Joining Aluminium Alloys Using Response Surface Methodology and Artificial Neural Network
Authors: A. M. Khourshid, A. M. El-Kassas, I. Sabry
Abstract:
The objective of this work was to investigate the mechanical properties in order to demonstrate the feasibility of friction stir welding for joining Al 6061 aluminium alloys. Welding was performed on pipes with different thicknesses (2, 3 and 4 mm), five rotational speeds (485, 710, 910, 1120 and 1400 rpm) and a traverse speed of 4 mm/min. This work focuses on two methods, artificial neural networks and Response Surface Methodology (RSM), to predict the tensile strength, percentage elongation and hardness of friction stir welded 6061 aluminium alloy. An Artificial Neural Network (ANN) model was developed for the analysis of the friction stir welding parameters of 6061 pipe. Tensile strength, percentage elongation and hardness of the weld joints were predicted as functions of tool rotation speed, material thickness and axial force, and a comparison was made between measured and predicted data. A Response Surface Methodology (RSM) model was also developed, and the values obtained for tensile strength, percentage elongation and hardness were compared with the measured values. The effect of the FSW process parameters on the mechanical properties of 6061 aluminium alloy has been analysed in detail.
Keywords: friction stir welding, aluminium alloy, response surface methodology, artificial neural network
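Response Surface Methodology fits a low-order polynomial to the measured responses. A one-factor quadratic fit by ordinary least squares (Cramer's rule on the normal equations) illustrates the idea; a real RSM model here would involve several factors and interaction terms, and the data below are invented:

```python
def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def rsm_quadratic_fit(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x**2 (one-factor surface)."""
    s = [sum(x ** k for x in xs) for k in range(5)]   # power sums S0..S4
    a = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]                          # normal equations
    rhs = [sum(ys),
           sum(x * y for x, y in zip(xs, ys)),
           sum(x * x * y for x, y in zip(xs, ys))]
    d = det3(a)
    coeffs = []
    for col in range(3):            # Cramer's rule, one column at a time
        m = [row[:] for row in a]
        for row, r in zip(m, rhs):
            row[col] = r
        coeffs.append(det3(m) / d)
    return coeffs

# Synthetic data generated from y = 2 + 3x - x^2, recovered exactly.
coeffs = rsm_quadratic_fit([0, 1, 2, 3, 4], [2, 4, 4, 2, -2])
```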
Procedia PDF Downloads 296
1033 A Constrained Model Predictive Control Scheme for Simultaneous Control of Temperature and Hygrometry in Greenhouses
Authors: Ayoub Moufid, Najib Bennis, Soumia El Hani
Abstract:
The objective of greenhouse climate control is to improve crop development and to minimize production costs. A greenhouse is a system open to the external environment, and the challenge is to regulate the internal climate despite strong meteorological disturbances. The internal state of the greenhouse considered in this work is defined by two relevant, coupled variables, namely inside temperature and hygrometry. These two variables are chosen to describe the internal state of greenhouses because of their importance in plant development and their sensitivity to external climatic conditions, the source of weather disturbances. A multivariable model is proposed and validated by treating the greenhouse as a black-box system, with the least squares method applied to parameter identification based on collected experimental measurements. To regulate the internal climate, we propose a Model Predictive Control (MPC) scheme that takes into account the measured meteorological disturbances and the physical and operational constraints on the control and state variables. A successful feasibility study of the proposed controller is presented, and simulation results show good performance despite the strong interaction between internal and external variables and the strong external meteorological disturbances. The inside temperature and hygrometry track the desired trajectories closely. A comparison with an on/off controller applied to the same greenhouse confirms the efficiency of the MPC approach for inside climate control.
Keywords: climate control, constraints, identification, greenhouse, model predictive control, optimization
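The essence of constrained MPC — minimize a tracking-plus-effort cost over a horizon, apply the first input, repeat — can be shown on a deliberately tiny example: a scalar model with a one-step horizon and input clipping. This is a stand-in sketch, not the paper's multivariable greenhouse controller:

```python
def mpc_step(x, ref, a=0.9, b=0.1, lam=0.01, u_min=0.0, u_max=2.0):
    """One receding-horizon step (horizon 1) for the model x+ = a*x + b*u.

    Minimizes (a*x + b*u - ref)**2 + lam*u**2 analytically, then clips u
    to its bounds -- a deliberately tiny stand-in for full MPC.
    """
    u = b * (ref - a * x) / (b * b + lam)   # unconstrained minimizer
    return min(max(u, u_min), u_max)        # enforce input constraints

x, ref, history = 0.0, 1.0, []
for _ in range(60):                          # receding-horizon simulation
    u = mpc_step(x, ref)
    x = 0.9 * x + 0.1 * u                    # plant update
    history.append((u, x))
```

Note the steady-state offset (x settles near 0.91 rather than 1.0): penalizing control effort without integral action leaves a tracking bias, one reason practical MPC formulations add disturbance models or longer horizons.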
Procedia PDF Downloads 208
1032 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery
Abstract:
Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, which often leads to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate to a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of velocity with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted for manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting both head acceleration and the delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively.
Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465…
Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI
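A best-fit line such as Model 1 is an ordinary least-squares fit, computable in closed form. The data pairs below are hypothetical, chosen only to span the reported delta-V and acceleration ranges; they are not the meta-analysis data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (delta-V in km/h, peak head acceleration in g) pairs.
pairs = [(1.3, 0.9), (3.0, 3.1), (5.2, 6.0), (7.5, 9.8), (11.1, 16.5)]
slope, intercept = linear_fit([dv for dv, _ in pairs], [g for _, g in pairs])
```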
Procedia PDF Downloads 237
1031 Large Scale Production of Polyhydroxyalkanoates (PHAs) from Waste Water: A Study of Techno-Economics, Energy Use, and Greenhouse Gas Emissions
Authors: Cora Fernandez Dacosta, John A. Posada, Andrea Ramirez
Abstract:
The biodegradable family of polymers known as polyhydroxyalkanoates are interesting substitutes for conventional fossil-based plastics. However, the manufacturing and environmental impacts associated with their production via intracellular bacterial fermentation are strongly dependent on the raw material used and on energy consumption during the extraction process, limiting their potential for commercialization. Industrial wastewater is studied in this paper as a promising alternative feedstock for waste valorization. Based on results from laboratory and pilot-scale experiments, a conceptual process design, techno-economic analysis, and life cycle assessment are developed for the large-scale production of the most common type of polyhydroxyalkanoate, polyhydroxybutyrate (PHB). Intracellular polyhydroxybutyrate is obtained via fermentation of the microbial community present in industrial wastewater, and the downstream processing is based on chemical digestion with surfactant and hypochlorite. The economic potential and environmental performance results help identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater towards PHB presents advantages over traditional PHA production from sugars because of the null environmental burdens and financial costs of the raw material in the bioplastic production process. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.
Keywords: circular economy, life cycle assessment, polyhydroxyalkanoates, waste valorization
Procedia PDF Downloads 459
1030 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Despite accounting for the largest share of neonatal mortality and morbidity, low birth weight (LBW) remains challenging to predict in time to prepare better interventions. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of live births, number of stillbirths, antenatal care frequency, and sex of the fetus. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%), with a precision of 93.1% to 93.3%, an F1-score of 92.7% to 93.1%, and a ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions toward the sustainable development goal (SDG) related to neonatal mortality.
Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
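The reported accuracy, precision, recall, and F1 figures all derive from the confusion matrix. A minimal sketch of their computation for a binary LBW label (1 = low birth weight), independent of any particular classifier:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary label (1 = LBW)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return acc, prec, rec, f1
```

In 10-fold cross-validation these quantities are computed on each held-out fold and averaged, which is what the reported ranges summarize across models.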
Procedia PDF Downloads 32
1029 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite
Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy
Abstract:
This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS, with ageing time, temperature and percentage composition of cow horn particles (CHp%) as input parameters. The correlation coefficient (R) of the predicted versus measured hardness values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS trained on coarse experimental data points is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS makes it more expensive for the modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.
Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite
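The two figures of merit used here — the correlation coefficient R between predicted and measured values, and the average relative error — can be computed as follows. This is a generic sketch, not tied to the study's data:

```python
import math

def pearson_r(measured, predicted):
    """Pearson correlation coefficient R between two series."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    sx = math.sqrt(sum((x - mx) ** 2 for x in measured))
    sy = math.sqrt(sum((y - my) ** 2 for y in predicted))
    return cov / (sx * sy)

def avg_relative_error_pct(measured, predicted):
    """Mean of |prediction error| / |measured value|, as a percentage."""
    return 100.0 * sum(abs(p - m) / abs(m)
                       for m, p in zip(measured, predicted)) / len(measured)
```

A high R with a large average relative error (as in the varied-temperature case above) is possible because R measures linear association, not absolute agreement.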
Procedia PDF Downloads 157
1028 Pyrolysis of the Reed (Phragmites australis) and Evaluation of Pyrolysis Products
Authors: Ahmet Helvaci, Selcuk Dogan
Abstract:
Reed grows naturally in almost all the lakes of Western Anatolia. Owing to this abundance, pyrolysis of reed is an economical and practical application. This study aims to determine the optimum conditions for the pyrolysis of reed, a cheap and abundant raw material, and to evaluate the pyrolysis products. For this purpose, reed was obtained from Eber Lake, located within the borders of the Bolvadin county of Afyonkarahisar. Optimum pyrolysis conditions were determined by examining the effects of changes in pyrolysis temperature and time. The evaluation of the resulting liquid and solid pyrolysis products was investigated, in particular the use of the solid product as carbon black in tire production. Tire samples were prepared with the carbon black samples obtained from pyrolysis at different temperatures; performance tests were then carried out and compared with reference carbon blacks used in the market and with standards. The surface areas of the carbon black samples were also measured and compared with those of the reference carbon blacks. In addition, the fuel values of the liquid products were determined by calorimetry. The best surface area for carbon black samples intended for tire production (about 370 m²/g) was obtained after 40 minutes at 500 °C, while the best result in the tire production evaluations came from carbon black samples obtained at a pyrolysis temperature of 450 °C. The calorific value of the liquid product obtained during 60 minutes of pyrolysis was also quite good (around 5500 kcal/kg).
Keywords: evaluation of products, optimization, pyrolysis, reed
Procedia PDF Downloads 194
1027 Photovoltaic Performance of AgInSe2-Conjugated Polymer Hybrid Systems
Authors: Dinesh Pathak, Tomas Wagner, J. M. Nunzi
Abstract:
We investigated MDMO-PPV:PCBM:AIS blends for photovoltaic applications. AgInSe2 (AIS) powder was synthesized by sealing and heating the stoichiometric constituents in an evacuated quartz ampoule. Finely ground AIS powder was dispersed in MDMO-PPV and PCBM, with and without surfactant. Different concentrations of these particles were suspended in the polymer solutions and spin-cast onto ITO glass. Morphological studies were performed by atomic force microscopy and optical microscopy. The blend layers were also investigated by various techniques, including XRD, UV-VIS optical spectroscopy, AFM and PL, after a series of optimizations of polymer, concentration, deposition, suspension, surfactants, etc. XRD investigation of the blend layers shows clear evidence of AIS dispersion in the polymers, as do the diode behavior and cell parameters. A bulk heterojunction hybrid photovoltaic device, Ag/MoO3/MDMO-PPV:PCBM:AIS/ZnO/ITO, was fabricated and tested with a standard solar simulator and device characterization system. The best performance we obtained was an open-circuit voltage Voc of about 0.54 V, a photocurrent Isc of 117 µA, and an efficiency of 0.2 percent under white light illumination at an intensity of 23 mW/cm². Our results are encouraging for further research on fourth-generation inorganic-organic hybrid bulk heterojunction photovoltaics for energy. More optimization of spinning rate, thickness, solvents, deposition rates for the active layers, etc. needs to be explored to improve the photovoltaic response of these bulk heterojunction devices.
Keywords: thin films, photovoltaic, hybrid systems, heterojunction
Procedia PDF Downloads 276
1026 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment
Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan
Abstract:
With the rapid development of network and communication technology, a great deal of data has been generated in the different domains of a network. These data show a trend toward increasing scale and more complex structure, so an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides a data exchange service through publish-subscribe technology that supports asynchronism and multi-to-multi communication, which suits the needs of a dynamic, large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model data attributes such as subject, object, permission and environment; it effectively monitors the activities of users accessing resources and ensures that legitimate users obtain effective access rights within a legal time window. The cross-domain access security negotiation sub-system automatically determines the access rights between different security domains through the interactive disclosure of digital certificates and access control policies, using trust policy management and negotiation algorithms; this provides an effective means for establishing cross-domain trust relationships and enforcing access control in a distributed environment. The CDSS's asynchronous, multi-to-multi and loosely coupled communication features adapt well to data exchange and sharing in dynamic, distributed and large-scale network environments. Next, we will extend the CDSS with features to support mobile computing environments.
Keywords: data sharing, cross-domain, data exchange, publish-subscribe
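The publish-subscribe pattern underlying the data distribution sub-system decouples producers from consumers and naturally supports multi-to-multi communication. A minimal in-process broker sketch (a real sub-system would add network transport, cross-domain topics, and the ABAC checks described above):

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish-subscribe broker (multi-to-multi)."""

    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # One publish fans out to every subscriber of the topic.
        for cb in self._subs[topic]:
            cb(message)

broker = Broker()
inbox_a, inbox_b = [], []
broker.subscribe("sensor/updates", inbox_a.append)
broker.subscribe("sensor/updates", inbox_b.append)
broker.publish("sensor/updates", {"id": 1, "value": 42})
```

Publishers never reference subscribers directly, which is what makes the coupling loose: either side can join or leave at runtime without reconfiguring the other.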
Procedia PDF Downloads 128
1025 Personality Profiles, Emotional Disturbance and Health-Related Quality of Life in Patients with Epilepsy
Authors: Usha Barahmand, Ruhollah Heydari Sheikh Ahmad, Sara Alaie Khoraem
Abstract:
Introduction: The association of epilepsy with several psychological disorders and reduced quality of life has long been recognized. The present study aimed to compare the personality profiles, quality of life, and symptomatology of anxiety and depression in patients with epilepsy and healthy controls. Materials and Methods: Forty-seven patients (29 men and 18 women) with diagnosed epilepsy participated in this study, along with forty-seven healthy controls matched to the patients in age and gender. The participants' personality and psychological profiles were assessed using the Depression, Anxiety, and Stress Scale (DASS-21), the Short-Form Health Survey (SF-36) and the HEXACO Personality Inventory (HEXACO-PI). Scoring algorithms were applied to the SF-36 to produce the physical and mental component scores (PCS and MCS). Results: There were statistically significant differences between patients and controls in the total SF-36 score and in the anxiety, depression and stress scores of the DASS-21. Anxiety, stress and depression scores correlated significantly and inversely with the PCS and MCS. Data analysis showed that females had higher depression scores than males among both patients and controls, while males in both groups scored higher on stress. Patients' personality scores also differed from those of controls on emotionality, agreeableness and extraversion: patients scored higher on emotionality and lower on agreeableness and extraversion. Patients also scored lower on indices of quality of life. Regression analysis revealed that emotionality, anxiety, stress and MCS accounted for a significant proportion of the variance in the severity of epileptic seizures. Conclusion: Stressful situations and psychological conditions, as well as the personality trait of neuroticism, were related to the occurrence of recurrent epileptic seizures.
Keywords: anxiety, depression, epilepsy, neuroticism, personality, quality of life, stress
Procedia PDF Downloads 372
1024 Enhancing Rupture Pressure Prediction for Corroded Pipes Through Finite Element Optimization
Authors: Benkouiten Imene, Chabli Ouerdia, Boutoutaou Hamid, Kadri Nesrine, Bouledroua Omar
Abstract:
Algeria is actively enhancing gas productivity by augmenting the supply flow. However, this effort has led to increased internal pressures, posing a potential risk to pipeline integrity, particularly in the presence of corrosion defects. Sonatrach relies on a vast network of pipelines spanning 24,000 kilometers for the transportation of gas and oil. The aging of these pipelines raises the likelihood of internal and external corrosion, heightening the risk of rupture. To address this issue, comprehensive inspection using specialized scraping tools is imperative; these advanced tools furnish a detailed assessment of all pipeline defects. It is then essential to recalculate the pressure parameters to safeguard the corroded pipeline's integrity while ensuring the continuity of production. In this context, Sonatrach employs codified limit pressure calculations such as ASME B31G (2009) and the modified ASME B31G (2012). The aim of this study is a comparative analysis of the limit pressure calculation methods documented in the literature, namely DNV RP-F101, SHELL, PCORRC, NETTO, and CSA Z662, based on a dataset of 329 burst tests published in the literature. Ultimately, we intend to introduce a novel approach grounded in the finite element method, employing ANSYS software.
Keywords: pipeline burst pressure, burst test, corrosion defect, corroded pipeline, finite element method
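The modified ASME B31G method estimates the failure pressure of a pipe with a corrosion defect from the defect's depth and length, via a flow stress and the Folias bulging factor. A sketch of the 0.85dL formulation with illustrative pipe dimensions (not values from the paper's 329-test data set):

```python
import math

def modified_b31g(d_out, t, smys, depth, length):
    """Modified ASME B31G (0.85dL) failure-pressure estimate.

    d_out, t, length, depth in mm; smys (specified minimum yield strength)
    in MPa; returns pressure in MPa. Flow stress = SMYS + 68.95 MPa.
    """
    flow = smys + 68.95                     # flow stress, MPa
    z = length ** 2 / (d_out * t)           # defect length parameter
    if z <= 50.0:
        m = math.sqrt(1.0 + 0.6275 * z - 0.003375 * z * z)  # Folias factor
    else:
        m = 0.032 * z + 3.3
    ratio = depth / t                       # relative defect depth
    return (2.0 * flow * t / d_out) * (1.0 - 0.85 * ratio) \
           / (1.0 - 0.85 * ratio / m)

p_intact = modified_b31g(610.0, 9.5, 448.0, 0.0, 100.0)    # no metal loss
p_corroded = modified_b31g(610.0, 9.5, 448.0, 4.0, 100.0)  # ~42% deep defect
```

With zero defect depth the expression collapses to the plain Barlow-type pressure 2·flow·t/D, so the intact case doubles as a consistency check.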
Procedia PDF Downloads 62
1023 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium
Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault
Abstract:
Linear ultrasonic techniques play a major role in Non-Destructive Evaluation (NDE) of civil engineering structures in concrete since they can meet operational requirements. Interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple-scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium like concrete, integrating the multiple-scattering phenomena in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation of an equivalent homogeneous medium. First, the model is applied to one scattering element (a cylinder) in a homogeneous medium in a linear-elastic system, and it is validated by comparison with the analytical solution. Then, cases of multiple scattering by sets of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and of scatterer size, concentration, and shape. The effective properties are also compared with the predictions of the Waterman-Truell model to verify its validity. Finally, mortar viscoelastic behavior is introduced in the simulation in order to consider the dispersion and attenuation due to the porosity of the cement paste. In the future, further steps will be developed: comparisons with experimental results, interpretation of NDE measurements, and optimization of NDE parameters prior to auscultation.
Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements
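The extraction of effective properties from the coherent field described above can be sketched as follows: average the wave fields over scatterer realizations, then compare the averaged signals recorded at two planes a distance dx apart — the phase lag gives the phase velocity and the amplitude ratio gives the attenuation. Function and variable names are illustrative:

```python
import cmath, math

def dft_bin(signal, k):
    """Single-bin DFT of a real, uniformly sampled signal."""
    N = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * n / N)
               for n, x in enumerate(signal))

def effective_properties(coh1, coh2, dx, fs, k):
    """Sketch: effective phase velocity and attenuation of the equivalent
    homogeneous medium at frequency bin k, from coherent (ensemble-averaged)
    signals recorded at two planes separated by dx; fs is the sampling rate."""
    F1, F2 = dft_bin(coh1, k), dft_bin(coh2, k)
    f = k * fs / len(coh1)
    dphi = cmath.phase(F2 / F1)               # phase accumulated over dx (wrapped)
    v_phase = -2 * math.pi * f * dx / dphi    # assumes |dphi| < pi at this bin
    alpha = math.log(abs(F1) / abs(F2)) / dx  # attenuation, Np per unit length
    return v_phase, alpha
```

A full implementation would unwrap the phase across frequency; this sketch only handles bins where the accumulated phase stays below pi.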
Procedia PDF Downloads 277
1022 Optimization of the Conditions of Electrophoretic Deposition Fabrication of a Graphene-Based Electrode for Applications in Electro-Optical Sensors
Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar
Abstract:
Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, considerable progress has been achieved in the fabrication of single- or multilayer GS for functional device applications in optoelectronics, such as field-effect transistors, ultrasensitive sensors, and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films; it is readily applied to any powdered solid that forms a stable suspension. The deposition parameters were controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. Results were obtained from SEM, sheet-resistance measurements, and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at 2 V for 10 s, followed by annealing at 200 °C for 1 minute.
Keywords: electrophoretic deposition (EPD), graphene oxide (GO), electrical conductivity, electro-optical devices
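EPD kinetics of the kind optimized above are commonly described by the Hamaker relation, in which the deposited mass grows with electrophoretic mobility, applied field, electrode area, suspension concentration, and deposition time. A sketch; names and the constant sticking efficiency are illustrative simplifications:

```python
def epd_deposit_mass(mobility, field, area, conc, time, efficiency=1.0):
    """Hamaker-relation sketch for electrophoretic deposition:
    deposited mass ~ f * mu * E * A * c * t.
    mobility [m^2/(V*s)], field [V/m], area [m^2],
    conc [kg/m^3], time [s], efficiency: sticking fraction <= 1."""
    return efficiency * mobility * field * area * conc * time
```

In practice the rate decays as the insulating deposit builds up, which is one reason short, low-voltage windows such as the 2 V / 10 s optimum above are explored experimentally rather than predicted from this linear law.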
Procedia PDF Downloads 191
1021 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study uses Multiplicative Weight Update, within artificial intelligence and machine learning, to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models in an attempt to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to learn patterns that label unseen scans as either benign or malignant. The models feed a multiplicative weight update algorithm that weights each model's guess according to its precision and accuracy over successive guesses; the weighted guesses are then combined to produce a prediction. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%, while the Multiplicative Weight Update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was indeed a significant difference in accuracy, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is not effective in a binary setting with only two possible classifications.
In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into many categories rather than only the two used in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
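The weighting scheme described above can be sketched with the classic multiplicative-weights update: each expert (model) that guesses wrong has its weight multiplied by (1 − η), and the ensemble predicts by weighted vote. This is a minimal illustration, not the authors' code; the names and the penalty rule are assumptions:

```python
def mwu_ensemble(experts, stream, eta=0.5):
    """Multiplicative-weights sketch over a stream of (x, label) pairs:
    the ensemble predicts by weighted majority vote, then every expert
    that was wrong has its weight multiplied by (1 - eta).
    Returns the ensemble's accuracy on the stream."""
    weights = [1.0] * len(experts)
    correct = 0
    for x, label in stream:
        preds = [e(x) for e in experts]
        votes = {}
        for w, p in zip(weights, preds):
            votes[p] = votes.get(p, 0.0) + w
        guess = max(votes, key=votes.get)            # weighted majority vote
        correct += (guess == label)
        weights = [w * (1 - eta) if p != label else w
                   for w, p in zip(weights, preds)]  # penalize mistakes
    return correct / len(stream)
```

With one reliable expert among unreliable ones, the reliable expert's weight quickly dominates the vote, which is the behavior the method is designed to exploit.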
Procedia PDF Downloads 83
1020 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure
Authors: Y. L. Hor, H. S. Chu, V. P. Bui
Abstract:
Composite material is widely used to replace conventional material, especially in the aerospace industry, to reduce the weight of devices. It is formed by combining reinforced materials via adhesive bonding to produce a bulk material with altered macroscopic properties. In bulk composites, degradation may occur at the microscopic scale, within each individual reinforced fiber layer or especially in the matrix layer: delamination, inclusions, disbonds, voids, cracks, and porosity. In this paper, we focus on detecting defects in the matrix layer, where adjacent composite plies are in contact but coupled only through a weak bond. Such adhesive defects are tested through various nondestructive methods; among them, pulsed phase thermography (PPT) has shown advantages of improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the nondestructive inspection of weak bonding in composite material. The resulting thermal field comprises internal reflections between the interfaces of the defects and the specimen, and the key features of the defects present in the material can be obtained by investigating the evolution of the thermal field distribution. Computational simulation of such inspections has allowed the techniques to be improved and applied to various inspections, such as materials with high thermal conductivity and more complex structures.
Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization
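Pulsed phase thermography as used above is typically computed by taking a per-pixel Fourier transform of the recorded cooling sequence and forming phase images ("phasegrams") at chosen frequency bins; defects alter the local cooling transient and therefore the phase. A minimal sketch, with an illustrative array layout:

```python
import cmath, math

def ppt_phase(thermogram, k=1):
    """Pulsed phase thermography sketch: per-pixel DFT of the cooling
    sequence; the phase at frequency bin k forms the phasegram, which is
    less sensitive to uneven heating than raw thermal contrast.
    thermogram: list of frames, each a 2-D list of temperatures."""
    n = len(thermogram)
    h, w = len(thermogram[0]), len(thermogram[0][0])
    phase = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            F = sum(thermogram[t][i][j] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            phase[i][j] = cmath.phase(F)  # radians
    return phase
```

A pixel over a weak bond cools on a different time constant than sound material, so its phase at low frequency bins separates from the background.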
Procedia PDF Downloads 180
1019 A Second Order Genetic Algorithm for the Traveling Salesman Problem
Authors: T. Toathom, M. Munlin, P. Sugunnasil
Abstract:
The traveling salesman problem (TSP) is one of the best-known problems in combinatorial optimization, and there is a large body of research on it. One of the most widely used tools for this problem is the genetic algorithm (GA). The chromosome of a GA for the TSP is normally encoded by the order in which the cities are visited. However, the traditional chromosome encoding scheme has two limitations: a large solution space and an inability to encapsulate some information. The number of possible solutions grows exponentially with the number of cities. Moreover, the traditional encoding scheme fails to give credit to a correct relation appearing in a misplaced position; the traditional method focuses only on the exact solution. In this work, we relax the exactness requirement of the GA for the TSP. The proposed work exploits the relations between cities in order to reduce the solution space of the chromosome encoding. In this paper, a second order GA is proposed to solve the TSP, where the term second order refers to how the solution is encoded into the chromosome. Chromosomes are divided into two types: high order and low order. A high order chromosome focuses on relations between cities, such as "city A should be visited before city B". A low order chromosome, on the other hand, is derived from a high order chromosome; in other words, it is encoded by the traditional chromosome encoding scheme. The genetic operations, mutation and crossover, are performed on the high order chromosome. The high order chromosome is then mapped to a group of low order chromosomes whose characteristics satisfy it. From this mapped set of chromosomes, a champion chromosome is selected based on fitness value and later used as a representative of the high order chromosome.
The experiment is performed on city data from TSPLIB.
Keywords: genetic algorithm, traveling salesman problem, initial population, chromosome encoding
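The mapping from a high order chromosome to low order chromosomes can be illustrated as follows: treat the high order chromosome as a set of precedence relations ("visit a before b"), generate traditional order-encoded tours that satisfy them, and keep the fittest (shortest) as the champion. This sketch enumerates tours exhaustively for clarity and is not the authors' implementation, which evolves high order chromosomes with crossover and mutation:

```python
import itertools

def tour_length(tour, dist):
    """Length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def champion_for(precedences, cities, dist):
    """Map a 'high order' chromosome (a set of pairs (a, b) meaning
    'visit a before b') to the 'low order' tours that satisfy it, and
    return the shortest such tour as the champion."""
    best = None
    for tour in itertools.permutations(cities):
        if all(tour.index(a) < tour.index(b) for a, b in precedences):
            if best is None or tour_length(tour, dist) < tour_length(best, dist):
                best = tour
    return best
```

For large instances the champion would instead be found by sampling or by a nested GA over the constrained tours, since exhaustive enumeration is infeasible.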
Procedia PDF Downloads 275
1018 Formulation, Evaluation and Statistical Optimization of Transdermal Niosomal Gel of Atenolol
Authors: Lakshmi Sirisha Kotikalapudi
Abstract:
Atenolol, a widely used antihypertensive drug, is ionisable and degrades in the acidic environment of the GIT, lessening its bioavailability. The transdermal route may therefore be selected as an alternative to enhance bioavailability, and the drug's half-life of 6-7 hours suggests the need for prolonged release. The present work on a transdermal niosomal gel aims to extend the release of the drug and increase its bioavailability. The ethanol injection method was used to prepare niosomes from Span 60 and cholesterol at different molar ratios following a central composite design. The prepared niosomes were characterized for size, zeta potential, entrapment efficiency, drug content, and in-vitro drug release. The optimized formulation was selected by statistically analyzing the results using the Stat-Ease Design Expert software. The optimized formulation also showed high drug retention inside the vesicles over a period of three months at 4 °C, indicating stability. Niosomes separated as a pellet were dried and incorporated into a hydrogel prepared with chitosan, a natural polymer, as the gelling agent. The effect of various chemical permeation enhancers on the gel formulations was also studied. The prepared formulations were characterized for viscosity, pH, drug release using Franz diffusion cells, and skin irritation, as well as for in-vivo pharmacological activities. The atenolol niosomal gel preparations showed prolonged release of the drug and pronounced antihypertensive activity, indicating the suitability of the niosomal gel for topical and systemic delivery of atenolol.
Keywords: atenolol, chitosan, niosomes, transdermal
Procedia PDF Downloads 301
1017 Investigating the Role of Artificial Intelligence in Developing Creativity in Architecture Education in Egypt: A Case Study of Design Studios
Authors: Ahmed Radwan, Ahmed Abdel Ghaney
Abstract:
This paper delves into the transformative potential of artificial intelligence (AI) in fostering creativity within architecture education, with specific emphasis on its implications within design studios. The convergence of AI and architectural pedagogy has introduced avenues for redefining the boundaries of creative expression and problem-solving. By harnessing AI-driven tools, students and educators can collaboratively explore a spectrum of design possibilities, stimulate innovative ideation, and engage in multidimensional design processes. This paper investigates the ways in which AI contributes to architectural creativity by facilitating generative design, pattern recognition, virtual reality experiences, and sustainable design optimization. Furthermore, the study examines the balance between AI-enhanced creativity and the preservation of the core principles of architectural design education, ensuring that technology is harnessed to augment rather than replace foundational design skills. Through an exploration of Egypt's architectural heritage and contemporary challenges, this research underscores how AI can synergize with cultural context and historical insights to inspire cutting-edge architectural solutions. By analyzing AI's impact on nurturing creativity among Egyptian architecture students, this paper seeks to contribute to the ongoing discourse on the integration of technology within global architectural education paradigms. It is hoped that this research will guide the thoughtful incorporation of AI in fostering creativity while preserving the authenticity and richness of architectural design education in Egypt and beyond.
Keywords: architecture, artificial intelligence, architecture education, Egypt
Procedia PDF Downloads 84
1016 A Practice of Zero Trust Architecture in Financial Transactions
Authors: Liwen Wang, Yuting Chen, Tong Wu, Shaolei Hu
Abstract:
In order to enhance the security of critical financial infrastructure, this study transforms the architecture of a financial trading terminal into a zero trust architecture (ZTA), constructs an active defense system for cybersecurity, improves the security level of trading services in the Internet environment, enhances the ability to prevent network attacks and unknown risks, and reduces the industry and security risks brought about by cybersecurity threats. The study introduces the software-defined perimeter (SDP) technology of ZTA and adapts and applies it to a financial trading terminal to achieve security optimization and fine-grained business grading control. The upgraded architecture of the trading terminal moves security protection forward to the user access layer, replaces VPN to optimize remote access, and significantly improves the security protection capability of Internet transactions. The study achieves: (1) deep integration with the access control architecture of the transaction system; (2) no impact on the performance of terminals and gateways, with application system upgrades imperceptible to users; (3) customized checklist and policy configuration; and (4) introduction of industry-leading security techniques such as single-packet authorization (SPA) and secondary authentication. This study carries out a successful application of ZTA in the field of financial trading and provides transformation ideas for other similar systems while improving the security level of financial transaction services in the Internet environment.
Keywords: zero trust, trading terminal, architecture, network security, cybersecurity
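Single-packet authorization, mentioned above, can be illustrated with a minimal HMAC-authenticated datagram: the client sends one packet carrying a nonce, a timestamp, and its request, and the gateway verifies it before exposing any service port. This sketch is purely illustrative; the field names, JSON encoding, and hex tag format are assumptions, not the SDP wire format:

```python
import hashlib, hmac, json, os, time

def make_spa_packet(key, client_id, requested_service):
    """Illustrative SPA payload: one-time nonce + timestamp + request,
    authenticated with an HMAC-SHA256 tag appended after a '.'."""
    body = json.dumps({"id": client_id, "svc": requested_service,
                       "ts": int(time.time()), "nonce": os.urandom(8).hex()},
                      sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + tag

def verify_spa_packet(key, packet, max_age=30):
    """Gateway side: check the HMAC first, then reject stale packets."""
    body, tag = packet.rsplit(b".", 1)
    expected = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, tag):
        return False
    msg = json.loads(body)
    return abs(time.time() - msg["ts"]) <= max_age
```

A production SPA implementation would also track nonces against replay and typically encrypt the payload; only the authenticate-before-connect idea is shown here.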
Procedia PDF Downloads 174
1015 Genetic Algorithm Methods for Determination of the Overflow Coefficient of a Medium Throat Length Morning Glory Spillway Equipped with Crest Vortex Breakers
Authors: Roozbeh Aghamajidi
Abstract:
Shaft spillways are circular spillways generally used to release unexpected floods at earth and concrete dams. There are different types of shaft spillways: stepped and smooth. Stepped spillways pass greater flow discharges than smooth spillways; therefore, awareness of the flow behavior of these spillways helps in using them better and more efficiently. Moreover, the use of a vortex breaker has a great effect on the flow passing through a shaft spillway. To use such spillways more efficiently, the risk of the flow pressure dropping below the fluid vapor pressure, known as cavitation, should be prevented as far as possible. In this research, the behavior of the flow over spillways with different vortex breaker shapes on the spillway crest has been studied, covering the effects of flow regime changes, changes in step dimensions, and changes in the type of discharge. Two spillway models with three different vortex breakers in three arrangements have been used to assess the hydraulic characteristics of the flow. With regard to the inlet discharge to the spillway, pressure and flow velocity on the spillway surface have been measured at several points after each run. This kind of information leads to better design criteria for the spillway profile. To achieve these purposes, optimization plays an important role, and a genetic algorithm is utilized to study the emptying discharge. As a result, it turned out that the spillway type with the maximum discharge coefficient is the smooth spillway with ogee-shaped vortex breakers in a three-unit arrangement. It was also concluded that the genetic algorithm can be used to optimize the results.
Keywords: shaft spillway, vortex breaker, flow, genetic algorithm
Procedia PDF Downloads 375
1014 EasyModel: Web-Based Bioinformatics Software for Protein Modeling Based on Modeller
Authors: Alireza Dantism
Abstract:
Presently, describing the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be facilitated by studying the three-dimensional structure of the protein. In the absence of an experimental structure, comparative modeling often provides a useful three-dimensional model of the protein that depends on at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (the target) mainly based on its alignment with one or more proteins of known structure (the templates). Comparative modeling consists of five main steps: (1) identification of similarity between the target sequence and at least one known template structure; (2) alignment of the target sequence and the template(s); (3) building a model based on the alignment with the selected template(s); (4) prediction of model errors; and (5) optimization of the built model. Many computer programs and web servers automate the comparative modeling process. One of their most important advantages is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without programming knowledge; some experts, however, prefer to apply programming knowledge and model manually, because in doing so they can maximize the accuracy of their modeling. In this study, a web-based tool called EasyModel has been designed to predict the tertiary structure of proteins using the PHP and Python programming languages.
According to the user's inputs, EasyModel receives the unknown sequence (the target), a protein sequence file for a template that shares a percentage of similarity with the target, and related options; it then predicts the tertiary structure of the unknown sequence and presents the results in the form of graphs and constructed protein files.
Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, Modeller
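Step (2), target-template alignment, can be illustrated with a minimal Needleman-Wunsch global alignment. The scoring values here are illustrative; production tools built on Modeller use substitution matrices and structure-aware gap penalties instead:

```python
def global_align(target, template, match=1, mismatch=-1, gap=-2):
    """Minimal Needleman-Wunsch global alignment (step 2 of comparative
    modeling); returns the aligned pair with '-' marking gaps."""
    n, m = len(target), len(template)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if target[i-1] == template[j-1] else mismatch
            score[i][j] = max(score[i-1][j-1] + s,
                              score[i-1][j] + gap,
                              score[i][j-1] + gap)
    a, b, i, j = [], [], n, m          # traceback from the bottom-right cell
    while i or j:
        s = match if i and j and target[i-1] == template[j-1] else mismatch
        if i and j and score[i][j] == score[i-1][j-1] + s:
            a.append(target[i-1]); b.append(template[j-1]); i, j = i - 1, j - 1
        elif i and score[i][j] == score[i-1][j] + gap:
            a.append(target[i-1]); b.append("-"); i -= 1
        else:
            a.append("-"); b.append(template[j-1]); j -= 1
    return "".join(reversed(a)), "".join(reversed(b))
```

The resulting aligned pair is what the model-building step consumes: each target residue is assigned coordinates copied or interpolated from its aligned template residue.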
Procedia PDF Downloads 102
1013 Development of a Dairy Drink Made of Cocoa, Coffee and Orange By-Products with Antioxidant Activity
Authors: Gianella Franco, Karen Suarez, María Quijano, Patricia Manzano
Abstract:
Agro-industries generate large amounts of waste, most of which is untapped. This research was carried out to use cocoa, coffee, and orange industrial by-products to develop a dairy drink. The product was prepared by making a 10% aqueous extract of a mixture of cocoa bean shells, coffee bean shells, and orange peel. An extreme vertices mixture design was applied to vary the proportions of the ingredients of the aqueous extract, yielding 13 formulations. Each formulation was mixed with skim milk and pasteurized. The attributes of taste, smell, color, and appearance were evaluated by a semi-trained panel using a multiple comparisons test, comparing the formulations against a standard marked as "R", which consisted of a commercial coffee drink. The formulations with the highest scores were selected to maximize the total polyphenol content (TPC) through a linear optimization process, resulting in a formulation of 80.5% cocoa bean shell, 18.37% coffee bean shell, and 1.13% orange peel. The total polyphenol content was 4.99 ± 0.34 mg GAE/g of drink, the DPPH radical scavenging activity was 80.14 ± 0.05%, and the caffeine concentration was 114.78 mg/L, while the commercial coffee drink presented 3.93 ± 0.84 mg GAE/g of drink, 55.54 ± 0.03%, and 47.44 mg/L for TPC, DPPH radical scavenging activity, and caffeine content, respectively. The results show that it is possible to prepare an antioxidant-rich drink with good sensorial attributes from industrial by-products.
Keywords: DPPH, polyphenols, waste, food science
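The DPPH radical scavenging activity reported above is conventionally computed from the absorbance of the DPPH control and of the sample. A one-line sketch of that calculation:

```python
def dpph_scavenging(abs_control, abs_sample):
    """DPPH radical scavenging activity (%) from absorbance readings,
    conventionally 100 * (A_control - A_sample) / A_control."""
    return 100.0 * (abs_control - abs_sample) / abs_control
```

For instance, a control absorbance of 0.8 and a sample absorbance of 0.2 corresponds to 75% scavenging activity.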
Procedia PDF Downloads 472
1012 Grating Assisted Surface Plasmon Resonance Sensor for Monitoring of Hazardous Toxic Chemicals and Gases in Underground Mines
Authors: Sanjeev Kumar Raghuwanshi, Yadvendra Singh
Abstract:
The objective of this paper is to develop and optimize a fiber Bragg grating (FBG)-based surface plasmon resonance (SPR) sensor for monitoring hazardous toxic chemicals and gases in underground mines or any industrial area. A fully cladded telecommunication-standard FBG is proposed to produce surface plasmon resonance. A thin gold/silver film of a few nanometers (thickness subject to optimization) is to be applied over the FBG sensing head using the e-beam deposition method. Sensitivity enhancement of the sensor will be achieved by adding a composite nanostructured graphene oxide (GO) sensing layer using the spin-coating method. Both sensor configurations are expected to demonstrate high responsiveness towards changes in the resonance wavelength. The GO-enhanced sensor may show sensitivity increased many-fold compared to the gold-coated traditional fibre-optic sensor. Our work is focused on optimizing the GO multilayer structure and developing fibre coating techniques that will serve well for sensitive and multifunctional detection of hazardous chemicals. This research proposal shows great potential for the future development of optical fiber sensors using readily available components, such as Bragg gratings, as highly sensitive chemical sensors in areas such as environmental sensing.
Keywords: surface plasmon resonance, fibre Bragg grating, sensitivity, toxic gases, MATRIX method
Procedia PDF Downloads 272
1011 Assimilating Remote Sensing Data Into Crop Models: A Global Systematic Review
Authors: Luleka Dlamini, Olivier Crespo, Jos van Dam
Abstract:
Accurately estimating crop growth and yield is pivotal for timely, sustainable agricultural management and for ensuring food security. Crop models and remote sensing can complement each other and, when combined, form a robust analysis tool to improve crop growth and yield estimations. This study thus aims to systematically evaluate how research that exclusively focuses on assimilating remote sensing (RS) data into crop models varies among countries, crops, data assimilation methods, and farming conditions. A strict search string was applied in the Scopus and Web of Science databases, and 497 potential publications were obtained. After screening for relevance with predefined inclusion/exclusion criteria, 123 publications were considered in the final review. Results indicate that over 81% of the studies were conducted in countries associated with high socio-economic and technological advancement, mainly China, the United States of America, France, Germany, and Italy. Many of these studies integrated MODIS or Landsat data into WOFOST to improve crop growth and yield estimation of staple crops at the field and regional scales. Most studies use recalibration or updating methods alongside various algorithms to assimilate remotely sensed leaf area index into crop models. However, these methods cannot account for the uncertainties in the remote sensing observations and in the crop model itself. Over 85% of the studies were based on commercial and irrigated farming systems. Despite great global interest in data assimilation into crop models, limited research has been conducted in resource- and data-limited regions like Africa. We foresee great potential for such applications in those conditions, and hence for facilitating and expanding the use of this approach, from which developing farming communities could benefit.
Keywords: crop models, remote sensing, data assimilation, crop yield estimation
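The "updating" assimilation strategy noted above can be illustrated with a scalar Kalman-style analysis step that blends the crop model's leaf area index (LAI) forecast with a remotely sensed LAI according to their error variances. This toy example is not any particular reviewed paper's scheme:

```python
def assimilate_lai(lai_model, var_model, lai_obs, var_obs):
    """Scalar Kalman-style updating step: blend the crop model's LAI
    forecast with a remotely sensed LAI, weighting each by the other's
    error variance. Returns the analysis LAI and its reduced variance."""
    gain = var_model / (var_model + var_obs)          # Kalman gain
    lai_analysis = lai_model + gain * (lai_obs - lai_model)
    var_analysis = (1 - gain) * var_model
    return lai_analysis, var_analysis
```

When model and observation are equally uncertain, the analysis is their simple average; as observation error grows, the update correctly leans back toward the model forecast, which is why quantifying both error terms matters for the methods surveyed here.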
Procedia PDF Downloads 134