Search results for: protocol optimization
628 Geological and Geotechnical Approach for Stabilization of Cut-Slopes in Power House Area of Luhri HEP Stage-I (210 MW), India
Authors: S. P. Bansal, Mukesh Kumar Sharma, Ankit Prabhakar
Abstract:
Luhri Hydroelectric Project Stage-I (210 MW) is a run-of-the-river development with a dam-toe surface powerhouse (122 m long, 50.50 m wide, and 65.50 m high) on the right bank of the river Satluj in Himachal Pradesh, India. The project is located in the inner Lesser Himalaya, between the Dhauladhar Range in the south and the Higher Himalaya in the north, in a seismically active region. At the project location, the river is confined within narrow V-shaped valleys with little or no flat area close to the river bed. Cut slopes nearly 120 m high are proposed behind the powerhouse, from the powerhouse foundation level of 795 m up to ±915 m, to accommodate the surface powerhouse. The stability of these 120 m high cut slopes is a prime concern because of the risk involved. The slopes behind the powerhouse will be excavated mainly in augen gneiss, fresh to weathered in nature and biotite-rich in places. The foliation joints are favorable, dipping into the hill. Two steeper valley-dipping joint sets will be encountered on the slopes, which can cause instability during excavation. Geological exploration plays a vital role in the design and optimization of cut slopes. SWEDGE software has been used to analyze the geometry and stability of surface wedges in the cut slopes. The slopes behind the powerhouse have been analyzed in three zones for stability analysis by providing a break in the continuity of the cut slopes, which provides substantial relief for slope stabilization measures. Pseudo-static analysis has been carried out for the stabilization of wedges. The results indicate that many large wedges form with a factor of safety less than 1. The stability measures (support system, bench width, slopes) have been planned so that no wedge failure may occur in the future.
Keywords: cut slopes, geotechnical investigations, Himalayan geology, surface powerhouse, wedge failure
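As a rough illustration of the pseudo-static screening described in this abstract, the sketch below computes a factor of safety for a block sliding on a single plane under a horizontal seismic coefficient. This is a simplified planar analogue, not the full 3D wedge analysis SWEDGE performs, and all numeric inputs (weight, dip, cohesion, friction angle, seismic coefficient) are hypothetical.

```python
import math

def pseudo_static_fos(weight_kN, dip_deg, cohesion_kPa, area_m2,
                      friction_deg, k_h=0.0):
    """Factor of safety of a block sliding on one plane, with a
    horizontal pseudo-static seismic coefficient k_h (planar
    simplification of a 3D wedge analysis)."""
    dip = math.radians(dip_deg)
    phi = math.radians(friction_deg)
    W = weight_kN
    # Driving force: downslope weight component plus seismic inertia
    driving = W * math.sin(dip) + k_h * W * math.cos(dip)
    # Effective normal force is reduced by the seismic component
    normal = W * math.cos(dip) - k_h * W * math.sin(dip)
    resisting = cohesion_kPa * area_m2 + normal * math.tan(phi)
    return resisting / driving

static = pseudo_static_fos(5000, 55, 25, 40, 35, k_h=0.0)
seismic = pseudo_static_fos(5000, 55, 25, 40, 35, k_h=0.15)
print(round(static, 2), round(seismic, 2))
```

For these assumed values the block is already below a factor of safety of 1 statically, and the pseudo-static load lowers it further, which is the situation that motivates the support system, bench width, and slope-angle measures discussed above.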
Procedia PDF Downloads 116
627 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes while considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The value of implementing a standard set of engineering facilities planning and design techniques has already been proven in both manufacturing and service industries, with significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method yields a 42.2% improvement in walking distance for normal patients and a 47.6% improvement in walking distance for critical patients, at minimum relocation cost. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
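The core of the 0-1 formulation described above is binary assignment of units to locations with deviation variables measured against goals. The toy sketch below (all flows, distances, and the goal value are invented) enumerates layouts and minimizes the positive deviation of total flow-weighted walking distance above a goal; a solver such as CPLEX does this implicitly at realistic problem sizes.

```python
from itertools import permutations

# Hypothetical data: flow[i][j] = patient trips between units i and j;
# dist[a][b] = walking distance between candidate locations a and b.
flow = [[0, 8, 3], [8, 0, 5], [3, 5, 0]]
dist = [[0, 10, 25], [10, 0, 12], [25, 12, 0]]
goal = 200  # target total weighted walking distance (assumed)

def total_distance(assign):
    """assign[i] = location of unit i; cost = sum of flow * distance."""
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(3) for j in range(3) if i < j)

# 0-1 goal programming here collapses to minimizing the positive
# deviation d+ above the goal over all binary assignments.
best = min(permutations(range(3)),
           key=lambda a: max(0, total_distance(a) - goal))
print(best, total_distance(best))
```

A multi-goal version would add one deviation term per objective (e.g. normal-patient distance, critical-patient distance, relocation cost) and minimize their weighted sum, mirroring the paper's simultaneous goals.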
Procedia PDF Downloads 292
626 Myomectomy and Blood Loss: A Quality Improvement Project
Authors: Ena Arora, Rong Fan, Aleksandr Fuks, Kolawole Felix Akinnawonu
Abstract:
Introduction: Leiomyomas are benign tumors derived from the overgrowth of uterine smooth muscle cells. For women with symptomatic leiomyomas who desire future fertility, myomectomy is the standard surgical treatment. Perioperative hemorrhage is a common complication of myomectomy. We performed this study to investigate the blood transfusion rate in abdominal myomectomies, the risk factors influencing blood loss, and modalities to reduce perioperative blood loss. Methods: A retrospective chart review was done for patients who underwent myomectomy from 2016 to 2022 at Queens Hospital Center, New York. We looked at preoperative patient demographics, clinical characteristics, intraoperative variables, and postoperative outcomes. The Mann-Whitney U test was used for non-parametric continuous-variable comparisons. Results: A total of 159 myomectomies were performed between 2016 and 2022, including 1 laparoscopic, 65 vaginal, and 93 abdominal. 44 patients received a blood transfusion during or within 72 hours of abdominal myomectomy, a transfusion rate of 47.3%. This is more than twice the average rate documented in the literature, which is 20%. Risk factors identified were black race, preoperative hematocrit < 30%, preoperative blood transfusion within 72 hours, large fibroid burden, prolonged surgical time, and abdominal approach. Conclusion: Preoperative optimization with iron supplements or GnRH agonists is important for patients undergoing myomectomy. Interventions to decrease intraoperative blood loss should include cell saver, tourniquet, vasopressin, misoprostol, tranexamic acid, and gelatin-thrombin matrix hemostatic sealant.
Keywords: myomectomy, perioperative blood loss, cell saver, tranexamic acid
Procedia PDF Downloads 83
625 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model; this way, we can compare performance on unseen data, in terms of generalization, across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer yields a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
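To make concrete why "different loss functions lead to different errors on the same evaluation", the sketch below computes two of the losses named in the abstract (cross-entropy and hinge loss) on the same small batch of hypothetical fingerprint scores. The scores and labels are invented for illustration; in the paper these losses drive full CNN training.

```python
import math

def cross_entropy(p_live, labels):
    """Mean binary cross-entropy; label 1 = live finger, 0 = spoof."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(p_live, labels):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

def hinge(scores, labels):
    """Mean hinge loss on raw scores, with labels mapped to {-1, +1}."""
    return sum(max(0.0, 1.0 - (2 * y - 1) * s)
               for s, y in zip(scores, labels)) / len(labels)

# Hypothetical network outputs for four probes (two live, two spoof)
probs = [0.9, 0.6, 0.3, 0.1]     # sigmoid probabilities of "live"
scores = [2.0, 0.4, -0.6, -1.8]  # pre-sigmoid margins
labels = [1, 1, 0, 0]

print(round(cross_entropy(probs, labels), 3),
      round(hinge(scores, labels), 3))
```

Note how the hinge loss assigns zero penalty to confidently correct probes while cross-entropy still penalizes them slightly; gradients through such differently shaped losses push the same network toward different decision boundaries.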
Procedia PDF Downloads 136
624 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps
Authors: Arkadiusz Zurek
Abstract:
The potential of using drones is visible in many areas of logistics, especially for monitoring and controlling many processes. The technologies implemented in the last decade open new possibilities for companies that until now had not even considered them, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool for Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a method for measuring, by drone, the weight of goods in a selected link of the clothing supply chain. The purpose of this article is to analyze the causes of errors in traditional measurements and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines to eliminate the causes of these events in the drone-based measurement process. In a real environment, work was carried out to determine the volume and weight of textiles, including, among others, weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning, and a photogrammetric flight using an unmanned aerial vehicle. From the analysis of the measurement data obtained at the facility, the volume and weight of the assortment and the accuracy of their determination were established.
This article presents how such heaps are currently measured and what adverse events occur; it indicates and describes photogrammetric measurements of this type so far performed by drones outdoors, e.g., for the inventory of wind farms or of construction sites, and compares them with the measurement of the aforementioned textile heap inside a large-format facility.
Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0
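The volume-to-weight step described above (weighing a sample to obtain an average density, then applying it to the photogrammetric volume) reduces to a one-line calculation. The sketch below uses invented sample and heap values purely for illustration.

```python
def heap_weight_kg(heap_volume_m3, sample_mass_kg, sample_volume_m3):
    """Estimate heap weight from a photogrammetrically measured
    volume and the average density of a weighed textile sample."""
    density = sample_mass_kg / sample_volume_m3  # kg/m^3
    return heap_volume_m3 * density

# Hypothetical values: 120 m^3 heap, 4.5 kg sample occupying 0.03 m^3
print(heap_weight_kg(120.0, 4.5, 0.03))
```

Because the estimate multiplies two measured quantities, its relative error is roughly the sum of the relative errors of the volume and of the sample density, which is why the article examines error sources in both the photogrammetric and the weighing steps.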
Procedia PDF Downloads 84
623 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis
Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos
Abstract:
Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and of its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo-planar imaging sequences for the analysis of volumetric, tractography and resting-state functional data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC.
The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in cortico-cerebellar WM tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and of the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. An extension and future opportunity to leverage multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify and predict patients' disease course.
Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis
Procedia PDF Downloads 139
622 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor
Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti
Abstract:
In a product industrialization perspective, the end-product shall always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders can identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) in a perspective of system industrialization, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on displaying the needs of the customer as closely as possible. Firstly, it provides a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities. Secondly, the model simulates a vast number of possibilities across a wide range of components, becoming a dynamic tool for powerful analysis and optimization. Thus, the model is no longer only a technical tool for the engineers, but a way to maintain and solidify communication between departments using different views of the model.
The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through the illustration of a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.
Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking
Procedia PDF Downloads 158
621 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview
Authors: Andres Diaz Garcia
Abstract:
The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages spanning different disciplines, related to microbiological, engineering, pharmaceutical-chemistry, legal, and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scaling of the production process of the active ingredient that will be used in downstream unit operations. However, all the aspects mentioned must take into account many biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia) that led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and the challenges that were overcome with expertise in the area of engineering. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1E09 CFU (colony-forming units)/ml in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (bioreactor H/D ratio) and on operational criteria based on a minimum dissolved-oxygen concentration, taking into account the differences in process-control capacity between the laboratory and pilot scales.
Currently, the bioproduct obtained through this technological process is in the registration stage in Colombia for cape gooseberry fruits for export.
Keywords: biochemical engineering, liquid fermentation, plant growth promoting, scale-up process
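The geometric scale-up criterion mentioned above (preserving the bioreactor H/D ratio) can be sketched as follows. For a cylindrical vessel, V = (π/4)·D²·H with H = r·D gives D = (4V/(πr))^(1/3). The lab and pilot working volumes below are assumed for illustration, not taken from the study.

```python
import math

def scaled_dimensions(volume_m3, h_over_d):
    """Diameter and height of a cylindrical bioreactor of a given
    working volume that preserves the lab-scale H/D ratio."""
    d = (4.0 * volume_m3 / (math.pi * h_over_d)) ** (1.0 / 3.0)
    return d, h_over_d * d

lab_d, lab_h = scaled_dimensions(0.005, 2.0)    # hypothetical 5 L vessel
pilot_d, pilot_h = scaled_dimensions(0.5, 2.0)  # hypothetical 500 L vessel
print(round(pilot_d / lab_d, 2))
```

Geometric similarity alone does not guarantee equal oxygen transfer at the larger scale, which is why the study pairs it with an operational criterion on minimum dissolved-oxygen concentration.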
Procedia PDF Downloads 111
620 Biodiesel Production from Yellow Oleander Seed Oil
Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa
Abstract:
Energy is essential and plays an important role in the overall development of a nation; the global economy literally runs on energy. The use of fossil fuels as an energy source is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment; renewable and carbon-neutral biodiesel is necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats is not yet able to replace fossil fuel; fossil fuels remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: To produce biodiesel from yellow oleander seed oil and to test the yield of biodiesel using different catalysts (KOH and NaOH). Methodology: Oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and an oil expeller (bulk). The free fatty acid (FFA) content of the oil is checked, and depending on the FFA value, either a two-step or a single-step process is followed to produce biodiesel. The two-step process includes esterification and transesterification; the single-step process includes only transesterification. The properties of the biodiesel are checked, and an engine test is done on the biodiesel produced. Result: Biodiesel quality parameters were measured for the biodiesel from the seed oil of Thevetia peruviana produced using KOH and NaOH, respectively: yield (85% and 90%), flash point (171 °C and 176 °C), fire point (195 °C and 198 °C), and viscosity (4.9991 and 5.21 mm²/s). Thus, the seed oil of Thevetia peruviana is a viable feedstock for good-quality fuel. The outcomes of our project are a substitute for conventional fuel, a reduced petro-diesel requirement, and improved performance in terms of emissions. Future prospects: optimization of biodiesel production using the response surface method.
Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources
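The FFA-based choice between the single-step and two-step routes described in the methodology can be expressed as a simple rule. The 2% threshold used below is a commonly cited rule of thumb for base-catalyzed transesterification, not a value stated in this abstract.

```python
def biodiesel_route(ffa_percent, threshold=2.0):
    """Select the process route from the oil's free fatty acid (FFA)
    content: high-FFA oils get an acid-catalyzed esterification step
    before the base-catalyzed transesterification. The 2% threshold
    is an assumed rule of thumb, not taken from this study."""
    if ffa_percent > threshold:
        return ["esterification", "transesterification"]
    return ["transesterification"]

print(biodiesel_route(1.2), biodiesel_route(6.5))
```

High-FFA feedstocks are pre-esterified because free fatty acids would otherwise react with the KOH or NaOH catalyst to form soap, lowering yield and complicating separation.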
Procedia PDF Downloads 444
619 FWGE Production From Wheat Germ Using Co-culture of Saccharomyces cerevisiae and Lactobacillus plantarum
Authors: Valiollah Babaeipour, Mahdi Rahaie
Abstract:
Food supplements are rich in specific nutrients and bioactive compounds that eliminate free radicals and improve cellular metabolism; the major bioactive compounds are found in bran and cereal sprouts. Secondary metabolites of these microorganisms have antioxidant properties that can be used alone or in combination with chemotherapy and radiation therapy to treat cancer. Biologically active compounds such as benzoquinone derivatives extracted from fermented wheat germ extract (FWGE) have several positive effects on the overall state of human health and strengthen the immune system. The present work describes the batch fermentation of raw wheat germ for FWGE production through a co-culture process using the probiotic strains Saccharomyces cerevisiae and Lactobacillus plantarum, and the possibility of using the solid waste. To increase production efficiency, the important factors of each fermentation process were first selected and optimized using a factorial statistical design: agitation rate (120 to 200 rpm), solids-to-solvent dilution (1 to 8-12), fermentation time (16 to 24 hours), and strain-to-wheat-germ ratio (20% to 50%); co-culture was then performed to increase the yield of 2,6-dimethoxybenzoquinone (2,6-DMBQ). Since 2,6-dimethoxybenzoquinone is the main biologically active compound in fermented wheat germ extract, UV-Vis analysis was performed to confirm its presence in the final product. In addition, the 2,6-DMBQ of some products was isolated on a non-polar C-18 column and quantified using high-performance liquid chromatography (HPLC).
Based on our findings, it can be concluded that the 2,6-dimethoxybenzoquinone content in the co-culture of Saccharomyces cerevisiae and Lactobacillus plantarum increased by 28.9% compared to the pure culture of Saccharomyces cerevisiae, from 1.89 mg/g to 2.66 mg/g.
Keywords: wheat germ, FWGE, Saccharomyces cerevisiae, Lactobacillus plantarum, co-culture, 2,6-DMBQ
Procedia PDF Downloads 128
618 Wound Healing Process Studied on DC Non-Homogeneous Electric Fields
Authors: Marisa Rio, Sharanya Bola, Richard H. W. Funk, Gerald Gerlach
Abstract:
Cell migration, wound healing, and regeneration are some of the physiological phenomena in which electric fields (EFs) have proven to have an important function. Physiologically, cells experience electrical signals in the form of transmembrane potentials, ion fluxes through protein channels, and electric fields at their surface. As soon as a wound is created, the disruption of the epithelial layers generates an electric field of ca. 40-200 mV/mm, directing cell migration towards the wound site and starting the healing process. In vitro electrotaxis experiments have shown that cells respond to DC EFs by polarizing and migrating towards one of the poles (cathode or anode). A standard electrotaxis experiment consists of an electrotaxis chamber where cells are cultured, a DC power source, and agar salt bridges that delay toxic products from the electrodes reaching the cell surface. The electric field strengths used in such experiments are uniform and homogeneous. In contrast, the endogenous electric field around a wound tends to be multi-field and non-homogeneous. In this study, we present a custom device that enables electrotaxis experiments in non-homogeneous DC electric fields. Its main feature is the replacement of conventional metallic electrodes, separated from the electrotaxis channel by agarose gel bridges, with electrolyte-filled microchannels. The connection to the DC source is made by Ag/AgCl electrodes, encased in agarose gel and placed at the end of each microfluidic channel. An SU-8 membrane closes the fluidic channels and simultaneously serves as the single connection from each of them to the central electrotaxis chamber. The electric field distribution and current density were numerically simulated with the steady-state electric conduction module of ANSYS 16.0. Simulation data confirm the application of non-homogeneous EFs of physiological strength.
To validate the biocompatibility of the device, the cellular viability of the photoreceptor-derived 661W cell line was assessed. The cells did not show any signs of apoptosis, damage, or detachment during stimulation. Furthermore, immunofluorescence staining, namely vinculin and actin labelling, allowed the assessment of adhesion efficiency and of the orientation of the cytoskeleton, respectively. Cellular motility in the presence and absence of applied DC EFs was verified. The movement of individual cells was tracked for the duration of the experiments, confirming the EF-induced, cathodally directed motility of the studied cell line. The in vitro monolayer wound assay, or "scratch assay", is a standard protocol to quantitatively assess cell migration in vitro. It encompasses the growth of a confluent cell monolayer followed by the mechanical creation of a scratch representing a wound. Wound dynamics were then monitored over time and compared between control and applied-field conditions to quantify cell population motility.
Keywords: DC non-homogeneous electric fields, electrotaxis, microfluidic biochip, wound healing
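A common readout of the scratch assay mentioned above is the percentage of the initial wound area that has closed at each time point. The sketch below uses invented area measurements; actual values would come from segmenting the time-lapse images.

```python
def closure_percent(initial_area, area_now):
    """Percentage of the scratch that has closed relative to t = 0."""
    return 100.0 * (initial_area - area_now) / initial_area

# Hypothetical wound areas (in px^2) segmented at 0, 6, 12, and 18 h
areas = [12000.0, 9600.0, 7200.0, 4200.0]
closure = [round(closure_percent(areas[0], a), 1) for a in areas]
print(closure)
```

Comparing such closure curves between control and applied-field conditions is what turns the qualitative wound images into a quantitative measure of population motility.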
Procedia PDF Downloads 269
617 Enhanced Photocatalytic H₂ Production from H₂S on Metal-Modified CdS-ZnS Semiconductors
Authors: Maali-Amel Mersel, Lajos Fodor, Otto Horvath
Abstract:
Photocatalytic H₂ production by H₂S decomposition is regarded as an environmentally friendly process to produce carbon-free energy through direct solar energy conversion. For this purpose, sulphide-based photocatalyst materials are widely used due to their excellent solar spectrum response and high photocatalytic activity. Loading proper co-catalysts, based on cheap and earth-abundant materials, onto those semiconductors has been shown to play an important role in improving their efficiency. In this research, a CdS-ZnS composite was studied because of its controllable band gap and excellent performance for H₂ evolution under visible-light irradiation. The effects of modifying this photocatalyst with different types of materials and the influence of the preparation parameters on its H₂ production activity were investigated. A CdS-ZnS composite with enhanced photocatalytic activity for H₂ production was synthesized from ammine complexes. Two types of modification were used: compounds of Ni-group metals (NiS, PdS, and Pt) were applied as co-catalysts on the surface of the CdS-ZnS semiconductor, while NiS, MnS, CoS, Ag₂S, and CuS were used as dopants in the bulk of the catalyst. It was found that 0.1% of noble metals did not remarkably influence the photocatalytic activity, while modification with 0.5% NiS was shown to be more efficient in the bulk than on the surface. Modification with the other types of metals resulted in a decrease in the rate of H₂ production, while co-doping seems more promising. The preparation parameters (such as the amount of ammonia used to form the ammine complexes and the order of the preparation steps together with the hydrothermal treatment) were also found to strongly influence the rate of H₂ production. SEM, EDS, and DRS analyses were made to reveal the structure of the most efficient photocatalysts.
Moreover, the detection of conduction-band electrons on the surface of the catalyst was also investigated. The excellent photoactivity of the CdS-ZnS catalysts, with and without modification, encourages further investigation into enhancing hydrogen generation by optimization of the reaction conditions.
Keywords: H₂S, photoactivity, photocatalytic H₂ production, CdS-ZnS
Procedia PDF Downloads 127
616 Generative Design Method for Cooled Additively Manufactured Gas Turbine Parts
Authors: Thomas Wimmer, Bernhard Weigand
Abstract:
The improvement of gas turbine efficiency is one of the main drivers of research and development in the gas turbine market. It has led to turbine inlet temperatures beyond the melting point of the utilized materials, so turbine parts need to be actively cooled in order to withstand these harsh environments. However, the use of compressor air as coolant decreases the overall gas turbine efficiency; coolant consumption therefore needs to be minimized in order to gain the maximum advantage from higher turbine inlet temperatures. Sophisticated cooling designs for gas turbine parts consequently aim to minimize coolant mass flow. New design space becomes accessible as additive manufacturing matures to industrial usage for the creation of hot-gas-path parts, and this technology allows more efficient cooling schemes to be manufactured. In order to find such cooling schemes, a generative design method is being developed. It generates cooling schemes randomly, subject to a set of rules that ensure the sanity of each design. A huge number of different cooling schemes are generated and implemented in a simulation environment where they are validated. The fitness criteria for the cooling schemes are coolant mass flow, maximum temperature, and temperature gradients. In this way the whole design space is sampled and a Pareto-optimal front can be identified. The approach is applied to a flat plate, which resembles a simplified section of a hot-gas-path part. Realistic boundary conditions are applied, and thermal barrier coating is accounted for in the simulation environment. The resulting cooling schemes are presented and compared to representative conventional cooling schemes. Further development of this method can give access to cooling schemes with even better performance and higher complexity, making full use of the available design space.
Keywords: additive manufacturing, cooling, gas turbine, heat transfer, heat transfer design, optimization
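The "generate randomly, evaluate, keep the non-dominated designs" loop described above can be sketched in a few lines. Here each design is reduced to an invented objective triple (coolant mass flow, maximum temperature, maximum temperature gradient); in the actual method these come from the simulation environment.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all three objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(designs):
    """Keep only designs that no other design dominates."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical evaluated cooling schemes:
# (coolant mass flow, max temperature in K, max temperature gradient)
random.seed(1)
designs = [(random.uniform(0.1, 1.0), random.uniform(900.0, 1300.0),
            random.uniform(10.0, 80.0)) for _ in range(50)]
front = pareto_front(designs)
print(len(front), "non-dominated designs out of", len(designs))
```

The designer then picks from the front according to the trade-off that matters for the part, e.g. accepting a slightly higher metal temperature for a large coolant saving.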
Procedia PDF Downloads 350
615 A Discrete Event Simulation Model for Airport Runway Operations Optimization (Case Study)
Authors: Awad Khireldin, Colin Law
Abstract:
Runways are the major infrastructure of airports around the world, and efficient runway operations are key to ensuring that airports run smoothly with minimal delays. Many factors affect the efficiency of runway operations, such as aircraft wake separation, runway system configuration, the fleet mix, and the separation distance between runways. This paper addresses how to maximize runway throughput using a discrete event simulation model. A case study of Cairo International Airport (CIA) is developed to maximize the utilization of its three parallel runways. Different scenarios were designed in which each runway could be assigned to arrivals, departures, or mixed operations. A benchmarking study was also included to compare the actual results with the proposed ones and spot potential improvements. The simulation model shows a significant difference in utilization and delays between the actual and proposed configurations. Several recommendations can be provided to airport management, in the short and long term, to increase efficiency and reduce delays: for example, upgrading the airport slot coordination from Level 1 to Level 2 in the short term and, in the long run, discussing the possibility of moving to International Air Transport Association (IATA) slot coordination Level 3 as more flights are expected to be handled by the airport. Technological advancements, such as approach radar within a full airside simulation model, could further improve airport performance, and the airport is recommended to review its standard operating procedures with the appropriate authorities.
Also, the airport can adopt a future operational plan to accommodate the forecasted additional traffic density if a fourth terminal building is added to increase airport capacity.Keywords: airport performance, runway, discrete event simulation, capacity, airside
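The queuing logic behind such a runway model can be sketched in a few lines; the wake-separation times and traffic mix below are illustrative assumptions, not the parameters of the Cairo study:

```python
import random

# Minimal discrete-event sketch of single-runway operations (not the
# authors' CIA model): aircraft arrive at random times and queue FIFO;
# the runway is freed after a wake-separation interval that depends on
# the leader/follower weight classes (illustrative values, seconds).
SEPARATION_S = {("H", "H"): 90, ("H", "M"): 120, ("M", "H"): 60, ("M", "M"): 75}

def simulate_runway(n_aircraft, mean_interarrival_s, seed=0):
    """Return the average queueing delay (s) for a FIFO single-runway queue."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_aircraft):
        t += rng.expovariate(1.0 / mean_interarrival_s)
        arrivals.append((t, rng.choice(["H", "M"])))  # heavy or medium
    runway_free, prev_class, total_delay = 0.0, "M", 0.0
    for arr_time, wclass in arrivals:
        start = max(arr_time, runway_free)      # wait if runway is busy
        total_delay += start - arr_time         # queueing delay
        runway_free = start + SEPARATION_S[(prev_class, wclass)]
        prev_class = wclass
    return total_delay / n_aircraft

# Denser traffic should produce longer average delays.
print(simulate_runway(200, 60) > simulate_runway(200, 180))
```

Scenario comparisons (arrival-only vs. mixed runways, different fleet mixes) then reduce to running such a model with different separation tables and demand streams.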
Procedia PDF Downloads 127
614 Evaluation in Vitro and in Silico of Pleurotus ostreatus Capacity to Decrease the Amount of Low-Density Polyethylene Microplastics Present in Water Sample from the Middle Basin of the Magdalena River, Colombia
Authors: Loren S. Bernal, Catalina Castillo, Carel E. Carvajal, José F. Ibla
Abstract:
Plastic pollution, specifically microplastics, has become a significant issue in aquatic ecosystems worldwide. The large amount of plastic waste carried by water tributaries has resulted in the accumulation of microplastics in water bodies. The polymer aging process caused by environmental influences such as photodegradation and chemical degradation of additives leads to polymer embrittlement and property changes, which call for degradation or reduction procedures in rivers. However, such procedures for freshwater bodies, developed over extended periods, are lacking. The aim of this study is to evaluate the potential of the fungus Pleurotus ostreatus to reduce low-density polyethylene (LDPE) microplastics present in freshwater samples collected from the middle basin of the Magdalena River in Colombia. The study evaluates this process both in vitro and in silico, by identifying the growth capacity of Pleurotus ostreatus in the presence of microplastics and by identifying the most likely interactions of Pleurotus ostreatus enzymes and their affinity energies. The study follows an engineering development methodology applied on an experimental basis. The in vitro evaluation protocol focused on the growth capacity of Pleurotus ostreatus on microplastics using enzymatic inducers. For the in silico evaluation, molecular simulations were conducted using the Autodock 1.5.7 program to calculate interaction energies. Molecular dynamics were evaluated using the myPresto Portal and the GROMACS program to calculate radii of gyration and energies. The results showed that Pleurotus ostreatus has the potential to degrade low-density polyethylene microplastics. The in vitro evaluation revealed the adherence of Pleurotus ostreatus to LDPE using scanning electron microscopy. The best results were obtained with enzymatic inducers such as MnSO₄, which activate the laccase or manganese peroxidase enzymes involved in the degradation process.
The in silico modelling demonstrated that Pleurotus ostreatus is able to interact with the microplastics present in LDPE: molecular docking showed favorable affinity energies, and molecular dynamics showed a minimum energy and a representative radius of gyration for each enzyme with its substrate. The study contributes to the development of bioremediation processes for the removal of microplastics from freshwater sources using the fungus Pleurotus ostreatus. The in silico study provides insights into the affinity energies of the microplastic-degrading enzymes of Pleurotus ostreatus and their interaction with low-density polyethylene. The study demonstrated that Pleurotus ostreatus can interact with LDPE microplastics, making it a promising agent for bioremediation processes that aid in the recovery of freshwater sources. The results suggest that bioremediation could be a promising approach to reducing microplastics in freshwater systems.Keywords: bioremediation, in silico modelling, microplastics, Pleurotus ostreatus
Procedia PDF Downloads 113
613 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much supply chain activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead time variability from suppliers, transportation time variability, and disruptions at the job site. Current approaches to managing and controlling the CSC focus on one project at a time, with a push-based material ordering system based on the initial construction schedule, and then hold a tremendous amount of inventory. A two-stage methodology was proposed that couples feed-forward control, in the form of advance order placement with a supplier, with local feedback control, in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment and explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with objectives that minimize the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned from it, it is tested on a number of cases based on a real example of multiple construction projects in Kuwait.Keywords: construction supply chain, inventory control, supply chain, transshipment
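A toy version of the transshipment decision described above can be sketched as follows; the cost coefficients and stock levels are invented for illustration and stand in for the paper's full deterministic optimization model, not the Kuwait cases:

```python
# Minimal sketch (not the authors' full model): two projects placed
# advance orders, and realized demand differs. We choose how many units
# to transship from the over-stocked project to the short one so as to
# minimize total cost = holding + shortage + transshipment cost.
HOLD, SHORT, TRANS = 1.0, 10.0, 2.0   # illustrative unit costs

def best_transship(stock_a, demand_a, stock_b, demand_b):
    """Return (units moved A -> B, minimal total cost) by brute force."""
    surplus = max(stock_a - demand_a, 0)
    best = None
    for q in range(surplus + 1):
        left_a = stock_a - demand_a - q
        left_b = stock_b + q - demand_b
        cost = (HOLD * max(left_a, 0) + SHORT * -min(left_a, 0)
                + HOLD * max(left_b, 0) + SHORT * -min(left_b, 0)
                + TRANS * q)
        if best is None or cost < best[1]:
            best = (q, cost)
    return best

# Project A over-ordered by 5 units, project B is 3 units short:
# moving exactly 3 units minimizes cost.
print(best_transship(stock_a=15, demand_a=10, stock_b=7, demand_b=10))
```

Because shortage costs dominate holding and transshipment costs here, the optimum moves just enough stock to cover project B's deficit; the paper's models make the same trade-off across many products and periods.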
Procedia PDF Downloads 121
612 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis
Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci
Abstract:
Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels: renewable energy sources coupled with hydrogen as an energy vector, and carbon capture and conversion technologies. Among the technologies investigated in recent decades, biomass gasification has acquired great interest owing to the possibility of low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments have been studied in order to maximize hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels of the downstream technologies, and capture and convert carbon dioxide. However, studies that analyse a whole process combining all of these technologies are still missing. To fill this gap, the present paper investigates the combination of hydrothermal carbonization (HTC), sorption enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion in a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste, modelled by means of the Aspen Plus software. The proposed model aims to identify and optimise the performance of the plant by varying operating parameters (such as temperature, CaO/biomass ratio, separation efficiency, etc.). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission for hydrogen to be considered "clean", which is set at 3 kg CO₂/kg H₂. The hydrogen yield of the whole plant is 250 gH₂/kgBIOMASS.Keywords: biomass gasification, hydrogen, Aspen Plus, sorption enhanced gasification
Procedia PDF Downloads 76
611 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
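Quantile forecasts such as those produced by Fast Forest Quantile regression are typically scored with the pinball (quantile) loss; a minimal sketch, with made-up hourly load values rather than the paper's data, is:

```python
def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss for a q-quantile forecast:
    under-prediction is penalized by q, over-prediction by (1 - q)."""
    total = 0.0
    for y, f in zip(y_true, y_pred):
        diff = y - f
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)

actual = [100.0, 120.0, 90.0]   # hourly load (kW), illustrative values
p90 = [115.0, 130.0, 110.0]     # a hypothetical 0.9-quantile forecast

# A 0.9-quantile forecast sits above most actuals, so scoring it at
# q=0.9 gives a low loss, while scoring the same series at q=0.1 does not.
print(round(pinball_loss(actual, p90, 0.9), 3))   # 1.5
```

The asymmetry of the loss is what lets a quantile model learn an upper (or lower) bound on consumption rather than its mean.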
Procedia PDF Downloads 295
610 Research on Localized Operations of Multinational Companies in China
Authors: Zheng Ruoyuan
Abstract:
With the rapid development of economic globalization and increasingly fierce international competition, multinational companies have shifted and innovated their investment strategies and actively promoted localization, which has become the main trend in the development of multinational companies. The large-scale entry of multinational companies into China has a history of more than 20 years. With the sustained and steady growth of China's economy and the optimization of the investment environment, multinational companies' investment in China has expanded rapidly, which has also had an important impact on the Chinese economy: promoting employment, increasing foreign exchange reserves, improving institutions, and bringing in a great deal of high technology and advanced management experience; but it has also brought challenges and survival pressure to China's local enterprises. In recent years, multinational companies have gradually come to regard China as an important part of their global strategies and have actively promoted localization strategies there, covering production, marketing, research and development, and more. Many multinational companies have achieved good results in localized operations in China: not only have their profits continued to improve, but they have also established a good corporate image and brand in China, which has greatly improved their competitiveness in the international market. However, some multinational companies have had difficulties in localized operations in China. Against the background of economic globalization, this article comprehensively applies multinational company theory, strategic management theory, and business management theory, takes data and facts as its entry point, and combines analysis of representative typical cases to conduct a systematic study of the localized operations of multinational companies in China.
At the same time, for each specific link in the operation of multinational companies, we provide multinational enterprises with some insights and references.Keywords: localization, business management, multinational, marketing
Procedia PDF Downloads 48
609 Optimization of Beneficiation Process for Upgrading Low Grade Egyptian Kaolin
Authors: Nagui A. Abdel-Khalek, Khaled A. Selim, Ahmed Hamdy
Abstract:
Kaolin is a naturally occurring ore predominantly containing the mineral kaolinite in addition to some gangue minerals. Typical impurities present in kaolin ore are quartz, iron oxides, titanoferrous minerals, mica, feldspar, organic matter, etc. The main coloring impurity, particularly in the ultrafine size range, is titanoferrous minerals. Kaolin is used in many industrial applications such as sanitary ware, tableware, ceramics, and the paint and paper industries, each of which has certain specifications. For most industrial applications, kaolin should be processed to obtain refined clay that meets standard specifications. For example, kaolin used in the paper and paint industries needs to have high brightness and low yellowness. Egyptian kaolin is not subjected to any beneficiation process; the Egyptian companies apply selective mining followed by, in some localities, crushing and size reduction only. Such low-quality kaolin can be used in refractory and pottery production but not in the whiteware and paper industries. This paper studies the amenability to beneficiation of an Egyptian kaolin ore from the El-Teih locality, Sinai, to make it suitable for different industrial applications. Attrition scrubbing and classification followed by magnetic separation are applied to remove the associated impurities. Attrition scrubbing and classification are used to separate the coarse silica and feldspars. Wet high-intensity magnetic separation was applied to remove colored contaminants such as iron oxide and titanium oxide. Different variables affecting the magnetic separation process, such as solids percentage, magnetic field, matrix loading capacity, and retention time, are studied.
The results indicated that a substantial decrease in iron oxide (from 1.69% to 0.61%) and TiO₂ (from 3.1% to 0.83%) contents, as well as an improvement in iso-brightness (from 63.76% to 75.21%) and whiteness (from 79.85% to 86.72%) of the product, can be achieved.Keywords: kaolin, titanoferrous minerals, beneficiation, magnetic separation, attrition scrubbing, classification
Procedia PDF Downloads 358
608 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behavior of fireflies; in this context it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation has been performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction of the computational costs.Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
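The EM estimation and Bayes-rule pixel assignment described above can be sketched for the one-dimensional, two-component case; here the starting means are simply hand-picked stand-ins for the Firefly-derived initialization, and the pixel values are toy data:

```python
import math

# Minimal 1-D EM sketch for a two-component Gaussian mixture over gray
# levels (illustrative; the paper seeds the means with the Firefly
# Algorithm, here they are chosen by hand).
def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(pixels, mu, iters=50):
    w, var = [0.5, 0.5], [100.0, 100.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel (Bayes' rule)
        resp = []
        for x in pixels:
            p = [w[k] * gaussian(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(pixels)
            mu[k] = sum(r[k] * x for r, x in zip(resp, pixels)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, pixels)) / nk, 1e-6)
    # Pixel assignment: maximum posterior responsibility per gray level
    labels = [0 if r[0] >= r[1] else 1 for r in resp]
    return mu, labels

# Two well-separated gray-level clusters around 50 and 200.
pixels = [48, 50, 52, 55, 45, 198, 200, 202, 205, 195]
mu, labels = em_two_gaussians(pixels, mu=[60.0, 180.0])
print(labels)   # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

On a real image the same assignment rule runs once per histogram bin rather than per pixel, which is the computational saving the abstract points to.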
Procedia PDF Downloads 215
607 Chromatographic Preparation and Performance on Zinc Ion Imprinted Monolithic Column and Its Adsorption Property
Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong
Abstract:
The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions of ions and functional monomers and using the ions as the template; it offers a high level of recognition for the ionic template. To prepare a monolithic column by in-situ polymerization, the template, functional monomers, cross-linking agent, and initiator are dissolved together, and the solution is injected into the column tube, where the mixture polymerizes at a certain temperature; after the synthesis, the unreacted template and solvent are washed out. Monolithic columns are easy to prepare, low in consumption, and cost-effective, with fast mass transfer; besides, they offer many chemical functionalities. But monolithic columns have some problems in practical application, such as low efficiency, inaccurate quantitative analysis because the peaks are wide and show tailing, a limited choice of polymerization systems, and a lack of theoretical foundations. Thus the optimization of components and preparation methods is an important research direction. During the preparation of ionic imprinted monolithic columns, the pore-forming agent makes the polymer develop its porous structure, which influences the physical properties of the polymer; moreover, it directly determines the stability and selectivity of the polymerization reaction. The compounds generated in the pre-polymerization reaction directly determine the identification and screening capabilities of the imprinted polymer; thus the choice of pore-forming agent is quite critical in the preparation of imprinted monolithic columns.
This article mainly focuses on the impact of different pore-forming agents on the zinc ion enrichment performance of the zinc ion imprinted monolithic column.Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent
Procedia PDF Downloads 212
606 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations
Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili
Abstract:
Reinforced concrete shear walls and vertical plate-like elements play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls featuring different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced responses: force-based and displacement-based EDPs. Furthermore, as the shear wall-to-frame ratio increases, there is a concurrent increase in force-based EDPs and a decrease in displacement-based ones. Examining the distribution of shear walls from both force and displacement perspectives, model G with the highest stiffness ratio, concentrating stiffness at the building's center, intensifies induced forces. This configuration necessitates additional reinforcements, leading to a conservative design approach. Conversely, model C, with the lowest stiffness ratio, distributes stiffness towards the periphery, resulting in minimized induced shear forces and bending moments, representing an optimal scenario with maximal performance and minimal strength requirements.Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance
Procedia PDF Downloads 55
605 Developing Urban Design and Planning Approach to Enhance the Efficiency of Infrastructure and Public Transportation in Order to Reduce GHG Emissions
Authors: A. Rostampouryasouri, A. Maghoul, S. Tahersima
Abstract:
The rapid growth of urbanization and the subsequent increase in urban populations have resulted in environmental degradation as cities cater to the needs of their citizens. The industrialization of urban life has led to the production of pollutants, which has significantly contributed to the rise of air pollution. Infrastructure can have both positive and negative effects on air pollution, and these effects are complex, depending on factors such as the type of infrastructure, location, and context. This study examines the effects of infrastructure on air pollution, drawing on a range of empirical evidence from Iran and China. Our analysis focuses on the following concepts: (1) urban design and planning principles and practices; (2) infrastructure efficiency and optimization strategies; (3) public transportation systems and their environmental impact; (4) GHG emission reduction strategies in urban areas; and (5) case studies and best practices in sustainable urban development. This paper employs a mixed-methods approach with both developmental and applicative purposes; combining quantitative and qualitative research methods provides a more comprehensive understanding of the research topic. A group of 20 architectural specialists and experts proficient in the research, design, and implementation of green architecture projects were interviewed in a systematic and purposeful manner. The research method was based on content analysis using the MAXQDA 2020 software. The findings suggest that policymakers and urban planners should consider the potential impacts of infrastructure on air pollution and take measures to mitigate negative effects while maximizing positive ones.
This includes adopting a nature-based approach to urban planning and infrastructure development, investing in information infrastructure, and promoting modern logistics transport infrastructure.Keywords: GHG emissions, infrastructure efficiency, urban development, urban design
Procedia PDF Downloads 75
604 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa
Authors: Ana Belon Hercilla
Abstract:
The study was carried out at the company "Reactivos Jeans" in the city of Arequipa, whose main business is the laundering of garments at an industrial level. In 2012 the company initiated actions to provide a dry-cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. The company also took the initiative to use a new green-washing technology that had not yet been deployed in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the methodology used was Systematic Layout Planning, dividing this study into four stages. The first stage is information gathering and evaluation of the initial situation of the company, for which a description of the areas, facilities, and initial equipment, the plant layout, the production process, and the flows of the major operations was made. The second stage is the application of engineering techniques for recording and analyzing the procedures, such as the Flow Diagram, the Route Diagram, the DOP (process flowchart), and the DAP (analysis diagram). Then the planning of the general layout is carried out. At this stage, proximity factors between the areas are established, and the Activity Relationship Table (TRA) is developed, along with the Activity Relationship Diagram (DRA). In order to obtain the General Grouping Diagram (DGC), further information is added from a time study, and the Guerchet method is used to calculate the space requirements for each area. Finally, the redesigned plant layout is presented and the improvement implemented, yielding a model much more efficient than the initial design.
The results indicate that the implementation of the new machinery, the adaptation of the plant facilities, and the relocation of equipment resulted in a reduction of the production cycle time by 75.67%; routes were reduced by 68.88%; the number of activities during the process was reduced by 40%; and waits and storage were eliminated entirely (100%).Keywords: redesign, time optimization, industrial laundry, greenwashing
Procedia PDF Downloads 393
603 Predictors of Response to Interferon Therapy in Chronic Hepatitis C Virus Infection
Authors: Ali Kassem, Ehab Fawzy, Mahmoud Sef el-eslam, Fatma Salah- Eldeen, El zahraa Mohamed
Abstract:
Introduction: The combination of interferon (INF) and ribavirin is the preferred treatment for chronic hepatitis C virus (HCV) infection. However, non-response to this therapy remains common and is associated with several factors, such as HCV genotype and HCV viral load, in addition to host factors such as sex, HLA type, and cytokine polymorphisms. Aim of the work: The aim of this study was to determine predictors of response to INF therapy in chronic HCV-infected patients treated with INF alpha and ribavirin combination therapy. Patients and Methods: The present study included 110 patients (62 males, 48 females) with chronic HCV infection. Their ages ranged from 20 to 59 years. Inclusion criteria followed the protocol of the Egyptian National Committee for the control of viral hepatitis. Patients included in this study were recruited to receive INF-ribavirin combination therapy; 54 patients received pegylated INF α-2a (180 μg) and weight-based ribavirin therapy (1000 mg if < 75 kg, 1200 mg if > 75 kg) for 48 weeks, and 53 patients received pegylated INF α-2b (1.5 μg/kg/week) and weight-based ribavirin therapy (800 mg if < 65 kg, 1000 mg if 65-75 kg, and 1200 mg if > 75 kg). One hundred and seven liver biopsies were included in the study and submitted for histopathological examination. Hematoxylin and eosin (H&E) stained sections were prepared to assess both the grade and the stage of chronic viral hepatitis, in addition to the degree of steatosis. Modified hepatic activity index (HAI) grading, modified Ishak staging, and the Metavir grading and staging systems were used. Laboratory follow-up included HCV PCR at the 12th week, to assess the early virologic response (EVR), and again at the 24th week. HCV PCR was also done at the end of the course and 6 months later to document the end-of-treatment virologic response (ETR) and the sustained virologic response (SVR), respectively.
Results: One hundred and seven patients, 62 males (57.9%) and 45 females (42.1%), completed the course and were included in this study. The age of the patients ranged from 20 to 59 years, with a mean of 40.39 ± 10.03 years. Six months after the end of treatment, patients were categorized into two groups. Group (1): patients who achieved a sustained virological response (SVR). Group (2): patients who did not achieve a sustained virological response (non-SVR), including non-responders, breakthroughs, and relapsers. In our study, 58 (54.2%) patients showed SVR, 18 (16.8%) patients were non-responders, 15 (14%) patients showed breakthrough, and 16 (15%) patients were relapsers. Univariate binary regression analysis of the possible risk factors for non-SVR showed that the significant factors were higher age, higher fasting insulin level, higher Metavir stage, and higher grade of hepatic steatosis. Multivariate binary regression analysis showed that the only independent risk factor for non-SVR was a high fasting insulin level. Conclusion: Younger age, lower Metavir stage, lower steatosis grade, and lower fasting insulin level are good predictors of SVR and could be used to predict the treatment response to pegylated interferon/ribavirin therapy.Keywords: chronic HCV infection, interferon ribavirin combination therapy, predictors of antiviral therapy, treatment response
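The kind of binary (logistic) regression used in such a risk-factor analysis can be sketched on made-up data; the insulin values and outcomes below are hypothetical, for illustration only, and are not the study's patients:

```python
import math

# Illustrative one-predictor logistic regression, of the kind used to
# test fasting insulin as a risk factor for non-SVR. Toy data, fitted
# by plain gradient descent.
def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit p(y=1|x) = sigmoid(w*x + b) and return (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x   # gradient of log-loss w.r.t. w
            gb += p - y         # gradient of log-loss w.r.t. b
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Standardized fasting insulin (hypothetical); 1 = non-SVR, 0 = SVR.
insulin = [-1.5, -1.0, -0.8, -0.2, 0.1, 0.4, 0.9, 1.2, 1.6, 2.0]
non_svr = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
w, b = fit_logistic(insulin, non_svr)
print(w > 0)   # positive coefficient: higher insulin, higher non-SVR risk
```

In a multivariate version, each candidate predictor (age, Metavir stage, steatosis grade, insulin) gets its own coefficient, and "independent risk factor" means the coefficient stays significant with the others included.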
Procedia PDF Downloads 393
602 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to instability. To solve this problem, energy storage devices can be used. Electrolytic hydrogen production is considered a promising option since it is a clean energy carrier (zero emissions). Flexible operation of an electrolyser (producing hydrogen during off-peak electricity periods and stopping at other times) could bring many benefits, such as reducing the cost of hydrogen and helping to balance the electricity system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost. These scenarios are: a standard tariff (one-tier system) throughout the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. The study looks at the city of Derna, which is located on the coast of the Mediterranean Sea (32° 46′ 0″ N, 22° 38′ 0″ E) and has high wind resource potential.
Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used, in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.Keywords: hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen
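A back-of-envelope comparison of the first two tariff scenarios can be sketched as follows; the tariff figures (12 p/kWh standard, 5 p/kWh off-peak) come from the abstract, while the electrolyser specific energy of 55 kWh per kg of hydrogen is our assumption, not a figure from the paper:

```python
# Back-of-envelope electricity cost per kg of hydrogen under the two
# tariff scenarios. Specific energy is an assumed, typical value for
# an alkaline/PEM electrolyser, not the paper's number.
KWH_PER_KG_H2 = 55.0

def electricity_cost_per_kg(tariff_p_per_kwh):
    """Electricity cost (pence) to produce 1 kg of hydrogen."""
    return KWH_PER_KG_H2 * tariff_p_per_kwh

standard = electricity_cost_per_kg(12.0)   # continuous operation
off_peak = electricity_cost_per_kg(5.0)    # flexible, off-peak only
print(standard, off_peak)        # 660.0 275.0 pence per kg
saving = 1 - off_peak / standard
print(round(saving, 3))          # 0.583, i.e. ~58% lower electricity cost
```

This is only the electricity component: flexible operation also needs larger electrolyser and storage capacity to meet the same daily demand, which is what the paper's optimization trades off against the tariff saving.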
Procedia PDF Downloads 228
601 In Silico Exploration of Quinazoline Derivatives as EGFR Inhibitors for Lung Cancer: A Multi-Modal Approach Integrating QSAR-3D, ADMET, Molecular Docking, and Molecular Dynamics Analyses
Authors: Mohamed Moussaoui
Abstract:
A series of thirty-one potential inhibitors of the epidermal growth factor receptor (EGFR) kinase, derived from quinazoline, underwent 3D-QSAR analysis using the CoMFA and CoMSIA methodologies. Training and test sets of quinazoline derivatives were used to construct and validate the QSAR models, respectively, with dataset alignment performed using the lowest-energy conformer of the most active compound. The best-performing CoMFA and CoMSIA models demonstrated impressive determination coefficients, with R² values of 0.981 and 0.978, respectively, and leave-one-out cross-validated determination coefficients, Q², of 0.645 and 0.729, respectively. Furthermore, external validation using a test set of five compounds yielded predicted determination coefficients, R² test, of 0.929 and 0.909 for CoMFA and CoMSIA, respectively. Building upon these promising results, eighteen new compounds were designed and assessed for drug-likeness and ADMET properties through in silico methods. Additionally, molecular docking studies were conducted to elucidate the binding interactions between the selected compounds and the enzyme. Detailed molecular dynamics simulations were performed to analyze the stability, conformational changes, and binding interactions of the quinazoline derivatives with the EGFR kinase. These simulations provided deeper insights into the dynamic behavior of the compounds within the active site. This comprehensive analysis enhances the understanding of quinazoline derivatives as potential anti-cancer agents and provides valuable insights for lead optimization in the early stages of drug discovery, particularly for developing highly potent anticancer therapeutics.Keywords: 3D-QSAR, CoMFA, CoMSIA, ADMET, molecular docking, quinazoline, molecular dynamics, EGFR inhibitors, lung cancer, anticancer
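The leave-one-out cross-validated Q² statistic quoted above can be illustrated with a toy one-descriptor linear model standing in for the CoMFA/CoMSIA field variables; the activity and descriptor values are hypothetical:

```python
# Q^2 = 1 - PRESS/SS: each sample is predicted by a model refit with
# that sample held out (leave-one-out). A simple least-squares line
# stands in for the PLS model used in real CoMFA/CoMSIA work.
def loo_q2(xs, ys):
    press, ss = 0.0, 0.0
    y_mean = sum(ys) / len(ys)
    for i in range(len(ys)):
        # Refit the line with sample i held out
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        mx, my = sum(xt) / len(xt), sum(yt) / len(yt)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xt, yt))
        sxx = sum((x - mx) ** 2 for x in xt)
        slope = sxy / sxx
        pred = my + slope * (xs[i] - mx)
        press += (ys[i] - pred) ** 2   # predictive residual sum of squares
        ss += (ys[i] - y_mean) ** 2    # total sum of squares
    return 1 - press / ss

activity = [5.1, 5.8, 6.2, 6.9, 7.4, 8.0]     # hypothetical pIC50 values
descriptor = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1]   # hypothetical field score
q2 = loo_q2(descriptor, activity)
print(q2 > 0.9)
```

A Q² well below the fitted R² (as with the paper's 0.645/0.729 vs. 0.981/0.978) signals that the model's predictive power is weaker than its fit, which is exactly why both statistics are reported.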
600 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia
Authors: Q. R. Cheah, Y. F. Tan
Abstract:
In modern times, the push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example, silicone data cables, which show at least double the lifespan of similar plastic products. By having greater durability and longer lifespans, silicone data cables can reduce the amount of trash produced compared to plastics. Furthermore, silicone products do not produce micro-contamination harmful to the ocean. Every year the electronics industry produces an estimated 5 billion USB Type-C and Lightning data cables for tablets and mobile phone devices. Material usage for outer jacketing is 6 to 12 grams per meter. Tests show that the product lifespan of a silicone data cable can be double that of a plastic one due to greater durability. This can save at least 40,000 tonnes of material a year on the outer jacketing of data cables alone. The facility in this study specialises in compounding of liquid silicone rubber (LSR) material for the extrusion process in jacketing for the silicone data cable. This study analyses the carbon emissions from the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. This study uses guidelines from the World Business Council for Sustainable Development (WBCSD) and World Resources Institute (WRI) to define the boundaries of the scope. The emissions scopes are defined as: (1) emissions from operations owned or controlled by the reporting company; (2) emissions from the generation of purchased or acquired energy such as electricity, steam, heating, or cooling consumed by the reporting company; and (3) all other indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions.
As the study is limited to the compounding facility, the system boundary definition according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia's present electricity generation mix, in which natural gas and coal constitute the bulk of emissions, was also used. Calculations show that the LSR produced for the silicone data cable with high fire-retardant capability has Scope 1 emissions of 0.82 kg CO2/kg, Scope 2 emissions of 0.87 kg CO2/kg, and Scope 3 emissions of 2.76 kg CO2/kg, giving a total product carbon footprint of 4.45 kg CO2/kg. This total product carbon footprint (cradle-to-gate) is comparable, per tonne of material, to the industry and to plastic materials. Although per-tonne emissions are comparable to plastic material, the greater durability and longer lifespan permit significantly reduced use of LSR material. Suggestions to reduce the calculated product carbon footprint within the scope of emissions involve (1) incorporating the recycling of factory silicone waste into operations, (2) using green renewable energy for external electricity sources, and (3) sourcing eco-friendly raw materials with low GHG emissions. Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility
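The reported per-kilogram figures compose additively into the cradle-to-gate total. A minimal sketch of that arithmetic in Python (the numeric values are taken from the abstract; the dictionary key names are illustrative labels, not wording from the GHG Protocol):

```python
# Emission intensities reported in the abstract, in kg CO2 per kg of LSR.
SCOPE_EMISSIONS = {
    "scope1_direct_operations": 0.82,  # on-site fuel and process emissions
    "scope2_purchased_energy": 0.87,   # grid electricity (Malaysia's gas/coal mix)
    "scope3_value_chain": 2.76,        # upstream materials, downstream transport
}

def cradle_to_gate_footprint(scopes):
    """Total product carbon footprint: the sum of Scope 1-3 intensities."""
    return sum(scopes.values())
```

Note that Scope 3 dominates the total here, which is why the abstract's first two mitigation suggestions (waste recycling, renewable electricity) address Scopes 1 and 2 while the third targets the upstream supply chain.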
599 The Relationship between Body Fat Percent and Metabolic Syndrome Indices in Childhood Morbid Obesity
Authors: Mustafa Metin Donma
Abstract:
Metabolic syndrome (MetS) is characterized by a series of biochemical, physiological and anthropometric indicators and is a life-threatening health problem due to its close association with chronic diseases such as diabetes mellitus, hypertension, cancer and cardiovascular diseases. The syndrome deserves great interest both in adults and in children. Central obesity is the indispensable component of MetS. Children who are morbidly obese have a particularly strong tendency to develop the disease and remain under threat in their future lives, so preventive measures should be considered at this stage. For this purpose, investigators seek an informative scale or index. So far, only a few suggestions have come onto the stage, and the diagnostic decision is not easy and may remain incomplete, particularly in the pediatric population. The aim of the study was to develop a MetS index capable of predicting MetS while children are at the morbid obesity stage. This study was performed on morbidly obese (MO) children, who were divided into two groups. Morbidly obese children who did not meet the MetS criteria comprised the first group (n=44). The second group was composed of children (n=42) with a MetS diagnosis. Parents were informed and provided the signed consent forms required for the participation of their children in the study. Approval of the study protocol was obtained from the institutional ethics committee of Tekirdag Namik Kemal University, and the Declaration of Helsinki was observed prior to and during the study. Anthropometric measurements including weight, height, and waist (WC), hip, head and neck circumferences, biochemical tests including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG) and high-density lipoprotein cholesterol (HDL-C), and blood pressure measurements (systolic (SBP) and diastolic (DBP)) were performed. Body fat percentage (BFP) values were determined by TANITA's bioelectrical impedance analysis technology.
Body mass index and MetS indices were calculated. The equations for the MetS index (MetSI) and the advanced Donma MetS index (ADMI) were [(INS/FBG)/(HDL-C/TRG)]*100 and MetSI*[(SBP+DBP)/Height], respectively. Descriptive statistics including median values, comparison-of-means tests and correlation-regression analyses were performed within the scope of data evaluation using the statistical package program SPSS. Statistically significant mean differences were determined by a p value smaller than 0.05. Median values for MetSI and ADMI in the MO (MetS-) and MO (MetS+) groups were calculated as (25.9 and 36.5) and (74.0 and 106.1), respectively. Corresponding mean±SD values for BFP were 35.9±7.1 and 38.2±7.7 in the two groups. Correlation analysis of these two indices with the corresponding general BFP values exhibited a significant association for ADMI, and one close to significance for MetSI, in the MO group. No significant correlation was found for either index in the MetS group. In conclusion, the important associations observed with the MetS indices in the MO group were quite meaningful. The presence of these associations in the MO group was important for showing the tendency towards the development of MetS in MO (MetS-) participants. The other index, ADMI, was the more helpful for predictive purposes. Keywords: body fat percentage, child, index, metabolic syndrome, obesity
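The two index equations above translate directly into code. A minimal sketch in Python (function and argument names are mine; units are assumed consistent with the study's measurements, and the parenthesization (SBP+DBP)/Height is assumed, as the abstract's typeset form is ambiguous):

```python
def mets_index(ins, fbg, hdl_c, trg):
    """MetSI = [(INS / FBG) / (HDL-C / TRG)] * 100.

    A higher insulin-to-glucose ratio and a higher triglyceride-to-HDL
    ratio both push the index up, consistent with an insulin-resistant,
    dyslipidemic profile.
    """
    return (ins / fbg) / (hdl_c / trg) * 100.0

def advanced_donma_mets_index(metsi, sbp, dbp, height):
    """ADMI = MetSI * [(SBP + DBP) / Height] (parenthesization assumed)."""
    return metsi * (sbp + dbp) / height
```

Algebraically, MetSI equals (INS * TRG) / (FBG * HDL-C) * 100, so ADMI augments the insulin-resistance and lipid components with a height-normalized blood pressure factor, covering the remaining MetS criteria.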