Search results for: hybrid optimization
661 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market
Authors: Byomakesh Debata, Jitendra Mahakud
Abstract:
The recent financial crisis has been characterized by massive monetary policy interventions by central banks, which has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that were traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with the VAR Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the period of study into a pre-crisis period (2002-2007) and a post-crisis period (2007-2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period witnessed only a moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found that most of the liquidity variables are significant throughout the sample period. This reveals the predictability of monetary policy on aggregate market liquidity.
This study contributes to the existing body of literature by documenting a strong predictability of monetary policy on stock liquidity in an emerging economy with an order-driven market making system like India. Most of the previous studies have been carried out in developed economies with quote-driven or hybrid market making systems, and their results are ambiguous across different periods. In an eclectic sense, this study may be considered a baseline study for further work on the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model
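The impulse response analysis described above traces how a one-unit innovation in a monetary policy variable propagates through liquidity over successive periods. As a minimal sketch of the mechanism (the coefficient matrix below is illustrative, not an estimate from the paper's data), the responses of a bivariate VAR(1) can be computed by repeatedly applying the coefficient matrix to the shock vector:

```python
# Impulse responses of a bivariate VAR(1): y_t = A @ y_{t-1} + e_t.
# A is an illustrative coefficient matrix for (policy rate, liquidity),
# not an estimate from the paper's data.
A = [[0.5, 0.2],
     [-0.1, 0.7]]

def mat_vec(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def impulse_responses(coeffs, shock, horizons):
    """Response of y at horizons 0..horizons-1 to a one-time unit shock."""
    responses = [shock]
    for _ in range(horizons - 1):
        responses.append(mat_vec(coeffs, responses[-1]))
    return responses

# Unit innovation in the policy variable (first component).
irf = impulse_responses(A, [1.0, 0.0], 6)
```

At horizon 0 the response equals the shock itself; provided the eigenvalues of A lie inside the unit circle (stationarity), the responses die out over the horizon, which is the pattern the abstract's IRF plots summarize.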
Procedia PDF Downloads 375
660 Visual Representation and the De-Racialization of Public Spaces
Authors: Donna Banks
Abstract:
In 1998 Winston James called for more research on the Caribbean diaspora, and this ethnographic study, incorporating participant observation, interviews, and archival research, adds to the scholarship in this area. The research is grounded in the discipline of cultural studies but is cross-disciplinary in nature, engaging anthropology, psychology, and urban planning. This paper centers on community murals and their contribution to a more culturally diverse and representative community. While many museums are in the process of reassessing their collections, acquiring works, and developing programming to be more inclusive, and public art programs are investing millions of dollars in trying to fashion an identity in which all residents can feel included, local artists in neighborhoods in many countries have been using community murals to tell their stories. Community murals serve a historical, political, and social purpose and are an instrumental strategy in creative placemaking projects. Community murals add to the livability of an area. Even though official measurements of livability do not include race, ethnicity, and gender - which are egregious omissions - murals are a way to integrate historically underrepresented people into the wider history of a country. This paper draws attention to a creative placemaking project in the port city of Bristol, England, a city, like many others, with a history of spatializing race and racializing space. For this reason, Bristol’s Seven Saints of St. Pauls® Art & Heritage Trail, which memorializes seven Caribbean-born social and political change agents, is examined. The Seven Saints of St. Pauls® Art & Heritage Trail is crucial to the city, as well as the country, in its contribution to the de-racialization of public spaces. Within British art history, with few exceptions, portraits of non-White people who are not depicted in a subordinate role have been absent.
The artist of the mural project, Michelle Curtis, has changed this long-lasting racist and hegemonic narrative. By creating seven large-scale portraits of individuals not typically represented visually, the artist has added them into Britain’s story. In these murals, however, we see more than just the likeness of a person; we are presented with a visual commentary that reflects each Saint’s hybrid identity of being both Black Caribbean and British, as well as their social and political involvement. Additionally, because the mural project is part of a heritage trail, the murals are therapeutic and contribute to improving the well-being of residents and strengthening their sense of belonging.
Keywords: belonging, murals, placemaking, representation
Procedia PDF Downloads 91
659 Small Scale Waste to Energy Systems: Optimization of Feedstock Composition for Improved Control of Ash Sintering and Quality of Generated Syngas
Authors: Mateusz Szul, Tomasz Iluk, Aleksander Sobolewski
Abstract:
Small-scale, distributed energy systems enabling the cogeneration of heat and power based on the gasification of sewage sludge are considered among the most efficient and environmentally friendly ways of treating it. However, the economic aspects of such an investment are very demanding; therefore, for a small-scale sewage sludge gasification installation to be profitable, it needs to be efficient and simple at the same time. The article presents results of research on the air gasification of sewage sludge in the fixed-bed GazEla reactor. The two most important aspects of the research concerned the influence of the composition of sewage sludge blends with other feedstocks on the properties of the generated syngas, and the ash sintering problems occurring in the fixed bed. Different means of fuel pretreatment and blending were proposed as ways of dealing with these undesired characteristics. The influence of RDF (Refuse Derived Fuel) and biomasses in the fuel blends was evaluated. Ash properties were assessed based on proximate, ultimate, and ash composition analyses of the feedstock. The blends were specified based on complementary characteristics against such criteria as C content, moisture, volatile matter, and the Si, Al, Mg, and basic metal content of the ash. The obtained results were assessed with the use of experimental gasification tests and a laboratory ISO procedure for the analysis of characteristic ash melting temperatures. Optimal gasification process conditions were determined by the energetic parameters of the generated syngas, its tar content, and the absence of ash sinters within the reactor bed. Optimal results were obtained for the co-gasification of herbaceous biomasses with sewage sludge, where the LHV (Lower Heating Value) of the obtained syngas reached a stable value of 4.0 MJ/Nm3 for air/steam gasification.
Keywords: ash fusibility, gasification, piston engine, sewage sludge
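The reported syngas quality can be cross-checked from its composition: the LHV of a gas mixture is the volume-fraction-weighted sum of the component heating values. A minimal sketch, where the component LHVs are rounded literature values in MJ/Nm3 and the composition below is illustrative rather than measured data from the paper:

```python
# Rounded lower heating values of combustible syngas components, MJ/Nm3
# (typical literature values, assumed here for illustration).
COMPONENT_LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

def syngas_lhv(volume_fractions):
    """LHV of a syngas mixture as the fraction-weighted sum of component LHVs.
    Inert components (N2, CO2) contribute nothing and are simply skipped."""
    return sum(COMPONENT_LHV[gas] * frac
               for gas, frac in volume_fractions.items()
               if gas in COMPONENT_LHV)

# Illustrative air-gasification composition (volume fractions).
lhv = syngas_lhv({"H2": 0.15, "CO": 0.18, "CH4": 0.02, "N2": 0.50, "CO2": 0.15})
```

With these assumed fractions the mixture LHV comes out near 4.6 MJ/Nm3, the same order as the stable 4.0 MJ/Nm3 reported in the abstract for air/steam gasification.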
Procedia PDF Downloads 197
658 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing
Authors: Jonathan Martino, Kristof Harri
Abstract:
In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called ‘Virtual Vibration Testing’ offers solutions, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article will first present a virtual vibration test model, with a main focus on the shaker model, and will afterwards present the experimental determination of its parameters. The classical way of modeling a shaker is to consider the shaker as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two or three degrees-of-freedom lumped-parameters model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities will be reduced into global parameters which will be estimated through experiments. Different experiments will be carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis will also be carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, this article will conclude with an experimental validation of the model.
Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration
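The lumped-parameter description above (mechanical degrees of freedom coupled to a coil circuit) can be written directly in state-space form. A minimal single-DOF sketch with illustrative parameter values, not identified values for any real shaker: the states are armature displacement x, velocity v, and coil current i, and the force factor Bl couples the mechanical and electrical domains.

```python
# Illustrative lumped parameters (assumed values, not an identified shaker).
m = 0.5      # moving mass, kg
c = 40.0     # mechanical damping, N*s/m
k = 2.0e4    # suspension stiffness, N/m
R = 2.0      # coil resistance, ohm
L = 1.0e-3   # coil inductance, H
Bl = 15.0    # force factor, N/A (also the back-EMF factor, V*s/m)

def derivatives(state, voltage):
    """State derivatives for states (x, v, i) with input voltage."""
    x, v, i = state
    dx = v
    dv = (-k * x - c * v + Bl * i) / m   # Newton: m*v' = -k*x - c*v + Bl*i
    di = (voltage - R * i - Bl * v) / L  # Kirchhoff: L*i' = V - R*i - Bl*v
    return dx, dv, di

def simulate(x0, voltage=0.0, dt=1e-6, steps=100_000):
    """Forward-Euler free response from an initial displacement."""
    state = (x0, 0.0, 0.0)
    for _ in range(steps):
        d = derivatives(state, voltage)
        state = tuple(s + dt * ds for s, ds in zip(state, d))
    return state

final = simulate(x0=1e-3)   # 1 mm initial deflection, 0.1 s of free decay
```

Because the system is dissipative (c, R > 0) and the back-EMF term adds electrical damping, the free response decays; the global parameters (m, c, k, R, L, Bl) are exactly the quantities the abstract proposes to estimate experimentally.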
Procedia PDF Downloads 271
657 Chronology and Developments in Inventory Control Best Practices for FMCG Sector
Authors: Roopa Singh, Anurag Singh, Ajay
Abstract:
Agriculture contributes a major share to the national economy of India: about 70% of the Indian economy depends upon agriculture, as it forms the main source of income. About 43% of India’s geographical area is used for agricultural activity, which involves 65-75% of the total population of India. The given work deals with the Fast Moving Consumer Goods (FMCG) industries and their inventories, which use agricultural produce as the raw material or input for their final products. Since the beginning of inventory practices, many developments have taken place, which can be categorised into three phases based on the review of various works. The first phase is related to the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods, with the purpose of balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Maintenance of inventory is considered negative, as a large amount of capital is blocked, especially in the mechanical and electrical industries. But the case is different for food processing and agro-based industries and their inventories, due to the cyclic variation in the cost of raw materials of such industries, which is the reason for the selection of these industries in the present work. The application of control theory to inventory control makes decision-making highly instantaneous for FMCG industries, without the loss of proposed profits that happened earlier during the first and second phases, mainly due to the late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that decision-making is fully target-oriented.
Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector
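The first phase mentioned above centers on the classic EOQ model, which balances ordering cost against holding cost: EOQ = sqrt(2DS/H) for annual demand D, cost per order S, and annual holding cost per unit H. A minimal sketch with illustrative numbers:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: the lot size minimizing total
    ordering-plus-holding cost, sqrt(2*D*S/H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def total_annual_cost(q, annual_demand, order_cost, holding_cost):
    """Ordering cost (D/q)*S plus average holding cost (q/2)*H."""
    return annual_demand / q * order_cost + q / 2 * holding_cost

# Illustrative figures: D = 1000 units/yr, S = 50 per order, H = 2 per unit-yr.
q_star = eoq(1000, 50.0, 2.0)   # about 223.6 units per order
```

At q_star the two cost components are equal, and any other lot size (say 300 units) yields a strictly higher total annual cost, which is the property the later optimization-phase methods generalize under capital and service-level constraints.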
Procedia PDF Downloads 354
656 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived by using a Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
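The uncertainty-quantification step above uses a Metropolis-Hastings random walk to sample the posterior of the Weibull parameters. A minimal sketch on synthetic lifetimes with a flat prior; the data, starting point, and tuning constants are all illustrative, not the bridge inspection data or the paper's settings:

```python
import math
import random

random.seed(0)

# Synthetic "deterioration times" drawn from Weibull(shape=2, scale=10)
# via inverse-CDF sampling; a stand-in for real inspection data.
data = [10.0 * (-math.log(1.0 - random.random())) ** 0.5 for _ in range(200)]

def log_likelihood(shape, scale):
    """Weibull log-likelihood; -inf outside the valid parameter region."""
    if shape <= 0 or scale <= 0:
        return float("-inf")
    n = len(data)
    return (n * math.log(shape) - n * shape * math.log(scale)
            + (shape - 1) * sum(math.log(x) for x in data)
            - sum((x / scale) ** shape for x in data))

def metropolis_hastings(n_iter=5000, step=0.1):
    """Random-walk MH over (shape, scale) with a flat (improper) prior."""
    shape, scale = 1.0, 5.0
    chain = []
    for _ in range(n_iter):
        prop = (shape + random.gauss(0, step), scale + random.gauss(0, 5 * step))
        log_alpha = log_likelihood(*prop) - log_likelihood(shape, scale)
        if math.log(random.random()) < log_alpha:   # accept with prob min(1, alpha)
            shape, scale = prop
        chain.append((shape, scale))
    return chain[n_iter // 2:]   # discard the first half as burn-in

posterior = metropolis_hastings()
mean_shape = sum(s for s, _ in posterior) / len(posterior)
mean_scale = sum(c for _, c in posterior) / len(posterior)
```

The spread of the retained chain, not just its mean, is what yields the confidence intervals on the condition-prediction functions mentioned in the abstract.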
Procedia PDF Downloads 729
655 Solving LWE by Progressive Pumps and Its Optimization
Authors: Leizhang Wang, Baocang Wang
Abstract:
General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. Firstly, we use a low-dimensional pump output basis to propose a predictor of the quality of high-dimensional Pump output bases. Both theoretical analysis and experimental tests are performed to illustrate that it is more computationally expensive to solve LWE problems by using the G6K default SVP solving strategy (Workout) than by lattice reduction algorithms (e.g. BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) with sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction at a lower computational cost. Thirdly, we combine the optimized Workout and the Pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the G6K default Workout. Fourthly, we consider a combined two-stage (preprocessing by BKZ-β and one big Pump) LWE solving strategy. Both stages use the dimension-for-free technique to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE-solving efficiency, by 15% and 57%.
Finally, some experiments are implemented to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free
Procedia PDF Downloads 60
654 Geological and Geotechnical Approach for Stabilization of Cut-Slopes in Power House Area of Luhri HEP Stage-I (210 MW), India
Authors: S. P. Bansal, Mukesh Kumar Sharma, Ankit Prabhakar
Abstract:
Luhri Hydroelectric Project Stage-I (210 MW) is a run-of-the-river development with a dam-toe surface powerhouse (122 m long, 50.50 m wide, and 65.50 m high) on the right bank of the river Satluj in Himachal Pradesh, India. The project is located in the inner Lesser Himalaya, between the Dhauladhar Range in the south and the Higher Himalaya in the north, in a seismically active region. At the project location, the river is confined within narrow V-shaped valleys with little or no flat area close to the river bed. Cut slopes nearly 120 m high behind the powerhouse are proposed, from the powerhouse foundation level of 795 m to ±915 m, to accommodate the surface powerhouse. The stability of these 120 m high cut slopes is a prime concern because of the risk involved. The slopes behind the powerhouse will be excavated mainly in augen gneiss, fresh to weathered in nature and biotite-rich at places. The foliation joints are favorable, dipping into the hill. Two steeper joint sets dipping towards the valley will be encountered on the slopes, which can cause instability during excavation. Geological exploration plays a vital role in the design and optimization of cut slopes. SWEDGE software has been used to analyze the geometry and stability of surface wedges in the cut slopes. The slopes behind the powerhouse have been analyzed in three zones for stability analysis by providing a break in the continuity of the cut slopes, which provides quite substantial relief for slope stabilization measures. Pseudo-static analysis has been carried out for the stabilization of wedges. The results indicate that many large wedges form with a factor of safety less than 1. The stability measures (support system, bench width, slopes) have been planned so that no wedge failure may occur in the future.
Keywords: cut slopes, geotechnical investigations, Himalayan geology, surface powerhouse, wedge failure
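The pseudo-static check reported above can be illustrated on the simpler planar case: a horizontal seismic coefficient k_h rotates part of the block weight into a driving force, and the factor of safety compares resisting shear (cohesion plus friction) with driving shear along the sliding plane. A minimal sketch with invented numbers, not the Luhri design values; SWEDGE handles the full 3D wedge geometry that this 2D formula only approximates:

```python
import math

def pseudo_static_fos(weight, dip_deg, friction_deg, cohesion, area, k_h):
    """Planar-failure factor of safety with a horizontal seismic coefficient.
    weight in kN, cohesion in kPa, plane area in m^2, angles in degrees."""
    dip = math.radians(dip_deg)
    phi = math.radians(friction_deg)
    # The seismic force k_h*W reduces the normal force and adds to driving.
    normal = weight * (math.cos(dip) - k_h * math.sin(dip))
    driving = weight * (math.sin(dip) + k_h * math.cos(dip))
    resisting = cohesion * area + normal * math.tan(phi)
    return resisting / driving

# Illustrative block: W = 1000 kN, plane dip 35 deg, phi = 30 deg,
# c = 25 kPa over 10 m^2, seismic coefficient 0.1.
fos_static = pseudo_static_fos(1000, 35, 30, 25, 10, k_h=0.0)
fos_seismic = pseudo_static_fos(1000, 35, 30, 25, 10, k_h=0.1)
```

Even this toy case shows the pattern the abstract reports: adding the seismic coefficient pushes a marginally stable block toward a factor of safety near or below 1, which is what drives the support-system and bench-geometry design.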
Procedia PDF Downloads 118
653 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings as well as for the redesign and renovation of existing ones. The importance of implementing a standard set of engineering facilities planning and design techniques has already been proved in both the manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved by the optimization software CPLEX 12. The solution reached using the proposed method has a 42.2% improvement in terms of the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at a minimum cost of relocation. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
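A 0-1 goal program for layout assigns each unit to exactly one location and minimizes the weighted deviations from walking-distance goals. A minimal sketch of the same idea on a toy instance: three units, three candidate locations, with patient-flow and distance numbers invented for illustration, and an exhaustive search standing in for the CPLEX solver used in the paper:

```python
from itertools import permutations

units = ["triage", "imaging", "resuscitation"]
# Illustrative daily patient flows between unit pairs (normal, critical).
flow_normal   = {("triage", "imaging"): 60, ("triage", "resuscitation"): 10,
                 ("imaging", "resuscitation"): 5}
flow_critical = {("triage", "imaging"): 5, ("triage", "resuscitation"): 20,
                 ("imaging", "resuscitation"): 15}
# Walking distances (m) between candidate locations 0, 1, 2.
dist = {(0, 1): 30, (0, 2): 80, (1, 2): 40}

def pair_dist(i, j):
    return dist[(min(i, j), max(i, j))]

def walking(assign, flows):
    """Total flow-weighted walking distance for one unit->location map."""
    return sum(f * pair_dist(assign[a], assign[b]) for (a, b), f in flows.items())

def goal_cost(assign, goal_normal=4000, goal_critical=2000, w_crit=2.0):
    """Weighted positive deviations from the two walking-distance goals;
    the critical-patient goal is weighted more heavily."""
    d_n = max(0, walking(assign, flow_normal) - goal_normal)
    d_c = max(0, walking(assign, flow_critical) - goal_critical)
    return d_n + w_crit * d_c

# Enumerate all 0-1 assignments (each unit to exactly one location).
best = min((dict(zip(units, perm)) for perm in permutations(range(3))),
           key=goal_cost)
```

In the real model the binary assignment variables, goal deviations, and relocation costs are linear constraints handed to CPLEX; brute force is only feasible here because the toy instance has 3! candidate layouts.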
Procedia PDF Downloads 297
652 Myomectomy and Blood Loss: A Quality Improvement Project
Authors: Ena Arora, Rong Fan, Aleksandr Fuks, Kolawole Felix Akinnawonu
Abstract:
Introduction: Leiomyomas are benign tumors derived from the overgrowth of uterine smooth muscle cells. For women with symptomatic leiomyomas who desire future fertility, myomectomy should be the standard surgical treatment. Perioperative hemorrhage is a common complication of myomectomy. We performed this study to investigate the blood transfusion rate in abdominal myomectomies, risk factors influencing blood loss, and modalities to reduce perioperative blood loss. Methods: A retrospective chart review was done for patients who underwent myomectomy from 2016 to 2022 at Queens Hospital Center, New York. We looked at preoperative patient demographics, clinical characteristics, intraoperative variables, and postoperative outcomes. The Mann-Whitney U test was used for non-parametric continuous variable comparisons. Results: A total of 159 myomectomies were performed between 2016 and 2022, including 1 laparoscopic, 65 vaginal, and 93 abdominal. 44 patients received a blood transfusion during or within 72 hours of abdominal myomectomy, a blood transfusion rate of 47.3%, roughly twice the average rate of about 20% documented in the literature. Risk factors identified were Black race, preoperative hematocrit <30%, preoperative blood transfusion within 72 hours, large fibroid burden, prolonged surgical time, and the abdominal approach. Conclusion: Preoperative optimization with iron supplements or GnRH agonists is important for patients undergoing myomectomy. Interventions to decrease intraoperative blood loss should include cell saver, tourniquet, vasopressin, misoprostol, tranexamic acid, and gelatin-thrombin matrix hemostatic sealant.
Keywords: myomectomy, perioperative blood loss, cell saver, tranexamic acid
Procedia PDF Downloads 85
651 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNN architectures used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model, so that we can compare performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
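The choice of loss function discussed above changes which prediction errors are penalized most. A minimal sketch contrasting cross-entropy (probabilistic outputs) with hinge loss (margin-based scores, labels in {-1, +1}); the numbers are illustrative, not model outputs from the paper:

```python
import math

def cross_entropy(probs, true_index):
    """Negative log-probability assigned to the true class."""
    return -math.log(probs[true_index])

def hinge(score, label):
    """Margin loss for a raw score and a label in {-1, +1}."""
    return max(0.0, 1.0 - label * score)

# A confident correct softmax output incurs a small cross-entropy loss...
low = cross_entropy([0.9, 0.05, 0.05], 0)
# ...while an unconfident correct one is still penalized noticeably.
high = cross_entropy([0.4, 0.3, 0.3], 0)
# Hinge loss is exactly zero once the margin exceeds 1, unlike
# cross-entropy, which keeps rewarding ever-higher confidence.
saturated = hinge(1.5, +1)
inside_margin = hinge(0.3, +1)
```

This saturation difference is one reason the same architecture can rank differently under different losses on the same evaluation, as the abstract observes.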
Procedia PDF Downloads 137
650 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps
Authors: Arkadiusz Zurek
Abstract:
The potential of using drones is visible in many areas of logistics, especially for the monitoring and control of many processes. The technologies implemented in the last decade open new possibilities for companies in tasks they had until now not even considered, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool for Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a method for measuring, by drone, the weight of goods in a selected link of the clothing supply chain. The purpose of this article is to analyze the causes of errors in traditional measurements and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines to eliminate the causes of these events in the drone-based measurement process. In a real environment, work was carried out to determine the volume and weight of textiles, including, among others, weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning, and a photogrammetric raid using an unmanned aerial vehicle. As a result of the analysis of the measurement data obtained in the facility, the volume and weight of the assortment and the accuracy of their determination were established.
This article presents how such heaps are currently measured and what adverse events occur, describes the photogrammetric techniques of this type so far performed by drones outdoors, for example for the inventory of wind farms or construction sites, and compares them with the measurement system applied to the aforementioned textile heap inside a large-format facility.
Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0
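The weight estimate described above multiplies the photogrammetric volume by the sampled average density, and the relative errors of the two measurements combine, assuming independent error sources, in quadrature. A minimal sketch with invented numbers, not the measured values from the facility:

```python
import math

def heap_weight(volume_m3, density_kg_m3):
    """Heap weight from scanned volume and sampled bulk density."""
    return volume_m3 * density_kg_m3

def relative_weight_error(rel_err_volume, rel_err_density):
    """Combined relative error of the product, assuming the volume and
    density errors are independent: sqrt(ev^2 + ed^2)."""
    return math.sqrt(rel_err_volume ** 2 + rel_err_density ** 2)

# Invented figures: 120 m^3 heap, 250 kg/m^3 sampled bulk density,
# 2 % volume error (photogrammetry) and 5 % density error (sampling).
weight = heap_weight(120.0, 250.0)
rel_err = relative_weight_error(0.02, 0.05)
uncertainty_kg = weight * rel_err
```

Note that with these assumed figures the density sampling, not the photogrammetric volume, dominates the weight uncertainty, which is exactly the kind of error attribution the guidelines proposed in the article would target.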
Procedia PDF Downloads 87
649 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor
Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti
Abstract:
From a product industrialization perspective, the end product shall always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is beclouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) in a perspective of system industrialization and reconnects engineering with the sales team. The modeling method used and presented in this paper concentrates on displaying the needs of the customer as closely as possible. Firstly, it provides a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities. Secondly, the model simulates across a vast number of possibilities and a wide range of components, becoming a dynamic tool for powerful analysis and optimization. Thus, the model is no longer a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model.
The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through the illustration of a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.
Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking
Procedia PDF Downloads 161
648 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview
Authors: Andres Diaz Garcia
Abstract:
The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages allied to different disciplines related to microbiological, engineering, pharmaceutical chemistry, legal, and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scaling of the production process of the active ingredient that will be used in downstream unit operations. However, all the aspects mentioned must take into account many biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia), which led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and the challenges that were overcome from the expertise in the area of engineering. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1E09 CFU (colony-forming units)/ml in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (bioreactor H/D ratios) and on operational criteria built around a minimum dissolved oxygen concentration, taking into account the differences in process control capacity between the laboratory and pilot scales.
Currently, the bioproduct obtained through this technological process is in the registration stage in Colombia for cape gooseberry fruits for export.
Keywords: biochemical engineering, liquid fermentation, plant growth promoting, scale-up process
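Geometric scale-up criteria like those mentioned above can be made concrete. Under geometric similarity (constant H/D), the vessel diameter grows with the cube root of volume; if, in addition, power per volume is held constant in the turbulent regime (P proportional to N^3 D^5), the impeller speed must drop as D^(-2/3). The abstract's actual operational criterion was a minimum dissolved-oxygen concentration; the sketch below shows only this common complementary rule of thumb, with invented numbers:

```python
def scaled_diameter(d1, v1, v2):
    """Geometric similarity (constant H/D): D scales with the cube
    root of the volume ratio."""
    return d1 * (v2 / v1) ** (1.0 / 3.0)

def scaled_speed_constant_pv(n1, d1, d2):
    """Constant P/V in the turbulent regime: P ~ N^3 D^5 and V ~ D^3
    imply N2 = N1 * (D1/D2)^(2/3)."""
    return n1 * (d1 / d2) ** (2.0 / 3.0)

# Illustrative 10 L lab fermenter scaled to a 1000 L pilot vessel
# (invented dimensions, not Agrosavia's equipment).
d_lab, v_lab, v_pilot = 0.2, 10.0, 1000.0   # m, L, L
n_lab = 600.0                               # rpm

d_pilot = scaled_diameter(d_lab, v_lab, v_pilot)            # ~0.93 m
n_pilot = scaled_speed_constant_pv(n_lab, d_lab, d_pilot)   # ~216 rpm
```

In practice such a geometric starting point is then adjusted against the dissolved-oxygen constraint, since oxygen transfer, not mixing power alone, usually limits aerobic biomass production at pilot scale.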
Procedia PDF Downloads 113
647 Biodiesel Production from Yellow Oleander Seed Oil
Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa
Abstract:
Energy is essential and plays an important role in the overall development of a nation; the global economy literally runs on energy. The use of fossil fuels as energy is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment; renewable and carbon-neutral biodiesel is necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats is not yet able to replace fossil fuel. Fossil fuels remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: To produce biodiesel from yellow oleander seed oil and to test the yield of biodiesel using different catalysts (KOH and NaOH). Methodology: Oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and an oil expeller (bulk). The FFA content of the oil is checked, and depending on the FFA value, either a two-step or a single-step process is followed to produce biodiesel. The two-step process includes esterification and transesterification; the single-step process includes only transesterification. The properties of the biodiesel are checked, and an engine test is done for the biodiesel produced. Result: Biodiesel quality parameters such as yield (85% and 90%), flash point (171°C and 176°C), fire point (195°C and 198°C), and viscosity (4.9991 and 5.21 mm2/s) were obtained for the biodiesel from the seed oil of Thevetia peruviana produced using KOH and NaOH, respectively. Thus the seed oil of Thevetia peruviana is a viable feedstock for good-quality fuel. The outcomes of our project are a substitute for conventional fuel, a reduced petro-diesel requirement, and improved performance in terms of emissions. Future prospects: Optimization of biodiesel production using the response surface method.
Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources
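The FFA-based route selection described above is commonly implemented as a simple threshold rule: high-FFA oils first undergo acid esterification to bring the FFA down, after which base-catalyzed transesterification proceeds without excessive soap formation. A minimal sketch; the 2% threshold is a typical literature rule of thumb, not a value reported in the paper:

```python
def select_process(ffa_percent, threshold=2.0):
    """Choose the single- or two-step biodiesel route from the oil's
    FFA content. High FFA with a base catalyst promotes soap formation,
    so an acid esterification step is added first. The threshold is an
    assumed rule of thumb, not the paper's measured criterion."""
    if ffa_percent > threshold:
        return ["acid esterification", "base transesterification"]
    return ["base transesterification"]

def yield_percent(biodiesel_mass_g, oil_mass_g):
    """Biodiesel yield as mass of ester product over mass of oil feed."""
    return 100.0 * biodiesel_mass_g / oil_mass_g

steps = select_process(ffa_percent=4.5)   # illustrative high-FFA oil
koh_yield = yield_percent(85.0, 100.0)    # matches the 85 % reported for KOH
```

The same yield definition underlies the 85% (KOH) versus 90% (NaOH) comparison in the results, and it is the response the proposed response-surface optimization would maximize.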
Procedia PDF Downloads 447
646 Analytical Validity Of A Tech Transfer Solution To Internalize Genetic Testing
Authors: Lesley Northrop, Justin DeGrazia, Jessica Greenwood
Abstract:
ASPIRA Labs now offers a complete, ready-to-implement technology transfer solution that enables labs and hospitals lacking the resources to build such capabilities themselves to offer in-house genetic testing. This unique platform employs a patented Molecular Inversion Probe (MIP) technology that combines the specificity of a hybrid capture protocol with the ease of an amplicon-based protocol, and it utilizes an advanced bioinformatics analysis pipeline based on machine learning. To demonstrate its efficacy, two independent genetic tests were validated on this technology transfer platform: expanded carrier screening (ECS) and hereditary cancer testing (HC). The analytical performance of ECS and HC was validated separately in a blinded manner for calling three different types of variants: SNVs, short indels (typically <50 bp), and large indels/CNVs, defined as multi-exonic del/dup events. The reference set was constructed using samples from the Coriell Institute, an external clinical genetic testing laboratory, Maine Molecular Quality Controls Inc. (MMQCI), SeraCare, and the GIAB Consortium. Overall, the analytical performance showed a sensitivity and specificity of >99.4% for both ECS and HC in detecting SNVs. For indels, both tests reported a specificity of 100%; ECS demonstrated a sensitivity of 100%, whereas HC exhibited a sensitivity of 96.5%. The bioinformatics pipeline also correctly called all reference CNV events, resulting in a sensitivity of 100% for both tests. No additional calls were made in the HC panel, leading to perfect performance (specificity and F-measure of 100%). In the carrier panel, however, three additional positive calls were made outside the reference set. Two of these calls were confirmed using an orthogonal method and were re-classified as true positives, leaving only one false positive.
The pipeline also correctly identified all challenging carrier statuses, such as positive cases for spinal muscular atrophy and alpha-thalassemia, resulting in 100% sensitivity. After confirmation of the additional positive calls via long-range PCR and MLPA, specificity for such cases was estimated at 99%. These performance metrics demonstrate that this tech-transfer solution can be confidently internalized by clinical labs and hospitals to offer mainstream ECS and HC as part of their test catalog, substantially increasing access to quality germline genetic testing for labs of all sizes and resource levels.
Keywords: clinical genetics, genetic testing, molecular genetics, technology transfer
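For reference, the performance figures quoted above follow the standard confusion-matrix definitions, which can be sketched as follows (a generic illustration, not the platform's actual pipeline code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity (recall), specificity, and F-measure from raw call counts."""
    sensitivity = tp / (tp + fn)   # fraction of true variants detected
    specificity = tn / (tn + fp)   # fraction of negatives called negative
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure

# A panel with no missed and no spurious calls (as for the CNV events in HC)
# scores 100% on all three metrics.
print(diagnostic_metrics(tp=25, fp=0, fn=0, tn=975))  # (1.0, 1.0, 1.0)
```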
Procedia PDF Downloads 178
645 FWGE Production From Wheat Germ Using Co-culture of Saccharomyces cerevisiae and Lactobacillus plantarum
Authors: Valiollah Babaeipour, Mahdi Rahaie
Abstract:
Food supplements are rich in specific nutrients and bioactive compounds that eliminate free radicals and improve cellular metabolism; the major bioactive compounds are found in bran and cereal sprouts. Secondary metabolites of fermenting microorganisms have antioxidant properties that can be used alone or in combination with chemotherapy and radiation therapy to treat cancer. Biologically active compounds, such as the benzoquinone derivatives extracted from fermented wheat germ extract (FWGE), have several positive effects on the overall state of human health and strengthen the immune system. The present work describes the discontinuous fermentation of raw wheat germ for FWGE production through a simultaneous (co-culture) process using the probiotic strains Saccharomyces cerevisiae and Lactobacillus plantarum, together with the possibility of using the solid waste. To increase production efficiency, the important factors of each fermentation process were first screened using a factorial statistical design: stirring rate (120 to 200 rpm), solids-to-solvent dilution (1:8 to 1:12), fermentation time (16 to 24 hours), and strain-to-wheat-germ ratio (20% to 50%); simultaneous culture was then performed to increase the yield of 2,6-dimethoxybenzoquinone (2,6-DMBQ). Since 2,6-DMBQ is the main biologically active compound in fermented wheat germ extract, UV-Vis analysis was performed to confirm its presence in the final product. In addition, the 2,6-DMBQ of some products was isolated on a non-polar C-18 column and quantified using high-performance liquid chromatography (HPLC).
Based on our findings, it can be concluded that the 2,6-dimethoxybenzoquinone content in the simultaneous culture of Saccharomyces cerevisiae and Lactobacillus plantarum increased by 28.9% relative to the pure culture of Saccharomyces cerevisiae, from 1.89 mg/g to 2.66 mg/g.
Keywords: wheat germ, FWGE, Saccharomyces cerevisiae, Lactobacillus plantarum, co-culture, 2,6-DMBQ
Procedia PDF Downloads 131
644 Unravelling Green Entrepreneurial: Insights From a Hybrid Systematic Review
Authors: Shivani, Seema Sharma, Shveta Singh, Akriti Chandra
Abstract:
Business activities contribute to various environmental issues, such as deforestation, waste generation, and pollution. Therefore, the integration of environmental concerns within manufacturing operations is vital for the long-term survival of businesses. In this context, green entrepreneurial orientation (GEO) is recognized as a firm-level internal strategy to mitigate ecological damage by initiating green business practices. However, despite the surge in research on GEO in recent years, ambiguity remains about the genesis of GEO and the mechanisms through which GEO impacts various organizational outcomes. This prompts an examination of the ongoing scholarly discourse about GEO and its domain knowledge structure within the entrepreneurship literature, using bibliometric analysis and the Theories, Contexts, Characteristics, and Methodologies (TCCM) framework. The authors analyzed a dataset comprising 73 scientific documents sourced from the Scopus and Web of Science databases from 2005 to 2024 to provide insights into publication trends, prominent journals, authors, articles, country collaborations, and keywords in GEO research. The findings indicate that the number of relevant papers and citations has increased consistently, with authors from China being the main contributors. The articles are mainly published in Business Strategy and the Environment and in Sustainability. The dynamic capability view is the dominant framework applied in the GEO domain, with large manufacturing firms and SMEs constituting the majority of the samples. Further, various antecedents of GEO have been identified at the organizational level, on which managers can focus their attention. The studies have used various contextual factors to explain when GEO translates into superior organizational outcomes. The method analysis reveals that PLS-SEM is the most commonly used approach for analyzing primary data collected through surveys.
Moreover, the content analysis indicates four emerging research frontiers: unidimensional vs. multidimensional perspectives on GEO, typologies of green innovation, environmental management in the hospitality industry, and tech-savvy sustainability in the agriculture sector. This study is one of the earliest to apply quantitative methods to synthesize the extant literature on GEO. This research holds relevance for management practice due to the escalating levels of carbon emissions, energy consumption, and waste discharge observed in recent years, which have resulted in increased apprehension about climate change.
Keywords: green entrepreneurship, sustainability, SLR, TCCM
Procedia PDF Downloads 14
643 Enhanced Photocatalytic H₂ Production from H₂S on Metal Modified CdS-ZnS Semiconductors
Authors: Maali-Amel Mersel, Lajos Fodor, Otto Horvath
Abstract:
Photocatalytic H₂ production by H₂S decomposition is regarded as an environmentally friendly process to produce carbon-free energy through direct solar energy conversion. For this purpose, sulphide-based photocatalysts are widely used due to their excellent solar spectrum response and high photocatalytic activity. The loading of proper co-catalysts, based on cheap and earth-abundant materials, onto those semiconductors has been shown to play an important role in improving their efficiency. In this research, a CdS-ZnS composite was studied because of its controllable band gap and excellent performance for H₂ evolution under visible light irradiation. The effects of modifying this photocatalyst with different types of materials, and the influence of the preparation parameters on its H₂ production activity, were investigated. A CdS-ZnS composite with enhanced photocatalytic activity for H₂ production was synthesized from ammine complexes. Two types of modification were used: compounds of Ni-group metals (NiS, PdS, and Pt) were applied as co-catalysts on the surface of the CdS-ZnS semiconductor, while NiS, MnS, CoS, Ag₂S, and CuS were used as dopants in the bulk of the catalyst. It was found that 0.1% of noble metals did not remarkably influence the photocatalytic activity, while modification with 0.5% NiS was shown to be more efficient in the bulk than on the surface. Modification with the other types of metals resulted in a decrease in the rate of H₂ production, although co-doping seems more promising. The preparation parameters (such as the amount of ammonia used to form the ammine complexes and the order of the preparation steps together with the hydrothermal treatment) were also found to strongly influence the rate of H₂ production. SEM, EDS, and DRS analyses were performed to reveal the structure of the most efficient photocatalysts.
Moreover, the detection of conduction band electrons on the surface of the catalyst was also investigated. The excellent photoactivity of the CdS-ZnS catalysts, with and without modification, encourages further investigations to enhance hydrogen generation by optimization of the reaction conditions.
Keywords: H₂S, photoactivity, photocatalytic H₂ production, CdS-ZnS
Procedia PDF Downloads 131
642 Generative Design Method for Cooled Additively Manufactured Gas Turbine Parts
Authors: Thomas Wimmer, Bernhard Weigand
Abstract:
The improvement of gas turbine efficiency is one of the main drivers of research and development in the gas turbine market. This has led to elevated gas turbine inlet temperatures beyond the melting point of the utilized materials, so the turbine parts need to be actively cooled in order to withstand these harsh environments. However, the usage of compressor air as coolant decreases the overall gas turbine efficiency; thus, coolant consumption needs to be minimized in order to gain the maximum advantage from higher turbine inlet temperatures. Sophisticated cooling designs for gas turbine parts therefore aim to minimize coolant mass flow. New design space is becoming accessible as additive manufacturing matures to industrial usage for the creation of hot gas flow path parts, and more efficient cooling schemes can be manufactured by making use of this technology. In order to find such cooling schemes, a generative design method is being developed. It randomly generates cooling schemes that adhere to a set of rules which assure the sanity of the design. A large number of different cooling schemes is generated and implemented in a simulation environment, where each is validated. The fitness criteria for the cooling schemes are coolant mass flow, maximum temperature, and temperature gradients. In this way, the whole design space is sampled, and a Pareto-optimal front can be identified. The approach is applied to a flat plate, which resembles a simplified section of a hot gas flow path part; realistic boundary conditions are applied, and the thermal barrier coating is accounted for in the simulation environment. The resulting cooling schemes are presented and compared to representative conventional cooling schemes. Further development of this method can give access to cooling schemes with even better performance and higher complexity, making full use of the available design space.
Keywords: additive manufacturing, cooling, gas turbine, heat transfer, heat transfer design, optimization
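The final selection step, identifying the Pareto-optimal front among the sampled cooling schemes, can be sketched as follows. The objective pairs below (coolant mass flow, maximum temperature, both to be minimized) are illustrative values, not results from the paper:

```python
def pareto_front(designs):
    """Return the non-dominated designs; each design is a tuple of objective
    values to be minimized, e.g. (coolant mass flow, maximum temperature).
    Assumes the designs are pairwise distinct."""
    front = []
    for d in designs:
        # d is dominated if some other design is no worse in every objective
        # (and, being distinct, strictly better in at least one).
        dominated = any(
            other != d and all(o <= v for v, o in zip(d, other))
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

schemes = [(1.0, 900.0), (1.2, 850.0), (1.1, 950.0), (0.9, 1000.0)]
print(pareto_front(schemes))  # (1.1, 950.0) is dominated by (1.0, 900.0)
```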
Procedia PDF Downloads 352
641 A Discrete Event Simulation Model For Airport Runway Operations Optimization (Case Study)
Authors: Awad Khireldin, Colin Law
Abstract:
Runways are the major infrastructure of airports around the world, and efficient runway operations are key to ensuring that airports run smoothly with minimal delays. Many factors affect the efficiency of runway operations, such as aircraft wake separation, the runway system configuration, the fleet mix, and the runway separation distance. This paper addresses how to maximize runway operations using a Discrete Event Simulation model. A case study of Cairo International Airport (CIA) is developed to maximize the utilization of its three parallel runways using a simulation model. Different scenarios are designed in which each runway can be assigned to arrivals, departures, or mixed operations. A benchmarking study is also included to compare the actual with the proposed results and to spot potential improvements. The simulation model shows a significant difference in utilization and delays between the actual and proposed operations, and several short- and long-term recommendations can be provided to airport management to increase efficiency and reduce delays, such as upgrading the airport slot coordination from Level 1 to Level 2 in the short term. In the long run, the airport can discuss the possibility of moving to International Air Transport Association (IATA) slot coordination Level 3, as more flights are expected to be handled by the airport. Technological advancements, such as approach radar and a full airside simulation model, could further improve airport performance, and the airport is recommended to review its standard operating procedures with the appropriate authorities.
Also, the airport can adopt a future operational plan to accommodate the forecast additional traffic density in case a fourth terminal building is added to increase the airport capacity.
Keywords: airport performance, runway, discrete event simulation, capacity, airside
Procedia PDF Downloads 137
640 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much supply chain activity is addressing risk, and the CSC is no different: the delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead time variability from suppliers, transportation time variability, and disruptions at the job site. Current notions of managing and controlling the CSC involve focusing on one project at a time, with a push-based material ordering system driven by the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology is proposed here that couples feed-forward control, in the form of advance order placement with a supplier, with local feedback control, in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects; the idea is to add this feature to a highly constrained environment and explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models whose objectives minimize the total cost of the CSC. To illustrate how this methodology can be used in practice, and the types of information that can be gleaned from it, it is tested on a number of cases based on the real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control supply chain, transshipment
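A deterministic cost-minimization model of the kind described, here reduced to a single product, one supplier, an external storage site, and two projects, can be sketched as a linear program with SciPy. All costs and demands below are illustrative assumptions, not data from the Kuwait case study:

```python
from scipy.optimize import linprog

# Decision variables (all >= 0):
# x = [s->P1 direct, s->P2 direct, s->storage, storage->P1, storage->P2]
cost = [4.0, 4.0, 1.0, 2.0, 2.0]   # assumed unit shipping costs

# Each project's demand must be met (direct supply plus transshipment).
A_eq = [[1, 0, 0, 1, 0],           # project P1 balance
        [0, 1, 0, 0, 1]]           # project P2 balance
b_eq = [10, 5]                     # assumed demands

# The storage site cannot ship out more than it receives from the supplier.
A_ub = [[0, 0, -1, 1, 1]]
b_ub = [0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)  # 45.0: routing everything through storage (1 + 2 < 4) is optimal
```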
Procedia PDF Downloads 122
639 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis
Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci
Abstract:
Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels: renewable energy sources coupled with hydrogen as an energy vector and with carbon capture and conversion technologies. Among the technologies investigated in recent decades, biomass gasification has acquired great interest owing to the possibility of obtaining low-cost, CO₂-negative hydrogen from the large variety of organic wastes available everywhere. Upstream and downstream treatments have been studied in order to maximize the hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels of the technologies they are coupled with, and capture and convert the carbon dioxide. However, studies that analyse a whole process comprising all of these technologies are still missing. To fill this gap, the present paper investigates the combination of hydrothermal carbonization (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion in a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste, modelled by means of Aspen Plus software. The proposed model aims to identify and optimise the performance of the plant by varying the operating parameters (such as temperature, CaO/biomass ratio, and separation efficiency). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission for hydrogen to be considered "clean", which was set at 3 kg CO₂/kg H₂. The hydrogen yield of the whole plant is 250 g H₂/kg of biomass.
Keywords: biomass gasification, hydrogen, Aspen Plus, sorption-enhanced gasification
Procedia PDF Downloads 81
638 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, with web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that, for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R, together with different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA), will be presented, and the results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
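As a sketch of the quantile-forecasting idea (Fast Forest Quantile in the paper), here is an analogous scikit-learn example using gradient boosting with a quantile loss on synthetic hourly data; the features, data, and model choice are illustrative, not the Azure ML pipeline used in the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly records: hour of day and temperature drive the load.
hour = rng.integers(0, 24, size=500)
temp = 15.0 + 10.0 * rng.random(500)
load = 50.0 + 2.0 * temp + 5.0 * np.sin(hour * np.pi / 12) + rng.normal(0, 2, 500)
X = np.column_stack([hour, temp])

# Fitting two quantile models yields a forecast interval, not just a point.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(X, load)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X, load)

x_new = np.array([[18, 22.0]])  # 6 pm at 22 degrees C
print(lo.predict(x_new)[0], hi.predict(x_new)[0])  # lower / upper load bounds
```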
Procedia PDF Downloads 299
637 Research on Localized Operations of Multinational Companies in China
Authors: Zheng Ruoyuan
Abstract:
With the rapid development of economic globalization and increasingly fierce international competition, multinational companies have shifted and innovated their investment strategies and actively promoted localization, which has become the main trend in their development. The large-scale entry of multinational companies into China has a history of more than 20 years. With the sustained and steady growth of China's economy and the optimization of the investment environment, multinational companies' investment in China has expanded rapidly, which has also had an important impact on the Chinese economy: it has promoted employment, increased foreign exchange reserves, improved institutions, and brought a great deal of high technology and advanced management experience; but it has also brought challenges and survival pressure to China's local enterprises. In recent years, multinational companies have gradually come to regard China as an important part of their global strategies and have actively promoted localization strategies there, covering production, marketing, and research and development. Many multinational companies have achieved good results in localized operations in China: not only have their profits continued to improve, but they have also established a good corporate image and brand in China, which has greatly improved their competitiveness in the international market. However, some multinational companies have encountered difficulties in their localized operations in China. Against the background of economic globalization, this article comprehensively applies multinational-company theory, strategic management theory, and business management theory, takes data and facts as its entry point, and combines representative cases to conduct a systematic study of the localized operations of multinational companies in China.
At the same time, for each specific link in the operations of multinational companies, we provide some inspirations and references for multinational enterprises.
Keywords: localization, business management, multinational, marketing
Procedia PDF Downloads 52
636 Optimization of Beneficiation Process for Upgrading Low Grade Egyptian Kaolin
Authors: Nagui A. Abdel-Khalek, Khaled A. Selim, Ahmed Hamdy
Abstract:
Kaolin is a naturally occurring ore predominantly containing the mineral kaolinite in addition to some gangue minerals. Typical impurities present in kaolin ore are quartz, iron oxides, titanoferrous minerals, mica, feldspar, organic matter, etc. The main coloring impurity, particularly in the ultrafine size range, is the titanoferrous minerals. Kaolin is used in many industrial applications, such as the sanitary ware, tableware, ceramic, paint, and paper industries, each of which has certain specifications. For most industrial applications, kaolin should be processed to obtain a refined clay that matches the standard specifications; for example, kaolin used in the paper and paint industries needs to be of high brightness and low yellowness. Egyptian kaolin is not subjected to any beneficiation process: the Egyptian companies apply selective mining followed, in some localities, by crushing and size reduction only. Such low-quality kaolin can be used in refractory and pottery production, but not in the whiteware and paper industries. This paper studies the amenability to beneficiation of an Egyptian kaolin ore from the El-Teih locality, Sinai, to make it suitable for different industrial applications. Attrition scrubbing and classification followed by magnetic separation are applied to remove the associated impurities: attrition scrubbing and classification separate the coarse silica and feldspars, while wet high-intensity magnetic separation removes colored contaminants such as iron oxide and titanium oxide. Different variables affecting the magnetic separation process, such as solids percentage, magnetic field, matrix loading capacity, and retention time, are studied.
The results indicated that a substantial decrease in the iron oxide (from 1.69% to 0.61%) and TiO₂ (from 3.1% to 0.83%) contents, as well as an improvement in the iso-brightness (from 63.76% to 75.21%) and whiteness (from 79.85% to 86.72%) of the product, can be achieved.
Keywords: kaolin, titanoferrous minerals, beneficiation, magnetic separation, attrition scrubbing, classification
Procedia PDF Downloads 361
635 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behaviour of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using different standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The results achieved strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a consistent reduction of the computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
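The GMM stage of the method, posterior-maximum assignment of pixels by gray level, can be sketched with scikit-learn on a synthetic image; note that the library's default initialization stands in here for the firefly-based initialization the paper proposes:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic grayscale image: a dark left half and a bright right half.
dark = rng.normal(60.0, 8.0, size=(64, 32))
bright = rng.normal(180.0, 8.0, size=(64, 32))
image = np.clip(np.hstack([dark, bright]), 0, 255)

# Fit a 2-component mixture to the gray levels (each pixel is a 1-D sample);
# fit_predict assigns every pixel to the component with maximal posterior.
gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

# The two halves end up in different clusters.
print(labels[0, 0], labels[0, -1])
```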
Procedia PDF Downloads 217
634 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostics and Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions, and this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexities inherent in such datasets, for instance imbalanced classes, it becomes extremely difficult to build models with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem pervades industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss.
In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is then developed by exploring aircraft historical data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate its feasibility and effectiveness using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches, and the results also demonstrate its strength in handling multiclass imbalanced datasets, with good performance compared to other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
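A minimal undersampling-based ensemble in the spirit of the hybrid method described, where each member trains on all minority examples plus a balanced random subset of the majority class, can be sketched as follows (synthetic data and a plain decision tree stand in for the authors' aircraft data and model):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic imbalanced data: 950 "healthy" records vs. 50 replacement events.
X = np.vstack([rng.normal(0.0, 1.0, (950, 4)), rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 950 + [1] * 50)
minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]

# Each ensemble member sees a balanced resample: all minority samples plus
# an equally sized random draw from the majority class.
members = []
for seed in range(10):
    draw = np.random.default_rng(seed).choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, draw])
    members.append(DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X[idx], y[idx]))

# Majority vote across members improves minority-class recall.
votes = np.mean([m.predict(X) for m in members], axis=0)
pred = (votes >= 0.5).astype(int)
print(f"minority-class recall: {(pred[y == 1] == 1).mean():.2f}")
```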
Procedia PDF Downloads 175
633 Chromatographic Preparation and Performance on Zinc Ion Imprinted Monolithic Column and Its Adsorption Property
Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong
Abstract:
The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions of ions and functional monomers with the ions as the template; it has a high level of recognition of the ionic template. To prepare a monolithic column by in-situ polymerization, the template, functional monomers, cross-linking agent, and initiator are dissolved in solution and the mixture is injected into the column tube, where it polymerizes at a certain temperature; after the synthesis, the unreacted template and solution are washed out. Monolithic columns are easy to prepare, low in consumption, and cost-effective, with fast mass transfer; besides, they have many chemical functions. However, monolithic columns have some problems in practical application, such as low efficiency and inaccurate quantitative analysis caused by wide, tailing peaks; moreover, the choice of polymerization systems is limited, and theoretical foundations are lacking. Thus, the optimization of the components and preparation methods is an important research direction. During the preparation of ion-imprinted monolithic columns, the pore-forming agent makes the polymer porous, which influences the physical properties of the polymer; what is more, it directly determines the stability and selectivity of the polymerization reaction. The compounds generated in the pre-polymerization reaction directly determine the recognition and screening capabilities of the imprinted polymer; thus, the choice of pore-forming agent is critical in the preparation of imprinted monolithic columns.
This article mainly investigates how different pore-forming agents affect the zinc-ion enrichment performance of the zinc ion imprinted monolithic column.Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent
Procedia PDF Downloads 215
632 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations
Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili
Abstract:
Reinforced concrete shear walls and vertical plate-like elements play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls of different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced response: force-based and displacement-based EDPs. As the shear wall-to-frame stiffness ratio increases, force-based EDPs increase while displacement-based EDPs decrease. Examining the distribution of shear walls from both force and displacement perspectives, model G, which has the highest stiffness ratio and concentrates stiffness at the building's center, intensifies the induced forces; this configuration necessitates additional reinforcement, leading to a conservative design. Conversely, model C, with the lowest stiffness ratio, distributes stiffness towards the periphery, minimizing the induced shear forces and bending moments and representing an optimal scenario with maximal performance and minimal strength requirements.Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance
Procedia PDF Downloads 58
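One of the displacement-based EDPs named above, the inter-story drift ratio, is computed from the lateral displacement profile as (u_i - u_(i-1)) / h_i for each story. A minimal sketch of that calculation follows; the function name and numeric values are illustrative, not taken from the study.

```python
def interstory_drift_ratios(displacements, story_heights):
    """Inter-story drift ratio per story.

    displacements: lateral displacement at each floor level, ground first (m),
                   so len(displacements) == len(story_heights) + 1.
    story_heights: height of each story (m).
    """
    return [
        (displacements[i + 1] - displacements[i]) / h
        for i, h in enumerate(story_heights)
    ]

# Example: a 3-story pushover displacement profile with 3 m stories.
u = [0.0, 0.012, 0.030, 0.051]
h = [3.0, 3.0, 3.0]
drifts = interstory_drift_ratios(u, h)
```

For this profile the drift ratios grow up the height (0.4%, 0.6%, 0.7%), the kind of distribution that stiffness placement, as compared across the seven models, is meant to control.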