Search results for: nonlinear optimization with constraints
649 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps
Authors: Arkadiusz Zurek
Abstract:
The potential of using drones is visible in many areas of logistics, especially in monitoring and controlling processes. The technologies implemented in the last decade offer new possibilities to companies that until now had not even considered them, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool for Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a drone-based method for measuring the weight of goods in a selected link of the clothing supply chain. The purpose of this article is to analyze the causes of errors in traditional measurements and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines for eliminating the causes of these events in a drone-based measurement process. In a real environment, work was carried out to determine the volume and weight of textiles, including, among others, weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning, and a photogrammetric flight using an unmanned aerial vehicle. As a result of the analysis of the measurement data obtained in the facility, the volume and weight of the assortment, and the accuracy of their determination, were established. The article presents how such heaps are currently measured and what adverse events occur, describes the photogrammetric techniques of this type so far performed by drones outdoors for the inventory of wind farms or construction sites, and compares them with the measurement system for the aforementioned textile heap inside a large-format facility.
Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0
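The core of the measurement chain above reduces to estimating the heap volume from the photogrammetric point cloud and converting it to weight via the sampled average density. The following minimal sketch illustrates that conversion with first-order error propagation; the volume, density, and accuracy figures are hypothetical placeholders, not data from the study.

```python
# Hypothetical figures for illustration only: a heap volume estimated from a
# photogrammetric point cloud and an average density from a weighed sample.
heap_volume_m3 = 142.0          # from laser scanning / photogrammetry
volume_rel_error = 0.03         # assumed 3% volume determination accuracy
density_kg_m3 = 185.0           # from weighing a textile sample
density_rel_error = 0.05        # assumed 5% density sampling accuracy

weight_kg = heap_volume_m3 * density_kg_m3
# First-order (root-sum-square) propagation of the relative errors:
weight_rel_error = (volume_rel_error**2 + density_rel_error**2) ** 0.5

print(f"estimated weight: {weight_kg:.0f} kg "
      f"+/- {weight_kg * weight_rel_error:.0f} kg")
```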
648 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks with the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs, based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method, based on Markov decision processes, that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables impose major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating-cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating-cost estimation across several microgrid configurations. BCQ is an offline RL algorithm that is known to be data-efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
Keywords: batch-constrained reinforcement learning, control, design, optimal
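To make the batch-constrained idea concrete, the sketch below shows a discrete, tabular variant of BCQ: Q-values are updated only from a fixed buffer, and the bootstrap maximization is restricted to actions the behavior data actually supports. The state/action sizes, threshold, and random transitions are illustrative stand-ins for the microgrid EMS problem; the paper's agents are deep RL, not tabular.

```python
import numpy as np

n_states, n_actions = 50, 4
rng = np.random.default_rng(0)
# Fixed buffer of (state, action, reward, next_state), e.g. collected from
# EMS agents trained in similar microgrid environments.
buffer = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
           float(rng.random()), int(rng.integers(n_states)))
          for _ in range(5000)]

# Estimate the behavior policy from the batch (action frequencies per state).
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in buffer:
    counts[s, a] += 1
bc = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

Q = np.zeros((n_states, n_actions))
gamma, alpha, tau = 0.95, 0.1, 0.3      # tau: batch-constraint threshold
for s, a, r, s2 in buffer * 5:          # several passes over the batch
    best = bc[s2].max()
    # Constrain the bootstrap to actions sufficiently likely in the batch:
    allowed = bc[s2] >= tau * best if best > 0 else np.ones(n_actions, bool)
    target = r + gamma * Q[s2, allowed].max()
    Q[s, a] += alpha * (target - Q[s, a])
```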
647 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor
Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti
Abstract:
In a product-industrialization perspective, the end product shall always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) in a perspective of system industrialization, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on representing as closely as possible the needs of the customer: firstly, by providing a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities; secondly, by simulating a vast number of possibilities across a wide range of components, so that the model becomes a dynamic tool for powerful analysis and optimization. Thus, the model is no longer merely a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.
Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking
646 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview
Authors: Andres Diaz Garcia
Abstract:
The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages allied to different disciplines, related to microbiological, engineering, pharmaceutical-chemistry, legal, and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scale-up of the production process of the active ingredient that will be used in downstream unit operations. However, all the aspects mentioned must take into account many biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia) that led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant-growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and on the challenges that were overcome through expertise in the area of engineering. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1E09 CFU (colony-forming units)/ml in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (H/D ratios of the bioreactor) and on operational criteria based on a minimum dissolved-oxygen concentration, taking into account the differences in process-control capacity between the laboratory and pilot scales. Currently, the bioproduct obtained through this technological process is in the registration stage in Colombia for cape gooseberry fruits for export.
Keywords: biochemical engineering, liquid fermentation, plant growth promoting, scale-up process
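As a toy illustration of the geometric criterion mentioned above, the sketch below sizes vessels of increasing volume at a constant height-to-diameter (H/D) ratio. The volumes and the H/D value are assumptions made for illustration, not Agrosavia's actual figures.

```python
import math

def tank_dimensions(volume_m3, h_over_d=2.0):
    # V = pi/4 * D^2 * H = pi/4 * D^3 * (H/D)  ->  D = (4V / (pi * H/D))^(1/3)
    d = (4 * volume_m3 / (math.pi * h_over_d)) ** (1 / 3)
    return d, h_over_d * d

for v in (0.005, 0.1):          # e.g. 5 L laboratory scale -> 100 L pilot scale
    d, h = tank_dimensions(v)
    print(f"V = {v * 1000:.0f} L -> D = {d * 100:.1f} cm, H = {h * 100:.1f} cm")
```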
645 Biodiesel Production from Yellow Oleander Seed Oil
Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa
Abstract:
Energy is essential and plays an important role in the overall development of a nation; the global economy literally runs on energy. The use of fossil fuels as an energy source is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment; renewable and carbon-neutral biodiesel is therefore necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats is not able to replace fossil fuels entirely: fossil fuels remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: to produce biodiesel from yellow oleander seed oil and to test the yield of biodiesel using different catalysts (KOH and NaOH). Methodology: oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and an oil expeller (in bulk). The FFA (free fatty acid) content of the oil is checked and, depending on the FFA value, either a two-step or a single-step process is followed to produce biodiesel. The two-step process includes esterification and transesterification; the single-step process includes only transesterification. The properties of the biodiesel are checked, and an engine test is carried out on the biodiesel produced. Result: biodiesel quality parameters such as yield (85% and 90%), flash point (171°C and 176°C), fire point (195°C and 198°C), and viscosity (4.9991 and 5.21 mm²/s) were obtained for biodiesel from the seed oil of Thevetia peruviana produced using KOH and NaOH, respectively. Thus the seed oil of Thevetia peruviana is a viable feedstock for good-quality fuel. The outcomes of the project are a substitute for conventional fuel, a reduced petro-diesel requirement, and improved performance in terms of emissions. Future prospects: optimization of biodiesel production using the response surface method.
Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources
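The FFA-dependent choice between the single-step and two-step routes can be expressed as a simple rule. In the sketch below, the 2% FFA threshold is a commonly cited rule of thumb, assumed here because the abstract does not state the cut-off used.

```python
def biodiesel_route(ffa_percent: float) -> list[str]:
    # High-FFA oils need acid esterification before base transesterification;
    # the 2% threshold is an assumed rule of thumb, not from the paper.
    steps = []
    if ffa_percent > 2.0:
        steps.append("acid-catalysed esterification (reduce FFA)")
    steps.append("base-catalysed transesterification (KOH or NaOH)")
    return steps

for ffa in (1.2, 6.5):
    print(f"FFA {ffa}% ->", " + ".join(biodiesel_route(ffa)))
```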
644 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, within the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data-flow and traceability requirements were analyzed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to execute the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
Keywords: automotive industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
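One of the central MSA computations such a tool automates is the gage repeatability and reproducibility (R&R) study. The sketch below implements the classic average-and-range estimate on synthetic data; the study layout (3 operators x 5 parts x 2 trials) and the d2 constants are illustrative approximations rather than the project's actual algorithms, which follow the MSA-4 manual.

```python
import numpy as np

rng = np.random.default_rng(1)
parts = rng.normal(10.0, 0.5, 5)                     # true part dimensions
data = np.array([[[p + bias + rng.normal(0, 0.05)
                   for _ in range(2)]                # 2 trials
                  for p in parts]                    # 5 parts
                 for bias in (0.0, 0.02, -0.03)])    # 3 operators

# Repeatability (equipment variation, EV): mean range / d2 (2 trials)
ev = np.mean(np.ptp(data, axis=2)) / 1.128
# Reproducibility (appraiser variation, AV): range of operator means / d2*,
# corrected by the EV contribution (n = 5 parts, r = 2 trials)
x_diff = np.ptp(data.mean(axis=(1, 2)))
av = max((x_diff / 1.91) ** 2 - ev**2 / (5 * 2), 0.0) ** 0.5
grr = (ev**2 + av**2) ** 0.5
print(f"EV = {ev:.4f}, AV = {av:.4f}, GRR = {grr:.4f}")
```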
643 FWGE Production From Wheat Germ Using Co-culture of Saccharomyces cerevisiae and Lactobacillus plantarum
Authors: Valiollah Babaeipour, Mahdi Rahaie
Abstract:
Food supplements are rich in specific nutrients and bioactive compounds that eliminate free radicals and improve cellular metabolism; the major bioactive compounds are found in bran and cereal sprouts. Secondary metabolites of probiotic microorganisms have antioxidant properties and can be used alone or in combination with chemotherapy and radiation therapy to treat cancer. Biologically active compounds such as the benzoquinone derivatives extracted from fermented wheat germ extract (FWGE) have several positive effects on the overall state of human health and strengthen the immune system. The present work describes the batch fermentation of raw wheat germ for FWGE production through a co-culture process using the probiotic strains Saccharomyces cerevisiae and Lactobacillus plantarum, and the possibility of using the solid waste. To increase production efficiency, the important factors in the optimization of the fermentation process were first selected and studied using a factorial statistical design: stirring rate (120 to 200 rpm), solids-to-solvent dilution (1:8 to 1:12), fermentation time (16 to 24 hours), and strain-to-wheat-germ ratio (20% to 50%); co-culture was then performed to increase the yield of 2,6-dimethoxybenzoquinone (2,6-DMBQ). Since 2,6-DMBQ is the main biologically active compound in fermented wheat germ extract, UV-Vis analysis was performed to confirm its presence in the final product. In addition, the 2,6-DMBQ of some products was isolated on a non-polar C-18 column and quantified using high-performance liquid chromatography (HPLC). Based on our findings, it can be concluded that 2,6-DMBQ increased by 28.9% in the Saccharomyces cerevisiae–Lactobacillus plantarum co-culture compared with the pure culture of Saccharomyces cerevisiae (from 1.89 mg/g to 2.66 mg/g).
Keywords: wheat germ, FWGE, Saccharomyces cerevisiae, Lactobacillus plantarum, co-culture, 2,6-DMBQ
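The screening step above is a standard two-level factorial design over four factors. The sketch below enumerates such a design from the quoted factor ranges; treating each range as a low/high pair is an assumption made for illustration.

```python
from itertools import product

# Factor levels taken from the ranges quoted in the abstract; the 2^4
# full-factorial enumeration itself is generic.
factors = {
    "stirring_rpm": (120, 200),
    "dilution": ("1:8", "1:12"),
    "time_h": (16, 24),
    "strain_ratio_pct": (20, 50),
}
design = list(product(*factors.values()))      # 2^4 = 16 runs
for i, run in enumerate(design, 1):
    print(i, dict(zip(factors, run)))
```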
642 Enhanced Photocatalytic H₂ Production from H₂S on Metal-Modified CdS-ZnS Semiconductors
Authors: Maali-Amel Mersel, Lajos Fodor, Otto Horvath
Abstract:
Photocatalytic H₂ production by H₂S decomposition is regarded as an environmentally friendly process for producing carbon-free energy through direct solar energy conversion. For this purpose, sulphide-based materials have been widely used as photocatalysts, due to their excellent solar spectrum response and high photocatalytic activity. The loading of proper co-catalysts, based on cheap and earth-abundant materials, onto those semiconductors has been shown to play an important role in improving their efficiency. In this research, a CdS-ZnS composite was studied because of its controllable band gap and excellent performance for H₂ evolution under visible-light irradiation. The effects of modifying this photocatalyst with different types of materials and the influence of the preparation parameters on its H₂ production activity were investigated. A CdS-ZnS composite with enhanced photocatalytic activity for H₂ production was synthesized from ammine complexes. Two types of modification were used: compounds of Ni-group metals (NiS, PdS, and Pt) were applied as co-catalysts on the surface of the CdS-ZnS semiconductor, while NiS, MnS, CoS, Ag₂S, and CuS were used as dopants in the bulk of the catalyst. It was found that 0.1% of noble metals did not remarkably influence the photocatalytic activity, while modification with 0.5% of NiS was shown to be more efficient in the bulk than on the surface. Modification with the other types of metals resulted in a decrease in the rate of H₂ production, while co-doping seems to be more promising. The preparation parameters (such as the amount of ammonia used to form the ammine complexes and the order of the preparation steps, together with the hydrothermal treatment) were also found to strongly influence the rate of H₂ production. SEM, EDS, and DRS analyses were performed to reveal the structure of the most efficient photocatalysts. Moreover, the detection of the conduction-band electron on the surface of the catalyst was also investigated. The excellent photoactivity of the CdS-ZnS catalysts, with and without modification, encourages further investigation to enhance hydrogen generation by optimization of the reaction conditions.
Keywords: H₂S, photoactivity, photocatalytic H₂ production, CdS-ZnS
641 Generative Design Method for Cooled Additively Manufactured Gas Turbine Parts
Authors: Thomas Wimmer, Bernhard Weigand
Abstract:
The improvement of gas turbine efficiency is one of the main drivers of research and development in the gas turbine market. This has led to elevated gas turbine inlet temperatures beyond the melting point of the utilized materials, so the turbine parts need to be actively cooled in order to withstand these harsh environments. However, the usage of compressor air as coolant decreases the overall gas turbine efficiency; thus, coolant consumption needs to be minimized in order to gain the maximum advantage from higher turbine inlet temperatures. Therefore, sophisticated cooling designs for gas turbine parts aim to minimize the coolant mass flow. New design space is becoming accessible as additive manufacturing matures toward industrial usage for the creation of hot-gas-flow-path parts. By making use of this technology, more efficient cooling schemes can be manufactured. In order to find such cooling schemes, a generative design method is being developed. It randomly generates cooling schemes that adhere to a set of rules, which ensure the sanity of each design. A huge number of different cooling schemes are generated and implemented in a simulation environment, where they are validated. The criteria for the fitness of a cooling scheme are coolant mass flow, maximum temperature, and temperature gradients. In this way the whole design space is sampled, and a Pareto-optimal front can be identified. The approach is applied to a flat plate, which resembles a simplified section of a hot-gas-flow-path part. Realistic boundary conditions are applied, and the thermal barrier coating is accounted for in the simulation environment. The resulting cooling schemes are presented and compared to representative conventional cooling schemes. Further development of this method can give access to cooling schemes of higher complexity with even better performance, making fuller use of the available design space.
Keywords: additive manufacturing, cooling, gas turbine, heat transfer, heat transfer design, optimization
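Once every generated scheme has been scored in the simulation environment, identifying the Pareto-optimal front is a purely algorithmic step. The sketch below shows it for two minimization objectives on random placeholder scores; the paper's actual fitness values (coolant mass flow, maximum temperature, temperature gradients) would simply replace them.

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.random((200, 2))          # [coolant_mass_flow, max_temperature]

def pareto_mask(points: np.ndarray) -> np.ndarray:
    # A point is Pareto-optimal if no other point is <= in every objective
    # and strictly < in at least one.
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            mask[i] = False
    return mask

front = scores[pareto_mask(scores)]
print(f"{len(front)} Pareto-optimal schemes out of {len(scores)}")
```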
640 A Discrete Event Simulation Model For Airport Runway Operations Optimization (Case Study)
Authors: Awad Khireldin, Colin Law
Abstract:
Runways are the major infrastructure of airports around the world, and their efficient operation is key to ensuring that airports run smoothly with minimal delays. Many factors affect the efficiency of runway operations, such as aircraft wake separation, the runway system configuration, the fleet mix, and the runway separation distance. This paper addresses how to maximize runway operations using a discrete event simulation model. A case study of Cairo International Airport (CIA) is developed to maximize the utilization of three parallel runways using a simulation model. Different scenarios were designed in which each runway could be assigned to arrival, departure, or mixed operations. A benchmarking study was also included to compare the actual with the proposed results and to spot the potential improvements. The simulation model shows that there is a significant difference in utilization and delays between the actual and the proposed configurations. Several recommendations can be provided to airport management, in the short and long term, to increase efficiency and reduce delays, combined with different operational scenarios: upgrading the airport slot coordination from Level 1 to Level 2 in the short term and, in the long run, considering an increase of the International Air Transport Association (IATA) slot coordination to Level 3, as more flights are expected to be handled by the airport. Technological advancements, such as approach radar within a full airside simulation model, could further improve airport performance, and the airport is recommended to review its standard operating procedures with the appropriate authorities. The airport can also adopt a future operational plan to accommodate the forecasted additional traffic density in case a fourth terminal building is added to increase the airport's capacity.
Keywords: airport performance, runway, discrete event simulation, capacity, airside
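A discrete event simulation of runway operations can be prototyped in a few lines. The sketch below, a deliberately simplified stand-in for the paper's model, treats three parallel runways as a shared resource with Poisson arrivals and a fixed runway-occupancy time; all parameter values are invented, not CIA data.

```python
import random
import simpy

RUNWAYS, OCCUPANCY_MIN, MEAN_GAP_MIN, SIM_MIN = 3, 1.5, 1.0, 600
delays = []

def aircraft(env, runway):
    arrival = env.now
    with runway.request() as req:
        yield req                        # queue for a free runway
        delays.append(env.now - arrival)
        yield env.timeout(OCCUPANCY_MIN) # runway occupancy time

def source(env, runway):
    while True:                          # Poisson arrival process
        yield env.timeout(random.expovariate(1 / MEAN_GAP_MIN))
        env.process(aircraft(env, runway))

random.seed(7)
env = simpy.Environment()
runway = simpy.Resource(env, capacity=RUNWAYS)
env.process(source(env, runway))
env.run(until=SIM_MIN)
print(f"movements: {len(delays)}, "
      f"mean delay: {sum(delays) / len(delays):.2f} min")
```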
639 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much supply-chain activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead-time variability from suppliers, transportation-time variability, and disruptions at the job site. Current notions of managing and controlling the CSC involve focusing on one project at a time, with a push-based material-ordering system based on the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology was proposed that coordinates the feed-forward control of advance order placement with a supplier with a local feedback control, in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology can be used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment and to explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with the objective of minimizing the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned, it is tested on a number of cases based on the real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control supply chain, transshipment
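A minimal deterministic cost-minimization model in this spirit can be written as a linear program. The sketch below covers one supplier, two projects, an external storage site, a single product, and a single period; all costs and demands are invented for illustration, and the paper's multi-product formulation would extend the same structure.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [supply->P1, supply->P2, supply->store,
#                          store->P1, store->P2]
cost = np.array([4.0, 5.0, 2.0, 1.5, 1.5])   # unit shipping/handling costs
demand = np.array([80.0, 60.0])              # project material requirements

A_eq = np.array([
    [1, 0, 0, 1, 0],     # deliveries to project 1 meet its demand
    [0, 1, 0, 0, 1],     # deliveries to project 2 meet its demand
    [0, 0, 1, -1, -1],   # storage balance: inflow equals outflow
])
b_eq = np.array([demand[0], demand[1], 0.0])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print("flows:", res.x, "total cost:", res.fun)
```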
638 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis
Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci
Abstract:
Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels: renewable energy sources coupled with hydrogen as an energy vector, and carbon capture and conversion technologies. Among the technologies investigated in the last decades, biomass gasification has acquired great interest owing to the possibility of obtaining low-cost, CO₂-negative hydrogen production from the large variety of organic wastes available everywhere. Upstream and downstream treatments have been studied in order to maximize the hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels for the technologies they are coupled with, and capture and convert the carbon dioxide. However, studies that analyze a whole process made up of all those technologies are still missing. To fill this gap, the present paper investigates the combination of hydrothermal carbonization (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste, by means of Aspen Plus software. The proposed model aims to identify and optimize the performance of the plant by varying the operating parameters (such as temperature, CaO/biomass ratio, and separation efficiency). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission for hydrogen to be considered 'clean', which was set at 3 kg CO₂/kg H₂. The hydrogen yield of the whole plant is 250 g H₂/kg of biomass.
Keywords: biomass gasification, hydrogen, Aspen Plus, sorption enhanced gasification
637 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, and on web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R, and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA), will be presented, and the results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
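As a minimal stand-in for the boosted-tree experiments described above (the paper's models run in R and Azure ML), the sketch below fits a gradient-boosted quantile regressor in scikit-learn to synthetic hourly load data built from one calendar feature and one weather feature.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for microgrid logs: load depends on hour of day and
# temperature plus noise. Real features would add wind, humidity, dew point.
rng = np.random.default_rng(0)
hours = rng.integers(0, 24, 2000)
temp = rng.normal(15, 8, 2000)
load = (50 + 10 * np.sin(hours / 24 * 2 * np.pi)
        + 0.8 * temp + rng.normal(0, 2, 2000))

X = np.column_stack([hours, temp])
model = GradientBoostingRegressor(loss="quantile", alpha=0.5, n_estimators=200)
model.fit(X[:1600], load[:1600])            # train on the first 80%
pred = model.predict(X[1600:])              # forecast the held-out hours
print(f"MAE: {np.abs(pred - load[1600:]).mean():.2f} kW")
```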
636 Research on Localized Operations of Multinational Companies in China
Authors: Zheng Ruoyuan
Abstract:
With the rapid development of economic globalization and increasingly fierce international competition, multinational companies have shifted and innovated their investment strategies and actively promoted localization. Localization strategies have become the main trend in the development of multinational companies. The large-scale entry of multinational companies into China has a history of more than 20 years. With the sustained and steady growth of China's economy and the optimization of the investment environment, multinational companies' investment in China has expanded rapidly, which has also had an important impact on the Chinese economy: promoting employment, increasing foreign exchange reserves, improving institutions, and bringing in a great deal of high technology and advanced management experience; but it has also brought challenges and survival pressure to China's local enterprises. In recent years, multinational companies have gradually come to regard China as an important part of their global strategies and have begun to actively promote localization strategies there, covering production, marketing, scientific research and development, and more. Many multinational companies have achieved good results with localized operations in China: not only have their profits continued to improve, but they have also established a good corporate image and brand in China, which has greatly improved their competitiveness in the international market. However, some multinational companies have encountered difficulties in their localized operations in China. Against the background of economic globalization, this article comprehensively draws on the theory of multinational companies, strategic management theory, and business management theory, takes data and facts as the entry point, and combines them with representative typical cases to conduct a systematic study of the localized operations of multinational companies in China. At the same time, for each specific link of multinational companies' operations, we provide multinational enterprises with some insights and references.
Keywords: localization, business management, multinational, marketing
635 Optimization of Beneficiation Process for Upgrading Low Grade Egyptian Kaolin
Authors: Nagui A. Abdel-Khalek, Khaled A. Selim, Ahmed Hamdy
Abstract:
Kaolin is a naturally occurring ore predominantly containing the mineral kaolinite in addition to some gangue minerals. Typical impurities present in kaolin ore are quartz, iron oxides, titanoferrous minerals, mica, feldspar, organic matter, etc. The main coloring impurity, particularly in the ultrafine size range, is the titanoferrous minerals. Kaolin is used in many industrial applications, such as the sanitary ware, tableware, ceramic, paint, and paper industries, each of which has certain specifications. For most industrial applications, kaolin should be processed to obtain a refined clay that matches standard specifications. For example, kaolin used in the paper and paint industries needs to have high brightness and low yellowness. Egyptian kaolin is not subjected to any beneficiation process, and the Egyptian companies apply selective mining followed, in some localities, by crushing and size reduction only. Such low-quality kaolin can be used in refractory and pottery production, but not in the whiteware and paper industries. This paper aims to study the amenability to beneficiation of an Egyptian kaolin ore from the El-Teih locality, Sinai, so that it becomes suitable for different industrial applications. Attrition scrubbing and classification followed by magnetic separation are applied to remove the associated impurities: attrition scrubbing and classification are used to separate the coarse silica and feldspars, while wet high-intensity magnetic separation is applied to remove colored contaminants such as iron oxide and titanium oxide. Different variables affecting the magnetic separation process, such as the solids percentage, magnetic field, matrix loading capacity, and retention time, are studied. The results indicate that a substantial decrease in the iron oxide (from 1.69% to 0.61%) and TiO₂ (from 3.1% to 0.83%) contents, as well as an improvement in the iso-brightness (from 63.76% to 75.21%) and whiteness (from 79.85% to 86.72%) of the product, can be achieved.
Keywords: kaolin, titanoferrous minerals, beneficiation, magnetic separation, attrition scrubbing, classification
634 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behavior of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using different standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a consistent reduction of the computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
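A minimal sketch of the second stage described above: a Gaussian Mixture Model fitted to grayscale intensities, with pixels assigned to clusters by maximum posterior responsibility. The cluster means that initialize the GMM are assumed to come from the firefly-based histogram search; here they are hard-coded placeholders over synthetic pixel data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bimodal grayscale intensities standing in for a real image.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 10, 4000),
                         rng.normal(170, 15, 6000)]).reshape(-1, 1)

init_means = np.array([[60.0], [170.0]])     # from the firefly stage (assumed)
gmm = GaussianMixture(n_components=2, means_init=init_means)
gmm.fit(pixels)                              # EM parameter estimation
labels = gmm.predict(pixels)                 # argmax of the responsibilities
print(np.bincount(labels), gmm.means_.ravel())
```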
633 Chromatographic Preparation and Performance of Zinc Ion Imprinted Monolithic Column and Its Adsorption Property
Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong
Abstract:
The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions between ions and functional monomers with the ions used as the template; the resulting material has a high level of recognition for the ionic template. The preparation of a monolithic column by in-situ polymerization requires dissolving the template, functional monomers, cross-linking agent, and initiating agent in solution, injecting the mixture into the column tube, and letting it polymerize at a certain temperature; after the synthesis, the unreacted template and the solution are washed out. Monolithic columns are easy to prepare, have low consumption, and are cost-effective with fast mass transfer; besides, they offer many chemical functionalities. However, monolithic columns also have some problems in practical application: efficiency is low, and quantitative analysis cannot be performed accurately because the peaks are wide and show tailing; moreover, the choice of polymerization systems is limited, and theoretical foundations are lacking. Thus, the optimization of components and preparation methods is an important research direction. During the preparation of ionic imprinted monolithic columns, the pore-forming agent makes the polymer develop its porous structure, which influences the physical properties of the polymer; what is more, it directly determines the stability and selectivity of the polymerization reaction. The compounds generated in the pre-polymerization reaction directly determine the recognition and screening capabilities of the imprinted polymer; thus the choice of the pore-forming agent is quite critical in the preparation of imprinted monolithic columns. This article mainly focuses on how different pore-forming agents affect the zinc ion enrichment performance of a zinc ion imprinted monolithic column.
Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent
632 Bilingual Books in British Sign Language and English: The Development of an E-Book
Authors: Katherine O'Grady-Bray
Abstract:
For some deaf children, reading books can be a challenge. Frank Barnes School (FBS) provides guided reading time with Teachers of the Deaf, in which they read books with deaf children using a bilingual approach. The vocabulary and context of the story are explained to deaf children in BSL so that they develop skills bridging the English and BSL languages. However, the success of this practice is only achieved if the person is fluent in both languages. FBS piloted a scheme to convert an Oxford Reading Tree (ORT) book into an e-book that can be read using tablets, so that deaf readers at FBS have access to both languages (BSL and English) during lessons and outside the classroom. The pupils receive guided reading sessions with a Teacher of the Deaf every morning; these one-to-one sessions give pupils the opportunity to learn how to bridge both languages, e.g. how to translate English to BSL and vice versa. Generally, due to our pupils' lack of access to incidental learning, their opportunities for gaining new information about the world around them are limited, which highlights the importance of quality time to scaffold their language development. In some cases, there is a shortfall of parental support at home due to poor communication skills or an unawareness of how to interact with deaf children: some families have a limited knowledge of sign language or simply do not have the learning environment and strategies needed for language development with deaf children. As the majority of our pupils' preferred language is BSL, we use it to teach reading and writing English; if this is not mirrored at home, there is limited opportunity for joint reading sessions. Development of the e-book required planning and technical work, and the overall production took time, as video footage needed to be shot and then edited individually for each page. There were various technical considerations, such as choosing an appropriate background color so as not to draw attention away from the signer. Appointing a signer with the required high level of BSL was essential, and the language and pace of the sign language were important considerations, as they had to match the age and reading level of the book. When translating the English text to BSL, careful consideration was given to the nonlinear nature of BSL and the differences in language structure and syntax. The e-book was produced using Apple's 'iBooks Author' software, which allowed video footage of the signer to be embedded on pages opposite the text and illustration. This enabled BSL translation of the content of the text and the inferences of the story. An interpreter was used to directly 'voice over' the signer rather than the actual text. The structure and layout of the e-book aim to allow parents to 'read' with their deaf child, which helps to develop both languages. From observations, the use of e-books has given pupils confidence and motivation in their reading, developing skills that bridge both BSL and English, and has led to more effective reading time with parents.
Keywords: bilingual book, e-book, BSL and English, bilingual e-book
631 Contribution of Word Decoding and Reading Fluency to Reading Comprehension in Young Typical Readers of the Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, instruction in schools mainly focuses on children's word-decoding abilities. However, skilled readers must master all the components of reading: word decoding, reading fluency, and comprehension. Nevertheless, the relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed opinions on the relative contributions of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages remains unexplored. Aim and Objectives: The aim of the study was to explore the roles of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada, in the age range of 5.6 to 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage is accompanied by five questions to assess reading comprehension. Pseudoword-decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within meaningful words while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before collecting data from the study sample. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: The descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. Grade III children performed highest and Grade I lowest, with Grade II intermediate between the two; the trend indicates that reading skills gradually improve across grades. Pearson's correlation coefficients showed moderate and highly significant (p=0.00) positive correlations between the variables, indicating the interdependency of all three components required for reading. The hierarchical regression analysis revealed that 37% of the variance in reading comprehension was explained by pseudoword decoding, and this contribution was highly significant. On the subsequent entry of the reading fluency measure, there was no significant change in R-squared, the change being only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword-decoding skills contribute more significantly to reading comprehension than reading fluency during the initial years of schooling in children learning to read the Kannada language.
Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
630 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g. sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion, and communication protocols has become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based methods in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, the statistical performance features of this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
629 Developing Urban Design and Planning Approach to Enhance the Efficiency of Infrastructure and Public Transportation in Order to Reduce GHG Emissions
Authors: A. Rostampouryasouri, A. Maghoul, S. Tahersima
Abstract:
The rapid growth of urbanization and the subsequent increase in city populations have resulted in the destruction of the environment to cater to the needs of citizens. The industrialization of urban life has led to the production of pollutants, which has significantly contributed to the rise of air pollution. Infrastructure can have both positive and negative effects on air pollution; these effects are complex and depend on various factors such as the type of infrastructure, its location, and its context. This study examines the effects of infrastructure on air pollution, drawing on a range of empirical evidence from Iran and China. Our analysis of the data focuses on the following concepts: 1) urban design and planning principles and practices; 2) infrastructure efficiency and optimization strategies; 3) public transportation systems and their environmental impact; 4) GHG emissions reduction strategies in urban areas; and 5) case studies and best practices in sustainable urban development. This paper employs a mixed-methods approach with a focus on developmental and applicative purposes; the mixed-methods approach combines quantitative and qualitative research methods to provide a more comprehensive understanding of the research topic. A group of 20 architectural specialists and experts, proficient in the research, design, and implementation of green architecture projects, were interviewed in a systematic and purposeful manner. The research method was based on content analysis using MAXQDA 2020 software. The findings suggest that policymakers and urban planners should consider the potential impacts of infrastructure on air pollution and take measures to mitigate the negative effects while maximizing the positive ones. This includes adopting a nature-based approach to urban planning and infrastructure development, investing in information infrastructure, and promoting modern logistics transport infrastructure.
Keywords: GHG emissions, infrastructure efficiency, urban development, urban design
628 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa
Authors: Ana Belon Hercilla
Abstract:
The study was carried out at the 'Reactivos Jeans' company in the city of Arequipa, whose main business is the laundering of garments at an industrial level. In 2012 the company initiated actions to provide a dry-cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. The company also took the initiative to use a new green-washing technology that had not yet been introduced in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the Systematic Layout Planning methodology was used, dividing this study into four stages. The first stage is information gathering and evaluation of the company's initial situation, for which a description was made of the areas, facilities, and initial equipment, the plant layout, the production process, and the flows of the major operations. The second stage is the application of engineering techniques that allow the recording and analysis of procedures, such as the flow diagram, the route diagram, the DOP (process flowchart), and the DAP (process analysis diagram). Then the planning of the general layout is carried out: at this stage, the proximity factors of the areas are established, and the path diagram (TRA) and the Relational Diagram of Activities (DRA) are developed. To obtain the General Grouping Diagram (DGC), the information is further complemented by a time study, and the Guerchet method is used to calculate the space requirements of each area. Finally, the redesigned plant layout is presented and the improvement is implemented, making it possible to obtain a model much more efficient than the initial design. The results indicate that the implementation of the new machinery, the adaptation of the plant facilities, and the relocation of equipment reduced the production cycle time by 75.67%, shortened routes by 68.88%, reduced the number of activities during the process by 40%, and eliminated waits and storage entirely (100%).
Keywords: redesign, time optimization, industrial laundry, greenwashing
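The Guerchet method mentioned above estimates the area each workstation needs as the sum of a static surface (Ss = length x width), a gravitational surface (Sg = Ss x N, with N the number of usable sides), and an evolution surface (Se = k(Ss + Sg)). The sketch below applies it to made-up laundry equipment; the machine dimensions and the k coefficient are hypothetical, not the company's data.

```python
K = 0.5                                   # hypothetical mobility coefficient

machines = [
    # (name, length_m, width_m, usable_sides, units)
    ("washer", 2.0, 1.2, 2, 4),
    ("dryer", 1.8, 1.0, 1, 3),
    ("press", 1.5, 0.9, 2, 2),
]

total = 0.0
for name, length, width, sides, units in machines:
    ss = length * width           # static surface
    sg = ss * sides               # gravitational surface
    se = K * (ss + sg)            # evolution (circulation) surface
    area = (ss + sg + se) * units
    total += area
    print(f"{name}: {area:.2f} m2")
print(f"total required area: {total:.2f} m2")
```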
627 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices can be used. Electrolytic hydrogen production with an electrolyser is considered a promising option, since it is a clean energy carrier (zero emissions). Choosing flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring many benefits, such as reducing the cost of hydrogen and helping to balance the electric system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price can enhance the reduction of the hydrogen cost: a standard single-tier tariff during the whole day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. This study looks at Derna city, which is located on the coast of the Mediterranean Sea (32°46′0″N, 22°38′0″E) and has a high potential wind resource. Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used, in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.
Keywords: hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen
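The electricity component of the hydrogen price under the three tariff scenarios can be compared with a few lines of arithmetic. In the sketch below, the electrolyser's specific consumption (about 50 kWh per kg of H₂) and the 8-hour off-peak window are assumed values, not results from the study, and capital and storage costs are ignored.

```python
KWH_PER_KG = 50.0                 # assumed electrolyser specific consumption
OFF_PEAK_H = 8                    # assumed off-peak window per day

def cost_per_kg(off_peak_price, peak_price, off_peak_share):
    # off_peak_share: fraction of daily production made in the cheap window
    return KWH_PER_KG * (off_peak_share * off_peak_price
                         + (1 - off_peak_share) * peak_price)

scenarios = {
    "1: flat tariff (12 p/kWh)": cost_per_kg(0.12, 0.12, 0.0),
    "2: off-peak only (5 p/kWh)": cost_per_kg(0.05, 0.12, 1.0),
    "3: mixed tariff": cost_per_kg(0.05, 0.12, OFF_PEAK_H / 24),
}
for name, c in scenarios.items():
    print(f"{name}: GBP {c:.2f}/kg H2 (electricity only)")
```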
626 In Silico Exploration of Quinazoline Derivatives as EGFR Inhibitors for Lung Cancer: A Multi-Modal Approach Integrating 3D-QSAR, ADMET, Molecular Docking, and Molecular Dynamics Analyses
Authors: Mohamed Moussaoui
Abstract:
A series of thirty-one potential inhibitors of the epidermal growth factor receptor (EGFR) kinase, derived from quinazoline, underwent 3D-QSAR analysis using the CoMFA and CoMSIA methodologies. Training and test sets of quinazoline derivatives were used to construct and validate the QSAR models, respectively, with dataset alignment performed using the lowest-energy conformer of the most active compound. The best-performing CoMFA and CoMSIA models demonstrated impressive determination coefficients, with R² values of 0.981 and 0.978, respectively, and leave-one-out cross-validated determination coefficients, Q², of 0.645 and 0.729, respectively. Furthermore, external validation using a test set of five compounds yielded predicted determination coefficients, R²test, of 0.929 and 0.909 for CoMFA and CoMSIA, respectively. Building upon these promising results, eighteen new compounds were designed and assessed for drug-likeness and ADMET properties through in silico methods. Additionally, molecular docking studies were conducted to elucidate the binding interactions between the selected compounds and the enzyme. Detailed molecular dynamics simulations were performed to analyze the stability, conformational changes, and binding interactions of the quinazoline derivatives with the EGFR kinase, providing deeper insights into the dynamic behavior of the compounds within the active site. This comprehensive analysis enhances the understanding of quinazoline derivatives as potential anti-cancer agents and provides valuable insights for lead optimization in the early stages of drug discovery, particularly for developing highly potent anticancer therapeutics.
Keywords: 3D-QSAR, CoMFA, CoMSIA, ADMET, molecular docking, quinazoline, molecular dynamics, EGFR inhibitors, lung cancer, anticancer
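The leave-one-out Q² statistic reported above is computed as Q² = 1 − PRESS/SS, where each compound's activity is predicted by a model refitted without that compound. The sketch below demonstrates the procedure with an ordinary least-squares model on synthetic descriptors standing in for the CoMFA/CoMSIA fields (the actual models use partial least squares).

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(31, 4))                 # 31 compounds, 4 descriptors
y = X @ np.array([1.0, -0.5, 0.3, 0.8]) + rng.normal(0, 0.3, 31)

press = 0.0
for i in range(len(y)):
    mask = np.arange(len(y)) != i            # leave compound i out
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    press += (y[i] - X[i] @ coef) ** 2       # prediction error for i

q2 = 1 - press / np.sum((y - y.mean()) ** 2)
print(f"Q^2 (LOO) = {q2:.3f}")
```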
Procedia PDF Downloads 48625 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus
Authors: Mahul Bhattacharyya, Niladri Sekhar Dash
Abstract:
The present research paper is an experimental attempt to investigate the nature of register variation across three major text domains, namely social, cultural, and political texts, collected from a corpus of Bangla printed mass media texts. The study uses a moderate-sized corpus of Bangla mass media text containing nearly one million words, collected from different media sources such as newspapers, magazines, advertisements, and periodicals. Analysis of the corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across the domains. First, the subject domains of the texts are classified according to two parameters, namely 'Genre' and 'Text Type'. Next, empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, covering both function and content words. The method of comparative-cum-contrastive matching of lexical load across domains is invoked through word frequency counts to track how domain-specific words and terms may serve as decisive indicators in specifying textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is unreliable for this purpose, as its lexicological identity has no bearing on the identification of subject domains. It therefore becomes necessary for language users to anchor on certain domain-specific lexical items to recognize a text as belonging to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation. In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types that mark their relation to the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors involved, which will give language users better insight into the methods they employ in text comprehension, as well as provide a systematic frame for designing text identification strategies for language learners. The availability of a large amount of Bangla media text data makes it possible to reach accurate conclusions with a reasonable degree of reliability and authenticity. This kind of corpus-based analysis is particularly relevant for a resource-poor language like Bangla, as no attempt has previously been made to understand how the structure and texture of Bangla mass media texts vary due to linguistic and extra-linguistic constraints that are actively operational in specific text domains. Since mass media language is assumed to be the most recent representation of actual language use, this study is expected to show how Bangla news texts reflect the thoughts of the society and how they leave a strong impact on the thought processes of the speech community.Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation
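A minimal sketch of the word-frequency-based domain matching described above: content words that are frequent in one domain sub-corpus but absent from the others are taken as decisive domain indicators. The domain texts and the function-word list are placeholders; a real experiment would load the Bangla corpus files and a Bangla function-word list.

```python
# Sketch: comparative matching of lexical load across domain sub-corpora.
from collections import Counter

# Placeholder domain sub-corpora; a real run would read the Bangla corpus files.
corpus = {
    "social":    "text of the social domain sub-corpus ...",
    "cultural":  "text of the cultural domain sub-corpus ...",
    "political": "text of the political domain sub-corpus ...",
}
function_words = {"the", "of", "a", "and", "for"}  # assumed function-word list

def top_content_words(text, n=50):
    """Return the n most frequent content words of a text."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    counts = Counter(t for t in tokens if t not in function_words)
    return {w for w, _ in counts.most_common(n)}

profiles = {dom: top_content_words(txt) for dom, txt in corpus.items()}

# Words frequent in one domain but absent from the others act as
# decisive indicators of that domain.
for dom, words in profiles.items():
    others = set().union(*(p for d, p in profiles.items() if d != dom))
    print(dom, sorted(words - others))
```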
Procedia PDF Downloads 174624 Review on Implementation of Artificial Intelligence and Machine Learning for Controlling Traffic and Avoiding Accidents
Authors: Neha Singh, Shristi Singh
Abstract:
Accidents involving motor vehicles are more likely to cause serious injuries and fatalities, and they bring a host of other persistent problems, such as the recurring loss of life and goods. To address these issues, appropriate measures must be implemented, such as establishing an autonomous incident detection system that makes use of machine learning and artificial intelligence. This article provides an overview of artificial intelligence and machine learning in autonomous incident detection systems aimed at reducing traffic accidents. The paper explores the major issues, prospective solutions, and uses of artificial intelligence and machine learning in road transportation systems for minimising traffic accidents, and discusses additional, fresh, and developing approaches that make accidents in the transportation industry less frequent. The study is structured around the following subtopics: traffic management using machine learning and artificial intelligence, and incident detection with these two technologies. The Internet of Vehicles and vehicular ad hoc networks are elaborated, as well as the use of wireless communication technologies such as 5G networks and the application of machine learning and artificial intelligence to the planning of road transportation systems. Safety remains the primary concern of road transportation: route optimization, cargo volume forecasting, predictive fleet maintenance, real-time vehicle tracking, and traffic management, according to the review's key conclusions, are essential for ensuring the safety of road transportation networks. In addition to highlighting research trends, unanswered problems, and key research conclusions, the study also discusses the difficulties in applying artificial intelligence to road transport systems. The work can serve as a resource for planning and managing road transportation systems.Keywords: artificial intelligence, machine learning, incident detector, road transport systems, traffic management, automatic incident detection, deep learning
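As a toy illustration of the kind of ML-based incident detector surveyed above, the sketch below trains a classifier on per-road-segment traffic features; the feature names, thresholds, and data are entirely synthetic assumptions, not taken from any system cited in the review.

```python
# Sketch: classify road-segment observations as incident / no-incident.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
speed = rng.normal(80, 15, n)           # mean segment speed, km/h (synthetic)
flow = rng.normal(1200, 300, n)         # vehicles per hour (synthetic)
occupancy = rng.normal(0.15, 0.05, n)   # detector occupancy fraction (synthetic)
# Synthetic label: incidents show low speed together with high occupancy.
incident = ((speed < 55) & (occupancy > 0.2)).astype(int)

X = np.column_stack([speed, flow, occupancy])
X_tr, X_te, y_tr, y_te = train_test_split(X, incident, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```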
Procedia PDF Downloads 114623 Optimization and Kinetic Analysis of the Enzymatic Hydrolysis of Oil Palm Empty Fruit Bunch To Xylose Using Crude Xylanase from Trichoderma Viride ITB CC L.67
Authors: Efri Mardawati, Ronny Purwadi, Made Tri Ari Penia Kresnowati, Tjandra Setiadi
Abstract:
Palm oil empty fruit bunches (EFB) are a lignocellulosic waste from the crude palm oil industry, mainly composed of cellulose (≈43%), hemicellulose (≈23%), and lignin (≈20%). Xylan, a polymer of the pentose sugar xylose, is the most abundant component of hemicellulose in the plant cell wall. Xylose, in turn, can be used as a raw material for the production of a wide variety of chemicals, such as xylitol, which is extensively used in food, pharmaceutical, and thin-coating applications. Currently, xylose is mostly produced from xylan via chemical hydrolysis. However, these processes are normally conducted at high temperature and pressure, which is costly, and the required downstream processing is relatively complex. As an alternative, enzymatic hydrolysis of xylan to xylose offers an environmentally friendly biotechnological process performed at ambient temperature and pressure, with high specificity and at low cost. This process is catalysed by xylanolytic enzymes that can be produced by fungal species such as Aspergillus niger, Penicillium chrysogenum, and Trichoderma reesei. The fungus used to produce the crude xylanase enzyme in this study is T. viride ITB CC L.67. The purposes of this research are to study the influence of EFB pretreatment on the enzymatic hydrolysis process, to optimize the temperature and pH of hydrolysis, to examine the influence of substrate and enzyme concentrations, to follow the dynamics of the hydrolysis process, and finally to study its kinetics. Xylose, the product of the enzymatic hydrolysis, was analyzed by HPLC. The results show that thermal pretreatment of EFB enhances the enzymatic hydrolysis. The hydrolysis is well described by the Michaelis–Menten kinetic model, with kinetic parameters obtained from the experimental data.Keywords: oil palm empty fruit bunches (EFB), xylose, enzymatic hydrolysis, kinetic modelling
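A minimal sketch of the Michaelis–Menten fit mentioned at the end of the abstract, v = Vmax·S/(Km + S); the substrate concentrations and initial rates below are synthetic stand-ins for the HPLC-derived xylose data, not the study's measurements.

```python
# Sketch: estimate Vmax and Km from rate-vs-substrate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # substrate, g/L (assumed)
v = np.array([0.18, 0.30, 0.45, 0.62, 0.71, 0.76])  # initial rates (assumed)

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=(1.0, 2.0))
print(f"Vmax = {vmax:.3f}, Km = {km:.3f}")
```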
Procedia PDF Downloads 389622 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer
Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz
Abstract:
The growing interest of chemists in metal-organic coordination polymers (MOCPs) derives primarily from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. We are currently devoting considerable effort to finding a proper method for synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers, and to characterizing the obtained Pb(II) compounds with respect to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing a wide range of coordination numbers; ii) the stereoactivity of the 6s² lone electron pair, leading to hemidirected or holodirected geometry; iii) a flexible coordination environment; and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate-ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties that point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL, and single-crystal X-ray diffraction methods. Considering the primary Pb–O bonds, Pb1 and Pb3 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increase: Pb1 exhibits a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibit holodirected tricapped trigonal prismatic geometries, and Pb3 exhibits a holodirected bicapped trigonal prismatic geometry. The Pb(II) lone-pair stereoactivity was confirmed by DFT calculations. The 2D structure is expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, as confirmed by Hirshfeld surface analysis. These interactions improve the rigidity of the structure and facilitate charge and energy transfer between metal centres, making the polymer a promising luminescent compound.Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions
Procedia PDF Downloads 145621 Influence of Flight Design on Discharging Profiles of Granular Material in Rotary Dryer
Authors: I. Benhsine, M. Hellou, F. Lominé, Y. Roques
Abstract:
During the manufacture of fertilizer, water must be added for granulation purposes. The water content is then removed or reduced using rotary dryers, which are commonly used to dry wet granular materials and are usually fitted with lifting flights. The transport of granular material occurs when particles cascade from the lifting flights and fall into the air stream; each cascade consists of a lifting and a falling cycle. Lifting flights are thus of great importance for the transport of granular materials along the dryer, and they also enhance the contact between solid particles and the air stream. Optimization of the drying process requires an understanding of the behavior of granular materials inside a rotary dryer. Different approaches exist to study this movement; the most common are based on empirical formulations or on studying the movement of the bulk material. In the present work, we use the Discrete Element Method (DEM) to understand the behavior of each particle in the cross-section of the dryer. In this paper, we focus on the hold-up, the cascade patterns, and the falling time and falling length of the particles leaving the flights. We use two-segment flights with three different profiles: a straight flight (180° between the two segments), an angled flight (150°), and a right-angled flight (90°). The flight profile significantly affects the movement of the particles in the dryer. Changing the flight angle changes the flight capacity, which leads to a different discharging profile and thus affects the hold-up in the flight. When the flight angle is reduced, the range of the discharge angle increases, leading to a cascade pattern that is more uniform in time. The falling length and falling time of the particles also increase up to a maximum value and then start decreasing. Moreover, the results show increases in falling length and falling time of up to 70% and 50%, respectively, when using a right-angled flight instead of a straight one.Keywords: discrete element method, granular materials, lifting flight, rotary dryer
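A minimal sketch of the falling cycle analysed above, treating a particle leaving the flight tip as a projectile released from the drum shell with the local tangential velocity; the drum radius, rotation speed, and landing level are assumed values, not the study's DEM parameters.

```python
# Sketch: falling time and falling length of a particle discharged from a flight.
import numpy as np

G = 9.81      # gravitational acceleration, m/s^2
R = 1.0       # drum radius, m (assumed)
OMEGA = 0.5   # drum angular speed, rad/s (assumed)

def falling_time_and_length(theta_deg):
    """Particle released at discharge angle theta (from horizontal) on the shell."""
    theta = np.radians(theta_deg)
    x0, y0 = R * np.cos(theta), R * np.sin(theta)                   # release point
    vx, vy = -OMEGA * R * np.sin(theta), OMEGA * R * np.cos(theta)  # tangential velocity
    # Time to fall to the assumed bed level y = -R:
    # solve y0 + vy*t - 0.5*G*t^2 = -R for the positive root.
    t = (vy + np.sqrt(vy**2 + 2 * G * (y0 + R))) / G
    # Straight-line distance from release point to landing point.
    length = np.hypot(vx * t, y0 + R)
    return t, length

for angle in (30, 60, 90):
    t, L = falling_time_and_length(angle)
    print(f"discharge angle {angle:3d} deg: t = {t:.2f} s, falling length = {L:.2f} m")
```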
Procedia PDF Downloads 327620 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites has been accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and to use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is work that has traditionally been carried out by skilled workers on the basis of years of experience, and it is one of the tasks that is difficult for inexperienced workers to perform on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed while suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final shape of the structure. Using this porosity evaluation, the simulator can determine how well the user is able to install the blocks. Voxelization is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and will compare the user's block installation with the appropriate installation found by the algorithm.Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
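A minimal sketch of the voxel-based porosity evaluation described above, using axis-aligned boxes as stand-ins for the block meshes; the grid resolution, envelope size, and block placements are illustrative assumptions (a Unity implementation would rasterise the actual meshes rather than boxes).

```python
# Sketch: voxelise placed blocks and score them against the ideal envelope.
import numpy as np

VOXEL = 0.25  # voxel edge length, m (assumed)

def voxelise_box(grid, origin, centre, size):
    """Mark the voxels covered by an axis-aligned box (stand-in for a block mesh)."""
    centre, size = np.asarray(centre), np.asarray(size)
    lo = np.floor((centre - size / 2 - origin) / VOXEL).astype(int)
    hi = np.ceil((centre + size / 2 - origin) / VOXEL).astype(int)
    lo = np.clip(lo, 0, grid.shape)
    hi = np.clip(hi, 0, grid.shape)
    grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True

# Envelope of the ideal final shape: a 4 m x 2 m x 2 m prism (assumed).
origin = np.zeros(3)
envelope = np.ones((16, 8, 8), dtype=bool)

blocks = np.zeros_like(envelope)
for centre in [(1.0, 1.0, 0.5), (2.0, 1.0, 0.5), (3.0, 1.0, 1.5)]:  # placed 1 m cubes
    voxelise_box(blocks, origin, centre, size=(1.0, 1.0, 1.0))

# Porosity as defined in the abstract: block volume inside the structure
# divided by the volume of the ideal final shape.
porosity = (blocks & envelope).sum() / envelope.sum()
print(f"porosity = {porosity:.2f}")
```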
Procedia PDF Downloads 103