Search results for: facility performance evaluation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17965


1795 Using High Performance Concrete in Finite Element Modeling of Grouted Connections for Offshore Wind Turbine Structures

Authors: A. Aboubakr, E. Fehling, S. A. Mourad, M. Omar

Abstract:

Wind energy is one of the most effective renewable sources, offshore wind energy in particular, although offshore wind technology is currently more costly to deploy. It is widely expected that offshore wind energy can become very cheap once infrastructure and research improve, and the trend is therefore to construct offshore wind farms, i.e., wind farms built in bodies of water, to generate electricity from wind. This has led to intensive research aimed at improving the supporting infrastructure. The most important part of an offshore wind turbine structure is the foundation and its connection to the tower; this is the main difference between onshore and offshore structures. The grouted connection between the foundation and the tower is the most critical part of constructing an offshore wind turbine, since it must transfer the loads safely from the tower to the foundation and the soil. In this paper, finite element analyses have been carried out to study the behaviour of grouted connections for offshore wind turbine structures. The ATENA program has been used for non-linear simulation of the real structural behaviour, capturing crushing, cracking, contact between the two materials, and steel yielding. A calibration of the materials used in the simulation has been carried out to ensure that the ATENA model represents the materials accurately; it was performed by comparing ATENA results with experimental results, using three simple patch-test models with different properties. The study concludes that the calibration shows good agreement between the material behaviour in ATENA and the experimental results.
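The calibration step described above amounts to comparing simulated and experimental responses at matching points and checking the error. A minimal sketch of such a check is below; the stress values are invented for illustration and are not taken from the ATENA study.

```python
# Hypothetical calibration check: compare simulated and experimental
# peak stresses from patch tests and report the per-point relative error.
# All numbers are illustrative, not the study's data.

def relative_errors(experimental, simulated):
    """Per-point relative error of simulated vs experimental values."""
    return [abs(s - e) / abs(e) for e, s in zip(experimental, simulated)]

# Peak stresses (MPa) from three patch tests (made-up values)
experimental = [92.0, 118.5, 140.2]
simulated = [90.1, 121.0, 137.8]

errors = relative_errors(experimental, simulated)
print([round(e, 3) for e in errors])  # each point within a few percent
```

A model would typically be accepted as calibrated when such errors stay within a pre-agreed tolerance across all validation tests.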

Keywords: grouted connection, 3D modeling, finite element analysis, offshore wind energy turbines, stresses

Procedia PDF Downloads 512
1794 Building Tutor and Tutee Pedagogical Agents to Enhance Learning in Adaptive Educational Games

Authors: Ogar Ofut Tumenayu, Olga Shabalina

Abstract:

This paper describes the application of two types of pedagogical agent with different functions in an adaptive educational game, with the aim of improving learning and enhancing interactivity in Digital Educational Games (DEG). The idea can mitigate some known problems of DEG, such as the isolation felt in game-based learning, by introducing tutor and tutee pedagogical agents. We present an analysis of a learning companion interacting in a peer-tutoring environment as a step toward improving social interaction in educational games. The tutor and tutee agents use different interventions and interactive approaches: the tutor agent tracks the learner’s activities and infers the learning state, while the tutee agent initiates interactions with the learner at appropriate times and in appropriate manners. To motivate the learner, prevent mistakes, and clarify a game task, the tutor agent provides assistance through a help dialog, while the tutee agent provides collaborative assistance through a hint tool. We demonstrated the idea in a prototype called the “Pyramid Programming Game,” a 2D game developed with libGDX. The game’s pyramid component represents a programming task presented to the player as a puzzle. During gameplay, the agents can instruct, direct, inspire, and communicate emotions, and can rapidly adapt the instructional pattern to the learner’s performance and knowledge. The pyramid must be destroyed to win the game. The game also illustrates the benefits of educational agents such as the tutor agent (TrA) and tutee agent (TeA) in assisting and motivating students. Our findings support the idea that pedagogical agent functionality should be split into an instructional agent and a learner’s companion agent in order to enhance interactivity in game-based environments.
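The division of labour between the two agents can be sketched in a few lines: one agent observes and infers a learning state, the other decides when to intervene. The class and method names below are illustrative, not taken from the Pyramid Programming Game code.

```python
# Minimal sketch of the dual-agent idea: a tutor agent that tracks learner
# activity and infers a coarse learning state, and a tutee agent that
# offers a hint only when the learner appears to be struggling.

class TutorAgent:
    """Tracks learner actions and infers a coarse learning state."""
    def __init__(self):
        self.mistakes = 0

    def observe(self, action_correct):
        if not action_correct:
            self.mistakes += 1

    def learning_state(self):
        return "struggling" if self.mistakes >= 3 else "progressing"


class TuteeAgent:
    """Initiates a collaborative hint when the tutor reports trouble."""
    def intervene(self, state):
        return "hint" if state == "struggling" else None


tutor, tutee = TutorAgent(), TuteeAgent()
for correct in [True, False, False, False]:
    tutor.observe(correct)
print(tutee.intervene(tutor.learning_state()))  # prints "hint"
```

In a real game loop the tutor would also feed the inferred state back into the adaptive difficulty logic, while the tutee times its interventions to avoid interrupting productive play.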

Keywords: tutor agent, tutee agent, learner’s companion interaction, agent collaboration

Procedia PDF Downloads 55
1793 Effect of Carbon Nanotubes on Ultraviolet and Immersion Stability of Diglycidyl Ether of Bisphenol A Epoxy Coating

Authors: Artemova Anastasiia, Shen Zexiang, Savilov Serguei

Abstract:

The marine environment is aggressive due to a number of factors, such as moisture, temperature, winds, ultraviolet radiation, chloride ion concentration, oxygen concentration, pollution, and biofouling, all of which contribute to marine corrosion. Protective organic coatings provide protection either through the barrier action of the layer, which is limited by permeability to water and oxygen, or through active corrosion inhibition and cathodic protection by pigments in the coating. Carbon nanotubes can provide not only a barrier effect but also a passivation effect by adsorbing molecular oxygen and hydroxyl, chloride, and sulphate anions. Multiwall carbon nanotube composites also offer important properties such as mechanical strength, non-cytotoxicity, outstanding thermal and electrical conductivity, and very strong absorption of ultraviolet radiation. Samples of stainless steel (316L) coated with epoxy resin containing carbon nanotube-based pigments were exposed to UV irradiation (340 nm) and immersed in sodium chloride solution for 1000 h, and their corrosion behavior in 3.5 wt% sodium chloride (NaCl) solution was investigated. Experimental results showed that the corrosion current decreased significantly in the presence of carbon nanotube-based materials, especially nitrogen-doped ones, in the composite coating. The importance of the structure and composition of the pigment materials was established, and the protection mechanism was described. Finally, the effect of nitrogen doping on corrosion behavior was investigated. Pigment-polymer crosslinking improves the coating performance: relative to pure epoxy, the corrosion rate decreases from 5.7E-05 to 1.4E-05 mm/yr for the coating without any degradation, by more than a factor of 6 for the coating after ultraviolet degradation, and by more than 16% for the coatings after immersion degradation.
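The headline numbers above imply a simple reduction factor for the undegraded coating, which can be checked directly; only the two rates quoted in the abstract are used.

```python
# Reduction factor implied by the abstract: corrosion rate drops from
# 5.7e-5 mm/yr (pure epoxy) to 1.4e-5 mm/yr (nanotube-pigmented coating,
# no degradation).

rate_pure_epoxy = 5.7e-5   # mm/yr
rate_cnt_coating = 1.4e-5  # mm/yr

reduction_factor = rate_pure_epoxy / rate_cnt_coating
print(round(reduction_factor, 2))  # ≈ 4.07, i.e. roughly a four-fold improvement
```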

Keywords: corrosion, coating, carbon nanotubes, degradation

Procedia PDF Downloads 146
1792 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening natural phenomena is the earthquake, with its terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge of trends in earthquake occurrence. The recording and interpretation of data from the worldwide network of seismological stations makes this possible. From the analysis of recorded earthquake data, earthquake parameters and source parameters can be computed and earthquake catalogues prepared. These catalogues provide information on origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude, and other details of the recorded earthquakes, and are used for seismic hazard estimation. Manual interpretation and analysis of these data is tedious and time consuming. A geographic information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. Integrated GIS technology permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS thus provides a powerful tool for displaying outputs and lets users see the graphical distribution of impacts under different earthquake scenarios and assumptions. In the present study, earthquake data for the whole world have been compiled in Visual Basic on the ArcGIS platform so that they can easily be used for further analysis by earthquake engineers. The basic data on time of occurrence, location, and size of each earthquake have been compiled for querying on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region.
The user interface also includes the seismic hazard information already worked out under the GSHAP program, giving seismic hazard for the world in terms of probability of exceedance for definite return periods. The seismic zones of the Indian region are included from IS 1893:2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time: • analysis of soil parameters and their effect • microzonation information • seismic hazard and strong ground motion • soil liquefaction and its effect on the surrounding area • impacts of liquefaction on buildings and infrastructure • occurrence of future earthquakes and their effect on the existing soil • propagation of ground vibration due to the occurrence of an earthquake. The GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used in real time for present and future infrastructure development, e.g., multi-storey structures, irrigation dams and their components, and hydropower.
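The catalogue queries described above boil down to filtering recorded events on parameters such as magnitude, depth, and location. A minimal sketch follows; the catalogue entries and field names are illustrative, not the system's actual data model.

```python
# Illustrative earthquake-catalogue query: filter events by magnitude,
# focal depth, and a latitude/longitude window. Entries are made up.

catalogue = [
    {"time": "2001-01-26", "lat": 23.4, "lon": 70.3,   "depth_km": 16, "mag": 7.7},
    {"time": "2004-12-26", "lat": 3.3,  "lon": 95.9,   "depth_km": 30, "mag": 9.1},
    {"time": "2010-04-04", "lat": 32.3, "lon": -115.3, "depth_km": 10, "mag": 7.2},
]

def query(events, min_mag=0.0, max_depth_km=700,
          lat_range=(-90, 90), lon_range=(-180, 180)):
    """Return events matching simple magnitude/depth/location criteria."""
    return [e for e in events
            if e["mag"] >= min_mag
            and e["depth_km"] <= max_depth_km
            and lat_range[0] <= e["lat"] <= lat_range[1]
            and lon_range[0] <= e["lon"] <= lon_range[1]]

shallow_major = query(catalogue, min_mag=7.5, max_depth_km=25)
print([e["time"] for e in shallow_major])  # ['2001-01-26']
```

In the actual system such filters would be issued against the ArcGIS layer rather than an in-memory list, but the query semantics are the same.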

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 301
1791 Experimental Investigation of Visual Comfort Requirements in Garment Factories and Identification of Cost-Saving Opportunities

Authors: M. A. Wijewardane, S. A. N. C. Sudasinghe, H. K. G. Punchihewa, W. K. D. L. Wickramasinghe, S. A. Philip, M. R. S. U. Kumara

Abstract:

Visual comfort is one of the major parameters used to measure human comfort in any environment. If the illuminance provided in a working environment does not meet the workers' visual comfort needs, it leads to eye strain, fatigue, headache, stress, accidents and, ultimately, poor productivity. However, improving lighting does not necessarily mean adding more light: unnecessarily high illuminance levels also cause poor visual comfort and health risks, and greater power consumption for lighting raises energy costs. In this study, visual comfort and the illuminance required by workers in the textile/apparel industry were studied for different tasks (i.e., cutting, sewing, and knitting) at the workplace. Experiments were designed to identify the optimum illuminance requirement as fabric colour and type varied, and the energy-saving potential of controlling illuminance to match workforce requirements was analysed. Visual performance of workers during the sewing operation was studied using the Landolt ring experiment. It revealed that around 36.3% of the workers prefer to work at illuminance levels between 601 lux and 850 lux, while 45.9% are unhappy working below 600 lux or above 850 lux. Moreover, more than 65% of the workers dissatisfied with the existing illuminance levels on the production floors reported headache, eye disease, or both due to poor visual comfort. In addition, the energy analysis revealed that energy savings of 5%, 10%, 24%, 8% and 16% can be anticipated for red, blue, yellow, black and white fabrics respectively, when 800 lux is the prevailing illuminance level for the sewing operation.
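The energy-saving logic above can be sketched with a back-of-the-envelope dimming estimate. The assumption that lighting power scales roughly linearly with illuminance is mine (approximately true for dimmable LED fixtures), and the lux values are illustrative rather than the study's measurements.

```python
# Rough fractional energy saving from dimming an over-lit workstation
# down to the illuminance the task actually requires, assuming power
# scales roughly linearly with illuminance (an assumption, stated above).

def saving_fraction(existing_lux, required_lux):
    """Fractional energy saving from dimming to the required level."""
    if required_lux >= existing_lux:
        return 0.0  # no saving if the space is already at or below target
    return 1.0 - required_lux / existing_lux

print(round(saving_fraction(1000, 800), 2))  # 0.2, i.e. a 20% saving
```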

Keywords: Landolt Ring experiment, lighting energy consumption, illuminance, textile and apparel industry, visual comfort

Procedia PDF Downloads 195
1790 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

Taguchi Robust Design is a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation rather than by controlling those sources; it entails designing products that have minimal variance in their characteristics while meeting the desired performance exactly. This paper examined the concept and applied it to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes grossly affecting its productivity and profitability, a careful study and analysis of its manufacturing processes with the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste bedeviling the firm. A Taguchi L9 orthogonal array, based on the four parameters and three levels of variation for each parameter, revealed with a range of 2.17 that waiting is the major waste the company must reduce in order to remain viable. To enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82 (ranking second, third, and fourth respectively), must also be reduced to the barest minimum. After proposing -33.84 as the optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates adopting the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce the setup time that leads to unnecessary waiting.
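Since wastes are quantities to be minimized, the relevant Taguchi figure of merit is the smaller-the-better signal-to-noise ratio, SN = -10·log10(mean(y²)). The sketch below evaluates it on invented waiting-time observations; the measurements are illustrative, not the company's data.

```python
import math

# Smaller-the-better signal-to-noise ratio from Taguchi robust design:
# SN = -10 * log10( (1/n) * sum(y_i^2) ), in dB. Larger SN is better.

def sn_smaller_the_better(values):
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

waiting_minutes = [45.0, 52.0, 49.0]  # hypothetical waste-of-waiting observations
print(round(sn_smaller_the_better(waiting_minutes), 2))  # -33.76 dB
```

In an L9 study this ratio is computed per experimental run, and the level of each factor that maximizes the mean SN is selected as the robust setting.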

Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste

Procedia PDF Downloads 113
1789 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input to the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. The network's predictions, together with the synergy scores of the two drugs against other drugs within their clusters, are then used to predict the synergy score of the drug pair under consideration. This approach facilitates comparative analysis with clustering- and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods such as DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
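The clustering step can be illustrated with a bare-bones Markov clustering (MCL) loop: alternate expansion (matrix power) with inflation (element-wise power followed by column normalization) until the matrix stabilizes, then read clusters off the rows. The adjacency matrix below is a toy example, not real synergy data, and this sketch omits the pruning used in production MCL implementations.

```python
import numpy as np

# Minimal MCL sketch on a small unweighted "synergy" graph.

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    m = adjacency + np.eye(len(adjacency))       # add self-loops
    m = m / m.sum(axis=0)                        # column-normalize
    for _ in range(iterations):
        m = np.linalg.matrix_power(m, expansion) # expansion step
        m = m ** inflation                       # inflation step
        m = m / m.sum(axis=0)                    # re-normalize columns
    clusters = set()
    for row in m:
        members = tuple(int(i) for i in np.nonzero(row > 1e-6)[0])
        if members:
            clusters.add(members)
    return sorted(clusters)

# Two obvious groups: drugs {0,1,2} and {3,4} with strong within-group links
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
print(mcl(adj))  # [(0, 1, 2), (3, 4)]
```

In the full framework the edges would carry synergy scores per cell line, and the resulting clusters feed the similarity vectors passed to the neural network.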

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 59
1788 Survey on Resilience of Chinese Nursing Interns: A Cross-Sectional Study

Authors: Yutong Xu, Wanting Zhang, Jia Wang, Zihan Guo, Weiguang Ma

Abstract:

Background: The resilience education of nursing interns has significant implications for the development and improvement of the nursing workforce, and the clinical internship period is a critical time for enhancing resilience. Aims: To evaluate the resilience level of Chinese nursing interns and identify the factors affecting resilience early in their careers. Methods: A cross-sectional study design was adopted. From March 2022 to May 2023, 512 nursing interns in tertiary care hospitals were surveyed online with the Connor-Davidson Resilience Scale, the Clinical Learning Environment Scale for Nurses, and the Career Adapt-Abilities Scale. Structural equation modeling was used to clarify the relationships among these factors, and indirect effects were tested using bootstrapped confidence intervals. Results: The nursing interns showed a moderately high level of resilience [M(SD) = 70.15(19.90)]. Gender, scholastic attainment, holding a scholarship, career adaptability, and clinical learning environment were influencing factors of the interns' resilience. Career adaptability and clinical learning environment positively and directly affected resilience (β = 0.58 and 0.12, respectively, p < 0.01). Clinical learning environment also positively affected career adaptability (β = 0.26, p < 0.01), and career adaptability played a fully mediating role in the relationship between clinical learning environment and resilience. Conclusion: Career adaptability can carry the influence of the clinical learning environment on resilience. Promoting career adaptability and the clinical teaching environment are potential strategies for improving nursing interns' resilience, especially for female interns with low academic performance. Implications for nursing educators: Nursing educators should pay attention to cultivating nursing students' resilience, for example by helping them integrate into the clinical learning environment and improving their career adaptability.
Reporting method: The STROBE criteria were used to report the observational results. Patient or public contribution: No patient or public contribution.
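The bootstrapped indirect-effect test used in the mediation analysis can be sketched as two regressions plus resampling: path a (predictor to mediator), path b (mediator to outcome, controlling for the predictor), and a percentile confidence interval for a·b. The data below are simulated, not the survey responses, and the effect sizes are invented.

```python
import numpy as np

# Bootstrap CI for an indirect (mediated) effect a*b on simulated data.

rng = np.random.default_rng(0)
n = 300
env = rng.normal(size=n)                               # clinical learning environment
adapt = 0.5 * env + rng.normal(size=n)                 # mediator: career adaptability
resil = 0.6 * adapt + 0.1 * env + rng.normal(size=n)   # outcome: resilience

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                          # path a: x -> m
    design = np.column_stack([m, x, np.ones_like(x)])   # y ~ m + x + const
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]    # path b: m -> y | x
    return a * b

boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                         # resample with replacement
    boots.append(indirect_effect(env[idx], adapt[idx], resil[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(0 < lo < hi)  # a 95% CI excluding zero supports mediation
```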

Keywords: resilience, clinical learning environment, career adaptability, nursing interns

Procedia PDF Downloads 66
1787 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasi, Bojan Milovanovic, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy saving if the goals set by the European Union are to be achieved. Dynamic heat transfer simulations are used to calculate building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. Software for these dynamic simulations uses methods based on analytical models, since numerical models are impractical for long simulation periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs); the two methods for calculating CTFs covered by this research are the Laplace method and the state-space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight facade elements and for short calculation time steps. The algorithms for both the Laplace and state-space methods are implemented in Mathematica, and the results are compared with results from EnergyPlus and TRNSYS, since this software uses similar algorithms to calculate a building's energy demand. This research aims to check the efficiency of the Laplace and state-space methods for calculating the energy demand of heavyweight building elements with short sampling times, and it also provides means of improving the algorithms these methods use. The finite difference method (FDM) is used as the reference for the boundary heat flux density. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
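The FDM reference solution mentioned above can be sketched as an explicit finite-difference march through a single wall layer. The material values are illustrative (a heavyweight concrete-like layer), not the study's cases.

```python
# Explicit 1D transient conduction through a wall layer, the kind of
# reference scheme used for the boundary heat flux. Illustrative values.

alpha = 7e-7          # thermal diffusivity, m^2/s (concrete-like)
thickness = 0.2       # m
nodes = 21
dx = thickness / (nodes - 1)
dt = 0.4 * dx * dx / alpha    # Fourier number 0.4 <= 0.5, so stable

T = [20.0] * nodes            # initial temperature, deg C
T[0] = 30.0                   # step change on the inside surface
T[-1] = 0.0                   # outside surface held at 0 deg C

for _ in range(5000):         # march far enough to approach steady state
    Tn = T[:]
    for i in range(1, nodes - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2*T[i] + T[i+1])
    T = Tn

# Steady state is a straight line between the boundary temperatures,
# so the mid-wall node tends to (30 + 0) / 2 = 15 deg C.
print(round(T[nodes // 2], 1))  # 15.0
```

The CTF methods reproduce this response with a short recursive series instead of a full time-stepping solve, which is exactly where the heavyweight/short-time-step accuracy question arises.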

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 113
1786 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors

Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal

Abstract:

Introduction: Blood components are a relatively unexplored area, prone to discoveries that influence patient care, and experiments at different levels will further change present blood banking practice. Hyperlipidemia is a condition of elevated plasma low-density lipoprotein (LDL) together with decreased plasma high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide; they are activated by triggers such as elevated LDL in the blood, resulting in aggregation and plaque formation. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia, and correlating the condition with other donor criteria such as a lipid-rich diet, oral contraceptive use, weight, alcohol intake, smoking, a sedentary lifestyle, and family history of heart disease, will help refine the exclusion criteria for donor selection. This will make transfusion safer for patients and donor deferral criteria more stringent, improving the quality of the blood supply. We therefore studied the correlation of hyperlipidemic platelets with platelet parameters, weight, and specific donor history. Methodology: This case-control study included blood samples from 100 donors; 30 samples were found to be hyperlipidemic and were taken as cases, while the rest served as controls. Lipid profiles were measured on a fully automated analyzer (Cobas c311, Roche Diagnostics) using the TRIGL (triglycerides), LDL-C plus 2nd generation, CHOL2 (Cholesterol Gen.2), and HDL-C plus 3rd generation assays, and platelet parameters were analyzed on a Sysmex KX-21 automated hematology analyzer.
Results: A significant association was found with hyperlipidemic levels in single-time donors: 80% of these donors had a family history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% consumed alcohol, and 63.33% had a lipid-rich diet, while active physical activity was reported by 40%. Donor samples were divided into two groups by body weight. In group 1 (hyperlipidemic samples), platelet parameters were 75% normal and 25% abnormal for donors above 70 kg, and 90% normal and 10% abnormal for donors of 50-70 kg. In group 2 (non-hyperlipidemic samples), platelet parameters were 95% normal and 5% abnormal above 70 kg, and 66.66% normal and 33.33% abnormal at 50-70 kg. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be flagged from donor history: weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive use, and family history of heart disease. Further studies on a larger sample size will be needed to affirm this finding.
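The kind of group comparison reported above reduces to a 2x2 contingency test of normal vs abnormal platelet parameters across donor groups, which can be computed by hand. The counts below are illustrative, not the study's raw data.

```python
# Pearson chi-square for a 2x2 table [[a, b], [c, d]], computed directly.
# Counts are hypothetical: [normal, abnormal] for two donor groups.

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi_square_2x2(24, 6, 57, 3)
print(round(stat, 2))  # 5.0 > 3.84 (critical value, df=1), so p < 0.05
```

With the real study's small subgroup sizes, Fisher's exact test would be the more defensible choice, which is one reason a larger sample is needed.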

Keywords: blood donors, hyperlipidemia, platelet, weight

Procedia PDF Downloads 296
1785 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays occur when project events do not take place at the expected time, owing to causes related to the client, the consultant, and the contractor. Delay is the major cause of cost overrun, which in turn leads to poor project efficiency. The difference between the completion cost and the originally estimated cost is known as the cost overrun. Cost overruns are not a simple issue that can be neglected; attention must be paid to them to keep the organization from failing and financial expenses from growing. Previous studies show that the problem can arise in construction projects from errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. This study focuses on mega projects, where the pace of work can significantly change the cost overrun calculation. Fifteen mega projects were identified to study the cost overrun problem on site. The contractor, consultant, and client are the principal stakeholders in mega projects, and 20 people from each group were selected to participate in the investigation of current mega construction projects. The main objective of the study is to prioritize the major causes of the cost overrun problem. A qualitative methodology was employed, chiefly rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questions, group discussions, and qualitative rating methods are well suited to studying construction project overruns.
The results show that design mistakes, lack of skilled labor, payment delays, old equipment and poor scheduling, weather conditions, transportation, inflation, order variations, market price fluctuation, and people's attitudes and philosophies are the leading causes of the cost overruns that degrade project performance. The institutions involved should follow the scheduled activities to keep the project moving forward positively over its life.
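The quantity at the heart of the study, the gap between completion cost and original estimate, is simple to state explicitly; the figures below are illustrative, not from the fifteen projects examined.

```python
# Cost overrun as defined above: completion cost minus original estimate,
# expressed as a percentage of the estimate. Figures are illustrative.

def cost_overrun_percent(estimated, actual):
    return (actual - estimated) / estimated * 100.0

print(round(cost_overrun_percent(250.0, 340.0), 1))  # 36.0 (% overrun)
```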

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 49
1784 Power Generation and Treatment Potential of Microbial Fuel Cells (MFC) for Landfill Leachate

Authors: Beenish Saba, Ann D. Christy

Abstract:

Modern municipal solid waste landfills are operated and controlled to protect the environment from contaminants during the biological stabilization and degradation of the solid waste, and are equipped with liners, caps, and gas and leachate collection systems. Landfill gas is passively or actively collected and can be used as biofuel after the necessary purification, but leachate treatment is the more difficult challenge. Leachate, if not recirculated in a bioreactor landfill system, is typically transported to a local wastewater treatment plant. Such plants are designed for sewage treatment and often charge additional fees for higher-strength wastewaters such as leachate, if they accept them at all. Various biological, chemical, physical, and integrated techniques can be used to treat leachate. Treating leachate with simultaneous power production using microbial fuel cell (MFC) technology is a recent innovation, with reported applications still in their earliest phase. High chemical oxygen demand (COD), ionic strength, and salt concentration are among the characteristics that make leachate an excellent substrate for power production in MFCs. Electrode materials, microbial communities, carbon co-substrates, and temperature conditions are some of the factors that can be optimized to achieve simultaneous power production and treatment. The advantage of the MFC is this dual functionality, but low power production and high costs are hurdles to its commercialization and more widespread application. Studies so far suggest that landfill leachate MFCs can produce 1.8 mW/m2 with 79% COD removal, while amendment with food leachate or domestic wastewater can increase performance up to 18 W/m3 with 90% COD removal; the reported coulombic efficiency varies between 2% and 60%.
However, efforts toward biofilm optimization, studies of efficient electron transport systems, and the use of genetic tools can increase MFC efficiency and will determine its future potential for treating landfill leachate.
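The coulombic efficiency quoted above relates harvested charge to the charge theoretically available from the COD removed, CE = M·∫I dt / (F·b·V·ΔCOD), with M = 32 g/mol O2 and b = 4 electrons per mole of oxygen. The input values in the sketch are illustrative, not measurements from the cited studies.

```python
# Coulombic efficiency of a batch-fed MFC: harvested charge divided by
# the charge equivalent of the COD removed. Input values are illustrative.

F = 96485.0        # Faraday constant, C/mol of electrons
M_O2, b = 32.0, 4  # g O2 per mol, electrons transferred per mol O2

def coulombic_efficiency(charge_coulombs, volume_l, delta_cod_g_per_l):
    available = F * b * volume_l * delta_cod_g_per_l / M_O2
    return charge_coulombs / available

# e.g. 5 mA average current over 5 days, 0.25 L anode, 2 g/L COD removed
charge = 0.005 * 5 * 24 * 3600  # C
print(round(coulombic_efficiency(charge, 0.25, 2.0), 3))  # ≈ 0.358, within the 2-60% range
```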

Keywords: microbial fuel cell, landfill leachate, power generation, MFC

Procedia PDF Downloads 302
1783 Synthesis, Characterization, and Catalytic Application of Modified Hierarchical Zeolites

Authors: A. Feliczak Guzik, I. Nowak

Abstract:

Zeolites, classified as microporous materials, are a large group of crystalline aluminosilicates commonly used in the chemical industry. They are characterized by large specific surface area, high adsorption capacity, and hydrothermal and thermal stability. However, their micropores impose strong mass transfer limitations, resulting in low catalytic performance; consequently, mesoporous (hierarchical) zeolites, which possess additional porosity in the mesopore size region (2-50 nm according to IUPAC), have attracted considerable attention from researchers. Mesoporous zeolites based on commercial MFI-type zeolites modified with silver were synthesized as follows: 0.5 g of zeolite was dispersed in a mixture containing CTABr (template), water, ethanol, and ammonia under ultrasound for 30 min at 65°C. The silicon source, tetraethyl orthosilicate, was then added and stirred for 4 h, after which silver(I) nitrate was added. In a further step, the whole mixture was filtered and washed with a water/ethanol mixture, and the template was removed by calcination at 550°C for 5 h. All materials obtained were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), nitrogen adsorption/desorption isotherms, and FTIR spectroscopy. X-ray diffraction and low-temperature nitrogen adsorption/desorption isotherms revealed additional secondary porosity, and the structure of the commercial zeolite was preserved in most of the syntheses. The materials were used in the epoxidation of cyclohexene under both conventional heating and microwave irradiation, with the composition of the reaction mixture analyzed every 1 h by gas chromatography. As a result, about 60% conversion of cyclohexene and high selectivity to the desired reaction products, 1,2-epoxycyclohexane and 1,2-cyclohexanediol, were obtained.

Keywords: catalytic application, characterization, epoxidation, hierarchical zeolites, synthesis

Procedia PDF Downloads 74
1782 A Crystallization Kinetic Model for Long Fiber-Based Composite with Thermoplastic Semicrystalline Polymer Matrix

Authors: Nicolas Bigot, M'hamed Boutaous, Nahiene Hamila, Shihe Xin

Abstract:

Composite materials with polymer matrices are widely used in most industrial areas, particularly aeronautics and automotive. Thanks to the development of high-performance thermoplastic semicrystalline polymer matrices, these materials exhibit increasingly attractive properties. The polymer matrix in composite materials can develop a specific crystalline structure characteristic of crystallization in a fibrous medium. In order to guarantee good mechanical behavior of structures and to optimize their performance, it is necessary to define realistic mechanical constitutive laws for such materials that account for their physical structure. The interaction between fibers and matrix is a key factor in the mechanical behavior of composite materials. The transcrystallization phenomena that develop in the matrix around the fibers constitute the interphase, which strongly affects and governs the nature of the fiber-matrix interaction. Hence, it becomes fundamental to quantify its impact on the thermo-mechanical behavior of composite materials in relation to processing conditions. In this work, we propose a numerical model coupling the thermal field and crystallization kinetics in long fiber-based composite materials, considering both the spherulitic and transcrystalline types of induced structure. After validating the model against results from the literature, with good correlation, a parametric study was carried out on the effects of thermal kinetics, fiber volume fraction, deformation, and pressure on the crystallization rate in the material under processing conditions. The transcrystallinity ratio is highlighted and analyzed with regard to the thermal kinetics and gradients in the material. Experimental results on the process are foreseen and pave the way to establishing a mechanical constitutive law describing the role of crystallization rates and crystal types in the thermo-mechanical behavior of composite materials.
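Coupled thermal/crystallization kinetics of this kind are often modeled with a Nakamura-type non-isothermal law. The abstract does not give the authors' formulation, so the sketch below is a generic illustration: the Gaussian rate constant K(T) and every parameter value (K0, T_peak, C, the cooling schedule) are invented placeholders, not fitted to any material.

```python
import math

def nakamura_rate(alpha, T, n=3.0, K0=1.0e3, T_peak=480.0, C=0.05):
    """Nakamura non-isothermal crystallization rate dα/dt.
    K(T) is a toy Gaussian rate constant peaking at T_peak (K);
    all parameter values are illustrative, not fitted."""
    alpha = min(max(alpha, 1e-9), 1.0 - 1e-9)  # keep the log argument valid
    K = K0 * math.exp(-C * (T - T_peak) ** 2)
    return n * K * (1.0 - alpha) * (-math.log(1.0 - alpha)) ** ((n - 1.0) / n)

def crystallize(T0=540.0, cooling_rate=1.0, dt=1e-3, t_end=120.0):
    """Explicit-Euler integration of relative crystallinity α(t) under
    constant cooling; α is clamped to stay inside (0, 1)."""
    alpha, t = 1e-6, 0.0
    while t < t_end and alpha < 0.99:
        T = T0 - cooling_rate * t
        alpha = min(alpha + nakamura_rate(alpha, T) * dt, 1.0 - 1e-9)
        t += dt
    return alpha, t
```

With these toy numbers the rate is negligible at high temperature and crystallization completes as T(t) sweeps through the peak of K(T); a faster cooling rate shifts completion to lower temperatures, which is the qualitative coupling the model above describes.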

Keywords: composite materials, crystallization, heat transfer, modeling, transcrystallization

Procedia PDF Downloads 180
1781 Supply Chain Collaboration Comparison Practices between Developed and Developing Countries

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration along the supply chain is key, especially for developing product, production-method, or process innovations. Access to resources and knowledge not available inside the company, the achievement of cost-competitive solutions, and the reduction of the time required to innovate are some of the benefits linked with collaboration with suppliers. Large industrial manufacturers in developed countries have a long tradition of collaborating with their suppliers to develop new products. Since these manufacturers have expanded their global supply chains and global sourcing activities, the objective of this research is to analyze whether the same best practices, ways of working, experiences, information technology tools, and governance methodologies are applied when collaborating with suppliers in the developed world and in developing countries. Most current research analyzes supply chain collaboration in developed countries, and although the number of publications on supply chain collaboration in developing countries has increased in recent years, there is still a lack of research comparing both settings and analyzing the similarities, differences, and key success factors of supply chain collaboration practices in developed and developing countries. With this gap in mind, the research under preparation will focus on the following goals: -Identify the most important elements required for successful supply chain collaboration in developed and developing countries. -Set up an optimal governance framework to manage supply chain collaboration in developed and developing countries. -Define recommendations for improving current supply chain collaboration practices. 
Following the case methodology, we will analyze how manufacturers and suppliers collaborate in the development of new products, production methods, or process innovations, and in the setup of new global supply chains, in two industries with different levels of technology intensity and collaboration history: the automotive and aerospace industries.

Keywords: global supply chain networks, Supply Chain Collaboration, supply chain governance, supply chain performance

Procedia PDF Downloads 577
1780 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply, so much so that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300x300 mm² to an 1100 °C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming, expensive, and gives access to limited information on the fire behavior of the materials (pass or fail); consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150x150 mm²), which cuts down costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information for understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical setup, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests. 
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal setup has been used in particular to study the performance and durability of protective intumescent coatings on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. Reductions in performance after aging were attributed to the migration of some of the coating additives. The vertical setup has enabled investigation of the degradation process of composites under fire. Isotropic and unidirectional 4 mm thick laminates were characterized using the bench and post-fire analyses. The mass loss measurements and gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity, as well as to delamination, which is much more pronounced for the isotropic composite (observed on the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools for developing fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 257
1779 Determination of Phenolic Compounds in Apples Grown in Different Geographical Regions

Authors: Mindaugas Liaudanskas, Monika Tallat-Kelpsaite, Darius Kviklys, Jonas Viskelis, Pranas Viskelis, Norbertas Uselis, Juozas Lanauskas, Valdimaras Janulis

Abstract:

Apples are an important source of various biologically active compounds beneficial for human health. Phenolic compounds detected in apples are natural antioxidants and have antimicrobial, anti-inflammatory, anticarcinogenic, and cardiovascular protective activity. The quantitative composition of phenolic compounds in apples may be affected by various factors; it is important to investigate it in order to provide the consumer with apples of high quality and well-characterized composition, as well as products made from them. The objective of this study was to evaluate the quantitative composition of phenolic compounds in apples grown in different geographical regions. Apples of cv. 'Ligol' grown in Lithuania, Latvia, Poland, and Estonia were investigated. Three biological replicates, each containing 10 apples, were analyzed. Samples of lyophilized apple fruit were extracted with 70% ethanol (v/v) for 20 min at 40 °C in an ultrasonic bath. The ethanol extracts were analyzed by high-performance liquid chromatography. The study found that the geographical location of the apple trees had an impact on the composition of phenolic compounds in apples. The amount of quercetin glycosides varied from 314.78±9.47 µg/g (Poland) to 648.17±5.61 µg/g (Estonia). The same trend was also observed for flavan-3-ols (from 829.56±47.17 µg/g to 2300.85±35.49 µg/g), phloridzin (from 55.29±1.7 µg/g to 208.78±0.35 µg/g), and chlorogenic acid (from 501.39±28.84 µg/g to 1704.35±22.65 µg/g). The total amount of the investigated phenolic compounds tended to increase from apples grown in the southern location (Poland; 1701.02±75.38 µg/g) to apples grown in the northern location (Estonia; 4862.15±56.37 µg/g). Apples (cv. 'Ligol') grown in Estonia accumulated approximately 2.86 times more phenolic compounds than apples grown in Poland. 
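The reported ~2.86-fold difference follows directly from the two totals quoted above; a quick check using only those figures:

```python
# Reported totals of the investigated phenolic compounds (µg/g dry material)
total_poland = 1701.02    # southernmost site in the study
total_estonia = 4862.15   # northernmost site in the study

fold_change = total_estonia / total_poland  # ≈ 2.86
```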
Acknowledgment: This work was supported by a grant from the Research Council of Lithuania, project No. S-MIP-17-8.

Keywords: apples, cultivar 'Ligol', geographical regions, HPLC, phenolic compounds

Procedia PDF Downloads 168
1778 Numerical Study of a Ventilation Principle Based on Flow Pulsations

Authors: Amir Sattari, Mac Panah, Naeim Rashidfarokhi

Abstract:

To enhance the mixing of fluid in a rectangular enclosure with a circular inlet and outlet, an energy-efficient approach is further investigated through computational fluid dynamics (CFD). Particle image velocimetry (PIV) measurements help confirm that pulsation of the inflow velocity improves the mixing performance inside the enclosure considerably without increasing energy consumption. In this study, multiple CFD simulations with different turbulence models were performed, and the results obtained were compared with experimental PIV results. The study investigates small-scale representations of flow patterns in a ventilated rectangular room. The objective is to validate the concept of an energy-efficient ventilation strategy with improved thermal comfort and a reduction of stagnant air inside the room. Experimental and simulated results confirm that, through pulsation of the inflow velocity, strong secondary vortices are generated downstream of the entrance wall-jet. The pulsatile inflow profile promotes periodic generation of vortices with stronger eddies despite a relatively low inlet velocity, which leads to a larger boundary layer with increased kinetic energy in the occupied zone. A real-scale study was not conducted; however, it can be concluded that a constant-velocity inflow profile can be replaced with a lower pulsed flow rate profile while preserving the mixing efficiency. Among the turbulence models evaluated in this study, SST k-ω is the most advantageous, exhibiting a global airflow pattern similar to that in the experiments. The detailed near-wall velocity profile is utilized to identify the wall-jet instabilities that consist of mixing and boundary layers. The SAS method was later applied to predict the turbulent parameters in the center of the domain. In both cases, the predictions are in good agreement with the measured results.
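The core idea, replacing a steady inlet with a pulsed one that delivers the same mean flow rate, can be sketched as an inlet boundary-condition function. The velocity, amplitude, and frequency below are illustrative placeholders, not the study's actual settings.

```python
import math

def pulsatile_inlet(t, u_mean=0.5, amplitude=0.4, freq=1.0):
    """Sinusoidally pulsed inlet velocity (m/s). The time average over a
    period equals u_mean, so the pulsed profile delivers the same mean
    flow rate as a constant-velocity inlet while adding the periodic
    forcing that triggers stronger vortices downstream of the wall-jet.
    All numbers here are illustrative, not the study's settings."""
    return u_mean * (1.0 + amplitude * math.sin(2.0 * math.pi * freq * t))

# Sanity check: the time average over one period matches the steady value.
samples = 10000
avg = sum(pulsatile_inlet(i / samples) for i in range(samples)) / samples
```

In a CFD solver, such a function would be evaluated at each time step to update the inlet velocity boundary condition.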

Keywords: CFD, PIV, pulsatile inflow, ventilation, wall-jet

Procedia PDF Downloads 163
1777 Delivery of Ginseng Extract Containing Phytosome Loaded Microsphere System: A Preclinical Approach for Treatment of Neuropathic Pain in Rodent Model

Authors: Nitin Kumar

Abstract:

Purpose: The current research work focuses on developing a delivery system for ginseng extract (GE) that ameliorates its neuroprotective potential by enhancing the bioavailability (BA) of ginsenoside Rb1. For greater enhancement of oral bioavailability and pharmacological properties, the drug carrier's performance can be strengthened by using a phytosome-loaded microsphere (PM) delivery system. Methods: To prepare the different phytosome complexes (F1, F2, and F3), an aqueous extract of ginseng roots was reacted with phospholipids in different ratios. Based on the outcomes, the F3 formulation (spray-dried) was chosen for preparing the phytosome powder (PP), PM, and extract microspheres (EM). PM was made by loading F3 into a mixture of gum arabic and maltodextrin polymers, whereas EM was prepared by adding the extract directly into the same polymer mixture. The PP, PM, and EM formulations were assessed for their neuroprotective effect and pharmacokinetic (PK) properties. Results: The F3 formulation gave enhanced entrapment efficiency (50.61%) along with good homogeneity of spherical particles (particle size 42.58 ± 1.4 nm) and the lowest polydispersity index (0.193 ± 0.01). The dissolution study of PM revealed sustained release (up to 24 h) of ginsenoside Rb1. A significantly (p < 0.05) greater antioxidant potential of PM was evident from the reduction in lipid peroxidation together with the rise in glutathione, superoxide dismutase (SOD), and catalase levels. PM also showed greater neuroprotective potential, exhibiting a significant (p < 0.05) increase in the nociceptive threshold together with reduced nerve damage. The PK studies showed a noteworthy enhancement in the relative bioavailability (157.94%) of ginsenoside Rb1 delivered through the PM formulation. 
Conclusion: The PM system is a promising and feasible strategy to enhance the delivery of GE for the effective treatment of neuropathic pain.
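Relative bioavailability is conventionally computed from dose-normalized plasma AUC values. The helper below is a generic sketch of that formula; the AUC inputs are invented to reproduce the reported 157.94% and are not the study's measured values.

```python
def relative_bioavailability(auc_test, auc_ref, dose_test=1.0, dose_ref=1.0):
    """Dose-normalized relative bioavailability (%) from plasma AUCs:
    F_rel = 100 * (AUC_test / dose_test) / (AUC_ref / dose_ref)."""
    return 100.0 * (auc_test / dose_test) / (auc_ref / dose_ref)

# AUC inputs below are invented so the result matches the reported 157.94%
f_rel = relative_bioavailability(auc_test=157.94, auc_ref=100.0)
```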

Keywords: ginseng, neuropathic, phytosome, pain

Procedia PDF Downloads 177
1776 Using Hyperspectral Sensor and Machine Learning to Predict Water Potentials of Wild Blueberries during Drought Treatment

Authors: Yongjiang Zhang, Kallol Barai, Umesh R. Hodeghatta, Trang Tran, Vikas Dhiman

Abstract:

Detecting water stress on crops early and accurately is crucial to minimize its impact. This study aims to measure water stress in wild blueberry crops non-destructively by analyzing proximal hyperspectral data collected during the summer growing season of 2022. A drought experiment was conducted on wild blueberries in a randomized block design in the greenhouse, incorporating various genotypes and irrigation treatments. Hyperspectral data (spectral range: 400-1000 nm) were collected from wild blueberry plants using a handheld spectroradiometer, and leaf water potential data were collected using a pressure chamber. Machine learning techniques, including multiple regression analysis and random forest models, were employed to predict leaf water potential (MPa). We explored the optimal wavelength bands for simple differences (Rλ1 − Rλ2), simple ratios (Rλ1/Rλ2), and normalized differences ((Rλ1 − Rλ2)/(Rλ1 + Rλ2)). NDWI ((R857 − R1241)/(R857 + R1241)), SD (R2188 − R2245), and SR (R1752/R1756) emerged as the top predictors of leaf water potential, contributing most to model performance. The base learner models achieved an R-squared value of approximately 0.81, indicating that they explain about 81% of the variance. Research is underway to develop a neural vegetation index (NVI) that automates index development by searching the space of ratios of linear functions of reflectance over specific wavelengths. The NVI framework could work across species and predict different physiological parameters.
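The three index families above can be computed directly from a reflectance spectrum. The sketch below uses a hypothetical spectrum keyed by wavelength; the reflectance values are invented for illustration, and the index definitions follow the standard forms named in the abstract.

```python
def simple_difference(refl, w1, w2):
    """SD index: R_w1 - R_w2."""
    return refl[w1] - refl[w2]

def simple_ratio(refl, w1, w2):
    """SR index: R_w1 / R_w2."""
    return refl[w1] / refl[w2]

def normalized_difference(refl, w1, w2):
    """ND index: (R_w1 - R_w2) / (R_w1 + R_w2); NDWI uses 857 and 1241 nm."""
    return (refl[w1] - refl[w2]) / (refl[w1] + refl[w2])

# Hypothetical reflectance spectrum keyed by wavelength in nm (invented values)
refl = {857: 0.45, 1241: 0.30, 1752: 0.22, 1756: 0.20, 2188: 0.12, 2245: 0.10}

ndwi = normalized_difference(refl, 857, 1241)  # (0.45-0.30)/(0.45+0.30)
sd = simple_difference(refl, 2188, 2245)
sr = simple_ratio(refl, 1752, 1756)
```

In the study's workflow, such index values computed per plant would be the features fed to the regression and random forest models.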

Keywords: hyperspectral reflectance, water potential, spectral indices, machine learning, wild blueberries, optimal bands

Procedia PDF Downloads 54
1775 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables the attacker to detect vulnerabilities in the user's applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan the Internet for so-called drive-by downloads: URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise, but slower, analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources; a main reason is that the crawler encounters many legitimate pages on the web that need to be filtered out. In this paper, to characterize the nature of this rising threat, we present an implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to the ones in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach identifies malicious web pages more efficiently than random crawling-based approaches.
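A minimal sketch of the guided-crawl-plus-prefilter loop described above. The token list, URLs, and scoring rule are invented; a real system would derive queries from the seeds, fetch pages over the network, and hand flagged pages to a honeyclient or antivirus backend rather than matching URL substrings.

```python
from collections import deque

SUSPICIOUS_TOKENS = {"free-codec", "crack", "keygen", "warez"}  # illustrative

def prefilter_score(url):
    """Cheap lexical prefilter: count suspicious tokens in the URL.
    A real prefilter would also inspect page content and structure."""
    return sum(tok in url.lower() for tok in SUSPICIOUS_TOKENS)

def guided_crawl(seeds, link_graph, budget=10):
    """Breadth-first crawl starting from known-malicious seeds, forwarding
    only pages that pass the prefilter to the slow, precise analyzer."""
    frontier, seen, flagged = deque(seeds), set(seeds), []
    while frontier and budget > 0:
        url = frontier.popleft()
        budget -= 1
        if prefilter_score(url) > 0:
            flagged.append(url)  # candidate for antivirus / honeyclient
        for nxt in link_graph.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return flagged

# Toy link graph standing in for pages discovered via search-engine queries
graph = {
    "http://a.example/crack": ["http://b.example/news",
                               "http://c.example/keygen-dl"],
    "http://b.example/news": [],
    "http://c.example/keygen-dl": [],
}
flagged = guided_crawl(["http://a.example/crack"], graph)
```

Only the seed page and the keygen page are forwarded to the precise analyzer; the benign news page is filtered out, which is the resource saving the approach targets.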

Keywords: web crawler, malwares, seeds, drive-by-downloads, security

Procedia PDF Downloads 218
1774 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
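The intensity pre-processing described, HU windowing to (−1000, 400) followed by normalization and zero-centering, can be sketched as follows. This is a simplified illustration operating on a flat list of HU values, not the authors' pipeline, which additionally performs resampling, cropping, and U-Net lung segmentation on 3D volumes.

```python
def preprocess_hu(voxels, lo=-1000.0, hi=400.0):
    """Clip intensities to the Hounsfield window used in the study,
    min-max normalize to [0, 1], then zero-center by subtracting the
    mean. `voxels` is a flat list of HU values; a real pipeline would
    apply the same steps to a 128x128x60 resampled 3D array."""
    clipped = [min(max(v, lo), hi) for v in voxels]
    normalized = [(v - lo) / (hi - lo) for v in clipped]
    mean = sum(normalized) / len(normalized)
    return [v - mean for v in normalized]
```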

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 62
1773 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking

Authors: Esmeralda Hysenbelliu

Abstract:

The aim of this paper is to present an enhanced QoE (Quality of Experience) IPTV SDN-based media streaming server architecture for configuring, controlling, managing, and provisioning improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, two highly flexible and efficient service load-balancing systems are enabled: one based on the load-balance module and one based on a GeoIP service. These two load-balancing systems greatly improve the IPTV end-users' Quality of Experience (QoE) through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach produced several important features and opportunities for overcoming the critical QoE metrics for the IPTV service, such as achieving a remarkably fast zapping time (channel switching time) of under 0.1 seconds. The approach enables an easy and powerful transcoding system via the FFmpeg encoder, with the ability to customize streaming dimensions, bitrates, latency management, and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth, and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing stream failure time (channel failure time under 0.1 seconds), and improving the quality of streaming services.

Keywords: improved quality of experience (QoE), OpenFlow SDN controller, IPTV service application, softwarization

Procedia PDF Downloads 134
1772 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate their potential structural improvement compared to conventional (straight fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part uses Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance: the change of orientation between adjacent patches cannot be too large, and the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality at good production rates. However, laps and gaps are the most important challenges when steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composite are designed in a first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure to different standard deviations compared to straight fiber angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminate structures. The reason is that variable stiffness composite laminates allow tailoring the stiffness and provide the possibility of adjusting the stress and strain distribution favorably in the structure.
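The SRSM-plus-MCS step amounts to sampling the random variables and evaluating a cheap surrogate limit-state function many times. The sketch below is a generic illustration with an invented linear response surface and invented distributions for a stiffness property and a load; it is not the study's fitted surrogate.

```python
import random

def limit_state(E, load):
    """Hypothetical linear response surface for the limit state:
    capacity (stiffness-driven) minus demand; g < 0 means failure.
    The coefficients are illustrative only."""
    return 0.004 * E - 1.2 * load

def failure_probability(n=100_000, seed=0):
    """Crude Monte Carlo estimate of P_f with a normally distributed
    Young's modulus (mean 70 GPa, 5% CoV, in MPa) and load (10% CoV)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        E = rng.gauss(70_000.0, 3_500.0)  # MPa
        load = rng.gauss(180.0, 18.0)     # arbitrary load units
        if limit_state(E, load) < 0:
            failures += 1
    return failures / n
```

For these illustrative numbers the reliability index is about 2.5, so the estimate converges to a failure probability of roughly 0.006; widening the material-property scatter raises it, which is exactly the sensitivity the study probes for variable versus straight fiber layups.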

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 501
1771 Synthesis of Highly Active Octahedral NaInS₂ for Enhanced H₂ Evolution

Authors: C. K. Ngaw

Abstract:

Crystal facet engineering, which involves tuning and controlling a crystal's surface and morphology, is a commonly employed strategy to optimize the performance of crystalline nanocrystals. The principle behind this strategy is that surface atomic arrangement and coordination, which inherently determine catalytic activity, can be easily tuned by morphological control. Because the catalytic properties of a nanocrystal are closely related to the surfaces of its exposed facets, researchers are strongly motivated to synthesize photocatalysts with high catalytic activity by maximizing the exposed reactive facets through morphological control. In this contribution, octahedral NaInS₂ crystals have been successfully developed via a solvothermal method. The formation of the octahedral NaInS₂ crystals was investigated using field emission scanning electron microscopy (FESEM) and X-ray diffraction (XRD), and the results show that the concentration of the sulphur precursor plays an important role in the growth process, with other concentrations leading to the formation of other NaInS₂ crystal structures in the form of hexagonal nanosheets and microspheres. Structural modeling analysis suggests that the octahedral NaInS₂ crystals were enclosed by {012} and {001} facets, while the nanosheets are bounded by {001} facets only and the microspheres by no specific facets. Visible-light photocatalytic H₂ evolution results revealed that the octahedral NaInS₂ crystals (~67 μmol/g/hr) exhibit ~6.1 and ~2.3 times enhancement compared to the conventional NaInS₂ microspheres (~11 μmol/g/hr) and nanosheets (~29 μmol/g/hr), respectively. The H₂ enhancement of the octahedral NaInS₂ crystals is attributed to the presence of {012} facets on the surface. 
Detailed analysis of the octahedron model revealed obvious differences in the atomic arrangement between the {001} and {012} facets, which can affect the interaction between water molecules and the surface facets before reduction into H₂ gas. These results highlight the importance of tailoring crystal morphology with highly reactive facets in improving photocatalytic properties.

Keywords: H₂ evolution, photocatalysis, octahedral, reactive facets

Procedia PDF Downloads 55
1770 Sustainable Milling Process for Tensile Specimens

Authors: Shilpa Kumari, Ramakumar Jayachandran

Abstract:

Machining of aluminium extrusion profiles in the automotive industry has gained much interest in the last decade, particularly due to the higher utilization of aluminium profiles and the weight reduction benefits it brings. Milling is the most common material removal process, in which a rotary milling cutter is moved against a workpiece. The physical contact of the milling cutter with the workpiece increases the friction between them, affecting the longevity of the milling tool and the surface finish of the workpiece. To minimise this issue, the milling process uses cutting fluids or emulsions; however, the use of emulsion has a negative impact on the environment (consumption of water and oils, and used emulsion needs to be treated before disposal) and on personnel working close to the process (prolonged exposure to microbial toxins generated by bacteria in the emulsions may cause respiratory problems). Furthermore, the workpiece needs to be cleaned after the milling process, which adds no value, and the cleaning disperses a mist of emulsion in the working environment. Hydro Extrusion is committed to improving the sustainability performance of its operations, and given the negative impact of using emulsion in the milling process, an innovative process, dry milling, was developed to minimise the impact of cutting fluid. In this paper, the authors present one application of dry milling in the machining of tensile specimens in the laboratory. Dry milling is an innovative milling process without any cooling or lubrication and has several advantages. Several million tensile tests are carried out in extrusion laboratories worldwide using the wet milling process, and the machining of tensile specimens has a significant impact on the reliability of test results. 
The paper presents results for different 6xxx alloys and different specimen wall thicknesses, machined by both dry and wet milling. For each alloy and wall thickness, the mechanical properties of samples milled dry and wet were similar. Several tensile specimens were prepared using both processes to compare the results, and the outcome showed that the dry milling process does not affect the reliability of tensile test results.

Keywords: dry milling, tensile testing, wet milling, 6xxx alloy

Procedia PDF Downloads 181
1769 A Study on the Chemical Composition of Kolkheti's Sphagnum Peat Peloids to Evaluate the Prospects of Their Use in Medical Practice

Authors: Al. Tsertsvadze, L. Ebralidze, I. Matchutadze, D. Berashvili, A. Bakuridze

Abstract:

Peatlands are landscape elements, they are formed over a very long period by physical, chemical, biologic, and geologic processes. In the moderate zone of Caucasus, the Kolkheti lowlands are distinguished by the diversity of relictual plants, a high degree of endemism, orographic, climate, landscape, and other characteristics of high levels of biodiversity. The unique properties of the Kolkheti region lead to the formation of special, so-called, endemic peat peloids. The composition and properties of peloids strongly depend on peat-forming plants. Peat is considered a unique complex of raw materials, which can be used in different fields of the industry: agriculture, metallurgy, energy, biotechnology, chemical industry, health care. They are formed in permanent wetland areas. As a result of decay, higher plants remain in the anaerobic area, with the participation of microorganisms. Peat mass absorbs soil and groundwater. Peloids are predominantly rich with humic substances, which are characterized by high biological activity. Humic acids stimulate enzymatic activity, regenerative processes, and have anti-inflammatory activity. Objects of the research were Kolkheti peat peloids (Ispani, Anaklia, Churia, Chirukhi, Peranga) possessing different formation phases. Due to specific physical and chemical properties of research objects, the aim of the research was to develop analytical methods in order to study the chemical composition of the objects. The research was held using modern instrumental methods of analysis: Ultraviolet-visible spectroscopy and Infrared spectroscopy, Scanning Electron Microscopy, Centrifuge, dry oven, Ultraturax, pH meter, fluorescence spectrometer, Gas chromatography-mass spectrometry (GC-MS/MS), Gas chromatography. Based on the research ration between organic and inorganic substances, the spectrum of micro and macro elements, also the content of minerals was determined. The content of organic nitrogen was determined using the Kjeldahl method. 
The total amino acid composition was studied by a spectrophotometric method using standard solutions of glutamic and aspartic acids. Fatty acids were determined using GC; the results obtained indicate that the method is valid for identifying fatty acids in the research objects. The content of organic substances in the research objects was determined using GC-MS. Using modern instrumental methods of analysis, the chemical composition of the research objects was thus characterized: each object is rich in a broad spectrum of organic substances (fatty acids, amino acids, carbocyclic and heterocyclic compounds, organic acids and their esters, steroids) and inorganic substances (micro- and macroelements, minerals). The modified methods used in the presented research may be applied to the evaluation of cosmetological, balneological, and pharmaceutical products based on Kolkheti's Sphagnum peat peloids.
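The Kjeldahl nitrogen determination mentioned above reduces to a standard back-titration calculation. The sketch below illustrates it in Python with hypothetical titration values (not taken from this study), together with the conventional crude-protein conversion factor.

```python
def kjeldahl_nitrogen(v_sample_ml, v_blank_ml, acid_normality, sample_mass_g):
    """Percent organic nitrogen (by mass) from a Kjeldahl back-titration.

    (v_sample - v_blank) mL of standard acid is consumed by the distilled
    ammonia; 14.007 mg/meq is the equivalent mass of nitrogen.
    """
    n_meq = (v_sample_ml - v_blank_ml) * acid_normality     # milliequivalents of N
    mg_n = n_meq * 14.007                                   # mg of nitrogen
    return mg_n / (sample_mass_g * 1000.0) * 100.0          # % N by mass

def crude_protein(percent_n, factor=6.25):
    """Conventional crude-protein estimate (6.25 assumes ~16% N in protein)."""
    return percent_n * factor

# Hypothetical run: 15.2 mL titrant for sample, 0.4 mL blank, 0.1 N acid, 1.0 g peat
n_percent = kjeldahl_nitrogen(15.2, 0.4, 0.1, 1.0)   # ≈ 2.07 % N
```

The 6.25 factor is the generic protein conversion; peat humic matter deviates from the ~16% nitrogen assumption behind it, so the conversion is illustrative only.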

Keywords: modern analytical methods, natural resources, peat, chemistry

Procedia PDF Downloads 114
1768 Driving towards Sustainability with Shared Electric Mobility: A Case Study of Time-Sharing Electric Cars on University’s Campus

Authors: Jiayi Pan, Le Qin, Shichan Zhang

Abstract:

Following the worldwide growing interest in the sharing economy, especially in China, innovations within the field are rapidly emerging. It is therefore appropriate to address the under-investigated sustainability issues related to the development of shared mobility. In 2019, Shanghai Jiao Tong University (SJTU) introduced one of the first on-campus Time-sharing Electric Car (TEC) schemes, which now counts about 4,000 users. The increasing popularity of this original initiative highlights the need to assess its sustainability and to find ways to extend the performance and availability of this new transport option. This study used an online questionnaire on TEC usage and experience to collect answers from students and university staff, and interviewed TEC's team in order to better understand its motivations and operating model. Data analysis underscores that TEC usage frequency is positively associated with a lower carbon footprint, showing that the scheme contributes to improving the environmental sustainability of transportation on campus. The study also demonstrates that TEC provides a convenient solution for those who do not own a car in situations where soft mobility cannot satisfy their needs, thus contributing to a globally positive assessment of TEC in the social domain of sustainability. As SJTU's TEC project belongs to the non-profit sector and aims at serving current research, economic sustainability is not among its main preoccupations; nevertheless, TEC and similar projects could benefit from this study's findings to better evaluate their overall benefits and to develop operations on a larger scale. The study suggests various ways to further improve the TEC user experience and enhance its promotion. This research provides meaningful insights into the position of shared transportation within transport mode choice and into how to assess the overall sustainability of such innovations.
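The reported association between TEC usage frequency and carbon footprint is the kind of relationship a plain correlation coefficient can illustrate. The sketch below uses invented survey rows (not data from the study) and a hand-rolled Pearson coefficient.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical questionnaire rows: monthly TEC trips vs. estimated transport
# CO2 footprint (kg/month). Invented numbers for illustration only.
trips     = [0, 2, 4, 6, 8, 10]
footprint = [95, 90, 84, 80, 73, 70]

r = pearson_r(trips, footprint)   # strongly negative: more TEC use, lower footprint
```

A negative r is consistent with the study's finding; establishing causality would of course require controlling for trip distance, prior mode choice, and other confounders.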

Keywords: shared mobility, sharing economy, sustainability assessment, sustainable transportation, urban electric transportation

Procedia PDF Downloads 191
1767 Effect of the Magnetite Nanoparticles Concentration on Biogas and Methane Production from Chicken Litter

Authors: Guadalupe Stefanny Aguilar-Moreno, Miguel Angel Aguilar-Mendez, Teodoro Espinosa-Solares

Abstract:

In the agricultural sector, one of the main emitters of greenhouse gases is manure management, which has increased considerably in recent years. Biogas is an energy source that can be produced from different organic materials through anaerobic digestion (AD); however, production efficiency is still low. Several techniques have been studied to increase its performance, such as co-digestion, variation of digestion conditions, and the use of nanomaterials. The aim of this investigation was therefore to evaluate the effect of the concentration of magnetite nanoparticles (NPs), synthesized by co-precipitation, on biogas and methane production in AD using chicken litter as the substrate. NP synthesis followed the co-precipitation method under a 2⁵⁻² fractional factorial experimental design with two replications. The study factors were the concentrations of precursors and passivating agent, sonication time, and dissolution temperatures; the response variables were particle size, hydrodynamic diameter (HD), and zeta potential. The treatment yielding the smallest NPs was then selected for use in AD. The AD experiments were established in serological bottles with a working volume of 250 mL, incubated at 36 ± 1 °C for 80 days. The treatments, all in triplicate, consisted of different NP concentrations in the microcosms: chicken litter only (control), and chicken litter with 20, 40, or 60 mg∙L⁻¹ of NPs. Methane and biogas production were evaluated daily. The smallest HD (49.5 nm) and the most stable NPs (21.22 mV) were obtained with the highest passivating-agent concentration and the lower precursor dissolution temperature, which were the only factors with a significant effect on HD. Transmission electron microscopy of these NPs showed an average size of 4.2 ± 0.73 nm.
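The 2⁵⁻² fractional factorial design used for the nanoparticle synthesis can be enumerated in a few lines. The sketch below assumes one common resolution-III choice of generators (D = AB, E = AC); the abstract does not specify which generators were actually used.

```python
from itertools import product

# Full 2^3 factorial on base factors A, B, C at coded levels -1/+1,
# with the remaining two factors generated as D = A*B and E = A*C.
runs = [
    {"A": a, "B": b, "C": c, "D": a * b, "E": a * c}
    for a, b, c in product((-1, 1), repeat=3)
]

# 2^(5-2) = 8 runs instead of the 2^5 = 32 of a full factorial.
n_runs = len(runs)
```

With two replications, the 8 runs correspond to 16 synthesis experiments, compared with the 64 a replicated full 2⁵ factorial would require.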
The highest biogas and methane production was obtained with the 20 mg∙L⁻¹ NP treatment, 29.5% and 73.9% higher than the control, respectively, while the treatment with the highest NP concentration was not statistically different from the control. It can therefore be concluded that magnetite NPs promote biogas and methane production in AD, but high concentrations may have inhibitory effects on methanogenic microorganisms.

Keywords: agricultural sector, anaerobic digestion, nanotechnology, waste management

Procedia PDF Downloads 125
1766 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts

Authors: Maria Ledinskaya

Abstract:

This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation by examining their practical application in a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate, where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns: an unresearched collection, unknown conditions and materials, and an unconfirmed budget. The project was later impacted by the COVID-19 pandemic, which introduced indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the points of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance for change.
In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management

Procedia PDF Downloads 57