Search results for: buckling optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3390

2010 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, reskilling and upskilling employees, and establishing robust data management training programs are an integral part of this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. The present study argues that by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud computing, data optimization

Procedia PDF Downloads 65
2009 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter

Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski

Abstract:

Today’s progress in rotorcraft is mostly associated with optimization of aircraft performance achieved by active and passive modifications of the main rotor assemblies and the tail propeller. The key task is to improve their performance and the hover quality factor of the rotor without increasing specific fuel consumption. One way to improve the helicopter is an active optimization of the main rotor across flight stages, i.e., ascent, cruise, and descent. Active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems that modify aerodynamic coefficients in current solutions is relatively high, and they have a significant impact on structural strength. The proposed solution for actively changing aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables optimization of main rotor performance for each flight stage. Structurally, the adoption of shape memory alloys does not significantly affect rotor blade fatigue strength, which reduces the cost of adapting the system to existing blades, and the gains from better performance can easily amortize such a modification and improve the profitability of the structure. In order to obtain the quantitative and qualitative data needed to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of the design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor. The design concept assumes a three-bladed main rotor with a chord of 0.07 m and a radius of R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, dedicated software has been created that uses a blade element numerical method and accounts for dynamic design features such as oscillations of the blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction under the resulting forces. The key elements of the rotor are the shaft, the hub, and the pins holding the joints and blade yokes; these components are exposed to the highest loads. As a result of the analysis, the safety factor was determined to be k > 1.5, which gives grounds for obtaining strength certification of the structure. The articulated rotor has numerous moving elements in its structure. Despite the high safety factor, the locations with the highest stresses, where signs of wear may appear, have been indicated. The numerical analysis showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element correspond to a safety factor of k = 1.7. The other analysed rotor components have safety factors of more than 2, and in the case of the shaft the factor is more than 3. However, it must be remembered that a structure is only as strong as its weakest link.
The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
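
For readers who want a concrete feel for the blade-element estimate mentioned in the abstract, the short Python sketch below integrates thrust and power over blade elements for a three-bladed rotor with a 0.07 m chord and a 1 m radius and reports the resulting hover figure of merit (the "hover quality factor" above). The lift-curve slope, profile drag coefficient, linear twist, and rotor speeds are illustrative assumptions, not values from the paper, and the dynamic joint effects the authors model are omitted.

```python
# Minimal blade-element sketch of hover performance for a rotor like the one
# described above (3 blades, chord 0.07 m, R = 1 m). Airfoil lift-curve slope,
# profile drag, twist distribution and rotor speeds are illustrative
# assumptions, not values taken from the paper.
import numpy as np

RHO = 1.225          # air density, kg/m^3
N_BLADES = 3
CHORD = 0.07         # m
RADIUS = 1.0         # m
A_LIFT = 5.7         # airfoil lift-curve slope, 1/rad (assumed)
CD0 = 0.011          # mean profile drag coefficient (assumed)

def hover_performance(omega, theta_root=np.radians(12.0), twist=np.radians(-8.0), n=200):
    """Blade-element estimate with the closed-form hover inflow (uniform-inflow momentum coupling)."""
    r = np.linspace(0.15, 1.0, n) * RADIUS            # radial stations, 15% root cut-out
    dr = r[1] - r[0]
    r_bar = r / RADIUS
    theta = theta_root + twist * r_bar                # linear twist (assumed)
    sigma = N_BLADES * CHORD / (np.pi * RADIUS)       # rotor solidity
    lam = (sigma * A_LIFT / 16.0) * (np.sqrt(1.0 + 32.0 * theta * r_bar / (sigma * A_LIFT)) - 1.0)
    phi = lam / r_bar                                 # inflow angle (small-angle)
    alpha = theta - phi                               # section angle of attack
    u = omega * r
    dL = 0.5 * RHO * u**2 * CHORD * A_LIFT * alpha * dr
    dD = 0.5 * RHO * u**2 * CHORD * CD0 * dr
    thrust = N_BLADES * np.sum(dL)
    power = N_BLADES * np.sum((dL * phi + dD) * u)
    p_ideal = thrust * np.sqrt(thrust / (2.0 * RHO * np.pi * RADIUS**2))
    return thrust, power, p_ideal / power             # thrust, power, figure of merit

if __name__ == "__main__":
    for rpm in (900, 1100, 1300):
        t, p, fm = hover_performance(omega=rpm * 2.0 * np.pi / 60.0)
        print(f"{rpm} rpm: thrust {t:7.1f} N, power {p:8.1f} W, FM {fm:4.2f}")
```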

Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter

Procedia PDF Downloads 156
2008 Maximum Power and Bone Variables in Young Adult Men

Authors: Anthony Khawaja, Jacques Prioux, Ghassan Maalouf, Rawad El Hage

Abstract:

The regular practice of physical activities characterized by significant mechanical stresses stimulates bone formation and improves bone mineral density (BMD) in the most solicited sites. The purpose of this study was to explore the relationships between maximum power and bone variables in a group of young adult men. Identification of new determinants of BMD, bone mineral content (BMC), and hip geometric indices in young adult men would allow screening and early management of future cases of osteopenia and osteoporosis. Fifty-three young adult men (18–35 yr) voluntarily participated in this study. Weight and height were measured, and body mass index was calculated. Body composition, BMC, and BMD were determined for each individual by dual-energy X-ray absorptiometry (DXA; GE Healthcare, Madison, WI) at the whole body (WB), lumbar spine (L1-L4), total hip (TH), and femoral neck (FN). FN cross-sectional area (CSA), strength index (SI), buckling ratio (BR), FN section modulus (Z), cross-sectional moment of inertia (CSMI), and L1-L4 TBS were also evaluated by DXA. The vertical jump was evaluated using a field test (Sargent test). Two main parameters were retained: vertical jump performance (cm) and power (W). The subjects performed three jumps with 2 minutes of recovery between jumps. The highest vertical jump was selected. Maximum power (P max, in watts) was calculated. Maximum power was positively correlated with WB BMD (r = 0.41; p < 0.01), WB BMC (r = 0.65; p < 0.001), L1-L4 BMC (r = 0.54; p < 0.001), FN BMC (r = 0.35; p < 0.01), TH BMC (r = 0.50; p < 0.001), CSMI (r = 0.50; p < 0.001), and CSA (r = 0.33; p < 0.05). Vertical jump was positively correlated with WB BMC (r = 0.31; p < 0.05), L1-L4 BMC (r = 0.40; p < 0.01), and CSMI (r = 0.29; p < 0.05). The current study suggests that maximum power is a positive determinant of BMD, BMC, and hip geometric indices in young adult men. In addition, it also shows that maximum power is a stronger positive determinant of bone variables than vertical jump in this population. Implementing strategies to increase maximum power in young adult men may be useful for preventing osteoporotic fractures later in life.
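
The abstract does not state which regression was used to derive maximum power from the Sargent jump; the sketch below uses the commonly cited Sayers peak-power equation purely as an illustration, together with a Pearson correlation of the kind reported above. The subject data are hypothetical.

```python
# Illustrative sketch only: the Sayers estimate below is an assumption (the
# paper does not state its P max equation), and the subject data are invented.
import numpy as np

def sayers_peak_power(jump_height_cm: float, body_mass_kg: float) -> float:
    """Sayers et al. regression for peak anaerobic power (watts)."""
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

def pearson_r(x, y):
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

# hypothetical subjects: (jump height cm, body mass kg, whole-body BMC g)
subjects = [(45, 70, 2900), (52, 78, 3200), (38, 65, 2700), (60, 82, 3500)]
power = [sayers_peak_power(h, m) for h, m, _ in subjects]
bmc = [b for _, _, b in subjects]
print("P max (W):", [round(p) for p in power])
print("r(P max, WB BMC) =", round(pearson_r(power, bmc), 2))
```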

Keywords: bone variables, maximum power, osteopenia, osteoporosis, vertical jump, young adult men

Procedia PDF Downloads 176
2007 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and its complexity of prediction. This study presents a machine learning model to forecast the significant wave height measured by the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network optimized with a genetic algorithm (GA-MLP), using ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performing wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the prediction optimization of 5 steps forward, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
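
A minimal sketch of the GA-over-MLP idea is shown below, assuming scikit-learn for the MLP and a hand-rolled elitist GA (mutation only, small population) rather than the authors' exact operators; the synthetic series, genome encoding, and population/generation sizes are illustrative.

```python
# Sketch of a GA searching MLP hyperparameters (hidden size, learning rate,
# window width) for one-step-ahead forecasting. Synthetic data and the simple
# mutation-only GA are assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 1500)) + 0.1 * rng.standard_normal(1500)

def windowed(series, width):
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    return X, series[width:]

def fitness(genome):
    hidden, lr, width = genome
    X, y = windowed(series, width)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(hidden,), activation="relu",
                         learning_rate_init=lr, max_iter=300, random_state=0)
    model.fit(X[:split], y[:split])
    return mean_squared_error(y[split:], model.predict(X[split:]))

def random_genome():
    return (int(rng.integers(4, 64)), float(10 ** rng.uniform(-4, -1)),
            int(rng.integers(3, 24)))

def mutate(g):
    h, lr, w = g
    return (max(4, h + int(rng.integers(-8, 9))),
            float(np.clip(lr * 10 ** rng.uniform(-0.3, 0.3), 1e-4, 1e-1)),
            max(3, w + int(rng.integers(-3, 4))))

population = [random_genome() for _ in range(10)]   # paper used 30 individuals
for generation in range(4):                         # paper used 8 generations
    scored = sorted(population, key=fitness)
    parents = scored[:4]                            # elitist truncation selection
    population = parents + [mutate(p) for p in parents for _ in range(2)][:6]
best = min(population, key=fitness)
print("best (hidden, lr, window):", best, "MSE:", round(fitness(best), 5))
```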

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 104
2006 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the take-off sequence of the aircraft is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
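
The following sketch illustrates the core idea of A-star QPPTW on a toy taxiway graph: the heuristic is the remaining free-flow travel time to the target (precomputed with a reverse Dijkstra), and each edge carries reserved time intervals from previously routed aircraft that the expansion must wait out. The graph, taxi times, and reservations are invented for illustration; the real algorithm additionally handles pushback, stand holding, and runway sequencing as described above.

```python
# Toy time-based A*: heuristic = free-flow time-to-go, edges keep reservations
# from previously routed aircraft. All values below are illustrative.
import heapq

graph = {  # node -> {neighbour: taxi time in seconds}
    "gate": {"A": 30}, "A": {"B": 40, "C": 60},
    "B": {"runway": 50}, "C": {"runway": 20}, "runway": {},
}
reserved = {("A", "B"): [(60, 120)]}   # edge A->B blocked 60..120 s by another aircraft

def free_flow_times_to(target):
    """Reverse Dijkstra giving an admissible time-to-go heuristic."""
    rev = {n: {} for n in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            rev[v][u] = w
    dist, pq = {target: 0}, [(0, target)]
    while pq:
        d, u = heapq.heappop(pq)
        for v, w in rev[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def a_star_quickest(start, target, depart=0):
    h = free_flow_times_to(target)
    pq = [(h.get(start, float("inf")) + depart, depart, start, [start])]
    best = {}
    while pq:
        _, t, node, path = heapq.heappop(pq)
        if node == target:
            return t, path
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, w in graph[node].items():
            enter = t
            for lo, hi in reserved.get((node, nxt), []):   # wait out reservations
                if enter < hi and enter + w > lo:
                    enter = hi
            arrive = enter + w
            heapq.heappush(pq, (arrive + h.get(nxt, float("inf")), arrive, nxt, path + [nxt]))
    return None

print(a_star_quickest("gate", "runway"))   # -> (110, ['gate', 'A', 'C', 'runway'])
```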

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 225
2005 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that represents the spatial correlation by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of either species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
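
One concrete (assumed) way to realize the graph-based neighborhood idea is sketched below: districts are vertices, candidate edges are weighted by dissimilarity of their incidence profiles, and a sparse spanning structure replaces the binary border-sharing matrix. The data are synthetic and the minimum-spanning-tree choice is only an illustration of the concept, not the authors' algorithm.

```python
# Assumed illustration: estimate a 0/1 neighbourhood matrix from a graph over
# districts instead of the border-sharing rule. Data and the MST choice are
# placeholders for the paper's graph-based optimization.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
incidence = rng.poisson(20, size=(8, 24)).astype(float)   # 8 districts x 24 months

# complete dissimilarity graph between districts (Euclidean distance of profiles)
diff = np.linalg.norm(incidence[:, None, :] - incidence[None, :, :], axis=2)
mst = minimum_spanning_tree(csr_matrix(diff))              # sparse spanning structure

W = (mst.toarray() > 0).astype(int)
W = np.maximum(W, W.T)                                     # symmetric 0/1 neighbourhood matrix
print(W)
print("neighbours of district 0:", np.flatnonzero(W[0]))
```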

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix

Procedia PDF Downloads 75
2004 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network

Authors: Ziying Wu, Danfeng Yan

Abstract:

Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet services, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent multiple computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is formulated as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
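
A stripped-down DQN skeleton for a toy offloading decision (local, edge, or cloud) is sketched below, assuming PyTorch. The state features, cost model, and network size are illustrative, and because each task is treated as a one-step episode the learning target is simply the immediate reward; the paper's JCOTM formulation is a richer MDP with concurrent tasks and task migration.

```python
# Stripped-down DQN skeleton for a toy offloading decision. State features,
# cost weights and network size are assumptions; the paper's JCOTM state and
# action spaces are much richer.
import random
import torch
import torch.nn as nn

ACTIONS = 3  # 0 = process locally, 1 = offload to MEC edge server, 2 = offload to cloud

class QNet(nn.Module):
    def __init__(self, state_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, ACTIONS))
    def forward(self, s):
        return self.net(s)

def reward(state, action):
    """Toy reward = -(delay + 0.5 * energy); smaller cost is better."""
    task_size, edge_load, bandwidth, battery = state
    delay = [task_size / 2.0,                                            # local CPU
             task_size / 8.0 * (1 + edge_load) + task_size / bandwidth,  # MEC server
             task_size / 20.0 + 2.0 * task_size / bandwidth][action]     # remote cloud
    energy = [0.8, 0.2, 0.3][action] * task_size
    return -(delay + 0.5 * energy)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, eps = [], 0.2

for step in range(2000):
    state = torch.tensor([random.uniform(1, 10), random.random(),
                          random.uniform(1, 10), random.random()])
    if random.random() < eps:
        action = random.randrange(ACTIONS)
    else:
        with torch.no_grad():
            action = int(qnet(state).argmax())
    buffer.append((state, action, reward(state.tolist(), action)))
    if len(buffer) >= 64:                       # replay a mini-batch
        batch = random.sample(buffer, 32)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        # each decision is a one-step episode here, so the target is just the
        # reward; a multi-step migration MDP would bootstrap with r + gamma*maxQ.
        loss = nn.functional.mse_loss(q, r)
        opt.zero_grad(); loss.backward(); opt.step()

sample = torch.tensor([5.0, 0.5, 5.0, 0.8])    # task size, edge load, bandwidth, battery
print("Q-values [local, edge, cloud]:", [round(v, 2) for v in qnet(sample).tolist()])
```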

Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network

Procedia PDF Downloads 114
2003 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach

Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of total CO2 emissions of the world, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been developed vigorously after the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code based design approach cannot address inelastic earthquake responses directly or assure the performance of buildings exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, few studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A 4-story and 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance objective (collapse prevention under the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design results showed that minimized CO2 emissions and cost of the building were obtained while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and cost of buildings designed by the PBD approach.
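
The two-objective formulation can be sketched with the pymoo NSGA-II implementation (an assumption; the paper does not state which implementation was used). The toy member take-off, emission factors, unit costs, and the capacity proxy standing in for the collapse-prevention check are all illustrative, not the paper's structural model.

```python
# Sketch of a two-objective (CO2, cost) frame design with NSGA-II via pymoo.
# Emission factors, costs and the capacity proxy are illustrative assumptions.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class FrameDesign(ElementwiseProblem):
    def __init__(self):
        # x = [column size (m), longitudinal steel ratio]
        super().__init__(n_var=2, n_obj=2, n_ieq_constr=1,
                         xl=np.array([0.30, 0.005]), xu=np.array([0.80, 0.04]))

    def _evaluate(self, x, out, *args, **kwargs):
        col, rho = x
        v_conc = 4 * 5 * col**2 * 3.5                 # toy concrete take-off, m3
        m_steel = rho * v_conc * 7850.0               # kg of reinforcement
        co2 = 350.0 * v_conc + 1.9 * m_steel          # illustrative emission factors
        cost = 120.0 * v_conc + 1.0 * m_steel         # illustrative unit costs
        capacity = col**2 * (1.0 + 40.0 * rho)        # toy collapse-prevention proxy
        out["F"] = [co2, cost]
        out["G"] = [0.25 - capacity]                  # <= 0 means demand satisfied

res = minimize(FrameDesign(), NSGA2(pop_size=40), ("n_gen", 60), seed=1, verbose=False)
print("Pareto set (column size, steel ratio):\n", np.round(res.X, 3))
print("Objectives (CO2 kg, cost $):\n", np.round(res.F, 1))
```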

Keywords: CO2 emissions, performance based design, optimization, sustainable design

Procedia PDF Downloads 404
2002 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing

Authors: Yuning Guan

Abstract:

Under the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the MSPA model, the MCR model, and the Linkage Mapper tools to identify ecological sources and corridors in the Shapingba District of Chongqing and, combined with a landscape connectivity assessment and circuit theory, delineates importance levels and extracts ecological pinch-point areas on the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area, and showing a pattern of ‘one core, three corridors, multi-point distribution’. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a ‘more in the west, less in the east’ pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the length of the corridors, which are mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies for sub-area protection of ecological sources, graded construction of ecological corridors, and precise restoration of ecological pinch points.

Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, shapingba district

Procedia PDF Downloads 40
2001 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization

Authors: Shahrukh Ahmad, Purnendu Bose

Abstract:

Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom performed in sewage treatment plants, and large volumes of wastewater that still contain nutrients are discharged, which can lead to eutrophication. That is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate values of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated with a 9-hour light and 3-hour dark cycle, using only one color of bulb per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol·m⁻²·s⁻¹, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days. A quantum PAR (Photosynthetically Active Radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (INR) per unit light intensity (µmol·m⁻²·s⁻¹), at 19.40, while the green LED bulbs had the highest, at 115.11. Based on these comparative tests, it was concluded that the white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, which helps to improve the overall wastewater quality before it is discharged back into the environment.

Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, led bulbs

Procedia PDF Downloads 71
2000 Educational Engineering Tool on Smartphone

Authors: Maya Saade, Rafic Younes, Pascal Lafon

Abstract:

This paper explores the transformative impact of smartphones on pedagogy and presents a smartphone application developed specifically for engineering problem-solving and educational purposes. The widespread availability and advanced capabilities of smartphones have revolutionized the way we interact with technology, including in education. The ubiquity of smartphones allows learners to access educational resources anytime and anywhere, promoting personalized and self-directed learning. The first part of this paper discusses the overall influence of smartphones on pedagogy, emphasizing their potential to improve learning experiences through mobile technology. In the context of engineering education, this paper focuses on the development of a dedicated smartphone application that serves as a powerful tool for both engineering problem-solving and education. The application features an intuitive and user-friendly interface, allowing engineering students and professionals to perform complex calculations and analyses on their smartphones. The smartphone application primarily focuses on beam calculations and serves as a comprehensive beam calculator tailored to engineering education. It caters to various engineering disciplines by offering interactive modules that allow students to learn key concepts through hands-on activities and simulations. With a primary emphasis on beam analysis, this application empowers users to perform calculations for statically determinate beams, statically indeterminate beams, and beam buckling phenomena. Furthermore, the app includes a comprehensive library of engineering formulas and reference materials, facilitating a deeper understanding and practical application of the fundamental principles of beam analysis. By offering a wide range of features specifically tailored for beam calculation, this application provides an invaluable tool for engineering students and professionals looking to enhance their understanding and proficiency in this crucial aspect of structural engineering.
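
Two of the closed-form checks such a beam calculator performs, mid-span deflection of a simply supported beam under a uniform load and the Euler buckling load of a pinned column, are sketched below; the section, span, and load values are illustrative and the app's actual interface and feature set are as described above.

```python
# Textbook beam checks of the kind a beam-calculator app implements. The
# section, span and load values below are illustrative examples only.
import math

E = 210e9                      # steel Young's modulus, Pa

def rect_moment_of_inertia(b, h):
    """Second moment of area of a rectangular section about its strong axis."""
    return b * h**3 / 12.0

def udl_midspan_deflection(w, L, EI):
    """Simply supported beam under a uniform load w (N/m): 5 w L^4 / (384 EI)."""
    return 5.0 * w * L**4 / (384.0 * EI)

def euler_buckling_load(EI, L, K=1.0):
    """Euler critical load P_cr = pi^2 EI / (K L)^2, K = effective-length factor."""
    return math.pi**2 * EI / (K * L)**2

I = rect_moment_of_inertia(b=0.05, h=0.10)          # 50 x 100 mm section
print("mid-span deflection (mm):", round(1e3 * udl_midspan_deflection(w=2000.0, L=3.0, EI=E * I), 2))
print("Euler buckling load (kN):", round(1e-3 * euler_buckling_load(EI=E * I, L=3.0), 1))
```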

Keywords: mobile devices in education, solving engineering problems, smartphone application, engineering education

Procedia PDF Downloads 65
1999 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque

Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya

Abstract:

The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs with child-parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages by minimizing the variance of the potential energy. It uses the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in the actuator torque requirement with a practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where a parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are lower in cost and facilitate the development of low-cost devices. Although the method provides only an approximate balance, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring, and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resulting arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
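
A minimal sketch of the "flatten the potential energy over the workspace" idea for a single gravity-loaded link is given below: the spring constant, free length, and attachment distances are chosen to minimize the variance of the total potential energy over the joint range. The link mass, geometry, and bounds are illustrative assumptions, not the humanoid-arm parameters.

```python
# Approximate spring balancing of one gravity-loaded link by minimising the
# variance of total potential energy over the joint range. Mass, centre-of-mass
# distance and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

M, G, LC = 2.0, 9.81, 0.25                 # link mass (kg), gravity, COM distance (m)
THETA = np.linspace(0.0, np.pi / 2, 91)    # working range of the joint (rad)

def total_pe(params, theta=THETA):
    k, s0, a, b = params                   # spring rate, free length, anchor heights
    # anchor at (0, a) on the parent link, attachment at distance b on the moving link
    spring_len = np.sqrt(a**2 + b**2 - 2 * a * b * np.sin(theta))
    pe_gravity = M * G * LC * np.sin(theta)
    pe_spring = 0.5 * k * (spring_len - s0) ** 2    # non-zero-free-length spring
    return pe_gravity + pe_spring

def pe_variance(params):
    return np.var(total_pe(params))

x0 = np.array([200.0, 0.10, 0.20, 0.20])            # initial guess: k, s0, a, b
res = minimize(pe_variance, x0, method="L-BFGS-B",
               bounds=[(10, 2000), (0.01, 0.3), (0.05, 0.4), (0.05, 0.4)])
k, s0, a, b = res.x
print(f"k = {k:.1f} N/m, s0 = {s0:.3f} m, a = {a:.3f} m, b = {b:.3f} m")
print("PE variance before/after:", round(pe_variance(x0), 4), round(res.fun, 6))
```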

Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot

Procedia PDF Downloads 241
1998 Modern Seismic Design Approach for Buildings with Hysteretic Dampers

Authors: Vanessa A. Segovia, Sonia E. Ruiz

Abstract:

The use of energy dissipation systems for seismic applications has increased worldwide, thus it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is composed of two individual structural systems: 1) a main elastic structural frame designed for service loads, and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure involves controlling two design parameters: A) the stiffness ratio (α = K_frame / K_total system), and B) the strength ratio (γ = V_damper / V_total system). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The reduction of the design displacement spectrum is achieved by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces located on the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is intended for preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.

Keywords: damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems, hysteretic dampers

Procedia PDF Downloads 481
1997 Non-Linear Static Analysis of Screwed Moment Connections in Cold-Formed Steel Frames

Authors: Jikhil Joseph, Satish Kumar S R.

Abstract:

Cold-formed steel frames are preferable for framed construction due to their low seismic weight, which results in low seismic forces; on the other hand, significant lateral deflections are expected under seismic/wind loading. The factors affecting the lateral stiffness of steel frames are the stiffness of the connections, beams, and columns. So, increasing the stiffness of the beams and columns and making the connections rigid will enhance the lateral stiffness. The present study focuses on structural elements made of rectangular hollow sections and fastened with screwed in-plane moment connections for building frames. The self-drilling screws can be easily drilled on either side of the connection area with the help of gusset plates. The strength of the screwed connections can be made 1.2 times that of the connected elements. However, achieving high stiffness in the connections is also a challenging task. Hence, in addition to the beam and column stiffness, the connection stiffness is also going to be a governing parameter in the lateral deflection of the frames. A non-linear static analysis in SAP2000 has been planned to study the seismic behavior of the steel frames. The SAP model will consist of a nonlinear spring model for the connections, to account for their semi-rigidity, and nonlinear hinges will be assigned to the beam and column sections according to FEMA 273 guidelines. Reliable spring and hinge parameters will be assigned based on an experimental and analytical database. The non-linear static analysis is mainly focused on the identification of the various hinge formations and the estimation of the lateral deflection, and these will serve as inputs for the direct displacement-based seismic design. The research outputs from this study are the modelling techniques and suitable design guidelines for the performance-based seismic design of cold-formed steel frames.

Keywords: buckling, cold formed steel, nonlinear static analysis, screwed connections

Procedia PDF Downloads 174
1996 Preparation and Properties of Gelatin-Bamboo Fibres Foams for Packaging Applications

Authors: Luo Guidong, Song Hang, Jim Song, Virginia Martin Torrejon

Abstract:

Due to their excellent properties, polymer packaging foams have become increasingly essential in our current lifestyles. They are cost-effective and lightweight, with excellent mechanical and thermal insulation properties. However, they constitute a major environmental and health concern due to litter generation, ocean pollution, and microplastic contamination of the food chain. In recent years, considerable efforts have been made to develop more sustainable alternatives to conventional polymer packaging foams. As a result, biobased and compostable foams are increasingly becoming commercially available, such as starch-based loose-fill or PLA trays. However, there is still a need for bulk manufacturing of bio-foam planks for packaging applications as a viable alternative to their fossil-fuel counterparts (i.e., polystyrene, polyethylene, and polyurethane). Gelatin is a promising biopolymer for packaging applications due to its biodegradability, availability, and biocompatibility, but its mechanical properties are poor compared to conventional plastics. However, as widely reported for other biopolymers such as starch, the mechanical properties of gelatin-based bioplastics can be enhanced by formulation optimization, such as the incorporation of fibres from different crops, such as bamboo. This research aimed to produce gelatin-bamboo fibre foams by mechanical foaming and to study the effect of fibre content on the foams' properties and structure. As a result, foams with virtually no shrinkage, low density (<40 kg/m³), low thermal conductivity (<0.044 W/m•K), and mechanical properties comparable to conventional plastics were produced. Further work should focus on developing formulations suitable for the packaging of water-sensitive products and on processing optimization, especially the reduction of the drying time.

Keywords: biobased and compostable foam, sustainable packaging, natural polymer hydrogel, cold chain packaging

Procedia PDF Downloads 99
1995 A Study in Optimization of FSI(Floor Space Index) in Kerala

Authors: Anjali Suresh

Abstract:

Kerala is well known for its unique settlement pattern, comprising, for the most part, a continuous spread of habitation. The notable urbanization trend in Kerala is urban spread rather than concentration, which points to the increasing urbanization of the peripheral areas of existing urban centers. This has thrown up a challenge for the authorities to cater to the needs of the urban population, such as providing affordable housing and infrastructure facilities to sustain their livelihoods, a matter of concern that needs policy attention in fixing the optimum FSI value. Recent reports from the UN (Post Disaster Needs Analysis, PDNA), which address the unsafe situation created by the carpet FAR/FSI practice in a state with varying geological and climatic conditions, should also be a matter of concern. The FSI (floor space index, the ratio of the built-up space on a plot to the area of the plot) is certainly one of the key regulatory factors in checking land utilization for the varying occupancies desired for the overall development of a state with limited land availability compared to its neighbours. The pattern of urbanization, physical conditions, topography, etc., vary within the state and can change remarkably over time, which indicates that the FSI norms currently practiced in Kerala do not fulfil their intended function. Thus, the FSI regulation is expected to change dynamically from location to location. So, for determining the optimum value of FSI/FAR for a region in the state of Kerala, government agencies should consider the optimum land utilization for the growing urbanization while, on the other hand, keeping in check the overutilization of the same, in keeping with the environmental and geographic nature of the region. Therefore, the study identifies parameters that should be considered for assigning FSI within the Kerala context and, through expert surveys and opinions, arrives at a methodology for assigning an optimum FSI value for a region in the state of Kerala.

Keywords: floor space index, urbanization, density, civic pressure, optimization

Procedia PDF Downloads 97
1994 Solving a Micromouse Maze Using an Ant-Inspired Algorithm

Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira

Abstract:

This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the family of Swarm Intelligence-based algorithms, a subset of biologically inspired algorithms. The starting point is a problem in which one has a maze and needs to find the path to its center and return to the starting position. This is similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants’ movement while the algorithm optimizes the routes to the maze’s center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, and a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
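
Because the paper's implementation is in Scratch/m-Block, the Python sketch below only illustrates the ant-colony mechanics it describes, probabilistic path construction, pheromone deposit proportional to path quality, and evaporation, on a toy cell graph standing in for a micromouse maze.

```python
# Toy ant-colony search for the centre of a maze-like cell graph. The maze,
# parameters and Python language are illustrative; the paper's version runs
# in Scratch/m-Block with a graphical maze.
import random

# maze as an adjacency list; "S" is the start cell, "C" the centre
maze = {"S": ["a", "b"], "a": ["S", "c"], "b": ["S", "d"],
        "c": ["a", "C"], "d": ["b", "e"], "e": ["d", "C"],
        "C": ["c", "e"]}
pheromone = {(u, v): 1.0 for u in maze for v in maze[u]}
EVAPORATION, DEPOSIT, ANTS, ITERATIONS = 0.3, 10.0, 20, 30

def construct_path(start="S", goal="C"):
    path, node = [start], start
    while node != goal:
        options = [n for n in maze[node] if n not in path] or maze[node]
        weights = [pheromone[(node, n)] for n in options]
        node = random.choices(options, weights=weights)[0]
        path.append(node)
        if len(path) > 50:          # give up on hopeless wanderers
            return None
    return path

best = None
for _ in range(ITERATIONS):
    paths = [p for p in (construct_path() for _ in range(ANTS)) if p]
    for key in pheromone:           # evaporation
        pheromone[key] *= (1.0 - EVAPORATION)
    for p in paths:                 # deposit: shorter paths get more pheromone
        for u, v in zip(p, p[1:]):
            pheromone[(u, v)] += DEPOSIT / len(p)
        if best is None or len(p) < len(best):
            best = p
print("best path found:", best)
```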

Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking

Procedia PDF Downloads 122
1993 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads

Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani

Abstract:

The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
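
The RSM workflow can be illustrated with a short sketch: fit a full quadratic model of removal efficiency in pH, contact time, and dosage at Box-Behnken design points, then locate the maximum of the fitted surface. The response values below are synthetic placeholders, not the paper's experimental data.

```python
# Sketch of the RSM step: quadratic fit over a three-factor Box-Behnken design
# and location of the predicted optimum. Responses are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

# Box-Behnken design points in natural units (pH, contact time min, dosage g/L)
X = np.array([[3, 30, 1.5], [3, 150, 1.5], [9, 30, 1.5], [9, 150, 1.5],
              [3, 90, 0.5], [3, 90, 2.5], [9, 90, 0.5], [9, 90, 2.5],
              [6, 30, 0.5], [6, 30, 2.5], [6, 150, 0.5], [6, 150, 2.5],
              [6, 90, 1.5], [6, 90, 1.5], [6, 90, 1.5]], float)
y = np.array([50, 68, 52, 72, 55, 60, 57, 62, 62, 70, 80, 88, 90, 91, 89], float)

def quad_features(x):
    p, t, d = x.T
    return np.column_stack([np.ones_like(p), p, t, d, p*t, p*d, t*d, p**2, t**2, d**2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)   # least-squares fit

def predicted_removal(x):
    return float(quad_features(np.atleast_2d(x)) @ beta)

res = minimize(lambda x: -predicted_removal(x), x0=[6.0, 90.0, 1.5],
               bounds=[(3, 9), (30, 150), (0.5, 2.5)])
print("optimum (pH, time, dosage):", np.round(res.x, 2))
print("predicted removal (%):", round(predicted_removal(res.x), 1))
```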

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 30
1992 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is an important power quality issue due to the interaction between widely deployed non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying mitigation arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single-tuned passive filter that works best in distribution networks in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single-tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
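
A worked sizing example for a single-tuned branch is sketched below: choose the fundamental reactive power the filter should supply, tune it slightly below the target harmonic, and derive C, L, and R from a chosen quality factor. The network values are illustrative; in the paper these quantities are selected by constrained optimization against VTHD, ITHD, and power-factor limits.

```python
# Textbook single-tuned filter sizing. Voltage, reactive power, harmonic order,
# quality factor and detuning are illustrative values, not the paper's data.
import math

def single_tuned_filter(v_ll, q_var, harmonic, quality=40.0, f1=50.0, detune=0.98):
    """Return C (F), L (H), R (ohm) for a single-tuned branch."""
    w1 = 2 * math.pi * f1
    xc = v_ll**2 / q_var               # capacitive reactance at the fundamental
    c = 1.0 / (w1 * xc)
    wn = detune * harmonic * w1        # tune slightly below the target harmonic
    l = 1.0 / (wn**2 * c)
    xn = math.sqrt(l / c)              # characteristic reactance at the tuning frequency
    r = xn / quality                   # series resistance from the quality factor
    return c, l, r

c, l, r = single_tuned_filter(v_ll=400.0, q_var=50e3, harmonic=5)
print(f"C = {c*1e6:.1f} uF, L = {l*1e3:.3f} mH, R = {r:.4f} ohm")
```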

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 304
1991 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

The key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which is time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models was carried out: Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks were applied to the detection and classification of defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study used a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs achieved 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity and color variation. These performance metrics represent a major improvement in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models speed up defect characterization, bringing detection time below 15 seconds with real-time data processing, down from an average of 3 minutes for manual inspection. This time saving is combined with a 25% reduction in production downtime due to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine learning-driven monitoring also drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. Therefore, optimizing machine learning algorithms for defect characterization gives liquid detergent companies scalability, efficiency, and improved operational performance, leading to higher levels of product quality. In general, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
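
One branch of such a study, an SVM with hyperparameter tuning for formulation-defect classification from tabular process features, can be sketched with scikit-learn as below; the synthetic data stand in for the production dataset described above.

```python
# SVM defect classifier with grid-searched hyperparameters. Features, labels
# and thresholds are synthetic stand-ins for the production data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([rng.normal(1.0, 0.15, n),    # viscosity (Pa*s)
                     rng.normal(7.0, 0.5, n),     # pH
                     rng.normal(55.0, 3.0, n)])   # colour index
y = ((np.abs(X[:, 0] - 1.0) > 0.25) | (np.abs(X[:, 2] - 55.0) > 5.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
grid = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                    param_grid={"svc__C": [0.1, 1, 10],
                                "svc__gamma": ["scale", 0.1, 1.0]},
                    cv=3, n_jobs=-1)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print(classification_report(y_te, grid.predict(X_te), target_names=["ok", "defect"]))
```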

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 0
1990 The Sub-Optimality of the Electricity Subsidy on Tube Wells in Balochistan (Pakistan): An Analysis Based on Socio-Cultural and Policy Distortions

Authors: Rameesha Javaid

Abstract:

Agriculture is the backbone of the economy of the province of Balochistan, which is known as the ‘fruit basket’ of Pakistan. Its climate zones, comprising highlands and plateaus dependent on rain water, are more suited to the production of deciduous fruit. The vagaries of the weather, and more so the persistent droughts, prompted the government to announce flat monthly electricity rates irrespective of the size of the farm, the quantum of water used, and the category of crop group. That has, no doubt, resulted in increased cropping intensity, more production, and employment, but it has enormously burdened the official exchequer, which picks up the residual bills in certain percentages shared amongst the federal and provincial governments and the local electricity company. This study tests the desirability of continuing the subsidy in its present mode. Optimization of the social welfare of farmers has been the focus of the study, with emphasis on the contribution of positive externalities and on the distortions caused in terms of negative externalities. By using the optimization technique with due allowance for distortions, it has been established that the subsidy calls for limiting policy distortions, as they cause sub-optimal utilization of the tube well subsidy, and for improved policy programming. A sensitivity analysis with changed rankings of the variables contributing to social welfare does not significantly change the result. This leads to the net findings and policy recommendations of significantly reducing the subsidy size, correcting and curtailing policy distortions, and targeting the subsidy grant more towards small farmers, so as to generate more welfare by saving a sizeable amount from the subsidy for investment in the wellbeing of farmers in rural Balochistan.

Keywords: distortion, policy distortion, socio-cultural distortion, social welfare, subsidy

Procedia PDF Downloads 287
1989 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol

Authors: Idan Chiyanzu, Maruping Mangena

Abstract:

Crude glycerol, a major by-product of the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can afford a wide range of chemicals including alcohols, organic acids, hydrogen, solvents, and intermediate compounds. In bacteria, the 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among the Pseudomonas syringae species and have been recognized as being of emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for crude glycerol conversion to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus enhanced production of the ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration, and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into higher-value products, including light olefins, and that tools including DNA recombineering, molecular biology, and bioengineering techniques can be used to enable the production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly more secure, renewable, and environmentally safe alternative to the thermochemical approach to ethylene production.

Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization

Procedia PDF Downloads 277
1988 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility

Authors: Yi-Ling Chen, Dung-Ying Lin

Abstract:

In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.

Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence

Procedia PDF Downloads 14
1987 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources that are estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to substitute for coal. Nonetheless, the geological complexity of China’s shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate to a significant degree the U.S. shale gas boom, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China’s gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through the application of three scenarios for projected domestic shale gas supply by 2020: optimistic, medium, and conservative, taking reference from the International Energy Agency’s (IEA’s) projections and China’s shale gas development plans. Separately, we project the gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters were collected from provincial “twelfth five-year” plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model is implemented in GAMS and Matlab. It aims to minimize the demand that cannot be met, while simultaneously seeking to minimize total gas supply and transmission costs. The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results of the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of some of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
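
A toy version of the pipeline model is sketched below as a linear program that minimizes transmission cost plus a heavy penalty on unmet demand, subject to arc capacities and supply limits. The node names, costs, and capacities are invented for illustration and are not the paper's network data; the actual model is multi-objective and far larger.

```python
# Toy pipeline allocation LP: cost-minimal flows with penalised unmet demand.
# All network data below are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

arcs = [("Sichuan", "Henan", 8.0, 12.0),      # (from, to, unit cost per bcm, capacity in bcm)
        ("Sichuan", "Shanghai", 11.0, 6.0),
        ("Xinjiang", "Henan", 14.0, 10.0),
        ("Xinjiang", "Shanghai", 16.0, 9.0)]
supply = {"Sichuan": 15.0, "Xinjiang": 12.0}
demand = {"Henan": 14.0, "Shanghai": 10.0}
PENALTY = 1000.0                              # cost per bcm of unmet demand

n_arc, dem_nodes = len(arcs), list(demand)
c = np.array([a[2] for a in arcs] + [PENALTY] * len(dem_nodes))

A_ub, b_ub = [], []                           # supply limits: outflow <= supply
for s, cap in supply.items():
    A_ub.append([1.0 if a[0] == s else 0.0 for a in arcs] + [0.0] * len(dem_nodes))
    b_ub.append(cap)

A_eq, b_eq = [], []                           # demand balance: inflow + unmet = demand
for j, d in enumerate(dem_nodes):
    A_eq.append([1.0 if a[1] == d else 0.0 for a in arcs] +
                [1.0 if k == j else 0.0 for k in range(len(dem_nodes))])
    b_eq.append(demand[d])

bounds = [(0.0, a[3]) for a in arcs] + [(0.0, None)] * len(dem_nodes)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
for a, f in zip(arcs, res.x[:n_arc]):
    print(f"{a[0]:>8} -> {a[1]:<8} flow {f:5.2f} bcm")
print("unmet demand:", dict(zip(dem_nodes, np.round(res.x[n_arc:], 2))))
```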

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 282
1986 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity

Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz

Abstract:

The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, increasingly powerful, quality-of-service-aware, and power-efficient computing units are necessary. Recently, cellular technology has drawn more attention as a way to ensure reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Excessive HOs may lead to “ping-pong” between the UAVs and the serving cells, degrading the quality of service and increasing energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (Rural, Semi-rural, and Urban areas). We also consider the impact of the decision distance, i.e., the distance at which the drone may make a switching decision, on the number of HOs. Our results show that the Q-learning-based algorithm significantly reduces the average number of HOs compared to a baseline case where the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e., Rural, Semi-rural, and Urban.
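
The sketch below illustrates the core of such a Q-learning approach on a toy environment: a drone moving along a fixed trajectory chooses at each step which cell to be served by, receiving a reward that favours signal strength and penalizes each handover. The cell layout, signal values, and hyper-parameters are hypothetical and much simpler than the paper's Rural, Semi-rural, and Urban scenarios.

# Minimal tabular Q-learning sketch for handover decisions (assumed environment).
import random
import numpy as np

N_CELLS, N_STEPS = 4, 20
random.seed(0)
# rss[p][c] = received signal of cell c at trajectory position p (hypothetical)
rss = [[random.uniform(0, 1) for _ in range(N_CELLS)] for _ in range(N_STEPS)]

HO_COST, ALPHA, GAMMA, EPS = 0.3, 0.1, 0.9, 0.1
# State: (position, serving cell); action: which cell to serve next
Q = np.zeros((N_STEPS, N_CELLS, N_CELLS))

def reward(pos, serving, action):
    # good signal is rewarded; every handover pays a fixed penalty
    return rss[pos][action] - (HO_COST if action != serving else 0.0)

for episode in range(2000):
    serving = int(np.argmax(rss[0]))          # start on the strongest cell
    for pos in range(N_STEPS - 1):
        if random.random() < EPS:             # epsilon-greedy exploration
            action = random.randrange(N_CELLS)
        else:
            action = int(np.argmax(Q[pos, serving]))
        r = reward(pos, serving, action)
        best_next = np.max(Q[pos + 1, action])
        Q[pos, serving, action] += ALPHA * (r + GAMMA * best_next - Q[pos, serving, action])
        serving = action

# Greedy roll-out of the learned policy: count handovers along the trajectory
serving, handovers = int(np.argmax(rss[0])), 0
for pos in range(N_STEPS - 1):
    action = int(np.argmax(Q[pos, serving]))
    handovers += int(action != serving)
    serving = action
print("handovers with learned policy:", handovers)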

Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance

Procedia PDF Downloads 106
1985 Technical Sustainable Management: An Instrument to Increase Energy Efficiency in Wastewater Treatment Plants, a Case Study in Jordan

Authors: Dirk Winkler, Leon Koevener, Lamees AlHayary

Abstract:

This paper contributes to the improvement of municipal wastewater systems in Jordan. An important goal is increased energy efficiency in wastewater treatment plants and therefore lower expenses due to reduced electricity consumption. The chosen way to achieve this goal is the implementation of Technical Sustainable Management adapted to the Jordanian context. Three wastewater treatment plants in Jordan were chosen as a case study for the investigation. These choices were supported by the fact that the three treatment plants are of average performance and size. Beyond that, an energy assessment had recently been conducted in those facilities. The project succeeded in proving the following hypothesis: energy efficiency in wastewater treatment plants can be improved by implementing principles of Technical Sustainable Management adapted to the Jordanian context. As this case study shows, a significant increase in energy efficiency can be achieved by optimizing operational performance, identifying and eliminating shortcomings, and managing the plant appropriately. Implementing Technical Sustainable Management as a low-cost tool with a comparatively small workload provides several additional benefits beyond increased energy efficiency, including compliance with all legal and technical requirements, process optimization, increased work safety, and more convenient working conditions. Research in this field continues because there are indications that the adapted tool could be integrated into other regions and sectors. The concept of Technical Sustainable Management adapted to the Jordanian context could be extended to other wastewater treatment plants in all regions of Jordan, but also to other sectors including water treatment, water distribution, wastewater networks, desalination, and the chemical industry.

Keywords: energy efficiency, quality management system, technical sustainable management, wastewater treatment

Procedia PDF Downloads 157
1984 Revolutionizing Manufacturing: Embracing Additive Manufacturing with Eggshell Polylactide (PLA) Polymer

Authors: Choy Sonny Yip Hong

Abstract:

This abstract presents an exploration into the creation of a sustainable bio-polymer compound for additive manufacturing, specifically 3D printing, with a focus on eggshells and polylactide (PLA) polymer. The project initially conducted experiments using a variety of food by-products to create bio-polymers, and promising results were obtained when combining eggshells with PLA polymer. The research journey involved precise measurements, drying of PLA to remove moisture, and the utilization of a filament-making machine to produce 3D printable filaments. The project began with exploratory research and experiments, testing various combinations of food by-products to create bio-polymers. After careful evaluation, it was discovered that eggshells and PLA polymer produced promising results. The initial mixing of the two materials involved heating them just above the melting point. To make the compound 3D printable, the research focused on finding the optimal formulation and production process. The process started with precise measurements of the PLA and eggshell materials. The PLA was placed in a heating oven to remove any absorbed moisture. Handmade testing samples were created to guide the planning for 3D-printed versions. The scrap PLA was recycled and ground into a powdered state. The drying process involved gradual moisture evaporation, which required several hours. The PLA and eggshell materials were then placed into the hopper of a filament-making machine. The machine's four heating elements controlled the temperature of the melted compound mixture, allowing for optimal filament production with accurate and consistent thickness. The filament-making machine extruded the compound, producing filament that could be wound on a wheel. During the testing phase, trials were conducted with different percentages of eggshell in the PLA mixture, including a high percentage (20%). However, poor extrusion results were observed for high eggshell percentage mixtures. Samples were created, and continuous improvement and optimization were pursued to achieve filaments with good performance. To test the 3D printability of the DIY filament, a 3D printer was used and configured to print it smoothly and consistently. Samples were printed and mechanically tested using a universal testing machine to determine their mechanical properties. This testing process allowed for the evaluation of the filament's performance and suitability for additive manufacturing applications. In conclusion, the project explores the creation of a sustainable bio-polymer compound using eggshells and PLA polymer for 3D printing. The research journey involved precise measurements, drying of PLA, and the utilization of a filament-making machine to produce 3D printable filaments. Continuous improvement and optimization were pursued to achieve filaments with good performance. The project's findings contribute to the advancement of additive manufacturing, offering opportunities for design innovation, carbon footprint reduction, supply chain optimization, and collaborative potential. The utilization of eggshell PLA polymer in additive manufacturing has the potential to revolutionize the manufacturing industry, providing a sustainable alternative and enabling the production of intricate and customized products.

Keywords: additive manufacturing, 3D printing, eggshell PLA polymer, design innovation, carbon footprint reduction, supply chain optimization, collaborative potential

Procedia PDF Downloads 70
1983 Optimization of Enzymatic Hydrolysis of Cooked Porcine Blood to Obtain Hydrolysates with Potential Biological Activities

Authors: Miguel Pereira, Lígia Pimentel, Manuela Pintado

Abstract:

Animal blood is a major by-product of slaughterhouses and still represents a cost and environmental problem in some countries. To be disposed of, blood should be stabilised by cooking, and afterwards slaughterhouses must pay for its incineration. In order to reduce disposal costs and valorise the high protein content, the aim of this study was to optimize the hydrolysis conditions, in terms of enzyme ratio and time, so as to obtain hydrolysates with biological activity. Two enzymes were tested in this assay: pepsin and proteases from Cynara cardunculus (cardosins). The latter have the advantage of being widely used in the Portuguese dairy industry and having a low price. The screening assays were carried out over reaction times between 0 and 10 h and enzyme/reaction volume ratios between 0 and 5%. The assays were performed at the optimal conditions of pH and temperature for each enzyme: 55 °C at pH 5.2 for cardosins and 37 °C at pH 2.0 for pepsin. After reaction, the hydrolysates were evaluated by FPLC (Fast Protein Liquid Chromatography) and tested for their antioxidant activity by the ABTS method. FPLC chromatograms showed different profiles when comparing the enzymatic reactions with the control (no enzyme added). The chromatograms exhibited new peaks with lower MW that were not present in control samples, demonstrating hydrolysis by both enzymes. Regarding the antioxidant activity, the best results for both enzymes were obtained using an enzyme/reaction volume ratio of 5% and 5 h of hydrolysis. However, extending the reaction did not significantly affect the antioxidant activity. This is industrially relevant in terms of process cost. In conclusion, enzymatic blood hydrolysis can be a better alternative to the current disposal process, allowing the industry to reuse an ingredient with biological properties and economic value.
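
As a purely illustrative sketch of how such a two-factor screen (enzyme ratio and hydrolysis time versus ABTS antioxidant activity) could be analysed numerically, the code below fits a quadratic response surface by least squares and reports the predicted optimum; all readings are hypothetical placeholders, not the study's measurements.

# Quadratic response-surface fit for a two-factor screen (assumed data).
import numpy as np

ratio = np.array([0, 0, 2.5, 2.5, 5, 5, 5])        # % enzyme / reaction volume
time_h = np.array([0, 5, 2.5, 10, 2.5, 5, 10])      # hours of hydrolysis
abts = np.array([5, 6, 18, 22, 30, 41, 42])         # % ABTS inhibition (hypothetical)

# Design matrix for a full quadratic model: 1, r, t, r*t, r^2, t^2
X = np.column_stack([np.ones_like(ratio), ratio, time_h,
                     ratio * time_h, ratio**2, time_h**2])
coef, *_ = np.linalg.lstsq(X, abts, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum
rr, tt = np.meshgrid(np.linspace(0, 5, 51), np.linspace(0, 10, 101))
grid = np.column_stack([np.ones(rr.size), rr.ravel(), tt.ravel(),
                        (rr * tt).ravel(), (rr**2).ravel(), (tt**2).ravel()])
pred = grid @ coef
best = np.argmax(pred)
print(f"predicted optimum: ratio={rr.ravel()[best]:.1f}%, "
      f"time={tt.ravel()[best]:.1f} h, activity={pred[best]:.1f}")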

Keywords: antioxidant activity, blood, by-products, enzymatic hydrolysis

Procedia PDF Downloads 504
1982 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization

Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay

Abstract:

In recent years, there has been a growing focus on advanced manufacturing processes, and one such emerging process is wire electric discharge machining (WEDM). WEDM is a precision machining process specifically designed for cutting electrically conductive materials with exceptional accuracy. It achieves material removal from the workpiece metal through spark erosion facilitated by electricity. Initially developed as a method for precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts and methods in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mold dies, machining of insulating materials, and manufacturing of micro products. WEDM has particularly found extensive usage in the high-precision machining of complex workpieces that possess varying hardness and intricate shapes. During the cutting process, a wire with a diameter of about 0.18 mm is employed. The evaluation of EDM performance typically revolves around two critical factors: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of machining parameters on these quality characteristics, an Analysis of Variance (ANOVA) was conducted. This statistical analysis aimed to determine the significance of the various machining parameters and their relative contributions in controlling the response of the EDM process. The analysis identified optimal levels of the machining parameters for achieving desirable material removal rates and surface roughness.
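
A compact sketch of this ANOVA step is shown below: a main-effects model for MRR is fitted against three machining parameters and the factors are ranked by significance and percentage contribution. The 2^3 factorial data, the parameter names, and the levels are hypothetical placeholders, not the study's measurements.

# Main-effects ANOVA for MRR versus three machining parameters (assumed data).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "pulse_on":  [1, 1, 1, 1, 2, 2, 2, 2],   # coded low/high levels (hypothetical)
    "current":   [1, 1, 2, 2, 1, 1, 2, 2],
    "wire_feed": [1, 2, 1, 2, 1, 2, 1, 2],
    "MRR":       [4.1, 4.3, 5.0, 5.4, 6.2, 6.5, 7.6, 7.9],  # mm^3/min (hypothetical)
})

model = ols("MRR ~ C(pulse_on) + C(current) + C(wire_feed)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
anova["contribution_%"] = 100 * anova["sum_sq"] / anova["sum_sq"].sum()
print(anova)   # significance (PR(>F)) and relative contribution of each factor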

Keywords: WEDM, MRR, optimization, surface roughness

Procedia PDF Downloads 73
1981 Patient-Specific Design Optimization of Cardiovascular Grafts

Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw

Abstract:

Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example, establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match patients’ growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is performed on models constructed from each individual patient’s data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of evaluating the grafts. We are also developing a methodology for the fabrication of the optimized designs.

Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering

Procedia PDF Downloads 238