Search results for: maintenance optimization
305 Extracorporeal CO2 Removal (ECCO2R): An Option for the Treatment of Refractory Hypercapnic Respiratory Failure
Authors: Shweh Fern Loo, Jun Yin Ong, Than Zaw Oo
Abstract:
Acute respiratory distress syndrome (ARDS) is a common serious condition of bilateral lung infiltrates that develops secondary to various underlying conditions such as diseases or injuries. ARDS with severe hypercapnia is associated with higher ICU mortality and morbidity. Venovenous extracorporeal membrane oxygenation (VV-ECMO) support has been established to avert life-threatening hypoxemia and hypercapnic respiratory failure despite optimal conventional mechanical ventilation. However, VV-ECMO is not advisable in particular groups of patients, especially those with multi-organ failure, advanced age, hemorrhagic complications, or irreversible central nervous system pathology. We present a case of a 79-year-old Chinese lady without any pre-existing lung disease admitted to our hospital intensive care unit (ICU) after acute presentation of breathlessness and chest pain. After extensive workup, she was diagnosed with rapidly progressing acute interstitial pneumonia with ARDS and hypercapnic respiratory failure. The patient received lung-protective mechanical ventilation and neuromuscular blockade therapy as per clinical guidelines. However, the hypercapnic respiratory failure was refractory, and she was deemed not a good candidate for VV-ECMO support given her advanced age and high vasopressor requirements from shock. Alternative therapy with extracorporeal CO2 removal (ECCO2R) was considered and implemented. The patient received 12 days of ECCO2R paired with muscle paralysis, optimization of lung-protective mechanical ventilation, and dialysis. Unfortunately, the patient still had refractory hypercapnic respiratory failure with dual vasopressor support despite prolonged therapy. Given failed and futile medical treatment, the family opted for withdrawal of care, a conservative approach, and comfort care, which led to her demise. The effectiveness of extracorporeal CO2 removal may depend on disease burden, involvement, and severity. There is insufficient data to make strong recommendations about the benefit-risk ratio of ECCO2R devices, and further studies and data are required. Nonetheless, ECCO2R can be considered an alternative treatment for refractory hypercapnic respiratory failure in patients who are unsuitable for initiating venovenous ECMO.
Keywords: extracorporeal CO2 removal (ECCO2R), acute respiratory distress syndrome (ARDS), acute interstitial pneumonia (AIP), hypercapnic respiratory failure
Procedia PDF Downloads 65
304 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion
Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee
Abstract:
Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall-off. Rapid dose fall-off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for others the conformity is simply unacceptable with DCA plans, and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and dose indices compared. We describe a deviation index (DI), which is a measurement of the deviation of the target shape from a sphere, and test its usefulness for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated with the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup. Treatment planning was done with Brainlab iPlan RT Dose (Version 4.5.4). DCA plans were found to give better dose fall-off in terms of R50% (R50%(DCA) = 4.75 vs. R50%(IMRT) = 5.242), while IMRT plans have better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs. TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV to the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI implies a more irregular PTV. To study the usefulness of DI, 23 cases were chosen with PTV volumes ranging from 1.149 cc to 29.83 cc and DI values ranging from 1.059 to 3.202. For each case, we did a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan. The TVR and R50% of each case were then compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% values of the DCA and IMRT plans were examined. The conformity for IMRT plans is better than for DCA plans, with the majority of the TVR(DCA)/TVR(IMRT) ratios > 1 (values ranging from 0.877 to 1.538), while the dose fall-off is better for DCA plans, with the majority of the R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, we can expect the conformity of DCA plans to become unacceptable, and IMRT will be the technique of choice.
Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy
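As a rough illustration of the proposed index, the following Python sketch computes DI from a binary PTV mask; the morphological-dilation approach and helper names are our own assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def deviation_index(ptv_mask, voxel_mm=1.0):
    """Deviation Index: volume of a 1 mm shell around the PTV divided by
    the volume of a 1 mm shell around a sphere of identical volume."""
    voxel_vol = voxel_mm ** 3
    ptv_vol = ptv_mask.sum() * voxel_vol                    # PTV volume, mm^3
    # 1 mm outward shell of the PTV via binary dilation
    n_iter = max(1, int(round(1.0 / voxel_mm)))
    dilated = ndimage.binary_dilation(ptv_mask, iterations=n_iter)
    shell_vol = (dilated.sum() - ptv_mask.sum()) * voxel_vol
    # 1 mm shell of a sphere with the same volume as the PTV
    r = (3.0 * ptv_vol / (4.0 * np.pi)) ** (1.0 / 3.0)
    sphere_shell_vol = 4.0 / 3.0 * np.pi * ((r + 1.0) ** 3 - r ** 3)
    return shell_vol / sphere_shell_vol
```

A near-spherical PTV yields DI close to 1; an irregular target yields a larger DI, which per the conclusion above would point toward IMRT rather than DCA.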
Procedia PDF Downloads 241
303 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates how these are indispensable in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and future generation of HPC computing with Big Data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
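To make the all-to-all comparison workload above concrete, here is a minimal Python sketch of parallelising pairwise comparisons across cores; the toy similarity metric and sequences are placeholders, and a real pipeline would use an alignment score such as Smith-Waterman:

```python
from itertools import combinations
from multiprocessing import Pool

def similarity(pair):
    a, b = pair
    # Placeholder metric: fraction of matching positions over the shorter
    # sequence; stands in for a proper alignment score.
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n

if __name__ == "__main__":
    seqs = ["ACGTACGT", "ACGTTCGT", "TTGTACCA", "ACGAACGT"]   # toy data
    with Pool() as pool:                  # one worker process per core
        scores = pool.map(similarity, combinations(seqs, 2))
    print(scores)                         # one score per sequence pair
```

The number of pairs grows quadratically with the number of sequences, which is exactly why such "well-behaved" algorithms still demand HPC at genomic scale.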
Procedia PDF Downloads 364
302 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid
Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero
Abstract:
The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on 1.2 km² of land owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would not blow up the construction cost nor represent a burden on the municipality's maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality, in only five months and during the summer season (which forced the porous concrete to be poured only overnight), was a challenge never faced before in Spain that required innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required for the porous concrete (compressive strength of 15 N/mm² and 20% voids); b) testing different porous concrete dosages at the construction company laboratory; c) establishing the cross section in order to provide structural strength and sufficient water detention capacity (20 cm porous concrete over 5 cm of 5/10 gravel, which sits on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm² and 0.0032 m/s permeability). The continuous testing and innovating construction process, explained in detail within this article, allowed for a growing performance over time, finally proving the use of the CMI valid also for large porous car park applications. All this resulted in a success story that converts the Wanda Metropolitano Stadium into a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the site also includes rainwater harvesting for grass irrigation).
Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development
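A back-of-the-envelope Python sketch of the sub-surface storage such a cross section can hold per square metre; only the 20% voids of the porous concrete layer is given in the abstract, so the porosities assumed for the gravel and sub-base layers are illustrative:

```python
# Water stored per m2 of car park: layer depth (m) times void fraction
layers = {
    "porous concrete": (0.20, 0.20),   # 20 cm, 20% voids (from the abstract)
    "5/10 gravel":     (0.05, 0.35),   # assumed porosity
    "40/50 sub-base":  (0.50, 0.35),   # assumed porosity
}
storage_m = sum(depth * voids for depth, voids in layers.values())
print(f"~{storage_m * 1000:.0f} mm of rainfall stored per m2")   # ~233 mm
```

Even with conservative porosity assumptions, the pavement build-up stores on the order of a couple of hundred millimetres of rainfall, which is consistent with the large modelled peak-flow reductions reported above.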
Procedia PDF Downloads 144
301 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor
Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa
Abstract:
Piezoresistive pressure sensors were among the first developed microelectromechanical system (MEMS) devices and still display significant growth, prompted by advancements in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature can be considered the main environmental condition affecting system performance. The study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, a study on the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor is discussed. We have optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance. To be more precise, in order to reduce the temperature drift, a high doping concentration is needed. It is well known that the Wheatstone bridge in a pressure sensor is supplied with either a constant voltage or a constant current input. With a constant voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas the thermal drift with a constant current supply can be directly compensated by the bridge itself. But it would be beneficial to also compensate the temperature coefficient of the piezoresistors so as to further reduce the temperature drift. With a current supply, the TCS depends on both TCπ and TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose an appropriate doping concentration at which they cancel each other. An exact cancellation of TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications. Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that the temperature effects can be minimized or eliminated. This paper describes the optimum doping levels for the piezoresistors at which the TCS of the pressure transducers will be zero due to the cancellation of TCR and TCπ values. The fabrication and characterization of the pressure sensor are also carried out. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. Therefore, the desired TCR and TCS values, approximately equal in magnitude, are achieved, so the thermal effects are considerably reduced. Finally, we have calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS
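A minimal Python sketch of the cancellation idea described above: given tabulated TCR and TCπ versus doping concentration (the values below are illustrative, not the paper's measurements), interpolate for the concentration where TCS = TCR + TCπ crosses zero:

```python
import numpy as np

# Illustrative doping levels (ions/cm^3); TCR > 0 rises with doping,
# the magnitude of TCpi < 0 falls with doping (all in ppm/degC)
doping = np.array([1e13, 5e13, 1e14, 5e14, 1e15])
tcr    = np.array([1200.0, 2300.0, 2700.0, 3200.0, 3600.0])
tcpi   = np.array([-3000.0, -2100.0, -1800.0, -1500.0, -1300.0])

tcs = tcr + tcpi                       # constant-current TCS approximation
# Interpolate the zero crossing on a log-concentration axis
d_zero = 10 ** np.interp(0.0, tcs, np.log10(doping))
print(f"TCS ~ 0 near a doping of {d_zero:.2e} ions/cm^3")
```

Since the exact cancellation point is not readily attainable in practice, such an interpolated estimate would only be a starting point for the adjustable compensation approach mentioned in the abstract.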
Procedia PDF Downloads 182
300 6-Degree-of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that bring realistic errors and delays, while the proposed closed-loop approach demonstrates robustness to these challenges. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational motion and rotational motion is addressed via a dual quaternion based kinematic description. In this work, G&C is formulated as a convex optimization problem where constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested with a robotic test bench that has onboard sensors which estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
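For illustration, a minimal convex-MPC sketch for the translational channel only, assuming double-integrator relative dynamics, a quadratic cost, and an infinity-norm thruster limit; the matrices, horizon, and bounds are illustrative rather than the mission values, and the dual-quaternion attitude coupling is omitted:

```python
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                          # step [s], horizon length
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])

x = cp.Variable((6, N + 1))              # relative position and velocity
u = cp.Variable((3, N))                  # commanded accelerations
x0 = np.array([100.0, 50.0, -30.0, 0.0, 0.0, 0.0])   # illustrative start

cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.norm(u[:, k], "inf") <= 0.5]          # thruster limit
cp.Problem(cp.Minimize(cost), cons).solve()
print(u.value[:, 0])   # apply the first command, then re-solve (receding horizon)
```

Because the problem stays convex, each re-solve has deterministic convergence behaviour, which is the property the abstract cites for real-time implementability.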
Procedia PDF Downloads 146
299 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation; the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
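A minimal sketch of the ANN component under stated assumptions: a scikit-learn regressor trained on the logged sensor channels to predict electrical output; the column names and CSV file are hypothetical stand-ins for the data acquisition described above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("floating_pv_log.csv")            # hypothetical sensor log
X = df[["irradiance", "air_temp", "water_temp",
        "panel_temp", "wave_height", "pitch", "heave"]]
y = df["power_output"]                             # target: electrical output

# Hold out part of the log for validation/testing, as the abstract describes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000)
model.fit(X_tr, y_tr)                              # train the digital model
print("R^2 on held-out data:", model.score(X_te, y_te))
```

In a running twin, the same model would be re-fed live sensor streams so that its predictions stay synchronised with the physical rig.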
Procedia PDF Downloads 8
298 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness (SA)) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor agents' knowledge about their environment, and cases were recorded for the network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective agents' path planning strategy and (ii) knowledge understanding and prediction (SA). The path planning approach, inspired by the animal mode of foraging and using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results prove that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving agents' resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results prove their effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithm loss, and Brier score, and the results show that even limited agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper proves the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
Keywords: Lévy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
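A minimal Python sketch of Lévy-distributed step generation (Mantegna's algorithm, a standard construction) combined with a k-previous waypoints check; the parameter values and the rejection rule details are illustrative:

```python
import numpy as np
from math import gamma

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed (Levy) step length."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma)
    v = np.random.normal(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def next_waypoint(pos, history, k=5, min_sep=2.0):
    """Reject candidate steps that land too close to any of the
    k previous waypoints (redundant search avoidance)."""
    while True:
        angle = np.random.uniform(0.0, 2 * np.pi)
        cand = pos + levy_step() * np.array([np.cos(angle), np.sin(angle)])
        if all(np.linalg.norm(cand - h) > min_sep for h in history[-k:]):
            return cand
```

The heavy tail of the Lévy distribution mixes many short local moves with occasional long relocations, which is what gives the strategy its advantage over fixed parallel sweeps.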
Procedia PDF Downloads 144
297 Environmental Planning for Sustainable Utilization of Lake Chamo Biodiversity Resources: Geospatially Supported Approach, Ethiopia
Authors: Alemayehu Hailemicael Mezgebe, A. J. Solomon Raju
Abstract:
Context: Lake Chamo is a significant lake in the Ethiopian Rift Valley, known for its diversity of wildlife and vegetation. However, the lake is facing various threats due to human activities and global effects. The poor management of resources could lead to food insecurity, ecological degradation, and loss of biodiversity. Research Aim: The aim of this study is to analyze the environmental implications of lake level changes using GIS and remote sensing. The research also aims to examine the floristic composition of the lakeside vegetation and propose spatially oriented environmental planning for the sustainable utilization of the biodiversity resources. Methodology: The study utilizes multi-temporal satellite images and aerial photographs to analyze the changes in the lake area over the past 45 years. Geospatial analysis techniques are employed to assess land use and land cover changes and to build a change detection matrix. The composition and role of the lakeside vegetation in the ecological and hydrological functions are also examined. Findings: The analysis reveals that the lake has shrunk by 14.42% over the years, with significant modifications to its upstream segment. The study identifies various threats to the lake-wetland ecosystem, including changes in water chemistry, overfishing, and poor waste management. The study also highlights the impact of human activities on the lake's limnology, with an increase in conductivity, salinity, and alkalinity. Floristic composition analysis of the lake-wetland ecosystem showed a definite pattern of vegetation distribution. The vegetation composition can generally be categorized into three belts, namely the herbaceous belt, the legume belt, and the bush-shrub-small trees belt. The vegetation belts collectively act as a system of different-sized sieve screens and slow the influx of incoming foreign matter. This stratified vegetation provides vital information for deciding the management interventions needed for the sustainability of the lake-wetland ecosystem. Theoretical Importance: The study contributes to the understanding of the environmental changes and threats faced by Lake Chamo. It provides insights into the impact of human activities on the lake-wetland ecosystem and emphasizes the need for sustainable resource management. Data Collection and Analysis Procedures: The study utilizes aerial photographs, satellite imagery, and field observations to collect data. Geospatial analysis techniques are employed to process and analyze the data, including land use/land cover changes and change detection matrices. Floristic composition analysis is conducted to assess the vegetation patterns. Question Addressed: The study addresses the question of how lake level changes and human activities impact the environmental health and biodiversity of Lake Chamo. It also explores the potential opportunities and threats related to water utilization and waste management. Conclusion: The study recommends the implementation of spatially oriented environmental planning to ensure the sustainable utilization and maintenance of Lake Chamo's biodiversity resources. It emphasizes the need for proper waste management, improved irrigation facilities, and a buffer zone with specific vegetation patterns to restore and protect the lake outskirts.
Keywords: buffer zone, geo-spatial, Lake Chamo, lake level changes, sustainable utilization
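A minimal Python sketch of a change detection matrix built by cross-tabulating two classified land-cover rasters; the class codes and toy arrays are illustrative stand-ins for the study's imagery:

```python
import numpy as np
import pandas as pd

# Classified land-cover rasters on the same grid for two dates (toy data;
# e.g. 0=water, 1=herbaceous, 2=legume, 3=bush-shrub)
lc_t1 = np.random.randint(0, 4, (100, 100))
lc_t2 = np.random.randint(0, 4, (100, 100))

matrix = pd.crosstab(pd.Series(lc_t1.ravel(), name="date 1"),
                     pd.Series(lc_t2.ravel(), name="date 2"))
print(matrix)   # rows: original class; columns: class it changed into
```

Off-diagonal cells of such a matrix quantify conversions between classes (for example, open water becoming exposed shore as the lake shrinks), which is the information the planning recommendations draw on.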
Procedia PDF Downloads 87
296 Exploring Type V Hydrogen Storage Tanks: Shape Analysis and Material Evaluation for Enhanced Safety and Efficiency Focusing on Drop Test Performance
Authors: Mariam Jaber, Abdullah Yahya, Mohammad Alkhedher
Abstract:
The shift toward sustainable energy solutions increasingly focuses on hydrogen, recognized for its potential as a clean energy carrier. Despite its benefits, hydrogen storage poses significant challenges, primarily due to its low energy density and high volatility. Among the various solutions, pressure vessels designed for hydrogen storage range from Type I to Type V, each tailored for specific needs and benefits. Notably, Type V vessels, with their all-composite, liner-less design, significantly reduce weight and costs while optimizing space and decreasing maintenance demands. This study focuses on optimizing Type V hydrogen storage tanks by examining how different shapes affect performance in drop tests, a crucial aspect of achieving ISO 15869 certification. This certification ensures that if a tank is dropped, it will fail in a controlled manner, ideally by leaking before bursting. While cylindrical vessels are predominant in mobile applications due to their manufacturability and efficient use of space, spherical vessels offer superior stress distribution and require significantly less material thickness for the same pressure tolerance, making them advantageous for high-pressure scenarios. However, spherical tanks are less efficient in terms of packing and more complex to manufacture. Additionally, this study introduces toroidal vessels to assess their performance relative to the more traditional shapes, noting that the toroidal shape offers a more space-efficient option. The research evaluates how different shapes (spherical, cylindrical, and toroidal) affect drop test outcomes when combined with various composite materials and layup configurations. The ultimate goal is to identify optimal vessel geometries that enhance the safety and efficiency of hydrogen storage systems. For our materials, we selected high-performance composites such as Carbon T-700/Epoxy, Kevlar/Epoxy, E-Glass Fiber/Epoxy, and Basalt/Epoxy, configured in various orientations like [0,90]s, [45,-45]s, and [54,-54]. Our tests involved dropping tanks from different angles (horizontal, vertical, and 45 degrees) with an internal pressure of 35 MPa to replicate real-world scenarios as closely as possible. We used finite element analysis and first-order shear deformation theory, conducting tests with the Abaqus Explicit Dynamics software, which is ideal for handling the quick, intense stresses of an impact. The results from these simulations will provide valuable insights into how different designs and materials can enhance the durability and safety of hydrogen storage tanks. Our findings aim to guide future designs, making them more effective at withstanding impacts and safer overall. Ultimately, this research will contribute to the broader field of lightweight composite materials and polymers, advancing more innovative and practical approaches to hydrogen storage. By refining how we design these tanks, we are moving toward more reliable and economically feasible hydrogen storage solutions, further emphasizing hydrogen's role in the landscape of sustainable energy carriers.
Keywords: hydrogen storage, drop test, composite materials, type V tanks, finite element analysis
Procedia PDF Downloads 46
295 Alkali Activated Materials Based on Natural Clay from Raciszyn
Authors: Michal Lach, Maria Hebdowska-Krupa, Justyna Stefanek, Artur Stanek, Anna Stefanska, Janusz Mikula, Marek Hebda
Abstract:
Limited resources of raw materials determine the necessity of obtaining materials from other sources. In this area, the best known and most widespread are recycling processes, which are mainly focused on the reuse of material. Another possible solution used in various companies to achieve improvement in sustainable development is waste-free production. It involves production exclusively from materials whose waste is included in the group of renewable raw materials. This means that they can: (i) be recycled directly during the manufacturing process of further products or (ii) serve as raw material obtained by other companies for the production of alternative products. The article presents the possibility of using post-production clay from the Jurassic limestone deposit "Raciszyn II" as a raw material for the production of alkali activated materials (AAM). Such products are currently increasingly used, mostly in various building applications. However, their final properties depend significantly on many factors, the most important of which are: chemical composition of the raw material, particle size, specific surface area, type and concentration of the activator, and the temperature range of the heat treatment. The mineralogical and chemical analyses of clay from the "Raciszyn II" deposit confirmed that this material, due to its high content of aluminosilicates, can be used as a raw material for the production of AAM. In order to obtain the product with the best properties, optimization of the clay calcining process was also carried out. Based on the obtained results, it was found that this process should occur between 750 °C and 800 °C. Using a lower temperature yields a raw material with low metakaolin content, which is the main component of materials suitable for alkaline activation. On the other hand, higher heat treatment temperatures cause thermal dissociation of large amounts of calcite, which is associated with the release of large amounts of CO2 and the formation of calcium oxide. This compound significantly accelerates the binding process, which consequently often prevents the correct formation of the geopolymer mass. The effect of various activators, (i) NaOH, (ii) KOH, and (iii) mixtures of KOH and NaOH in ratios of 10%, 25%, and 50% by volume, on the compressive strength of the AAM was also analyzed. The obtained results, depending on the activator used, were in the range of 25 MPa to 40 MPa. These values are comparable with the results obtained for materials produced on the basis of Portland cement, which is one of the most popular building materials.
Keywords: alkaline activation, aluminosilicates, calcination, compressive strength
Procedia PDF Downloads 153
294 The Study of Intangible Assets at Various Firm States
Authors: Gulnara Galeeva, Yulia Kasperskaya
Abstract:
The study deals with a relevant problem: the formation of an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the enterprise's state can affect the content and importance of intangible assets for the enterprise's income, which affects the accuracy of the calculations. In order to study this problem, the research was divided into several stages. In the first stage, intangible assets were classified based on their synergies as the underlying intangibles and the additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets significantly depends on how close the enterprise is to crisis. In the third stage, the authors developed and applied the Wide Pairwise Comparison Matrix method. This established that using the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets makes it possible to estimate the probability of a full-blown crisis of the enterprise. The authors identified a criterion which allows making fundamental decisions on investment feasibility. The study also developed an additional rapid method of assessing the enterprise's overall status based on a questionnaire survey with its director; the questionnaire consists of only two questions. The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and also to identify a range of universal ways of overcoming the crisis. It was outlined that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The obtained results offer managers and business owners a simple and affordable method of investment portfolio optimization, which takes into account how close the enterprise is to a full-blown crisis.
Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix
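A minimal Python sketch of the pairwise-comparison step: derive a priority vector of intangible assets from a comparison matrix (the principal-eigenvector method familiar from the analytic hierarchy process) and take the ratio of standard deviation to mean as the proposed crisis indicator; the matrix values are illustrative:

```python
import numpy as np

# Illustrative pairwise comparison matrix for four intangible assets
# (entry [i, j] = how much asset i dominates asset j)
A = np.array([[1.0, 3.0, 5.0, 1.0],
              [1/3, 1.0, 2.0, 1/2],
              [1/5, 1/2, 1.0, 1/4],
              [1.0, 2.0, 4.0, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                      # priority vector of the intangible assets
cv = w.std() / w.mean()              # dispersion used as the crisis indicator
print("priorities:", np.round(w, 3), " std/mean:", round(cv, 3))
```

Under the paper's hypothesis, a larger dispersion of the priority vector would signal a less organized asset structure and hence a higher probability of crisis; the threshold itself is for the study's criterion to supply.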
Procedia PDF Downloads 208
293 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)
Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy
Abstract:
The drug discovery and development process is generally known to be lengthy and labor-intensive. Therefore, in order to be able to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) is able to alleviate this issue by applying computational power to streamline the whole drug discovery process, starting from target identification to lead optimization. This drug design approach can be predominantly applied to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug-resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium's metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB), and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from the AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them based on their binding affinities. Compounds which showed better binding affinities than LipB's known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 showed better binding energies than decanoic acid. Furthermore, 12 of these 112 compounds showed highly promising ADMET and TOPKAT properties. Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT
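A minimal Python sketch of the post-docking triage described above; the results table, column names, and pass flags are hypothetical stand-ins for the actual screening output:

```python
import pandas as pd

hits = pd.read_csv("docking_scores.csv")           # hypothetical results table
# Reference: binding energy of the known inhibitor, decanoic acid
ref = hits.loc[hits["compound"] == "decanoic_acid", "binding_energy"].item()

# More negative binding energy = tighter predicted binding
better = hits[hits["binding_energy"] < ref]
leads = better[better["admet_pass"] & better["topkat_pass"]]
print(len(better), "beat the reference;", len(leads), "also pass ADMET/TOPKAT")
```

This two-stage filter mirrors the funnel in the abstract: 31,692 docked compounds, 112 beating decanoic acid, and 12 surviving the in silico toxicity screens.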
Procedia PDF Downloads 424
292 Experimental Design in Extraction of Pseudomonas sp. Protease from Fermented Broth by Polyethylene Glycol/Citrate Aqueous Two-Phase System
Authors: Omar Pillaca-Pullo, Arturo Alejandro-Paredes, Carol Flores-Fernandez, Marijuly Sayuri Kina, Amparo Iris Zavaleta
Abstract:
Aqueous two-phase systems (ATPS) are an interesting alternative for separating industrial enzymes because they are easy to scale up and low in cost. Polyethylene glycol (PEG) mixed with potassium phosphate or magnesium sulfate is one of the most frequently used polymer/salt ATPS, but a consequence of its use is a high concentration of phosphates and sulfates in wastewater, causing environmental issues. Citrate could replace these inorganic salts because it is biodegradable and does not produce toxic compounds. On the other hand, statistical design of experiments is widely used for ATPS optimization; it allows studying the effects of the variables involved in the purification and estimating their significant effects on selected responses and interactions. A 2⁴ factorial design with four central points (20 experiments) was employed to study the partition and purification of proteases produced by Pseudomonas sp. in a PEG/citrate ATPS. The ATPS was prepared with different sodium citrate concentrations [14, 16 and 18% (w/w)], pH values (7, 8 and 9), PEG molecular weights (2,000; 4,000 and 6,000 g/mol) and PEG concentrations [18, 20 and 22% (w/w)]. All system components were mixed with 15% (w/w) of the fermented broth, and deionized water was added to a final weight of 12.5 g. The systems were then mixed and kept at room temperature until two-phase separation was reached. The volumes of the top and bottom phases were measured, and aliquots from both phases were collected for subsequent proteolytic activity and total protein determination. The influence of variables such as PEG molar mass (MPEG), PEG concentration (CPEG), citrate concentration (CSal) and pH was evaluated on the following responses: purification factor (PF), activity yield (Y), partition coefficient (K) and selectivity (S). STATISTICA version 10 was used for the analysis. According to the obtained results, higher levels of CPEG and MPEG had a positive effect on extraction, while pH did not influence the process. On the other hand, CSal could be related to low values of Y because citrate ions have a negative effect on solubility and enzymatic structure. The optimum values of Y (66.4%), PF (1.8), K (5.5) and S (4.3) were obtained at CSal (18%), MPEG (6,000 g/mol), CPEG (22%) and pH 9. These results indicate that the PEG/citrate system is suitable for purifying these Pseudomonas sp. proteases from fermented broth as a first purification step.
Keywords: citrate, polyethylene glycol, protease, Pseudomonas sp.
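A minimal Python sketch generating the 2⁴ factorial design with four central points (20 runs) from the factor levels quoted in the abstract:

```python
from itertools import product

# Low/high levels per factor; the central point is the midpoint of each
factors = {
    "citrate_%": (14, 18),        # centre = 16
    "pH":        (7, 9),          # centre = 8
    "PEG_MW":    (2000, 6000),    # centre = 4000
    "PEG_%":     (18, 22),        # centre = 20
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
centre = {name: sum(levels) / 2 for name, levels in factors.items()}
runs += [centre] * 4              # four central points -> 16 + 4 = 20 runs
print(len(runs), "experiments;", "first run:", runs[0])
```

The replicated central points are what allow the design to estimate pure experimental error and check for curvature in the response surfaces (PF, Y, K, S).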
Procedia PDF Downloads 194
291 Impact of Six-Minute Walk or Rest Break during Extended Gameplay on Executive Function in First Person Shooter Esport Players
Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa
Abstract:
Background: Guidelines for the maintenance of health of esports players and the cognitive changes that accompany competitive gaming are understudied. Executive functioning is an important cognitive skill for an esports player. The relationship between executive functions and physical exercise has been well established; however, the effects of prolonged sitting, regardless of physical activity level, have not been established. Prolonged uninterrupted sitting reduces cerebral blood flow, and reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease in cerebral blood flow has been shown to be offset by frequent, short walking breaks, which can be as brief as 2 minutes at the 30-minute mark and 6 minutes following 60 minutes of prolonged sitting; the rationale is the increase in blood flow and its positive effects on metabolic responses. The primary purpose of this study was to evaluate executive function changes following 6-minute bouts of walking or complete rest mid-session, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: This study was conducted virtually due to the COVID-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS participants gave written consent to participate in this randomized pilot study. All participants held a gold ranking or higher. Participants were asked to play for 2 hours on three separate days. Outcome measures to test executive function included the Color Stroop and the Tower of London tests, which were administered online each day prior to gaming and at the completion of gaming. All participants completed the tests beforehand for familiarization. One day of testing consisted of a 6-minute walk break after 60-75 minutes of play, with the rating of perceived exertion (RPE) recorded; the participant then played for another 60-75 minutes and completed the tests again. On another day, the participants repeated the same methods, replacing the 6-minute walk with lying down and resting for 6 minutes. On the last day, the participant played continuously with no break for 2 hours and repeated the outcome tests pre and post play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest change in mean reaction time for incorrect congruent trials from pre to post play was seen following the 6-minute walk (662.0 (609.6) ms pre to 602.8 (539.2) ms post), followed by the 6-minute rest condition (681.7 (618.1) ms pre to 666.3 (607.9) ms post), with minimal change in the continuous condition (594.0 (534.1) ms pre to 589.6 (552.9) ms post). The mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition being slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study demonstrated that a low-intensity 6-minute walk break, following 60 minutes of play, may also improve executive function in FPS gamers.
Keywords: executive function, FPS, physical activity, prolonged sitting
Procedia PDF Downloads 228
290 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their location and signals from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame through the unification of the relative coordinates of individual robots when within communication range, therefore making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy walk random exploration until it comes into range with other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas already explored, information about the surroundings, and target signals from their locations to make decisions about their future movement based on the search algorithm. During the process of exploration, there can be several small groups of robots having their own coordinate systems, but eventually all the robots are expected to come under one global coordinate frame, where they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that, running our modified Particle Swarm Optimization (PSO) without global information, we can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
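A minimal 2D Python sketch of the frame-unification step: once robot A measures robot B's range, bearing, and relative heading, B's locally mapped points can be re-expressed in A's frame by a rigid transform; the measurement model and names are illustrative:

```python
import numpy as np

def unify_frame(range_ab, bearing_ab, heading_b_in_a, points_b):
    """Map points from robot B's frame into robot A's frame.

    range_ab, bearing_ab: range and bearing of B as seen by A's sensors.
    heading_b_in_a: orientation of B's x-axis expressed in A's frame.
    points_b: (N, 2) array of positions expressed in B's frame.
    """
    # Translation: B's origin in A's frame
    t = range_ab * np.array([np.cos(bearing_ab), np.sin(bearing_ab)])
    # Rotation from B's frame to A's frame
    c, s = np.cos(heading_b_in_a), np.sin(heading_b_in_a)
    R = np.array([[c, -s], [s, c]])
    return points_b @ R.T + t            # rotate, then translate

explored_b = np.array([[1.0, 0.0], [2.0, 1.5]])   # B's explored waypoints
print(unify_frame(5.0, np.pi / 4, np.pi / 2, explored_b))
```

Applying this transform to everything B has recorded (explored cells, target signal locations) makes B's map directly usable in A's frame, which is the unification the algorithm performs whenever groups meet.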
Procedia PDF Downloads 137
289 Variable Renewable Energy Droughts in the Power Sector - A Model-Based Analysis and Implications in the European Context
Authors: Martin Kittel, Alexander Roth
Abstract:
The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. Power sectors become increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaics, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability ('VRE droughts'), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector. Energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, the authors provide a selection of relevant historical weather years representing different grades of VRE drought severity. These weather years serve as input for the capacity expansion model of the European power sector used in this analysis (DIETER). We additionally conduct robustness checks, varying policy-relevant assumptions on capacity expansion limits, interconnections, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts and flaw modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility
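A minimal Python sketch of how a VRE drought can be flagged in an hourly capacity-factor series; the threshold and minimum duration are illustrative choices, not the criteria used in the underlying drought-identification work:

```python
import numpy as np

def find_droughts(cf, threshold=0.1, min_hours=24):
    """Return (start_hour, length) of runs where the combined VRE
    capacity factor stays below `threshold` for at least `min_hours`."""
    below = cf < threshold
    droughts, start = [], None
    for t, b in enumerate(below):
        if b and start is None:
            start = t                              # run begins
        elif not b and start is not None:
            if t - start >= min_hours:
                droughts.append((start, t - start))
            start = None                           # run ends
    if start is not None and len(below) - start >= min_hours:
        droughts.append((start, len(below) - start))
    return droughts

cf = np.clip(np.random.beta(2, 5, 8760), 0, 1)     # toy hourly wind+PV series
print(find_droughts(cf))
```

Ranking weather years by the frequency and duration of such runs is one way to select the severity-graded years that feed the capacity expansion model.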
Procedia PDF Downloads 72
288 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling
Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer
Abstract:
The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor, and the corresponding changes in metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the CO₂ mass transfer coefficient (kLa) and the air holdup increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. These results form a solid basis for optimizing the ambr15™, using both CFD and DFB modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor
Procedia PDF Downloads 282
287 Dual Challenges in Host State Regulation on Transnational Corporate Damages: China's Dilemma and Breakthrough
Authors: Xinchao Liu
Abstract:
Regulating environmental and human rights damages caused by transnational corporations (TNCs) in host States is a core issue in the business and human rights discourse. In current regulatory practice, host States, which are territorially based and should bear primary regulatory responsibility, face dual challenges at both the domestic and international levels, leading to their continued marginalization. Specifically, host States as TNC damage regulators are constrained domestically by territorial jurisdiction limitations and internationally by the neoliberal international economic order exemplified by investment protection mechanisms. Taking China as a sample: it currently lacks a comprehensive regulatory system to address TNC damages. While its domestic constraints manifest as the marginalization of judicial regulation, the absence of a corporate duty of care, and inadequate extraterritorial regulatory effectiveness, the international constraints are reflected in the absence of foreign investor obligations in investment agreements and the asymmetry of dispute resolution clauses, challenging regulatory sovereignty. As China continues to advance its policy of high-quality opening up, the risks of negative externalities from transnational capital will continue to increase, necessitating a focus on building and perfecting a regulatory mechanism for TNC damages within the framework of international law. To address the domestic constraints, it is essential to clarify the division of regulatory responsibilities between judicial and administrative bodies, promote the normalization of judicial regulation, and enhance judicial oversight of governmental settlements. Improving the choice-of-law rules for cross-border torts and the standards for parent company liability for omissions, and enhancing extraterritorial judicial effectiveness through transnational judicial dialogue and cooperation mechanisms, are also crucial. To counteract the international constraints, specifying investor obligations in investment treaties and designing symmetrical dispute resolution clauses are indispensable to eliminate regulatory chill. Additionally, actively advancing the implementation of TNC obligations in business and human rights treaty negotiations will lay an international legal foundation for the regulatory sovereignty of host States.
Keywords: transnational corporate damages, home state litigation, optimization limit, investor-state dispute settlement
Procedia PDF Downloads 10
286 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of the proposed models for optimizing a life-cycle analysis into an explicit strategy for evaluation systems. The main categories, which inevitably introduce uncertainties, take up a composite structure model (CSM) as environmental management systems (EMSs) in the practical science of evaluating small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how natural systems' inputs, outputs, and outcomes influence "framework measures" and gives a maximum likelihood estimation of how elements are simulated over the composite structure. The traditional knowledge of modeling is based on physical dynamic and static patterns regarding the parameters that influence the environment. It unifies methods to demonstrate how construction systems ecology is interrelated, from a management perspective, in a procedure that reflects the effects of engineering systems on ecology as ultimately unified technologies across an extensive range beyond construction impacts, such as energy systems. Sustainability broadens socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering limitation focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions or aggregations of structure systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates the engineering and management scheme as a midpoint stage between biotic and abiotic components to predict construction impacts. The resulting theory of environmental obligation suggests either a procedure or a technique achieved in the sustainability impact of construction system ecology (SICSE), ultimately as a relative mitigation measure of deviation control.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 393285 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization
Authors: Jessica Gu, Yu Chen
Abstract:
Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are the organization's primary concerns from a problem-solving perspective; however, the optimal distribution of effort between knowledge creation and diffusion is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming at elucidating how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and exploring which exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments is conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing creation versus diffusion. One of the interesting findings reveals a non-monotonic effect on organizational performance under a turbulent environment but a monotonic effect under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous creation-diffusion choice adjustments can be identified for achieving optimized organizational performance. Additional influential variables are further discussed, and future work directions are finally elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions pose challenges to the organizational system. Meanwhile, it serves as a roadmap and offers macro-level, long-term insights to policy makers without interrupting real organizational operation, sacrificing huge overhead costs, or introducing undesired panic among employees.Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution
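The abstract does not publish the model's update rules, so the following is only a minimal agent-based sketch of the kind of dynamics described: agents split effort between creating knowledge and diffusing it to random peers, while an optional turbulence parameter randomly devalues accumulated knowledge. All parameter values and rules here are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Minimal illustrative sketch (NOT the authors' model): N knowledge workers
# each hold a knowledge stock and a propensity p to spend effort on creation
# rather than diffusion. Turbulence randomly devalues accumulated knowledge.
rng = np.random.default_rng(0)

def simulate(n_agents=50, steps=500, turbulence=0.0):
    knowledge = rng.uniform(0.0, 1.0, n_agents)   # individual knowledge stocks
    p_create = rng.uniform(0.0, 1.0, n_agents)    # assumed effort split per agent
    performance = []
    for _ in range(steps):
        creates = rng.random(n_agents) < p_create
        # Creation: small private gain for creating agents.
        knowledge[creates] += 0.05
        # Diffusion: non-creating agents average their stock with a random peer.
        diffusers = np.where(~creates)[0]
        peers = rng.integers(0, n_agents, diffusers.size)
        knowledge[diffusers] = 0.5 * (knowledge[diffusers] + knowledge[peers])
        # Turbulent environment: occasional shocks devalue existing knowledge.
        if turbulence > 0 and rng.random() < turbulence:
            knowledge *= rng.uniform(0.5, 0.9)
        performance.append(knowledge.sum())       # macro performance proxy
    return performance

stable = simulate(turbulence=0.0)
turbulent = simulate(turbulence=0.1)
print(f"final performance: stable={stable[-1]:.1f}, turbulent={turbulent[-1]:.1f}")
```

Sweeping `p_create` and `turbulence` in such a toy model reproduces the qualitative question the study asks: which creation-diffusion split maximizes macro performance under each environmental regime.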
Procedia PDF Downloads 241284 Optimization of Culture Conditions of Paecilomyces Tenuipes, Entomopathogenic Fungi Inoculated into the Silkworm Larva, Bombyx Mori
Authors: Sung-Hee Nam, Kwang-Gill Lee, You-Young Jo, HaeYong Kweon
Abstract:
Entomopathogenic fungi of the genus Cordyceps are isolated from dead silkworms and cicadas. Fungi on cicadas were described in old Chinese medicinal books, and from ancient times vegetable wasps and plant worms were widely known to contain active substances and have been studied for pharmacological use. Among the many fungi belonging to the genus Cordyceps, Cordyceps sinensis has been demonstrated to yield natural products possessing various biological activities and many bioactive components. It is commonly used to replenish the kidney, soothe the lung, and treat fatigue. Due to their commercial and economic importance, the demand for Cordyceps has increased rapidly. However, the supply of Cordyceps specimens could not meet the increasing demand because of the sole dependence on field collection and habitat destruction; it is difficult to obtain many insect hosts in nature, and the edibility of the host insect needs to be verified from a pharmacological perspective. Recently, this setback was overcome when P. tenuipes was cultivated on a large scale using the silkworm as host. Pharmacological effects of P. tenuipes cultured on silkworm, such as strengthening immune function, anti-fatigue and anti-tumor activity, and liver regulation, have been proved, and the products are widely commercialized. In this study, we attempted to establish a method for the stable growth of P. tenuipes on silkworm hosts and an optimal condition for synnemata formation. To determine the optimum culturing conditions, temperature and light conditions were varied. The length and number of synnemata were highest at 25 ℃ and 100-300 lux illumination. On average, the synnemata of wild P. tenuipes measure 70 mm in length and 20 in number; those of the cultured strain were relatively shorter and more numerous. The number of synnemata may have increased as a result of inoculating the host with highly concentrated conidia, while the length may have decreased due to limited nutrition per individual. It is notable that changes in light illumination cause morphological variations in the synnemata. However, regulation of light and temperature alone could not produce stromata bearing perithecia, asci, and ascospores. Yamanaka reported that although a complete fruiting body can be produced under optimal culture conditions, it should be regarded as synnemata because it does not develop into an ascoma bearing ascospores.Keywords: Paecilomyces tenuipes, entomopathogenic fungi, silkworm larva, Bombyx mori
Procedia PDF Downloads 320283 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; complex freeform surfaces are thus not addressed in this context. The need for efficient use of resources and near-net-shape production will require the individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes of operations in computer-aided process planning (CAPP) to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g., process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes; especially in application domains like the aerospace industry, standardization and certification of processes are important aspects. Function blocks, which provide a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and executing inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, is carried out by a function block. One advantage of this approach is its flexibility in designing workflows and adapting algorithms to the specific application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g., design data. Furthermore, appropriate logics and decision criteria have to be considered for the different product lifecycle phases; for example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they need to support the exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and maintenance processes; in both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.Keywords: adaptive, CAx, function blocks, turbomachinery
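The abstract describes the architecture (event-driven function blocks chained into an inspection workflow) but no concrete interface, so the following is a speculative Python sketch of how such blocks might be composed. The class names and the reduction of the geometric check to a simple tolerance comparison are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of a function-block inspection workflow. Block names
# and the simplistic tolerance check are illustrative assumptions.
class FunctionBlock:
    def execute(self, data: dict) -> dict:
        raise NotImplementedError

class AlignMeasurementData(FunctionBlock):
    """Coarsely register measured points to the nominal model (centroid shift)."""
    def execute(self, data):
        shift = data["nominal"].mean(axis=0) - data["measured"].mean(axis=0)
        data["aligned"] = data["measured"] + shift
        return data

class CheckGeometricTolerance(FunctionBlock):
    """Flag the preform as adaptable if all deviations stay within tolerance."""
    def __init__(self, tolerance_mm):
        self.tolerance_mm = tolerance_mm   # differs for new parts vs. repair
    def execute(self, data):
        deviations = np.linalg.norm(data["aligned"] - data["nominal"], axis=1)
        data["adaptable"] = bool((deviations <= self.tolerance_mm).all())
        return data

def run_workflow(blocks, data):
    for block in blocks:            # event-driven in spirit: each block fires
        data = block.execute(data)  # when the previous one delivers its data
    return data

nominal = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
measured = nominal + 0.05          # toy preform measurement, 0.05 mm offset
result = run_workflow(
    [AlignMeasurementData(), CheckGeometricTolerance(tolerance_mm=0.2)],
    {"nominal": nominal, "measured": measured},
)
print("adaptive machining possible:", result["adaptable"])
```

The appeal of the pattern is visible even at this scale: the same workflow can serve repair instead of new-part production simply by swapping the tolerance block's parameters.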
Procedia PDF Downloads 297282 Processes and Application of Casting Simulation and Its Software’s
Authors: Surinder Pal, Ajay Gupta, Johny Khajuria
Abstract:
Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. The flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is widely used to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; MAGMA, for example, is regarded as best for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution and an advanced tool that is the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.Keywords: casting simulation software, simulation techniques, casting simulation, processes
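As a toy illustration of the numerical core such packages rely on, the sketch below solves 1D transient heat conduction in a cooling casting section with an explicit finite-difference scheme. The material values and boundary conditions are rough assumptions for a steel-like melt, latent heat of fusion is neglected, and none of this reflects the named packages' actual solvers.

```python
import numpy as np

# Minimal 1D explicit finite-difference cooling sketch (illustrative only):
# dT/dt = alpha * d2T/dx2 across a 50 mm casting section in a cold mold.
# Latent heat of fusion is neglected in this toy model.
alpha = 1e-5          # assumed thermal diffusivity, m^2/s (steel-like)
L, n = 0.05, 51       # section thickness (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha      # respects the stability limit dt <= dx^2/(2*alpha)
T = np.full(n, 1550.0)        # assumed initial melt temperature, deg C
T_mold = 200.0                # assumed mold wall temperature, deg C
T_solidus = 1450.0            # assumed solidus temperature, deg C

t = 0.0
while T.max() > T_solidus:    # run until the whole section drops below solidus
    T[0] = T[-1] = T_mold     # idealized perfect mold contact at both faces
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt
print(f"approximate solidification time: {t:.1f} s")
```

Commercial tools extend this idea to 3D geometry, coupled fluid flow during filling, latent-heat release, and defect criteria, which is precisely where their value over hand calculations lies.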
Procedia PDF Downloads 475281 New Recombinant Netrin-a Protein of Lucilia Sericata Larvae by Bac to Bac Expression Vector System in Sf9 Insect Cell
Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard
Abstract:
Background: Maggot debridement therapy is an appropriate, effective, and controlled method using sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is a protein of the laminin family, secreted from the salivary glands of L. sericata, with a central role in neural regeneration and angiogenesis. This study aimed at the production of a new recombinant Netrin-A protein of L. sericata larvae with the baculovirus expression vector system (BEVS) in Sf9 cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with the BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was purified on Ni-NTA agarose. The protein was evaluated by SDS-PAGE and western blot, and its concentration was calculated with the Bradford assay. Results: The bacmid vector structure with Netrin-A was successfully constructed and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein was 52 kDa, with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to the DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the Netrin-A recombinant protein was calculated to be 48.8 μg/ml. It was confirmed that the gene characterized in our previous study encodes the L. sericata Netrin-A protein. Conclusions: We report the successful generation of recombinant Netrin-A, a protein secreted in the salivary glands of L. sericata, whose larvae are used in larval therapy. The findings of the present study could therefore be useful to researchers in future studies on wound healing.Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9
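One small in silico step of the kind used before directional cloning, checking that the insert carries no internal BamHI or EcoRI site, can be reproduced with Biopython. The sequence below is a made-up placeholder, not the actual Netrin-A gene, so the output is purely illustrative.

```python
from Bio.Seq import Seq
from Bio.Restriction import BamHI, EcoRI

# Placeholder insert sequence (NOT the real L. sericata Netrin-A gene).
# Before directional cloning into pFastBac HTA via BamHI/EcoRI, one would
# verify that the coding sequence contains no internal site for either enzyme.
insert = Seq("ATGGCTAAGCTTGGATTTCCAGTTACAGAACGTGACCTTAAAGGTGTTCCATGA")

for enzyme in (BamHI, EcoRI):
    sites = enzyme.search(insert)  # 1-based cut positions, empty list if none
    status = "no internal site" if not sites else f"cuts at {sites}"
    print(f"{enzyme}: {status}")
```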
Procedia PDF Downloads 93280 Modeling of an Insulin Mircopump
Authors: Ahmed Slami, Med El Amine Brixi Nigassa, Nassima Labdelli, Sofiane Soulimane, Arnaud Pothier
Abstract:
Many people suffer from diabetes, a disease marked by abnormal levels of sugar in the blood; 285 million people, 6.6% of the world's adult population, had diabetes in 2010 according to the International Diabetes Federation. Insulin medication is designed to be injected into the body. Generally, the injection requires the patient to administer it manually; however, in many cases the patient will be unable to inject the drug, given that among the side effects of hyperglycemia is weakness of the whole body. Researchers have therefore designed medical devices that inject insulin autonomously using micro-pumps. Many micro-pump concepts have been investigated during the last two decades for injecting molecules into the blood or the body. However, all these micro-pumps are intended for slow drug infusion (injection of a few microliters per minute). Now, the challenge is to develop micro-pumps for fast injections (1 microliter in 10 seconds) with an accuracy of the order of a microliter. Recent studies have shown that only piezoelectric actuators can achieve this performance, although few systems at the microscopic level have been presented. These reasons lead us to design new smart microsystems for drug injection. Many technological advances are still required, from the improvement of materials to their use, through their characterization and the modeling of the actuation mechanisms themselves. Moreover, the integration of the piezoelectric micro-pump into the microfluidic platform remains to be studied in order to explore and evaluate the performance of these new micro-devices. In this work, we propose a new micro-pump model based on piezoelectric actuation with a new design. Here, we use a finite element model built with the COMSOL software. Our device is composed of two pumping chambers, two diaphragms, and two actuators (piezoelectric disks). The actuators apply a periodic mechanical force on the membranes, whose deformation drives the pumping: the suction and discharge of the liquid. In this study, we present the modeling results as a function of the device geometry, film thicknesses, and material properties, and demonstrate that fast injection can be achieved. The results of these simulations provide quantitative performance figures for our micro-pumps concerning the actuation and fluid flow rate, and allow optimization of the fabrication process in terms of materials and integration steps.Keywords: COMSOL software, piezoelectric, micro-pump, microfluidic
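Before any finite element run, the feasibility of the 1 µL-in-10 s target can be checked with a back-of-envelope stroke-volume estimate. The membrane radius, actuation frequency, and deflection profile below are assumptions for illustration, not the authors' design values.

```python
import math

# Back-of-envelope check of the fast-injection target (all values assumed).
# For a clamped circular membrane with profile w(r) = w0*(1-(r/a)^2)^2, the
# displaced volume per stroke integrates to V = pi * a^2 * w0 / 3.
a = 2.5e-3                    # membrane radius, m (assumed)
f = 10.0                      # actuation frequency, Hz (assumed)
target_flow = 1e-9 / 10.0     # 1 microliter in 10 s -> 1e-10 m^3/s

stroke_volume = target_flow / f            # required volume per stroke, m^3
w0 = 3 * stroke_volume / (math.pi * a**2)  # required centre deflection, m

print(f"stroke volume    : {stroke_volume * 1e9:.3f} mm^3")
print(f"centre deflection: {w0 * 1e6:.2f} micrometers")
```

Under these assumptions the required centre deflection comes out around 1.5 µm per stroke, which is within the usual reach of piezoelectric disk actuators and is consistent with the abstract's claim that piezoelectric actuation can meet the fast-injection target.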
Procedia PDF Downloads 342279 Removal of Chromium by UF5kDa Membrane: Its Characterization, Optimization of Parameters, and Evaluation of Coefficients
Authors: Bharti Verma, Chandrajit Balomajumder
Abstract:
Water pollution has escalated owing to industrialization and the indiscriminate discharge of toxic heavy metal ions from the semiconductor, electroplating, metallurgical, mining, chemical manufacturing, and tannery industries. In the semiconductor industry, various kinds of chemicals are used in wafer preparation, and fluoride, toxic solvents, heavy metals, dyes and salts, suspended solids, and chelating agents may be found in the wastewater effluent of semiconductor manufacturing. Likewise, in chrome plating within the electroplating industry, the effluent contains large amounts of chromium. Since Cr(VI) is highly toxic, exposure to it poses an acute health risk, and chronic exposure can even lead to mutagenesis and carcinogenesis. By contrast, the naturally occurring Cr(III) is much less toxic than Cr(VI). The discharge limits of hexavalent and trivalent chromium are 0.05 mg/L and 5 mg/L, respectively. There are numerous methods, such as adsorption, chemical precipitation, membrane filtration, ion exchange, and electrochemical methods, for heavy metal removal. The present study focuses on the removal of chromium ions using a flat-sheet UF5kDa membrane. The ultrafiltration membrane process is operated above the microfiltration process, so the separation achieved may be influenced by both sieving and the Donnan effect. Ultrafiltration is a promising method for the rejection of heavy metals like chromium, fluoride, cadmium, nickel, and arsenic from effluent water. Its benefits are that the operation is quite simple, the removal efficiency is high compared to some other removal methods, and it is reliable. Polyamide membranes have been selected for the present study on the rejection of Cr(VI) from feed solution. The objective of the current work is to examine the rejection of Cr(VI) from aqueous feed solutions by flat-sheet UF5kDa membranes under different parameters such as pressure, feed concentration, and pH of the feed. The experiments revealed that the removal efficiency of Cr(VI) increases with increasing pressure. The effects of the pH of the feed solution and the initial chromium dosage in the feed have also been studied. The membrane has been characterized by FTIR, SEM, and AFM before and after the runs. The mass transfer coefficients have been estimated, and the membrane transport parameters have been calculated and found to be in good correlation with the applied model.Keywords: heavy metal removal, membrane process, waste water treatment, ultrafiltration
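The abstract does not give its raw data, so the sketch below only illustrates how the basic rejection and flux quantities behind such an analysis are computed, together with a film-model correction for concentration polarization. Every numerical value (concentrations, membrane area, mass transfer coefficient) is an assumed example, not a measurement from the study.

```python
import math

# Illustrative UF rejection/flux calculation (all input values assumed).
c_feed = 10.0        # feed Cr(VI) concentration, mg/L
c_permeate = 1.5     # permeate concentration, mg/L
volume = 0.5e-3      # permeate collected, m^3
area = 0.014         # membrane area, m^2
time = 3600.0        # collection time, s
k = 2.0e-5           # assumed mass transfer coefficient, m/s

Jv = volume / (area * time)          # permeate flux, m/s
R_obs = 1.0 - c_permeate / c_feed    # observed rejection

# Film model for concentration polarization:
# (1 - R_true)/R_true = (1 - R_obs)/R_obs * exp(-Jv/k).
ratio = (1.0 - R_obs) / R_obs * math.exp(-Jv / k)
R_true = 1.0 / (1.0 + ratio)

print(f"flux Jv = {Jv:.2e} m/s, observed rejection = {R_obs:.3f}, "
      f"true rejection = {R_true:.3f}")
```

With these example numbers the true (membrane) rejection exceeds the observed one, as expected: solute accumulating at the membrane surface raises the effective feed concentration that the membrane actually sees.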
Procedia PDF Downloads 139278 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅) BxCy Ceramics Produced by Spark Plasma Sintering
Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar
Abstract:
High entropy ceramic (HEC) materials are characterized by their compositional disorder, with different metallic element atoms occupying the cation positions and non-metal elements occupying the anion positions. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties, but only a few have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB₂ + 0.25 TiB₂), and (0.25 HfC + 0.25 ZrB₂ + 0.25 VB₂ + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000 °C with a holding time of 10 min, a uniaxial pressure of 50 MPa, and an Ar atmosphere. The sintered specimens formed two HEC phases, a Zr-Hf-rich FCC phase and a Ti-V-rich HCP phase, and both phases contained all the metallic elements at 5-50 at%. Phase quantification analysis of the XRD data revealed that the molar amount of the hexagonal phase increased with the mole fraction of borides in the starting powders, whereas that of the cubic FCC phase increased with the carbide content in the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC₀.₅ and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₁.₅C₀.₂₅ had 94.74% and 88.56% relative density, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₀.₅C₀.₇₅ presented the highest relative density of 95.99%, with a Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeds the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing less boron and more carbon presented superior properties even though the metallic composition in each phase was similar across the compositions investigated. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were thus successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fractions, the hardness of the phases, and the density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amounts of the non-metallic elements in the composition, and not only on the molar ratio between the metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on the optimization of the C:B ratio for further improvements in the properties of dual-phase high entropy ceramics.Keywords: high-entropy ceramics, borides, carbides, dual-phase
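Relative density figures like the 95.99% quoted above are obtained by dividing a measured (e.g., Archimedes) density by a rule-of-mixtures theoretical density of the phase mixture. The sketch below shows that arithmetic with invented phase fractions and densities for illustration only; none of the numbers come from the study.

```python
# Rule-of-mixtures relative density check (all numbers are illustrative
# assumptions, not values from the study).
phases = {
    # phase: (volume fraction, theoretical density in g/cm^3)
    "FCC (Zr-Hf rich carbide)": (0.55, 8.9),
    "HCP (Ti-V rich boride)":   (0.45, 5.1),
}

rho_theoretical = sum(frac * rho for frac, rho in phases.values())
rho_measured = 6.95   # assumed Archimedes density, g/cm^3

relative_density = rho_measured / rho_theoretical
print(f"theoretical density: {rho_theoretical:.2f} g/cm^3")
print(f"relative density   : {relative_density:.2%}")
```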
Procedia PDF Downloads 172277 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for them is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential to enable simulations such as flood simulation, energy simulation, etc. Making a replication of the physical model and updating its information in real time, to obtain its Digital Twin (DT), is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the types of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used to provide a 3D BIM model for the site and the existing buildings, based on the case study of the "Ecole Spéciale des Travaux Publiques (ESTP Paris)" school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters. In this work, the campus's precise levelling grid according to the NGF-IGN69 altimetric system and the grid control points according to the Réseau Géodésique Français (RGF93) - Lambert 93 French system are computed with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as the boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is modeled as well. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important; in this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.Keywords: building information modeling, digital terrain model, existing buildings, interoperability
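To make the DTM step concrete, the sketch below grids a handful of surveyed (easting, northing, height) points into a regular raster with SciPy. The coordinates are made-up placeholders in a Lambert-93-like range, not ESTP survey data.

```python
import numpy as np
from scipy.interpolate import griddata

# Toy DTM: interpolate scattered surveyed points (x, y, z) onto a regular
# grid. Coordinates are invented placeholders, not real ESTP campus data.
points = np.array([
    [651200.0, 6857100.0, 52.3],   # easting, northing, height above reference
    [651250.0, 6857100.0, 55.1],
    [651200.0, 6857150.0, 58.7],
    [651250.0, 6857150.0, 61.2],
    [651225.0, 6857125.0, 56.9],
])

xi = np.linspace(points[:, 0].min(), points[:, 0].max(), 20)
yi = np.linspace(points[:, 1].min(), points[:, 1].max(), 20)
grid_x, grid_y = np.meshgrid(xi, yi)

# Linear interpolation inside the convex hull of the surveyed points.
dtm = griddata(points[:, :2], points[:, 2], (grid_x, grid_y), method="linear")
print(f"DTM raster {dtm.shape}, heights {np.nanmin(dtm):.1f}-{np.nanmax(dtm):.1f} m")
```

In practice the input would be the densified levelling grid or a decimated laser-scan point cloud rather than five points, but the gridding step is the same.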
Procedia PDF Downloads 112276 Simulation of the Visco-Elasto-Plastic Deformation Behaviour of Short Glass Fibre Reinforced Polyphthalamides
Authors: V. Keim, J. Spachtholz, J. Hammer
Abstract:
The importance of fibre reinforced plastics continually increases due to their excellent mechanical properties and low material and manufacturing costs, combined with significant weight reduction. Today, components are usually designed and calculated numerically using finite element methods (FEM) to avoid expensive laboratory tests. These programs are based on material models that include material-specific deformation characteristics. In this research project, material models for short glass fibre reinforced plastics are presented to simulate the visco-elasto-plastic deformation behaviour. Prior to modelling, specimens of the material EMS Grivory HTV-5H1, consisting of a polyphthalamide matrix reinforced by 50 wt.-% short glass fibres, are characterized experimentally with respect to the highly time-dependent deformation behaviour of the matrix material. To minimize the experimental effort, the cyclic deformation behaviour under tensile and compressive loading (R = -1) is characterized by isothermal complex low cycle fatigue (CLCF) tests. By combining, in one experiment, cycles at two strain amplitudes, strain rates spanning three orders of magnitude, and relaxation intervals, the visco-elastic deformation is characterized. To identify visco-plastic deformation, monotonic tensile tests, either displacement-controlled or strain-controlled (CERT), are compared. All relevant modelling parameters for this complex superposition of simultaneously varying mechanical loadings are quantified by these experiments. Subsequently, two different material models are compared with respect to their accuracy in describing the visco-elasto-plastic deformation behaviour. First, an extended 12-parameter model (EVP-KV2) based on Chaboche is used to model cyclic visco-elasto-plasticity at two time scales. The parameters of the model, which completely separates elastic and plastic deformation, are obtained by computational optimization using a genetic algorithm, an evolutionary algorithm driven by a fitness function. Second, the 12-parameter visco-elasto-plastic material model by Launay is used; in detail, this model contains a different type of flow function based on the definition of the visco-plastic deformation as a part of the overall deformation. The accuracy of the models is verified by corresponding experimental LCF testing.Keywords: complex low cycle fatigue, material modelling, short glass fibre reinforced polyphthalamides, visco-elasto-plastic deformation
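The parameter identification step can be sketched generically: an evolutionary optimizer searches for material parameters that minimize the misfit between a model's predicted stress response and the measured one. The toy model below is a three-parameter standard linear solid under constant strain rate, fitted with SciPy's differential evolution; it stands in for, and is far simpler than, the 12-parameter EVP-KV2 or Launay models, and the "measured" data are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy visco-elastic model (standard linear solid), integrated explicitly
# under a constant strain rate; a stand-in for the 12-parameter models.
def stress_response(params, t, strain_rate=1e-3):
    E0, E1, eta = params                 # equilibrium spring, Maxwell spring, dashpot
    dt = np.diff(t, prepend=0.0)
    eps = strain_rate * t
    sigma_v = np.zeros_like(t)           # Maxwell (viscous) branch stress
    for i in range(1, t.size):
        dsig = E1 * strain_rate - sigma_v[i - 1] * E1 / eta
        sigma_v[i] = sigma_v[i - 1] + dsig * dt[i]
    return E0 * eps + sigma_v

t = np.linspace(0.0, 50.0, 200)
true_params = (800.0, 1500.0, 2.0e4)     # synthetic "material": MPa, MPa, MPa*s
measured = stress_response(true_params, t)
measured += np.random.default_rng(1).normal(0.0, 0.2, t.size)  # measurement noise

def fitness(params):                     # least-squares misfit to the data
    return np.sum((stress_response(params, t) - measured) ** 2)

result = differential_evolution(
    fitness,
    bounds=[(100.0, 5000.0), (100.0, 5000.0), (1e3, 1e6)],
    seed=1,
)
print("identified parameters:", np.round(result.x, 1))
```

The same pattern scales to the real case: the forward model becomes the 12-parameter constitutive integration over the full CLCF loading history, and the fitness function compares against the measured hysteresis loops and relaxation segments.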
Procedia PDF Downloads 215