Search results for: Ant Colony Optimization
290 Search of Compounds with Antimicrobial and Antifungal Activity in the Series of 1-(2-(1H-Tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Authors: O. Antypenko, I. Vasilieva, S. Kovalenko
Abstract:
The search for new, effective, and less toxic antimicrobial agents is always relevant. Tetrazole derivatives are interesting objects both for synthesis and for pharmacological screening. Some derivatives of tetrazole have demonstrated antimicrobial activity; namely, 5-phenyl-tetrazolo[1,5-c]quinazoline was effective against Staphylococcus aureus and Enterococcus faecalis (MIC = 250 mg/L). Furthermore, investigation of the antimicrobial activity of 9-bromo(chloro)-5-morpholin(piperidine)-4-yl-tetrazolo[1,5-c]quinazolines against Escherichia coli, Enterococcus faecalis, Pseudomonas aeruginosa, and Staphylococcus aureus revealed that Gram-positive bacteria were more sensitive to the compounds than Gram-negative bacteria. Accordingly, 31 of our previously synthesized 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas were tested for in vitro antibacterial activity against Gram-positive bacteria (Staphylococcus aureus ATCC 25923, Enterobacter aerogenes, Enterococcus faecalis ATCC 29212) and Gram-negative bacteria (Pseudomonas aeruginosa ATCC 9027, Escherichia coli ATCC 25922, Klebsiella pneumoniae 68), and for antifungal properties against Candida albicans ATCC 885653. The agar-diffusion method was used to determine preliminary activity relative to well-known reference antimicrobials. All compounds were dissolved in DMSO and applied at 100 μg/disk, with the inhibition zone diameter (IZD, mm) used as the measure of antimicrobial activity. The three most active structures, each inhibiting several bacterial strains, were 1-ethyl-3-(5-fluoro-2-(1H-tetrazol-5-yl)phenyl)urea (1), 1-(4-bromo-2-(1H-tetrazol-5-yl)-phenyl)-3-(4-(trifluoromethyl)phenyl)urea (2), and 1-(4-chloro-2-(1H-tetrazol-5-yl)phenyl)-3-(3-(trifluoromethyl)phenyl)urea (3).
The IZD (mm) was 40 (Escherichia coli) and 25 (Klebsiella pneumoniae) for compound 1; 12 (Pseudomonas aeruginosa), 15 (Staphylococcus aureus), and 10 (Enterococcus faecalis) for compound 2; and 25 (Staphylococcus aureus) and 15 (Enterococcus faecalis) for compound 3. The most sensitive to the studied substances was the Gram-negative bacterium Pseudomonas aeruginosa, while none of the compounds affected Candida albicans. As for the reference drugs, Amikacin (30 µg/disk) showed an IZD of 27 and Ceftazidime (30 µg/disk) of 25 against Pseudomonas aeruginosa; these values are, unfortunately, higher than those of the studied 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas. The obtained results will be used for further purposeful optimization of the lead compounds into more effective antimicrobials, given the ever-mounting problem of microbial resistance.
Keywords: antimicrobial, antifungal, compounds, 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Procedia PDF Downloads 359
289 Standardizing and Achieving Protocol Objectives for Chest Wall Radiotherapy Treatment Planning Process Using an O-ring Linac in High-, Low- and Middle-Income Countries
Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun
Abstract:
Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results in the Dose Volume Histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a fully image-guided device, since a Mega-Voltage Cone Beam Computed Tomography (MVCBCT) scan must be acquired to deliver each radiotherapy session. For each MVCBCT, the Linac delivers 9 MU, which is taken into account during planning. To start the standardization, TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. Planning used 4 semi-arcs from 179-305 degrees. The planner must create optimization volumes for targets and Organs at Risk (OARs); the main difficulty for the planner was the base dose due to the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves are clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone beam CT is created using MV energy, the dose delivered by the daily image setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.
Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning
Procedia PDF Downloads 118
288 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites
Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze
Abstract:
Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process that promotes the formation of pieces in an ordinary crystalline state instead of the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which together give the opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal, and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to effectively densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity, and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying sintering temperature, holding time, and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites.
Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. The structure, physicochemical, mechanical, and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.
Keywords: aluminum oxide, ceramic matrix composites, graphene nanoplatelets, spark plasma sintering
Procedia PDF Downloads 376
287 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed, undirected network. This paper improves the control protocol in three respects. First, to improve both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as the single integrator and double integrator are unrealistic. Moreover, previous algorithms require the leader's acceleration to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable-structure algorithm for consensus tracking that does not require the leader's acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Furthermore, while existing algorithms can obtain a stable synchronous system, the obtained system may encounter disturbances that destroy the synchronization. To address this challenging problem, a state-dependent switching approach is introduced. Finally, in the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding-mode observer is designed to estimate the angular speed and nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a sufficiently large disturbance, which illustrates the accuracy of the synchronization control.
Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization
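The leader-following consensus idea in this abstract can be illustrated with a minimal simulation sketch. The ring network, gains (k_l, k_c), and speed values below are illustrative assumptions, not the paper's parameters, and the simple first-order speed update stands in for the full observer-based variable-structure algorithm.

```python
# Minimal leader-following consensus sketch for motor speed synchronization.
# Each follower blends attraction to a constant-speed leader with coupling
# to its network neighbors:
#   v_i <- v_i + dt * ( k_l * (v_leader - v_i) + k_c * sum_j a_ij * (v_j - v_i) )

def simulate(v0, adjacency, v_leader, k_l=1.0, k_c=0.5, dt=0.01, steps=2000):
    v = list(v0)
    n = len(v)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            coupling = sum(adjacency[i][j] * (v[j] - v[i]) for j in range(n))
            nxt.append(v[i] + dt * (k_l * (v_leader - v[i]) + k_c * coupling))
        v = nxt
    return v

# Four motors on a fixed, undirected ring network, all tracking a 100 rad/s leader.
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
final_speeds = simulate([90.0, 95.0, 105.0, 110.0], ring, v_leader=100.0)
```

With a direct leader term at every follower, the synchronization error decays at a rate set by k_l and the network Laplacian, so all speeds converge to the leader's.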
Procedia PDF Downloads 125
286 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
The potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning: they verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important during exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. The forums offered were hardly used or not used at all, because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This has the disadvantage that interaction occurs only among people who already know each other. Making new contacts is not fostered in the blended learning courses.
Students consider creating such opportunities a task of the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly to establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on the subject area and course. Future studies could follow up with a course-specific analysis.
Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
Procedia PDF Downloads 70
285 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit well-assessed physical models and build upon them neural networks that integrate the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution.
2. Materials and Methods
The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form
q* = argmin_q D(y, ỹ), (1)
where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors of the solution (Fig. 1); ii) we use regularization procedures of the type
q̂* = argmin_q D(y, ỹ) + R(q),
where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1) in the overall optimization procedure.
3. Discussion and Conclusion
DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
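As a toy instance of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q), the sketch below uses a plain Tikhonov penalty R(q) = lam·||q||² with a linear forward operator, solved by gradient descent. The operator A, the data y, and lam are illustrative stand-ins, not the paper's learned networks or PDE forward model.

```python
# Tikhonov-regularized inversion sketch: minimize ||A q - y||^2 + lam * ||q||^2
# by gradient descent. A stands in for the forward model, y for the observations.

def tikhonov_gd(A, y, lam, lr=0.01, steps=5000):
    m, n = len(A), len(A[0])
    q = [0.0] * n
    for _ in range(steps):
        # residual r = A q - y (the data-fidelity / discrepancy term)
        r = [sum(A[i][j] * q[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of ||r||^2 + lam * ||q||^2
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) + 2.0 * lam * q[j]
             for j in range(n)]
        q = [q[j] - lr * g[j] for j in range(n)]
    return q

# With A = I the minimizer is q = y / (1 + lam), a handy sanity check:
# the penalty shrinks the unregularized solution toward zero.
q_rec = tikhonov_gd([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], lam=0.1)
```

In the paper's setting D and R would involve neural networks; the shrinkage behavior shown here is the basic mechanism by which the R(q) term stabilizes an ill-conditioned inversion.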
Procedia PDF Downloads 74
284 Extracorporeal CO2 Removal (ECCO2R): An Option for Treatment of Refractory Hypercapnic Respiratory Failure
Authors: Shweh Fern Loo, Jun Yin Ong, Than Zaw Oo
Abstract:
Acute respiratory distress syndrome (ARDS) is a common, serious condition of bilateral lung infiltrates that develops secondary to various underlying conditions such as diseases or injuries. ARDS with severe hypercapnia is associated with higher ICU mortality and morbidity. Venovenous extracorporeal membrane oxygenation (VV-ECMO) support has been established to avert life-threatening hypoxemia and hypercapnic respiratory failure despite optimal conventional mechanical ventilation. However, VV-ECMO is relatively inadvisable in particular groups of patients, especially those with multi-organ failure, advanced age, hemorrhagic complications, or irreversible central nervous system pathology. We present a case of a 79-year-old Chinese lady without any pre-existing lung disease admitted to our hospital's intensive care unit (ICU) after acute presentation of breathlessness and chest pain. After extensive workup, she was diagnosed with rapidly progressing acute interstitial pneumonia with ARDS and hypercapnic respiratory failure. The patient received lung-protective mechanical ventilation strategies and neuromuscular blockade therapy as per clinical guidelines. However, the hypercapnic respiratory failure was refractory, and she was deemed not a good candidate for VV-ECMO support given her advanced age and high vasopressor requirements from shock. Alternative therapy with extracorporeal CO2 removal (ECCO2R) was considered and implemented. The patient received 12 days of ECCO2R paired with muscle paralysis, optimization of lung-protective mechanical ventilation, and dialysis. Unfortunately, the patient still had refractory hypercapnic respiratory failure with dual vasopressor support despite prolonged therapy. Given the failed and futile medical treatment, the family opted for withdrawal of care, a conservative approach, and comfort care, which led to her demise. The effectiveness of extracorporeal CO2 removal may depend on the disease burden, involvement, and severity of the disease.
There are insufficient data to make strong recommendations about the benefit-risk ratio of ECCO2R devices, and further studies and data are required. Nonetheless, ECCO2R can be considered an alternative treatment for patients with refractory hypercapnic respiratory failure who are unsuitable for initiating venovenous ECMO.
Keywords: extracorporeal CO2 removal (ECCO2R), acute respiratory distress syndrome (ARDS), acute interstitial pneumonia (AIP), hypercapnic respiratory failure
Procedia PDF Downloads 65
283 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesions
Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee
Abstract:
Purpose: Dynamic conformal arc (DCA) and intensity-modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall-off. Rapid dose fall-off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for others the conformity is unacceptable with DCA plans, and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and the dose indices are compared. We describe a deviation index (DI), a measurement of the deviation of the target shape from a sphere, and test its usefulness for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated on the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup, and treatment planning was done with Brainlab iPlan RT Dose (version 4.5.4). DCA plans were found to give better dose fall-off in terms of R50% (R50%(DCA) = 4.75 vs. R50%(IMRT) = 5.242), while IMRT plans had better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs. TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV to the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI implies a more irregular PTV. To study the functionality of DI, 23 cases were chosen, with PTV volumes ranging from 1.149 cc to 29.83 cc and DI ranging from 1.059 to 3.202. For each case, we made a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan.
The TVR and R50% of each case were then compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% of the DCA and IMRT plans were examined. The conformity of the IMRT plans was better than that of the DCA plans, with the majority of the TVR(DCA)/TVR(IMRT) ratios > 1 (values ranging from 0.877 to 1.538), while the dose fall-off was better for the DCA plans, with the majority of the R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, the conformity of DCA plans can be expected to become unacceptable, and IMRT will be the technique of choice.
Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy
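The DI defined in this abstract (1 mm shell volume of the PTV over that of an equal-volume sphere) can be sketched under a thin-shell approximation: treating each 1 mm shell's volume as surface area × 1 mm, DI reduces to the ratio of the PTV surface area to the surface area of a sphere of identical volume. The function below follows this approximation and is an illustration, not the authors' exact computation, which would operate on the contoured PTV in the planning system.

```python
import math

def deviation_index(ptv_volume_cc, ptv_surface_mm2):
    """Approximate DI: PTV surface area over the surface area of an
    equal-volume sphere (thin-shell approximation of the 1 mm shells)."""
    volume_mm3 = ptv_volume_cc * 1000.0  # 1 cc = 1000 mm^3
    r = (3.0 * volume_mm3 / (4.0 * math.pi)) ** (1.0 / 3.0)  # equivalent sphere radius, mm
    sphere_surface = 4.0 * math.pi * r * r
    return ptv_surface_mm2 / sphere_surface

# A perfect sphere of radius 10 mm (volume ~4.19 cc, surface ~1256.6 mm^2) gives
# DI = 1; any more irregular shape of the same volume has a larger surface, so DI > 1.
di_sphere = deviation_index(4.18879, 1256.637)
di_irregular = deviation_index(4.18879, 2000.0)
```

Because the sphere minimizes surface area for a given volume, DI ≥ 1 by construction, matching the abstract's observation that near-spherical PTVs sit close to 1 and irregular ones score higher.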
Procedia PDF Downloads 241
282 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge: delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers can transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates how indispensable HPC is for meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
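The "polynomial-time but massive" category mentioned above (all-to-all comparisons) can be sketched in miniature: the code below farms out all pairwise sequence comparisons to a worker pool. The similarity measure (a simple Hamming fraction) and the thread pool are illustrative simplifications of what an HPC cluster would run at genomic scale.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def hamming_similarity(a, b):
    # fraction of matching positions between two equal-length sequences
    return sum(x == y for x, y in zip(a, b)) / len(a)

def all_to_all(seqs, workers=4):
    # all n*(n-1)/2 pairwise comparisons, distributed across a worker pool;
    # the quadratic pair count is what makes this costly on real datasets
    pairs = list(combinations(range(len(seqs)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(lambda ij: hamming_similarity(seqs[ij[0]], seqs[ij[1]]), pairs)
        return dict(zip(pairs, scores))

seqs = ["ACGTACGT", "ACGTACGA", "TTGTACGT"]
scores = all_to_all(seqs)  # scores[(0, 1)] is the similarity of seqs 0 and 1
```

For n sequences of length m the work is O(n²m): benign as an algorithm, but with millions of sequences it is exactly the kind of embarrassingly parallel load that HPC clusters absorb well.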
Procedia PDF Downloads 363
281 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor
Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa
Abstract:
Piezoresistive pressure sensors were among the first microelectromechanical systems (MEMS) devices to be developed and still display significant growth, prompted by advancements in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature can be considered the main environmental condition affecting system performance. The study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, a study of the effects of temperature and doping concentration on a boron-implanted piezoresistor for a silicon-based pressure sensor is discussed. We have optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance. To be more precise, in order to reduce the temperature drift, a high doping concentration is needed. It is well known that the Wheatstone bridge in a pressure sensor is supplied with either a constant-voltage or a constant-current input. With a constant-voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas with a constant-current supply the thermal drift can be directly compensated by the bridge itself. It is also beneficial to compensate the temperature coefficients of the piezoresistors themselves so as to further reduce the temperature drift. With a current supply, the TCS depends on both TCπ and TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose an appropriate doping concentration at which they cancel each other. An exact cancellation of TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications.
Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that the temperature effects can be minimized or eliminated. This paper describes the optimum doping levels for the piezoresistors at which the TCS of the pressure transducers will be zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also described. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are therefore achieved, so the thermal effects are considerably reduced. Finally, we have calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor's behavior against temperature and to minimize this effect by optimizing the doping concentration.
Keywords: piezoresistive, pressure sensor, doping concentration, TCR, TCS
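The cancellation logic described above (a negative TCπ against a positive TCR under constant-current excitation, where TCS ≈ TCπ + TCR) can be sketched as a selection over a doping table. The doping/coefficient values below are made-up illustrative data, not measurements from this work; only the selection logic is the point.

```python
# Hypothetical doping-vs-coefficient table (illustrative values only, ppm/°C):
doping = [1e13, 5e13, 1e14, 5e14, 1e15]  # ions/cm^3
tc_pi  = [-3500.0, -2600.0, -2100.0, -1500.0, -1100.0]  # negative, shrinks with doping
tcr    = [1400.0, 2300.0, 2700.0, 3200.0, 3600.0]       # positive, grows with doping

def best_doping(doping, tc_pi, tcr):
    # With a constant-current supply TCS ~ TCpi + TCR, so pick the concentration
    # where the residual |TCpi + TCR| (the uncompensated TCS) is smallest.
    return min(zip(doping, tc_pi, tcr), key=lambda row: abs(row[1] + row[2]))

n_opt, tc_pi_opt, tcr_opt = best_doping(doping, tc_pi, tcr)
residual_tcs = tc_pi_opt + tcr_opt  # ppm/°C left after cancellation
```

As the abstract notes, exact cancellation is not attainable; in practice a nonzero residual like the one computed here remains and is handled by the adjustable compensation approach.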
Procedia PDF Downloads 182
280 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested with a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner.
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
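The receding-horizon idea behind MPC can be shown on a toy problem. The sketch below steers a 1-D double integrator by brute-force search over a short input horizon and applies only the first move at each step; it stands in for the paper's convex program over the full 6-DOF dual-quaternion dynamics, and all weights, limits, and horizon lengths are illustrative assumptions.

```python
import itertools

DT = 0.1

def step(state, u):
    # 1-D double integrator: position x, velocity v, bounded thrust u
    x, v = state
    return (x + v * DT, v + u * DT)

def rollout_cost(state, seq):
    # finite-horizon cost: penalize position, velocity, and control effort
    cost = 0.0
    for u in seq:
        state = step(state, u)
        x, v = state
        cost += x * x + 0.1 * v * v + 0.01 * u * u
    return cost

def mpc_control(state, horizon=4, u_set=(-1.0, 0.0, 1.0)):
    # enumerate all input sequences (tractable only for tiny horizons; a real
    # MPC solves a convex program instead), then apply only the first element
    # of the best sequence — the receding-horizon principle
    best = min(itertools.product(u_set, repeat=horizon),
               key=lambda seq: rollout_cost(state, seq))
    return best[0]

state = (1.0, 0.0)  # start 1 m away, at rest
for _ in range(100):
    state = step(state, mpc_control(state))
```

The thruster-limit constraint is handled here by construction (u is picked from a bounded set), mirroring how the paper's convex formulation handles actuator limits explicitly rather than by saturation after the fact.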
Procedia PDF Downloads 146
279 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective path-planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path-planning problem is addressed by imitating the animal mode of foraging, using a Lévy distribution augmented with Bayesian reasoning. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment, which improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results prove effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that data from a small number of agent missions can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions.
While this paper proves the linkage of Bayesian reasoning and learning with SA and an effective search strategy, future work will focus on simplifying the architecture.
Keywords: Lévy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
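The Lévy-flight exploration strategy described above can be sketched in a few lines; the heavy-tailed step-length distribution is the essential ingredient. This is an illustrative sketch only: the exponent `alpha`, the minimum step `l_min`, and the uniform-heading model are assumptions, not parameters reported in the abstract.

```python
import math
import random

def levy_step(alpha=1.5, l_min=1.0, rng=random):
    """Draw a step length from a Pareto (power-law) tail, the heavy-tailed
    distribution underlying Levy-flight search."""
    u = rng.random()
    return l_min * (1.0 - u) ** (-1.0 / alpha)

def levy_walk(n_steps, alpha=1.5, start=(0.0, 0.0), rng=random):
    """2D Levy walk: heavy-tailed step lengths with uniform random headings."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        step = levy_step(alpha, rng=rng)
        heading = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path
```

The occasional very long steps produced by the power-law tail are what let a Lévy searcher outperform fixed-pattern sweeps when targets are sparse.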
Procedia PDF Downloads 144
278 Alkali Activated Materials Based on Natural Clay from Raciszyn
Authors: Michal Lach, Maria Hebdowska-Krupa, Justyna Stefanek, Artur Stanek, Anna Stefanska, Janusz Mikula, Marek Hebda
Abstract:
Limited resources of raw materials make it necessary to obtain materials from other sources. The best-known and most widespread approach in this area is recycling, which focuses mainly on the reuse of material. Another solution used by various companies to improve sustainable development is waste-free production. It involves producing exclusively from materials whose waste belongs to the group of renewable raw materials. This means the waste can: (i) be recycled directly during the manufacturing process of further products or (ii) serve as a raw material obtained by other companies for the production of alternative products. The article presents the possibility of using post-production clay from the Jurassic limestone deposit "Raciszyn II" as a raw material for the production of alkali activated materials (AAM). Such products are increasingly used, mostly in various building applications. However, their final properties depend significantly on many factors, the most important being the chemical composition of the raw material, particle size, specific surface area, type and concentration of the activator, and the temperature range of the heat treatment. Mineralogical and chemical analyses of clay from the "Raciszyn II" deposit confirmed that this material, owing to its high content of aluminosilicates, can be used as a raw material for the production of AAM. To obtain a product with the best properties, the clay calcination process was also optimized. Based on the results obtained, it was found that this process should take place between 750 °C and 800 °C. Using a lower temperature yields a raw material with a low metakaolin content, which is the main component of materials suitable for alkaline activation processes. 
On the other hand, higher heat treatment temperatures cause thermal dissociation of large amounts of calcite, which releases large amounts of CO2 and forms calcium oxide. This compound significantly accelerates the binding process, which consequently often prevents the correct formation of the geopolymer mass. The effect of various activators, (i) NaOH, (ii) KOH, and (iii) mixtures of KOH and NaOH in ratios of 10%, 25% and 50% by volume, on the compressive strength of the AAM was also analyzed. Depending on the activator used, the results ranged from 25 MPa to 40 MPa. These values are comparable with those obtained for materials produced on the basis of Portland cement, one of the most popular building materials.
Keywords: alkaline activation, aluminosilicates, calcination, compressive strength
Procedia PDF Downloads 153
277 The Study of Intangible Assets at Various Firm States
Authors: Gulnara Galeeva, Yulia Kasperskaya
Abstract:
The study addresses a relevant problem related to the formation of an efficient investment portfolio for an enterprise. The structure of the investment portfolio depends on the degree to which intangible assets influence the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and importance of intangible assets for the enterprise's income, and this affects the accuracy of the calculations. To study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limiting cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets depends significantly on how close the enterprise is to a crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets allows estimating the probability of a full-blown crisis of the enterprise. The author identified a criterion that allows making fundamental decisions on investment feasibility. The study also developed a rapid supplementary method for assessing the overall status of the enterprise, based on a questionnaire survey of its Director. The questionnaire consists of only two questions. 
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and to identify a range of universal ways of overcoming the crisis. It was shown that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The results obtained offer managers and business owners a simple and affordable method of investment portfolio optimization that takes into account how close the enterprise is to a full-blown crisis.
Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix
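The crisis indicator described in the third stage, the ratio of the standard deviation to the mean of the priority vector, can be illustrated with a standard AHP-style calculation. This is a hedged sketch: the row geometric-mean approximation and the example 3x3 matrix are illustrative conventions, not the authors' Wide Pairwise Comparison Matrix method itself.

```python
import math

def priority_vector(pairwise):
    """Approximate the AHP priority vector of a pairwise comparison matrix
    via the row geometric-mean method, normalised to sum to 1."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def coefficient_of_variation(vec):
    """Std-dev-to-mean ratio of the priority vector: the crisis-probability
    indicator proposed in the abstract."""
    mean = sum(vec) / len(vec)
    var = sum((v - mean) ** 2 for v in vec) / len(vec)
    return math.sqrt(var) / mean

# Hypothetical 3x3 comparison of three intangible-asset classes
m = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = priority_vector(m)
cv = coefficient_of_variation(w)
```

A highly uneven priority vector (large `cv`) would, under the abstract's criterion, signal a higher probability of crisis.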
Procedia PDF Downloads 208
276 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)
Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy
Abstract:
The drug discovery and development process is generally known to be very lengthy and labor-intensive. Therefore, to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) alleviates this issue by applying computational power to streamline the whole drug discovery process, from target identification to lead optimization. This drug design approach is especially applicable to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no definitive cure for this disease, especially with the continuing emergence of drug-resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium's metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug-resistant tuberculosis (MDR-TB), and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from the AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme and ranked by binding affinity. Compounds with better binding affinities than LipB's known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 showed better binding energies than decanoic acid. Furthermore, 12 of the 112 compounds showed highly promising ADMET and TOPKAT properties. 
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT
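The filtering step described above, keeping only compounds that bind more strongly than the decanoic acid reference and ranking them best-first, can be sketched as follows. The function name and the example energies are illustrative; the docking itself is performed by dedicated software.

```python
def rank_hits(candidates, reference_energy):
    """Keep compounds whose docking binding energy is lower (stronger) than
    the reference inhibitor's, ranked best-first. `candidates` maps compound
    name -> binding energy (more negative = stronger binding)."""
    hits = {name: e for name, e in candidates.items() if e < reference_energy}
    return sorted(hits, key=hits.get)
```

Applied to the study's numbers, such a filter would reduce 31,692 docked compounds to the 112 that outperform decanoic acid, before the ADMET/TOPKAT stage.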
Procedia PDF Downloads 424
275 Benefits of Monitoring Acid Sulfate Potential of Coffee Rock (Indurated Sand) across Entire Dredge Cycle in South East Queensland
Authors: S. Albert, R. Cossu, A. Grinham, C. Heatherington, C. Wilson
Abstract:
Shipping trends suggest that vessels of increasing size and draught are visiting Australian ports, highlighting potential challenges to port infrastructure and requiring optimization of shipping channels to ensure safe passage. The Port of Brisbane in Queensland, Australia has an 80 km long access shipping channel in which vessels must transit 15 km of relatively shallow coffee rock outcrops (coffee rock is a generic class of indurated sands in which sand grains are bound within an organic clay matrix) towards the northern passage in Moreton Bay. This represents a risk to channel deepening and maintenance programs, as the dredgeability of this material is more challenging owing to its high cohesive strength compared with the surrounding marine sands, and owing to its potentially higher acid sulfate risk. In situ assessment of acid sulfate sediment for dredge spoil control is an important tool in mitigating ecological harm. Coffee rock in an anoxic, undisturbed state poses no acid sulfate risk; however, when it is disturbed by dredging, it is vital to ensure that any iron sulfides present are either insignificant or neutralized. To better understand the potential risk, we examined the acid sulfate potential of coffee rock across the entire dredge cycle in order to accurately portray the true outcome of disturbing acid sulfate sediment in dredging operations in Moreton Bay. In December 2014, a dredge trial was undertaken with a trailing suction hopper dredger. In situ samples collected prior to dredging revealed acid sulfate potential above threshold guidelines, which could lead to expensive dredge spoil management. However, the potential acid sulfate risk was then monitored in the hopper and the subsequent discharge, both of which showed that a significant reduction in acid sulfate potential had occurred. Additionally, the acid neutralizing capacity increased significantly due to the inclusion of shell fragments (calcium carbonate) from the dredge target areas. 
This clearly demonstrates the importance of assessing potential acid sulfate risk across the entire dredging cycle and highlights the need to carefully evaluate sources of acidity.
Keywords: acid sulfate, coffee rock, indurated sand, dredging, maintenance dredging
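The acid-base accounting behind such an assessment, potential sulfidic acidity versus acid neutralizing capacity, can be sketched as follows. The conversion factors are the standard ones used in acid sulfate soil accounting; the fineness factor and the function signature are illustrative assumptions, not values from this study.

```python
def net_acidity(s_cr_percent, anc_percent_caco3, fineness_factor=1.5):
    """Acid-base account for a sediment sample, in mol H+ per tonne:
    potential sulfidic acidity from chromium-reducible sulfur (%S) minus
    acid neutralising capacity (%CaCO3 equivalent) discounted by a
    fineness factor for slowly reacting carbonate (e.g. shell fragments).
    Positive values indicate net acid-generating material."""
    potential_acidity = s_cr_percent * 623.7      # mol H+/t per %S
    neutralising = anc_percent_caco3 * 199.8      # mol H+/t per %CaCO3
    return potential_acidity - neutralising / fineness_factor
```

In the trial described above, the shell-fragment carbonate picked up during dredging raises the second term, pushing the net acidity of the discharged spoil down.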
Procedia PDF Downloads 368
274 Experimental Design in Extraction of Pseudomonas sp. Protease from Fermented Broth by Polyethylene Glycol/Citrate Aqueous Two-Phase System
Authors: Omar Pillaca-Pullo, Arturo Alejandro-Paredes, Carol Flores-Fernandez, Marijuly Sayuri Kina, Amparo Iris Zavaleta
Abstract:
Aqueous two-phase system (ATPS) is an interesting alternative for separating industrial enzymes because it is easy to scale up and has a low cost. Polyethylene glycol (PEG) mixed with potassium phosphate or magnesium sulfate is one of the most frequently used polymer/salt ATPS, but its use results in high concentrations of phosphates and sulfates in wastewater, causing environmental issues. Citrate could replace these inorganic salts because it is biodegradable and does not produce toxic compounds. On the other hand, statistical design of experiments is widely used for ATPS optimization: it allows the effects of the variables involved in purification to be studied and their significant effects on selected responses and interactions to be estimated. A 2⁴ factorial design with four center points (20 experiments) was employed to study the partition and purification of proteases produced by Pseudomonas sp. in the PEG/citrate ATPS. The ATPS was prepared with different sodium citrate concentrations [14, 16 and 18% (w/w)], pH values (7, 8 and 9), PEG molecular weights (2,000; 4,000 and 6,000 g/mol) and PEG concentrations [18, 20 and 22% (w/w)]. All system components were mixed with 15% (w/w) of the fermented broth, and deionized water was added to a final weight of 12.5 g. The systems were then mixed and kept at room temperature until the two phases separated. The volumes of the top and bottom phases were measured, and aliquots from both phases were collected for subsequent determination of proteolytic activity and total protein. The influence of variables such as PEG molar mass (MPEG), PEG concentration (CPEG), citrate concentration (CSal) and pH was evaluated on the following responses: purification factor (PF), activity yield (Y), partition coefficient (K) and selectivity (S). The STATISTICA program, version 10, was used for the analysis. 
According to the results obtained, higher levels of CPEG and MPEG had a positive effect on extraction, while pH did not influence the process. On the other hand, CSal could be related to low values of Y because citrate ions have a negative effect on solubility and enzyme structure. The optimum values of Y (66.4%), PF (1.8), K (5.5) and S (4.3) were obtained at CSal (18%), MPEG (6,000 g/mol), CPEG (22%) and pH 9. These results indicate that the PEG/citrate system is suitable for purifying these Pseudomonas sp. proteases from fermented broth as a first purification step.
Keywords: citrate, polyethylene glycol, protease, Pseudomonas sp.
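The 2⁴ factorial design with four center points (20 runs) described above can be generated programmatically. The following is a minimal sketch using the factor levels quoted in the abstract; the function names are illustrative, not part of the study.

```python
from itertools import product

def full_factorial_2k(factors):
    """All 2^k corner runs of a two-level design, given factor -> (low, high)."""
    names = list(factors)
    runs = []
    for coded in product((-1, 1), repeat=len(names)):
        runs.append({n: factors[n][0] if c == -1 else factors[n][1]
                     for n, c in zip(names, coded)})
    return runs

def center_points(factors, k=4):
    """k replicated center points at the mid-level of every factor."""
    mid = {n: (lo + hi) / 2.0 for n, (lo, hi) in factors.items()}
    return [dict(mid) for _ in range(k)]

# Factor levels from the abstract: salt %, pH, PEG MW (g/mol), PEG %
factors = {"CSal": (14, 18), "pH": (7, 9),
           "MPEG": (2000, 6000), "CPEG": (18, 22)}
design = full_factorial_2k(factors) + center_points(factors)
# 16 corner runs + 4 center points = the 20 experiments of the study
```

The replicated center points at the mid-levels (CSal 16%, pH 8, MPEG 4,000 g/mol, CPEG 20%) are what provide the pure-error estimate in such designs.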
Procedia PDF Downloads 194
273 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target search in a Global Positioning System (GPS)-denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which in turn limits their application in real-world scenarios. This is because robots in a swarm need to share information about their locations and the signals received from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever they come within communication range, making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, on the basis of which we unify their coordinate frames. They can then share information about the areas already explored, about their surroundings, and about target signals from their locations, in order to make decisions about their future movement based on the search algorithm. 
During exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to come under one global coordinate frame in which they can exchange information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that, running our modified Particle Swarm Optimization (PSO) without global information, we can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan to carry out a comparison study between different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms, to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
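The frame-unification step, recovering the rigid transform between two robots' frames from mutual range and bearing measurements, can be sketched in 2D as follows. This is a minimal noise-free sketch under stated assumptions (each robot measures the other's range and bearing in its own frame); the function names are illustrative.

```python
import math

def unify_frames(r, bearing_ab, bearing_ba):
    """Recover the SE(2) transform mapping robot B's local frame into robot
    A's, from the range r between them, A's bearing to B (in A's frame), and
    B's bearing to A (in B's frame)."""
    theta = (bearing_ab + math.pi) - bearing_ba   # B's orientation in A's frame
    tx = r * math.cos(bearing_ab)                 # B's origin in A's frame
    ty = r * math.sin(bearing_ab)
    return theta, (tx, ty)

def to_frame_a(point_b, theta, t):
    """Re-express a point known in B's frame in A's (unified) frame."""
    x, y = point_b
    c, s = math.cos(theta), math.sin(theta)
    return (t[0] + c * x - s * y, t[1] + s * x + c * y)
```

Once every robot in a connected group re-expresses its map through such a transform chain, explored-area and target-signal information becomes mutually meaningful, which is the premise of the unified-coordinate approach above.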
Procedia PDF Downloads 137
272 Variable Renewable Energy Droughts in the Power Sector – A Model-based Analysis and Implications in the European Context
Authors: Martin Kittel, Alexander Roth
Abstract:
The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. Power sectors thereby become increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaic, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability (‘VRE droughts’), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector, and energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, the authors provide a selection of relevant historical weather years representing different grades of VRE drought severity. These weather years serve as input for the capacity expansion model of the European power sector used in this analysis (DIETER). We additionally conduct robustness checks varying policy-relevant assumptions on capacity expansion limits, interconnection, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts, flawing modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. 
Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility
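As a flavor of what identifying a ‘VRE drought’ in a weather year involves, the following minimal sketch flags contiguous low-availability runs in an hourly capacity-factor series. The threshold and minimum duration are illustrative choices, not the criteria used by the authors.

```python
def find_droughts(capacity_factors, threshold=0.1, min_hours=24):
    """Flag VRE drought events: contiguous runs of hours during which the
    capacity factor stays below `threshold` for at least `min_hours`.
    Returns a list of (start_hour, duration) pairs."""
    events, start = [], None
    for i, cf in enumerate(capacity_factors):
        if cf < threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_hours:
                events.append((start, i - start))
            start = None
    if start is not None and len(capacity_factors) - start >= min_hours:
        events.append((start, len(capacity_factors) - start))
    return events
```

Ranking candidate weather years by the frequency and duration of such events is one simple way to ensure the selected years actually contain severe droughts.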
Procedia PDF Downloads 72
271 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling
Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer
Abstract:
The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes in metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamics (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the gas holdup increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. 
These results form a solid basis for optimizing the ambr15™, using the CFD and DFB modelling approaches together, for microscale simulations of larger-scale cell culture processes.
Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor
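The kLa values discussed above govern dissolved-gas dynamics through the standard lumped mass-transfer model dC/dt = kLa (C* - C). The following sketch integrates it with explicit Euler; the parameter values are illustrative, not those of the ambr15™ study.

```python
def dissolved_gas_profile(kla, c_star, c0=0.0, dt=1.0, t_end=600.0):
    """Integrate dC/dt = kLa * (C* - C), the lumped two-film mass-transfer
    model, with explicit Euler. Units must be consistent (kLa in 1/s if dt
    and t_end are in seconds). Returns a list of (time, concentration)."""
    c, t, profile = c0, 0.0, []
    while t <= t_end:
        profile.append((t, c))
        c += kla * (c_star - c) * dt
        t += dt
    return profile
```

With a hypothetical kLa of 0.01 1/s, the dissolved concentration reaches about 63% of saturation after 1/kLa = 100 s, which is why higher gas flow (higher kLa) speeds up oxygen delivery and CO₂ stripping.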
Procedia PDF Downloads 282
270 Dual Challenges in Host State Regulation on Transnational Corporate Damages: China's Dilemma and Breakthrough
Authors: Xinchao Liu
Abstract:
Regulating environmental and human rights damages caused by transnational corporations (TNCs) in host States is a core issue in the business and human rights discourse. In current regulatory practice, host States, which are territorially based and should bear the primary responsibility for regulation, face dual challenges at both the domestic and international levels, leading to their continued marginalization. Specifically, host States as regulators of TNC damages are constrained domestically by the limits of territorial jurisdiction and internationally by the neoliberal international economic order exemplified by investment protection mechanisms. Taking China as a sample, it currently lacks a comprehensive system for regulating TNC damages. While the domestic constraints manifest as the marginalization of judicial regulation, the absence of a corporate duty of care, and inadequate extraterritorial regulatory effectiveness, the international constraints are reflected in the absence of foreign investor obligations in investment agreements and the asymmetry of dispute resolution clauses, challenging regulatory sovereignty. As China continues to advance its policy of high-quality opening up, the risks of negative externalities from transnational capital will continue to increase, necessitating a focus on building and perfecting a mechanism for regulating TNC damages within the framework of international law. To address the domestic constraints, it is essential to clarify the division of regulatory responsibilities between judicial and administrative bodies, promote the normalization of judicial regulation, and enhance judicial oversight of governmental settlements. It is also crucial to improve the choice-of-law rules for cross-border torts and the standards for parent company liability for omissions, and to enhance extraterritorial judicial effectiveness through transnational judicial dialogue and cooperation mechanisms. 
To counteract the international constraints, specifying investor obligations in investment treaties and designing symmetrical dispute resolution clauses are indispensable to eliminate regulatory chill. Additionally, actively advancing the implementation of TNC obligations in business and human rights treaty negotiations will lay an international legal foundation for the regulatory sovereignty of host States.
Keywords: transnational corporate damages, home state litigation, optimization limit, investor-state dispute settlement
Procedia PDF Downloads 9
269 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing a life-cycle analysis into an explicit strategy for evaluation systems. Because the main categories inevitably introduce uncertainties, the approach takes up a composite structure model (CSM) as an environmental management system (EMS) in the practical science of evaluating small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how a natural system's inputs, outputs, and outcomes influence the "framework measures", and gives a maximum-likelihood estimate of how elements are simulated over the composite structure. The traditional approach to modeling is based on physical dynamic and static patterns of the parameters that influence the environment. The model unifies methods to demonstrate, from a management perspective, how construction systems ecology is interrelated, reflecting the effects of engineering systems on ecology as ultimately unified technologies in an extensive range beyond the construction impact itself, such as energy systems. Sustainability broadens the socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for the precise accomplishment of strategic plans. The management and engineering perspective focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions or aggregations of structural systems toward equilibrium as a steady, stable condition. Thereby, construction systems ecology incorporates an engineering and management scheme as a midpoint between biotic and abiotic components to predict construction impacts. 
The resulting theory of environmental obligation then suggests either a procedure or a technique to be achieved in the sustainability impact of construction system ecology (SICSE), as a relative mitigation measure of deviation control.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 393
268 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization
Authors: Jessica Gu, Yu Chen
Abstract:
Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are the organization's primary concerns from a problem-solving perspective; however, the optimal distribution of effort between knowledge creation and diffusion is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming to elucidate how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and to explore which exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments was conducted. Both long-term steady-state and time-dependent developmental results were obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing knowledge creation versus diffusion. One of the interesting findings reveals a non-monotonic dependence of organizational performance under a turbulent environment but a monotonic one under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous adjustments of the knowledge creation and diffusion choice can be identified for achieving optimized organizational performance. 
Additional influential variables are further discussed, and future work directions are finally elaborated. The proposed agent-based model generates evidence on how a knowledge worker strategically allocates effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions pose challenges to the organizational system. Meanwhile, it serves as a roadmap and offers valuable macro-level, long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic among employees.
Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution
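A toy version of such an agent-based model can convey the create-versus-diffuse trade-off. The following sketch is purely illustrative and far simpler than the model in the study (no network structure, no environmental turbulence); every parameter name and the performance proxy are assumptions.

```python
import random

def simulate_km(n_workers=20, periods=200, p_create=0.3, seed=0):
    """Each period every worker either creates a new knowledge item
    (probability p_create) or adopts a random item from a randomly met
    colleague. Organizational performance is proxied by the mean
    knowledge stock per worker."""
    rng = random.Random(seed)
    stocks = [set() for _ in range(n_workers)]
    next_item = 0
    for _ in range(periods):
        for i in range(n_workers):
            if rng.random() < p_create:
                stocks[i].add(next_item)      # knowledge creation
                next_item += 1
            else:
                j = rng.randrange(n_workers)  # knowledge diffusion
                if stocks[j]:
                    stocks[i].add(rng.choice(sorted(stocks[j])))
    return sum(len(s) for s in stocks) / n_workers
```

Sweeping `p_create` in such a toy model already shows the tension the abstract describes: with no creation there is nothing to diffuse, while with creation only, knowledge never spreads.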
Procedia PDF Downloads 241
267 Optimization of Culture Conditions of Paecilomyces Tenuipes, Entomopathogenic Fungi Inoculated into the Silkworm Larva, Bombyx Mori
Authors: Sung-Hee Nam, Kwang-Gill Lee, You-Young Jo, HaeYong Kweon
Abstract:
Entomopathogenic fungi include Cordyceps species isolated from dead silkworms and cicadas. Fungi on cicadas were described in old Chinese medicinal books, and from ancient times vegetable wasps and plant worms were widely known to contain active substances and have been studied for pharmacological use. Among the many fungi belonging to the genus Cordyceps, Cordyceps sinensis has been demonstrated to yield natural products possessing various biological activities and many bioactive components. It is commonly used to replenish the kidney and soothe the lung, and for the treatment of fatigue. Due to their commercial and economic importance, demand for Cordyceps has increased rapidly. However, the supply of Cordyceps specimens cannot meet the increasing demand because of sole dependence on field collection and habitat destruction, because it is difficult to obtain many insect hosts in nature, and because the edibility of the host insect needs to be verified from a pharmacological standpoint. Recently, this setback was overcome when P. tenuipes became cultivable on a large scale using the silkworm as host. Pharmacological effects of P. tenuipes cultured on silkworm, such as strengthening immune function, anti-fatigue and anti-tumor activity, and regulation of the liver, have been proved, and the products are widely commercialized. In this study, we attempted to establish a method for stable growth of P. tenuipes on silkworm hosts and an optimal condition for synnemata formation. To determine the optimum culturing conditions, temperature and light conditions were varied. The length and number of synnemata were highest at 25 °C and 100-300 lux illumination. On average, the synnemata of wild P. tenuipes measure 70 mm in length and 20 in number; those of the cultured strain were relatively shorter and more numerous. 
The number of synnemata may have increased as a result of inoculating the host with highly concentrated conidia, while the length may have decreased due to limited nutrition per individual. It is notable that changes in light illumination cause morphological variations in the synnemata. However, regulation of light and temperature alone could not produce stromata bearing perithecia, asci, and ascospores. Yamanaka reported that although a complete fruiting body can be produced under optimal culture conditions, it should be regarded as synnemata because it does not develop into an ascoma bearing ascospores.
Keywords: paecilomyces tenuipes, entomopathogenic fungi, silkworm larva, bombyx mori
Procedia PDF Downloads 320
266 Processes and Application of Casting Simulation and Its Software’s
Authors: Surinder Pal, Ajay Gupta, Johny Khajuria
Abstract:
Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; Magma, for example, is best suited to crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient process-design tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.
Keywords: casting simulation software, simulation techniques, casting simulation, processes
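As a minimal illustration of the kind of mathematical model such packages solve, the sketch below integrates 1D transient heat conduction through a cooling casting wall with an explicit finite-difference scheme. All values (diffusivity, pour and mold temperatures, grid spacing) are illustrative assumptions, not inputs or outputs from AutoCAST or any commercial package.

```python
# Sketch: 1D transient heat conduction in a casting wall, explicit
# finite-difference scheme. One face is held at the mold temperature;
# the far face keeps the pour temperature. All values are assumed.

def cooling_profile(n_nodes=20, dx=0.005, dt=0.01, steps=2000,
                    t_pour=700.0, t_mold=25.0,
                    alpha=1.2e-5):  # thermal diffusivity, m^2/s (assumed)
    """Return node temperatures after `steps` explicit time steps."""
    temps = [t_pour] * n_nodes
    temps[0] = t_mold                  # mold-contact face, fixed temperature
    fo = alpha * dt / dx ** 2          # grid Fourier number
    assert fo < 0.5, "explicit scheme is unstable for Fo >= 0.5"
    for _ in range(steps):
        new = temps[:]
        for i in range(1, n_nodes - 1):
            # central-difference update of the heat equation
            new[i] = temps[i] + fo * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
        temps = new
    return temps

profile = cooling_profile()
# temperature rises monotonically away from the chilled (mold) face
```

A real package couples such a thermal solver with fluid flow, latent-heat release, and 3D geometry; this sketch only shows the core time-stepping idea.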
Procedia PDF Downloads 475
265 New Recombinant Netrin-a Protein of Lucilia Sericata Larvae by Bac to Bac Expression Vector System in Sf9 Insect Cell
Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard
Abstract:
Background: Maggot debridement therapy is an appropriate, effective, and controlled method using sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is an enzyme of the laminin family secreted from the salivary gland of L. sericata, with a central role in neural regeneration and angiogenesis. This study aimed to produce a new recombinant Netrin-A protein of L. sericata larvae by the baculovirus expression vector system (BEVS) in Sf9 cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was purified on Ni-NTA agarose. The protein was evaluated using SDS-PAGE and western blot, respectively. Finally, its concentration was calculated with the Bradford assay method. Results: The bacmid vector construct with Netrin-A was successfully generated and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein was 52 kDa, with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity without any prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to the DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the Netrin-A recombinant protein was calculated to be 48.8 μg/ml. It was confirmed that the gene characterized in our previous study codes for the L. sericata Netrin-A enzyme.
Conclusions: The recombinant Netrin-A, a protein secreted in L. sericata salivary glands, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.
Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9
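The Bradford quantification step mentioned above reduces to reading a sample absorbance off a least-squares standard curve. A minimal sketch follows; the BSA standard absorbances are hypothetical placeholders (only the final 48.8 μg/ml figure comes from the abstract).

```python
# Sketch: Bradford assay quantification via a linear standard curve.
# The standard-curve readings below are hypothetical, not the study's data.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + b; returns (m, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical BSA standards: concentration (ug/mL) vs absorbance at 595 nm
standards = [(0, 0.00), (25, 0.12), (50, 0.25), (75, 0.37), (100, 0.50)]
m, b = fit_line([c for c, _ in standards], [a for _, a in standards])

def concentration(absorbance):
    """Invert the standard curve for an unknown sample."""
    return (absorbance - b) / m

# On this hypothetical curve, a sample reading of 0.242 maps to about
# 48.8 ug/mL, the concentration the abstract reports for Netrin-A.
```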
Procedia PDF Downloads 93
264 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management
Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing
Abstract:
Faced with unprecedented challenges including aging assets, lack of maintenance budget, overtaxed and inefficient usage, and outcry for better service quality from society, today’s infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies nowadays, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization based on technologies like IoT, big data, and AI. Up to now, there are already numerous global digital twin initiatives, such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of the application and how institutional or industrial digital twin application processes will evolve in the future. To fill the gap left by the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. Firstly, an overview of current smart-city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating, and testing. Through in-depth literature review, interviews, and focus group meetings, the key domain areas are populated, defined, and iteratively tuned. Finally, a case study of several digital twin projects is conducted for self-verification.
The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application, covering the aspects of strategic intent, data, technology, governance, and stakeholders’ engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives, while level 4 is on the way; and (iii) more practice is still needed to refine the draft so that its key domain areas are mutually exclusive and collectively exhaustive.
Keywords: digital twin, infrastructure asset management, maturity model, smart city
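One way the five-level, five-domain assessment might be operationalized is sketched below. The domain names follow the abstract, but the scoring rule (overall maturity capped by the weakest domain) is an illustrative assumption, not the authors' published method.

```python
# Sketch of a multi-stakeholder maturity assessment over the five MM-DTIAM
# domain areas named in the abstract. The "overall = weakest domain" rule
# is an illustrative assumption only.

DOMAINS = ["strategic intent", "data", "technology",
           "governance", "stakeholder engagement"]
LEVELS = range(1, 6)   # 1 (initial) .. 5 (optimised)

def overall_maturity(scores: dict) -> int:
    """Overall maturity capped by the weakest domain (assumed rule)."""
    assert set(scores) == set(DOMAINS), "score every domain exactly once"
    assert all(s in LEVELS for s in scores.values()), "levels are 1..5"
    return min(scores.values())

example = {"strategic intent": 3, "data": 2, "technology": 4,
           "governance": 2, "stakeholder engagement": 3}
# the two level-2 domains cap this hypothetical project at level 2
```

A published model would likely weight domains and define per-level criteria; the point here is only that the five domains and five levels form a small, checkable rubric.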
Procedia PDF Downloads 157
263 Modeling of an Insulin Micropump
Authors: Ahmed Slami, Med El Amine Brixi Nigassa, Nassima Labdelli, Sofiane Soulimane, Arnaud Pothier
Abstract:
Many people suffer from diabetes, a disease marked by abnormal levels of sugar in the blood; 285 million people, 6.6% of the world’s adult population, had diabetes in 2010, according to the International Diabetes Federation. Insulin is a medicament that must be injected into the body, and the injection generally requires the patient to perform it manually. In many cases, however, the patient will be unable to inject the drug, since among the side effects of hyperglycemia is weakness of the whole body. Researchers have therefore designed medical devices that inject insulin autonomously by using micro-pumps. Many micro-pump concepts have been investigated during the last two decades for injecting molecules into the blood or the body. However, all these micro-pumps are intended for slow infusion of drugs (injection of a few microliters per minute). Now, the challenge is to develop micro-pumps for fast injections (1 microliter in 10 seconds) with accuracy of the order of a microliter. Recent studies have shown that only piezoelectric actuators can achieve this performance, and few systems at this microscopic scale have been presented. These reasons lead us to design new smart drug-injection microsystems. Many technological advances are still required, from the improvement of materials for these uses through their characterization and the modeling of the actuation mechanisms themselves. Moreover, the integration of the piezoelectric micro-pump into the microfluidic platform remains to be studied in order to explore and evaluate the performance of these new micro-devices. In this work, we propose a new micro-pump model based on piezoelectric actuation with a new design. Here, we use a finite element model built with Comsol software. Our device is composed of two pumping chambers, two diaphragms, and two actuators (piezoelectric disks). The actuators apply a mechanical force on the membranes in a periodic manner.
The membrane deformation drives the fluid pumping: the suction and discharge of the liquid. In this study, we present the modeling results as a function of device geometry, film thickness, and material properties, and we demonstrate that fast injection can be achieved. The results of these simulations provide a quantitative measure of the performance of our micro-pumps concerning spatial actuation and fluid rate, and allow optimization of the fabrication process in terms of materials and integration steps.
Keywords: COMSOL software, piezoelectric, micro-pump, microfluidic
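The fast-injection target (1 microliter in 10 seconds) can be sanity-checked with simple pump kinematics: an ideal reciprocating diaphragm pump delivers roughly stroke volume times drive frequency. The stroke volume and frequency below are illustrative assumptions, not values from the COMSOL model.

```python
# Back-of-the-envelope check of the 1 uL / 10 s target for an ideal
# diaphragm pump (no backflow, leakage, or compliance losses assumed).

def flow_rate_ul_per_s(stroke_volume_nl: float, frequency_hz: float) -> float:
    """Ideal volumetric flow rate in uL/s from stroke volume (nL) x frequency (Hz)."""
    return stroke_volume_nl * 1e-3 * frequency_hz

# e.g. an assumed 5 nL stroke at 20 Hz gives 0.1 uL/s,
# i.e. 1 uL delivered in 10 seconds
rate = flow_rate_ul_per_s(stroke_volume_nl=5.0, frequency_hz=20.0)
time_for_1_ul = 1.0 / rate   # seconds
```

The FEM model in the paper would refine this by computing the actual membrane deflection (hence stroke volume) from the piezo drive voltage and geometry.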
Procedia PDF Downloads 342
262 Removal of Chromium by UF5kDa Membrane: Its Characterization, Optimization of Parameters, and Evaluation of Coefficients
Authors: Bharti Verma, Chandrajit Balomajumder
Abstract:
Water pollution is escalating owing to industrialization and the indiscriminate discharge of toxic heavy metal ions from the semiconductor, electroplating, metallurgical, mining, chemical manufacturing, and tannery industries, among others. In the semiconductor industry, various kinds of chemicals are used in wafer preparation; fluoride, toxic solvents, heavy metals, dyes and salts, suspended solids, and chelating agents may be found in the wastewater effluent of semiconductor manufacturing. Likewise, the effluent from chrome plating in the electroplating industry contains high concentrations of chromium. Since Cr(VI) is highly toxic, exposure to it poses an acute health risk, and chronic exposure can even lead to mutagenesis and carcinogenesis. By contrast, Cr(III), which is naturally occurring, is much less toxic than Cr(VI). The discharge limits for hexavalent chromium and trivalent chromium are 0.05 mg/L and 5 mg/L, respectively. There are numerous methods, such as adsorption, chemical precipitation, membrane filtration, ion exchange, and electrochemical methods, for heavy metal removal. The present study focuses on the removal of chromium ions using a flat-sheet UF5kDa membrane. The ultrafiltration membrane process achieves a finer separation than microfiltration, so the separation may be influenced by both sieving and the Donnan effect. Ultrafiltration is a promising method for the rejection of heavy metals like chromium, fluoride, cadmium, nickel, arsenic, etc., from effluent water. The benefits of the ultrafiltration process are that the operation is quite simple, the removal efficiency is high compared to some other removal methods, and it is reliable. Polyamide membranes have been selected for the present study on the rejection of Cr(VI) from feed solution.
The objective of the current work is to examine the rejection of Cr(VI) from aqueous feed solutions by flat-sheet UF5kDa membranes under different parameters, such as pressure, feed concentration, and pH of the feed. The experiments revealed that the removal efficiency of Cr(VI) increases with increasing pressure. The effects of the pH of the feed solution and of the initial chromium dosage in the feed have also been studied. The membrane has been characterized by FTIR, SEM, and AFM before and after the run. The mass transfer coefficients have been estimated, and the membrane transport parameters have been calculated and found to be in good correlation with the applied model.
Keywords: heavy metal removal, membrane process, waste water treatment, ultrafiltration
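The rejection and mass-transfer quantities evaluated above can be sketched in code. The observed rejection R_obs = 1 − Cp/Cf and the film-theory correction for concentration polarisation are standard membrane-transport definitions; the example concentrations, flux, and mass transfer coefficient below are hypothetical placeholders, not the study's measured data.

```python
# Sketch: observed vs true rejection for a UF run, using standard
# film-theory relations. Example numbers are hypothetical.
import math

def observed_rejection(c_feed: float, c_permeate: float) -> float:
    """Observed rejection R_obs = 1 - Cp/Cf (concentrations in same units)."""
    return 1.0 - c_permeate / c_feed

def true_rejection(r_obs: float, flux: float, k: float) -> float:
    """Film-theory correction: (1-Rt)/Rt = (1-Ro)/Ro * exp(-Jv/k),
    where Jv is permeate flux and k the mass transfer coefficient."""
    return 1.0 / (1.0 + (1.0 / r_obs - 1.0) * math.exp(-flux / k))

# e.g. 10 mg/L Cr(VI) in the feed, 1.5 mg/L in the permeate
r_obs = observed_rejection(10.0, 1.5)                 # 0.85
r_true = true_rejection(r_obs, flux=1e-5, k=2e-5)     # > r_obs, as polarisation
                                                      # raises the wall concentration
```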
Procedia PDF Downloads 139
261 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅) BxCy Ceramics Produced by Spark Plasma Sintering
Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar
Abstract:
High entropy ceramic (HEC) materials are characterized by their compositional disorder, with atoms of different metallic elements occupying the cation positions and non-metal elements occupying the anion positions. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties, but only a few studies have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB₂ + 0.25 TiB₂), and (0.25 HfC + 0.25 ZrB₂ + 0.25 VB₂ + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000°C with a holding time of 10 min, a uniaxial pressure of 50 MPa, and an Ar atmosphere. The sintered specimens formed two HEC phases: a Zr-Hf-rich FCC phase and a Ti-V-rich HCP phase, and both phases contained all the metallic elements at 5-50 at%. Phase quantification analysis of the XRD data revealed that the molar amount of the hexagonal phase increased with the mole fraction of borides in the starting powders, whereas the cubic FCC phase increased with the carbide content of the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC₀.₅ and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₁.₅C₀.₂₅ had 94.74% and 88.56% relative density, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₀.₅C₀.₇₅ presented the highest relative density of 95.99%, with Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeds the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing less boron and more carbon presented superior properties even though the metallic composition in each phase was similar across the compositions investigated.
Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fractions, phase hardness, and density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amount of non-metallic elements in the composition, and not only on the molar ratio between metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on the optimization of the C:B ratio for further improvement of the properties of dual-phase high entropy ceramics.
Keywords: high-entropy ceramics, borides, carbides, dual-phase
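The relative densities quoted above are the ratio of the measured (e.g. Archimedes) density to a theoretical density of the two-phase ceramic. A minimal sketch, assuming a simple rule-of-mixtures theoretical density; the phase densities and volume fractions below are illustrative assumptions, not measured values from the study.

```python
# Sketch: relative density of a sintered dual-phase pellet from a
# rule-of-mixtures theoretical density. Example numbers are assumed.

def theoretical_density(fractions_and_densities):
    """Rule of mixtures over (volume_fraction, density g/cm^3) pairs."""
    total = sum(f for f, _ in fractions_and_densities)
    assert abs(total - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * rho for f, rho in fractions_and_densities)

def relative_density(measured: float, theoretical: float) -> float:
    """Relative density in percent."""
    return 100.0 * measured / theoretical

# e.g. an assumed 60 vol% carbide phase at 8.0 g/cm^3
# and 40 vol% boride phase at 6.0 g/cm^3
rho_th = theoretical_density([(0.6, 8.0), (0.4, 6.0)])   # 7.2 g/cm^3
rel = relative_density(6.91, rho_th)                     # ~96% dense
```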
Procedia PDF Downloads 172