Search results for: cash flow optimization
899 Demonstration Operation of Distributed Power Generation System Based on Carbonized Biomass Gasification
Authors: Kunio Yoshikawa, Ding Lu
Abstract:
Small-scale, distributed, low-cost biomass power generation technologies are in high demand in modern society, notably in the disaster areas of developed countries and the un-electrified rural areas of developing countries. This work aims to demonstrate the technical feasibility of a portable, ultra-small power generation system based on the gasification of carbonized wood pellets/briquettes. Our project is designed to enable independent energy production from various kinds of biomass resources in the open field. The whole process consists of two main stages: biomass and waste pretreatment, followed by gasification and power generation. The first stage includes carbonization and densification (briquetting or pelletization); the second includes updraft fixed-bed gasification of the carbonized pellets/briquettes, syngas purification, and power generation with an internal combustion gas engine. A combined pretreatment process, comprising carbonization without external energy input and densification, was adopted to deal with various biomass feedstocks. Carbonized pellets showed better gasification performance than carbonized briquettes or a mixture of the two. Results of a 100-hour continuous operation indicated that pelletization/briquetting of the carbonized fuel enabled stable operation of the updraft gasifier, provided no blockages were caused by the accumulation of tar. The cold gas efficiency and the carbon conversion during carbonized wood pellet gasification were about 49.2% and 70.5%, respectively, at an air equivalence ratio of around 0.32, and the corresponding overall efficiency of the gas engine was 20.3% during the stable stage. Moreover, the maximum output power was 21 kW at an air flow rate of 40 Nm³·h⁻¹. Therefore, the comprehensive system covering biomass carbonization, densification, gasification, syngas purification, and the engine system is feasible for portable, ultra-small power generation.
This work has been supported by the Innovative Science and Technology Initiative for Security (Ministry of Defence, Japan).
Keywords: biomass carbonization, densification, distributed power generation, gasification
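The two key performance figures quoted in this abstract (cold gas efficiency and air equivalence ratio) can be sketched as simple ratios. A minimal sketch follows; all numeric inputs in the usage example (syngas heating value, flow rates, stoichiometric air demand) are illustrative assumptions, not measurements from the study.

```python
# Hedged sketch of the two gasifier performance ratios named in the abstract.
# Input values used in any example are assumptions, not data from the study.

def cold_gas_efficiency(syngas_lhv_mj_nm3, syngas_flow_nm3_h,
                        fuel_lhv_mj_kg, fuel_feed_kg_h):
    """Chemical energy leaving in the cold syngas / chemical energy fed as fuel."""
    return (syngas_lhv_mj_nm3 * syngas_flow_nm3_h) / (fuel_lhv_mj_kg * fuel_feed_kg_h)

def air_equivalence_ratio(air_flow_nm3_h, stoich_air_nm3_kg, fuel_feed_kg_h):
    """Air actually supplied / air required for complete combustion of the fuel."""
    return air_flow_nm3_h / (stoich_air_nm3_kg * fuel_feed_kg_h)
```

For example, an assumed 40 Nm³/h of air against an assumed stoichiometric demand of 6.25 Nm³ per kg of fuel at 20 kg/h gives the 0.32 equivalence ratio reported above.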
Procedia PDF Downloads 154
898 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites
Authors: Sarra Haouala, Issam Doghri
Abstract:
In this work, a multiscale computational strategy is proposed for the analysis of structures described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework combines mean-field space homogenization, based on the generalized incrementally affine formulation for VE-VP composites, with the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales, and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and solved once for each macro time step, whereas the latter is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the corresponding effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic prediction and plastic correction.
The proposal is implemented for an extended Mori-Tanaka scheme, and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.
Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization
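The micro/macro-chronological decomposition described in this abstract can be sketched generically. The following is a standard sketch of asymptotic time homogenization under two time scales, not the paper's exact equations:

```latex
% Slow (macro-chronological) time t and fast (micro-chronological) time
% tau = t / xi, with xi << 1 the ratio of the loading period to the
% observation time:
\[
  \tau = \frac{t}{\xi}, \qquad 0 < \xi \ll 1 .
\]
% Each unknown field u is sought as an asymptotic expansion in xi:
\[
  u^{\xi}(t) = u_0(t,\tau) + \xi\, u_1(t,\tau) + \xi^2 u_2(t,\tau) + \dots
\]
% so that the time derivative splits into slow and fast contributions:
\[
  \frac{d}{dt} = \frac{\partial}{\partial t}
               + \frac{1}{\xi}\,\frac{\partial}{\partial \tau}.
\]
% Collecting powers of xi in the governing equations then yields the coupled
% pair of problems named above: a fast-scale problem in tau, solved per macro
% time step, and a slow-scale problem in t, solved iteratively.
```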
Procedia PDF Downloads 368
897 Poly-ε-Caprolactone Nanofibers with Synthetic Growth Factor Enriched Liposomes as Controlled Drug Delivery System
Authors: Vera Sovkova, Andrea Mickova, Matej Buzgo, Karolina Vocetkova, Eva Filova, Evzen Amler
Abstract:
PCL (poly-ε-caprolactone) nanofibrous scaffolds with adhered liposomes were prepared and tested as a possible drug delivery system for various synthetic growth factors. TGFβ, bFGF, and IGF-I have been shown to increase hMSC (human mesenchymal stem cell) proliferation and to induce hMSC differentiation. Functionalized PCL nanofibers were prepared with synthetic growth factors encapsulated in liposomes adhered to them at three different concentrations. Other samples contained PCL nanofibers with adhered, free synthetic growth factors. Medium free of synthetic growth factors served as a control. The interaction of liposomes with the PCL nanofibers was visualized by SEM, and the release kinetics were determined by ELISA testing. The potential of liposomes immobilized on the biodegradable scaffolds as a delivery system for synthetic growth factors, and as a suitable system for MSC adhesion, proliferation and differentiation in vitro, was evaluated by MTS assay, dsDNA quantification, confocal microscopy, flow cytometry and real-time PCR. The results showed that the growth factors adhered to the PCL nanofibers stimulated cell proliferation mainly up to day 11, after which their effect diminished. By contrast, the release of the lowest concentration of growth factors from liposomes resulted in gradual proliferation of MSCs throughout the experiment. Moreover, liposomes, as well as free growth factors, stimulated type II collagen production, which was confirmed by immunohistochemical staining using a monoclonal antibody against type II collagen. The results of this study indicate that growth factor enriched liposomes adhered to the surface of PCL nanofibers could be useful as a drug delivery instrument for short-timescale applications and, combined with nanofiber scaffolds, could promote local and persistent delivery while mimicking the local microenvironment.
This work was supported by project LO1508 from the Ministry of Education, Youth and Sports of the Czech Republic.
Keywords: drug delivery, growth factors, hMSC, liposomes, nanofibres
Procedia PDF Downloads 287
896 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites
Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze
Abstract:
Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the processes of compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, which promotes the formation of pieces in an ordinary crystalline state rather than in the desired nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which together give an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to fully densify a variety of ceramic systems effectively, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying sintering temperature, holding time and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites.
Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. Investigation of the structure, physical-chemical, mechanical and performance properties of the elaborated composite materials was performed. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.
Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering
Procedia PDF Downloads 375
895 Reduction of Biofilm Formation in Closed Circuit Cooling Towers
Authors: Irfan Turetgen
Abstract:
Closed-circuit cooling towers are cooling units that operate according to the indirect cooling principle. Unlike an open-loop cooling tower, in place of the fill material they contain a closed-loop water-operated heat exchanger, whose main purpose is to prevent the cooled process water from contacting the external environment. The hot process water is cooled as it passes through the pipe by the air flow and the circulation water of the tower. Closed-circuit towers are now more commonly used than open-loop cooling towers, which provide cooling with plastic fill material. As with all surfaces in contact with water, a biofilm forms on the outer surface of the pipe. Although biofilm formation on plastic surfaces in open-loop cooling towers has been studied very well, no studies were found on the biofilm layer formed on the heat exchangers of closed-circuit towers. In the present study, natural biofilm formation was observed on the heat exchangers of a closed-loop tower for 6 months. At the same time, test surfaces were coated with nano-silica, which is known to reduce biofilm formation, and the two different surfaces were compared in terms of biofilm formation potential. Test surfaces were placed into a biofilm reactor along with untreated control coupons for up to a 6-month period for biofilm maturation. Natural bacterial communities were monitored in order to mimic real-life conditions. Surfaces were analyzed in situ monthly for their microbial load using epifluorescence microscopy. Wettability is known to play a key role in biofilm formation on surfaces, because surface properties affect bacterial adhesion. Results showed that surface conditioning with nano-silica significantly reduced biofilm formation (by up to 90%).
The coating process is facile and low-cost, and prepares a hydrophobic surface without any expensive compounds or methods.
Keywords: biofilms, cooling towers, fill material, nano silica
Procedia PDF Downloads 127
894 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed and undirected network. This paper improves the control protocol in three respects. First, for the purpose of improving both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. Second, in practical engineering, simplified models such as the single-integrator and double-integrator are unrealistic, and previous algorithms require the leader's acceleration to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which dispenses with the leader's acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Third, existing algorithms can obtain a stable synchronous system; however, the obtained stable system may encounter disturbances that destroy the synchronization. To address this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a sufficiently large disturbance, which illustrates the good synchronization control accuracy.
Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization
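The leader-following idea in this abstract can be illustrated with a much-simplified sketch: single-integrator agents under a basic pinned consensus law. This is a generic textbook protocol under assumed dynamics, not the paper's actual weighted, observer-based, fault-tolerant scheme.

```python
import numpy as np

# Hedged sketch of leader-following consensus for motor speeds, with
# simplified single-integrator agents. The paper's protocol (speed weights,
# observers, fault tolerance, switching) is far richer than this.

def simulate(adjacency, pinned, omega0, omega_ref, gain=1.0, dt=0.01, steps=5000):
    A = np.asarray(adjacency, dtype=float)   # undirected network among motors
    b = np.asarray(pinned, dtype=float)      # 1 if a motor sees the leader
    omega = np.asarray(omega0, dtype=float)  # initial motor speeds
    for _ in range(steps):
        # consensus term: disagreement with network neighbours (-L @ omega)
        consensus = A @ omega - A.sum(axis=1) * omega
        # pinning term: disagreement with the (constant) leader speed
        pinning = b * (omega_ref - omega)
        omega = omega + dt * gain * (consensus + pinning)
    return omega
```

With a ring of four motors and only the first one pinned to a 100 rad/s reference, all speeds converge to the reference, since the pinned Laplacian is positive definite for a connected graph.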
Procedia PDF Downloads 123
893 Molecular Mechanisms of Lipid Metabolism and Obesity Modulation by Caspase-1/11 and NLRP3 Inflammasome in Mice
Authors: Lívia Pimentel Sant'ana Dourado, Raquel Das Neves Almeida, Luís Henrique Costa Corrêa Neto, Nayara Soares, Kelly Grace Magalhães
Abstract:
Introduction: Obesity and high-fat diet intake have a crucial impact on immune cells and the inflammatory profile, highlighting an emerging realization that obesity is an inflammatory disease. In the present work, we aimed to characterize the role of caspase-1/11 and the NLRP3 inflammasome in the establishment of obesity in mice and in the modulation of inflammatory lipid metabolism induced by high-fat diet intake. Methods and results: Wild type, caspase-1/11 knockout and NLRP3 knockout mice were fed a standard fat diet (SFD) or high-fat diet (HFD) for 90 days. The animals were weighed weekly to monitor weight gain. After 90 days, the blood, peritoneal lavage cells, heart and liver were collected from the mice studied here. Cytokines in serum were measured by ELISA and analyzed by spectrophotometry. Expression of the lipid antigen presentation molecule CD1d, reactive oxygen species (ROS) generation and lipid droplet biogenesis were analyzed in cells from the mouse peritoneal cavity by flow cytometry. Liver histopathology was performed for morphological evaluation of the organ. The absence of caspase-1/11, but not NLRP3, in mice fed the HFD favored weight gain, increased liver size, and induced the development of hepatic steatosis and IL-12 secretion compared to mice fed the SFD. In addition, caspase-1/11 knockout mice fed the HFD presented increased CD1d expression, as well as higher levels of lipid droplet biogenesis and ROS generation, compared to wild type mice also fed the HFD. Conclusion: Our data suggest that caspase-1/11 knockout mice have greater susceptibility to obesity as well as increased activation of lipid metabolism and inflammatory markers.
Keywords: caspase-1, caspase-11, inflammasome, obesity, lipids
Procedia PDF Downloads 319
892 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
Potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning. They verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. Forums offered were hardly used or not used at all, because the barrier to asking a question publicly is too high and students prefer private channels for communication. This is accompanied by the disadvantage that interaction occurs only among people who already know each other. Making new contacts is not fostered in the blended learning courses.
Students consider optimization a task for the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly so that they establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on subject area and course. Future studies could tie in here with a course-specific analysis.
Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
Procedia PDF Downloads 70
891 Monitoring the Thin Film Formation of Carrageenan and PNIPAm Microgels
Authors: Selim Kara, Ertan Arda, Fahrettin Dolastir, Önder Pekcan
Abstract:
Biomaterials and thin film coatings play a fundamental role in the medical, food and pharmaceutical industries. Carrageenan is a linear sulfated polysaccharide extracted from algae and seaweeds. To date, such biomaterials have been used in many smart drug delivery systems due to their biocompatibility and antimicrobial activity. Poly(N-isopropylacrylamide) (PNIPAm) gels and copolymers have also been used in medical applications. PNIPAm shows a lower critical solution temperature (LCST) at about 32-34 °C, which is very close to the human body temperature. Below and above the LCST, PNIPAm gels exhibit distinct phase transitions between swollen and collapsed states. A special class of gels are microgels, which can react to environmental changes significantly faster than macroscopic gels due to their small size. The quartz crystal microbalance (QCM) measurement technique is one of the attractive techniques that has been used for monitoring the thin-film formation process. A sensitive QCM system was designed to detect a 0.1 Hz difference in resonance frequency and a 10⁻⁷ change in energy dissipation, which are measures of the deposited mass and the film rigidity, respectively. PNIPAm microgels with diameters around a few hundred nanometers in water were produced via a precipitation polymerization process. 5 MHz quartz crystals with functionalized gold surfaces were used for the deposition of the carrageenan molecules and microgels from solutions slowly pumped through a flow cell. Interactions between charged carrageenan and microgel particles were monitored during the formation of the film layers, and the Sauerbrey masses of the deposited films were calculated. The critical phase transition temperatures around the LCST were detected during the heating and cooling cycles.
It was shown that it is possible to monitor the interactions between PNIPAm microgels and biopolymer molecules, and it is also possible to specify the critical phase transition temperatures, by using a QCM system.
Keywords: carrageenan, phase transitions, PNIPAm microgels, quartz crystal microbalance (QCM)
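The Sauerbrey mass mentioned in this abstract follows from the linear relation between frequency shift and areal mass of a thin rigid film. A minimal sketch follows; the sensitivity constant is the textbook value for a 5 MHz AT-cut crystal and is an assumption, not a number reported in the abstract.

```python
# Hedged sketch of the Sauerbrey mass calculation. C below is the standard
# sensitivity constant for a 5 MHz AT-cut quartz crystal (an assumption,
# not a value taken from the study).

SAUERBREY_C = 17.7  # ng / (cm^2 * Hz), for a 5 MHz AT-cut crystal

def sauerbrey_mass(delta_f_hz, c=SAUERBREY_C):
    """Areal mass (ng/cm^2) of a thin rigid film from the frequency shift.

    Deposition lowers the resonance frequency, so delta_f_hz is negative
    and the returned mass is positive. Valid only for thin, rigid films;
    soft microgel layers also dissipate energy, which is why the system
    above tracks dissipation as well.
    """
    return -c * delta_f_hz
```

A 10 Hz drop in resonance frequency thus corresponds to roughly 177 ng/cm² of deposited film under the thin-rigid-film assumption.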
Procedia PDF Downloads 229
890 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may as well be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors of the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
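The regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) can be illustrated on a toy problem. The sketch below uses a small *linear* forward operator F and a Tikhonov penalty R(q) = α‖q‖², minimized by plain gradient descent; the real DOT forward map is a nonlinear PDE solve and the paper's R is learned, so everything here (F, α, step size) is an illustrative assumption.

```python
import numpy as np

# Toy sketch of regularized inversion: minimize ||F q - y||^2 + alpha ||q||^2
# by gradient descent. A stand-in for the paper's setting, where the forward
# map is a PDE solve and the regularizer can be learned by a network.

def reconstruct(F, y, alpha=0.1, lr=0.01, iters=5000):
    F = np.asarray(F, dtype=float)
    y = np.asarray(y, dtype=float)
    q = np.zeros(F.shape[1])
    for _ in range(iters):
        # gradient of the data-fidelity term plus the Tikhonov penalty R(q)
        grad = 2.0 * F.T @ (F @ q - y) + 2.0 * alpha * q
        q -= lr * grad
    return q
```

For this quadratic loss the minimizer has the closed form (FᵀF + αI)⁻¹Fᵀy, which the iteration approaches; note how α pulls the estimate away from the unregularized solution, the role R(q) plays against ill-conditioning.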
Procedia PDF Downloads 74
889 Development of Fixture for Pipe to Pipe Friction Stir Welding of Dissimilar Materials
Authors: Aashutosh A. Tadse, Kush Mehta, Hardik Vyas
Abstract:
Friction stir welding (FSW) is a process in which an FSW tool produces friction heat, penetrates through the junction and, upon rotation, carries out the weld by exchanging material between the two metals being welded. It involves holding the workpieces rigidly enough to bear the force of the tool moving across the junction to carry out a successful weld. A weld with flat plates as workpieces has a quite simple geometry in terms of the fixture holding them. In the case of FSW of pipes, the pipes need to be held firmly with chucks and jaws according to the diameter of the pipes being welded, and the FSW tool is then revolved around the pipes to carry out the weld. Such a machine requires a larger area and becomes more costly because of the setup. To carry out the weld on a milling machine instead, the newly designed fixture must be set up on the table of the milling machine and must facilitate rotation of the pipes, with a motor shafted to one end of the fixture and the other end rotated automatically by the rotating jaws gripping the pipes tightly. The setup has tapered cones as jaws that go into the pipes, holding them with the grip provided by their knurled surfaces. In this process, the pipes rotate while the stationary rotating tool penetrates into the junction. FSW of pipes in this process requires a very low pipe RPM to carry out a fine weld, and the speed will change with every combination of material and pipe diameter, so a variable-speed motor serves the purpose. To withstand the force of the tool, a diameter-specific attachment to the shaft is provided that resists the flow of material towards the center during the weld. The welded joint thus produced will meet the required standards and specifications. Current industrial requirements call for space-efficient, cost-friendly and more generalized fixtures and machine setups.
The proposed design addresses each of these factors.
Keywords: force of tool, friction stir welding, milling machine, rotation of pipes, tapered cones
Procedia PDF Downloads 112
888 Extracorporeal CO2 Removal (ECCO2R): An Option for the Treatment of Refractory Hypercapnic Respiratory Failure
Authors: Shweh Fern Loo, Jun Yin Ong, Than Zaw Oo
Abstract:
Acute respiratory distress syndrome (ARDS) is a common serious condition of bilateral lung infiltrates that develops secondary to various underlying conditions such as diseases or injuries. ARDS with severe hypercapnia is associated with higher ICU mortality and morbidity. Venovenous extracorporeal membrane oxygenation (VV-ECMO) support has been established to avert life-threatening hypoxemia and hypercapnic respiratory failure despite optimal conventional mechanical ventilation. However, VV-ECMO is relatively inadvisable in particular groups of patients, especially those with multi-organ failure, advanced age, hemorrhagic complications or irreversible central nervous system pathology. We present the case of a 79-year-old Chinese lady without any pre-existing lung disease admitted to our hospital intensive care unit (ICU) after acute presentation of breathlessness and chest pain. After an extensive workup, she was diagnosed with rapidly progressing acute interstitial pneumonia with ARDS and hypercapnic respiratory failure. The patient received lung-protective mechanical ventilation and neuromuscular blockade therapy as per clinical guidelines. However, the hypercapnic respiratory failure was refractory, and she was deemed not a good candidate for VV-ECMO support given her advanced age and high vasopressor requirements from shock. Alternative therapy with extracorporeal CO2 removal (ECCO2R) was considered and implemented. The patient received 12 days of ECCO2R paired with muscle paralysis, optimization of lung-protective mechanical ventilation and dialysis. Unfortunately, the patient still had refractory hypercapnic respiratory failure with dual vasopressor support despite prolonged therapy. Given failed and futile medical treatment, the family opted for withdrawal of care and a conservative, comfort-care approach, which led to her demise. The effectiveness of extracorporeal CO2 removal may depend on disease burden, involvement and severity.
There are insufficient data to make strong recommendations about the benefit-risk ratio of ECCO2R devices, and further studies would be required. Nonetheless, ECCO2R can be considered an alternative treatment for patients with refractory hypercapnic respiratory failure who are unsuitable for initiation of venovenous ECMO.
Keywords: extracorporeal CO2 removal (ECCO2R), acute respiratory distress syndrome (ARDS), acute interstitial pneumonia (AIP), hypercapnic respiratory failure
Procedia PDF Downloads 64
887 Study Variation of Blade Angle on the Performance of the Undershot Waterwheel on the Pico Scale
Authors: Warjito, Kevin Geraldo, Budiarso, Muhammad Mizan, Rafi Adhi Pranata, Farhan Rizqi Syahnakri
Abstract:
According to data from 2021, the number of households in Indonesia that have access to on-grid electricity is claimed to have reached 99.28%, which means that around 0.7% of Indonesia's population (1.95 million people) still has no proper access to electricity, and 38.1% of them are in remote areas of Nusa Tenggara Timur. Remote areas are classified as areas with a small population of 30 to 60 families that have limited infrastructure, scarce access to electricity and clean water, a relatively weak economy, and little access to technological innovation, and whose residents earn a living mostly as farmers or fishermen. These people still need electricity but cannot afford the high cost of electricity from national on-grid sources. To overcome this, a hydroelectric power plant driven by a pico-hydro turbine with an undershot water wheel is proposed as a suitable technology, because the design, materials and installation of this turbine are believed to be easier (i.e., operation and maintenance) and cheaper (i.e., investment and operating costs) than any other type. A comparative study of the blade angle of the undershot water wheel is discussed comprehensively. This study looks into which variation of curved blades on an undershot water wheel produces the maximum hydraulic efficiency. In this study, the blade angles were varied as 180°, 160°, and 140°. Two methods of analysis are used: analytical and numerical. The analytical method is based on calculations of the torque and rotational speed of the turbine, which are used to obtain the input and output power of the turbine. The numerical method uses the ANSYS application to simulate the flow during the collision with the designed turbine blades.
It can be concluded, based on the analytical and numerical methods, that the best angle for the blade is 140 ̊, with an efficiency of 43.52% for the analytical method and 37.15% for the numerical method.Keywords: pico hydro, undershot waterwheel, blade angle, computational fluid dynamics
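The analytical method described in this abstract (torque and rotational speed giving output power, compared against the hydraulic input) can be sketched as follows. The formulas are generic undershot-wheel relations under an assumed purely kinetic input; all numbers used in any example are illustrative, not data from the study.

```python
import math

# Hedged sketch of the analytical efficiency estimate: shaft output power is
# torque times angular speed, and the hydraulic input of an undershot wheel
# is taken here as the kinetic power of the incoming stream (an assumption).

RHO_WATER = 1000.0  # kg/m^3

def shaft_power(torque_nm, rpm):
    """Mechanical output power (W) from torque (N*m) and rotational speed (rpm)."""
    return torque_nm * rpm * 2.0 * math.pi / 60.0

def stream_kinetic_power(flow_m3_s, velocity_m_s, rho=RHO_WATER):
    """Kinetic power (W) carried by the stream: 0.5 * rho * Q * v^2."""
    return 0.5 * rho * flow_m3_s * velocity_m_s ** 2

def hydraulic_efficiency(torque_nm, rpm, flow_m3_s, velocity_m_s):
    """Ratio of shaft output power to hydraulic input power."""
    return shaft_power(torque_nm, rpm) / stream_kinetic_power(flow_m3_s, velocity_m_s)
```

For instance, an assumed 10 N·m at 60 rpm against an assumed 0.05 m³/s stream at 2 m/s gives an efficiency of about 63%, illustrating how the measured torque and speed feed into the reported efficiency figures.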
Procedia PDF Downloads 76
886 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion
Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee
Abstract:
Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall off. Rapid dose fall off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for others the conformity is simply unacceptable with DCA, and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and the dose indices are compared. We describe a deviation index (DI), a measure of the deviation of the target shape from a sphere, and test its usefulness for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated on the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup, and treatment planning was done with Brainlab iPlan RT Dose (version 4.5.4). DCA plans were found to give better dose fall off in terms of R50% (R50%(DCA) = 4.75 vs. R50%(IMRT) = 5.242), while IMRT plans had better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs. TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV to the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI implies a more irregular PTV. To study the usefulness of DI, 23 cases were chosen with PTV volumes ranging from 1.149 cc to 29.83 cc and DI ranging from 1.059 to 3.202. For each case, we prepared a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan.
Then the TVR and R50% of each case were compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% of the DCA and IMRT plans were examined. The conformity of the IMRT plans was better than that of the DCA plans, with the majority of the TVR(DCA)/TVR(IMRT) ratios > 1 (values ranging from 0.877 to 1.538), while the dose fall off was better for the DCA plans, with the majority of the R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, we can expect the conformity of DCA plans to become unacceptable, and IMRT will be the technique of choice.
Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy
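The deviation index defined above can be computed once the PTV volume and its 1 mm shell volume are known from the planning system. The sketch below assumes the shell is grown 1 mm outward from the surface (the abstract does not specify outward, inward, or centred growth), and the example numbers are illustrative, not from the study.

```python
import math

def sphere_shell_volume(ptv_volume_cc, shell_mm=1.0):
    """Volume (cc) of a shell grown 1 mm outward from a sphere whose volume
    equals the PTV volume. Outward growth is an assumption here; the
    definition could equally use an inward or centred shell."""
    # Radius in mm of the equal-volume sphere (1 cc = 1000 mm^3)
    r = (3.0 * ptv_volume_cc * 1000.0 / (4.0 * math.pi)) ** (1.0 / 3.0)
    shell_mm3 = (4.0 / 3.0) * math.pi * ((r + shell_mm) ** 3 - r ** 3)
    return shell_mm3 / 1000.0  # back to cc

def deviation_index(ptv_shell_volume_cc, ptv_volume_cc):
    """DI = (PTV 1 mm shell volume) / (equal-volume sphere 1 mm shell volume).
    DI ~ 1 for a near-spherical PTV; larger DI means a more irregular shape."""
    return ptv_shell_volume_cc / sphere_shell_volume(ptv_volume_cc)

# Illustrative numbers: a 10 cc PTV whose measured 1 mm shell is 6.0 cc
print(f"DI = {deviation_index(6.0, 10.0):.2f}")
```

For a thin shell, the shell volume is approximately the surface area times the thickness, so DI effectively compares the PTV's surface area with that of a sphere of the same volume, which is why it grows with target irregularity.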
Procedia PDF Downloads 239
885 The Community Structure of Fish and its Correlation with Mangrove Forest Litter Production in Panjang Island, Banten Bay, Indonesia
Authors: Meilisha Putri Pertiwi, Mufti Petala Patria
Abstract:
Mangrove forests are often categorized as among the most productive ecosystems in tropical waters, with the highest carbon storage of all forest types. Mangrove-derived organic matter shapes the food web of fish and invertebrates. In Indonesian tropical waters, 80% of the commercial fish caught in coastal areas are closely tied to the food web of the mangrove forest ecosystem. Based on previous research in Panjang Island, Bojonegara, Banten, Indonesia, mangrove litterfall removed to the sea water amounted to 9.023 g/m³/s across two stations (west station, 5.169 g/m³/s, and north station, 3.854 g/m³/s). The vegetation was dominated by Rhizophora apiculata and Rhizophora stylosa. The C content is the highest (27.303% and 30.373%), compared with N (0.427% and 0.35%) and P (0.19% and 0.143%). The research also aims to determine the diversity of fish inhabiting the mangrove forest. Fish are sampled by push net; the fish caught are collected into plastic bags, their total length and weight are measured, and individual and total counts are made. Meanwhile, three modified pipes (1 m long, 5 inches in diameter, with one closed end facing the river, covered with nylon cloth) are set in the water channel connecting the mangrove forest and the sea at each station. They are placed for 1 hour at low tide, and the speed of the water flow and the volume of the modified pipes are then calculated. The fish and mangrove litter will be weighed for wet weight and dry weight and analyzed for C, N, and P content. Sampling will be conducted three times, each at full moon. The salinity, temperature, turbidity, pH, DO, and sediment of the mangrove forest will be measured as well. This research will provide information on the fish diversity in the mangrove forest, the mangrove litterfall removed to the sea water, the composition of the sediment, the elemental content (C, N, P) of fish and mangrove litter, and the correlation of elemental absorption between fish and mangrove litter.
The data will be used for fish and mangrove ecosystem conservation.
Keywords: fish diversity, mangrove forest, mangrove litter, carbon element, nitrogen element, phosphorus element, conservation
Procedia PDF Downloads 484
884 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates how indispensable HPC is in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 362
883 The Use of Space Syntax in Urban Transportation Planning and Evaluation: Limits and Potentials
Authors: Chuan Yang, Jing Bie, Yueh-Lung Lin, Zhong Wang
Abstract:
Transportation planning is an integrative academic discipline combining research and practice, with the aim of improving mobility and accessibility at both the strategic policy-making level and the operational dimension of practical planning. Transportation planning can build the linkage between traffic and social development goals, for instance economic benefits and environmental sustainability. Transportation planning analysis and evaluation tend to apply empirical quantitative approaches under the guidance of fundamental principles such as efficiency, equity, safety, and sustainability. Space syntax theory has been applied to the spatial distribution of pedestrian movement and to vehicle flow analysis; however, little has been written about its application in transportation planning. Correlations between the variables of space syntax analysis and real-world observations have shown that urban configuration has a significant effect on urban dynamics, for instance land value, building density, traffic, and crime. This research aims to explore the potential of applying space syntax methodology to evaluate urban transportation planning by studying the effects of urban configuration on cities' transportation performance. Through a literature review, this paper discusses the effects that urban configurations with different degrees of integration and accessibility have on three elementary components of transportation planning - transportation efficiency, transportation safety, and economic agglomeration development - via intensifying and stabilising the natural movements generated by the street network. The potential and limits of space syntax theory for studying the performance of urban transportation and transportation planning are then discussed.
In practical terms, this research will help future studies explore the effects of urban design on transportation performance, and identify which patterns of urban street networks allow for the most efficient and safe transportation performance with the highest economic benefits.
Keywords: transportation planning, space syntax, economic agglomeration, transportation efficiency, transportation safety
Procedia PDF Downloads 194
882 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor
Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa
Abstract:
Piezoresistive pressure sensors were among the first microelectromechanical system (MEMS) devices and still show significant growth, prompted by advancements in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature can be considered the main environmental condition affecting system performance. The study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, a study on the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor is discussed. We have optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance. To be more precise, in order to reduce the temperature drift, a high doping concentration is needed. It is well known that the Wheatstone bridge in a pressure sensor is supplied with either a constant voltage or a constant current. With a constant voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas with a constant current supply the thermal drift can be compensated directly by the bridge itself. But it is also beneficial to compensate the temperature coefficient of the piezoresistors so as to further reduce the temperature drift. With a current supply, the TCS depends on both the TCπ and the TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose an appropriate doping concentration at which the two cancel each other. An exact cancellation of the TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications.
Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that temperature effects can be minimized or eliminated. This paper describes the optimum doping levels for the piezoresistors at which the TCS of the pressure transducers will be zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also carried out. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are therefore achieved, so the thermal effects are considerably reduced. Finally, we have calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS
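The cancellation argument above can be sketched numerically: with a constant-current supply, TCS is, to first order, the sum TCπ + TCR, so one scans the doping concentration for the zero crossing. The curves below are hypothetical monotonic trends chosen only to illustrate the method; they are not measured data from this work.

```python
import numpy as np

# Hypothetical, illustrative trends only (not measured data): for p-type
# piezoresistors, TCR rises with doping while the (negative) temperature
# coefficient of the piezoresistive coefficient, TCpi, shrinks in magnitude.
doping = np.logspace(17, 20, 200)                  # cm^-3
tcr = 500.0 + 400.0 * np.log10(doping / 1e17)      # ppm/°C, increasing
tcpi = -2500.0 + 300.0 * np.log10(doping / 1e17)   # ppm/°C, negative

# With a constant-current supply, to first order: TCS ~ TCpi + TCR
tcs = tcpi + tcr

# Doping level where TCS crosses zero, i.e. TCR and TCpi cancel
i = np.argmin(np.abs(tcs))
print(f"TCS ~ 0 near N = {doping[i]:.2e} cm^-3 (TCR = {tcr[i]:.0f} ppm/°C)")
```

In practice the two curves come from measurements on test resistors at several implant doses, and, as the abstract notes, the crossing is approached adjustably rather than hit exactly.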
Procedia PDF Downloads 177
881 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were then combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency, and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments showed that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
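The thinning step that reduces the persistent water mask to centre lines can be sketched with a classic Zhang-Suen thinning pass. The abstract does not name the particular algorithm used, so this pure-Python version and the toy water mask are illustrative only.

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning: erode a binary mask (1 = water) down to
    1-pixel-wide centre lines while preserving connectivity."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8-neighbours clockwise from north (P2..P9)
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                   # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and (p[0]*p[2]*p[4] or p[2]*p[4]*p[6]):
                        continue
                    if step == 1 and (p[0]*p[2]*p[6] or p[0]*p[4]*p[6]):
                        continue
                    to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

# Toy persistent-water mask standing in for the combined Sentinel-1 product
water = np.zeros((20, 20), dtype=np.uint8)
water[9:12, :] = 1                       # a 3-pixel-wide "river"
centreline = zhang_suen_thin(water)
print(centreline.sum(), "centre-line pixels from", water.sum(), "water pixels")
```

The resulting 1-pixel centre lines would then be vectorised into polylines and assembled into the topologically structured network described in the abstract.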
Procedia PDF Downloads 137
880 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays; the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench with onboard sensors that estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and a guidance profile provided by the industrial partner.
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
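The receding-horizon idea behind the MPC formulation can be illustrated on a 1-D double integrator. This unconstrained least-squares sketch is a stand-in only; the paper's actual controller couples translation and rotation via dual quaternions and solves a constrained convex program with thruster and output limits.

```python
import numpy as np

# 1-D double integrator: x = [position, velocity], control = acceleration
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x0, horizon=10, r=0.1):
    """One receding-horizon step: minimise sum ||x_k||^2 + r*||u_k||^2 over
    the horizon (batch least squares) and return only the first input.
    No thrust or output constraints here; the paper's full problem adds
    those and solves a constrained convex program instead."""
    n = 2
    # Stacked prediction: X = F x0 + G U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((n * horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[n*i:n*i+n, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
    # Normal equations of min ||F x0 + G U||^2 + r ||U||^2
    H = G.T @ G + r * np.eye(horizon)
    U = np.linalg.solve(H, -G.T @ (F @ x0))
    return U[0]

# Drive an initial offset back toward the target along one axis
x = np.array([10.0, 0.0])
for _ in range(30):
    u = mpc_step(x)
    x = A @ x + B.flatten() * u
print(f"final position error: {abs(x[0]):.3f} m")
```

Re-solving the optimisation at every step with the latest state estimate is what gives MPC its robustness to the sensor errors and delays discussed above.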
Procedia PDF Downloads 145
879 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithm. The well-trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective agent path-planning strategy and (ii) knowledge understanding and prediction (SA). The path-planning approach, inspired by the animal mode of foraging and using a Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous-waypoints assessment. It improves the performance of the ordinary Lévy flight by saving agents' resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results show effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that even limited mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions.
While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
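The Lévy-flight strategy above samples step lengths from a heavy-tailed power law, so agents mix many short local moves with rare long relocations. The exponent and minimum step in this sketch are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def levy_flight(n_steps, mu=1.8, step_min=1.0):
    """2-D Lévy-flight search path: step lengths follow a power law
    p(l) ~ l^(-mu) (many short moves, occasional long relocations),
    headings are uniform. mu and step_min are illustrative assumptions."""
    # Inverse-transform sampling of a Pareto-type law with exponent mu
    u = rng.random(n_steps)
    lengths = step_min * u ** (-1.0 / (mu - 1.0))
    headings = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack([lengths * np.cos(headings),
                             lengths * np.sin(headings)])
    return np.cumsum(steps, axis=0)      # waypoints visited by the agent

path = levy_flight(500)
print("path shape:", path.shape)
```

A k-previous-waypoints check, as introduced above, would then reject a newly sampled waypoint that falls too close to any of the last k visited points, avoiding redundant search.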
Procedia PDF Downloads 142
878 Comparison and Effectiveness of Cranial Electrical Stimulation Treatment, Brain Training and Their Combination on Language and Verbal Fluency of Patients with Mild Cognitive Impairment: A Single Subject Design
Authors: Firoozeh Ghazanfari, Kourosh Amraei, Parisa Poorabadi
Abstract:
Mild cognitive impairment is a neurocognitive disorder involving decline in cognitive functions beyond what is expected with age, but not so severe that it affects daily activities. This study aimed to investigate and compare the effectiveness of treatment with cranial electrical stimulation, brain training, and their combination on the language and verbal fluency of elderly patients with mild cognitive impairment. This is a single-subject study with a comparative intervention design. Four patients with a definitive diagnosis of mild cognitive impairment by a psychiatrist were selected via purposive and convenience sampling. Addenbrooke's Cognitive Examination Scale (2017) was used to assess language and verbal fluency. Two groups were formed with different orderings of cranial electrical stimulation treatment, paper-and-pencil brain training, and their combination, and two patients were randomly assigned to each group. The order for the first group was cranial electrical stimulation, brain training, then the combination; for the second group, the combination, cranial electrical stimulation, then brain training. The treatment plan comprised A1, B, A2, C, A3, D, A4, where the electrical stimulation treatment was given in ten 30-minute sessions (5 mA, frequency of 0.5-500 Hz) and the brain training in ten 30-minute sessions. Each baseline lasted four weeks. Patients in the first group, who first received cranial electrical stimulation treatment, showed a higher percentage of improvement in the language and verbal fluency subscale of Addenbrooke's Cognitive Examination than patients in the second group.
Based on the results, it seems that cranial electrical stimulation, through its effect on neurotransmitters and cerebral blood flow, especially in the brain stem, may prepare the brain at the neurochemical and molecular level for greater effectiveness of brain training at the behavioral level, and administering cranial electrical stimulation alone first may be more effective than combining it with paper-and-pencil brain training.
Keywords: cranial electrical stimulation, treatment, brain training, verbal fluency, cognitive impairment
Procedia PDF Downloads 85
877 Design and Computational Fluid Dynamics Analysis of Aerodynamic Package of a Formula Student Car
Authors: Aniketh Ravukutam, Rajath Rao M., Pradyumna S. A.
Abstract:
In the past few decades there has been great advancement in the use of aerodynamics in cars, now evident from commercial cars to race cars, for achieving higher speeds, stability, and efficiency. This paper focuses on studying the effects of aerodynamics on a Formula Student car. These cars weigh around 200 kg and average about 60 km/h. With competition increasing every year, developing a competitive car is a herculean task. The race track comprises mostly tight corners with few or no straights, thus testing the car's cornering capabilities. Higher cornering speeds can be achieved by increasing traction at the tires, and studying the aerodynamics helps achieve higher traction without much addition to the overall weight of the car. The main focus is to develop an aerodynamic package involving a front wing, undertray, and body to obtain an optimum value of downforce. The initial process involves a detailed study of the geometrical constraints in the rule book and calculation of the limiting value of drag as per the engine specifications. The successive steps involve iterations in ANSYS for the selection of airfoils, deciding the number of elements, designing the nose for low drag, channelling the flow under the body, and obtaining an optimum value of downforce within the limits defined in the initial process. The final step involves modelling the car in the virtual environment OptimumLap® for a detailed study of performance with and without aerodynamics. The CFD analysis results showed an overall downforce of 377.44 N with a drag of 164.08 N. When the corresponding parameters of the final model were applied in OptimumLap®, an improvement of 3.5 seconds in lap times was observed.
Keywords: aerodynamics, formula student, traction, front wing, undertray, body, rule book, drag, down force, virtual environment, computational fluid dynamics (CFD)
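The claim that downforce raises cornering speed without a weight penalty can be illustrated with the grip-limit balance mu*(m*g + F_down) = m*v^2/r, where F_down = 0.5*rho*ClA*v^2. All vehicle numbers below (mass, grip coefficient, ClA, corner radius) are illustrative assumptions, not figures from the paper.

```python
import math

def max_corner_speed(mass, radius, mu=1.4, rho=1.225, cl_a=0.0):
    """Maximum steady cornering speed (m/s) on a flat corner.
    Grip limit: mu * (m*g + 0.5*rho*ClA*v^2) = m*v^2 / r.
    cl_a is the downforce coefficient times reference area (ClA);
    all values here are illustrative, not from the paper's car."""
    g = 9.81
    k = mass / radius - mu * 0.5 * rho * cl_a
    if k <= 0:
        return float("inf")   # aero grip grows faster than required force
    return math.sqrt(mu * mass * g / k)

v_no_aero = max_corner_speed(mass=300.0, radius=10.0)            # car + driver
v_aero = max_corner_speed(mass=300.0, radius=10.0, cl_a=3.0)
print(f"{v_no_aero*3.6:.1f} km/h without aero vs {v_aero*3.6:.1f} km/h with aero")
```

Because the aerodynamic term enters the grip side but not the inertial side of the balance, downforce raises the cornering limit without the lap-time penalty that added mass would bring, which is the motivation stated in the abstract.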
Procedia PDF Downloads 239
876 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrative software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of large-scale time-lapse survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for the success of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 155
875 Effect of Floods on Water Quality: A Global Review and Analysis
Authors: Apoorva Bamal, Agnieszka Indiana Olbert
Abstract:
Floods are known to be among the most devastating hydro-climatic events, impacting a wide range of stakeholders through environmental, social, and economic losses. With differences in inundation duration and level of impact, flood hazards are of varying degrees and strengths. Among the domains impacted by floods, environmental degradation in the form of water quality deterioration is one of the most affected but least highlighted across the world. The degraded water quality is caused by numerous natural and anthropogenic factors that act as both point and non-point sources of pollution. It is therefore essential to understand the nature and sources of water pollution due to flooding. The major impact of floods is not only on physico-chemical water quality parameters but also on biological elements, leading to a vivid influence on the aquatic ecosystem. This deteriorated water quality affects many water-use categories, viz. agriculture, drinking water, aquatic habitat, and miscellaneous services requiring appropriate water quality. This study identifies, reviews, evaluates, and assesses multiple studies conducted across the world to determine the impact of floods on water quality. With a detailed statistical analysis of the most relevant studies, it provides a synopsis of the methods used to assess the impact of floods on water quality in different geographies, and identifies the gaps that remain to be bridged. In the majority of studies, different flood magnitudes have varied impacts on water quality parameters, leading to values increased or decreased relative to those recommended for various use categories. There is also an evident shift of biological elements in the impacted waters, leading to changes in phenology and in the inhabitants of the affected water body.
This physical, chemical, and biological water quality degradation by floods depends on flood duration, extent, magnitude, and flow direction. This research therefore provides an overview of the multiple impacts of floods on water quality, along with a roadmap toward an efficient and uniform linkage between floods and impacted water quality dynamics.
Keywords: floods, statistical analysis, water pollution, water quality
Procedia PDF Downloads 80
874 Alkali Activated Materials Based on Natural Clay from Raciszyn
Authors: Michal Lach, Maria Hebdowska-Krupa, Justyna Stefanek, Artur Stanek, Anna Stefanska, Janusz Mikula, Marek Hebda
Abstract:
Limited resources of raw materials make it necessary to obtain materials from other sources. In this area, the best-known and most widespread approach is recycling, which is mainly focused on the reuse of material. Another possible solution, used in various companies to improve sustainable development, is waste-free production. It involves production exclusively from materials whose waste belongs to the group of renewable raw materials. This means that the waste can: (i) be recycled directly during the manufacturing process of further products or (ii) serve as a raw material obtained by other companies for the production of alternative products. This article presents the possibility of using post-production clay from the Jurassic limestone deposit "Raciszyn II" as a raw material for the production of alkali activated materials (AAM). Such products are increasingly used, mostly in various building applications. However, their final properties depend significantly on many factors, the most important of which are: the chemical composition of the raw material, particle size, specific surface area, type and concentration of the activator, and the temperature range of the heat treatment. Mineralogical and chemical analyses of the clay from the "Raciszyn II" deposit confirmed that this material, due to its high content of aluminosilicates, can be used as a raw material for the production of AAM. In order to obtain a product with the best properties, the clay calcination process was also optimized. Based on the results obtained, it was found that this process should take place between 750 °C and 800 °C. Using a lower temperature yields a raw material with a low content of metakaolin, which is the main component of materials suitable for alkaline activation processes.
On the other hand, higher heat treatment temperatures cause thermal dissociation of large amounts of calcite, which releases large amounts of CO2 and forms calcium oxide. This compound significantly accelerates the binding process, which consequently often prevents the correct formation of the geopolymer mass. The effect of various activators on the compressive strength of the AAM was also analyzed: (i) NaOH, (ii) KOH, and (iii) mixtures of KOH and NaOH in ratios of 10%, 25% and 50% by volume. Depending on the activator used, the compressive strengths obtained ranged from 25 MPa to 40 MPa. These values are comparable with the results obtained for materials produced on the basis of Portland cement, one of the most popular building materials.
Keywords: alkaline activation, aluminosilicates, calcination, compressive strength
Procedia PDF Downloads 152
873 Investigation of Produced and Ground Water Contamination of Al Wahat Area South-Eastern Part of Sirt Basin, Libya
Authors: Khalifa Abdunaser, Salem Eljawashi
Abstract:
The study area is threatened by numerous petroleum activities. The most serious risk arises from oil and gas pollution, in particular the significant volumes of produced water: the waste water generated during oil and natural gas production and disposed of at the surface around oil and gas fields. This work concerns the impact of oil exploration and production activities on the physical environment and fate of the area, focusing on the observation of crude oil migration as a toxic fluid and its penetration into groundwater from produced water disposed of at the surface in the Al Wahat area. The areal distribution of the dominant groundwater quality constituents was described in order to identify the major hydro-geochemical processes that affect water quality and to evaluate how rock types and groundwater flow relate to the quality and geochemistry of water in the Post-Eocene aquifer. The chemical and physical characteristics of produced water, where it is produced, and its potential impacts on the environment and on oil and gas operations are discussed. A field survey was conducted to identify and locate a large number of monitoring wells previously drilled throughout the study area. Groundwater samples were systematically collected in order to detect the fate of spills resulting from the various activities at the oil fields. Spatial distribution maps of the water quality parameters were built using kriging interpolation in ArcMap. Thematic maps were generated using GIS and remote sensing techniques, combining all data layers into an active database for the area, in order to identify hot spots, prioritize locations based on their environmental condition, and support monitoring plans.
Keywords: Sirt Basin, produced water, Al Wahat area, groundwater
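The spatial interpolation step behind such water-quality maps can be illustrated in miniature. The abstract uses kriging in ArcMap; the sketch below substitutes the simpler inverse-distance-weighting (IDW) method, with entirely hypothetical well coordinates and salinity values, to show how scattered point measurements become an estimate at an unsampled location.

```python
import math

def idw(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate of a water-quality parameter
    at location (x, y).

    samples: list of (sx, sy, value) tuples, e.g. salinity at monitoring wells.
    """
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # exactly at a sample point: honor the measurement
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical well readings: (easting, northing, salinity in mg/L)
wells = [(0, 0, 1200.0), (10, 0, 1800.0), (0, 10, 1500.0)]
print(round(idw(wells, 5, 5), 1))  # → 1500.0 (all three wells equidistant)
```

Kriging additionally models spatial correlation through a variogram, but the weighting-and-normalizing structure of the estimate is the same.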
Procedia PDF Downloads 142
872 The Study of Intangible Assets at Various Firm States
Authors: Gulnara Galeeva, Yulia Kasperskaya
Abstract:
The study deals with the relevant problem of forming an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and the importance of intangible assets for the enterprise's income, which reduces the accuracy of the calculations. To study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, into underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets depends significantly on how close the enterprise is to a crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The author has identified a criterion that allows fundamental decisions to be made on investment feasibility. The study also developed a rapid supplementary method of assessing the overall status of the enterprise, based on a questionnaire survey of its director. The questionnaire consists of only two questions.
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and to identify a range of universal ways of overcoming the crisis. It was shown that, under the impact of sporadic (white) noise, the structure of intangible assets transforms into a more organized state with strengthened synchronization of all processes. The results obtained offer managers and business owners a simple and affordable method of investment portfolio optimization that takes into account how close the enterprise is to a state of full-blown crisis.
Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix
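The dispersion measure described above (standard deviation over mean of the priority vector) can be sketched with a standard analytic-hierarchy-process calculation. The pairwise comparison matrix below is hypothetical, and the priority vector is approximated by ordinary power iteration rather than by the authors' Wide Pairwise Comparison Matrix method, which the abstract does not specify.

```python
def priority_vector(M, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix M by power iteration, normalized to sum to 1 (AHP priorities)."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

def coeff_of_variation(v):
    """Ratio of standard deviation to mean of the priority vector --
    the dispersion measure used as a crisis indicator."""
    mean = sum(v) / len(v)
    var = sum((x - mean) ** 2 for x in v) / len(v)
    return var ** 0.5 / mean

# Hypothetical 3x3 reciprocal matrix comparing intangible-asset groups
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = priority_vector(M)
print([round(x, 3) for x in w], round(coeff_of_variation(w), 3))
```

A larger coefficient of variation means the priorities are more unevenly concentrated; the abstract ties this dispersion to the probability of crisis.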
Procedia PDF Downloads 206
871 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)
Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy
Abstract:
The drug discovery and development process is generally known to be lengthy and labor-intensive. Therefore, in order to deliver prompt and effective responses to certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) can alleviate this issue by applying computational power to streamline the whole drug discovery process, from target identification to lead optimization. This drug design approach is particularly applicable to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug-resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium's metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B (LipB) enzyme, a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB), and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from the AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them by binding affinity. Compounds with better binding affinities than LipB's known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 showed better binding energies than decanoic acid. Furthermore, 12 of the 112 compounds showed highly promising ADMET and TOPKAT properties.
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT
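The filtering step described above (keeping only compounds that dock better than the reference inhibitor) reduces to a simple sort-and-threshold over docking scores. The sketch below uses invented scores; the decanoic acid energy and compound names are placeholders, not values from the study.

```python
# Hypothetical docking scores in kcal/mol; more negative = stronger binding.
reference = ("decanoic acid", -5.1)  # known LipB inhibitor (score assumed)
candidates = {
    "compound_A": -7.4,
    "compound_B": -4.8,  # docks worse than the reference: discarded
    "compound_C": -6.2,
}

# Keep candidates that bind more strongly than the reference inhibitor,
# ranked best-first, mirroring the screening funnel in the abstract.
hits = sorted(
    (name for name, e in candidates.items() if e < reference[1]),
    key=candidates.get,
)
print(hits)  # → ['compound_A', 'compound_C']
```

In the study itself, the 112 compounds surviving this cut were then passed to the ADMET and TOPKAT toxicity filters, a second, independent threshold applied to the same list.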
Procedia PDF Downloads 422
870 Benefits of Monitoring Acid Sulfate Potential of Coffee Rock (Indurated Sand) across Entire Dredge Cycle in South East Queensland
Authors: S. Albert, R. Cossu, A. Grinham, C. Heatherington, C. Wilson
Abstract:
Shipping trends suggest that vessels of increasing size and draught will visit Australian ports, highlighting potential challenges to port infrastructure and requiring optimization of shipping channels to ensure safe passage. The Port of Brisbane in Queensland, Australia has an 80 km long access shipping channel in which vessels must transit 15 km of relatively shallow coffee rock (a generic class of indurated sands in which sand grains are bound within an organic clay matrix) outcrops towards the northern passage in Moreton Bay. This represents a risk to channel deepening and maintenance programs, because the material is more challenging to dredge, owing to its high cohesive strength compared with the surrounding marine sands, and carries a potentially higher acid sulfate risk. In situ assessment of acid sulfate sediment for dredge spoil control is an important tool in mitigating ecological harm. Coffee rock in an anoxic, undisturbed state poses no acid sulfate risk; however, when it is disturbed by dredging, it is vital to ensure that any iron sulfides present are either insignificant or neutralized. To better understand the potential risk, we examined the reduction potential of coffee rock across the entire dredge cycle in order to accurately portray the true outcome of disturbing acid sulfate sediment in dredging operations in Moreton Bay. In December 2014, a dredge trial was undertaken with a trailing suction hopper dredger. In situ samples collected prior to dredging revealed acid sulfate potential above threshold guidelines, which could lead to expensive dredge spoil management. However, the acid sulfate risk was then monitored in the hopper and in the subsequent discharge, both of which showed that a significant reduction in acid sulfate potential had occurred. Additionally, the acid neutralizing capacity significantly increased due to the inclusion of shell fragments (calcium carbonate) from the dredge target areas.
This clearly demonstrates the importance of assessing potential acid sulfate risk across the entire dredging cycle and highlights the need to carefully evaluate sources of acidity.
Keywords: acid sulfate, coffee rock, indurated sand, dredging, maintenance dredging
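The balance between potential sulfidic acidity and acid neutralizing capacity that the trial monitored is conventionally summarized by the net acid producing potential (NAPP) of standard acid-base accounting. The sketch below uses that textbook formula (potential acidity taken as 30.6 kg H2SO4 per tonne per wt% sulfur) with illustrative values, not data from the trial.

```python
def napp(sulfur_pct, anc):
    """Net acid producing potential, in kg H2SO4 per tonne of sediment.

    sulfur_pct: total (assumed pyritic) sulfur, wt%.
    anc: measured acid neutralizing capacity, kg H2SO4 equivalent per tonne.
    MPA = %S * 30.6 converts sulfide sulfur to maximum potential acidity;
    a positive NAPP flags net acid-forming material, a negative NAPP
    flags net neutralizing material.
    """
    mpa = sulfur_pct * 30.6
    return mpa - anc

# Illustrative only: 0.5 wt% S against shell-derived ANC of 20 kg H2SO4/t
print(round(napp(0.5, 20.0), 2))
```

On these assumed numbers the carbonate from shell fragments outweighs the sulfide acidity, which is the qualitative outcome the hopper and discharge monitoring reported.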
Procedia PDF Downloads 366