Search results for: Distributed generation (DG)
4023 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to decarbonising their electricity sectors, along with promoting large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the network infrastructure necessary to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must also be considered within the TNEP, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially when applied to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method.
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and solving them therefore requires significant time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
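The safeguard this abstract describes — accept the classifier's prediction only when it demonstrably yields an improving column, and fall back to an exact pricing solve otherwise — can be sketched in a few lines. Everything below (the thresholding "classifier", the toy relaxation, the unit-cost reduced-cost convention) is an illustrative assumption, not the authors' implementation:

```python
def relaxed_pricing(duals):
    # LP-relaxed pricing subproblem (toy stand-in: clamp duals to [0, 1])
    return [min(1.0, max(0.0, d)) for d in duals]

def classifier(x_relaxed):
    # stand-in for the trained binary classifier: threshold the relaxed values
    return [1 if xi >= 0.5 else 0 for xi in x_relaxed]

def reduced_cost(column, duals, cost=1.0):
    # a column improves the master problem only if its reduced cost is negative
    return cost - sum(d * a for d, a in zip(duals, column))

def pricing_step(duals, exact_pricing):
    """Return an improving column; optimality is preserved because the
    exact (mixed-integer) pricing solve is only invoked when the cheap
    classifier-based guess fails to find an improving column."""
    guess = classifier(relaxed_pricing(duals))
    if reduced_cost(guess, duals) < -1e-9:
        return guess, "classifier"
    exact = exact_pricing(duals)
    if reduced_cost(exact, duals) < -1e-9:
        return exact, "exact-fallback"
    return None, "optimal"          # no improving column: master is optimal
```

The key design point is that the classifier only accelerates the search; termination is still certified by an exact solve, so the Column Generation optimality guarantee survives.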
Procedia PDF Downloads 86
4022 Construction of Finite Woven Frames through Bounded Linear Operators
Authors: A. Bhandari, S. Mukherjee
Abstract:
Two frames in a Hilbert space are called woven or weaving if all possible merge combinations between them generate frames of the Hilbert space with uniform frame bounds. Weaving frames are powerful tools in wireless sensor networks which require distributed data processing. Considering the practical applications, this article deals with finite woven frames. We provide methods of constructing finite woven frames, in particular, bounded linear operators are used to construct woven frames from a given frame. Several examples are discussed. We also introduce the notion of woven frame sequences and characterize them through the concepts of gaps and angles between spaces.
Keywords: frames, woven frames, gap, angle
Procedia PDF Downloads 195
4021 Social Perspective of Gender Bias Among Rural Children in Haryana State of India
Authors: Kamaljeet Kaur, Vinod Kumari, Jatesh Kathpalia, Bas Kaur
Abstract:
Gender bias against the girl child is pervasive across the world. It is seen in all strata of society and manifests in various forms. However, the nature and extent of these inequalities are not uniform. Generally, these inequalities are more prevalent in patriarchal societies. Despite emerging and increasing opportunities for women, there are still inequalities between men and women in every sphere, such as education, health, the economy, polity and social life. Patriarchal ideology as a cultural norm enforces a gender construction oriented toward hierarchical relations between the sexes and the neglect of women in Indian society. Discrimination against girls may also vary by age and be shaped by birth order and the sex composition of elder surviving siblings. The present study was conducted to examine gender discrimination among rural children in India. The respondents were selected from three generations as per the AICRP age groups, viz., 18-30 years (3rd generation), 31-60 years (2nd generation) and above 60 years (1st generation). The total sample comprised 600 respondents, half male and half female, from different villages of two districts of Haryana state. Data were collected using a personal interview schedule and analysed with SPSS software. Among the total births, 46.35 per cent were girl children and 53.64 per cent were male children. The dropout rate was higher among female children than male children: nearly one third (31.09%) of female children dropped out of school, compared with 21.17% of male children. It was quite striking that nearly two-thirds (61.16%) of female children and more than half (59.22%) of male children dropped out of school.
Cooking was mainly performed by adult females, with an overall mean score of 2.0 (ranked first), followed by female children (mean score 1.7), clearly indicating that cooking was an activity performed mainly by females, while purchases of fruits and vegetables, cereals and pulses were mainly made by adult males. First preference was given to male children when serving costly and special food. Regarding the professional aspirations of the children in the respondents' families, it was observed that 20.10% of male children wanted to become engineers, whereas only 3.89% of female children did. The ratio of male children was higher in both generations irrespective of district. School dropouts were more frequent among females in both the 1st and 2nd generations. The main reasons for school dropout were lack of interest, lack of resources and early marriage in both generations. Female enrolment was higher in the faculty of arts, whereas male enrolment was higher in the non-medical and medical faculties, which showed that female children were receiving a traditional type of education. It is suggested to provide equal opportunities to girls and boys at home as well as outside the home for the smooth functioning of society.
Keywords: gender bias, male child, female child, education, home
Procedia PDF Downloads 86
4020 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 36
4019 A Lightweight Blockchain: Enhancing Internet of Things Driven Smart Buildings Scalability and Access Control Using Intelligent Directed Acyclic Graph Architecture and Smart Contracts
Authors: Syed Irfan Raza Naqvi, Zheng Jiangbin, Ahmad Moshin, Pervez Akhter
Abstract:
Currently, IoT systems depend on a centralized client-server architecture that causes various scalability and privacy vulnerabilities. Distributed ledger technology (DLT) introduces a set of opportunities for the IoT, leading to practical ideas for existing components at all levels of existing architectures. Blockchain Technology (BCT), as deployed in Bitcoin (BTC) and Ethereum, appears to be one approach to solving several IoT problems and offers multiple possibilities. However, IoT devices are resource-constrained, with insufficient capacity and computational headroom to process blockchain consensus mechanisms; the existing challenges of traditional BCT for the IoT are poor scalability, energy efficiency, and transaction fees. IOTA is a distributed ledger based on a Directed Acyclic Graph (DAG) that ensures M2M micro-transactions are free of charge. IOTA has the potential to address existing IoT-related difficulties such as infrastructure scalability, privacy and access control mechanisms. We propose an architecture, SLDBI: A Scalable, Lightweight DAG-based Blockchain Design for Intelligent IoT Systems, which adapts the DAG-based Tangle and implements a lightweight message data model to address the IoT limitations. It enables the smooth integration of new IoT devices into a variety of apps. SLDBI enables comprehensive access control, energy efficiency, and scalability in IoT ecosystems by utilizing the Masked Authentication Message (MAM) protocol and the IOTA Smart Contract Protocol (ISCP). Furthermore, we suggest performing the proof-of-work (PoW) computation on the full node in an energy-efficient way. Experiments have been carried out to show the capability of the Tangle to achieve better scalability while maintaining energy efficiency.
The findings show user access control management at granular levels and ensure scaling up to massive networks with thousands of IoT nodes, such as Smart Connected Buildings (SCBs).
Keywords: blockchain, IoT, directed acyclic graph, scalability, access control, architecture, smart contract, smart connected buildings
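As a rough illustration of why a DAG-based ledger scales differently from a linear chain, the Tangle idea the abstract builds on can be sketched as follows: each new message approves up to two earlier unapproved messages ("tips"), so validation work is spread across arriving devices instead of being concentrated in miners. The two-tip rule, the batch model of concurrent arrivals and the message names are illustrative assumptions, not the SLDBI design itself:

```python
import random

tangle = {"genesis": []}   # message -> list of approved parent messages
tips = ["genesis"]         # messages not yet approved by anyone
rng = random.Random(7)

def attach_batch(tangle, tips, names, rng):
    frozen = list(tips)                       # concurrent arrivals see the same tip set
    for name in names:
        parents = rng.sample(frozen, min(2, len(frozen)))
        tangle[name] = parents                # approving a parent = validating it
        tips.append(name)                     # the newcomer is itself a tip
    for name in names:                        # approved messages stop being tips
        for p in tangle[name]:
            if p in tips:
                tips.remove(p)

# ten "rounds" of three concurrently arriving messages
for b in range(10):
    attach_batch(tangle, tips, [f"m{b}_{i}" for i in range(3)], rng)
```

Because parents are always chosen from messages that already exist, the structure stays acyclic by construction, and throughput can grow with the number of attaching devices.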
Procedia PDF Downloads 123
4018 Influence of Convective Boundary Condition on Chemically Reacting Micropolar Fluid Flow over a Truncated Cone Embedded in Porous Medium
Authors: Pradeepa Teegala, Ramreddy Chitteti
Abstract:
This article analyzes the mixed convection flow of a chemically reacting micropolar fluid over a truncated cone embedded in a non-Darcy porous medium with a convective boundary condition. In addition, heat generation/absorption and Joule heating effects are taken into consideration. A similarity solution does not exist for this complex fluid flow problem, and hence non-similarity transformations are used to convert the governing fluid flow equations, along with the related boundary conditions, into a set of nondimensional partial differential equations. Many authors have applied the spectral quasi-linearization method (SQLM) to solve ordinary differential equations, but here the resulting nonlinear partial differential equations are solved for the non-similarity solution using this recently developed method. Comparison with previously published work on special cases of the problem is performed and found to be in excellent agreement. The effects of the pertinent parameters, namely the Biot number, mixed convection parameter, heat generation/absorption, Joule heating, Forchheimer number, chemical reaction, micropolar parameter and magnetic field, on the physical quantities of the flow are displayed through graphs, and the salient features are explored in detail. Further, the results are analyzed by comparison with two special cases, namely the vertical plate and the full cone, wherever possible.
Keywords: chemical reaction, convective boundary condition, Joule heating, micropolar fluid, mixed convection, spectral quasi-linearization method
Procedia PDF Downloads 277
4017 Dual Metal Organic Framework Derived N-Doped Fe3C Nanocages Decorated with Ultrathin ZnIn2S4 Nanosheets for Efficient Photocatalytic Hydrogen Generation
Authors: D. Amaranatha Reddy
Abstract:
Highly efficient and stable co-catalyst materials are of great importance for boosting photogenerated charge carrier separation and transport efficiency and for accelerating the catalytic reactive sites of semiconductor photocatalysts. As a result, it is of decisive importance to fabricate low-cost, noble-metal-free co-catalysts with high catalytic reactivity, but this remains very challenging. Considering this challenge, dual metal-organic framework derived N-doped Fe3C nanocages have been rationally designed and decorated with ultrathin ZnIn2S4 nanosheets for efficient photocatalytic hydrogen generation. The fabrication strategy precisely integrates co-catalyst nanocages with ultrathin two-dimensional (2D) semiconductor nanosheets, providing tightly interconnected nano-junctions that help suppress the charge carrier recombination rate. Furthermore, the constructed highly porous hybrid structures expose ample active sites for catalytic reduction reactions and harvest visible light more effectively through light scattering. As a result, the fabricated nanostructures exhibit a superior solar-driven hydrogen evolution rate (9600 µmol/g/h) with an apparent quantum efficiency of 3.6%, which is higher than that of Pt noble metal co-catalyst systems and earlier reported ZnIn2S4-based nanohybrids. We believe that the present work promotes the application of sulfide-based nanostructures in solar-driven hydrogen production.
Keywords: photocatalysis, water splitting, hydrogen fuel production, solar-driven hydrogen
Procedia PDF Downloads 134
4016 Development of a Methodology for Surgery Planning and Control: A Management Approach to Handle the Conflict of High Utilization and Low Overtime
Authors: Timo Miebach, Kirsten Hoeper, Carolin Felix
Abstract:
In times of competitive pressure and demographic change, hospitals have to reconsider their strategies as companies. Given that operations are one of the main sources of income and, at the same time, one of the primary cost drivers, a process-oriented approach and an efficient use of resources seem to be the right way to secure a consistent market position. Thus, efficient operating room occupancy planning is an important determinant of the success and continued existence of these institutions. A high utilization of resources is essential. This means a very high, but nevertheless sensible, capacity-oriented utilization of working systems, which can be realized by avoiding downtimes and by thoughtful occupancy planning. This engineering approach should help hospitals to reach their break-even point. The first aim is to establish a strategy point that can be used for the generation of a planned throughput time. The second is to facilitate surgery planning and control and to implement them accurately through the generation of time modules. More than 100,000 data records of the Hannover Medical School were analyzed. The data records contain information about the type of operation conducted, the duration of the individual process steps, and other organization-specific data such as the operating room. Based on the aforementioned database, a generally valid model was developed to define a strategy point that takes the conflict between high capacity utilization and low overtime into account. Furthermore, time modules were generated in this work, which allow simplified and flexible surgery planning and control for the operating room manager. With these time modules, it is possible to reduce the high average idle time of the operating rooms and to minimize the spread of idle times.
Keywords: capacity, operating room, surgery planning and control, utilization
Procedia PDF Downloads 253
4015 Design and Fabrication of Pulse Detonation Engine Based on Numerical Simulation
Authors: Vishal Shetty, Pranjal Khasnis, Saptarshi Mandal
Abstract:
This work explores the design and fabrication of a fundamental pulse detonation engine (PDE) prototype on the basis of the pressure and temperature pulses obtained from a numerical simulation of the same. The PDE is an advanced propulsion system that utilizes detonation waves for thrust generation. PDEs ignite a fuel-air mixture to create a supersonic detonation wave, resulting in rapid energy release, high pressures, and high temperatures. The operational cycle includes fuel injection, ignition, detonation, exhaust of combustion products, and purging of the chamber for the next cycle. This work presents the core operating principles of a PDE, highlighting its potential advantages over traditional jet engines that rely on continuous combustion. The design focuses on a straightforward, valve-controlled system for fuel and oxidizer injection into a detonation tube. The detonation was initiated using an electronically controlled spark plug or a similar high-energy ignition source. Following the detonation, a purge valve was employed to expel the combusted gases and prepare the tube for the next cycle. Key design considerations include material selection for the detonation tube to withstand the high temperatures and pressures generated during detonation. Fabrication prioritized readily available machining methods to create a functional prototype. This work details the testing procedures for verifying the functionality of the PDE prototype, with emphasis on measuring thrust generation and capturing pressure data within the detonation tube. The numerical analysis presents a performance evaluation and potential areas for future design optimization.
Keywords: pulse detonation engine, ignition, detonation, combustion
Procedia PDF Downloads 24
4014 Design and Analysis of a Combined Cooling, Heating and Power Plant for Maximum Operational Flexibility
Authors: Salah Hosseini, Hadi Ramezani, Bagher Shahbazi, Hossein Rabiei, Jafar Hooshmand, Hiwa Khaldi
Abstract:
The diversity of the energy portfolio and fluctuations in urban energy demand establish the need for more operational flexibility in Combined Cooling, Heating, and Power (CCHP) plants. Currently, the most common way to achieve this is the use of heat storage devices or wet operation of gas turbines. The current work addresses the use of a variable-extraction steam turbine in conjunction with a gas turbine inlet air cooling (TIAC) system as an alternative way to enhance the operating range of a CCHP cycle. A thermodynamic model is developed, and a typical apartment building in PARDIS Technology Park (located in Tehran Province) is chosen as a case study. Due to the variable heat demand and the use of excess chiller capacity for turbine inlet cooling, the variable-extraction steam turbine and the TIAC system provide an opportunity for flexible operation of the cycle and boost the independence of power and heat generation in the CCHP plant. It was found that the power-to-heat ratio of the CCHP cycle varies from 12.6 to 2.4, depending on the city's heating and cooling demands and the ambient conditions, which indicates good independence between power and heat generation. Furthermore, the TIAC design temperature is selected based on the ratio of power gain to TIAC coil surface area; for the current cycle arrangement, a TIAC design temperature of 15 °C was found to be the most economical. All analyses are based on real data gathered from the local weather station at the PARDIS site.
Keywords: CCHP plant, GTG, HRSG, STG, TIAC, operational flexibility, power to heat ratio
Procedia PDF Downloads 282
4013 All-Optical Gamma-Rays and Positrons Source by Ultra-Intense Laser Irradiating an Al Cone
Authors: T. P. Yu, J. J. Liu, X. L. Zhu, Y. Yin, W. Q. Wang, J. M. Ouyang, F. Q. Shao
Abstract:
A strong electromagnetic field with E > 10¹⁵ V/m can be supplied by intense lasers such as ELI and HiPER in the near future. Exposed to such a strong laser field, laser-matter interaction enters the near-quantum-electrodynamics (QED) regime, and highly non-linear physics may occur. Recently, the multi-photon Breit-Wheeler (BW) process has attracted increasing attention because it is capable of producing abundant positrons and enhances the positron generation efficiency significantly. Here, we propose an all-optical scheme for bright gamma-ray and dense positron generation by irradiating a 10²² W/cm² laser pulse onto an Al cone filled with near-critical-density plasma. Two-dimensional (2D) QED particle-in-cell (PIC) simulations show that the radiation damping force becomes large enough to compensate for the Lorentz force in the cone, causing radiation-reaction trapping of a dense electron bunch in the laser field. The trapped electrons oscillate in the laser electric field and emit high-energy gamma photons in two ways: (1) nonlinear Compton scattering due to the oscillation of electrons in the laser fields, and (2) Compton backscattering resulting from the bunch colliding with the laser reflected by the cone tip. The multi-photon Breit-Wheeler process is thus initiated, and abundant electron-positron pairs are generated with a positron density of ~10²⁷ m⁻³. The scheme is finally demonstrated by full 3D PIC simulations, which indicate a positron flux of up to 10⁹. This compact gamma-ray and positron source may have promising applications in the future.
Keywords: BW process, electron-positron pairs, gamma-ray emission, ultra-intense laser
Procedia PDF Downloads 260
4012 Feasibility Study of Tidal Current of the Bay of Bengal to Generate Electricity as a Renewable Energy
Authors: Myisha Ahmad, G. M. Jahid Hasan
Abstract:
Electricity is a pinnacle of human civilization. At present, growing concerns over significant climate change have intensified the importance of renewable energy technologies for electricity generation. The interest is primarily due to better energy security, smaller environmental impact and the provision of a sustainable alternative to conventional energy sources. Solar power, wind, biomass, tidal power, and wave power are some of the most reliable sources of renewable energy. The ocean holds approximately 2×10³ TW of energy and is the largest renewable energy resource on the planet. Ocean energy takes many forms, encompassing tides, ocean circulation, surface waves, and salinity and thermal gradients. Tidal energy, in particular, involves both potential and kinetic energy. The study focuses on the latter, which is exploited by tidal current energy conversion technologies. Tidal streams or marine currents carry kinetic energy that can be extracted by marine current energy devices and converted into a transmittable energy form. The principle of the technology is very comparable to that of wind turbines. Conversion of marine tidal resources into substantial electrical power offers immense opportunities to countries endowed with such resources, and this work is aimed at addressing such prospects for Bangladesh. The study analyzed current velocities extracted from numerical model runs at several locations in the Bay of Bengal. Based on current magnitudes, directions and available technologies, the best-suited locations were selected and the possible annual generation capacity was estimated. The paper also examines the future prospects of tidal current energy along the Bay of Bengal and establishes a constructive approach that could be adopted in future project developments.
Keywords: Bay of Bengal, energy potential, renewable energy, tidal current
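The comparison with wind turbines rests on the same kinetic power relation, P = ½ ρ A Cp v³, with seawater being roughly 800 times denser than air. A minimal sketch of the estimate is below; the rotor area, power coefficient and current speeds are illustrative assumptions, not values from the study:

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density

def tidal_power_kw(rotor_area_m2, cp, speed_ms):
    """Extractable power (kW) from a tidal stream: 0.5 * rho * A * Cp * v^3."""
    return 0.5 * RHO_SEAWATER * rotor_area_m2 * cp * speed_ms ** 3 / 1000.0

# cubic dependence on current speed: doubling the speed gives 8x the power,
# which is why site selection by current magnitude matters so much
p_slow = tidal_power_kw(rotor_area_m2=100.0, cp=0.35, speed_ms=1.5)
p_fast = tidal_power_kw(rotor_area_m2=100.0, cp=0.35, speed_ms=3.0)
```

The cubic term explains why modelled current velocities at candidate locations dominate the feasibility assessment.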
Procedia PDF Downloads 375
4011 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
Past financial crises have shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for plausibility in its new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only the dependence part has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
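The tradeoff at the heart of the abstract can be illustrated with a toy calculation: under a drifting data-generating process, the expected error of a sample mean over the last n observations splits into a variance term that shrinks with n and a representativeness term that grows with it, so neither "use everything" nor "use only the latest point" is optimal. The additive error decomposition and all numbers below are illustrative assumptions, not the paper's semimartingale model:

```python
def expected_error(n, sigma2=1.0, drift=0.02):
    """Toy expected squared error of a mean estimated from the last n points."""
    variance = sigma2 / n              # estimator convergence: shrinks with n
    bias = drift * (n - 1) / 2.0       # older points misrepresent the present
    return variance + bias ** 2

# the optimal window is interior: more data helps only up to a point
best_n = min(range(1, 500), key=expected_error)
```

Scanning n shows the error first falls (convergence dominates) and then rises again (non-representativeness dominates), which is exactly the tension the paper optimizes.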
Procedia PDF Downloads 149
4010 Preparation of Nanophotonics LiNbO3 Thin Films and Studying Their Morphological and Structural Properties by Sol-Gel Method for Waveguide Applications
Authors: A. Fakhri Makram, Marwa S. Alwazni, Al-Douri Yarub, Evan T. Salim, Hashim Uda, Chin C. Woei
Abstract:
Lithium niobate (LiNbO3) nanostructures were prepared on quartz substrates by the sol-gel method. They were deposited with different molarity concentrations and annealed at 500°C. The samples were characterized and analyzed by X-ray diffraction (XRD), Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM). The results showed that, with increasing molarity concentration, the structure becomes crystalline, regular, homogeneous and well distributed, which makes it more suitable for optical waveguide applications.
Keywords: lithium niobate, morphological properties, thin film, Pechini method, XRD
Procedia PDF Downloads 4474009 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation that plays an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, which creates a linear concentration gradient inside a 200 μm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, making gradient generation more time-efficient than in other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the microchannel, avoiding the substantial waste generated by other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
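The fast gradient generation described in this abstract rests on Taylor-Aris dispersion, in which axial flow combined with radial diffusion in a cylindrical channel yields an effective dispersion coefficient D_eff = D(1 + Pe²/48), with Pe = Ua/D. A minimal sketch of this estimate follows; the channel radius, mean velocity, and molecular diffusivity below are illustrative assumptions, not figures from the abstract:

```python
# Effective axial dispersion in a circular microchannel via the
# Taylor-Aris relation: D_eff = D * (1 + Pe**2 / 48), with Pe = U * a / D.
# All numerical values below are illustrative assumptions.

def taylor_aris_dispersion(d_mol, velocity, radius):
    """Return (Peclet number, effective axial dispersion coefficient)."""
    peclet = velocity * radius / d_mol
    d_eff = d_mol * (1.0 + peclet**2 / 48.0)
    return peclet, d_eff

d_mol = 1.0e-9      # molecular diffusivity of a small ion, m^2/s (assumed)
velocity = 1.0e-3   # mean flow velocity, m/s (assumed)
radius = 100e-6     # radius of a 200 um diameter channel, m

pe, d_eff = taylor_aris_dispersion(d_mol, velocity, radius)
print(f"Pe = {pe:.0f}, D_eff/D = {d_eff / d_mol:.1f}")
```

With these assumed values, dispersion exceeds molecular diffusion by about two orders of magnitude, which is what lets the linear gradient establish itself in under a second.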
Procedia PDF Downloads 388
4008 High-Rise Building with PV Facade
Authors: Jiří Hirš, Jitka Mohelnikova
Abstract:
A photovoltaic system integrated into a high-rise building façade was studied. The high-rise building is located in the Central Europe region, with a temperate climate and dominant partly cloudy and overcast sky conditions. The PV façade has been monitored since 2013. The three-year monitoring of the façade energy generation shows that the façade has an important impact on the building energy efficiency and sustainable operation.
Keywords: buildings, energy, PV façade, solar radiation
Procedia PDF Downloads 309
4007 A Nanoindentation Study of Thin Film Prepared by Physical Vapor Deposition
Authors: Dhiflaoui Hafedh, Khlifi Kaouther, Ben Cheikh Larbi Ahmed
Abstract:
Monolayer and multilayer coatings of CrN and AlCrN were deposited on a 100Cr6 (AISI 52100) substrate by a PVD magnetron sputtering system. The microstructures of the coatings were characterized using atomic force microscopy (AFM). The AFM analysis revealed the presence of domes and craters which are uniformly distributed over the surfaces of the various layers. Nanoindentation measurement of the CrN coating showed a maximum hardness (H) and modulus (E) of 14 GPa and 240 GPa, respectively. The measured H and E values of the AlCrN coatings were found to be 30 GPa and 382 GPa, respectively. The improved hardness in both coatings was attributed mainly to a reduction in crystallite size and a decrease in surface roughness. The incorporation of Al into the CrN coatings improved both hardness and Young’s modulus.
Keywords: CrN, AlCrN coatings, hardness, nanoindentation
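From the reported H and E values one can form the H/E and H³/E² ratios, figures of merit commonly used in the coatings literature for elastic strain to failure and resistance to plastic deformation. These ratios are not given in the abstract; the sketch below simply derives them from the stated numbers:

```python
# Derived figures of merit from the reported nanoindentation values.
# H/E correlates with elastic strain to failure; H^3/E^2 with
# resistance to plastic deformation (both widely used heuristics).

coatings = {"CrN": (14.0, 240.0), "AlCrN": (30.0, 382.0)}  # (H, E) in GPa

for name, (h, e) in coatings.items():
    print(f"{name}: H/E = {h / e:.3f}, H^3/E^2 = {h**3 / e**2:.3f} GPa")
```

On both measures the AlCrN coating comes out ahead of CrN, consistent with the abstract’s conclusion that Al incorporation improves the mechanical response.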
Procedia PDF Downloads 559
4006 Depollution of the Pinheiros River in the City of São Paulo: Mapping the Dynamics of Conflicts and Coalitions between Actors in Two Recent Depollution Projects
Authors: Adalberto Gregorio Back
Abstract:
Historically, the Pinheiros River, which crosses the urban area of the largest South American metropolis, the city of São Paulo, has been the subject of several interventions involving different interests and multiple demands, including the implementation of road axes and industrial occupation along its floodplains; the dilution of sewage; the generation of electricity, with the reversal of its waters to the Billings Dam; and urban drainage. These processes, together with exclusionary and peripheral urban sprawl with high population density in the peripheries, result in difficulties for the collection and treatment of household sewage, which flows into the tributaries and the Pinheiros River itself. In the last 20 years, two separate projects have been undertaken to clean up its waters. The first, between 2001 and 2011, was the flotation system, aimed at cleaning the river in its own channel with equipment installed near the Billings Dam; more recently, from 2019 to 2022, came the proposal to connect about 74 thousand dwellings to the sewage collection and treatment system, as well as to install treatment plants in those tributaries of the Pinheiros where connection to the system is impracticable, given the irregular occupations. The purpose of this paper is to make a comparative analysis of the dynamics of conflicts, interests, and opportunities for coalitions between the actors involved in the two depollution projects for the Pinheiros River. For this, we use the analysis of documents produced by the state government, as well as documents related to the legal disputes that occurred in the first decontamination attempt, involving the sanitation company; the Billings Dam management company, interested in power generation; the city hall; and regular and irregular dwellings not linked to the sanitation system.
Keywords: depollution of the Pinheiros River, interest groups, São Paulo, water-energy nexus
Procedia PDF Downloads 106
4005 p210 BCR-ABL1 CML with CMML Clones: A Rare Presentation
Authors: Mona Vijayaran, Gurleen Oberoi, Sanjay Mishra
Abstract:
Introduction: p190 BCR-ABL1 in CML is often associated with monocytosis. In the case described here, monocytosis is associated with coexisting p210 BCR-ABL1 and CMML clones. Mutation analysis using next-generation sequencing (NGS) in our case showed TET2 and SRSF2 mutations. Aims & Objectives: A 75-year-old male was evaluated for monocytosis and thrombocytopenia. CBC showed Hb 11.8 g/dl, TLC 12,060/cmm, monocytes 35%, platelets 39,000/cmm. Materials & Methods: Bone marrow examination showed a hypercellular marrow, with the myeloid series showing sequential maturation up to neutrophils and 30% monocytes. Immunophenotyping by flow cytometry from bone marrow showed 3% blasts, making chronic myelomonocytic leukemia (CMML) the likely diagnosis. NGS for the myeloid mutation panel showed TET2 (48.9%) and SRSF2 (32.5%) mutations, further supporting the diagnosis of CMML. To fulfil the WHO diagnostic criteria for CMML, BCR-ABL1 by RQ-PCR was sent; the report came back positive for the p210 (B3A2, B2A2) major transcript (M-BCR) at an IS% of 38.418. Result: The patient was counselled regarding the unique presentation of two coexisting clones, p210 CML and CMML. After discussion with international faculty with vast experience in CMML, it was decided to start this elderly gentleman on Imatinib 200 mg and not on azacytidine: as ASXL1 was not present, his chances of progressing to AML would be lower, while if the CML were left untreated, progression to blast phase would always be a possibility. After 3 months on Imatinib his platelet count improved to 80,000-90,000/cmm, but his monocytosis persists. His 3rd-month BCR-ABL1 IS% is 0.004%. Conclusion: On searching the literature, there were no case reports of a coexisting p210 CML with CMML; this may be the first such case report. p190 BCR-ABL1 is often associated with monocytosis. There are a few case reports of p210 BCR-ABL1 positivity in patients with monocytosis, but none with coexisting CMML.
This case highlights the need to extensively evaluate patients with monocytosis using a next-generation sequencing myeloid mutation panel and BCR-ABL1 by RT-PCR in order to correctly diagnose and treat them.
Keywords: CMML, NGS, p190 CML, Imatinib
Procedia PDF Downloads 77
4004 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, with clear lining and shape of the membrane showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict other labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
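The normalization step described in the Methods, rescaling each image so its mean pixel intensity is 0.5 before training, can be sketched as follows. This is a minimal NumPy illustration; the actual pipeline, image format, and any additional offset or clipping are not specified in the abstract:

```python
import numpy as np

def normalize_to_mean(stack, target_mean=0.5):
    """Rescale each image in a z-stack so its mean pixel intensity
    equals target_mean, as described for the training data."""
    normalized = []
    for image in stack:
        image = image.astype(np.float64)
        # Multiplicative scaling: mean(c * image) == c * mean(image).
        normalized.append(image * (target_mean / image.mean()))
    return np.stack(normalized)

# Illustrative 20-image z-stack of random 8-bit intensities.
rng = np.random.default_rng(0)
stack = rng.integers(0, 256, size=(20, 64, 64))
out = normalize_to_mean(stack)
print(out.mean(axis=(1, 2)))  # each per-image mean is 0.5 (up to rounding)
```

Simple multiplicative scaling is only one way to hit a target mean; a real preprocessing pipeline might combine it with an offset or intensity clipping, which the abstract does not detail.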
Procedia PDF Downloads 205
4003 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far away from shore, the construction of offshore wind turbines began in recent decades. However, the installation of foundation-based (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been advanced, building on the experience of the oil and gas industry. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transporting such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, the breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may allow the local seabed sediment to be carried off, resulting in scour (erosion). This is a threat to the structure's stability. In recent decades, rapid developments in research and in the knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have been achieved, yet very limited research has been carried out on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under waves and in calm water. Anchor lowering involves the investigation of anchor movements at certain water depths under waves and current.
The motions of anchor drift, heave, and pitch are of special focus. The study further investigates anchor scour: with the anchor installed in the seabed, the underwater current flowing around the anchor induces vortices, mainly at the front and the corners, that develop soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question, since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.
Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
Procedia PDF Downloads 170
4002 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis
Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov
Abstract:
Nowadays, most oil reserves in Russia and all over the world are hard to recover. That is the reason oil companies are searching for new sources for hydrocarbon production. One such source might be high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes with a low prediction ratio: only one of five wells is drilled through unconventional reservoirs; in the others, the kerogen has low thermal maturity and the rocks are poorly petroliferous. Therefore, there is a demand for tertiary methods for the in-situ cracking of kerogen and the production of oil. Laboratory hydrous pyrolysis experiments on Bazhenov formation rocks were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica concentration from 15 to 90 wt.%, clays 5-50 wt.%, carbonates 0-30 wt.%, kerogen 1-25 wt.%) and thermal maturities (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g weight were placed in the retort, covered with water, and heated to different temperatures varying from 250 to 400°C, with experiment durations from several hours to one week. After the experiments, the retort was cooled to room temperature; the generated hydrocarbons were extracted with hexane, then separated from the solvent and weighed. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography. Characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks.
The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and was up to 12 wt.% of the initial organic matter content in the rocks. At higher temperatures and with longer heating times, secondary cracking of the generated hydrocarbons occurs, the mass of produced oil decreases, and the composition contains more hydrocarbons that need to be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals work as catalysts. Selection of the heating conditions allows producing synthesized oil with a specified composition. Kerogen investigations after heating have shown that thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil. This yield is the result of gaseous hydrocarbon formation due to secondary cracking and the aromatization and coaling of kerogen. Future investigations will aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production. The evaluated trends could be tooled up for in-situ oil generation from shale rocks by thermal action.
Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil
Procedia PDF Downloads 114
4001 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy, and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is a need to develop methods that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with the reverse translation of six randomly chosen amino acids in order to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising.
Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primers that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites on a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a bank for easy access to primers that can be used to identify bacteria.
Keywords: bacteria chromosome, bacterial identification, sequence, primer generation
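The primer-generation step described above, reverse-translating six amino acids into every possible 18-base sequence and counting hits against a chromosome, can be sketched as follows. This is a simplified illustration: the toy codon table and the test sequence are assumptions, not data from the study, which used Visual Basic and SQL Server:

```python
from itertools import product

# Toy subset of the standard codon table (the study drew on all amino
# acids; three are enough here to illustrate the combinatorics).
CODONS = {"M": ["ATG"], "W": ["TGG"], "F": ["TTT", "TTC"]}

def reverse_translate(peptide):
    """Yield every 18-base primer encoding a 6-amino-acid peptide."""
    assert len(peptide) == 6
    for combo in product(*(CODONS[aa] for aa in peptide)):
        yield "".join(combo)

def classify(primers, chromosome):
    """Label each primer by its hit count in a chromosome sequence:
    a count of 1 corresponds to 'unique' in the study's terminology."""
    return {p: chromosome.count(p) for p in primers}

primers = list(reverse_translate("MWFMWF"))
print(len(primers))  # 1*1*2*1*1*2 = 4 candidate 18-mers
```

In the actual study a primer is unique only when it matches a single site on a single bacterial chromosome; `chromosome.count` here is a stand-in for that scan over the downloaded genome sequences.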
Procedia PDF Downloads 193
4000 Reclaiming the Lost Jewish Identity of a Second Generation Holocaust Survivor Raised as a Christian: The Role of Art and Art Therapy
Authors: Bambi Ward
Abstract:
Children of Holocaust survivors have been described as inheriting their parents’ trauma as a result of ‘vicarious memory’. The term refers to a process whereby second generation Holocaust survivors subconsciously remember aspects of Holocaust trauma, despite not having directly experienced it. This can occur even when there has been a conspiracy of silence in which survivors chose not to discuss the Holocaust with their children. There are still people born in various parts of the world such as Poland, Hungary, other parts of Europe, USA, Canada and Australia, who have only learnt of their Jewish roots as adults. This discovery may occur during a parent’s deathbed confession, or when an adult child is sorting through the personal belongings of a deceased family member. Some Holocaust survivors chose to deny their Jewish heritage and raise their children as Christians. Reasons for this decision include the trauma experienced during the Holocaust for simply being Jewish, the existence of anti-Semitism, and the desire to protect one’s self and one’s family. Although there has been considerable literature written about the transgenerational impact of trauma on children of Holocaust survivors, there has been little scholarly investigation into the effects of a hidden Jewish identity on these children. This paper presents a case study of an adult child of Hungarian Holocaust survivors who was raised as a Christian. At the age of eight she was told about her family’s Jewish background, but her parents insisted that she keep this a secret, even if asked directly. She honoured their request until she turned forty. By that time she had started the challenging process of reclaiming her Jewish identity. The paper outlines the tension between family loyalty and individual freedom, and discusses the role that art and art therapy played in assisting the subject of the case study to reclaim her Jewish identity and commence writing a memoir about her spiritual journey. 
The main methodology used in this case study is creative practice-led research. Particular attention is paid to the utilisation of an autoethnographic approach. The autoethnographic tools used include reflective journals of the subject of the case study. These journals reflect on the subject’s collection of autobiographical data relating to her family history, and include memories, drawings, products of art therapy, diaries, letters, photographs, home movies, objects, and oral history interviews with her mother. The case study illustrates how art and art therapy benefitted a second generation Holocaust survivor who was brought up having to suppress her Jewish identity. The process allowed her to express subconscious thoughts and feelings about her identity and free herself from the burden of the long-term secret she had been carrying. The process described may also be of assistance to other traumatised people who have been trying to break the silence and who are seeking to express themselves in a positive and healing way.
Keywords: art, hidden identity, Holocaust, silence
Procedia PDF Downloads 240
3999 Energy Consumption, Population and Economic Development Dynamics in Nigeria: An Empirical Evidence
Authors: Evelyn Nwamaka Ogbeide-Osaretin, Bright Orhewere
Abstract:
This study examined the role of the population in the linkage between energy consumption and economic development in Nigeria. Time series data on energy consumption, population, and economic development were used for the period 1995 to 2020, and the Autoregressive Distributed Lag-Error Correction Model (ARDL-ECM) was employed. Economic development had a negative and substantial impact on energy consumption in the long run, while population growth had a positive and significant effect on energy consumption. Government expenditure was also found to affect the level of energy consumption, while energy consumption is not a function of the oil price in Nigeria.
Keywords: dynamic analysis, energy consumption, population, economic development, Nigeria
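The error-correction logic behind an ARDL-ECM can be illustrated with a two-step sketch on synthetic data. This is a simplified Engle-Granger-style stand-in, not the authors' specification, and the series below are simulated, not the Nigerian data:

```python
import numpy as np

# Two-step error-correction sketch: (1) estimate the long-run level
# relation y ~ x, (2) regress dy on dx and the lagged residual.
# A negative coefficient on the lagged residual is the error-correction
# effect that ties short-run dynamics to the long-run relation.
rng = np.random.default_rng(42)
n = 300
x = np.cumsum(rng.normal(size=n))            # simulated I(1) driver
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # cointegrated response

# Step 1: long-run regression y = a + b*x + e
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: short-run ECM: dy_t = c + d*dx_t + g*e_{t-1} + u_t
dy, dx, lag_e = np.diff(y), np.diff(x), resid[:-1]
Z = np.column_stack([np.ones(n - 1), dx, lag_e])
coefs, *_ = np.linalg.lstsq(Z, dy, rcond=None)
print(f"long-run slope ~ {beta[1]:.2f}, error-correction term ~ {coefs[2]:.2f}")
```

Here the error-correction coefficient comes out strongly negative, meaning deviations from the long-run relation are corrected in later periods; in the study, the analogous term links short-run energy-consumption dynamics back to the long-run development relation.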
Procedia PDF Downloads 183
3998 Simultaneous Adsorption and Characterization of NOx and SOx Emissions from Power Generation Plant on Sliced Porous Activated Carbon Prepared by Physical Activation
Authors: Muhammad Shoaib, Hassan M. Al-Swaidan
Abstract:
Air pollution has been a major challenge for scientists today, due to the release of toxic emissions from various sources such as power plants, desalination plants, industrial processes, and transportation vehicles. Harmful emissions into the air represent an environmental pressure that reflects negatively on human health and productivity, thus leading to a real loss in the national economy. A variety of air pollutants, in the form of carbon oxides, hydrocarbons, nitrogen oxides, sulfur oxides, suspended particulate material, etc., are present in the air due to the combustion of different types of fuels such as crude oil, diesel oil, and natural gas. Among the various pollutants, NOx and SOx emissions are considered highly toxic due to their carcinogenicity and their relation to various health disorders. In the Kingdom of Saudi Arabia, electricity is generated by burning crude oil, diesel, or natural gas in the turbines of electricity stations. Of these three, crude oil is used most extensively for electricity generation. The burning of crude oil releases heavy contents of gaseous pollutants such as sulfur oxides (SOx) and nitrogen oxides (NOx), which are ultimately discharged into the environment and are a serious environmental threat. The breakthrough point in the lab studies, using 1 g of sliced activated carbon adsorbent, comes after 20 and 30 minutes for NOx and SOx, respectively, whereas in the PP8 plant the breakthrough point comes in seconds. The saturation point in the lab studies comes after 100 and 120 minutes, while for the actual PP8 plant it comes after 60 and 90 minutes for NOx and SOx adsorption, respectively. Surface characterization of NOx and SOx adsorption on the SAC confirms the presence of the corresponding peaks in the FT-IR spectrum.
A CHNS study verifies that the SAC is suitable for NOx and SOx, along with some other C- and H-containing compounds coming from the stack emission stream of the power plant turbines.
Keywords: activated carbon, flue gases, NOx and SOx adsorption, physical activation, power plants
Procedia PDF Downloads 348
3997 Geochemical Studies of Mud Volcanoes Fluids According to Petroleum Potential of the Lower Kura Depression (Azerbaijan)
Authors: Ayten Bakhtiyar Khasayeva
Abstract:
The Lower Kura depression is a part of the South Caspian Basin (SCB), located between the folded regions of the Greater and Lesser Caucasus. The region is characterized by a thick sedimentary cover of 22 km (up to 30 km in the SCB), a high sedimentation rate, and a low geothermal gradient (average value of about 2°C/100 m). Quaternary, Pliocene, Miocene, and Oligocene deposits take part in the geological structure. The Miocene and Oligocene deposits have been opened by prospecting and exploratory wells in the Kalamaddin and Garabagli areas. There are 25 mud volcanoes within the territory of the Lower Kura depression, which are a unique source of information about the hydrocarbon content at great depths. From well data, the solid erupted products and fluids of the mud volcanoes, and the geological and thermal characteristics of the region, it was determined that the main phase of hydrocarbon generation (MK1-AK2) corresponds to a wide range of depths from 10 to 14 km, which corresponds to the Pliocene-Miocene sediments and to the ‘oil and gas window’ at R0 ≈ 0.65-0.85%. The fluids of the mud volcanoes comprise gas and water phases. The gas phase consists mainly of methane (99%), with heavy hydrocarbons (C2+ hydrocarbons), CO2, N2, and the inert components He and Ar. The content of C2+ hydrocarbons is increased in the gases of mud volcanoes associated with oil deposits. The carbon isotopic composition of methane for the Lower Kura depression varies from -40‰ to -60‰. The waters of the mud volcanoes are represented by all four genetic types; however, the most typical is the HCN type. According to the Mg-Li geothermometer, the formation of the mud waters corresponds to a temperature range from 20°C to 140°C (PC2). In the solid products emitted by the mud volcanoes, 90 minerals and 30 trace elements were identified.
As a result of the geochemical investigations, the thermobaric and geological conditions, and the delineated oil and gas generation zone, the prospects of the Lower Kura depression are projected to depths greater than 10 km.
Keywords: geology, geochemistry, mud volcanoes, petroleum potential
Procedia PDF Downloads 367
3996 Release of Legacy Persistent Organic Pollutants and Mitigating Their Effects in Downstream Communities
Authors: Kimberley Rain Miner, Karl Kreutz, Larry LeBlanc
Abstract:
During the period 1950-1970, persistent organic pollutants such as DDT, dioxin, and PCB were released into the atmosphere and distributed through precipitation into glaciers throughout the world. Recent abrupt climate change is increasing the melt rate of these glaciers, introducing the toxins to the watershed. Studies have shown the existence of legacy pollutants in glacial ice, but neither the impact nor the quantity of these toxins reaching downstream populations has been assessed. If these pollutants are released at toxic levels, it will be necessary to create a mitigation plan to lower their impact on the affected communities.
Keywords: climate change, adaptation, mitigation, risk management
Procedia PDF Downloads 362
3995 Study of Composite Beam under the Effect of Shear Deformation
Authors: Hamid Hamli Benzahar
Abstract:
The main goal of this research is to study the deflection of a composite beam (CB), taking into account the effect of shear deformation. The structure is made up of two beams of different sections, joined together by a thin adhesive and subjected to end moments and a distributed load. The fundamental differential equation of the CB can be obtained from the total energy equation while considering the shear deformation. The differential equation found will be compared with that of a CB in which the shear deformation is zero. The CB system is numerically modeled by the finite element method, and the numerical results for the deflection will be compared with those found theoretically.
Keywords: composite beam, shear deformation, moments, finite elements
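The size of the shear-deformation effect can be illustrated on a simpler reference case than the bonded two-beam system studied here: for a simply supported rectangular beam under a uniform load q, Timoshenko theory adds a shear term to the Euler-Bernoulli midspan deflection, w_max = 5qL⁴/(384EI) + qL²/(8κGA). The sketch below uses assumed section and material values, not data from the paper:

```python
# Midspan deflection of a simply supported beam under uniform load q:
# Euler-Bernoulli bending term plus the Timoshenko shear correction.
# All numerical values are illustrative assumptions.

def midspan_deflection(q, L, E, G, b, h, kappa=5.0 / 6.0):
    I = b * h**3 / 12.0          # second moment of area, rectangle
    A = b * h                    # cross-sectional area
    w_bend = 5.0 * q * L**4 / (384.0 * E * I)
    w_shear = q * L**2 / (8.0 * kappa * G * A)
    return w_bend, w_shear

# Steel-like values: q in N/m, dimensions in m, moduli in Pa.
w_b, w_s = midspan_deflection(q=10e3, L=2.0, E=210e9, G=80e9, b=0.1, h=0.2)
print(f"bending {w_b*1e3:.3f} mm, shear {w_s*1e3:.3f} mm, "
      f"ratio {w_s / w_b:.1%}")
```

For this moderately slender beam the shear term is only a few percent of the bending term; since it grows roughly with (h/L)², it becomes significant for the short, deep sections in which shear deformation studies are most relevant.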
Procedia PDF Downloads 76
3994 Harnessing the Generation of Ferromagnetic and Silver Nanostructures from Tropical Aquatic Microbial Nanofactories
Authors: Patricia Jayshree Jacob, Mas Jaffri Masarudin, Mohd Zobir Hussein, Raha Abdul Rahim
Abstract:
Iron-based ferromagnetic nanoparticles (IONPs) and silver nanostructures (AgNPs) have found a wide range of applications in antimicrobial therapy, cell targeting, and environmental applications. As such, the design of well-defined monodisperse IONPs and AgNPs has become an essential tool in nanotechnology. Fabrication of these nanostructures using conventional methods is not environmentally conducive and weighs heavily on energy and outlays. Selected microorganisms possess the innate ability to reduce metallic ions in colloidal aqueous solution to generate nanoparticles. Hence, harnessing this potential is a way forward in constructing microbial nano-factories capable of churning out high yields of well-defined IONPs and AgNPs with physicochemical characteristics on par with the best synthetically produced nanostructures. In this paper, we report the isolation and characterization of bacterial strains isolated from the tropical marine and freshwater ecosystems of Malaysia that demonstrated facile and rapid generation of ferromagnetic nanoparticles and silver nanostructures when precursors such as FeCl₃·6H₂O and AgNO₃ were added to the cell-free bacterial lysate in colloidal solution. Characterization of these nanoparticles was carried out using FESEM, UV spectrophotometry, XRD, DLS, and FTIR. This aerobic bioprocess was carried out at ambient temperature and humidity and has the potential to be developed for environmentally friendly, cost-effective, large-scale production of IONPs. A preliminary bioprocess study of the harvesting time, incubation temperature, and pH was also carried out to determine the pertinent abiotic parameters contributing to the optimal production of these nanostructures.
Keywords: iron oxide nanoparticles, silver nanoparticles, biosynthesis, aquatic bacteria
Procedia PDF Downloads 285