Search results for: typing efficiency
4891 Exergy Analysis of a Green Dimethyl Ether Production Plant
Authors: Marcello De Falco, Gianluca Natrella, Mauro Capocelli
Abstract:
CO₂ capture and utilization (CCU) is a promising approach to reduce greenhouse gas (GHG) emissions. Many technologies in this field have recently attracted attention. However, since CO₂ is a very stable compound, its utilization as a reagent is energy intensive. As a consequence, it is unclear whether CCU processes allow for a net reduction of environmental impacts from a life cycle perspective and whether these solutions are sustainable. Among the tools available for quantifying the real environmental benefits of CCU technologies, exergy analysis is the most rigorous from a scientific point of view. The exergy of a system is the maximum obtainable work during a process that brings the system into equilibrium with its reference environment through a series of reversible processes in which the system can only interact with such an environment. In other words, exergy is an “opportunity for doing work” and, in real processes, it is destroyed by entropy generation. Exergy-based analysis is useful to evaluate the thermodynamic inefficiencies of processes, to understand and locate the main consumption of fuels or primary energy, to provide an instrument for comparison among different process configurations, and to detect solutions to reduce the energy penalties of a process. In this work, the exergy analysis of a process for the production of Dimethyl Ether (DME) from green hydrogen generated through an electrolysis unit and pure CO₂ captured from flue gas is performed. The model simulates the behavior of all units composing the plant (electrolyzer, carbon capture section, DME synthesis reactor, purification step), with the aim of quantifying the performance indices based on the Second Law of Thermodynamics and identifying the entropy generation points. Then, a plant optimization strategy is proposed to maximize the exergy efficiency.
Keywords: green DME production, exergy analysis, energy penalties, exergy efficiency
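As a concrete illustration of the exergy bookkeeping described above, the sketch below computes the specific flow exergy of a single stream and a unit-level exergy efficiency. All state values and exergy flows are illustrative assumptions, not data from the plant model.

```python
# Illustrative sketch only: specific flow exergy of a stream and a unit-level
# exergy efficiency. The state values below are placeholders, not results
# from the DME plant model.
T0 = 298.15              # K, reference (dead-state) temperature
h, h0 = 3050.0, 104.9    # kJ/kg, stream and dead-state specific enthalpy (assumed)
s, s0 = 7.12, 0.367      # kJ/(kg K), stream and dead-state specific entropy (assumed)

ex = (h - h0) - T0 * (s - s0)        # specific flow exergy, kJ/kg
print(f"specific exergy: {ex:.1f} kJ/kg")

# Exergy efficiency = useful exergy output / exergy input; the balance gives
# the exergy destroyed by entropy generation within the unit.
Ex_in, Ex_useful = 1250.0, 830.0     # kW, assumed exergy flows of one unit
print(f"exergy efficiency: {Ex_useful / Ex_in:.2%}, "
      f"destruction: {Ex_in - Ex_useful:.0f} kW")
```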
Procedia PDF Downloads 257
4890 Ontology-Based Fault Detection and Diagnosis System: Querying and Reasoning Examples
Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes
Abstract:
One of the strongholds of the ubiquitous efforts toward energy conservation and energy efficiency improvement is the retrofit of high energy consumers in buildings. In general, HVAC systems represent the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various Fault Detection and Diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to the application at a single device/unit level. In the case of more complex systems, where multiple devices are operating in the context of the same building, significant energy efficiency improvements can only be achieved through application of comprehensive FDD systems relying on additional higher level knowledge, such as their geographical location, served area, their intra- and inter-system dependencies, etc. This paper presents a comprehensive FDD system that relies on the utilization of a common knowledge repository that stores all critical information. The discussed system is deployed as a test-bed platform at two airports in Italy, Fiumicino and Malpensa. This paper presents the advantages of implementing the knowledge base through the utilization of an ontology and illustrates the improved functionalities of such a system through examples of typical queries and reasoning that enable derivation of high level energy conservation measures (ECM). Therefore, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high level irregularities in the operation of airport heating/cooling plants is discussed, and an estimation of energy savings is reported.
Keywords: airport ontology, knowledge management, ontology modeling, reasoning
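For readers unfamiliar with ontology querying, the following is a minimal sketch of the kind of SPARQL query such an FDD system could run, here issued through Python's rdflib. The namespace, class and property names are hypothetical placeholders, not the actual airport ontologies described in the paper.

```python
# Hypothetical example of an FDD-style SPARQL query over a building ontology;
# the vocabulary below is invented for illustration only.
from rdflib import Graph

g = Graph()
g.parse("airport_ontology.owl")   # assumed local copy of the instantiated ontology

query = """
PREFIX ap: <http://example.org/airport#>
SELECT ?ahu ?zone ?temp
WHERE {
    ?ahu a ap:AirHandlingUnit ;
         ap:servesZone ?zone ;
         ap:hasSupplyTemperature ?temp .
    FILTER (?temp > 30.0)          # flag units with abnormal supply temperature
}
"""
for ahu, zone, temp in g.query(query):
    print(f"Possible fault: {ahu} serving {zone}, supply temperature {temp} °C")
```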
Procedia PDF Downloads 537
4889 Development of Mixed Matrix Membranes by Using NH₂-Functionalized UiO-66 and [APTMS][AC] Ionic Liquid for the Separation of CO₂
Authors: Hafiza Mamoona Khalid, Afshan Mujahid, Asif Ali, Asim Laeeq Khan, Mahmood Saleem, Rafael M. Santos
Abstract:
The ever-escalating CO₂ concentration in the atmosphere calls for accelerated development and deployment of carbon capture processes to reduce emissions. Mixed matrix membranes (MMMs), which are fabricated by incorporating the beneficial properties of highly selective inorganic fillers into a polymer matrix, have exhibited significant progress and the ability to enhance the performance of a membrane for gas separation. In this research, an amine-based ionic liquid (IL), [APTMS][AC], was prepared, which has greater CO₂ affinity and greater solubility due to its amine moiety. The metal–organic framework (MOF) UiO-66, with a multidimensional crystalline structure, was used as a filler due to its appropriate porosity and tunable properties, and it was functionalized with NH₂. The MOFs were further modified with the IL to prepare UiO-66@IL and UiO-66-NH₂@IL, and MMMs incorporating each MOF were fabricated with the polymer Pebax-1657. All the prepared membranes and MOFs were characterized to assess their separation efficiency. Several characterization techniques, namely FTIR spectroscopy, XRD, and SEM, were used to confirm the successful synthesis of the UiO-66@IL and UiO-66-NH₂@IL composites and showed proper dispersion and excellent polymer-filler compatibility at filler loadings ranging from 0 to 30 wt.%. The separation performances were investigated, and the results showed that the incorporation of the RTIL with the highly crystalline structure and large surface area of UiO-66 enhanced the separation efficiency of the membrane. The CO₂ permeability of all fabricated membranes continuously increased with increasing filler concentration, and the permeability was comparatively high for the UiO-66-NH₂ MMMs. The CO₂/CH₄ selectivity improved by 35%, 54%, and 60% for the UiO-66@IL, UiO-66-NH₂, and UiO-66-NH₂@IL MMMs, respectively, compared to simple UiO-66, and the CO₂/N₂ selectivity improved by 28%, 36%, and 63%, respectively, with increasing filler loading in the MMMs.
Keywords: gas separation, mixed matrix membranes, CO₂ sequestration, climate change, global warming
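As background for the reported permeability and selectivity gains, the sketch below shows how gas permeability (in Barrer) and ideal selectivity are conventionally derived from permeation measurements; the numbers are placeholders rather than the values measured for these membranes.

```python
# Back-of-envelope sketch of the standard permeability/selectivity relations;
# inputs are illustrative, not the reported Pebax-1657 MMM data.
def permeability_barrer(flux_cm3stp_per_cm2_s, thickness_cm, dp_cmHg):
    """P = flux * thickness / pressure difference, expressed in Barrer."""
    P = flux_cm3stp_per_cm2_s * thickness_cm / dp_cmHg   # cm3(STP)·cm / (cm2·s·cmHg)
    return P / 1e-10                                     # 1 Barrer = 1e-10 of that unit

P_co2 = permeability_barrer(2.0e-6, 0.005, 75.0)   # assumed CO2 measurement
P_ch4 = permeability_barrer(1.0e-7, 0.005, 75.0)   # assumed CH4 measurement
print(f"P(CO2) = {P_co2:.1f} Barrer, ideal CO2/CH4 selectivity = {P_co2 / P_ch4:.1f}")
```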
Procedia PDF Downloads 15
4888 Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄
Authors: Taslim Khan, Ray Hua Horng, Rajendra Singh
Abstract:
This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown via MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well-suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating its exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The photodetector also boasts a photo-to-dark current ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detectivity of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10⁻¹⁷ W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications. It offers a compelling combination of sensitivity, selectivity, and operational efficiency, making it a highly promising tool for environments requiring precise and reliable UV detection.
Keywords: wide bandgap, solar-blind photodetector, MOCVD, zinc gallate
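The figures of merit quoted above are linked by standard relations, sketched below with illustrative inputs (the incident power, photocurrent, and active area are assumptions, not values taken from the device report).

```python
# Standard photodetector figure-of-merit relations; input values are assumed.
import math

q = 1.602e-19          # C, elementary charge
I_photo = 2.8e-4       # A, photocurrent under UV illumination (assumed)
I_dark  = 1.0e-13      # A, dark current (~0.1 pA, as stated)
P_opt   = 1.0e-6       # W, incident optical power (assumed)
area    = 1.0e-4       # cm2, active area (assumed)

R    = (I_photo - I_dark) / P_opt                        # responsivity, A/W
PDCR = (I_photo - I_dark) / I_dark                       # photo-to-dark current ratio
D    = R * math.sqrt(area) / math.sqrt(2 * q * I_dark)   # detectivity, Jones
NEP  = math.sqrt(2 * q * I_dark) / R                     # noise-equivalent power

print(f"R = {R:.0f} A/W, PDCR = {PDCR:.1e}, D* = {D:.1e} Jones, NEP = {NEP:.1e} W/Hz^0.5")
```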
Procedia PDF Downloads 40
4887 A Conceptual Framework of Impact of Lean on the Performance of Construction Industry
Authors: Jaber Shurrab, Matloub Hussain
Abstract:
The rapid pace of changes in the construction industry, technological advancements, and rising costs present tremendous challenges for project managers. Project managers are under severe pressure to minimize waste and improve the efficiency of their entire operations, and the philosophy of ‘lean thinking’, whereby ‘more could be achieved with less’, is becoming very popular. Lean management has strong roots in the manufacturing industry, and over the last decade the lean philosophy has started gaining attention in the service industry as well. However, little is known about waste minimization and lean implementation in the construction industry, and this paper deals with this important issue. The primary objective of this paper is to propose a conceptual framework for the exploration of appropriate lean techniques applicable to medium and large construction companies and to measure their impact on the competitiveness and economic performance of construction companies of the United Arab Emirates (UAE). To this end, a comprehensive literature review and interviews with eight project managers of medium and large construction companies of the UAE have been conducted. It has been found that competitiveness, waste reduction, and cost reduction are critical to the construction industry. This is ongoing research in lean management, giving project managers a practical framework for improving the efficiency of their projects through various lean techniques. Originality/value: The significance of this research lies in increasing the effectiveness of the construction industry and informing the development of a lean construction framework that improves lean construction practices using lean techniques. This contributes to the effort of applying lean techniques in the construction industry. Few publications have addressed lean in the construction industry, particularly in the United Arab Emirates (UAE), compared to lean manufacturing. This research recommends a systematic approach for implementing the anticipated framework within a cyclical look-ahead period and emphasizes the practical implications of the proposed approach.
Keywords: construction, lean, lean manufacturing, waste
Procedia PDF Downloads 286
4886 Optimization of Quercus cerris Bark Liquefaction
Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves
Abstract:
The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize the conditions of alkaline-catalyzed liquefaction with regard to several parameters. To better comprehend the possible chemical characteristics of the bark of Quercus cerris, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacket reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents, potassium hydroxide as a catalyst, and varying the temperature, liquefaction time, and granulometry. Due to the low liquefaction efficiency resulting from the first experimental procedures, a study was made of different washing techniques after the filtration process, using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the results that led to higher yields were: using a mixture of methanol/ethylene glycol as reagents and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even if this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which proves that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.
Keywords: liquefaction, Quercus cerris, polyalcohol liquefaction, temperature
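The liquefaction percentages discussed above are usually obtained gravimetrically from the unliquefied residue; the sketch below shows that routine calculation under assumed masses (the convention and the numbers are assumptions, not the study's data).

```python
# Gravimetric liquefaction yield as commonly computed in polyalcohol
# liquefaction studies (assumed convention; masses are placeholders).
m_bark_initial = 10.00   # g, oven-dry Quercus cerris bark charged (assumed)
m_residue_dry  = 2.35    # g, dried residue after filtration and washing (assumed)

liquefaction_yield = (1 - m_residue_dry / m_bark_initial) * 100
print(f"liquefaction yield: {liquefaction_yield:.1f} %")
```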
Procedia PDF Downloads 333
4885 Key Principles and Importance of Applied Geomorphological Maps for Engineering Structure Placement
Authors: Sahar Maleki, Reza Shahbazi, Nayere Sadat Bayat Ghiasi
Abstract:
Applied geomorphological maps are crucial tools in engineering, particularly for the placement of structures. These maps provide precise information about the terrain, including landforms, soil types, and geological features, which are essential for making informed decisions about construction sites. The importance of these maps is evident in risk assessment, as they help identify potential hazards such as landslides, erosion, and flooding, enabling better risk management. Additionally, these maps assist in selecting the most suitable locations for engineering projects. Cost efficiency is another significant benefit, as proper site selection and risk assessment can lead to substantial cost savings by avoiding unsuitable areas and minimizing the need for extensive ground modifications. Ensuring the maps are accurate and up-to-date is crucial for reliable decision-making. Detailed information about various geomorphological features is necessary to provide a comprehensive overview. Integrating geomorphological data with other environmental and engineering data to create a holistic view of the site is one of the most fundamental steps in engineering. In summary, the preparation of applied geomorphological maps is a vital step in the planning and execution of engineering projects, ensuring safety, efficiency, and sustainability. In the Geological Survey of Iran, the preparation of these applied maps has enabled the identification and recognition of areas prone to geological hazards such as landslides, subsidence, earthquakes, and more. Additionally, areas with problematic soils, potential groundwater zones, and safe construction sites are identified and made available to the public.
Keywords: geomorphological maps, geohazards, risk assessment, decision-making
Procedia PDF Downloads 23
4884 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser
Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett
Abstract:
Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the Hydrogen Economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This in particular means the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as commercial hydrogen production technology, new analytic approaches have to be found. Acoustic emission (AE) offers the possibility to analyse the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data coming from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out. Each experiment is designed so that only specific physical processes occur and AE solely related to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within the flow channels and GDL. The resulting AE data is first separated into different events, which are defined by exceeding the noise threshold. Each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes under the noise threshold. For all these acoustic events the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak and time till the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria. Principal Component Analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This can be used as an easy way of determining which criteria convey the most information on the acoustic data. In the following, the data is ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of the two- or three-dimensional space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. Using the AE data produced beforehand allows training a self-learning algorithm and developing an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser
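The event-feature/PCA workflow described above can be prototyped in a few lines; the sketch below uses synthetic feature vectors in place of real AE events and standard scikit-learn components, purely as an illustration of the analysis chain.

```python
# Sketch of the described analysis chain: normalized AE-event feature vectors
# -> PCA -> low-dimensional space in which experiment-specific clusters can
# be sought. The feature matrix here is synthetic, not measured AE data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# columns: max peak amplitude, duration, number of peaks, peaks before max,
# average peak intensity, time to maximum
events = np.random.default_rng(0).random((200, 6))   # stand-in for extracted events

X = MinMaxScaler().fit_transform(events)   # normalize each criterion to [0, 1]
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                  # event coordinates in the reduced space

print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
# Regions of this 2-3D space occupied by events from only one experiment can
# then be associated with a specific two-phase-flow process.
```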
Procedia PDF Downloads 156
4883 Modeling and Characterization of Organic LED
Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma
Abstract:
It is well known that organic light-emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors, including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission is the result of the recombination of electrons and holes in the light-emitting layer, which are injected from the cathode and anode. To improve luminescence efficiency, holes and electrons must be present abundantly and in equal numbers and recombine swiftly in the emitting layer. The aim of this paper is to model a polymer LED and an OLED made with small molecules in order to study their electrical and optical characteristics. The first simulated structure used in this paper is a monolayer device, typically consisting of the poly (2-methoxy-5(2’-ethyl) hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode, such as Al. In the second structure we replace MEH-PPV with tris (8-hydroxyquinolinato) aluminum (Alq3). We chose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model have been used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene
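The two models named above have simple closed forms; they are written out explicitly in the sketch below with generic placeholder parameters, not the MEH-PPV or Alq3 values used in the ATLAS-SILVACO simulations.

```python
# Poole-Frenkel-like mobility and Langevin bimolecular recombination, written
# out explicitly; parameter values are generic placeholders.
import math

def poole_frenkel_mobility(E, mu0=1e-5, E0=2.0e5):
    """Field-dependent mobility mu(E) = mu0 * exp(sqrt(E / E0)), cm2/(V s)."""
    return mu0 * math.exp(math.sqrt(E / E0))

def langevin_recombination(n, p, mu_n, mu_p, eps_r=3.0):
    """Bimolecular recombination rate R = q (mu_n + mu_p) / eps * n * p, cm-3 s-1."""
    q, eps0 = 1.602e-19, 8.854e-14          # C, F/cm
    return q * (mu_n + mu_p) / (eps_r * eps0) * n * p

E = 5.0e5                                    # V/cm, assumed field in the emissive layer
mu = poole_frenkel_mobility(E)
R = langevin_recombination(1e16, 1e16, mu, mu)
print(f"mu(E) = {mu:.2e} cm2/(V s), Langevin recombination rate = {R:.2e} cm-3 s-1")
```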
Procedia PDF Downloads 554
4882 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods
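The hybrid block method itself is not reproduced here; as a point of reference, the sketch below solves a simple two-point BVP with SciPy's standard collocation solver, the kind of conventional baseline a new block method is typically compared against.

```python
# Reference solution of a sample two-point BVP (y'' + y = 0, y(0) = 0,
# y(pi/2) = 1) with SciPy's generic collocation solver -- a conventional
# baseline, not the hybrid block method proposed in the paper.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    return np.vstack([y[1], -y[0]])           # y1' = y2, y2' = -y1

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])     # y(0) = 0, y(pi/2) = 1

x = np.linspace(0.0, np.pi / 2, 11)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))
print("max error vs exact sin(x):", np.max(np.abs(sol.sol(x)[0] - np.sin(x))))
```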
Procedia PDF Downloads 33
4881 Transdermal Medicated-Layered Extended-Release Patches for Co-delivery of Carbamazepine and Pyridoxine
Authors: Sarah K. Amer, Walaa Alaa
Abstract:
Epilepsy is an important cause of mortality and morbidity, according to WHO statistics. It is characterized by the presence of frequent seizures occurring more than 24 hours apart. Carbamazepine (CBZ) is considered a first-line treatment for epilepsy. However, reports have shown that CBZ oral formulations failed to achieve optimum systemic delivery, minimize side effects, and enhance patient compliance. In addition, the literature has highlighted the lack of a therapeutically efficient CBZ transdermal formulation and the need for one, owing to its ease and convenience of application and its capability to attain higher bioavailability and more extended-release profiles compared to conventional oral CBZ tablets. This work aims to prepare CBZ microspheres (MS) embedded in a transdermal gel containing vitamin B so that both can be co-delivered. The MS were prepared by the emulsion-solvent diffusion method using Eudragit S as the core-forming polymer together with hydroxypropyl methylcellulose (HPMC). The MS appeared to be spherical and porous in nature, offering a large surface area and high entrapment efficiency of CBZ. The transdermal gel was prepared by the solvent-evaporation technique using HPMC, which offered high entrapment efficiency, and Eudragit S, which provided an extended-release profile. Polyethylene glycol, Span 80, and pyridoxine were also added. Data indicated that combinations of CBZ with pyridoxine can reduce epileptic seizures without affecting motor coordination. Extended-release profiles were evident for this system. The patches were furthermore tested for thickness, moisture content, folding endurance, spreadability, and viscosity. This novel pharmaceutical formulation could greatly influence seizure control, offering better therapeutic effects.
Keywords: epilepsy, carbamazepine, pyridoxine, transdermal
Procedia PDF Downloads 60
4880 Rethinking Riba in an Agency Theoretic Framework: Islamic Banking and Finance beyond Sophistry
Authors: Muhammad Arsalan
Abstract:
The efficiency of a financial intermediation system is assessed by its ability to achieve allocative efficiency, asset transformation, and the subsequent economic development. Islamic Banking and Finance (IBF) was conceived to serve as an alternate financial intermediation system adherent to the injunctions of Islam. A critical appraisal of the state of contemporary IBF reveals that it neither fulfills the aspirations of Islamic rhetoric nor is efficient in terms of asset transformation and economic development. This paper is an intuitive pursuit to explore the economic rationale of established principles of IBF and the reasons for the persistent divergence of IBF, which stands accused of ruses and sophistry. Disentangling the varying viewpoints, the underdevelopment of IBF has been attributed to misinterpretation of Riba, which has been explicated through a narrow fiqhi and legally deterministic approach. It presents a critical account of how incorrect conceptualization of the key injunction on Riba steered flawed institutionalization of an Islamic financial intermediation system. It also emphasizes the wrong interpretation of the ontological and epistemological sources of Islamic Law (primarily Riba) that explains the perennial economic underdevelopment of the Muslim world. Deeming ‘a collaborative and dynamic Ijtihad’ as the elixir, this paper insists on the exigency of redefining Riba, i.e., a definition that incorporates the modern modes of economic cooperation and the contemporary financial intermediation ecosystem. Finally, Riba has been articulated in an agency theoretic framework to eschew expropriation of wealth and assure protection of property rights, aimed at realizing the twin goals of a) Shari’ah adherence in true spirit, and b) financial and economic development of the Muslim world.
Keywords: agency theory, financial intermediation, Islamic banking and finance, ijtihad, economic development, Riba, information asymmetry
Procedia PDF Downloads 139
4879 Simulation-Based Evaluation of Indoor Air Quality and Comfort Control in Non-Residential Buildings
Authors: Torsten Schwan, Rene Unger
Abstract:
Simulation of thermal and electrical building performance is increasingly becoming part of an integrative planning process. Increasing requirements on energy efficiency, the integration of volatile renewable energy, smart control, and storage management often cause tremendous challenges for building engineers and architects. This mainly affects commercial or non-residential buildings. Their energy consumption characteristics differ significantly from residential ones. This work focuses on the many-objective optimization problem of indoor air quality and comfort, especially in non-residential buildings. Based on a brief description of the intermediate dependencies between different requirements on indoor air treatment, it extends existing Modelica-based building physics models with additional system states to adequately represent indoor air conditions. Interfaces to corresponding HVAC (heating, ventilation, and air conditioning) system and control models enable closed-loop analyses of occupants' requirements and energy efficiency as well as profitability aspects. A complex application scenario of a nearly-zero-energy school building shows the advantages of the presented evaluation process for engineers and architects. This way, clear identification of air quality requirements in individual rooms, together with a realistic model-based description of occupants' behavior, helps to optimize the HVAC system already in early design stages. Building planning processes can be highly improved and accelerated by increasing integration of advanced simulation methods. Those methods mainly provide suitable answers to engineers' and architects' questions regarding the increasingly broad and complex variety of suitable energy supply solutions.
Keywords: indoor air quality, dynamic simulation, energy efficient control, non-residential buildings
Procedia PDF Downloads 232
4878 The Potential in the Use of Building Information Modelling and Life-Cycle Assessment for Retrofitting Buildings: A Study Based on Interviews with Experts in Both Fields
Authors: Alex Gonzalez Caceres, Jan Karlshøj, Tor Arvid Vik
Abstract:
The life cycle of residential buildings is expected to span several decades, and 40% of European residential buildings have inefficient energy conservation measures. Existing buildings represent 20-40% of energy use and CO₂ emissions. Since net zero energy buildings are a short-term goal (to be achieved by EU countries after 2020), it is necessary to plan the next logical step, which is to prepare the existing outdated building stock for retrofitting into energy-efficient buildings. In order to accomplish this, two specialized and widespread tools can be used: Building Information Modelling (BIM) and life-cycle assessment (LCA). BIM and LCA are tools used by a variety of disciplines; both are able to represent and analyze constructions at different stages. The combination of these technologies could greatly improve retrofitting techniques, incorporating the carbon footprint and introducing a single database source for different material analyses. To this is added the possibility of considering different analysis approaches, such as costs and energy savings. These measures are expected to enrich decision-making. The methodology is based on two main activities. The first task involved the collection of data, accomplished through a literature review and interviews with experts in the retrofitting field and BIM technologies. The results of this task are presented as an evaluation checklist of BIM's ability to manage data and improve decision-making in retrofitting projects. The last activity involves an evaluation using the results of the previous tasks to check how far the IFC format can support the requirements of each specialist and its use by third-party software. The results indicate that BIM/LCA has great potential to improve the retrofitting process in existing buildings, but some modifications must be made in order to meet the requirements of the specialists, both retrofitting and LCA evaluators.
Keywords: retrofitting, BIM, LCA, energy efficiency
Procedia PDF Downloads 220
4877 Design and Thermal Analysis of Power Harvesting System of a Hexagonal Shaped Small Spacecraft
Authors: Mansa Radhakrishnan, Anwar Ali, Muhammad Rizwan Mughal
Abstract:
Many universities around the world are working on modular and low budget architecture of small spacecraft to reduce the development cost of the overall system. This paper focuses on the design of a modular solar power harvesting system for a hexagonal-shaped small satellite. The designed solar power harvesting system is composed of solar panels and power converter subsystems. The solar panel is composed of solar cells mounted on the external face of the printed circuit board (PCB), while the electronic components of power conversion are mounted on the interior side of the same PCB. The solar panel with dimensions 16.5 cm × 99 cm is composed of 36 solar cells (each solar cell is 4 cm × 7 cm) divided into four parallel banks, where each bank consists of 9 solar cells. The output voltage of a single solar cell is 2.14 V, and the combined output voltage of 9 series-connected solar cells is around 19.3 V. The output voltage of the solar panel is boosted to the satellite power distribution bus voltage level (28 V) by a boost converter working on a constant-voltage maximum power point tracking (MPPT) technique. The solar panel module is an eight-layer PCB having an embedded coil in 4 internal layers. This coil is used to control the attitude of the spacecraft, which consumes power to generate a magnetic field and rotate the spacecraft. As the power converter and distribution subsystem components are mounted on the PCB internal layer, it is mandatory to do a thermal analysis in order to ensure that the overall module temperature is within thermal safety limits. The main focus of the overall design is on compactness, miniaturization, and efficiency enhancement.
Keywords: small satellites, power subsystem, efficiency, MPPT
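The panel and bus voltages quoted above fix the nominal operating point of the boost converter; the sketch below works through that arithmetic and adds a toy constant-voltage MPPT update. The ideal converter relation and the reference voltage are simplifying assumptions, not the flight design.

```python
# Sketch of the stated voltage relations (9 series cells at ~2.14 V boosted to
# the 28 V bus) plus a toy constant-voltage MPPT update; converter model and
# reference voltage are assumptions.
v_cell, n_series, v_bus = 2.14, 9, 28.0
v_panel = n_series * v_cell                 # ~19.3 V per bank, as stated

duty = 1 - v_panel / v_bus                  # ideal boost converter: Vout = Vin / (1 - D)
print(f"panel voltage {v_panel:.1f} V, ideal duty cycle {duty:.2f}")

def constant_voltage_mppt(v_meas, duty, v_ref=17.0, step=0.005):
    """Nudge the duty cycle so the panel stays near a fixed reference voltage
    (v_ref is an assumed operating point, e.g. a fraction of open-circuit voltage).
    Raising the duty cycle draws more current and pulls the panel voltage down."""
    return duty + step if v_meas > v_ref else duty - step

duty = constant_voltage_mppt(19.3, duty)
```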
Procedia PDF Downloads 75
4876 Enhance Construction Visual As-Built Schedule Management Using BIM Technology
Authors: Shu-Hui Jan, Hui-Ping Tserng, Shih-Ping Ho
Abstract:
Construction project control attempts to obtain real-time as-built schedule information and to eliminate project delays by effectively enhancing dynamic schedule control and management. Suitable platforms for enhancing an as-built schedule visually during the construction phase are necessary and important for general contractors. As the application of building information modeling (BIM) becomes more common, schedule management integrated with the BIM approach becomes essential to enhance visual construction management implementation for the general contractor during the construction phase. To enhance visualization of the updated as-built schedule for the general contractor, this study presents a novel system called the Construction BIM-assisted Schedule Management (ConBIM-SM) system for general contractors in
Keywords: building information modeling (BIM), construction schedule management, as-built schedule management, BIM schedule updating mechanism
Procedia PDF Downloads 375
4875 Integrated Design of Froth Flotation Process in Sludge Oil Recovery Using Cavitation Nanobubbles for Increase the Efficiency and High Viscose Compatibility
Authors: Yolla Miranda, Marini Altyra, Karina Kalmapuspita Imas
Abstract:
Oily sludge wastes accumulate in upstream and downstream petroleum industry processes. The sludge still contains oil that can be used for energy storage. Recycling the sludge is a way of handling it that reduces its toxicity, and it is very probable that the remaining oil, around 20% of its volume, can be recovered. Froth flotation is a common chemical unit operation for separating fine solid particles from an aqueous suspension. The basic principle of froth flotation is the capture of oil droplets or small solids by air bubbles in an aqueous slurry, followed by their levitation and collection in a froth layer. This method is known for its low energy requirement and ease of application. However, low efficiency and the inability to treat high-viscosity feeds are the biggest problems of a froth flotation unit. This study presents a design that first manages the high viscosity of the sludge and then feeds it to the froth flotation unit, which includes a cavitation tube to turn the bubbles into nanoscale particles. The recovery in flotation starts with the collision and adhesion of hydrophobic particles to the air bubbles, followed by transportation of the hydrophobic particle-bubble aggregate from the collection zone to the froth zone, drainage and enrichment of the froth, and finally by its overflow removal from the cell top. The effective particle separation by froth flotation relies on the efficient capture of hydrophobic particles by air bubbles in three steps. The most important step is collision. Decreasing the bubble size increases the collision effect, which makes the process more efficient. The pre-treatment, froth flotation, and cavitation tube are integrated with each other. The design shows the integrated unit and its process.
Keywords: sludge oil recovery, froth flotation, cavitation tube, nanobubbles, high viscosity
Procedia PDF Downloads 379
4874 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong
Abstract:
Earthquake is one of the main loads threatening dam safety. Once the dam is damaged, it will bring huge losses of life and property to the country and people. Therefore, it is very important to study the seismic safety of the dam. Due to the complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensities, which can provide a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of vulnerability analysis. Based on the central composite test design method, the material-seismic intensity samples are established. The response surface model (RSM) with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T₁) is chosen to range from 0.1 g to 1.2 g, with an interval of 0.1 g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operation of the response surface model (RSM), which avoids 1200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of the vulnerability analysis is greatly improved.
Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure
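The efficiency gain comes from replacing repeated FE runs with algebraic evaluation of the fitted surface; the sketch below illustrates that step with an invented quadratic surrogate and synthetic material samples (the coefficients, parameter distributions, and limit value are all placeholders).

```python
# Illustration of the surrogate step: an assumed response surface (invented
# coefficients) is evaluated algebraically for 100 random material samples per
# intensity level instead of running the FE model each time.
import numpy as np

rng = np.random.default_rng(1)

def response_surface(E, xi, im):
    """Assumed surrogate: crest displacement (m) vs. elastic modulus E (GPa),
    damping ratio xi (-) and intensity measure im (g)."""
    return 0.8 * im * (40.0 / E) + 0.15 * im**2 - 2.0 * xi

im_levels = np.round(np.arange(0.1, 1.21, 0.1), 1)   # 12 intensity levels, g
limit = 0.30                                          # assumed displacement capacity, m
fragility = {}
for im in im_levels:
    E = rng.normal(30.0, 3.0, 100)                    # 100 material samples (assumed)
    xi = rng.normal(0.05, 0.01, 100)
    d = response_surface(E, xi, im)                   # algebraic evaluation, no FE run
    fragility[float(im)] = float(np.mean(d > limit))  # exceedance probability
print(fragility)
```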
Procedia PDF Downloads 240
4873 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across the AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan
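To make the idea of an energy-aware route metric concrete, the sketch below blends transmission energy, residual battery, and link quality into a single link cost and runs a plain shortest-path search over a toy topology. The weighting scheme and the centralized Dijkstra search are illustrative assumptions; they are not the reactive, localized AEAR protocol itself.

```python
# Toy illustration of an energy-aware link cost fed to a plain shortest-path
# search; weights, topology and the centralized search are assumptions.
import heapq

def link_cost(tx_energy, residual_energy, link_quality, a=0.5, b=0.3, c=0.2):
    return a * tx_energy + b / max(residual_energy, 1e-6) + c * (1.0 - link_quality)

def cheapest_route(graph, src, dst):
    """graph[u] = list of (v, cost); returns (path, total cost)."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

g = {"A": [("B", link_cost(1.0, 0.9, 0.8)), ("C", link_cost(0.6, 0.2, 0.9))],
     "B": [("D", link_cost(0.7, 0.8, 0.7))],
     "C": [("D", link_cost(0.5, 0.1, 0.9))]}
print(cheapest_route(g, "A", "D"))   # favours the path through well-charged nodes
```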
Procedia PDF Downloads 37
4872 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, high-purity germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where otherwise they would superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being practiced and high precision comes as a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the used equipment. However, experimental determination of the response, i.e., efficiency curves for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if they are not properly taken into account. In this study, the optimisation method of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal’s void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector, in the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
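For context, the experimental full-energy-peak efficiency that such Geant4 models are validated against is a simple ratio; the sketch below shows that calculation with placeholder numbers (not the XtRa or BEGe measurement data).

```python
# Experimental FEP efficiency against which the Geant4 model is validated:
# net peak counts normalised by activity, emission probability and live time.
# All numbers below are placeholders.
net_peak_counts = 15240.0   # counts in the full-energy peak, background subtracted (assumed)
live_time_s     = 3600.0    # s (assumed)
activity_bq     = 1.2e3     # Bq, certified source activity on the measurement date (assumed)
gamma_yield     = 0.851     # emission probability of the gamma line (assumed)

eff_exp = net_peak_counts / (activity_bq * gamma_yield * live_time_s)
eff_sim = 4.30e-3           # FEP efficiency predicted by the Geant4 model (assumed)

deviation = (eff_sim - eff_exp) / eff_exp * 100
print(f"experimental FEP efficiency {eff_exp:.3e}, model deviation {deviation:+.1f} %")
```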
Procedia PDF Downloads 120
4871 Fabrication and Characterization of Folic Acid-Grafted-Thiomer Enveloped Liposomes for Enhanced Oral Bioavailability of Docetaxel
Authors: Farhan Sohail, Gul Shahnaz Irshad Hussain, Shoaib Sarwar, Ibrahim Javed, Zajif Hussain, Akhtar Nadhman
Abstract:
The present study aimed to develop a hybrid nanocarrier (NC) system with enhanced membrane permeability, bioavailability, and targeted delivery of Docetaxel (DTX) in breast cancer. Hybrid NCs based on folic acid (FA)-grafted thiolated chitosan (TCS)-enveloped liposomes were prepared with DTX and evaluated in vitro and in vivo for their enhanced permeability and bioavailability. Physicochemical characterization of the NCs, including particle size, morphology, zeta potential, FTIR, DSC, PXRD, encapsulation efficiency, and drug release from the NCs, was determined in vitro. Permeation enhancement and p-gp inhibition were assessed by the everted sac method on freshly excised rat intestine, which indicated that permeation was enhanced 5 times compared to pure DTX and that the hybrid NCs were strongly able to inhibit p-gp activity as well. In vitro cytotoxicity and tumor targeting were assessed using the MDA-MB-231 cell line. The stability study of the formulations, performed for 3 months, showed the improved stability of the FA-TCS enveloped liposomes in terms of their particle size, zeta potential, and encapsulation efficiency compared to TCS NPs and liposomes. The pharmacokinetic study was performed in vivo using rabbits. The oral bioavailability and AUC0-96 were increased 10.07-fold with the hybrid NCs compared to the positive control. Half-life (t1/2) was increased 4 times (58.76 hrs) compared to the positive control (17.72 hrs). Conclusively, it is suggested that FA-TCS enveloped liposomes have strong potential to enhance the permeability and bioavailability of hydrophobic drugs after oral administration and for tumor targeting.
Keywords: docetaxel, coated liposome, permeation enhancement, oral bioavailability
Procedia PDF Downloads 408
4870 Transforming Emergency Care: Revolutionizing Obstetrics and Gynecology Operations for Enhanced Excellence
Authors: Lolwa Alansari, Hanen Mrabet, Kholoud Khaled, Abdelhamid Azhaghdani, Sufia Athar, Aska Kaima, Zaineb Mhamdia, Zubaria Altaf, Almunzer Zakaria, Tamara Alshadafat
Abstract:
Introduction: The Obstetrics and Gynecology Emergency Department at Alwakra Hospital has faced significant challenges, which have been further worsened by the impact of the COVID-19 pandemic. These challenges involve issues such as overcrowding, extended wait times, and a notable surge in demand for emergency care services. Moreover, prolonged waiting times have emerged as a primary factor contributing to situations where patients leave without receiving attention, known as left without being seen (LWBS), and unexpectedly abscond. Addressing the issue of insufficient patient mobility in the obstetrics and gynecology emergency department has brought about substantial improvements in patient care, healthcare administration, and overall departmental efficiency. These changes have not only alleviated overcrowding but have also elevated the quality of emergency care, resulting in higher patient satisfaction, better outcomes, and operational rewards. Methodology: The COVID-19 pandemic has served as a catalyst for substantial transformations in the obstetrics and gynecology emergency, aligning seamlessly with the strategic direction of Hamad Medical Corporation (HMC). The fundamental aim of this initiative is to revolutionize the operational efficiency of the OB-GYN ED. To accomplish this mission, a range of transformations has been initiated, focusing on essential areas such as digitizing systems, optimizing resource allocation, enhancing budget efficiency, and reducing overall costs. The project utilized the Plan-Do-Study-Act (PDSA) model, involving a diverse team collecting baseline data and introducing throughput improvements. Post-implementation data and feedback were analysed, leading to the integration of effective interventions into standard procedures. These interventions included optimized space utilization, real-time communication, bedside registration, technology integration, pre-triage screening, enhanced communication and patient education, consultant presence, and a culture of continuous improvement. These strategies significantly reduced waiting times, enhancing both patient care and operational efficiency. Results: Results demonstrated a substantial reduction in overall average waiting time, dropping from 35 to approximately 14 minutes by August 2023. The wait times for priority 1 cases have been reduced from 22 to 0 minutes, and for priority 2 cases, the wait times have been reduced from 32 to approximately 13.6 minutes. The proportion of patients spending less than 8 hours in the OB ED observation beds rose from 74% in January 2022 to over 98% in 2023. Notably, there was a remarkable decrease in LWBS and absconded patient rates from 2020 to 2023. Conclusion: The project initiated a profound change in the department's operational environment. Efficiency became deeply embedded in the unit's culture, promoting teamwork among staff that went beyond the project's original focus and had a positive influence on operations in other departments. This effectiveness not only made processes more efficient but also resulted in significant cost reductions for the hospital. These cost savings were achieved by reducing wait times, which in turn led to fewer prolonged patient stays and reduced the need for additional treatments. 
These continuous improvement initiatives have now become an integral part of the Obstetrics and Gynecology Division's standard operating procedures, ensuring that the positive changes brought about by the project persist and evolve over time.
Keywords: overcrowding, waiting time, person-centered care, quality initiatives
Procedia PDF Downloads 65
4869 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
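The study's programmes are written in Java; since the other sketches in this listing use Python, a minimal Python analogue of the demonstrations described (divide-and-conquer parallel sorting and asynchronous task handling) is shown below purely for illustration, not as the authors' code.

```python
# Python analogue of the demonstrations described (the study itself uses Java).
import asyncio
import random
from concurrent.futures import ProcessPoolExecutor

def parallel_sort(data, workers=4):
    """Divide the task into subtasks, sort the chunks in parallel, then merge."""
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    return sorted(x for chunk in sorted_chunks for x in chunk)

async def handle_request(i):
    await asyncio.sleep(random.random() / 10)     # simulated I/O latency
    return f"task {i} done"

async def handle_all(n=5):
    return await asyncio.gather(*[handle_request(i) for i in range(n)])

if __name__ == "__main__":
    print(parallel_sort([random.random() for _ in range(10_000)])[:3])
    print(asyncio.run(handle_all()))
```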
Procedia PDF Downloads 12
4868 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 35
4867 Formulation and Evaluation of Silibinin Loaded PLGA Nanoparticles for Cancer Therapy
Authors: Priya Patel, Paresh Patel, Mihir Raval
Abstract:
Silibinin, a flavanone used as an antimicrotubular agent in the treatment of cancer, was encapsulated in nanoparticles (NPs) of poly(lactide-co-glycolide) (PLGA) polymer using the spray-drying technique. The effects of various experimental parameters were optimized by a Box-Behnken experimental design. Production yield, encapsulation efficiency, and dissolution were studied, along with characterization by scanning electron microscopy, DSC, and FTIR, followed by a bioavailability study. Particle size and zeta potential were evaluated using a Zetatrac particle size analyzer. From the experimental design, it was found that inlet temperature and polymer concentration influence the drug release, while feed flow rate impacts the particle size. Results showed that the spray-drying technique yielded particles of 149 nm, indicating the nanosize range. The small size of the nanoparticles resulted in enhanced cellular entry and greater bioavailability. Entrapment efficiency was found to be between 89.35% and 98.36%. The zeta potential indicates a good stability index for the nanoparticle formulation. The in vitro release studies indicated that the silibinin-loaded PLGA nanoparticles provide controlled drug release over a period of 32 h. Pharmacokinetic studies demonstrated that after oral administration of silibinin-loaded PLGA nanoparticles to rats at a dose of 10 mg/kg, relative bioavailability was enhanced about 8.85-fold compared to a silibinin suspension as control. Hence, this investigation demonstrated the potential of the experimental design in understanding the effect of the formulation variables on the quality of silibinin-loaded PLGA nanoparticles. These results describe an effective strategy for silibinin-loaded PLGA nanoparticles and might provide a promising approach against cancer.
Keywords: silibinin, cancer, nanoparticles, PLGA, bioavailability
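The entrapment efficiency and the quoted 8.85-fold bioavailability gain follow from two routine calculations, sketched below with placeholder inputs rather than the study's measured values.

```python
# Routine calculations behind entrapment efficiency and relative oral
# bioavailability; inputs are placeholders, not the reported study data.
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100

def relative_bioavailability(auc_test, auc_ref, dose_test, dose_ref):
    """F_rel = (AUC_test / dose_test) / (AUC_ref / dose_ref)."""
    return (auc_test / dose_test) / (auc_ref / dose_ref)

print(f"EE = {encapsulation_efficiency(10.0, 0.6):.1f} %")
print(f"relative bioavailability = "
      f"{relative_bioavailability(885.0, 100.0, 10.0, 10.0):.2f}-fold")
```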
Procedia PDF Downloads 429
4866 Nonlinear Multivariable Analysis of CO2 Emissions in China
Authors: Hsiao-Tien Pao, Yi-Ying Li, Hsin-Chia Fu
Abstract:
This paper addresses the impacts of energy consumption, economic growth, financial development, and population size on environmental degradation using grey relational analysis (GRA) for China, where foreign direct investment (FDI) inflows are the proxy variable for financial development. The more recent historical data for the period 2004–2011 are used, because the use of very old data may not be suitable for rapidly developing countries. The results of the GRA indicate that the linkage effects of energy consumption–emissions and GDP–emissions are ranked first and second, respectively. These reveal that energy consumption and economic growth are strongly correlated with emissions. Higher economic growth requires more energy consumption and increases environmental pollution. Likewise, more efficient energy use needs a higher level of economic development. Therefore, policies to improve energy efficiency and create a low-carbon economy can reduce emissions without hurting economic growth. The finding for the FDI–emissions linkage is ranked third. This indicates that China does not apply weak environmental regulations to attract inward FDI. Furthermore, China’s government should strengthen environmental policy when attracting inward FDI. The finding for the population–emissions linkage effect is ranked fourth, implying that population size does not directly affect CO2 emissions, even though China has the world’s largest population and Chinese people are very economical in their use of energy-related products. Overall, energy conservation, efficiency improvement, demand management, and financial development, which aim at curtailing the waste of energy and reducing both energy consumption and emissions without loss of the country’s competitiveness, can be adopted by developing economies. GRA is one of the best ways to build a dynamic analysis model from limited data.
Keywords: China, CO₂ emissions, foreign direct investment, grey relational analysis
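A minimal version of the GRA computation behind these rankings is sketched below; the yearly series are placeholders, not the actual 2004–2011 data for China.

```python
# Minimal grey relational analysis: normalise each series, compute grey
# relational coefficients against the reference (emissions) and average them
# into grey relational grades. Data rows are placeholders.
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    X = np.vstack([reference, factors])
    X = (X - X.min(axis=1, keepdims=True)) / np.ptp(X, axis=1, keepdims=True)
    ref, fac = X[0], X[1:]
    delta = np.abs(fac - ref)
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)          # one grade per factor: higher = stronger linkage

emissions = np.array([5.4, 5.9, 6.4, 6.8, 7.0, 7.5, 8.0, 8.5])          # placeholder
factors = np.array([[18, 20, 22, 24, 25, 27, 29, 31],                   # energy consumption
                    [2.0, 2.3, 2.7, 3.2, 3.5, 4.0, 4.8, 5.5],           # GDP
                    [60, 70, 75, 84, 92, 95, 106, 116],                 # FDI inflows
                    [1.30, 1.31, 1.31, 1.32, 1.33, 1.33, 1.34, 1.34]])  # population
grades = grey_relational_grades(emissions, factors).round(3)
print(dict(zip(["energy", "GDP", "FDI", "population"], grades)))
```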
Procedia PDF Downloads 403
4865 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes, are used. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization is performed with a commercial approach that solves the Reynolds-averaged Navier-Stokes (RANS) equations, exploiting the axial symmetry of the machine. The geometries generated within the database are calculated in order to determine the corresponding overall performance. To speed up the optimization, an artificial neural network (ANN) based on an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure is tested to verify its validity and its ability to automatically reach the target net head and maximize the total-to-total internal efficiency.
Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
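As a hedged illustration of the kind of non-dimensional preliminary sizing the abstract describes, the sketch below derives a runner diameter, rotational speed, and specific speed from assumed values of the work and flow coefficients. The coefficient definitions (work coefficient gH/U², flow coefficient c_m/U, both evaluated at the tip radius) and all numerical values are assumptions for illustration, not the paper's correlations.

```python
# Minimal sketch of a non-dimensional preliminary sizing step for an axial
# (propeller) turbine runner. Coefficient definitions and chosen values are
# assumptions; the paper's actual correlations are not given in the abstract.
import math

g = 9.81  # m/s^2

def preliminary_sizing(H, Q, psi=0.25, phi=0.35, hub_to_tip=0.45):
    """H: net head [m], Q: volume flow [m^3/s].
    Assumed definitions: work coefficient psi = g*H / U_tip^2,
    flow coefficient phi = c_m / U_tip, annulus area set by hub-to-tip ratio."""
    U_tip = math.sqrt(g * H / psi)                # blade tip speed
    c_m = phi * U_tip                             # meridional velocity
    area = Q / c_m                                # annulus flow area
    D_tip = math.sqrt(4.0 * area / (math.pi * (1.0 - hub_to_tip**2)))
    omega = 2.0 * U_tip / D_tip                   # angular speed [rad/s]
    rpm = omega * 60.0 / (2.0 * math.pi)
    n_s = omega * math.sqrt(Q) / (g * H) ** 0.75  # dimensionless specific speed
    return {"D_tip_m": round(D_tip, 3), "rpm": round(rpm, 1),
            "specific_speed": round(n_s, 2)}

# Example: a very low-head site, 3 m head and 2 m^3/s flow (illustrative).
print(preliminary_sizing(H=3.0, Q=2.0))
```

In a workflow like the one described, a geometry sized this way would then seed the CFD-plus-genetic-algorithm loop, with the ANN acting as a fast surrogate for the objective function.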
Procedia PDF Downloads 150
4864 Dual Metal Organic Framework Derived N-Doped Fe3C Nanocages Decorated with Ultrathin ZnIn2S4 Nanosheets for Efficient Photocatalytic Hydrogen Generation
Authors: D. Amaranatha Reddy
Abstract:
Highly efficient and stable co-catalyst materials are of great importance for boosting the separation and transport efficiency of photogenerated charge carriers and for accelerating the catalytic reactive sites of semiconductor photocatalysts. It is therefore of decisive importance to fabricate low-cost, noble-metal-free co-catalysts with high catalytic reactivity, but this remains very challenging. Considering this challenge, dual metal organic framework derived N-doped Fe3C nanocages have been rationally designed and decorated with ultrathin ZnIn2S4 nanosheets for efficient photocatalytic hydrogen generation. The fabrication strategy precisely integrates the co-catalyst nanocages with ultrathin two-dimensional (2D) semiconductor nanosheets, providing tightly interconnected nano-junctions that help suppress the charge carrier recombination rate. Furthermore, the highly porous hybrid structures expose ample active sites for catalytic reduction reactions and harvest visible light more effectively through light scattering. As a result, the fabricated nanostructures exhibit a superior solar-driven hydrogen evolution rate (9600 µmol/g/h) with an apparent quantum efficiency of 3.6%, which is higher than that of Pt noble-metal co-catalyst systems and earlier reported ZnIn2S4-based nanohybrids. We believe that the present work promotes the application of sulfide-based nanostructures in solar-driven hydrogen production.
Keywords: photocatalysis, water splitting, hydrogen fuel production, solar-driven hydrogen
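The apparent quantum efficiency quoted above is conventionally estimated as twice the number of evolved H2 molecules divided by the number of incident photons. A minimal sketch of that calculation follows; the illumination wavelength, intensity, irradiated area, and catalyst loading are assumed for illustration (10 mg of catalyst at the reported 9600 µmol/g/h gives 96 µmol/h), not the paper's measurement conditions.

```python
# Minimal sketch of the standard apparent-quantum-efficiency (AQE) estimate for
# photocatalytic H2 evolution: AQE = 2 * (H2 molecules) / (incident photons).
# Wavelength, power density, area, and H2 rate below are assumed values.
h = 6.626e-34        # Planck constant [J*s]
c = 2.998e8          # speed of light [m/s]
N_A = 6.022e23       # Avogadro constant [1/mol]

def apparent_quantum_efficiency(h2_umol_per_h, wavelength_nm,
                                power_mW_cm2, area_cm2):
    """AQE [%] assuming monochromatic irradiation and 2 electrons per H2."""
    photon_energy = h * c / (wavelength_nm * 1e-9)                   # J/photon
    photon_flux = (power_mW_cm2 * 1e-3 * area_cm2) / photon_energy   # photons/s
    h2_molecules_per_s = h2_umol_per_h * 1e-6 * N_A / 3600.0
    return 100.0 * 2.0 * h2_molecules_per_s / photon_flux

# Assumed conditions: 420 nm light, 100 mW/cm^2 over 4 cm^2, 96 umol H2 per h.
print(f"AQE = {apparent_quantum_efficiency(96, 420, 100, 4):.2f} %")  # ~3.8 %
```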
Procedia PDF Downloads 134
4863 Analysis of Grid Connected High Concentrated Photovoltaic Systems for Peak Load Shaving in Kuwait
Authors: Adel A. Ghoneim
Abstract:
Air conditioning devices are used extensively in the summer months; as a result, the maximum loads in Kuwait occur in these periods. Peak energy consumption is usually more expensive to satisfy than demand met by standard power sources. The primary objective of the current work is to enhance the performance of high concentrated photovoltaic (HCPV) systems in an attempt to minimize peak power usage in Kuwait using HCPV modules. Highly concentrated multi-junction PV solar cells provide a promising route towards achieving the lowest price per kilowatt-hour. Nevertheless, these cells have various issues that must be resolved before they become feasible for large-scale power production. A single-diode equivalent circuit model is formulated to analyze the efficiency of multi-junction solar cells under Kuwait weather conditions, taking into account the effects of both temperature and concentration ratio. The diode shunt resistance, which is commonly ignored in established models, is considered in the present numerical model. The model results are validated against measurements from published data to within 1.8% accuracy. The present calculations reveal that the single-diode model including the shunt resistance provides accurate and dependable results. The electrical efficiency (η) is observed to increase with concentration up to a specific concentration level, after which it decreases; the performance of the grid-connected systems follows a similar trend with concentration. Employing grid-connected HCPV systems results in a significant peak load reduction.
Keywords: grid connected, high concentrated photovoltaic systems, peak load, solar cells
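A minimal sketch of a single-diode equivalent circuit that retains the shunt resistance is given below, with the implicit current equation solved by bisection and the photocurrent assumed to scale linearly with the concentration ratio. All parameter values (effective ideality factor for the multi-junction stack, I0, Rs, Rsh) are illustrative assumptions, not the paper's fitted data.

```python
# Minimal sketch of a single-diode PV model that keeps the shunt resistance:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# The implicit equation is solved for I by bisection at each voltage.
import math

k_B, q = 1.381e-23, 1.602e-19

def cell_current(V, Iph, I0, Rs, Rsh, n, T=298.15):
    Vt = k_B * T / q                               # thermal voltage
    def residual(I):
        return (Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
                    - (V + I * Rs) / Rsh - I)
    lo, hi = -Iph - 1.0, 2.0 * Iph + 1.0           # residual decreases with I
    for _ in range(80):                            # bisection
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def max_power(C, Iph1=0.014, I0=1e-19, Rs=0.01, Rsh=2000.0, n=4.0):
    """Max power of one cell at concentration ratio C (Iph assumed ~ C).
    n lumps the three junctions into one effective diode (assumption)."""
    Iph = Iph1 * C
    Voc_est = n * (k_B * 298.15 / q) * math.log(Iph / I0 + 1.0)
    best, V = 0.0, 0.0
    while V < Voc_est:
        best = max(best, V * cell_current(V, Iph, I0, Rs, Rsh, n))
        V += 0.01
    return best

for C in (1, 100, 500, 1000):
    p = max_power(C)
    print(f"C = {C:4d}  Pmax = {p:8.3f} W  Pmax/C = {p / C:.4f} W")
```

The shunt term matters most at low concentration, where the shunt current is a larger fraction of the photocurrent, while series-resistance losses grow with the square of the concentrated photocurrent; the Pmax/C column illustrates the resulting rise-then-fall trend with concentration.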
Procedia PDF Downloads 155
4862 Development and Characterization of Topical 5-Fluorouracil Solid Lipid Nanoparticles for the Effective Treatment of Non-Melanoma Skin Cancer
Authors: Sudhir Kumar, V. R. Sinha
Abstract:
Background: The topical and systemic toxicity associated with the present non-melanoma skin cancer (NMSC) treatment therapy using 5-Fluorouracil (5-FU) makes it necessary to develop a novel delivery system with lower toxicity and better control over drug release. Solid lipid nanoparticles offer many advantages, such as controlled and localized release of entrapped actives, nontoxicity, and better tolerance. Aim: To investigate the safety and efficacy of 5-FU-loaded solid lipid nanoparticles as a topical delivery system for the treatment of non-melanoma skin cancer. Method: Topical solid lipid nanoparticles of 5-FU were prepared using Compritol 888 ATO (glyceryl behenate) as the lipid component and Pluronic F68 (Poloxamer 188), Tween 80 (Polysorbate 80), and Tyloxapol (4-(1,1,3,3-tetramethylbutyl)phenol polymer with formaldehyde and oxirane) as surfactants. The SLNs were prepared by an emulsification method. The effects of different formulation parameters, viz. type and ratio of surfactant, ratio of lipid, and surfactant:lipid ratio, on particle size and drug entrapment efficiency were investigated. Results: The SLNs were characterized by transmission electron microscopy (TEM), differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), particle size determination, polydispersity index, entrapment efficiency, drug loading, ex vivo skin permeation and skin retention studies, and skin irritation and histopathology studies. TEM results showed that the SLNs were spherical with a size range of 200-500 nm. Higher encapsulation efficiency was obtained for batches with higher concentrations of surfactant and lipid; the maximum, 64.3%, was found for batch SLN-6, with a size of 400.1±9.22 nm and a PDI of 0.221±0.031. The optimized SLN batches and a marketed 5-FU cream were compared for flux across rat skin and skin drug retention. Lower flux and higher skin retention were obtained for the SLN formulation in comparison with the topical 5-FU cream, which ensures less systemic toxicity and better control of drug release across the skin. Chronic skin irritation studies showed no serious erythema or inflammation, and histopathology studies showed no significant change in the physiology of the epidermal layers of rat skin. These studies therefore suggest that the optimized SLN formulation is more efficient than the marketed cream and safer for long-term NMSC treatment regimens. Conclusion: The topical and systemic toxicity associated with long-term use of 5-FU in the treatment of NMSC can be minimized through its controlled release, with significant drug retention and minimal flux across the skin. The study may provide a better alternative for effective NMSC treatment.
Keywords: 5-FU, topical formulation, solid lipid nanoparticles, non melanoma skin cancer
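The entrapment efficiency, drug loading, and ex vivo flux reported above follow standard definitions; a minimal sketch of those calculations is given below. The batch quantities and permeation profile are illustrative assumptions, chosen so that the entrapment efficiency lands near the reported 64.3%.

```python
# Minimal sketch of the standard SLN evaluation calculations named in the
# abstract: entrapment efficiency, drug loading, and steady-state flux from an
# ex vivo permeation profile. All numbers are illustrative assumptions.
import numpy as np

def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = entrapped drug / total drug."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

def drug_loading(total_drug_mg, free_drug_mg, lipid_mg):
    """DL% = entrapped drug relative to total solids (entrapped drug + lipid)."""
    entrapped = total_drug_mg - free_drug_mg
    return 100.0 * entrapped / (entrapped + lipid_mg)

def steady_state_flux(time_h, cumulative_ug_per_cm2, n_last=4):
    """Flux [ug/cm^2/h] = slope of the terminal, linear part of the profile."""
    slope, _ = np.polyfit(time_h[-n_last:], cumulative_ug_per_cm2[-n_last:], 1)
    return slope

# Illustrative batch: 10 mg 5-FU, 3.6 mg unentrapped, 100 mg lipid.
print(f"EE  = {entrapment_efficiency(10, 3.6):.1f} %")     # ~64 %
print(f"DL  = {drug_loading(10, 3.6, 100):.1f} %")

t = np.array([0, 1, 2, 4, 6, 8, 12, 24], dtype=float)       # h
q = np.array([0, 2, 5, 11, 18, 25, 39, 80], dtype=float)    # ug/cm^2, made up
print(f"Jss = {steady_state_flux(t, q):.2f} ug/cm^2/h")
```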
Procedia PDF Downloads 518