Search results for: electrical property
210 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow
Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite
Abstract:
The branch of the electricity sector that coordinates how energy demand is met by hydroelectric plants is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that supplies electrical power to the system over a given period with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant in each time interval, so that the system meets demand reliably, avoiding rationing in years of severe drought, while minimizing the expected operating cost over the planning horizon and defining an appropriate strategy for thermal complementation. Several optimization algorithms have been developed and applied specifically to this problem. Although they provide solutions to many of the difficulties encountered, these algorithms have weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or limitations owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist operation planning. This paper therefore presents the development of a computational tool, named Hydro-IA, for solving the identified optimization problem while providing the user with easy handling. The Genetic Algorithm (GA) was adopted as the intelligent optimization technique, with Java as the programming language. First, the chromosomes were modeled; then the evaluation function of the problem and the operators involved were implemented; finally, the graphical interfaces for user access were drafted. The results obtained with the GA were compared with those of a nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflows from 1953 to 1955.
The results of the comparison between the GA and NLP techniques show that the operating cost obtained with the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program delivered coherent performance in solving the problem without requiring simplification of the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
Keywords: energy, optimization, hydrothermal power systems, artificial intelligence, genetic algorithms
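The GA workflow described in this abstract (chromosome modeling, an evaluation function, and genetic operators) can be sketched in miniature. Note that Hydro-IA itself is written in Java, and the plant capacities, demand level, and quadratic thermal-cost function below are invented placeholders, not the paper's model; the sketch only illustrates the structure of the search.

```python
import random

# Chromosome: one hydro generation fraction per plant per interval.
# All constants below are hypothetical round numbers.
N_PLANTS, N_INTERVALS, POP, GENS = 7, 24, 50, 200
DEMAND = 1000.0    # MW, assumed constant demand
HYDRO_CAP = 120.0  # MW per plant, assumed

def thermal_cost(chrom):
    """Evaluation function: cost of the thermal complementation."""
    cost = 0.0
    for t in range(N_INTERVALS):
        hydro = sum(chrom[p * N_INTERVALS + t] for p in range(N_PLANTS)) * HYDRO_CAP
        thermal = max(DEMAND - hydro, 0.0)   # shortfall covered thermally
        cost += 0.02 * thermal ** 2          # convex fuel-cost proxy
    return cost

def evolve(seed=1):
    rng = random.Random(seed)
    size = N_PLANTS * N_INTERVALS
    pop = [[rng.random() for _ in range(size)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=thermal_cost)
        parents = pop[:POP // 2]             # truncation selection
        children = []
        while len(parents) + len(children) < POP:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(size)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(size)          # single-gene Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=thermal_cost)

best = evolve()
```

After a few hundred generations the best chromosome's cost falls well below that of a uniform mid-range schedule, which is the qualitative behavior the abstract compares against NLP.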
Procedia PDF Downloads 420
209 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste
Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera
Abstract:
Biotransformation of organic waste into biogas is considered an interesting alternative for producing clean energy from renewable sources while reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies for transforming waste into fertilizer and biogas, from which electrical energy or biofuel can be obtained within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and (3) temperature-phased anaerobic digestion (TPAD) processes, which combine a thermophilic pretreatment unit with subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. On the other hand, co-digestion is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. Valorization is approached through the application of a temperature-phased process, or TPAD (Temperature-Phased Anaerobic Digestion) technology. Moreover, a two-phase separation of the microorganisms is considered. Thus, the selected process allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase, obtaining hydrogen (H₂) in the first stage and methane (CH₄) in the second stage.
The combination of these technologies makes it possible to unify the individual advantages of each of these anaerobic digestion processes. To achieve these objectives, a sequential study was carried out in which the biochemical hydrogen potential (BHP) was tested, followed by a BMP test, allowing the feasibility of the two-stage process to be checked. The best results obtained were high total and soluble COD yields (59.8% and 82.67%, respectively), as well as production rates of 12 L H₂/kg VS added for hydrogen and 28.76 L CH₄/kg VS added for methane for TPAD.
Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste
Procedia PDF Downloads 156
208 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. Neural networks transmit information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law into infinitely long cylinders under the usual core-conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of the equivalent cylinders, with electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy than the cat models, and the squid exergy loss and entropy generation are nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analyses show that the energy dissipated in the dendritic trees, the exergy loss, and the entropy generation are directly proportional to the electrotonic length. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the single-action-potential Na+ ion load, the metabolic energy utilization, and their thermodynamic aspects were evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to neurons in the form of adenosine triphosphate (ATP). The exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
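The 3/2 power law and the electrotonic length L used in the cable model of this abstract can be made concrete with a short sketch. The membrane and axial resistivities and the branch geometry below are generic textbook-style placeholders, not the parameters fitted to the cat, guinea pig, or squid data.

```python
import math

# Cable-model bookkeeping: space constant, electrotonic length, and
# Rall's 3/2 rule at a branch point. R_M and R_I are assumed values.
R_M = 2000.0  # membrane resistivity, ohm*cm^2 (assumed)
R_I = 70.0    # axial resistivity, ohm*cm (assumed)

def space_constant(d_cm):
    """lambda = sqrt(R_m * d / (4 * R_i)) for a cylinder of diameter d."""
    return math.sqrt(R_M * d_cm / (4.0 * R_I))

def electrotonic_length(l_cm, d_cm):
    """Physical length expressed in units of the space constant."""
    return l_cm / space_constant(d_cm)

def obeys_rall_rule(d_parent, daughters, rel_tol=1e-9):
    """3/2 power law: d_parent^1.5 equals the sum of the daughters' d^1.5."""
    lhs = d_parent ** 1.5
    return abs(lhs - sum(d ** 1.5 for d in daughters)) <= rel_tol * lhs

# Two equal daughters satisfy the rule when each has diameter d / 2^(2/3).
d_parent = 4e-4                          # 4 micrometres, in cm (assumed)
d_daughter = d_parent / 2 ** (2.0 / 3.0)
```

With these placeholder resistivities, a 0.5 mm branch of this diameter lands inside the 0.1 to 1.5 electrotonic-length range scanned in the study.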
Procedia PDF Downloads 394
207 Experimental Analysis of the Influence of Water Mass Flow Rate on the Performance of a CO2 Direct-Expansion Solar Assisted Heat Pump
Authors: Sabrina N. Rabelo, Tiago de F. Paulino, Willian M. Duarte, Samer Sawalha, Luiz Machado
Abstract:
Energy use is one of the main indicators of the economic and social development of a country, reflected directly in the population's quality of life. The expansion of energy use, together with the depletion of fossil resources and the poor efficiency of energy systems, has led many countries in recent years to invest in renewable energy sources. In this context, the solar-assisted heat pump has become very important in the energy industry, since it can transfer heat energy from the sun to water or another absorbing medium. The direct-expansion solar-assisted heat pump (DX-SAHP) water heater operates by receiving solar energy incident on a solar collector, which serves as the evaporator in a refrigeration cycle, while the energy rejected by the condenser is used for water heating. In this paper, a DX-SAHP using carbon dioxide as the refrigerant (R744) was assembled, and the influence of varying the water mass flow rate on the system was analyzed. Parameters such as the high pressure, water outlet temperature, gas cooler outlet temperature, evaporator temperature, and coefficient of performance were studied. The main components used to assemble the heat pump were a reciprocating compressor, a gas cooler (a countercurrent concentric-tube heat exchanger), a needle valve, and an evaporator consisting of a bare copper flat-plate solar collector designed to capture direct and diffuse radiation. Routines were developed in LabVIEW to collect the data and in CoolProp, accessed through MATLAB, to calculate the thermodynamic properties. The measured coefficient of performance ranged from 3.2 to 5.34. It was observed that, at higher water mass flow rates, the water outlet temperature decreased and, consequently, the coefficient of performance of the system increased, since the heat transfer in the gas cooler is higher. In addition, the high pressure of the system and the CO2 gas cooler outlet temperature decreased.
The heat pump using carbon dioxide as refrigerant, especially when operating with solar radiation, proved to be an efficient renewable-energy system for heating residential water compared to electrical heaters, reaching temperatures between 40 °C and 80 °C.
Keywords: water mass flow rate, R-744, heat pump, solar evaporator, water heater
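The coefficient of performance reported above follows from a water-side energy balance on the gas cooler, COP = Q̇_gc / Ẇ_comp with Q̇_gc = ṁ·c_p·ΔT. The flow rates, temperatures, and compressor powers below are invented round numbers chosen only to mimic the reported trend (higher flow, lower outlet temperature, higher COP); they are not measurements from this experiment.

```python
# Water-side energy balance for the gas cooler. All operating points
# below are hypothetical illustrations of the trend in the abstract.
CP_WATER = 4.186  # kJ/(kg*K), specific heat of water

def cop(m_dot_kg_s, t_in_c, t_out_c, compressor_power_kw):
    """COP = heat delivered to the water / electrical compressor work."""
    q_gc_kw = m_dot_kg_s * CP_WATER * (t_out_c - t_in_c)  # kW to water
    return q_gc_kw / compressor_power_kw

# Doubling the flow lowers the outlet temperature but raises the COP:
cop_low_flow = cop(0.010, 25.0, 80.0, 0.72)   # ~3.2
cop_high_flow = cop(0.020, 25.0, 52.0, 0.53)  # ~4.3
```

Both illustrative operating points fall inside the 3.2 to 5.34 COP band reported in the abstract.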
Procedia PDF Downloads 176
206 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population suffers epileptic seizure attacks. Due to the abrupt flow of charge, the EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the displayed EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, so many methods have evolved to detect them automatically. The initial part of this paper provides a review of the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study was conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century.
It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention, to bring about effective epileptic seizure control or its elimination altogether.
Keywords: EEG signal, Reiki, time consuming, epileptic seizure
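Among the discriminative features listed in this abstract, sample entropy is representative: it scores a signal's irregularity, and ictal spiking changes that score. The implementation below is a simplified, unoptimized sketch with arbitrary choices of template length m and tolerance, not the exact estimator used in the reviewed studies.

```python
import math

def sample_entropy(x, m=2, r_frac=0.2):
    """Simplified SampEn: -log of the ratio of (m+1)-length to m-length
    template matches within a Chebyshev tolerance of r_frac * SD."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd

    def matches(length):
        temps = [x[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(temps)):
            for j in range(i + 1, len(temps)):
                if max(abs(a - b) for a, b in zip(temps[i], temps[j])) <= r:
                    hits += 1
        return hits

    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a > 0 else float("inf")

# A regular rhythm is more predictable than noise-like data,
# so it should score lower.
regular = [math.sin(0.5 * i) for i in range(80)]
```

A normal-versus-ictal discriminator would compute this (together with approximate or fuzzy entropy) on windowed, decomposed EEG and feed the values to a classifier.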
Procedia PDF Downloads 406
205 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx-based quantum dots to investigate their use in intermediate band solar cells (IBSC). These materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. The quantum dot layer optimizes the position and bandwidth of the intermediate band (IB), which lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD), with geometries including spherical, cubic, and conical, has been embedded in a CsPbBr3 matrix. Bound-energy wavefunctions give rise to minibands, which result in the formation of the IB; if there is more than one miniband, there is the possibility of more than one IB, and optimization of the QD size results in more IBs in the forbidden region. The one-band time-independent Schrödinger equation, using the effective mass approximation with a step potential barrier, is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation to calculate the eigenenergies, which are obtained for the quasi-bound states using an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation, considering the quantum dot and wetting layer assembly. The results show that varying the quantum dot size affects the energy pinning of the QD; changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries.
Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground-state energy at small radii. Increasing the wetting layer thickness exhibits energy signatures similar to the bulk material for each QD size.
Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
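The transfer matrix method named in this abstract propagates plane-wave amplitudes through a piecewise-constant potential and reads the transmission off the total matrix. In the sketch below, the 0.3 eV / 2 nm barrier and the free-electron mass are placeholders, not the fitted CsPbI3/MAPbI3 band offsets or effective masses; for a tunneling electron it reproduces the textbook single-barrier formula.

```python
import cmath

# Transfer-matrix sketch for a piecewise-constant 1D potential.
HBAR2_2M = 3.81  # hbar^2 / (2 m0) in eV*angstrom^2 (free-electron mass)

def wavevector(energy_ev, potential_ev):
    # Imaginary inside a classically forbidden (barrier) region.
    return cmath.sqrt((energy_ev - potential_ev) / HBAR2_2M)

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def step(k_from, k_to):
    """Interface matrix from matching psi and psi' at a boundary."""
    r = k_to / k_from
    return [[(1 + r) / 2, (1 - r) / 2], [(1 - r) / 2, (1 + r) / 2]]

def transmission(energy_ev, segments):
    """segments = [(V_eV, width_angstrom), ...] between identical V = 0 leads."""
    k_lead = wavevector(energy_ev, 0.0)
    m = [[1 + 0j, 0j], [0j, 1 + 0j]]
    k_prev = k_lead
    for v, w in segments:
        k_seg = wavevector(energy_ev, v)
        # Free propagation across the segment of width w.
        prop = [[cmath.exp(-1j * k_seg * w), 0j], [0j, cmath.exp(1j * k_seg * w)]]
        m = matmul(m, matmul(step(k_prev, k_seg), prop))
        k_prev = k_seg
    m = matmul(m, step(k_prev, k_lead))
    return abs(1 / m[0][0]) ** 2  # same wavevector in both leads

T = transmission(0.1, [(0.3, 20.0)])  # 0.1 eV electron, 0.3 eV x 2 nm barrier
```

Because a barrier's wavevector comes out imaginary, the same product of matrices handles tunneling and over-the-barrier transport without any special cases, which is why the method extends naturally to QD/wetting-layer stacks.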
Procedia PDF Downloads 165
204 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health
Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz
Abstract:
COVID-19 is a pandemic that has disrupted the public health and economic systems of entire countries, including Colombia. The Nariño Department, in the southwest of the country, draws attention for being on the border with Ecuador, constantly facing a demographic transition that affects infections between the two countries. In Nariño, the early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with the biological flexibility of Risk Group 3 agents can take advantage of epidemiological opportunities, generating the need to increase clinical diagnostic capacity, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of the zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by harmonizing methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems together with a finely controlled electrical system and PCR systems as new diagnostic tools, which increases laboratory capacity. Protection in the BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, respiratory syncytial virus, and malaria, among others.
The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, with the laboratory able to receive about 60 samples daily. It is expected that this project will be articulated with the health policies of neighboring countries to increase research capacity.
Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia
Procedia PDF Downloads 91
203 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites
Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze
Abstract:
Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite products in the same nanocrystalline state is still a problem, because the processes of compaction and synthesis of nanocrystalline powders involve intensive particle growth, which promotes the formation of pieces in an ordinary crystalline state instead of the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. The main advantage of the SPS method over other methods is the low temperature and short time of the sintering procedure, which ultimately provides the opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal, and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to fully and effectively densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity, and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacture. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying the sintering temperature, holding time, and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites.
Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. The structure and the physical-chemical, mechanical, and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.
Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering
Procedia PDF Downloads 377
202 Artificial Intelligence Impact on the Australian Government Public Sector
Authors: Jessica Ho
Abstract:
AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks to improve decision-making and efficiency; it is embedded in sensors and used in automation to save time and eliminate human errors in repetitive tasks. Today, we see growth in the use of AI to collect vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised service based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted by the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility, however, comes with a higher risk and exposure to the health and safety of citizens.
On the other side, public agencies struggle to keep up with this pace while minimising risks, but the low entry cost and open-source nature of generative AI have led to a rapid organic increase in the development of AI-powered apps: "There is an AI for That" in Government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, and regulation and legislation frameworks, will need to be evaluated against AI's unique challenges, given its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny, which impact the safety of consumers and increase risks for Government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence to show that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage; if such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.
Keywords: artificial intelligence, machine learning, rules, governance, government
Procedia PDF Downloads 70
201 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract:
Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signaling systems; among them, the system of mimic facial movements is worthy of attention. Basic emotion theorists have identified specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of the lower half of the face being covered by masks and therefore not being investigable. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions); (2) cognitive activity (perplexity, concentration, boredom); (3) temperament and personality traits (hostility, sociability, shyness); (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders); and (5) the psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles: see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of the electrical discharges of contracting muscles (facial electromyography; EMG). However, the measuring system developed by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile.
Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject is wearing a mask. We were able to identify specific alterations in the subjects' hand-movement patterns and upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.
Keywords: facial action coding system, COVID-19, masks, facial analysis
Procedia PDF Downloads 79
200 Integrating Wearable-Textiles Sensors and IoT for Continuous Electromyography Monitoring
Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove
Abstract:
Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material that was embroidered onto a fabric substrate. The sensor was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module. The Wi-Fi-enabled module then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles. The signal quality was comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. The sensor is soft, stretchable, and washable, making it suitable for long-term use. This makes it ideal for use in clinical settings, where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers. The volunteers were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV). The frequency-domain analysis was used to extract features such as the power spectrum. 
The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable and that can successfully acquire and transmit EMG signals. The results of this study demonstrate that a wearable textile sensor can be developed that meets the requirements of being soft, stretchable, washable, and capable of acquiring and transmitting EMG signals. This sensor has the potential to improve the way EMG is used in a variety of settings.
Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor
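The time-domain features named in this abstract, RMS and ARV, reduce to a few lines of windowed arithmetic. The window and step lengths below are arbitrary illustrative choices; real values depend on the sampling rate of the MCU's analog front end.

```python
# Time-domain EMG feature extraction over overlapping windows.
def rms(window):
    """Root mean square of one analysis window."""
    return (sum(v * v for v in window) / len(window)) ** 0.5

def arv(window):
    """Average rectified value of one analysis window."""
    return sum(abs(v) for v in window) / len(window)

def sliding_features(signal, win=200, step=100):
    """(RMS, ARV) pairs over overlapping windows of the EMG stream."""
    return [(rms(signal[s:s + win]), arv(signal[s:s + win]))
            for s in range(0, len(signal) - win + 1, step)]
```

In the architecture described above, such features could be computed either on the MCU before transmission or on the server after the Wi-Fi module uploads the raw stream; RMS is always at least as large as ARV for the same window.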
Procedia PDF Downloads 75
199 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy
Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar
Abstract:
High incidences of protein-energy malnutrition as a consequence of low protein intake are quite prevalent among children in developing countries; thus, the prevention of under-nutrition has emerged as a critical challenge for India's development planners in recent times. The increase in population over the last decade has put greater pressure on existing animal protein sources, but these resources are currently declining due to persistent drought, diseases, natural disasters, the high cost of feed, and the low productivity of local breeds, and this decline in productivity is most evident in some developing countries. The need of the hour is therefore to search for the efficient utilization of unconventional, low-cost animal protein resources. Molluscs as a group are regarded as an under-exploited source of health-promoting molecules. Bivalvia is the second largest class of the phylum Mollusca, and annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large bodies of perennial water in the Indian subcontinent and is well accepted as food all over India. Moreover, ethno-medicinal use of the flesh of Lamellidens among rural people to treat hypertension has been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. The shells were removed, and the flesh was preserved at −20 °C until analysis. Tissue homogenate was prepared for proximate studies. The fatty acid and amino acid compositions were analyzed, and the vitamin, mineral, and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties.
Ferric Reducing Antioxidant Power (FRAP) and DPPH antioxidant assays were performed. The anti-hypertensive property was evaluated by an angiotensin-converting enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%) and reducing sugar (4.75±0.07%), but a low amount of fat (1.02±0.20%). The moisture content is quite high, but the ash content is very low. The phospholipid content is significantly high (19.43%). The lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of proximate nutrients between Labeo rohita, Lamellidens and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity and protein solubility. Progressive anti-oxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate offers opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.
Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate
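The degree of hydrolysis of an Alcalase digest such as the one described above is commonly estimated with the pH-stat method. A minimal sketch of that calculation, using Adler-Nissen's pH-stat relation; every numeric value below (base consumption, α, protein mass, h_tot) is an illustrative assumption, not a measurement from this study:

```python
def degree_of_hydrolysis(base_ml, base_n, alpha, protein_g, h_tot):
    """pH-stat degree of hydrolysis (%): DH = B*Nb / (alpha * MP * h_tot) * 100.
    base_ml * base_n gives the consumed base in meq; h_tot is the total number
    of peptide bonds (meq per g protein); alpha is the average dissociation
    degree of the alpha-amino groups at the stat pH and temperature."""
    base_meq = base_ml * base_n
    return base_meq / (alpha * protein_g * h_tot) * 100.0

# Illustrative values only: 2.0 mL of 1.0 N NaOH consumed, alpha = 0.88
# (pH 8, 50 degC), 5 g protein in the digest, h_tot = 8.6 meq/g (assumed)
dh = degree_of_hydrolysis(2.0, 1.0, 0.88, 5.0, 8.6)
print(round(dh, 2))  # ~5.29 %
```

The same function works for any pH-stat run once α and h_tot are looked up for the actual substrate and conditions.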
198 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening
Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan
Abstract:
Background: New reliable and robust techniques to assess the biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify the key physicochemical parameters of NMs that are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300k (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 h. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the trypan blue dye exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells, followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderately toxic to non-toxic at all concentrations used under these conditions. A higher toxic effect was seen with the smaller TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interference between the IFC and the NMs used was seen. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions.
Conclusion: The results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, and it proved to be less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this novel label-free technique shows good potential for up-scaling toward automated high-throughput screening and future NM toxicity assessment. This work was supported by the EC FP7 NANoREG (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway, project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.
Keywords: cytotoxicity, high-throughput, impedance, nanomaterials
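Cytotoxicity screens like the one above are often summarized by fitting a dose-response curve to viability data and reporting an EC50. A minimal sketch using the study's exposure concentrations but entirely synthetic viability numbers (the EC50 of 30 µg/mL and the Hill slope are assumptions for illustration, not results from this work):

```python
import numpy as np
from scipy.optimize import curve_fit

# Exposure concentrations used in the study (ug/mL); viability is synthetic
conc = np.array([2.0, 10.0, 20.0, 50.0, 100.0])

def hill(c, ec50, n):
    """Two-parameter log-logistic (Hill) dose-response: 100% viability -> 0%."""
    return 100.0 / (1.0 + (c / ec50) ** n)

# Synthetic "measurements" generated from an assumed EC50 of 30 ug/mL
viability = hill(conc, 30.0, 1.5)

(ec50_fit, n_fit), _ = curve_fit(hill, conc, viability, p0=(20.0, 1.0))
print(round(ec50_fit, 1))  # recovers ~30.0
```

With real IFC or trypan blue data, `viability` would simply be replaced by the measured fractions, and the fitted EC50 values could then be compared across the eight NMs.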
197 Ytterbium Advantages for Brachytherapy
Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev
Abstract:
High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted into the tumor area. The isotopes Ir-192 and (much less often) Co-60 are used as the active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, implies the insertion of many permanent sources (up to 200) of lower activity. Pulse dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well suited to the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since a longer stay in a shielded room is not comfortable enough for patients. The use of Yb-169 as the active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. The dose distributions of iridium and ytterbium behave quite similarly in water or in the body, but a heavy material such as lead absorbs ytterbium radiation much more strongly than iridium or cobalt radiation. For example, a lead layer of only 2 mm is enough to reduce ytterbium radiation by a couple of orders of magnitude but is not enough to protect from iridium radiation.
We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility allows the Yb-168 concentration to be raised to 50% and consumes much less electrical power than alternative electromagnetic enrichment facilities. We have also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the limit of the theoretical values: 9.1 g/cm³ for the cubic phase of ytterbium oxide and 10 g/cm³ for the monoclinic phase. Source cores made from this ceramic have high mechanical strength and a glassy surface. The use of the ceramic allows the source activity to be increased for fixed external source dimensions.
Keywords: brachytherapy, high and pulse dose rates, radionuclides for therapy, ytterbium sources
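The shielding argument above follows the exponential attenuation law I = I₀·exp(−µx). A rough sketch comparing a 2 mm lead layer for Yb-169 and Ir-192 photons; the linear attenuation coefficients below are assumed round values for illustration, not reference data:

```python
import math

X_CM = 0.2     # 2 mm lead layer, in cm
MU_YB = 30.0   # 1/cm for lead at ~93 keV (Yb-169 average energy), assumed
MU_IR = 2.5    # 1/cm for lead at Ir-192's mean photon energy, assumed

# Fraction of the incident intensity transmitted: I/I0 = exp(-mu * x)
t_yb = math.exp(-MU_YB * X_CM)
t_ir = math.exp(-MU_IR * X_CM)

print(f"Yb-169 transmission through 2 mm Pb: {t_yb:.1e}")  # strongly attenuated
print(f"Ir-192 transmission through 2 mm Pb: {t_ir:.2f}")  # barely attenuated
```

Even with these coarse assumed coefficients, the low-energy ytterbium photons are suppressed by orders of magnitude while the iridium photons pass largely unhindered, which is the core of the shielding advantage.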
196 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions between CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to account for CNT-matrix interactions, but it complicates mesh generation and leads to high computational costs. Homogenized models that smear the fibers into the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the results of CZM with an artificially very strong interface. A mechanical model that can take interface debonding into account and achieve accuracy comparable to CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shape of the chosen CZM constitutive law does not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived.
A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and proposed homogenized model provide alternative methods to efficiently investigate the mechanical behaviors of CNT/polymer composites.
Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
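The bilinear traction-separation relationship assumed in the analytical treatment above can be sketched as a piecewise-linear cohesive law; the peak traction and separation values in the example are hypothetical, chosen only to show the shape of the curve:

```python
def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive law: linear elastic loading up to (delta0, t_max),
    then linear softening (damage) to zero traction at delta_f,
    after which the interface is fully debonded."""
    if delta <= 0.0 or delta >= delta_f:
        return 0.0                                   # undeformed or debonded
    if delta <= delta0:
        return t_max * delta / delta0                # elastic loading branch
    return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch

# Hypothetical parameters: peak traction 50 MPa at 1 nm, full debonding at 10 nm
T_MAX, D0, DF = 50.0, 1.0, 10.0
g_c = 0.5 * T_MAX * DF   # fracture energy = area under the triangle
print(bilinear_traction(0.5, D0, DF, T_MAX))  # 25.0, on the loading branch
print(bilinear_traction(5.5, D0, DF, T_MAX))  # 25.0, on the softening branch
```

The three branches of this function correspond directly to the three debonding phases (elastic, softening, debonded) into which the analytical solution is divided.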
195 Monitoring Memories by Using Brain Imaging
Authors: Deniz Erçelen, Özlem Selcuk Bozkurt
Abstract:
The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called "neural synchronization." The hippocampus is known to deteriorate with age: the proteins and hormones that repair and protect cells in the brain typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and neural pathways. For instance, Magnetic Resonance Imaging - or MRI - is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain function, a different version of this machine, Functional Magnetic Resonance Imaging - or fMRI - is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography - or EEG - is also a significant way to monitor the human brain.
The EEG is more versatile and cost-efficient than an fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain and allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and to track the contents of short-term memory almost precisely. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways far more intensive and detailed.
Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons
194 An EEG-Based Scale for Comatose Patients' Vigilance State
Authors: Bechir Hbibi, Lamine Mili
Abstract:
Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with additional tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to gain insight into the state of comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers to classify different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In our recent work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score can be computed instantaneously and can overcome many of the limitations from which the physiological scales suffer. Indeed, multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals. It demonstrates strong performance in extracting the properties of fractal and multifractal data, including signals and images.
As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale. This scale aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. The results of applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Therefore, our approach could show promising results for patients with disabilities, patients under painkillers, and other categories to whom physiological scores cannot be applied.
Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction
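To illustrate the kind of feature extraction involved, here is a compact and much-simplified MFDFA sketch estimating the generalized Hurst exponent h(q) of a surrogate signal; the scales, q values, and the random surrogate are assumptions for demonstration, not the study's actual pipeline:

```python
import numpy as np

def mfdfa_h(x, q, scales):
    """Simplified multifractal DFA: generalized Hurst exponent h(q) of x,
    from the log-log slope of the q-th order fluctuation function F_q(s)."""
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    log_f, log_s = [], []
    for s in scales:
        n = len(profile) // s
        segs = profile[:n * s].reshape(n, s)
        t = np.arange(s)
        var = np.empty(n)
        for i, seg in enumerate(segs):         # variance of residuals after
            coef = np.polyfit(t, seg, 1)       # removing a linear trend
            var[i] = np.mean((seg - np.polyval(coef, t)) ** 2)
        f_q = np.mean(var ** (q / 2.0)) ** (1.0 / q)
        log_f.append(np.log(f_q))
        log_s.append(np.log(s))
    return np.polyfit(log_s, log_f, 1)[0]      # slope = h(q)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)              # surrogate "EEG" segment
scales = [16, 32, 64, 128, 256]
h2, hm2 = mfdfa_h(noise, 2.0, scales), mfdfa_h(noise, -2.0, scales)
width = abs(hm2 - h2)   # spectrum width: near 0 for monofractal white noise
```

For monofractal white noise, h(q) stays near 0.5 for all q, so the spectrum width is small; a genuinely multifractal EEG segment would yield a wider spread, which is the sort of feature the proposed scale builds on.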
193 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework
Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi
Abstract:
Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for 'hydrogen homes' (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, few studies to date have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from across five distinct groups: (1) fuel-poor householders, (2) technology-engaged householders, (3) environmentally engaged householders, (4) technology- and environmentally engaged householders, and (5) a baseline group (n=~700) which filters out each of the smaller targeted groups (n=~350).
This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and with demographic groups impacted by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is tested for fit to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology- and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations of community benefits, appliance performance, and potential cost savings. Based on these preliminary findings, policymakers should treat it as a matter of urgency to bring hydrogen into the public consciousness in alignment with energy security, fuel poverty, and net-zero agendas.
Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization
192 Feasibility and Energy Efficiency Analysis of Chilled Water Radiant Cooling System of Office Apartment in Nigeria's Tropical Climate City
Authors: Rasaq Adekunle Olabomi
Abstract:
More than 30% of global building energy consumption is attributed to heating, ventilation and air-conditioning (HVAC), due to increasing urbanization and the need for more personal comfort. While heating is predominant in temperate regions (especially during winter), comfort cooling is constantly needed in tropical regions such as Nigeria. This makes cooling a major contributor to the peak electrical load in the tropics. Meanwhile, the high solar energy availability in tropical climate regions presents high application potential for solar thermal cooling systems, all the more so because the need for cooling mostly coincides with solar energy availability. In addition to their huge energy consumption, conventional (compressor-type) air-conditioning systems mostly use refrigerants that are regarded as environmentally unfriendly because of their ozone-depletion potential; this has made alternative cooling systems increasingly popular. The better thermal capacity and lower pumping power requirement of chilled water compared with chilled air have also made chilled water the preferred option over chilled-air cooling systems. Radiant floor chilled-water cooling is considered particularly suitable for spaces such as meeting rooms, seminar halls, auditoriums, and airport arrival and departure halls, among others. This study analyzed the feasibility and energy efficiency of solar thermal chilled water for radiant floor cooling of an office apartment in a tropical-climate city in Nigeria, with a view to recommending its up-scaling. The analysis considered weather parameters, including the available solar irradiance (kWh/m²-day), as well as the technical details of the solar thermal cooling system, to determine feasibility. The project cost, its energy savings, its emission reduction potential and its cost-to-benefit ratio are used to analyze its energy efficiency as well as the viability of the cooling system.
The techno-economic analysis of the proposed system, carried out using the RETScreen software, shows the system to be viable, but a SWOT analysis of the policy and institutional framework for promoting solar energy utilization in cooling systems reveals weaknesses, with poor infrastructure and inadequate local capacity for technological development as major challenges.
Keywords: cooling load, absorption cooling system, coefficient of performance, radiant floor, cost saving, emission reduction
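The kind of cost-to-benefit screening performed in RETScreen can be reduced to a back-of-the-envelope calculation covering the same quantities (energy savings, emission reduction, payback). All numbers below are assumed for illustration only and are not results from this study:

```python
capex = 25_000.0          # USD, installed solar thermal cooling system (assumed)
tariff = 0.10             # USD/kWh electricity price (assumed)
displaced_kwh = 18_000.0  # kWh/year of compressor cooling avoided (assumed)
grid_ef = 0.45            # kg CO2 per kWh, grid emission factor (assumed)

annual_saving = displaced_kwh * tariff            # USD saved per year
simple_payback = capex / annual_saving            # years to recover capex
co2_avoided_t = displaced_kwh * grid_ef / 1000.0  # tonnes CO2 avoided per year

print(round(simple_payback, 1), "years,", round(co2_avoided_t, 1), "t CO2/yr")
```

A full RETScreen analysis additionally discounts cash flows and models the solar resource month by month, but the cost-to-benefit logic follows this same structure.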
191 Measurements for Risk Analysis and Detecting Hazards by Active Wearables
Authors: Werner Grommes
Abstract:
Intelligent wearables (illuminated vests, hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) are flooding the market today. They contain complex electronics and are worn very close to the body, so optical measurements and limitation of the maximum luminance are needed. Smart watches are equipped with a laser diode or monitor different body currents. Special glasses generate readable text information that is received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products were tested and evaluated for risk. They must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks, and the results of this study should serve as a suggestion to do better in the future, or simply to point out the risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with inverters (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency as well as the high-frequency range, audible noises, and disturbing flashing frequencies were measured and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge, or use outside the temperature specification, so a risk analysis becomes necessary. The result of this study is that many smart wearables are worn very close to the body, and an extensive risk analysis becomes necessary. Wearers of active implants, such as a pacemaker or an implantable cardiac defibrillator, must be considered.
If the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed. A risk analysis is necessary.
Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects
190 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level
Authors: M. A. Spielmann, L. Schebek
Abstract:
In order to reach the German government's long-term national climate goals for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily energy-efficiency measures on the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for German unrefurbished residential buildings. Three distinct elements have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters differentiated according to significant parameters concerning their heat demand are used; the core differentiation of these quarters is by the construction period of the buildings. The economic assessment, the second crucial element, is executed with the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with the acquisition of debt; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment comprises a life cycle assessment for each technology combination and each quarter. The impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other with respect to their long-term climatic impacts.
The core results of the approach can be differentiated into an economic and an ecological dimension. With an annual resolution, the investment and running costs of the different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Consistent with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level
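The annuity-loan modeling of the investment costs follows the standard annuity formula. A short sketch, with the principal, interest rate, and term chosen purely as assumptions for illustration:

```python
def annuity_payment(principal, rate, years):
    """Constant annual payment of an annuity loan:
    A = P * i / (1 - (1 + i)**-n); falls back to P/n when i = 0."""
    if rate == 0.0:
        return principal / years
    return principal * rate / (1.0 - (1.0 + rate) ** (-years))

# Assumed example: 100,000 EUR heat-generation investment, 3% interest, 20 years
pay = annuity_payment(100_000.0, 0.03, 20)
print(round(pay, 2))  # ~6721.57 EUR/year
```

Summing this constant payment with the annual running costs for each technology combination gives the full-cost figure that the economic assessment compares across quarters and years.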
189 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation
Authors: Onur Ozveri, Korkut Karabag, Cagri Keles
Abstract:
Production downtime and repair costs arising from machine failures are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key issues are which machines should have repair priority, how to determine the optimal repair time to allot to these machines, and how to plan the resources needed for repair. In recent years, the Business Process Management (BPM) technique has brought effective solutions to various business problems. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of process steps. The maintenance main process and the repair sub-process are evaluated with the process management technique, which is thought to offer a solution. For this reason, this issue was discussed in an international manufacturing company, and a solution proposal was developed. The purpose of this study is the implementation of maintenance and repair work integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters such as quality, cost, time, safety and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, and safety and thermal systems for the world's leading light and heavy vehicle manufacturers. First, a project team was established in the firm. The team re-examined the current maintenance process and revised it using process management techniques. The repair process, a sub-process of the maintenance process, was also re-examined.
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines will be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the data obtained were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, at lower cost and with a lower spare parts inventory.
Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance
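The ABC prioritization step can be sketched as a cumulative-share classification over a composite impact score. The equipment names, scores, and the 80%/95% cumulative cut-offs below are illustrative assumptions, not values from the company's implementation:

```python
def abc_classify(impact_scores, a_cut=0.8, b_cut=0.95):
    """Rank equipment by composite impact score; items within the first a_cut
    of the cumulative share are class A, up to b_cut class B, the rest C."""
    total = sum(impact_scores.values())
    ranked = sorted(impact_scores.items(), key=lambda kv: kv[1], reverse=True)
    classes, cum = {}, 0.0
    for name, score in ranked:
        cum += score / total
        classes[name] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes

# Hypothetical composite scores (production impact, quality, cost, safety)
scores = {"press": 50, "lathe": 25, "mill": 10, "conveyor": 8, "pump": 7}
print(abc_classify(scores))
```

Class A machines would then get immediate repair priority and dedicated spare-part stock, class B standard handling, and class C repair only as capacity allows.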
188 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids
Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran
Abstract:
As the world moves towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks to safe and reliable operation. Therefore, to meet the rising demands on power system and transformer performance, researchers are being prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils, owing to their numerous inherent benefits, namely higher fire and flash points and an environment-friendly nature. Moreover, the addition of nanoparticles to the dielectric fluid further serves to improve the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment with a 2D space dimension, three different oil samples were modelled, and the electric field distribution was computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample was modified by the addition of nanoparticles of different radii (50 nm and 100 nm) at different interparticle distances (5 mm and 10 mm), considering a single instant of time. The nanoparticles used are non-conductive and were modelled as alumina (Al₂O₃). The geometry was modelled according to IEC standard 60897, with a standard electrode gap of 25 mm.
For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m, and 5.62×10⁶ V/m, respectively. It is observed that, among the unmodified samples, vegetable oils have a greater dielectric strength than the conventionally used mineral oils because of their higher flash points and higher values of relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence serves to improve the dielectric properties of the medium.
Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation
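The figures above can be sanity-checked against the uniform-field estimate E = V/d for the 25 mm IEC 60897 electrode gap; the ratio of the reported FEM maximum to that estimate indicates the field enhancement introduced by the electrode geometry and the dielectric. A minimal sketch (the field values and gap are from the abstract; the enhancement factor is simply the ratio):

```python
# Plausibility check for the reported FEM field maxima.
# The uniform-field estimate E = V/d for the IEC 60897 electrode gap gives a
# baseline; the ratio to the FEM maximum indicates geometric field enhancement.

V = 100e3          # applied potential, V
d = 25e-3          # standard electrode gap per IEC 60897, m

E_uniform = V / d  # uniform-field estimate, V/m

fem_max = {        # maximum field stresses reported by the FEM study, V/m
    "synthetic vegetable oil": 5.08e6,
    "olive oil": 5.11e6,
    "mineral oil": 5.62e6,
}

print(f"Uniform-field estimate: {E_uniform:.2e} V/m")  # 4.00e+06 V/m
for oil, e_max in fem_max.items():
    print(f"{oil}: enhancement factor {e_max / E_uniform:.2f}")
```

The enhancement factors (about 1.27 to 1.41 here) are consistent with a non-uniform gap field concentrating stress near the electrodes.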
Procedia PDF Downloads 141
187 Preparation of IPNs and Effect of Swift Heavy Ions Irradiation on their Physico-Chemical Properties
Authors: B. S. Kaith, K. Sharma, V. Kumar, S. Kalia
Abstract:
Superabsorbents are three-dimensional networks of linear or branched polymeric chains which can take up large volumes of biological fluids. This ability is due to the presence of functional groups such as –NH₂, –COOH, and –OH. Because of their easy availability, low production cost, non-toxicity, and biodegradability, cross-linked products based on natural materials, such as cellulose, starch, dextran, gum, and chitosan, have attracted the attention of scientists and technologists all over the world. Since natural polymers have better biocompatibility and lower toxicity than most synthetic ones, such materials can be applied in the preparation of controlled drug delivery devices, biosensors, tissue engineering, contact lenses, soil conditioning, and the removal of heavy metal ions and dyes. Gums are natural potential antioxidants and are used as food additives. They have excellent properties such as high solubility, pH stability, non-toxicity, and gelling characteristics. To date, many methods have been applied for the synthesis and modification of cross-linked materials with improved properties suitable for different applications. It is well known that ion beam irradiation can play a crucial role in synthesizing, modifying, cross-linking, or degrading polymeric materials. Irradiation of a polymer film with highly energetic heavy ions induces significant changes such as chain scission, cross-linking, structural changes, amorphization, and degradation in the bulk. Various researchers have reported the effects of low- and heavy-ion irradiation on the properties of polymeric materials and observed significant improvements in optical, electrical, chemical, thermal, and dielectric properties. Moreover, the modifications induced in the materials mainly depend on the structure, the ion beam parameters (energy, linear energy transfer, fluence, mass, and charge), and the nature of the target material.
Ion-beam irradiation is a useful technique for improving the surface properties of biodegradable polymers without affecting the bulk properties. Therefore, considerable interest has grown in studying the effects of SHI irradiation on the properties of synthesized semi-IPNs and IPNs. The present work deals with the preparation of semi-IPNs and IPNs and the impact of SHIs such as O⁷⁺ and Ni⁹⁺ irradiation on their optical, chemical, structural, morphological, and thermal properties, along with the impact on different applications. The results have been discussed on the basis of the linear energy transfer (LET) of the ions.
Keywords: adsorbent, gel, IPNs, semi-IPNs
Procedia PDF Downloads 372
186 The Characterization and Optimization of Bio-Graphene Derived From Oil Palm Shell Through Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrodes Materials in Fast Charging Supercapacitor Application
Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar
Abstract:
This research intends to address the existing knowledge gap arising from the lack of substantial studies on fabricating and characterizing bio-graphene created from Oil Palm Shell (OPS) by means of pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be procured and used for graphene-based research. The produced bio-graphene is expected to possess the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron(III) chloride (FeCl₃), and then converted thermally into bio-graphene by slow pyrolysis. The pyrolyzer's final temperature, heating rate, and residence time will be set at 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo different analyses to investigate the physicochemical properties of the two-dimensional layer of carbon atoms with sp² hybridization in a hexagonal lattice structure. The analyses are Raman spectroscopy (RAMAN), UV-visible spectroscopy (UV-VIS), Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and X-Ray Diffraction (XRD). RAMAN is used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which indicate the quality of the bio-graphene structure and the number of layers generated. To corroborate the layer count, UV-VIS may be used to confirm the layer analysis and to characterize the type of graphene procured. Clear physical images of the graphene can be obtained by TEM, to study the structural quality and layer condition, and by SEM, to study the surface quality and repeating porosity pattern.
Lastly, XRD can be used to establish the crystallinity of the produced bio-graphene and, at the same time, to assess oxygen contamination and thus the pristineness of the graphene. In conclusion, this study obtains bio-graphene from OPS as a novel material, via pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and provides characterization analyses of bio-graphene that will be beneficial for future graphene-related applications. The characterization should yield findings similar to previous papers, confirming the graphene quality.
Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis
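The D/G/2D peak analysis described above is commonly reduced to two intensity ratios. A minimal sketch, not the authors' actual pipeline: the thresholds below are widely used rules of thumb (I2D/IG for layer count, ID/IG for defect density) assumed here for illustration.

```python
# Illustrative sketch (not the study's analysis code): classifying a
# graphene-like sample from the intensities of the D, G, and 2D Raman peaks.
# Threshold values are common rules of thumb, assumed for illustration.

def classify_raman(i_d: float, i_g: float, i_2d: float) -> dict:
    """Estimate defect level and layer count from Raman peak intensities."""
    id_ig = i_d / i_g      # higher ratio -> more structural defects
    i2d_ig = i_2d / i_g    # higher ratio -> fewer layers
    if i2d_ig > 1.6:
        layers = "monolayer"
    elif i2d_ig > 0.8:
        layers = "few-layer"
    else:
        layers = "multilayer / graphitic"
    quality = "low defect density" if id_ig < 0.2 else "defective"
    return {"ID/IG": id_ig, "I2D/IG": i2d_ig, "layers": layers, "quality": quality}

# Example with hypothetical peak intensities (arbitrary units):
print(classify_raman(i_d=120.0, i_g=1000.0, i_2d=900.0))
```

For the hypothetical intensities above, the sketch reports a few-layer sample with low defect density.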
Procedia PDF Downloads 84
185 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. In particular, owing to the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to limiting the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is operational parameters, such as magnetic field alignment, affecting design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations. HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which provides the minimized reactance required by power utilities. A full transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation, due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first-ever reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
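The full transposition described above, periodically swapping conductors between positions, can be sketched as a simple round-robin schedule; this is an illustrative model, not Infinity Physics' actual winding plan. The point of full transposition is that each conductor occupies every cross-sectional position for an equal length, balancing flux linkage between conductors and minimizing net reactance.

```python
# Illustrative round-robin full transposition schedule (assumed model, not
# the actual winding plan): over one cycle, each conductor occupies every
# cross-sectional position for an equal length.

def transposition_schedule(n_conductors: int, n_steps: int) -> list[list[int]]:
    """Return, per step, the position index occupied by each conductor."""
    return [
        [(c + s) % n_conductors for c in range(n_conductors)]
        for s in range(n_steps)
    ]

schedule = transposition_schedule(n_conductors=4, n_steps=4)
for step, positions in enumerate(schedule):
    print(f"step {step}: conductor -> position {positions}")

# Full transposition property: every conductor visits every position once
# per cycle, so the average position (and flux linkage) is equal for all.
for c in range(4):
    assert sorted(row[c] for row in schedule) == [0, 1, 2, 3]
```

A real machine must additionally respect the HTS tape's bend radius and stress limits when executing each swap, which is what the robotic control sequencing above is for.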
Procedia PDF Downloads 140
184 Technical and Economic Potential of Partial Electrification of Railway Lines
Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong
Abstract:
Electrification of railway lines allows an increase in the speed, power, capacity, and energy efficiency of rolling stock. However, this process of electrification is complex and costly. An electrification project is not just about the design of the catenary; it also includes the installation of structures around the electrification, such as substations, electrical isolation, signalling, telecommunication, and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The other 47% are served by diesel locomotives and represent only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the usual one, is needed to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, the power supply of these hybrid trains could come either from the catenary or from an on-board energy storage system (ESS). Thus, the on-board ESS would feed the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. This paper deals with identifying the technical and economic potential of partial electrification of railway lines. The study provides different scenarios of electrification, replacing the most expensive places to electrify with the use of the on-board ESS. The target is to reduce the cost of new electrification projects, i.e., to reduce the cost of electrification infrastructure without increasing the cost of rolling stock. In this study, scenarios are constructed as a function of the electrification cost of each structure.
The electrification cost varies considerably because the installation of catenary supports in tunnels, bridges, and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case: a railway line located in the south of France. The energy consumption and the power demanded at each point of the line for each power supply (catenary or on-board ESS) are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.
Keywords: electrification, hybrid, railway, storage
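The cost evaluation described above can be sketched in a few lines: sum the civil engineering cost of the zones kept electrified, and size the on-board ESS from the worst-case energy needed between two electrified zones. All segment lengths, per-km costs, consumption figures, and the ESS price below are assumptions for illustration, not values from the study.

```python
# Illustrative scenario-cost sketch (all numeric inputs are assumed, not from
# the study): total cost = civil cost of electrified zones + ESS cost, where
# the ESS is sized for the longest unbroken run of non-electrified track.

from dataclasses import dataclass

@dataclass
class Segment:
    length_km: float
    electrified: bool
    civil_cost_per_km: float      # EUR/km, higher in tunnels/bridges/viaducts
    consumption_kwh_per_km: float

def scenario_cost(segments: list[Segment], ess_cost_per_kwh: float) -> tuple[float, float]:
    """Return (total scenario cost in EUR, required ESS capacity in kWh)."""
    catenary_cost = sum(s.length_km * s.civil_cost_per_km
                        for s in segments if s.electrified)
    # The ESS must cover the longest unbroken run of non-electrified segments,
    # since it recharges from the catenary only in electrified zones.
    worst_run, run = 0.0, 0.0
    for s in segments:
        run = 0.0 if s.electrified else run + s.length_km * s.consumption_kwh_per_km
        worst_run = max(worst_run, run)
    return catenary_cost + worst_run * ess_cost_per_kwh, worst_run

line = [
    Segment(20, True,  1.0e6, 12),   # open track, cheap to electrify
    Segment(5,  False, 4.0e6, 15),   # tunnel section, left unelectrified
    Segment(15, True,  1.0e6, 12),
]
total, ess_kwh = scenario_cost(line, ess_cost_per_kwh=500.0)
print(f"ESS capacity: {ess_kwh} kWh, scenario cost: {total:.2e} EUR")
```

Leaving the tunnel unelectrified here trades 5 km of expensive catenary works for a modest ESS, which is exactly the trade-off the scenarios in the study explore.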
Procedia PDF Downloads 431
183 Managing Crowds at Sports Mega Events: Examining the Impact of ‘Fan Parks’ at International Football Tournaments between 2002 and 2016
Authors: Joel Rookwood
Abstract:
Sports mega events have become increasingly significant in sporting, political, and economic terms, with analysis often focusing on issues including resource expenditure, development, legacy, and sustainability. Transnational tournaments can inspire interest from a variety of demographics, and the operational management of such events can involve contributions from a range of personnel. In addition to television audiences, events also attract attending spectators, and in football contexts the temporary migration of fans from potentially rival nations and teams can present event organising committees and security personnel with various challenges in relation to crowd management. The behaviour, interaction, and control of supporters have previously led to incidents of disorder and hooliganism, with damage to property as well as injuries and deaths among the significant consequences. The Heysel tragedy at the 1985 European Cup final in Brussels is a notable example, where 39 fans died following crowd disorder and mismanagement. Football disasters and disorder, particularly in the context of international competition, have inspired responses from police, lawmakers, event organisers, clubs, and associations, including stadium improvements, legislative developments, and crowd management practices to improve the effectiveness of spectator safety. The growth and internationalisation of fandom, together with developments in event management and tourism, have seen various responses to the evolving challenges associated with hosting large numbers of visiting spectators at mega events. In football contexts, ‘fan parks’ are a notable example. Since their first widespread introduction at the 2006 World Cup finals in Germany, these facilities have become a staple element of such mega events. This qualitative, longitudinal, multi-continent research draws on extensive semi-structured interview and observation data.
As a frame of reference, this work considers football events staged before and after the development of fan parks. Research was undertaken at four World Cup finals (Japan 2002, Germany 2006, South Africa 2010, and Brazil 2014), four European Championships (Portugal 2004, Switzerland/Austria 2008, Poland/Ukraine 2012, and France 2016), four other confederation tournaments (Ghana 2008, Qatar 2011, USA 2011, and Chile 2015), and four European club finals (Istanbul 2005, Athens 2007, Rome 2009, and Basle 2016). This work found that these parks are typically temporarily erected, specifically located zones where supporters congregate together, irrespective of allegiances, to watch matches on large screens and partake in other forms of organised on-site entertainment. Such facilities can also allow organisers to control the behaviour, confine the movement, and monitor the alcohol consumption of supporters. This represents a notable shift in policy from previous football tournaments, when the widely assumed causal link between alcohol and hooliganism, which frequently shaped legislative and police responses to disorder, also dissuaded some authorities from permitting fans to consume alcohol in and around stadia. It also reflects changing attitudes towards modern football fans. The work also found that, in certain contexts, supporters have increasingly engaged with such provision, which impacts fan behaviour, but that this is relative to factors including location, facilities, management, and security.
Keywords: event, facility, fan, management, park
Procedia PDF Downloads 313
182 Examining Employee Social Intrapreneurial Behaviour (ESIB) in Kuwait: Pilot Study
Authors: Ardita Malaj, Ahmad R. Alsaber, Bedour Alboloushi, Anwaar Alkandari
Abstract:
Organizations worldwide, particularly in Kuwait, are concerned with implementing a progressive workplace culture and fostering social innovation behaviours. The main aim of this research is to examine and establish a thorough comprehension of the relationship between an inventive organizational culture, employee intrapreneurial behaviour, authentic leadership, employee job satisfaction, and employee job commitment in the manufacturing sector of Kuwait, a developed economy. The literature review analyses the core concepts and their related areas by scrutinizing their definitions, dimensions, and importance; this examination of relevant research uncovered major gaps in understanding. This study examines the reliability and validity of a newly developed questionnaire designed to identify the appropriate applications for a large-scale investigation. A preliminary investigation was carried out with a sample of 36 respondents selected randomly from a pool of 223. SPSS was utilized to calculate the percentages of the participants' demographic characteristics, assess the credibility of the measurements, evaluate the internal consistency, validate all agreements, and determine Pearson's correlation. The study's results indicated that the majority of participants were male (66.7%), aged between 35 and 44 (38.9%), and possessed a bachelor's degree (58.3%). Approximately 94.4% of the participants were employed full-time. 72.2% of the participants were employed in the electrical, computer, and ICT sector, whilst 8.3% worked in the metal industry. Of all the departments, the human resource department had the highest level of engagement, making up 13.9% of the total. Most participants (36.1%) possessed intermediate or advanced levels of experience, whilst 21% were classified as entry-level.
Furthermore, 8.3% of individuals were categorized as first-level management, 22.2% as middle management, and 16.7% as executive or senior management. Around 19.4% of the participants had over a decade of professional experience. The Pearson's correlation coefficients for all five components vary between 0.4009 and 0.7183. The results indicate that all elements of the questionnaire were effectively verified, with Cronbach's alpha predominantly exceeding 0.6, the criterion commonly accepted by researchers. Therefore, work on the larger scope of testing and analysis could continue.
Keywords: pilot study, ESIB, innovative organizational culture, Kuwait, validation
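The internal-consistency check behind the Cronbach's alpha figures above is a standard computation: alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score). A minimal sketch with hypothetical Likert-scale responses (not the study's data):

```python
# Illustrative sketch (responses are hypothetical, not the study's data):
# Cronbach's alpha for the internal consistency of a questionnaire scale,
# alpha = (k / (k-1)) * (1 - sum(item variances) / variance(total score)).

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i] is the list of all respondents' scores on item i."""
    k = len(items)           # number of items in the scale
    n = len(items[0])        # number of respondents

    def variance(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    totals = [sum(item[r] for item in items) for r in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Four Likert-scale items answered by six hypothetical respondents:
scores = [
    [4, 5, 3, 4, 4, 5],
    [4, 4, 3, 5, 4, 5],
    [3, 5, 2, 4, 4, 4],
    [4, 4, 3, 4, 5, 5],
]
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.3f}")  # acceptable if > 0.6 per the study
```

In SPSS this corresponds to the reliability analysis procedure; the 0.6 acceptance threshold is the one the abstract cites.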
Procedia PDF Downloads 33
181 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
The study of very high-speed electrical drives has seen a resurgence of interest in the past few years; an inventory of the scientific papers and patents dealing with the subject confirms its relevance. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties but has poor heat conduction, resulting in very poor evacuation of the eddy current losses induced in the magnets by the time and space harmonics of the stator. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sine filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in a very high-speed machine the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (specific modulation strategies, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
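The low switching-to-fundamental ratio mentioned above can be made concrete with a standard result for carrier-based PWM: the inverter's voltage harmonics appear in sidebands around multiples of the switching frequency, at f = m·f_sw ± n·f1. A minimal sketch with assumed frequencies (not the paper's drive); the point is how close these sidebands sit to the fundamental when f_sw/f1 is only around 10:

```python
# Illustrative sketch (frequencies assumed, not from the paper): carrier-based
# PWM places harmonics in sidebands around multiples of the switching
# frequency, f = m*f_sw +/- n*f1. With a low f_sw/f1 ratio these sidebands sit
# close to the fundamental and can land on a resonant or shaft-mode frequency.

def pwm_sideband_frequencies(f1: float, f_sw: float,
                             m_max: int = 2, n_max: int = 4) -> list[float]:
    """Frequencies (Hz) of carrier sidebands m*f_sw +/- n*f1, for m >= 1."""
    freqs = set()
    for m in range(1, m_max + 1):
        for n in range(0, n_max + 1):
            # some (m, n) combinations cancel for particular PWM schemes;
            # all are kept here for a conservative, scheme-agnostic list
            for sign in (+1, -1):
                f = m * f_sw + sign * n * f1
                if f > 0:
                    freqs.add(f)
    return sorted(freqs)

# High-speed example: 2 kHz fundamental with only a 10x switching ratio.
for f in pwm_sideband_frequencies(f1=2000.0, f_sw=20000.0, m_max=1, n_max=2):
    print(f"{f:.0f} Hz")  # 16000, 18000, 20000, 22000, 24000 Hz
```

The lowest sideband here is only eight times the fundamental, which illustrates why such a drive needs harmonic compensation rather than relying on the natural separation between the fundamental and the switching harmonics.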
Procedia PDF Downloads 395