Search results for: electrical machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4636

826 An Efficient Aptamer-Based Biosensor Developed via Irreversible Pi-Pi Functionalisation of Graphene/Zinc Oxide Nanocomposite

Authors: Sze Shin Low, Michelle T. T. Tan, Poi Sim Khiew, Hwei-San Loh

Abstract:

An efficient graphene/zinc oxide (PSE-G/ZnO) platform based on pi-pi stacking non-covalent interactions was developed in this study for aptamer-based biosensing. As a proof of concept, the DNA recognition capability of the as-developed PSE-G/ZnO enhanced aptamer-based biosensor was evaluated using Coconut cadang-cadang viroid disease (CCCVd). The G/ZnO nanocomposite was synthesised via a simple, green and efficient approach. The pristine graphene was produced through a single-step exfoliation of graphite by sonochemical alcohol-water treatment, while zinc nitrate hexahydrate was mixed with the graphene and subjected to low-temperature hydrothermal growth. This facile, environmentally friendly method provides a safer synthesis procedure by eliminating the need for harsh reducing chemicals and high temperatures. The as-prepared nanocomposite was characterised by X-ray diffractometry (XRD), scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) to evaluate its crystallinity, morphology and purity. Electrochemical impedance spectroscopy (EIS) was employed for the detection of the CCCVd sequence with the use of potassium ferricyanide (K3[Fe(CN)6]). Recognition of the RNA analytes was achieved via the significant increase in resistivity for double-stranded DNA compared to single-stranded DNA. The PSE-G/ZnO enhanced aptamer-based biosensor exhibited higher sensitivity than the bare biosensor, which is attributed to the synergistic effect of the high electrical conductivity of graphene and the good electroactive properties of ZnO.

Keywords: aptamer-based biosensor, graphene/zinc oxide nanocomposite, green synthesis, screen printed carbon electrode

Procedia PDF Downloads 344
825 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (Zea mays), is a widely produced agricultural product and an important part of the food chain and many other agricultural products. It is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. These facts show that the country's demand for this crop is very high while, conversely, its productivity is low for a variety of reasons. The most damaging factor contributing to this imbalance between the crop's supply and demand is corn disease, and the failure to diagnose diseases in maize plants before it is too late is one of the most important factors limiting crop output in Ethiopia. This study aids the early detection of such diseases and supports farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants such as northern leaf blight and cercospora leaf spot have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, a widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning) and, in a loose analogy, mimics how the human brain processes data; its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning architecture widely used for image classification, object detection, face recognition, and related problems. This research uses a CNN as the state-of-the-art approach to detect maize diseases by photographing maize leaves with a mobile phone.
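A minimal PyTorch sketch of the kind of CNN leaf classifier described above. The architecture, class count, image size and all hyperparameters are illustrative assumptions, not the authors' model.

```python
# Hypothetical minimal CNN for 3-class maize-leaf classification (healthy,
# northern leaf blight, cercospora leaf spot). All settings are illustrative.
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = LeafCNN()
    dummy = torch.randn(1, 3, 128, 128)   # one 128x128 RGB leaf photo
    logits = model(dummy)
    print(logits.shape)                   # torch.Size([1, 3])
```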

Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot

Procedia PDF Downloads 58
824 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of the utmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus on the effort required to minimise them. The performance of these critical-to-quality characteristics (CTQs) is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability). These metrics help identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: (1) real-time machine operational parameters available remotely, capturing the signature of each malfunction including the related boundary conditions; (2) a real-time, analytics-based alerting system available remotely; (3) remote access to trip logs and alarms from the control system to identify the cause of events; (4) continuous support to field engineers by remotely connecting them with subject matter experts; (5) live tracking of key CTQs; (6) benchmarking against the fleet; (7) breaking down the cause of failure to component level; (8) investigating the top contributors and identifying design and operational root causes; (9) implementing corrective and preventive actions; (10) assessing the effectiveness of implemented solutions using reliability growth models; and (11) developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with large cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular 12 LNG and pipeline operators with about 140 gas compression line-ups that have adopted these techniques and significantly reduced the number of trips and improved MTBT.
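A minimal sketch of how the two reliability metrics named above could be computed from a unit's event history; the operating window, trip timestamps and start counts are hypothetical values, not Baker Hughes data or tooling.

```python
# Illustrative computation of MTBT (mean time between trips) and SR (starting
# reliability) from a hypothetical event history of one monitored unit.
from datetime import datetime

# Hypothetical operating window and event log (assumed data, for illustration only).
period_start = datetime(2023, 1, 1)
period_end = datetime(2023, 12, 31)
trips = [datetime(2023, 3, 14), datetime(2023, 7, 2), datetime(2023, 11, 20)]
starts_attempted = 25
starts_successful = 23

operating_hours = (period_end - period_start).total_seconds() / 3600.0
mtbt_hours = operating_hours / len(trips) if trips else float("inf")
starting_reliability = starts_successful / starts_attempted

print(f"MTBT: {mtbt_hours:.0f} h")          # mean time between trips
print(f"SR:   {starting_reliability:.1%}")  # starting reliability
```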

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 57
823 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship using a Micro Rain Radar in Tropical Region

Authors: Tomiwa, Akinyemi Clement

Abstract:

Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
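The Z-R relationship mentioned above is commonly written as Z = a·R^b, and a and b can be estimated by linear regression in log-log space. The sketch below fits such a power law to paired reflectivity/rain-rate samples; the sample values are made up for illustration and are not the MRR measurements analysed in the study.

```python
# Fit a Z-R power law, Z = a * R**b, by least squares in log-log space.
import numpy as np

# Hypothetical paired samples: rain rate R (mm/h) and radar reflectivity Z (mm^6/m^3).
R = np.array([0.5, 1.2, 2.0, 5.5, 10.0, 20.0, 35.0])
Z = np.array([60.0, 220.0, 480.0, 2300.0, 6500.0, 21000.0, 52000.0])

# log10(Z) = log10(a) + b*log10(R): a straight line in log-log coordinates.
b, log_a = np.polyfit(np.log10(R), np.log10(Z), 1)
a = 10.0 ** log_a
print(f"Z = {a:.1f} * R^{b:.2f}")

# Invert the fitted relation to estimate rain rate from a measured reflectivity.
Z_measured = 3000.0
R_estimated = (Z_measured / a) ** (1.0 / b)
print(f"R ~ {R_estimated:.1f} mm/h for Z = {Z_measured:g}")
```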

Keywords: remote sensing, precipitation, drop size distribution, micro rain radar

Procedia PDF Downloads 8
822 Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)

Authors: Abdelkader Hachemi, Boualem Remini

Abstract:

Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results demonstrate a good agreement with the geophysical prospection data, confirming the accuracy of the sharp interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.
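For context, a classical first estimate of the depth of the freshwater-saltwater sharp interface is the Ghyben-Herzberg relation shown below; it is a standard hydrostatic approximation and is not necessarily the formulation used in the paper's finite element model.

```latex
% Ghyben-Herzberg sharp-interface approximation (hydrostatic balance).
% z : depth of the interface below mean sea level
% h : freshwater head above mean sea level
% \rho_f, \rho_s : freshwater and seawater densities
\[
  z \;=\; \frac{\rho_f}{\rho_s - \rho_f}\, h
  \;\approx\; \frac{1000}{1025 - 1000}\, h \;=\; 40\, h
\]
```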

Keywords: seawater, intrusion, sharp interface, Algeria

Procedia PDF Downloads 54
821 Corporate Digital Responsibility in Construction Engineering-Construction 4.0: Ethical Guidelines for Digitization and Artificial Intelligence

Authors: Weber-Lewerenz Bianca

Abstract:

Digitization is developing fast and has become a powerful tool for digital planning, construction, and operations. This transformation bears high potential for companies, is critical for their success, and thus requires responsible handling. This study assesses the calls made in the United Nations Sustainable Development Goals (SDGs) and in white papers on AI by international institutions, the EU Commission and the German Government, which request the consideration and protection of values and fundamental rights, a careful demarcation between machine (artificial) and human intelligence, and the careful use of such technologies. The study discusses digitization and the impact of artificial intelligence (AI) in construction engineering from an ethical perspective, generating data through case studies and expert interviews as part of a qualitative method. It critically evaluates the opportunities and risks revolving around corporate digital responsibility (CDR) in the construction industry. To the author's knowledge, no study has set out to investigate how CDR in construction could be conceptualized, especially in relation to digitization and AI, to support digital transformation in large, medium-sized, and small companies alike. No study has addressed the key research questions: Where can CDR be allocated, and how should an adequate ethical framework be designed to support digital innovations and make full use of the potential of digitization and AI? Now is the right time for constructive approaches that apply ethics-by-design in order to develop and implement safe and efficient AI. This represents the first study in construction engineering to apply a holistic, interdisciplinary, inclusive approach that provides guidelines for orientation, examines the benefits of AI and defines ethical principles as the key drivers for success, resource-cost-time efficiency, and sustainability when using digital technologies and AI in construction engineering to enhance digital transformation. Innovative corporate organizations starting new business models are more likely to succeed than those dominated by conservative, traditional attitudes.

Keywords: construction engineering, digitization, digital transformation, artificial intelligence, ethics, corporate digital responsibility, digital innovation

Procedia PDF Downloads 214
820 Enhancing Robustness in Federated Learning through Decentralized Oracle Consensus and Adaptive Evaluation

Authors: Peiming Li

Abstract:

This paper presents an innovative blockchain-based approach to enhance the reliability and efficiency of federated learning systems. By integrating a decentralized oracle consensus mechanism into the federated learning framework, we address key challenges of data and model integrity. Our approach utilizes a network of redundant oracles, functioning as independent validators within an epoch-based training system in the federated learning model. In federated learning, data is decentralized, residing on various participants' devices. This scenario often leads to concerns about data integrity and model quality. Our solution employs blockchain technology to establish a transparent and tamper-proof environment, ensuring secure data sharing and aggregation. The decentralized oracles, a concept borrowed from blockchain systems, act as unbiased validators. They assess the contributions of each participant using a Hidden Markov Model (HMM), which is crucial for evaluating the consistency of participant inputs and safeguarding against model poisoning and malicious activities. A distinct feature of our methodology is its epoch-based training. An epoch here refers to a specific training phase where data is updated and assessed for quality and relevance. The redundant oracles work in concert to validate data updates during these epochs, enhancing the system's resilience to security threats and data corruption. The effectiveness of this system was tested using the MNIST dataset, a standard machine learning benchmark. Results demonstrate that our blockchain-oriented federated learning approach significantly boosts system resilience, addressing the common challenges of federated environments. This paper aims to make these advanced concepts accessible, even to those with a limited background in blockchain or federated learning. We provide a foundational understanding of how blockchain technology can revolutionize data integrity in decentralized systems and explain the role of oracles in maintaining model accuracy and reliability.
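A minimal sketch of the epoch-based, validator-filtered aggregation idea described above. It replaces the paper's HMM scoring and blockchain oracle network with a simple coordinate-wise-median consistency check, purely for illustration; the function name, threshold and data are assumptions.

```python
# Illustrative epoch aggregation: "oracle" validators flag client updates that
# deviate strongly from the coordinate-wise median before federated averaging.
# This is a simplified stand-in for the paper's HMM-based, blockchain-backed scoring.
import numpy as np

def validate_and_aggregate(client_updates: np.ndarray, tol: float = 3.0) -> np.ndarray:
    """client_updates: shape (n_clients, n_params). Returns the aggregated update."""
    median = np.median(client_updates, axis=0)
    dist = np.linalg.norm(client_updates - median, axis=1)   # distance from median update
    cutoff = tol * (np.median(dist) + 1e-12)
    accepted = dist <= cutoff                                # validators' accept/reject vote
    print(f"accepted {accepted.sum()} of {len(accepted)} client updates")
    return client_updates[accepted].mean(axis=0)             # plain FedAvg over accepted clients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(9, 5))               # nine consistent clients
    poisoned = np.full((1, 5), 10.0)                         # one poisoned/outlier update
    updates = np.vstack([honest, poisoned])
    print(validate_and_aggregate(updates))
```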

Keywords: federated learning system, blockchain, decentralized oracles, hidden Markov model

Procedia PDF Downloads 43
819 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduced an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data with interesting marker genes that are potentially relevant for medical research.
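A small sketch in the spirit of the approach described above: build a shared-nearest-neighbour (SNN) graph, detect communities by modularity, and drop vertices with weak membership in their own community. The synthetic data, the value of k and the membership threshold are assumptions for demonstration only, not the authors' pipeline.

```python
# Illustrative SNN clustering with removal of weakly assigned vertices.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
cells = np.vstack([rng.normal(0, 1, (50, 20)),      # "cell type" A
                   rng.normal(5, 1, (50, 20)),      # "cell type" B
                   rng.uniform(-2, 7, (10, 20))])   # scattered "noise" cells

k = 10
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(cells)
_, idx = nbrs.kneighbors(cells)                     # idx[:, 0] is the cell itself
neigh = [set(row[1:]) for row in idx]

# SNN graph: edge weight = number of shared neighbours between two cells.
G = nx.Graph()
G.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in idx[i, 1:]:
        shared = len(neigh[i] & neigh[int(j)])
        if shared > 0:
            G.add_edge(i, int(j), weight=shared)

# Modularity-based community detection, then drop vertices whose edge weight
# inside their own community is a small fraction of their total edge weight.
communities = greedy_modularity_communities(G, weight="weight")
label = {v: c for c, comm in enumerate(communities) for v in comm}
kept = []
for v in G.nodes:
    w_total = sum(d["weight"] for _, _, d in G.edges(v, data=True))
    w_in = sum(d["weight"] for _, u, d in G.edges(v, data=True) if label[u] == label[v])
    if w_total > 0 and w_in / w_total >= 0.5:       # assumed "strong membership" cut-off
        kept.append(v)
print(f"kept {len(kept)} of {G.number_of_nodes()} cells after noise filtering")
```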

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 87
818 Behavior of Epoxy Insulator with Surface Defect under HVDC Stress

Authors: Qingying Liu, S. Liu, L. Hao, B. Zhang, J. D. Yan

Abstract:

HVDC technology is becoming increasingly popular due to its simplicity in topology and lower power loss over long transmission distances, in comparison with HVAC technology. However, the dielectric behavior of insulators in the long term under HVDC stress is completely different from that under HVAC stress, as a result of charge accumulation in a constant electric field. Insulators used in practical systems are never perfect in their structural condition; over time, shallow cracks may develop on their surface. The presence of defects can lead to drastic changes in their dielectric behaviour and thus increase the probability of surface flashover. In this contribution, experimental investigations have been carried out on the charge accumulation phenomenon on the surface of a rod insulator made of epoxy, placed between two disk-shaped electrodes, at different voltage levels and in different gases (SF6, CO2 and N2). Many results were obtained, such as the two-dimensional electrostatic potential distribution along the insulator surface after removal of the power source following a pre-defined period of voltage application. The probe was carefully calibrated before each test. Results show that the surface charge distribution near the two disk-shaped electrodes is not uniform in the circumferential direction, possibly due to imperfect electrical connections between the conductor embedded in the insulator and the disk-shaped electrodes. The axial length of this non-uniform region is determined experimentally, which provides useful information for shielding design. A charge transport model is also used to explain the formation of the long-term electrostatic potential distribution under a constant applied voltage.

Keywords: HVDC, power systems, dielectric behavior, insulation, charge accumulation

Procedia PDF Downloads 210
817 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities, such as lexical, syntactic, semantic, anaphoric and referential ambiguities. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing. Lemma adds generality and POS adds properties of the word to the token. We have designed a novel method to create an affinity matrix to calculate the affinity between any pair of lemma_POS (a token where lemma and POS of a word are joined by an underscore) in the given training set. Additionally, we have devised an algorithm to create the sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of the target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the higher value becomes the sense of the target word. So, contextual tokens play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
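A toy sketch of the affinity-matrix idea described above: co-occurrence counts between lemma_POS tokens are normalised into affinities, and the sense of an ambiguous token is predicted by summing the affinities of its context tokens towards each candidate sense. The tiny corpus, the sense inventory, and the scoring details are all assumptions, not the CSM training procedure.

```python
# Toy affinity-based word sense disambiguation in the spirit of the Contextual
# SenSe Model. Corpus, senses, and normalisation are illustrative assumptions.
from collections import Counter, defaultdict

# Tiny sense-tagged "training set" of lemma_POS tokens (hypothetical).
train = [
    (["deposit_NOUN", "money_NOUN", "bank_NOUN", "account_NOUN"], "bank#finance"),
    (["loan_NOUN", "bank_NOUN", "interest_NOUN"], "bank#finance"),
    (["river_NOUN", "bank_NOUN", "water_NOUN", "fish_NOUN"], "bank#river"),
    (["boat_NOUN", "river_NOUN", "bank_NOUN", "mud_NOUN"], "bank#river"),
]

# Affinity(sense, context token) = relative co-occurrence frequency.
cooc = defaultdict(Counter)
for tokens, sense in train:
    for tok in tokens:
        if tok != "bank_NOUN":
            cooc[sense][tok] += 1
affinity = {s: {t: c / sum(cnt.values()) for t, c in cnt.items()} for s, cnt in cooc.items()}

def predict(context_tokens):
    """Pick the sense whose summed affinity with the context is highest."""
    scores = {s: sum(aff.get(t, 0.0) for t in context_tokens) for s, aff in affinity.items()}
    return max(scores, key=scores.get), scores

print(predict(["fisherman_NOUN", "river_NOUN", "water_NOUN"]))   # -> bank#river
```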

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 86
816 News Reading Practices: Traditional Media versus New Media

Authors: Nuran Öze

Abstract:

People always want to be aware of what is happening around them; human curiosity constantly triggers the need to gather information, and the media emerged to meet this need. The media has changed with technological developments over time and diversified, so that people's information needs are met in different ways. Today, the Internet has become an integral part of everyday life, and its penetration into everyday practices affects every aspect of life and causes people to change those practices. Technological developments have always influenced the way people reach information. Looking at the history of the media, the breaking point in the dissemination of information is the invention of the printing press. The adventure that started with print media has now become a multi-dimensional structure: print, audio and visual media have changed shape with new technologies. The emergence of the Internet in everyday life, in particular, has had effects on the media field. 'New media' has appeared, which contains most traditional media features within it. While on the one hand this transformation creates a harmony between traditional and new media, on the other hand new media and traditional media rival each other. The purpose of this study is to examine the problematic relationship between traditional media and new media through the news reading practices of individuals; the study can be evaluated as a kind of media sociology. To reach this aim, two field studies will be conducted in addition to a literature review. The research will be conducted in Northern Cyprus, which is located in the Mediterranean Sea and is a country recognized only by Turkey; despite this, it takes its share of all the technological developments taking place in the world. The first field study will consist of questionnaires on readers' news reading practices, conducted in a social media environment. The second will take the form of in-depth interviews with editors-in-chief or news directors in traditional media. As a result of these investigations, the ways in which new media and traditional media support each other and the directions in which they contrast will be revealed. In addition, the study will try to understand the attitudes and perceptions of readers about traditional media and new media.

Keywords: new media, news, North Cyprus, traditional media

Procedia PDF Downloads 214
815 Investigation of Irrigation Water Quality at Al-Wafra Agricultural Area, Kuwait

Authors: Mosab Aljeri, Ali Abdulraheem

Abstract:

The water quality of five water types at Al-Wuhaib farm, Al-Wafra area, was studied through onsite field measurements, including pH, temperature, electrical conductivity (EC), and dissolved oxygen (DO), for four different water types. Biweekly samples were collected and analyzed for two months to obtain data on chemicals, nutrients, organics, and heavy metals. The field and laboratory results were compared with the irrigation standards of the Kuwait Environmental Public Authority (KEPA). The pH values of the five sample sites were within the maximum and minimum limits of the KEPA standards. Based on EC values, two groups of water types were observed. The first group represents freshwater quality originating from the freshwater Ministry of Electricity & Water & Renewable Energy (MEWRE) line, from freshwater tanks or from treated wastewater. The second group represents a brackish water type originating from groundwater or from treated water mixed with groundwater. The study indicated that all nitrogen forms (ammonia, total Kjeldahl nitrogen (TKN), total nitrogen (TN)), total phosphate concentrations and all tested heavy metals for the five water types were below the KEPA standards. These macro- and micro-nutrients are essential for plant growth and can be used as fertilizers. The study suggests that the groundwater in the farming area should be treated and disinfected. Such studies should also be carried out routinely for all farm areas to ensure safe water use and safe agricultural produce.

Keywords: salinity, heavy metals, ammonia, phosphate

Procedia PDF Downloads 62
814 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE's energy efficiency, smartphone interoperability, and over-the-air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed with size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results presented in the form of mathematical models and computer software simulations. Such modeling and simulation are important for the comprehension of the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin-cell-battery-powered BLE data acquisition device, with a 4-in-1 sensor and an accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely; this is followed by a power analysis with each sensor individually turned on and transmitting data, and concludes with a power consumption evaluation with both sensors on and broadcasting their data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, where the energy levels demonstrated are matched to the BLE behavior and sensor activity.

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 75
813 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Interest in human motion recognition has increased extensively in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and for classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
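A minimal sketch of discrete-HMM classification as used above: one HMM per gesture class scores an observation sequence with the forward algorithm, and the class with the highest log-likelihood wins; scoring the reversed sequence with a second, backward-trained model (as the paper proposes) would reuse the same routine. The model parameters here are random placeholders, not trained values.

```python
# Discrete HMM scoring with the (scaled) forward algorithm, and classification
# by the class model with the highest log-likelihood. Parameters are placeholders.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """obs: sequence of symbol indices; pi: (S,), A: (S,S), B: (S,V)."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()              # rescale to avoid numerical underflow
        loglik += np.log(scale)
        alpha /= scale
    return loglik + np.log(alpha.sum())

def random_dhmm(n_states=4, n_symbols=8, seed=0):
    rng = np.random.default_rng(seed)
    norm = lambda m: m / m.sum(axis=-1, keepdims=True)
    return norm(rng.random(n_states)), norm(rng.random((n_states, n_states))), \
           norm(rng.random((n_states, n_symbols)))

if __name__ == "__main__":
    classes = {name: random_dhmm(seed=i) for i, name in enumerate(["wave", "turn_left", "stop"])}
    sequence = [0, 3, 3, 5, 2, 7, 1]     # a quantized motion-feature sequence (made up)
    scores = {name: forward_loglik(sequence, *params) for name, params in classes.items()}
    print(max(scores, key=scores.get), scores)
```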

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 186
812 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), and AI concepts are applicable in Human-Computer Interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and those with speech disorders; communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for Lesotho's Sesotho and English language interpretation, helping to bridge the communication problems encountered by the mentioned communities. The system has several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny-pruned Haar and Haar cascade detection algorithms; Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm that performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested through various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
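A rough Python/OpenCV analogue of the detection steps named above (skin mask, Canny edges, convex hull, centroid); the original system uses EmguCV in C#, and the thresholds, HSV skin range and file names below are common illustrative values, not the authors' settings.

```python
# Rough OpenCV sketch of the hand-detection steps: skin mask, Canny edges,
# largest contour, convex hull, centroid. Thresholds are illustrative only.
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")                       # hypothetical input image
if frame is None:
    raise SystemExit("provide a hand.jpg test image")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Simple HSV skin mask (commonly used range; tune for real conditions).
skin = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
edges = cv2.Canny(cv2.GaussianBlur(skin, (5, 5), 0), 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)        # assume largest blob is the hand
    hull = cv2.convexHull(hand)
    m = cv2.moments(hand)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
        print(f"hand centroid at ({cx}, {cy})")
cv2.imwrite("hand_detected.jpg", frame)
```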

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 161
811 Treatment of Healthcare Wastewater Using The Peroxi-Photoelectrocoagulation Process: Predictive Models for Chemical Oxygen Demand, Color Removal, and Electrical Energy Consumption

Authors: Samuel Fekadu A., Esayas Alemayehu B., Bultum Oljira D., Seid Tiku D., Dessalegn Dadi D., Bart Van Der Bruggen A.

Abstract:

The peroxi-photoelectrocoagulation process was evaluated for the removal of chemical oxygen demand (COD) and color from healthcare wastewater. A 2-level full factorial design with center points was created to investigate the effect of the process parameters, i.e., initial COD, H₂O₂, pH, reaction time and current density. Furthermore, the total energy consumption and average current efficiency of the system were evaluated. Predictive models for % COD removal, % color removal and energy consumption were obtained. The initial COD and pH were found to be the most significant variables in the reduction of COD and color in the peroxi-photoelectrocoagulation process. Hydrogen peroxide has a significant effect on the treated wastewater only when combined with other input variables in the process, such as pH, reaction time and current density. In the peroxi-photoelectrocoagulation process, current density appears not as a single effect but rather as an interaction effect with H₂O₂ in reducing COD and color. Lower energy expenditure was observed at higher initial COD, shorter reaction times and lower current densities. The average current efficiency was found to be as low as 13% and as high as 777%. Overall, the study showed that hybrid electrochemical oxidation can be applied effectively and efficiently for the removal of pollutants from healthcare wastewater.

Keywords: electrochemical oxidation, UV, healthcare pollutants removals, factorial design

Procedia PDF Downloads 60
810 Deep Cryogenic Treatment With Subsequent Aging Applied to Martensitic Stainless Steel: Evaluation of Hardness, Tenacity and Microstructure

Authors: Victor Manuel Alcántara Alza

Abstract:

The effect of deep cryogenic treatment (DCT, -196°C) followed by subsequent aging on the hardness, toughness and microstructure of martensitic stainless steels was investigated, with the aim of establishing a methodology different from the traditional DCT cryogenic treatment with subsequent tempering. For this experimental study, a muffle furnace was used, first subjecting the specimens to deep cryogenic treatment in a liquid nitrogen bath for 4 h after they had been previously austenitized at 1020, 1030, 1040 and 1050 °C for 1 hour and then quenched in oil. A first group of cryogenically treated samples was subjected to subsequent aging at 150°C, with holding times of 2.5, 5, 10, 20, 50 and 100 h. The next group was subjected to subsequent tempering at temperatures of 480, 500, 510, 520, 530 and 540 °C for 2 h. The hardness tests were carried out under standards using a universal durometer, with readings made on the HRC scale. The impact resistance tests were carried out in a Charpy machine following the ASTM E 23-93a standard, with measurements taken in joules. Microscopy was performed at the optical level using a 1000X microscope. It was found that, for the entire aging interval, the samples austenitized at 1050°C present greater hardness than those austenitized at 1040°C, with the maximum peak for samples aged 30 h. In all cases, the aged samples exceed the hardness of the tempered samples, even at their minimum values. In post-tempered samples, the tempering temperature has hardly any effect on the impact strength of the material. In the cryogenic treatment DCT with subsequent aging, the maximum hardness value (58.7 HRC) is linked to an impact toughness value (54 J) obtained with an aging time of 39 h, which is considered an optimal condition. The higher hardness of the steel after the DCT treatment is attributed to the transformation of retained austenite into martensite. The microstructure is composed mainly of lath martensite, and the original grain size of the austenite can be appreciated. The choice of the hardness-toughness combination is subject to the required service conditions of the steel.

Keywords: deep cryogenic treatment, aging precipitation, martensitic steels, mechanical properties, hardness, carbide precipitation

Procedia PDF Downloads 61
809 Investigation on Solar Thermoelectric Generator Using D-Mannitol/Multi-Walled Carbon Nanotubes Composite Phase Change Materials

Authors: Zihua Wu, Yueming He, Xiaoxiao Yu, Yuanyuan Wang, Huaqing Xie

Abstract:

The combination of a solar thermoelectric generator (STEG) and phase change materials (PCM) can enhance solar energy storage and reduce the environmental impact of day-night transitions and weather changes. This work utilizes a D-mannitol (DM) matrix as a suitable PCM for coupling with a thermoelectric generator to achieve middle-temperature solar energy storage at 165-167°C. DM/MWCNT composite phase change materials prepared by ball milling not only keep the high phase change enthalpy of the DM material but also show a high photo-thermal conversion efficiency of 82%. Based on a self-made storage device container, the effect of PCM thickness on the solar energy storage performance is further discussed and analyzed. The experimental results prove that the PCM-STEG coupling system can output more electric energy than a pure STEG system, because the PCM slows the heat transfer and stores thermal energy that can be converted to electric energy through thermal-to-electric conversion when the light is removed. Increasing the PCM thickness reduces the heat transfer and enhances thermal storage, improving the power generation performance of the PCM-STEG coupling system. As the light intensity increases, the output electric energy of the coupling system rises accordingly, and the maximum amount of electrical energy reaches 113.85 J at 1.6 W/cm2. This study of the PCM-STEG coupling system provides a reference for the development of solar energy storage and its applications.

Keywords: solar energy, solar thermoelectric generator, phase change materials, solar-to-electric energy, DM/MWCNT

Procedia PDF Downloads 50
808 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of an enterprise changes rapidly, driven mainly by global competition, cost reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies this task is difficult because it should efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate a realistic production schedule and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable useful analyses of a wide variety of processes, such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of the production scheduling software system using process mining techniques, since every software system generates event logs for further use such as security investigation, auditing and debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the work load balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of a generated production schedule, the quality of production schedules in manufacturing enterprises can be improved.
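A small sketch of one of the evaluation criteria named above, workstation utilization, computed directly from scheduling event log data with pandas; the log layout (case id, workstation, start, end) and the sample records are assumptions, not a specific process mining tool's format.

```python
# Compute workstation utilization and spot candidate bottlenecks from a
# hypothetical scheduling event log (case id, workstation, start, end).
import pandas as pd

log = pd.DataFrame([
    {"case": "JOB-1", "workstation": "WS-A", "start": "2024-01-01 08:00", "end": "2024-01-01 10:00"},
    {"case": "JOB-1", "workstation": "WS-B", "start": "2024-01-01 10:15", "end": "2024-01-01 12:00"},
    {"case": "JOB-2", "workstation": "WS-A", "start": "2024-01-01 10:00", "end": "2024-01-01 15:30"},
    {"case": "JOB-2", "workstation": "WS-B", "start": "2024-01-01 15:45", "end": "2024-01-01 16:30"},
])
log["start"] = pd.to_datetime(log["start"])
log["end"] = pd.to_datetime(log["end"])
log["busy_h"] = (log["end"] - log["start"]).dt.total_seconds() / 3600.0

horizon_h = (log["end"].max() - log["start"].min()).total_seconds() / 3600.0
util = log.groupby("workstation")["busy_h"].sum() / horizon_h
print(util.sort_values(ascending=False))   # the most loaded workstation is a bottleneck candidate
```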

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 263
807 Interfacial Adhesion and Properties Improvement of Polyethylene/Thermoplastic Starch Blend Compatibilized by Stearic Acid-Grafted-Starch

Authors: Nattaporn Khanoonkon, Rangrong Yoksan, Amod A. Ogale

Abstract:

Polyethylene (PE) is one of the most widely used petroleum-based thermoplastic materials in many applications, including packaging, because it is cheap, lightweight, chemically inert and capable of being converted into products of various shapes and sizes. Although PE is commercially attractive, its non-biodegradability causes environmental problems. At present, bio-based polymers are becoming more interesting owing to their biodegradability, non-toxicity, and renewability, as well as being eco-friendly. Thermoplastic starch (TPS) is a bio-based and biodegradable plastic produced from the plasticization of starch under heat and shear force. In many studies, TPS has been blended with petroleum-based polymers, including PE, in order to reduce the cost and the amount of those polymers used. However, phase separation between hydrophobic PE and hydrophilic TPS limits the amount of TPS that can be incorporated. The immiscibility of two polymers of different polarity can be diminished by adding a compatibilizer. PE-based compatibilizers, e.g. polyethylene-grafted-maleic anhydride and polyethylene-co-vinyl alcohol, have been applied to the PE/TPS blend system in order to improve miscibility. Until now, there has been no report on the use of a starch-based compatibilizer for the PE/TPS blend system. The aims of the present research were therefore to synthesize a new starch-based compatibilizer, i.e. stearic acid-grafted starch (SA-g-starch), and to study the effect of SA-g-starch on the chemical interaction, morphological properties, tensile properties, and water vapor as well as oxygen barrier properties of PE/TPS blend films. PE/TPS blends without and with SA-g-starch at contents of 1, 3 and 5 part(s) per hundred parts of starch (phr) were prepared using a twin screw extruder and then blown into films using a film blowing machine. Incorporating 1 phr and 3 phr of SA-g-starch improved the miscibility of the two polymers, as confirmed by the reduction in TPS phase size and the good dispersion of the TPS phase in the PE matrix. In addition, the blends containing 1 phr and 3 phr of SA-g-starch exhibited higher tensile strength and extensibility, as well as lower water vapor and oxygen permeabilities, than the neat blend. The above results suggest that SA-g-starch can potentially be applied as a compatibilizer for the PE/TPS blend system.

Keywords: blend, compatibilizer, polyethylene, thermoplastic starch

Procedia PDF Downloads 424
806 Austempered Compacted Graphite Irons: Influence of Austempering Temperature on Microstructure and Microscratch Behavior

Authors: Rohollah Ghasemi, Arvin Ghorbani

Abstract:

This study investigates the effect of austempering temperature on the microstructure and scratch behavior of austempered heat-treated compacted graphite irons. As-cast material was used as the base material for the heat treatment practices. The samples were extracted from as-cast ferritic CGI pieces and were heat treated at an austenitising temperature of 900°C for 60 minutes, followed by quenching in a salt bath at austempering temperatures of 275°C, 325°C and 375°C. For all heat treatments, an austempering holding time of 30 minutes was selected. Light optical microscopy (LOM), scanning electron microscopy (SEM) and electron backscattered diffraction (EBSD) analysis confirmed that an ausferritic matrix formed in all heat-treated samples. Microscratches were performed under loads of 200, 600 and 1000 mN using a sphero-conical diamond indenter with a tip radius of 50 μm and an included cone angle of 90°, at a speed of 10 μm/s and room temperature (~25°C). An instrumented nanoindentation machine was used for the nanoindentation hardness measurements and microscratch testing. Hardness measurements and scratch testing showed a significant increase in the Brinell, Vickers, and nanoindentation hardness values as well as in the microscratch resistance of the heat-treated samples compared to the as-cast ferritic sample. The increase in hardness and the improvement in microscratch resistance are associated with the formation of an ausferrite matrix consisting of carbon-saturated retained austenite and acicular ferrite. The maximum hardness was observed for the samples austempered at 275°C, which resulted in the formation of very fine acicular ferrite. In addition, the nanohardness values showed quite significant variation across the matrix due to the presence of acicular ferrite and carbon-saturated retained austenite. It was also observed that increasing the austempering temperature increased the volume of carbon-saturated retained austenite and decreased the hardness values.

Keywords: austempered CGI, austempering, scratch testing, scratch plastic deformation, scratch hardness

Procedia PDF Downloads 117
805 On Cloud Computing: A Review of the Features

Authors: Assem Abdel Hamed Mousa

Abstract:

The Internet of Things probably already influences your life, and if it does not, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives; Alan Kay of Apple calls this "third paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers embedded in virtually everything. It is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach is to activate the world: provide hundreds of wireless computing devices per person per office, of all scales (from 1-inch displays to wall-sized ones). This has required new work in operating systems, user interfaces, networks, wireless communication, displays, and many other areas. We call this work "ubiquitous computing". It is different from PDAs, dynabooks, or information at your fingertips; it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing took the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994; several papers describe this work, and there are web pages for the tabs and for the boards (which are now a commercial product). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers will pay monthly fees similar to a cable bill for services that feed into their phones.

Keywords: internet, cloud computing, ubiquitous computing, big data

Procedia PDF Downloads 367
804 Removal of Nickel and Vanadium from Crude Oil by Using Solvent Extraction and Electrochemical Process

Authors: Aliya Kurbanova, Nurlan Akhmetov, Abilmansur Yeshmuratov, Yerzhigit Sugurbekov, Ramiz Zulkharnay, Gulzat Demeuova, Murat Baisariyev, Gulnar Sugurbekova

Abstract:

In recent decades, crude oils have tended to become more challenging to process due to increasing amounts of sour and heavy crudes. Some crude oils contain high vanadium and nickel contents; for example, Pavlodar LLP crude oil contains more than 23.09 g/t nickel and 58.59 g/t vanadium. In this study, we used two types of metal removal methods, solvent extraction and an electrochemical process. The present research provides a comparative analysis of deasphalting with organic solvents (cyclohexane, carbon tetrachloride, chloroform) and the electrochemical method; applying cyclic voltammetric analysis (CVA) and inductively coupled plasma mass spectrometry (ICP-MS), these metal extraction methods are compared in this paper. The maximum efficiency of deasphalting, with cyclohexane as the solvent in a Soxhlet extractor, was 66.4% for nickel and 51.2% for vanadium. The nickel extraction reached a maximum of approximately 55% with the electrochemical method in an electrolysis cell developed for this research, which consists of three sections: an oil and protonating agent (EtOH) solution between two conducting membranes, which divide it from two capsules of 10% sulfuric acid, with two graphite electrodes connecting all three parts in an electrical circuit. Ions of the metals pass through the membranes and remain in the acid solutions. The best result was obtained in 60 minutes with an ethanol-to-oil ratio of 25% to 75%, a current in the range from 0.3 A to 0.4 A, and a voltage that changed from 12.8 V to 17.3 V.

Keywords: demetallization, deasphalting, electrochemical removal, heavy metals, petroleum engineering, solvent extraction

Procedia PDF Downloads 289
803 Integrated Geotechnical and Geophysical Investigation of a Proposed Construction Site at Mowe, Southwestern Nigeria

Authors: Kayode Festus Oyedele, Sunday Oladele, Adaora Chibundu Nduka

Abstract:

The subsurface of a proposed site for building development in Mowe, Nigeria, was investigated using the Standard Penetration Test (SPT) and Cone Penetrometer Test (CPT), supplemented with Horizontal Electrical Profiling (HEP), with the aim of evaluating the suitability of the strata as foundation materials. Four SPT and CPT soundings were carried out using a 10-tonne hammer. HEP utilizing the Wenner array was performed with inter-electrode spacings of 10-60 m along four traverses coincident with each of the SPT and CPT locations. The HEP data were processed using DIPRO software, and textural filtering of the resulting resistivity sections was implemented to enable delineation of hidden layers. Sandy lateritic clay, silty lateritic clay, clay, clayey sand and sand horizons were delineated. The SPT 'N' values defined very soft to soft sandy lateritic clay (<4), stiff silty lateritic clay (7-12), very stiff silty clay (12-15), clayey sand (15-20) and sand (27-37). Sandy lateritic clay (5-40 kg/cm2) and silty lateritic clay (25-65 kg/cm2) were defined from the CPT response. Sandy lateritic clay (220-750 Ωm), clay (<50 Ωm) and sand (415-5359 Ωm) were delineated from the resistivity sections, with two thin layers of silty lateritic clay and clayey sand defined in the texturally filtered resistivity sections. This study concluded that the presence of thick (18 m), incompetent clayey materials beneath the study area makes it unsuitable for a shallow foundation. A deep foundation, involving piling through the clayey layers to the competent sand at 20 m depth, was recommended.

Keywords: cone penetrometer, foundation, lithologic texture, resistivity section, standard penetration test

Procedia PDF Downloads 244
802 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies is a topic with a number of applications. In this work we explain which are the most relevant ones and how this phenomenon is modeled from both the optics and the electromagnetics points of view. We also explain how the amplitude of an electromagnetic wave reflected by a lossy conductor depends both on the frequency of the incident wave and on the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show how both approaches lead to the same physical result and which key concepts enable one to understand that, despite the differences in the equations, the solution to the problem is the same. Our study starts with an analysis based on the complex refractive index and the reflectance parameter. We show how this reflectance depends on the square root of the frequency when the reflecting material is a good conductor and the frequency of the wave is low enough. We then analyze the same problem with a less well-known approach based on the reflection coefficient of the electric field, a parameter that is most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study, illustrated with a worked example, which unifies the modeling of dispersion effects made by specialists in optics and by specialists in electromagnetics. The main finding of this work is that it is possible to reproduce the dependence of the Fresnel reflectance on frequency from the intrinsic impedance of the reflecting medium.
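A small numerical sketch of the equivalence described above: the normal-incidence reflectance of a non-magnetic lossy conductor is computed once from the complex refractive index (optics view) and once from the intrinsic impedance and reflection coefficient (electromagnetics view), and compared with the Hagen-Rubens square-root-of-frequency approximation for a good conductor. The conductivity is a generic copper-like figure chosen for illustration, not a parameter from the paper.

```python
# Normal-incidence reflectance of a lossy conductor, computed two ways
# (complex refractive index vs. intrinsic impedance) plus the Hagen-Rubens
# good-conductor approximation. Material parameters are illustrative.
import numpy as np

eps0 = 8.854e-12        # vacuum permittivity, F/m
mu0 = 4e-7 * np.pi      # vacuum permeability, H/m
eta0 = np.sqrt(mu0 / eps0)

sigma = 5.8e7           # conductivity (copper-like), S/m -- illustrative value
eps_r = 1.0             # relative permittivity of the conductor (non-magnetic)
f = 1e12                # frequency, Hz (far infrared / sub-THz)
omega = 2 * np.pi * f

# Optics view: complex permittivity -> complex refractive index -> Fresnel reflectance.
eps_c = eps0 * eps_r - 1j * sigma / omega
n = np.sqrt(eps_c / eps0 + 0j)
R_optics = abs((n - 1) / (n + 1)) ** 2

# Electromagnetics view: intrinsic impedance -> reflection coefficient of the E field.
eta = np.sqrt(1j * omega * mu0 / (sigma + 1j * omega * eps0 * eps_r))
gamma = (eta - eta0) / (eta + eta0)
R_em = abs(gamma) ** 2

# Good-conductor approximation (Hagen-Rubens): R ~ 1 - sqrt(8 * eps0 * omega / sigma).
R_hr = 1 - np.sqrt(8 * eps0 * omega / sigma)

print(f"R (refractive index):    {R_optics:.6f}")
print(f"R (intrinsic impedance): {R_em:.6f}")
print(f"R (Hagen-Rubens):        {R_hr:.6f}")
```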

Keywords: dispersion, electromagnetic waves, microwaves, optics

Procedia PDF Downloads 112
801 Study of the Physical Aging of Polyvinyl Chloride (PVC)

Authors: Mohamed Ouazene

Abstract:

The insulating properties of polymers are widely used in electrical engineering for the production of insulators and various supports, as well as for the insulation of electric cables for medium and high voltage. These polymeric materials have significant advantages, both technical and economic. However, although insulation with polymeric materials has advantages, there are also certain disadvantages, such as the influence of heat, which can have a detrimental effect on these materials. Polyvinyl chloride (PVC) is one of the polymers used, in a plasticized state, in the insulation of medium and high voltage cables. The material studied is polyvinyl chloride (PVC 4000 M) from the Algerian national oil company; industrial PVC 4000 M is supplied in the form of a white powder. The test sample is a pastille 1 mm thick and 1 cm in diameter. The consequences of increasing the temperature of a polymer are modifications, some of which are reversible and others irreversible [1]. The reversible changes do not affect the chemical composition of the polymer or its structure; they are characterized by transitions and relaxations, and the glass transition temperature is an important feature of a polymer. Physical aging of PVC consists of holding the material at its glass transition temperature for a longer or shorter time. The aim of this paper is to study this phenomenon by the method of thermally stimulated depolarization currents. Relaxations within the polymer were recorded in the form of current peaks, and we found that their intensity decreases as the polymer spends more time at its glass transition temperature. Furthermore, it is inferred from this work that the phenomenon of physical aging can have important consequences on the properties of the polymer: it leads to a more compact rearrangement of the material and a reconstruction or reinforcement of structural connections.

Keywords: depolarization currents, glass transition temperature, physical aging, polyvinyl chloride (PVC)

Procedia PDF Downloads 370
800 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of the data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features and labels) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were obtained from the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to groups based on their properties and similarities. In data mining, recursive partitioning is used to probe the structure of a data set, which allows us to envision decision rules and apply them to classify the data into several groups. Estimating densities is hard, especially in high dimensions with limited data; we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see whether any predictors are highly correlated with one another. By computing the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees, with k-fold cross-validation to prune the tree. A decision tree is a non-parametric classification method that uses a set of rules to assign each observation to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
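The analysis in this abstract was carried out in R; as a rough illustration of the workflow, the Python sketch below inspects the correlation matrix of the predictors and then fits a decision tree pruned by k-fold cross-validation over its cost-complexity parameter. The file and column names are hypothetical placeholders, not the actual dataset fields.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Minimal sketch (the paper used R; this is an illustrative Python equivalent).
# Names such as "social_network_nodes.csv", "density", "transitivity" and
# "divergent" are assumed placeholders for the node features and label.
df = pd.read_csv("social_network_nodes.csv")

# 1) Correlation matrix of the predictors, to spot highly correlated features
#    (e.g. the density/transitivity pair mentioned in the abstract).
predictors = df.drop(columns=["divergent"])
print(predictors.corr())

# 2) Decision tree with k-fold cross-validation used to prune the tree:
#    the cost-complexity parameter ccp_alpha controls the amount of pruning.
X, y = predictors.values, df["divergent"].values
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"ccp_alpha": np.linspace(0.0, 0.05, 11)},
    cv=10,                     # 10-fold cross-validation
    scoring="accuracy",
)
grid.fit(X, y)
print("best pruning alpha:", grid.best_params_["ccp_alpha"])
print("cross-validated accuracy:", grid.best_score_)
```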

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 130
799 Study on the Addition of Solar Generating and Energy Storage Units to a Power Distribution System

Authors: T. Costa, D. Narvaez, K. Melo, M. Villalva

Abstract:

The installation of micro-generators based on renewable energy in power distribution systems has increased in recent years, with solar and wind as the main renewable sources. Due to the intermittent nature of renewable energy sources, such micro-generators produce time-varying energy that, at certain times of the day, does not match the peak energy consumption of end users. For this reason, the use of energy storage units connected to the grid contributes to keeping the bus voltage levels within the ranges required by Brazilian power quality standards. In this work, the effect of adding a photovoltaic solar generator and an energy storage unit on the bus voltages of an electrical system is analyzed. The consumption profile is defined as the average hourly use of appliances in a typical residence, and the generation profile is defined as a function of the solar irradiation available at a given location. The power summation method is validated against an analytical calculation and is used to compute the magnitudes and angles of the voltages at the buses of an electrical system based on the IEEE standard, at each hour of the day, for the defined load and generation profiles. The results show that bus 5 presents the worst voltage level at the power consumption peaks and stabilizes within the appropriate range when the energy storage is included during the night-time period. The solar generator improves the voltage level during the period in which it receives solar irradiation, with production peaking around 12 pm, without exceeding the appropriate maximum voltage levels.
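As a rough illustration of the approach, the sketch below runs a backward/forward (power summation) sweep on a small radial feeder and shows how PV injection and a discharging storage unit change the per-unit bus voltages. The line impedances, load values, and reactive-power assumption are illustrative and do not reproduce the IEEE case used in the study.

```python
import numpy as np

# Minimal sketch (illustrative data, not the paper's IEEE case): backward/forward
# sweep load flow on a 5-bus radial feeder for a single hour of the day.
V_base = 1.0 + 0j                    # slack (substation) voltage, per unit
Z = np.array([0.01 + 0.02j] * 4)     # series impedance of each line segment, p.u. (assumed)

def bus_voltages(load_pu, pv_pu, storage_pu):
    """Return per-unit voltage magnitudes at buses 1..4 of the radial feeder.

    load_pu, pv_pu, storage_pu: active power per bus (p.u.); PV and a discharging
    battery reduce the net load seen by the feeder.
    """
    S = (load_pu - pv_pu - storage_pu) + 0.3j * load_pu   # net complex power (0.3 reactive ratio assumed)
    V = np.full(4, V_base, dtype=complex)
    for _ in range(20):                                   # sweep until converged
        I_bus = np.conj(S / V)                            # backward: bus injection currents
        I_branch = np.cumsum(I_bus[::-1])[::-1]           # branch current = sum of downstream currents
        V_new = V_base - np.cumsum(I_branch * Z)          # forward: accumulate voltage drops
        if np.max(np.abs(V_new - V)) < 1e-8:
            V = V_new
            break
        V = V_new
    return np.abs(V)

# Example hour: evening load peak, no sun, battery discharging at the last bus.
load = np.array([0.02, 0.03, 0.04, 0.06])
print("peak, no support  :", bus_voltages(load, pv_pu=0.0, storage_pu=0.0))
print("peak, with storage:", bus_voltages(load, pv_pu=0.0,
                                          storage_pu=np.array([0, 0, 0, 0.05])))
```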

Keywords: energy storage, power distribution system, solar generator, voltage level

Procedia PDF Downloads 124
798 The Impact of a Sustainable Solar Heating System on the Growth of Strawberry Plants in an Agricultural Greenhouse

Authors: Ilham Ihoume, Rachid Tadili, Nora Arbaoui

Abstract:

The use of solar energy is a crucial tactic in the agricultural industry's plan to decrease greenhouse gas emissions. This clean energy source can greatly lower the sector's carbon footprint and make a significant contribution to the fight against climate change. In this regard, this study examines the effects of a solar-based heating system, installed in a north-south oriented agricultural greenhouse, on the development of strawberry plants during winter. The system relies on the circulation of water as a heat transfer fluid in a closed circuit installed on the greenhouse roof to store heat during the day and release it inside at night. A comparative experimental study was conducted in two greenhouses: an experimental greenhouse equipped with the solar heating system and a control greenhouse without any heating system. Both greenhouses are located on the terrace of the Solar Energy and Environment Laboratory of Mohammed V University in Rabat, Morocco. The developed heating system consists of a copper coil inserted in double glazing and placed on the roof of the greenhouse, a water circulation pump, a battery, and a photovoltaic solar panel to power the electrical components. This inexpensive and environmentally friendly system allows the greenhouse to be heated during the winter and improves its microclimate. This improvement resulted in an increase in the air temperature inside the experimental greenhouse of 6 °C and 8 °C, and a reduction in its relative humidity of 23% and 35%, compared to the control greenhouse and the ambient air, respectively, throughout the winter. In terms of agronomic performance, production occurred 17 days earlier than in the control greenhouse.
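As a rough order-of-magnitude illustration of how the closed water loop acts as thermal storage, the sketch below applies a simple sensible-heat balance, Q = m·c·ΔT, to an assumed mass of circulating water; none of the numerical values are measurements from this study.

```python
# Minimal sketch (illustrative values, not measurements from the study):
# sensible-heat balance for the water circulating in the rooftop coil,
# which stores heat during the day and releases it inside the greenhouse at night.
water_mass = 60.0        # kg of water in the closed circuit (assumed)
c_water = 4186.0         # specific heat of water, J/(kg*K)
dT_day = 25.0            # temperature rise of the water over the day, K (assumed)

Q_stored = water_mass * c_water * dT_day           # J stored in the loop
release_hours = 10.0                               # night-time release period, h (assumed)
avg_power = Q_stored / (release_hours * 3600.0)    # average heating power, W

print(f"Stored heat               : {Q_stored / 1e6:.2f} MJ")
print(f"Average night-time release: {avg_power:.0f} W")
```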

Keywords: sustainability, thermal energy storage, solar energy, agriculture greenhouse

Procedia PDF Downloads 72
797 Analysis on Solar Panel Performance and PV-Inverter Configuration for Tropical Region

Authors: Eko Adhi Setiawan, Duli Asih Siregar, Aiman Setiawan

Abstract:

Solar energy is abundant in nature, particularly in the tropics, where peak sun hours can reach 8 hours per day. In the fabrication process, photovoltaic (PV) modules are tested under standard test conditions (STC), which specify a module temperature of 25 °C, an irradiance of 1000 W/m² with an air mass 1.5 (AM1.5) spectrum, and zero wind speed. Thus, the results of PV performance testing under STC cannot fully represent the performance of PV in the tropics, for example in Indonesia, where temperatures range from 20 to 40 °C. In this paper, the effect of temperature on the choice of 5 kW AC inverter topology for the PV system, namely the central inverter, the string inverter, and the AC module, is discussed specifically for the tropics. The proper inverter topology can be determined by analyzing the effect of temperature and irradiation on the PV panel. These effects are represented in the I-V and P-V characteristic curves. The PV characteristics at high temperature are analyzed by modeling the solar panel in MATLAB Simulink, based on the mathematical equations that form the solar panel's characteristic curves. From the PV simulation, the temperature coefficients of the short-circuit current (ISC), open-circuit voltage (VOC), and maximum output power (PMAX) are found to be 0.56%/°C, -0.31%/°C, and -0.4%/°C, respectively. These coefficients can be used to calculate the PV electrical parameters (ISC, VOC, and PMAX) at a given point on the earth's surface. From these parameters, the utility of the 5 kW AC inverter system can be determined. As a result, for the tropical region, the string inverter topology has the highest utility rate, at 98.80%, whereas the central inverter and AC-module topologies have utility rates of 92.69% and 87.7%, respectively.
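As a simple illustration of how the reported temperature coefficients are applied (the paper itself models the panel in MATLAB Simulink), the Python sketch below scales assumed STC ratings of a module to higher cell temperatures; only the three coefficients come from the abstract, while the module ratings are example values.

```python
# Minimal sketch (the paper models the panel in MATLAB Simulink; this is an
# illustrative Python equivalent). STC ratings below are assumed example values;
# the temperature coefficients are the ones reported in the abstract.
ISC_STC = 8.5      # short-circuit current at STC, A (assumed module rating)
VOC_STC = 37.0     # open-circuit voltage at STC, V (assumed module rating)
PMAX_STC = 250.0   # maximum power at STC, W (assumed module rating)

K_ISC = +0.56 / 100    # %/degC converted to 1/degC, from the abstract
K_VOC = -0.31 / 100
K_PMAX = -0.40 / 100

def pv_at_temperature(t_cell_c):
    """Scale the STC parameters linearly to a given cell temperature (degC)."""
    dT = t_cell_c - 25.0
    return (ISC_STC * (1 + K_ISC * dT),
            VOC_STC * (1 + K_VOC * dT),
            PMAX_STC * (1 + K_PMAX * dT))

for t in (25, 40, 60):   # STC, warm tropical ambient, hot cell under full sun
    isc, voc, pmax = pv_at_temperature(t)
    print(f"T = {t:2d} degC  ISC = {isc:5.2f} A  "
          f"VOC = {voc:5.2f} V  PMAX = {pmax:6.1f} W")
```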

Keywords: photovoltaic, PV-inverter configuration, PV modeling, solar panel characteristics

Procedia PDF Downloads 366