Search results for: “onion husk” algorithm
244 Microfiber Release During Laundry Under Different Rinsing Parameters
Authors: Fulya Asena Uluç, Ehsan Tuzcuoğlu, Songül Bayraktar, Burak Koca, Alper Gürarslan
Abstract:
Microplastics are contaminants that are widely distributed in the environment, with detrimental ecological effects. Moreover, recent research has demonstrated the existence of microplastics in human blood and organs. Microplastics in the environment can be divided into two main categories: primary and secondary microplastics. Primary microplastics are plastics that are released into the environment as microscopic particles. Secondary microplastics, on the other hand, are the smaller particles shed through the use of synthetic materials in textile and other products. Textiles are the main source of microplastic contamination in aquatic ecosystems. Laundering of synthetic textiles accounts for 34.8% of the roughly 3.2 million tons of primary microplastics discharged into the environment each year. Recently, research on microfiber shedding during laundering has gained traction. However, no comprehensive study has analyzed microfiber shedding from the standpoint of the rinsing parameters used during laundry. The purpose of the present study is to quantify microfiber shedding from fabric under different rinsing conditions and to determine the rinsing parameters that affect microfiber release in a laundry environment. In this regard, a parametric study is carried out to investigate the key factors affecting microfiber release from a front-load washing machine. These parameters are the amount of water used during the rinsing step and the spinning speed at the end of the washing cycle. The Minitab statistical program is used to create a design of experiments (DOE) and to analyze the experimental results. Each test is repeated twice, and apart from the controlled parameters, all other washing parameters are kept constant in the washing algorithm. At the end of each cycle, released microfibers are collected via a custom-made filtration system and weighed with a precision balance. The results showed that increasing the water amount during the rinsing step drastically increased the amount of microplastic released from the washing machine. The parametric study also revealed that increasing the spinning speed increases microfiber release from textiles.
Keywords: front load, laundry, microfiber, microfiber release, microfiber shedding, microplastic, pollution, rinsing parameters, sustainability, washing parameters, washing machine
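The factorial design described here can be illustrated outside Minitab as well. Below is a minimal Python sketch of a two-factor DOE analysis with replication, fitting main effects and the interaction; the factor levels and response values are hypothetical placeholders, not the study's data.

```python
# Two-factor DOE with replication: rinse water volume and spin speed as
# factors, released microfiber mass as the response. All numbers are
# hypothetical placeholders, not measurements from the study.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "water_l":  [10, 10, 18, 18, 10, 10, 18, 18],              # rinse water (L)
    "spin_rpm": [800, 1400, 800, 1400, 800, 1400, 800, 1400],  # spin speed
    "fiber_mg": [12.1, 14.0, 18.3, 21.5, 11.7, 14.6, 17.9, 22.0],  # response
})

# Fit a linear model with both main effects and the interaction term.
model = smf.ols("fiber_mg ~ water_l * spin_rpm", data=data).fit()
print(model.summary())  # coefficients show each factor's effect on shedding
```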
Procedia PDF Downloads 97
243 Numerical Investigation of the Operating Parameters of the Vertical Axis Wind Turbine
Authors: Zdzislaw Kaminski, Zbigniew Czyz, Tytus Tulwin
Abstract:
This paper describes the geometrical model, algorithm, and CFD simulation of the airflow around a Vertical Axis Wind Turbine rotor. The ANSYS Fluent solver was used for the numerical simulation. Numerical simulation, unlike experiments, enables project assumptions to be validated without the costly preparation of a model or prototype for a bench test. This research focuses on the rotor designed according to patent no. PL 219985, whose blades are capable of modifying their working surfaces, i.e., the surfaces absorbing wind kinetic energy. The operation of this rotor is based on regulating the blade angle α between the top and bottom parts of the blades mounted on an axis. If angle α increases, the working surface which absorbs wind kinetic energy also increases. CFD calculations enable us to compare the aerodynamic forces acting on the rotor working surfaces and to specify rotor operation parameters like torque or turbine assembly power output. This paper is part of research to improve the efficiency of the rotor assembly; it investigates the impact of the blade angle of the working blades on the power output as a function of rotor torque, specific rotational speed, and wind speed. The simulation was made for wind speeds ranging from 3.4 m/s to 6.2 m/s and blade angles of 30°, 60°, and 90°. The simulation enables us to create a mathematical model describing how the aerodynamic forces acting on each blade of the studied rotor are generated. The simulation results are also compared with wind tunnel ones. This investigation enables us to estimate the growth in turbine power output as the blade angle changes. The regulation of blade angle α enables a smooth change in turbine rotor power, which serves as a safety measure in strong winds. Decreasing blade angle α reduces the risk of damaging or destroying a turbine that is still in operation, and no complete rotor braking is needed, as it is in Horizontal Axis Wind Turbines. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: computational fluid dynamics, mathematical model, numerical analysis, power, renewable energy, wind turbine
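The quantities the study relates are linked by standard rotor formulas: mechanical power is torque times angular speed, P = Tω, and the power coefficient normalizes this by the wind's available kinetic power. A small helper illustrating the relations (the torque, air density, and swept area below are illustrative assumptions, not the paper's values):

```python
# Standard rotor relations: P = T * omega, Cp = P / (0.5 * rho * A * v^3).
import math

def rotor_power(torque_nm: float, rpm: float) -> float:
    """Mechanical power P = T * omega, with omega in rad/s."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque_nm * omega

def power_coefficient(power_w: float, wind_speed: float,
                      swept_area_m2: float, rho: float = 1.225) -> float:
    """Fraction of available wind power captured by the rotor."""
    return power_w / (0.5 * rho * swept_area_m2 * wind_speed ** 3)

p = rotor_power(torque_nm=4.2, rpm=120)  # hypothetical CFD outputs
print(p, power_coefficient(p, wind_speed=6.2, swept_area_m2=1.5))
```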
Procedia PDF Downloads 337
242 Genetic Diversity of Sugar Beet Pollinators
Authors: Ksenija Taški-Ajdukovic, Nevena Nagl, Živko Ćurčić, Dario Danojević
Abstract:
Information about the genetic diversity of sugar beet parental populations is of great importance for hybrid breeding programs. The aim of this research was to evaluate genetic diversity among and within populations and lines of diploid sugar beet pollinators using SSR markers. The plant material consisted of eight pollinators originating from three USDA-ARS breeding programs and four pollinators from the Institute of Field and Vegetable Crops, Novi Sad. Depending on the presence of the self-fertility gene, the pollinators were divided into three groups: self-fertile (inbred lines), self-sterile (open-pollinating populations), and a group with partial presence of the self-fertility gene. A total of 40 SSR primers were screened, out of which 34 were selected for the analysis of genetic diversity. A total of 129 different alleles were obtained, with a mean of 3.2 alleles per SSR primer. According to the results of the genetic variability assessment, the number and percentage of polymorphic loci were maximal in pollinator NS1 and tester cms2, while the effective number of alleles, expected heterozygosity, and Shannon's index were highest in pollinator EL0204. Analysis of molecular variance (AMOVA) showed that 77.34% of the total genetic variation was attributed to intra-varietal variance. Correspondence analysis results were very similar to the grouping produced by the neighbor-joining algorithm. The number of groups was smaller by one, because correspondence analysis merged the IFVCNS pollinators with CZ25 into one group. Pollinators FC220, FC221, and C 51 were in the next group, while self-fertile pollinators CR10 and C930-35 from USDA-Salinas were separated. On another branch were self-sterile pollinators EL0204 and EL53 from USDA-East Lansing. Sterile testers cms1 and cms2 formed a separate group. The presented results confirmed that SSR analysis can be successfully used to estimate genetic diversity within and among sugar beet populations. Since the tested pollinators differed in the presence of the self-fertility gene, their heterozygosity differed as well; it was lower in genotypes with fixed self-fertility genes. Since most of the tested populations were open-pollinated, and such populations rarely self-pollinate, high variability within the populations was expected. Cluster analysis grouped populations according to their origin.
Keywords: autofertility, genetic diversity, pollinator, SSR, sugar beet
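The per-locus statistics named in the abstract have simple closed forms given allele frequencies. A minimal sketch (assumed, not the authors' code) computing them for one SSR locus:

```python
# Per-locus diversity statistics from raw allele counts at one SSR locus.
import math

def diversity_stats(allele_counts):
    """Return effective allele number (Ne), expected heterozygosity (He),
    and Shannon's index (I) from allele counts at one locus."""
    n = sum(allele_counts)
    freqs = [c / n for c in allele_counts]
    he = 1.0 - sum(p * p for p in freqs)          # He = 1 - sum(p_i^2)
    ne = 1.0 / sum(p * p for p in freqs)          # effective number of alleles
    shannon = -sum(p * math.log(p) for p in freqs if p > 0)
    return ne, he, shannon

print(diversity_stats([12, 7, 3]))  # hypothetical counts for a 3-allele locus
```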
Procedia PDF Downloads 460
241 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology
Authors: Ugwu O. C., Mamah R. O., Awudu W. S.
Abstract:
This work is aimed at enhancing signal reception and minimizing outage probability in a mobile radio network using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement includes the Received Signal Strength and Bit Error Rate, which were recorded to characterize the signal strength of the network at the time of this research work. The Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van, with the help of a ray tracer, at 100-meter intervals up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were done with the help of a Global Positioning System (GPS). The other equipment used included transmitting-equipment measurement software (TEMS), laptops, and log files, which showed received signal strength against distance from the base station. The real-time experiment indicated an outage of about 11%, which showed that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, which yields a significant reduction in Bit Error Rate and thus improved performance of the mobile radio network. In addition, this work not only included empirical measurements but also developed and implemented enhanced mathematical models as reference models for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, and some other assumptions. These enhanced models were validated using MATLAB (version 7.6.3.35) and compared with the conventional antenna for accuracy. These outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
Keywords: beamforming algorithm, adaptive beamforming, simulink, reception
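The abstract does not specify which adaptive algorithm drives the array weights; a common choice is least mean squares (LMS), sketched below for a uniform linear array adapting toward a known training signal. All array parameters and signals here are illustrative assumptions.

```python
# Toy complex-LMS adaptive beamformer for an 8-element uniform linear array.
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 2000, 0.5                 # elements, snapshots, spacing (wavelengths)
theta = np.deg2rad(20.0)               # desired direction of arrival
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))  # steering vector

s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(N))            # training signal
x = np.outer(a, s) + 0.3 * (rng.standard_normal((M, N)) +
                            1j * rng.standard_normal((M, N)))  # array snapshots

w = np.zeros(M, dtype=complex)
mu = 1e-3                              # LMS step size
for k in range(N):
    y = np.vdot(w, x[:, k])            # array output w^H x
    e = s[k] - y                       # error vs. known training signal
    w += mu * x[:, k] * np.conj(e)     # LMS weight update

print(abs(np.vdot(w, a)))              # gain toward 20 deg should approach 1
```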
Procedia PDF Downloads 41
240 Comparison of Existing Predictor and Development of Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is a reversible modification in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminus, via a thioester linkage. There are several experimental methods that can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There aren't many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is based on the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including GPS-PALM 6.0, pCysMod, GPS-LIPID 1.0, CSS-PALM 4.0, and NBA-PALM. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) scores. An AI-driven, deep-learning-based prediction tool was then developed utilizing this analysis and three sequence-based input encodings: amino acid composition, binary encoding profile, and autocorrelation features. The model was developed using five layers, two activation functions, and associated parameters and hyperparameters. The model was built using various combinations of features, and after training and validation, it performed best when all the features were present, as evaluated on the experimental dataset with 8- and 10-fold cross-validation. When tested with unseen and new data, such as the GPS-PALM 6.0 plant dataset and the pCysMod mouse dataset, the model again performed well, with AUC scores near 1. By comparing the 10-fold cross-validation AUC score of the new model with the established tools' AUC scores on their respective training sets, it can be demonstrated that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Both plant food production and immunological treatment targets can be managed by utilizing this method to forecast S-palmitoylation sites.
Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score
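One of the three encodings named here, amino acid composition (AAC), is straightforward to compute from a cysteine-centered sequence window. A minimal sketch (assumed, with hypothetical peptide windows and labels) of AAC featurization plus a cross-validated AUC evaluation:

```python
# Amino acid composition (AAC) features and cross-validated ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq: str) -> np.ndarray:
    """Fraction of each of the 20 amino acids in a cysteine-centered window."""
    return np.array([seq.count(a) / len(seq) for a in AA])

windows = ["MKLCVLLSTAG", "GGACQRSTYWL", "PLACMMKHGDE", "TTSCVVAAGGE"] * 5
labels = [1, 0, 1, 0] * 5              # 1 = palmitoylated site (hypothetical)
X = np.vstack([aac(w) for w in windows])

scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                         cv=5, scoring="roc_auc")
print(scores.mean())
```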
Procedia PDF Downloads 77
239 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite ocean color images. However, the accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine environment around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database with which to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established methods were tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
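The two-stage design (CNN features feeding an ANN regressor) can be condensed into a single network. A compact sketch of that architecture is shown below; the patch size, layer widths, and use of three radiance bands as channels are assumptions, not the authors' exact network.

```python
# CNN feature extractor with a dense regression head for algae concentration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # input: 3 water-leaving-radiance bands (443, 490, 660 nm) per patch
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    # ANN head; weather inputs could be concatenated here in a functional model
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                    # algae concentration (mg/m^3)
])
model.compile(optimizer="adam", loss="mse")  # backpropagation training
model.summary()
```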
Procedia PDF Downloads 183
238 Drone Swarm Routing and Scheduling for Off-shore Wind Turbine Blades Inspection
Authors: Mohanad Al-Behadili, Xiang Song, Djamila Ouelhadj, Alex Fraess-Ehrfeld
Abstract:
In off-shore wind farms, turbine blade inspection accessibility under various sea states is very challenging and greatly affects the downtime of wind turbines. Maintenance of any offshore system is not an easy task due to the restricted logistics and accessibility. The multirotor unmanned helicopter is of increasing interest in inspection applications due to its manoeuvrability and payload capacity. These advantages increase when many of them are deployed simultaneously in a swarm. Hence, this paper proposes a drone swarm framework for inspecting offshore wind turbine blades and nacelles so as to reduce downtime. One of the big challenges of this task is that, when operating a drone swarm, an individual drone may not have enough power to fly and communicate during missions, and it has no refuelling capability due to its small size. Once the drone's power is drained, no signals are transmitted and the links become intermittent. Vessels equipped with 5G masts and small power units are utilised as platforms for drones to recharge or swap batteries. The research work aims at designing a smart energy management system, which provides automated vessel and drone routing and recharging plans. To achieve this goal, a novel mathematical optimisation model is developed with the main objective of minimising the number of drones and vessels, which carry the charging stations, and the downtime of the wind turbines. There are a number of constraints to be considered: each wind turbine must be inspected once and only once by one drone; each drone can inspect at most one wind turbine after recharging, then fly back to the charging station; collisions must be avoided during drone flight; and all wind turbines in the wind farm must be inspected within the given time window. We have developed a real-time Ant Colony Optimisation (ACO) algorithm to generate near-optimal solutions to the drone swarm routing problem in real time. The scheduler generates efficient real-time solutions that indicate the inspection tasks, time windows, and the optimal routes of the drones to access the turbines. Experiments are conducted to evaluate the quality of the solutions generated by ACO.
Keywords: drone swarm, routing, scheduling, optimisation model, ant colony optimisation
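A stripped-down sketch of the ACO idea follows, applied to ordering turbine visits by distance alone; the full model in the paper also handles battery limits, time windows, and collision constraints, which are omitted here. Turbine positions are random placeholders.

```python
# Minimal ant colony optimisation for a turbine visiting order.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(8, 2))          # 8 turbine positions (km)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2) + 1e-9
tau = np.ones_like(dist)                          # pheromone trails

best_tour, best_len = None, float("inf")
for _ in range(100):                              # iterations
    for _ in range(10):                           # ants per iteration
        tour = [0]
        while len(tour) < len(coords):
            i = tour[-1]
            mask = [j for j in range(len(coords)) if j not in tour]
            # attractiveness = pheromone * inverse distance
            w = np.array([tau[i, j] / dist[i, j] for j in mask])
            tour.append(rng.choice(mask, p=w / w.sum()))
        length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    tau *= 0.9                                    # pheromone evaporation
    for a, b in zip(best_tour, best_tour[1:]):    # reinforce best tour so far
        tau[a, b] += 1.0 / best_len
print(best_tour, round(best_len, 2))
```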
Procedia PDF Downloads 264
237 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario
Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad
Abstract:
One of the major challenges of today's era is peak demand, which causes stress on transmission lines, raises the cost of energy generation, and ultimately leads to higher electricity bills for end users. Peak demand used to be handled by supply-side management; nowadays, however, this approach has given way to demand side management (DSM), which offers both economic and environmental advantages. DSM of domestic loads can play a vital role in reducing the peak load demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) to reduce peak load demand and the electricity bills of electric users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results from optimization algorithms. In the GAMS software, the MILP (mixed integer linear programming) algorithm is used for optimization. In different cases, different constraints are used for optimization, considering the comforts, needs, and priorities of the end users. Results are compared, and the savings in electricity bills are discussed considering real-time pricing and fixed tariff pricing, which demonstrates the potential of demand side management to reduce electricity bills and peak loads. It is seen that using a real-time pricing tariff instead of a fixed tariff helps to save on electricity bills. Moreover, the simulation results of the proposed energy management system show that substantial power savings can be achieved. It is anticipated that the results of this research will prove highly useful to utility companies as well as for the improvement of domestic DR.
Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)
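The paper formulates the MILP in GAMS; the same scheduling idea can be sketched with an open-source solver. Below is a toy formulation (all tariff values, slot counts, and comfort windows are assumed placeholders) that shifts a controllable appliance's operation to the cheapest slots inside a user-defined window.

```python
# Toy MILP: schedule a controllable appliance into cheap real-time-tariff slots.
import pulp

price = [0.10, 0.08, 0.05, 0.04, 0.06, 0.12, 0.20, 0.18]  # $/kWh per slot
power = 1.5                    # appliance consumption per slot (kWh)
run_slots = 3                  # appliance must run for 3 slots
earliest, latest = 1, 6        # user comfort window (slot indices)

prob = pulp.LpProblem("appliance_schedule", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{t}", cat="Binary") for t in range(len(price))]

prob += pulp.lpSum(price[t] * power * x[t] for t in range(len(price)))
prob += pulp.lpSum(x) == run_slots                 # required total runtime
for t in range(len(price)):
    if t < earliest or t > latest:
        prob += x[t] == 0                          # respect comfort window

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([t for t in range(len(price)) if x[t].value() == 1])
```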
Procedia PDF Downloads 302
236 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique
Authors: Ahmet Karagoz, Irfan Karagoz
Abstract:
Synthetic Aperture Radar (SAR) is a radar technique that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day or night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is binarized, with the brightest 20% of pixels set to 255 and the other pixels set to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency, and angle values; a bank of Gabor filters is created by varying these parameters to extract the important, distinctive features of the images. Finally, images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled column by column and transformed into matrix form. To classify the vehicles in a similar way, the test image of each vehicle is converted to vector form, and l₁-norm sparse representation analysis is applied over the existing database matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images via the sparse representation method, achieving 97% classification success on SAR images of different military vehicle types.
Keywords: automatic target recognition, sparse representation, image classification, SAR images
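The sparse representation classification (SRC) step works as follows: a test feature vector is expressed as a sparse combination of dictionary columns, and the class whose columns best reconstruct it wins. A bare-bones sketch (Gabor feature extraction is omitted; random vectors stand in for it, and the l₁ problem is approximated with a Lasso fit):

```python
# Sparse representation classification by class-wise reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_feat, per_class, classes = 64, 10, 3
A = rng.standard_normal((n_feat, per_class * classes))   # dictionary matrix
A /= np.linalg.norm(A, axis=0)                           # unit-norm columns
y = A[:, 4] + 0.05 * rng.standard_normal(n_feat)         # test vector (class 0)

# l1-regularized least squares approximates min ||x||_1 s.t. Ax ~ y
coef = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_

residuals = []
for c in range(classes):
    xc = np.zeros_like(coef)
    cols = slice(c * per_class, (c + 1) * per_class)
    xc[cols] = coef[cols]                        # keep only class-c coefficients
    residuals.append(np.linalg.norm(y - A @ xc)) # class reconstruction residual
print(int(np.argmin(residuals)))                 # predicted class: 0
```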
Procedia PDF Downloads 366
235 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path: we can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of the SCP extends beyond its immediate application: it also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both the SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
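The training data for such a network are simulated fOU paths. The abstract does not describe the authors' fast generator, so the sketch below uses a slow but transparent reference method: fractional Gaussian noise from a Cholesky factor of its covariance, then an Euler recursion dX = κ(θ − X)dt + σ dB_H. All parameter values are illustrative.

```python
# Reference simulator for a fractional Ornstein-Uhlenbeck path.
import numpy as np

def fou_path(n=500, dt=1/252, H=0.1, kappa=2.0, theta=0.0, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # autocovariance of fractional Gaussian noise increments over step dt
    k = np.arange(n)
    g = 0.5 * dt**(2*H) * (np.abs(k + 1)**(2*H) + np.abs(k - 1)**(2*H)
                           - 2 * np.abs(k)**(2*H))
    cov = np.array([[g[abs(i - j)] for j in range(n)] for i in range(n)])
    dB = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    x = np.zeros(n + 1)
    for t in range(n):          # Euler-Maruyama recursion
        x[t + 1] = x[t] + kappa * (theta - x[t]) * dt + sigma * dB[t]
    return x

print(fou_path()[:5])
```

The Cholesky step is O(n³), which is exactly why a production pipeline needs a faster generator (e.g., circulant embedding); this version is only meant to make the covariance structure explicit.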
Procedia PDF Downloads 118
234 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
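The evaluation logic, comparing model performance on unfiltered versus quality-filtered records while holding sample size fixed, can be sketched as follows. The data, quality scores, and classifier below are synthetic stand-ins (the study uses Maxent on real occurrence records), so this only illustrates the workflow, not the findings.

```python
# Compare AUC with and without a data-quality filter at a fixed sample size.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))                 # environmental covariates
y = (X[:, 0] + 0.5 * rng.standard_normal(2000) > 0).astype(int)
quality = rng.uniform(0, 1, 2000)                  # per-record quality score

def auc_for(idx, n):
    idx = rng.choice(idx, size=n, replace=False)   # control for sample size
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], random_state=0)
    m = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    return roc_auc_score(yte, m.predict_proba(Xte)[:, 1])

all_idx = np.arange(2000)
good_idx = all_idx[quality > 0.5]                  # the data-quality filter
print(auc_for(all_idx, 800), auc_for(good_idx, 800))
```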
Procedia PDF Downloads 203
233 Nurse-Led Codes: Practical Application in the Emergency Department during a Global Pandemic
Authors: F. DelGaudio, H. Gill
Abstract:
Resuscitation during cardiopulmonary arrest (CPA) is a dynamic, high-stress, high-acuity situation that can easily lead to communication breakdown and errors. The care of these high-acuity patients has also been shown to increase the physiologic stress and task saturation of providers, which can negatively impact the care being provided. These difficulties are further complicated during a global pandemic and pose a significant safety risk to bedside providers. Nurse-led codes are a relatively new concept that may be a potential solution for alleviating some of these difficulties. An experienced nurse who has completed advanced cardiac life support (ACLS) and additional training assumed the responsibility of directing the mechanics of the appropriate ACLS algorithm. This was done in conjunction with a physician, who also acted as the physician leader. The additional nurse-led code training included a multi-disciplinary in situ simulation of a CPA on a suspected COVID-19 patient. During the CPA, the nurse leader's responsibilities include: ensuring adequate compression depth and rate, minimizing interruptions in chest compressions, timing rhythm/pulse checks, and overseeing appropriate medication administration. In addition, the nurse leader also functions as a last-line safety check for appropriate personal protective equipment and for limiting staff exposure to a potentially infectious patient. The use of nurse-led codes for CPA has been shown to decrease the cognitive overload and task saturation of the physician, as well as to limit the number of staff exposed. Real-world application has allowed physicians to perform and oversee high-risk procedures such as intubation, line placement, and point-of-care ultrasound without sacrificing the integrity of the resuscitation. Nurse-led codes have also given the physician the bandwidth to review pertinent medical history and advance directives, determine reversible causes, and have end-of-life conversations with family. While there is a paucity of research on the effectiveness of nurse-led codes, there are many potentially significant benefits. In addition to its value during a pandemic, the approach may also be beneficial during complex circumstances such as extracorporeal cardiopulmonary resuscitation.
Keywords: cardiopulmonary arrest, COVID-19, nurse-led code, task saturation
Procedia PDF Downloads 155
232 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and has become a popular research area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the number of layers, cell counts, and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared using MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to those reported in the literature.
Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
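A condensed sketch of this kind of LSTM setup follows; the window length, layer sizes, and dropout rate are illustrative picks from the kind of grid the study swept (432 configurations), and the sine-wave series is a stand-in for the hourly load data.

```python
# Stacked LSTM forecaster trained with Adam, reporting MSE and MAE.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# toy hourly series shaped into (samples, timesteps, features)
series = np.sin(np.linspace(0, 100, 2000)) + 0.1 * np.random.randn(2000)
window = 24
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(32),
    layers.Dense(1),
])
# Adam, which the study found converged faster than plain SGD
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MSE, MAE]
```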
Procedia PDF Downloads 266
231 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks
Authors: Tesfaye Mengistu
Abstract:
Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net
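The abstract names the stack (Python, TensorFlow, a CNN) but not the architecture. A minimal sketch of a binary cable-defect classifier of this kind, with assumed input shape and layer sizes:

```python
# Small TensorFlow CNN: defective vs. non-defective cable images.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # cable surface image (assumed size)
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # defective vs. non-defective
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# train with e.g. tf.keras.utils.image_dataset_from_directory("cables/")
```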
Procedia PDF Downloads 112
230 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce
Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya
Abstract:
Customer reviews have been collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets, and so on. Each website collects its own customer reviews. The reviews can be crawled, collected from those websites, and stored as big data. Text analysis techniques can be used to analyze that data to produce summarized information, such as customer opinions. These opinions can then be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which comes with its own structures and uniqueness. We found that most of the reviews are expressed in "daily language", which is informal, does not follow correct grammar, and contains many abbreviations and slang or non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network. Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce is weak (i.e., inefficient) for iterative computations; specifically, the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best suited to "one-pass" computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, which is based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a "look-up table" technique. The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning and finding root words in the raw reviews; (3) computing the frequency of the meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions and is capable of processing big data efficiently and scalably.
Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews
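The one-pass, look-up-table design maps naturally onto the classic Hadoop Streaming mapper/reducer pair. A skeletal sketch follows (the slang table and opinion word list are hypothetical stand-ins for the paper's Indonesian lexicon): the mapper normalizes informal words through a look-up table and emits (opinion_word, 1); the reducer sums frequencies.

```python
# --- mapper.py (Hadoop Streaming): normalize slang, emit opinion-word counts ---
import sys

SLANG = {"bgs": "bagus", "mantul": "mantap"}     # hypothetical slang look-up table
OPINION_WORDS = {"bagus", "mantap", "jelek", "buruk"}

for line in sys.stdin:
    for token in line.lower().split():
        token = SLANG.get(token, token)          # normalize informal spelling
        if token in OPINION_WORDS:
            print(f"{token}\t1")

# --- reducer.py (keys arrive sorted by the shuffle phase) ---
# import sys
# current, count = None, 0
# for line in sys.stdin:
#     word, n = line.rstrip("\n").split("\t")
#     if word != current:
#         if current is not None:
#             print(f"{current}\t{count}")
#         current, count = word, 0
#     count += int(n)
# if current is not None:
#     print(f"{current}\t{count}")
```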
Procedia PDF Downloads 201
229 Computer-Aided Drug Repurposing for Mycobacterium Tuberculosis by Targeting Tryptophanyl-tRNA Synthetase
Authors: Neslihan Demirci, Serdar Durdağı
Abstract:
Mycobacterium tuberculosis is still a worldwide disease-causing agent that, according to the WHO, led to the death of 1.5 million people from tuberculosis (TB) in 2020. The bacteria reside in macrophages located specifically in the lung. There is a known quadruple drug therapy regimen for TB consisting of isoniazid (INH), rifampin (RIF), pyrazinamide (PZA), and ethambutol (EMB). Over the past 60 years, there have been great contributions to treatment options, such as the recently approved delamanid (OPC67683) and bedaquiline (TMC207/R207910), targeting mycolic acid and ATP synthesis, respectively. There are also natural compounds that can block the tryptophanyl-tRNA synthetase (TrpRS) enzyme, chuangxinmycin and indolmycin; yet drug resistance has already been reported for those agents. In this study, the newly released TrpRS enzyme structure is investigated for potential inhibitor drugs among already synthesized molecules, to help the treatment of resistant cases and to propose an alternative drug for the quadruple drug therapy of tuberculosis. Maestro (Schrödinger) is used for docking and molecular dynamics simulations. An in-house library containing ~8,000 compounds, among them FDA-approved indole-containing compounds (a total of 57 obtained from ChEMBL), was used for docking into both the ATP and tryptophan binding pockets. The best of the 57 indole-containing compounds were subjected to hit expansion and later compared with virtual screening workflow (VSW) results. After docking, VSW was performed with the Glide-XP docking algorithm. When compared, VSW alone performed better than the hit expansion module. The best-scoring compounds were kept for 10 ns molecular dynamics simulations in Desmond. Further, 100 ns molecular dynamics simulations were performed for molecules selected according to Z-score. The top three MM-GBSA-scored compounds were subjected to steered molecular dynamics (SMD) simulations in GROMACS. While the SMD simulations are still being conducted, ponesimod (for multiple sclerosis), vilanterol (a β₂ adrenoreceptor agonist), and silodosin (for benign prostatic hyperplasia) were found to have a significant affinity for tuberculosis TrpRS, which motivates expanding the research with in vitro studies. Interestingly, the top-scored ponesimod has been reported to have a side effect that makes the patient prone to upper respiratory tract infections.
Keywords: drug repurposing, molecular dynamics, tryptophanyl-tRNA synthetase, tuberculosis
Procedia PDF Downloads 123
228 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
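The surrogate idea is: sample the expensive physics model offline, fit an ANN to those samples, and let the optimizer query the cheap ANN instead. A schematic sketch (the physics function, design variables, and network size are placeholders, not the paper's SRM model):

```python
# ANN surrogate replacing an expensive simulation inside an optimization loop.
import numpy as np
from sklearn.neural_network import MLPRegressor

def physics_model(x):
    """Stand-in for the LPM / 3-D simulation: maps design variables
    (e.g., grain geometry, throat diameter) to a performance metric."""
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))        # sampled design points
y_train = physics_model(X_train)                   # expensive evaluations, done once

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# inside the MOEA loop, candidate designs are now scored almost instantly:
candidates = rng.uniform(-1, 1, size=(10000, 2))
scores = surrogate.predict(candidates)
print(candidates[np.argmax(scores)])               # best candidate per surrogate
```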
Procedia PDF Downloads 90
227 Early Outcomes and Lessons from the Implementation of a Geriatric Hip Fracture Protocol at a Level 1 Trauma Center
Authors: Peter Park, Alfonso Ayala, Douglas Saeks, Jordan Miller, Carmen Flores, Karen Nelson
Abstract:
Introduction: Hip fractures account for more than 300,000 hospital admissions every year. Many present as fragility fractures in geriatric patients with multiple medical comorbidities. Standardized protocols for the multidisciplinary management of this patient population have been shown to improve patient outcomes. A hip fracture protocol was implemented at a Level I Trauma center with a focus on pre-operative medical optimization and early surgical care. This study evaluates the efficacy of that protocol, including the early transition period. Methods: A retrospective review was performed of all patients aged 60 and older with isolated hip fractures who were managed surgically between 2020 and 2022. This included patients 1 year prior to and 1 year following the implementation of a hip fracture protocol at a Level I Trauma center. Results: 530 patients were identified: 249 patients were treated before, and 281 patients were treated after, the protocol was instituted. There was no difference in mean age (p=0.35), gender (p=0.3), or Charlson Comorbidity Index (p=0.38) between the cohorts. Following the implementation of the protocol, there were observed increases in time to surgery (27.5 h vs. 33.8 h, p=0.01), hospital length of stay (6.3 d vs. 9.7 d, p<0.001), and ED length of stay (5.1 h vs. 6.2 h, p<0.001). There were no differences in in-hospital mortality (2.01% pre vs. 3.20% post, p=0.39) or complication rates (25% pre vs. 26% post, p=0.76). A trend towards improved outcomes was seen after the early transition period but failed to reach statistical significance. Conclusion: Early medical management and surgical intervention are key factors affecting outcomes following fragility hip fractures. The implementation of a hip fracture protocol at this institution has not yet significantly affected these parameters. This could in part be due to the restrictions placed on this institution during the COVID-19 pandemic. Despite this, the time to OR pre- and post-implementation was shorter than figures reported elsewhere in the literature. Further longitudinal data will be collected to determine the final influence of this protocol. Significance/Clinical Relevance: Given the increasing number of elderly people and the high morbidity and mortality associated with hip fractures in this population, finding cost-effective ways to improve outcomes in the management of these injuries has the potential to have an enormous positive impact for both patients and hospital systems.
Keywords: hip fracture, geriatric, treatment algorithm, preoperative optimization
Procedia PDF Downloads 78
226 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to the deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case within a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. In every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is triggered to self-measure its performance and rethink every unknown dialog flow, improving the service by retraining on those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed and can provide further instructions; they may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge with polite conversation for visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.
Keywords: conversational AI, chatbot, dialog management, semantic analysis
Procedia PDF Downloads 136
225 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, high variability in operating quality and performance is expected even when good homogenization work has been carried out before feeding the processing plants. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw materials were then grouped by them, finally yielding a solid reference of operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference, of metallurgical recovery and product quality translates into reduced production costs, optimized use of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable approach for the standardization and improvement of mineral processing units. Clustering methods based on Decision Trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the created algorithm. The results were measured through the average time needed to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains from the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting the processing of a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and life optimization of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
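The cluster-then-benchmark logic can be sketched compactly: group ore blends by the composition parameters that drive the process route, then store the best historical result per cluster as the operating reference. The assay columns and values below are hypothetical placeholders.

```python
# K-Means grouping of ore blends plus a per-cluster recovery benchmark.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
blends = pd.DataFrame({
    "Nb2O5_pct": rng.normal(2.5, 0.4, 200),    # hypothetical assay columns
    "P2O5_pct":  rng.normal(4.0, 0.8, 200),
    "BaO_pct":   rng.normal(12.0, 2.0, 200),
    "recovery":  rng.normal(60.0, 5.0, 200),   # historical metallurgical recovery
})

features = ["Nb2O5_pct", "P2O5_pct", "BaO_pct"]
X = StandardScaler().fit_transform(blends[features])
blends["cluster"] = KMeans(n_clusters=4, n_init=10,
                           random_state=0).fit_predict(X)

# benchmark: the best recovery ever achieved within each cluster
print(blends.groupby("cluster")["recovery"].max())
```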
Procedia PDF Downloads 143
224 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller
Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian
Abstract:
The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been shown to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of the chamber temperature from set target values, sends these deviations (which act as disturbances in the system) as a feedback signal (input), and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocities of 222-273 ft/min and fuel feeding rates of 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate for controlling the airflow and the fuel feeding rate of the overall biomass combustion process, as it helps to minimize the steady-state error.
Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature
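The ladder-logic feedback loop can be expressed in plain code for readers unfamiliar with PLC programming. A sketch of one scan cycle follows; the setpoint and gain are illustrative assumptions, while the actuator limits mirror the ranges reported above (222-273 ft/min, 60-90 rpm).

```python
# One PLC-style scan: compare chamber temperature to target, nudge actuators.
def control_step(chamber_temp, setpoint=850.0, kp=0.5,
                 blower_fpm=250.0, feeder_rpm=75.0):
    """Proportionally adjust air velocity and fuel feeding rate,
    clamped to the rig's operating ranges."""
    error = setpoint - chamber_temp
    blower_fpm = min(273.0, max(222.0, blower_fpm + kp * error * 0.1))
    feeder_rpm = min(90.0, max(60.0, feeder_rpm + kp * error * 0.05))
    return blower_fpm, feeder_rpm

print(control_step(chamber_temp=810.0))  # below setpoint -> increase both
```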
Procedia PDF Downloads 129
223 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms
Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita
Abstract:
Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of aerial viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) automated and cost-effective monitoring networks; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures to ensure the occupants' wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
Keywords: air quality, internet of things, artificial intelligence, smart home
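The predict-then-act loop described here can be illustrated in miniature. The forecasting model and the CO₂ threshold below are assumptions (the abstract does not specify AIR SAFE's actual models), and the readings are placeholders.

```python
# Toy predict-then-act loop: forecast the next CO2 reading, then decide.
import numpy as np
from sklearn.linear_model import LinearRegression

co2_history = np.array([420, 480, 560, 650, 730], dtype=float)  # ppm, per 10 min
t = np.arange(len(co2_history)).reshape(-1, 1)

model = LinearRegression().fit(t, co2_history)
predicted_next = model.predict([[len(co2_history)]])[0]

if predicted_next > 800:          # indoor air-quality threshold (assumed)
    print("open window / increase ventilation")
else:
    print("keep HVAC at current setting")
```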
Procedia PDF Downloads 93
222 Implementation of Chlorine Monitoring and Supply System for Drinking Water Tanks
Authors: Ugur Fidan, Naim Karasekreter
Abstract:
Healthy, clean water should not contain disease-causing micro-organisms or toxic chemicals and must contain the necessary minerals in a balanced manner. Today, water resources are of limited and strategic importance, necessitating the management of water reserves. Water tanks meet people's water needs and should be regularly chlorinated to prevent waterborne diseases. For this purpose, automatic chlorination systems are placed in water tanks to kill bacteria. However, the regular operation of automatic chlorination systems depends on refilling the chlorine tank when it is empty. For this reason, there is a need for a stock control system in which chlorine levels are regularly monitored and chlorine is resupplied. Urgent measures have become imperative against epidemics that can arise when chlorine runs out unnoticed, as is the case in most of our country. The aim of this work is to rehabilitate existing water tanks and to provide a method for a modern water storage system, in which chlorination is digitally monitored, by turning newly established water tanks into a closed system. In this study, a sensor network structure using GSM/GPRS communication infrastructure has been developed. The system consists of two basic units: hardware and software. The hardware includes a chlorine level sensor, an RFID interlock system for authorized personnel entry into the water tank, a motion sensor for animals and other elements, and a camera system to ensure process safety. It transmits the data from the hardware sensors to the host server software via the TCP/IP protocol. The main server software processes the incoming data through the security algorithm and informs the relevant responsible unit (security forces, chlorine supply unit, public health, local administrator) by e-mail and SMS. Since the software is web-based, authorized personnel are also able to monitor the drinking water tank and report data over the internet. Evaluation of the findings and user feedback obtained in the study shows that drinking water tanks built as closed systems of GRP-type material, together with continuous monitoring in a digital environment, are vital for a sustainable, healthy water supply.
Keywords: wireless sensor networks (WSN), monitoring, chlorine, water tank, security
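The reporting path (sensor node → TCP/IP → server → alert) can be sketched schematically; everything below, from the JSON payload format to the 10% refill threshold, is an assumption for illustration, not the deployed system's protocol.

```python
# Schematic sensor-to-server path with a low-chlorine alert on the server side.
import json
import socket

def send_reading(host: str, port: int, tank_id: str, chlorine_pct: float):
    """Tank node: transmit one sensor sample to the host server over TCP/IP."""
    payload = json.dumps({"tank": tank_id, "chlorine_pct": chlorine_pct})
    with socket.create_connection((host, port)) as s:
        s.sendall(payload.encode() + b"\n")

def handle_reading(payload: bytes, threshold_pct: float = 10.0):
    """Server side: flag the chlorine supply unit when stock runs low."""
    data = json.loads(payload)
    if data["chlorine_pct"] < threshold_pct:
        return f"ALERT: refill tank {data['tank']} (SMS/e-mail to supply unit)"
    return "OK"

print(handle_reading(b'{"tank": "T1", "chlorine_pct": 4.2}'))
```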
Procedia PDF Downloads 160221 Exploiting the Potential of Fabric Phase Sorptive Extraction for Forensic Food Safety: Analysis of Food Samples in Cases of Drug Facilitated Crimes
Authors: Bharti Jain, Rajeev Jain, Abuzar Kabir, Torki Zughaibi, Shweta Sharma
Abstract:
Drug-facilitated crimes (DFCs) entail the use of a single drug or a mixture of drugs to incapacitate a victim. Traditionally, biological samples have been gathered from victims and analyzed to establish evidence of drug administration. Nevertheless, the rapid metabolism of various drugs and delays in analysis can impede the identification of such substances. To address this, the present article describes a rapid, sustainable, highly efficient, and miniaturized protocol for the identification and quantification of three sedative-hypnotic drugs, namely diazepam, chlordiazepoxide, and ketamine, in alcoholic beverages and complex food samples (cream biscuits, flavored milk, juice, cake, tea, sweets, and chocolate). The methodology involves utilizing fabric phase sorptive extraction (FPSE) to extract diazepam (DZ), chlordiazepoxide (CDP), and ketamine (KET). Subsequently, the extracted samples are subjected to analysis using gas chromatography-mass spectrometry (GC-MS). Several parameters, including the type of membrane, pH, agitation time and speed, ionic strength, sample volume, elution volume and time, and type of elution solvent, were screened and thoroughly optimized. Sol-gel Carbowax 20M (CW-20M) demonstrated the highest extraction efficiency for the target analytes among all evaluated membranes. Under optimal conditions, the method displayed linearity within the range of 0.3–10 µg mL⁻¹ (or µg g⁻¹), with a coefficient of determination (R²) ranging from 0.996–0.999. The limits of detection (LODs) and limits of quantification (LOQs) for liquid samples range between 0.020–0.069 µg mL⁻¹ and 0.066–0.22 µg mL⁻¹, respectively. Correspondingly, the LODs for solid samples ranged from 0.056–0.090 µg g⁻¹, while the LOQs ranged from 0.18–0.29 µg g⁻¹. Notably, the method showed good precision, with repeatability below 5% and reproducibility below 10%. Furthermore, the FPSE-GC-MS method proved effective in determining diazepam (DZ) in forensic food samples connected to drug-facilitated crimes (DFCs). Additionally, the proposed method was evaluated for its whiteness using the RGB12 algorithm.Keywords: drug facilitated crime, fabric phase sorptive extraction, food forensics, white analytical chemistry
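The abstract does not state which convention was used to derive the detection and quantification limits; the common calibration-based definitions, shown below for reference, are consistent with the reported LOQ/LOD ratios of roughly 3.3, though this is an inference on our part rather than a detail from the paper:

```latex
% Common ICH-style limit definitions (assumed; a signal-to-noise
% criterion is another possibility):
\[
  \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad
  \mathrm{LOQ} = \frac{10\,\sigma}{S}
\]
% where $\sigma$ is the standard deviation of the blank response (or of
% the calibration residuals) and $S$ is the slope of the calibration curve.
```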
Procedia PDF Downloads 70220 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves
Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar
Abstract:
Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers need to spend a significant amount of money to control weeds by means of biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and eliminate poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology was developed. This takes the form of a weeding robot, ‘Roboweeder’. Roboweeder includes a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the “wrist” of the robotic arm, 48V battery packs, and a control/communication system. Cameras are mounted on the front and two sides of the vehicle. Using image processing and recognition, distinct types of weeds are detected before being eliminated. Electromagnetic wave technology is applied to heat individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure/mechanics, each joint’s load, the characteristics of the brushless DC motors and worm gears, forward kinematics, and inverse kinematics. A Proportional-Integral-Derivative (PID) control algorithm is used to control the robotic arm’s motion so that the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the pre-defined weeds successfully, although further improvements are needed, such as reducing the weed-killing time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops. This technology replaces tedious, time-consuming, and expensive weeding processes, and allows farmers to grow more, go organic, and eliminate operational headaches. A patent for this technology is pending.Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly
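As a hedged illustration of the PID joint control mentioned in the abstract above (the gains, timestep, and toy single-joint response are assumptions, not values from the paper), the following sketch drives one arm joint toward a target angle:

```python
# Minimal discrete PID loop for one robotic-arm joint.
# Gains, timestep, and the toy first-order joint response are
# illustrative assumptions, not values from the paper.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=4.0, ki=0.5, kd=0.2, dt=0.01)
angle, target_deg = 0.0, 45.0
for _ in range(500):
    command = pid.step(target_deg, angle)
    angle += 0.01 * command  # toy joint: angular change proportional to command
print(f"final joint angle: {angle:.2f} deg (target {target_deg} deg)")
```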
Procedia PDF Downloads 252219 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model
Authors: Shreya Srivastava, Sagnik Dey
Abstract:
Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding aerosols’ impact on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution 1° × 1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the top of the atmosphere (TOA), with values ranging from +2.5 W/m² to -22.5 W/m². Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows a higher negative ADRF. Aerosol Forcing Efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region, and an increasing trend over the IGP and western south India during the post-monsoon season (SON) under clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that atmospheric heating is governed not by aerosol loading alone but also by aerosol composition and/or the aerosol vertical profile. We used Multi-angle Imaging Spectro-Radiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm based on size, shape, and absorbing properties. This aerosol mixture information was used for analysing long-term changes in aerosol composition and the dominant aerosol species corresponding to the aerosol forcing values. Further, the ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used, with model inputs taken from OPAC (Optical Properties of Aerosols and Clouds), which utilizes only limited aerosol parameter measurements. The result shows a large overestimation of TOA warming by the latter (i.e., the model-based method).Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART
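For reference, the conventional definitions behind the quantities above are sketched below; these are assumed, since the abstract does not write them out:

```latex
% Conventional definitions, assumed for illustration:
\[
  \mathrm{ADRF}_{\mathrm{TOA}} = F^{\mathrm{net}}_{\mathrm{aer}} - F^{\mathrm{net}}_{\mathrm{no\,aer}},
  \qquad
  \mathrm{AFE} = \frac{\mathrm{ADRF}}{\mathrm{AOD}}
\]
% where $F^{\mathrm{net}}$ is the net downward short-wave flux at the top
% of the atmosphere with and without aerosols; negative ADRF means more
% energy is reflected to space (cooling), and AFE normalizes the forcing
% by the aerosol optical depth (AOD).
```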
Procedia PDF Downloads 54218 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow
Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite
Abstract:
The area of the electricity sector that coordinates the energy supplied by hydroelectric plants is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system over a given period, with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant at each stage, so that the system meets demand reliably, avoiding rationing in years of severe drought, and so that the expected cost of operation over the planning horizon is minimized, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have weaknesses, such as difficulties in convergence or simplifications of the original formulation of the problem owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist the planning of the operation. Thus, this paper presents the development of a computational tool, namely Hidro-IA, for solving the optimization problem identified, while providing the user with easy handling. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then, the fitness function of the problem and the operators involved were implemented; finally, the graphical interfaces for user access were drafted. The results with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost obtained by the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing output results.Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms
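To make the GA machinery concrete, here is a minimal sketch in Python (the paper's tool is written in Java): a chromosome encodes hydro generation per stage, fitness is thermal complementation cost plus a water-budget penalty, and elitist selection with crossover and mutation evolves the schedule. The encoding, cost model, and parameters are all assumptions for illustration, not the paper's formulation.

```python
# Toy GA for a hydrothermal-scheduling-flavored problem: choose hydro
# generation per stage to minimize thermal complementation cost.
# Encoding, cost model, and GA parameters are illustrative assumptions.
import random

STAGES, DEMAND, HYDRO_MAX = 12, 100.0, 70.0
THERMAL_COST = lambda g: 0.05 * g * g  # assumed quadratic thermal cost
WATER_BUDGET = 600.0                   # assumed total hydro energy available

def cost(chromosome):
    """Thermal cost plus a penalty if hydro use exceeds the water budget."""
    thermal = sum(THERMAL_COST(DEMAND - h) for h in chromosome)
    overuse = max(0.0, sum(chromosome) - WATER_BUDGET)
    return thermal + 1e3 * overuse

def make_individual():
    return [random.uniform(0, HYDRO_MAX) for _ in range(STAGES)]

def crossover(a, b):
    cut = random.randrange(1, STAGES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [min(HYDRO_MAX, max(0.0, h + random.gauss(0, 5)))
            if random.random() < rate else h for h in ind]

pop = [make_individual() for _ in range(60)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]  # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(50)]
best = min(pop, key=cost)
print(f"best cost: {cost(best):.1f}, hydro schedule: {[round(h, 1) for h in best]}")
```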
Procedia PDF Downloads 420217 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings
Authors: Chen Wang, Jared Evans, Yan Asmann
Abstract:
With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. To address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and further reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve somatic CNV detection, as well as germline CNV detection in trio families. Additionally, a set of utilities was included to help users produce CNV plots for focused genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms. Through our case studies, reusing exome sequencing data for calling CNVs offers several noticeable benefits, including better quality control for exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discoveries from under-utilized existing whole-exome and custom exome-panel data.Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing
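A minimal sketch of the coverage-pattern idea follows: per-exon read depths are normalized against a panel of reference samples and converted to log2 ratios, and runs of extreme ratios suggest deletions or duplications. The normalization scheme, thresholds, and greedy segmentation are assumptions for illustration; production pipelines typically use CBS or HMM segmentation instead.

```python
# Illustrative coverage-based CNV detection on synthetic data.
import numpy as np

def log2_ratios(sample_depth: np.ndarray, panel_depth: np.ndarray) -> np.ndarray:
    """sample_depth: (n_exons,); panel_depth: (n_samples, n_exons)."""
    sample = sample_depth / np.median(sample_depth)  # library-size normalization
    reference = np.median(
        panel_depth / np.median(panel_depth, axis=1, keepdims=True), axis=0)
    return np.log2((sample + 1e-6) / (reference + 1e-6))

def call_segments(ratios: np.ndarray, gain=0.4, loss=-0.6, min_exons=3):
    """Greedy run-length calling of consecutive gain/loss exons."""
    state = lambda r: "gain" if r > gain else "loss" if r < loss else "neutral"
    states = [state(r) for r in ratios]
    calls, start = [], None
    for i, s in enumerate(states + ["neutral"]):  # sentinel closes the last run
        if start is None and s != "neutral":
            start = i
        elif start is not None and s != states[start]:
            if i - start >= min_exons:
                calls.append((start, i - 1, states[start]))
            start = i if s != "neutral" else None
    return calls

rng = np.random.default_rng(0)
panel = rng.poisson(100, size=(20, 50)).astype(float)   # 20 reference exomes
sample = rng.poisson(100, size=50).astype(float)
sample[10:16] *= 0.5  # simulate a heterozygous deletion over exons 10-15
print(call_segments(log2_ratios(sample, panel)))
```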
Procedia PDF Downloads 257216 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. Causality analysis lies at the heart of most scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing the pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model intuitively yields consistent estimators for the unknown parameters and the unspecified monotone transformation function. In this article, the proposed method estimated an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation of left-truncation and possibly time-varying covariates. The bias in covariates was corrected by estimating the density function of the left-truncation times. Besides, to relax the independence assumption between failure time and truncation time, the model incorporated the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtained the unknown parameters and the unspecified monotone transformation function. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate
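For orientation, a common form of the semiparametric transformation model and of the causal summary described above is sketched below; the paper's exact specification may differ, so this should be read as an assumed illustration:

```latex
% Assumed illustrative forms, not the paper's exact specification:
\[
  H(T) = -\beta^{\top} Z + \varepsilon
  \qquad\text{or, with time-varying covariates,}\qquad
  \Lambda(t \mid Z) = G\!\left(\int_0^t e^{\beta^{\top} Z(s)}\, d\Lambda_0(s)\right)
\]
% Here $H(\cdot)$ is an unspecified monotone transformation, $\varepsilon$
% has a known distribution (extreme-value yields proportional hazards,
% logistic yields proportional odds), and $G$ is a known increasing
% function. The average causal effect is then summarized by the ratio of
% cumulative hazards $\Lambda_1(t)/\Lambda_0(t)$ between the treated and
% untreated groups.
```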
Procedia PDF Downloads 175215 Classification of Emotions in Emergency Call Center Conversations
Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko
Abstract:
The study of emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and mentally exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative, or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy), and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. We concluded that basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior of callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement), and pretension/insistence (anger, fury). The frequencies of these profiles were, respectively, 51%, 21%, 18%, and 8% of recordings. A model for presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCC), was used. Using these parameters, complex emotional states were modeled with machine learning techniques, including Gaussian mixture models, decision trees, and discriminant analysis. Results of classification with several methods will be presented and compared with the state-of-the-art results obtained for the classification of basic emotions. Future work will include optimizing the algorithm to run in real time in order to track changes of emotion during a conversation.Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning
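A minimal sketch of one modeling route named above, one Gaussian mixture model per emotion class over MFCC frames with clips labeled by total log-likelihood, is given below; the synthetic features, two-class setup, and mixture size are assumptions for illustration:

```python
# One-GMM-per-class emotion classifier over MFCC frames: fit a GMM on
# each class's training frames, then label a clip by whichever class
# model gives the higher total log-likelihood. Mixture size and the
# synthetic 13-dim "MFCC" features are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-ins for MFCC frame matrices (n_frames x 13) per emotion class;
# in practice these would be extracted from the call audio.
calm_frames = rng.normal(0.0, 1.0, size=(500, 13))
anger_frames = rng.normal(0.8, 1.3, size=(500, 13))

models = {
    "calm": GaussianMixture(n_components=4, random_state=0).fit(calm_frames),
    "anger": GaussianMixture(n_components=4, random_state=0).fit(anger_frames),
}

def classify(frames: np.ndarray) -> str:
    # score_samples returns per-frame log-likelihood; sum over the clip.
    return max(models, key=lambda label: models[label].score_samples(frames).sum())

test_clip = rng.normal(0.8, 1.3, size=(120, 13))  # drawn from the "anger" distribution
print(classify(test_clip))  # expected: anger
```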
Procedia PDF Downloads 398