Search results for: battery grading algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4346


2306 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. LPM uses circuit elements to simulate the human blood circulatory system. Physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in the LPM should be personalized so that the calculated results are convincing and reflect the individual's physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in the LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as personalization objectives of the individual LPM. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. Targeting the collected data and waveforms, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, and the objective function during optimization was the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of 5 individuals. Results show a slight error between collected and simulated data.
The average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of the LPM for the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which is applicable to the numerical simulation of patient-specific hemodynamics.
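The optimization loop described above can be sketched as a generic simulated annealing routine that minimizes the RMSE between simulated and collected objectives. The `toy_lpm` stand-in model, the parameter names `R` and `C`, the 5% perturbation step, and the linear cooling schedule are illustrative assumptions, not the authors' circulatory model:

```python
import math
import random

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def simulated_annealing(simulate, params, objectives, iters=500, t0=1.0, seed=0):
    """Iteratively perturb the sensitive parameters, accepting worse moves
    with a temperature-dependent probability (Metropolis criterion)."""
    rng = random.Random(seed)
    best = dict(params)
    best_err = rmse(simulate(best), objectives)
    current, current_err = dict(best), best_err
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9          # linear cooling schedule
        cand = {p: v * (1 + rng.uniform(-0.05, 0.05)) for p, v in current.items()}
        err = rmse(simulate(cand), objectives)
        if err < current_err or rng.random() < math.exp((current_err - err) / t):
            current, current_err = cand, err
        if err < best_err:
            best, best_err = dict(cand), err
    return best, best_err

# Hypothetical "LPM": two pressures as linear functions of a resistance R
# and a compliance C (purely illustrative, not a real circuit model).
def toy_lpm(p):
    return [p["R"] * 80 + p["C"] * 5, p["R"] * 50 + p["C"] * 3]

target = [120.0, 80.0]  # e.g. collected systolic/diastolic pressures (mmHg)
fitted, err = simulated_annealing(toy_lpm, {"R": 1.0, "C": 1.0}, target, iters=500)
```

With a real LPM, `simulate` would run the circuit model for the candidate parameters and return the simulated objective data.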

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 137
2305 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps to improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full features of the dataset with the same algorithm.
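As an illustration of the information-gain ranking used for feature selection, the following pure-Python sketch scores discrete feature columns by IG and keeps the top k. The four-record toy dataset is an assumption for demonstration only, standing in for the NSL-KDD features:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(S, A) = H(S) - sum_v (|S_v|/|S|) * H(S_v) for a discrete feature."""
    base = entropy(labels)
    n = len(labels)
    by_value = {}
    for v, y in zip(feature_values, labels):
        by_value.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return base - remainder

def select_top_k(records, labels, k):
    """Rank feature columns by information gain and keep the k best indices."""
    n_features = len(records[0])
    scores = [(information_gain([r[i] for r in records], labels), i)
              for i in range(n_features)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Tiny illustration: feature 0 predicts the label perfectly, feature 1 is noise.
X = [(0, 1), (0, 0), (1, 1), (1, 0)]
y = ["normal", "normal", "attack", "attack"]
```

Here `select_top_k(X, y, 1)` keeps only feature 0, whose values separate the two classes perfectly (IG = 1 bit), while the noisy feature 1 has IG = 0.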

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 296
2304 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region

Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski

Abstract:

Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted in order to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations for the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
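The abstract does not specify the clustering algorithm, but the flash-to-storm grouping step can be sketched as a greedy single-linkage pass in time and space. The 15 km and 15 minute thresholds and the planar (x, y) coordinates are assumptions for illustration:

```python
def cluster_flashes(flashes, max_km=15.0, max_minutes=15.0):
    """Greedy single-linkage grouping: a flash joins a storm if it lies
    within max_km and max_minutes of any flash already in that storm.
    Each flash is a tuple (t_minutes, x_km, y_km) in planar coordinates."""
    storms = []
    for t, x, y in sorted(flashes):          # process in time order
        placed = False
        for storm in storms:
            if any(abs(t - t2) <= max_minutes and
                   ((x - x2) ** 2 + (y - y2) ** 2) ** 0.5 <= max_km
                   for t2, x2, y2 in storm):
                storm.append((t, x, y))
                placed = True
                break
        if not placed:
            storms.append([(t, x, y)])       # start a new storm event
    return storms

# Two nearby flashes form one storm; a distant, later flash forms another.
flashes = [(0.0, 0.0, 0.0), (5.0, 2.0, 0.0), (200.0, 100.0, 100.0)]
storms = cluster_flashes(flashes)
```

A production version would also merge two open storms when a later flash bridges them; this greedy pass assigns each flash to the first matching storm only.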

Keywords: lightning, urbanization, thunderstorms, climatology

Procedia PDF Downloads 75
2303 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been recognized by most application developers because of its openness and compatibility, which greatly enriches the categories of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which leads to the rapid growth of malware, thus bringing serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of the application to a large extent. Since the current Android system does not place any restriction on the number of permissions that an application can request, developers tend to request more permissions than actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Meanwhile, several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm combined with information-gain feature selection achieves the best detection result: an accuracy of 99.8%, a true positive rate of 99.6%, and a false positive rate of 0.
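The "actually used permissions" idea can be illustrated with a small feature-vector builder: a permission contributes only when it is both declared in AndroidManifest.xml and actually exercised through an API call. The permission vocabulary and helper below are hypothetical, not the paper's feature set:

```python
def permission_features(known_permissions, requested, actually_used):
    """Binary feature vector over a fixed permission vocabulary: 1 only when
    the app both requests the permission and actually uses it via API calls."""
    effective = set(requested) & set(actually_used)
    return [1 if p in effective else 0 for p in known_permissions]

# Hypothetical vocabulary and app: SEND_SMS is requested but never used,
# so it does not become a feature, suppressing over-declaration noise.
vocab = ["INTERNET", "SEND_SMS", "READ_CONTACTS"]
features = permission_features(
    vocab,
    requested=["INTERNET", "SEND_SMS"],
    actually_used=["INTERNET", "READ_CONTACTS"],
)
```

Vectors like these, together with API-call features, would then feed a boosted classifier such as AdaBoostM1 over J48 trees.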

Keywords: android, API Calls, machine learning, permissions combination

Procedia PDF Downloads 329
2302 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data analysis. The ensemble is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
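FOMOCE itself is not detailed in the abstract, but the underlying clustering-ensemble principle of combining several base clusterings can be sketched with the classic co-association (evidence accumulation) consensus. The 0.5 linking threshold and the pure-Python union-find are illustrative choices, not the paper's method:

```python
def co_association(base_clusterings, n):
    """Co-association matrix: fraction of base clusterings placing i and j
    in the same cluster."""
    m = [[0.0] * n for _ in range(n)]
    for labels in base_clusterings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(base_clusterings)
    return m

def consensus(base_clusterings, n, threshold=0.5):
    """Link items whose co-association exceeds the threshold, then take the
    connected components (via union-find) as the consensus clusters."""
    m = co_association(base_clusterings, n)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if m[i][j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three base clusterings of 4 items; items 0-1 and 2-3 usually co-occur.
base = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
labels = consensus(base, 4)
```

Items that most base clusterings agree on end up in the same consensus cluster, regardless of the label values each base clustering used.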

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 189
2301 Mobile Systems: History, Technology, and Future

Authors: Shivendra Pratap Singh, Rishabh Sharma

Abstract:

The widespread adoption of mobile technology in recent years has revolutionized the way we communicate and access information. The evolution of mobile systems has been rapid and impactful, shaping our lives and changing the way we live and work. However, despite its significant influence, the history and development of mobile technology are not well understood by the general public. This research paper aims to examine the history, technology and future of mobile systems, exploring their evolution from early mobile phones to the latest smartphones and beyond. The study will analyze the technological advancements and innovations that have shaped the mobile industry, from the introduction of mobile internet and multimedia capabilities to the integration of artificial intelligence and 5G networks. Additionally, the paper will also address the challenges and opportunities facing the future of mobile technology, such as privacy concerns, battery life, and the increasing demand for high-speed internet. Finally, the paper will also provide insights into potential future developments and innovations in the mobile sector, such as foldable phones, wearable technology, and the Internet of Things (IoT). The purpose of this research paper is to provide a comprehensive overview of the history, technology, and future of mobile systems, shedding light on their impact on society and the challenges and opportunities that lie ahead.

Keywords: mobile technology, artificial intelligence, networking, iot, technological advancements, smartphones

Procedia PDF Downloads 92
2300 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise. Finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, texture, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. Firstly, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computation cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval.

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 586
2299 A Development of Portable Intrinsically Safe Explosion-Proof Type of Dual Gas Detector

Authors: Sangguk Ahn, Youngyu Kim, Jaheon Gu, Gyoutae Park

Abstract:

In this paper, we developed a dual gas leak instrument to detect hydrocarbon (HC) and carbon monoxide (CO) gases. To detect two kinds of gases, it is necessary to design a compact structure for the sensors, and then to design the sensing circuits for measuring, amplifying, and filtering. After that, the device should be well programmed with robust, systematic, and modular coding methods. Among these concerns, improving accuracy and initial response time is of vital importance. To manufacture a distinguished gas leak detector, we applied an intrinsically safe explosion-proof structure to the lithium-ion battery, main circuits, pump with motor, color LCD interfaces, and sensing circuits. In software, to enhance measuring accuracy, we used numerical analysis such as Lagrange and Neville interpolation. Performance tests were conducted using standard methane at seven different concentrations, against three other products. We aim to improve risk prevention and the efficiency of gas safety management by distributing the detector to the field of gas safety. Acknowledgment: This study was supported by the Small and Medium Business Administration under the research theme of ‘Commercialized Development of a portable intrinsically safe explosion-proof type dual gas leak detector’ (task number S2456036).
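Neville's iterated interpolation, one of the two schemes named above, can be sketched in a few lines. The calibration points below (raw reading mapped to concentration along a quadratic) are made up for illustration and are not the detector's calibration data:

```python
def neville(xs, ys, x):
    """Neville's algorithm: evaluate, at x, the unique polynomial of degree
    len(xs)-1 passing through the points (xs[i], ys[i])."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):            # width of the interpolation span
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i] +
                    (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Made-up calibration samples lying on y = x^2 (raw reading -> concentration)
readings = [0.0, 1.0, 2.0]
concentrations = [0.0, 1.0, 4.0]
```

Since three points determine a quadratic, `neville(readings, concentrations, 1.5)` recovers 2.25 exactly; in the detector, such a fit would correct raw sensor readings between calibration points.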

Keywords: gas leak, dual gas detector, intrinsically safe, explosion proof

Procedia PDF Downloads 228
2298 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness in the delineation of the lobes is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure of the features, leading to the degradation of their characteristic power in the feature space. Thus, it is required to take into consideration the local structure of the feature in scale and direction when computing the structure tensor. We apply an anisotropic diffusion that takes the scale and direction of the features into account when computing the structure tensor; the eigenanalysis of this tensor then provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of the false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
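The regularized structure tensor at the heart of the method can be sketched with NumPy as follows. Note that this sketch uses plain isotropic Gaussian smoothing and a standard coherence measure for illustration, whereas the paper's contribution is precisely to replace the isotropic smoothing with recursively derived anisotropic kernels:

```python
import numpy as np

def structure_tensor(img, sigma=1.0):
    """Per-pixel structure tensor J = g_sigma * (grad I grad I^T): smooth the
    outer products of the image gradient with a separable Gaussian, then
    derive a coherence measure from the tensor eigenvalues."""
    iy, ix = np.gradient(img.astype(float))      # gradients along rows, cols
    r = int(3 * sigma)                           # Gaussian kernel radius
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    def smooth(a):
        a = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, a)
    jxx, jxy, jyy = smooth(ix * ix), smooth(ix * iy), smooth(iy * iy)
    # Eigenvalues of the 2x2 tensor; coherence ((l1-l2)/(l1+l2))^2 is high
    # along line-like structures and low in flat or isotropic regions.
    tr = jxx + jyy
    det = jxx * jyy - jxy**2
    disc = np.sqrt(np.maximum(tr**2 - 4 * det, 0))
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    coherence = np.where(tr > 1e-12,
                         ((l1 - l2) / np.maximum(tr, 1e-12))**2, 0.0)
    return jxx, jxy, jyy, coherence

# A vertical step edge: gradient energy is purely horizontal, so the
# coherence along the edge should be close to 1.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
jxx, jxy, jyy, coh = structure_tensor(img)
```

In the paper's recursive scheme, the eigenvectors and eigenvalues computed here would shape the next diffusion kernel instead of a fixed Gaussian.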

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 232
2297 Electric Vehicle Market Penetration Impact on Greenhouse Gas Emissions for Policy-Making: A Case Study of United Arab Emirates

Authors: Ahmed Kiani

Abstract:

The United Arab Emirates is clearly facing a multitude of challenges in curbing its greenhouse gas emissions to meet its Kyoto Protocol framework and COP21 targets, owing to its drive for modernization, industrialization, infrastructure growth, a soaring population, and oil and gas activity. In this work, we focus on the market penetration of bona fide zero-emission electric vehicles in the country’s transport industry for emission reduction. We study global electric vehicle market trends, the complementary battery technologies and the trends by manufacturers, emission standards across borders, and prioritized advancements which will ultimately dictate the terms of future conditions for the United Arab Emirates’ transport industry. Based on our findings and analysis at every stage of current viability and the state of transport affairs, we postulate policy recommendations to local governmental entities from a supply and demand perspective, covering aspects of technology, infrastructure requirements, changes in power dynamics, end-user incentive programs, market regulator behavior, and communications amongst key stakeholders.

Keywords: electric vehicles, greenhouse gas emission reductions, market analysis, policy recommendations

Procedia PDF Downloads 309
2296 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which, in this case, is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. The recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicated the real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The presented framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.

Keywords: augmented reality framework, server-client model, vision-based tracking, image search

Procedia PDF Downloads 275
2295 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented and systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 108
2294 Seismic Performance of Benchmark Building Installed with Semi-Active Dampers

Authors: B. R. Raut

Abstract:

The seismic performance of a 20-storey benchmark building with semi-active dampers is investigated under various earthquake ground motions. Semi-Active Variable Friction Dampers (SAVFD) and Magnetorheological (MR) dampers are used in this study. A recently proposed predictive control algorithm is employed for the SAVFD, and a simple mechanical model based on a Bouc–Wen element with a clipped optimal control algorithm is employed for the MR damper. A parametric study is carried out to ascertain the optimum parameters of the semi-active controllers, which yield the minimum performance indices of the controlled benchmark building. The effectiveness of the dampers is studied in terms of the reduction in structural responses and performance criteria. To minimize the cost of the dampers, the optimal location of the dampers, rather than providing dampers at all floors, is also investigated. The semi-active dampers installed in the benchmark building effectively reduce the earthquake-induced responses. A smaller number of dampers at appropriate locations also provides a comparable response, thereby reducing the cost of the dampers significantly. The effectiveness of the two semi-active devices in mitigating seismic responses is cross-compared. The majority of the performance criteria for the MR dampers are lower than those for the SAVFD installed in the benchmark building. Thus, the performance of the MR dampers is far better than that of the SAVFD in reducing the displacement, drift, acceleration, and base shear of mid- to high-rise buildings against seismic forces.

Keywords: benchmark building, control strategy, input excitation, MR dampers, peak response, semi-active variable friction dampers

Procedia PDF Downloads 285
2293 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analysing, by bigram and trigram methods, the tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do this is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, there have been thousands of tweets about the war from many countries of the world on Twitter. These tweets, accumulated in data sources, are extracted using various codes for analysis through the Twitter API and analysed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is known for its widespread use in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of the data obtained from Twitter between February 24, 2022, and April 24, 2022. The tweets obtained from Twitter using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the remaining tweets were included in the analysis stage after they were cleaned through the preprocessing stage. In the data analysis part, sentiment analysis is performed on what people send as messages about the war on Twitter. Negative messages make up the majority of all the tweets, at a ratio of 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found.
Regarding the results, the most frequently used word groups are “he, is”, “I, do”, “I, am” for bigrams. Also, the most frequently used word groups are “I, do, not”, “I, am, not”, “I, can, not” for trigrams. In the machine learning phase, the accuracy of classifications is measured by Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms. The algorithms are used separately for bigrams and trigrams. We gained the highest accuracy and F-measure values by the NB algorithm and the highest precision and recall values by the CART algorithm for bigrams. On the other hand, the highest values for accuracy, precision, and F-measure values are achieved by the CART algorithm, and the highest value for the recall is gained by NB for trigrams.
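The bigram/trigram extraction step can be sketched with a few lines of Python using `collections.Counter`; the three sample tweets are invented stand-ins for the collected corpus:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def top_ngrams(tweets, n, k=3):
    """Most frequent n-grams across a corpus of (already preprocessed) tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(ngrams(tweet.lower().split(), n))
    return counts.most_common(k)

# Invented sample corpus; real input would be the cleaned tweet texts.
tweets = ["I do not support this war",
          "I am not silent",
          "I do hope for peace"]
```

On this toy corpus the top bigram is ("i", "do"), mirroring the kind of "I do / I am" pairs reported in the study; in practice `n=2` and `n=3` would be run over the full preprocessed dataset.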

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 73
2292 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their location and the signal from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever they are within communication range, therefore making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Levy-walk random exploration until it comes in range of other robots. When two or more robots are within communication range, they share sensor information and their locations w.r.t. their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas that were already explored, information about the surroundings, and the target signal from their location, to make decisions about their future movement based on the search algorithm.
During the process of exploration, there can be several small groups of robots with their own coordinate systems, but eventually all the robots are expected to come under one global coordinate frame, in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan to present a comparison study between different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms, to work in GPS-denied environments.
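The frame-unification step can be illustrated with a 2D rigid transform: once robot A has located robot B with its range-and-bearing sensor and knows B's heading in A's frame, B's map points can be re-expressed in A's frame. The sketch assumes the heading offset is already known; in practice it would be estimated from mutual observations:

```python
import math

def make_transform(dx, dy, theta):
    """Rigid 2D transform (rotate by theta, then translate by (dx, dy))
    mapping points expressed in frame B into frame A."""
    c, s = math.cos(theta), math.sin(theta)
    def apply(p):
        x, y = p
        return (c * x - s * y + dx, s * x + c * y + dy)
    return apply

def unify(range_ab, bearing_ab, heading_b_in_a, points_in_b):
    """Robot A senses robot B at (range, bearing) in A's frame and knows B's
    heading relative to A; re-express B's explored points in A's frame."""
    dx = range_ab * math.cos(bearing_ab)   # B's origin, seen from A
    dy = range_ab * math.sin(bearing_ab)
    to_a = make_transform(dx, dy, heading_b_in_a)
    return [to_a(p) for p in points_in_b]
```

After this step, explored-area records and target-signal locations from both robots live in one shared frame, which is what the swarm search logic requires.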

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 137
2291 Stability Analysis of DC Microgrid with Varying Supercapacitor Operating Voltages

Authors: Annie B. V., Anu A. G., Harikumar R.

Abstract:

A microgrid (MG) is a self-governing miniature section of the power system. Nowadays, the majority of loads and energy storage devices are inherently in DC form, which necessitates a greater scope of research into the various types of energy storage devices in DC microgrids. In a modern power system, a DC microgrid is a manageable electric power system usually integrated with renewable energy sources (RESs) and DC loads with the help of power electronic converters. The stability of the DC microgrid mainly depends on the power imbalance. The power imbalance due to the presence of intermittent renewable energy resources (RERs) is supplied by energy storage devices. Batteries, supercapacitors, and flywheels are some of the commonly used energy storage devices. Owing to the high energy density provided by batteries, this type of energy storage system is mainly utilized in all sorts of hybrid energy storage systems. To minimize stability issues, a supercapacitor (SC) is usually interfaced with the help of a bidirectional DC/DC converter. The SC can exchange power during transient conditions due to its high power density. This paper analyses the stability issues of DC microgrids with hybrid energy storage systems (HESSs) arising from a reduction in SC operating voltage due to self-discharge. The stability of the DC microgrid and power management are analyzed with different control strategies.

Keywords: DC microgrid, hybrid energy storage system (HESS), power management, small signal modeling, supercapacitor

Procedia PDF Downloads 249
2290 Fuzzy Adaptive Control of an Intelligent Hybrid HPS (PV-Wind-Battery) and Grid Power System Applied to a Dwelling

Authors: A. Derrouazin, N. Mekkakia-M, R. Taleb, M. Helaimi, A. Benbouali

Abstract:

Nowadays, the use of different renewable energy sources for the production of electricity is everyone's concern, whether for domestic use in isolated sites or in towns. As conventional sources of energy are shrinking, a need has arisen to look for alternative sources of energy, with more emphasis on their optimal use. This paper presents the design of a sustainable hybrid power system (PV-Wind-Storage), assisted by the grid as a supplementary source, applied to a case-study residential house to meet its entire energy demand. A fuzzy control system model has been developed to optimize and control the flow of power from these sources. The energy requirement is mainly fulfilled by PV and wind energy stored in a battery module for the critical load of the residential house, supplemented by the grid for the base and peak load. The system has been developed for a maximum daily household load energy of 3 kWh and can be scaled to any higher value, from 3 kWh/day to 10 kWh/day, as per the requirement of an individual house or community. The simulation work, using intelligent energy management, has resulted in an optimal yield leading to an average reduction in the cost of electricity of 50% per day.
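The fuzzy energy-management logic is not specified in the abstract; a minimal Sugeno-style sketch of how such a controller might apportion a load deficit between battery and grid as a function of battery state of charge (membership breakpoints and rule consequents are hypothetical):

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grid_power(soc, deficit_kw):
    """Share of the PV/wind deficit to draw from the grid, by fuzzy SOC level.
    Rules: SOC low -> all from grid; SOC medium -> half; SOC high -> battery only."""
    low = triangular(soc, -0.01, 0.2, 0.5)
    mid = triangular(soc, 0.2, 0.5, 0.8)
    high = triangular(soc, 0.5, 0.8, 1.01)
    # weighted average of singleton consequents (Sugeno defuzzification)
    share = (low * 1.0 + mid * 0.5 + high * 0.0) / (low + mid + high)
    return share * deficit_kw

# A nearly empty battery pushes the whole 2 kW deficit onto the grid
print(grid_power(0.2, 2.0))   # 2.0
```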

Keywords: photovoltaic (PV), wind turbine, battery, microcontroller, fuzzy control (FC), Matlab

Procedia PDF Downloads 648
2289 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of the palm oil. The conventional procedure involves physically grading Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly, time-consuming, and prone to human error. Hence, many researchers try to develop methods for ascertaining the maturity of oil palm fruits, and thereby indirectly the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected according to three maturity categories: under-ripe, ripe, and over-ripe. Each sample was scanned with FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test proved that this predictor gave significant differences between the under-ripe, ripe, and over-ripe maturity categories, showing that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as a predictor in Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained with the Artificial Neural Network. This research proves that thermal imaging and neural network methods can be used for oil palm maturity classification.
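Of the classifiers compared, the K-Nearest Neighbor method is the simplest to illustrate; the sketch below classifies a bunch from its single temperature feature (the training temperatures are hypothetical, chosen only to mirror the reported trend of temperature decreasing with maturity):

```python
def knn_classify(temp, training, k=3):
    """Majority vote among the k training bunches closest in temperature.
    training: list of (average_temperature, maturity_label) pairs."""
    neighbors = sorted(training, key=lambda t: abs(t[0] - temp))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Hypothetical average bunch temperatures (°C), decreasing with maturity
train = [(31.0, "under-ripe"), (30.5, "under-ripe"),
         (29.0, "ripe"), (28.6, "ripe"),
         (27.2, "over-ripe"), (26.8, "over-ripe")]
print(knn_classify(30.8, train))   # under-ripe
```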

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 360
2288 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions are achievable using spectra collected from flour samples rather than from whole grains. However, the feasibility of determining the critical biochemicals related to the classification of food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS, and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours), on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. Using the NIR spectra and wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with using NIR data of whole grains. Even so, spectra of whole grains enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
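A PLSR calibration of the kind described can be sketched with a single-response NIPALS implementation (a minimal sketch; the synthetic rank-one "spectra" below stand in for real NIR data):

```python
import numpy as np

def pls1_fit(X, y, n_comp=1):
    """Single-response PLS via NIPALS: extract latent components that
    maximize covariance with y, deflating X and y after each one."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)      # weight vector: direction of max covariance
        t = Xc @ w                     # component scores
        tt = t @ t
        p = Xc.T @ t / tt              # X loadings
        q = (yc @ t) / tt              # y loading
        Xc = Xc - np.outer(t, p)       # deflate X and y
        yc = yc - t * q
        W.append(w); P.append(p); Q.append(q)
    return xm, ym, W, P, Q

def pls1_predict(model, Xnew):
    xm, ym, W, P, Q = model
    Xc = Xnew - xm
    yhat = np.full(len(Xnew), ym)
    for w, p, q in zip(W, P, Q):
        t = Xc @ w
        yhat = yhat + t * q
        Xc = Xc - np.outer(t, p)
    return yhat

# Synthetic rank-one "spectra": one latent trait drives both X and y
scores = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
loadings = np.array([1.0, 0.5, 0.2])
X = np.outer(scores, loadings)
y = 2.0 * scores
model = pls1_fit(X, y, n_comp=1)
```

With real spectra, `n_comp` would be chosen by cross-validation against the wet-chemistry reference values.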

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 69
2287 Design and Construction of a Solar Mobile Anaerobic Digestor for Rural Communities

Authors: César M. Moreira, Marco A. Pazmiño-Hernández, Marco A. Pazmiño-Barreno, Kyle Griffin, Pratap Pullammanappallil

Abstract:

An anaerobic digestion system that is operated entirely on solar power (both photovoltaic and solar thermal energy) and mounted on a trailer to make it mobile was designed and constructed. A 55-gallon batch digester was placed within a chamber heated by hot water pumped through a radiator. The hot water was produced by a solar thermal collector, and photovoltaic panels charged a battery that operated the pumps for recirculating water. It was found that the temperature in the heating chamber was maintained above ambient temperature but followed the same trend as the ambient temperature. The temperature difference between the chamber and ambient values was not constant but varied with the time of day. Advantageously, the temperature difference was highest during the night and early morning and lowest near noon. In winter, when the ambient temperature dipped to 2 °C during the early morning hours, the chamber temperature did not drop below 10 °C. Model simulations showed that even if the digester is subjected to diurnal variations of temperature (as observed in the winter of a subtropical region), about 63% of the waste that would have been processed under a constant digester temperature of 38 °C can still be processed. The cost of the digester system without the trailer was $1,800.

Keywords: anaerobic digestion, solar-mobile, rural communities, solar, hybrid

Procedia PDF Downloads 274
2286 Markowitz and Implementation of a Multi-Objective Evolutionary Technique Applied to the Colombia Stock Exchange (2009-2015)

Authors: Feijoo E. Colomine Duran, Carlos E. Peñaloza Corredor

Abstract:

The selection of components of a financial investment (portfolio) poses a variety of problems that can be addressed with optimization techniques under evolutionary schemes. By its nature, the component-selection problem involves a dichotomous relationship between two opposing elements: the portfolio's performance and the risk incurred by choosing it. Markowitz modeled this relationship as a mean (performance)-variance (risk) problem, i.e., performance must be maximized while risk is minimized. This research included the study and implementation of multi-objective evolutionary techniques to solve these problems, taking as the experimental framework the equities market of the Colombia Stock Exchange between 2009 and 2015. Three multi-objective evolutionary algorithms, namely the Nondominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), and Indicator-Based Selection in Multiobjective Search (IBEA), were compared using two well-known performance measures, the hypervolume indicator and the R_2 indicator, together with a nonparametric statistical analysis and the Wilcoxon rank-sum test. The comparative analysis also includes an evaluation of the financial efficiency of the investment portfolio chosen by each algorithm through the Sharpe ratio. It is shown that the portfolios provided by these algorithms are very well positioned relative to the different stock indices published by the Colombia Stock Exchange.
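The Markowitz objectives that all three algorithms optimize reduce to two functions of the weight vector, compared under Pareto dominance; a minimal sketch (the asset statistics are hypothetical):

```python
import numpy as np

def objectives(w, mean_returns, cov):
    """Markowitz pair for a weight vector: (expected return, variance)."""
    w = np.asarray(w)
    return float(w @ mean_returns), float(w @ cov @ w)

def dominates(a, b):
    """Pareto dominance for (return, variance): at least as good in both
    objectives (higher return, lower variance) and strictly better in one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

# Two hypothetical assets: low-risk/low-return vs. high-risk/high-return
mu = np.array([0.10, 0.20])
cov = np.diag([0.01, 0.04])
ret, var = objectives([0.5, 0.5], mu, cov)   # 0.15, 0.0125
```

NSGA-II, SPEA2, and IBEA differ mainly in how they select among mutually non-dominated weight vectors under this `dominates` relation.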

Keywords: finance, optimization, portfolio, Markowitz, evolutionary algorithms

Procedia PDF Downloads 302
2285 Linking Disgust and Misophonia: The Role of Mental Contamination

Authors: Laurisa Peters, Usha Barahmand, Maria Stalias-Mantzikos, Naila Shamsina, Kerry Aguero

Abstract:

In the current study, the authors sought to examine whether the links between moral and sexual disgust and misophonia are mediated by mental contamination. An internationally diverse sample of 283 adults (193 females, 76 males, and 14 non-binary individuals) ranging in age from 18 to 60 years old was recruited from online social media platforms and survey recruitment sites. The sample completed an online battery of scales that consisted of the New York Misophonia Scale, State Mental Contamination Scale, and the Three-Domain Disgust Scale. The hypotheses were evaluated using a series of mediations performed using the PROCESS add-on in SPSS. Correlations were found between emotional and aggressive-avoidant reactions in misophonia, mental contamination, pathogen disgust, and sexual disgust. Moral disgust and non-aggressive reactions in misophonia failed to correlate significantly with any of the other constructs. Sexual disgust had direct and indirect effects, while pathogen disgust had only direct effects on aspects of misophonia. These findings partially support our hypothesis that mental contamination mediates the link between disgust propensity and misophonia while also confirming that pathogen-based disgust is not associated with mental contamination. Findings imply that misophonia is distinct from obsessive-compulsive disorder. Further research into the conceptualization of moral disgust is warranted.
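A mediation of the kind run in PROCESS decomposes the total effect into an indirect path through the mediator (a x b) and a direct path (c'); a minimal regression-based sketch on synthetic data (the variables below are illustrative, not the study's data):

```python
import numpy as np

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def simple_mediation(x, m, y):
    """Single-mediator decomposition: a (x -> m), b (m -> y given x),
    c' (x -> y given m). Returns (indirect effect a*b, direct effect c')."""
    a = ols_slope(x, m)
    X = np.column_stack([np.ones_like(x), x, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    c_prime, b = beta[1], beta[2]
    return a * b, c_prime

# Synthetic example: y = 3*m + 1*x exactly, m driven by x plus variation
x = np.arange(20, dtype=float)
m = 2 * x + np.sin(x)
y = 3 * m + x
indirect, direct = simple_mediation(x, m, y)
```

PROCESS additionally bootstraps a confidence interval for the indirect effect, which this sketch omits.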

Keywords: misophonia, moral disgust, pathogen disgust, sexual disgust, mental contamination

Procedia PDF Downloads 96
2284 Preservice Science Teachers' Understanding of Equitable Assessment

Authors: Kemal Izci, Ahmet Oguz Akturk

Abstract:

Learning depends on cognitive and physical differences as well as other differences such as ethnicity, language, and culture; these differences also influence how students demonstrate their learning. Assessment is an integral part of the learning and teaching process and is essential for effective instruction. In order to provide effective instruction, teachers need to provide equal assessment opportunities for all students, to see their learning difficulties, and to use them to modify instruction to aid learning. Successful assessment practices depend on the knowledge and values of teachers. Therefore, in order to use assessment to evaluate and support diverse students' learning, preservice and inservice teachers should hold an appropriate understanding of equitable assessment. As a first step toward preparing teachers to support diverse student learning, this study aims to explore how preservice teachers understand equitable assessment. 105 preservice science teachers studying in a teacher preparation program at a large university in the eastern part of Turkey participated in the study. A questionnaire, the preservice teachers' reflection papers, and interviews served as data sources. All collected data were qualitatively analyzed to develop themes that illustrate the preservice science teachers' understanding of equitable assessment. Results show that the preservice teachers mostly equated equitable assessment with fairness, including fairness in grading and fairness in asking questions only about covered concepts. However, most did not show an understanding of equity as providing equal opportunities for all students to display their understanding of the content. For some preservice teachers, providing different opportunities for some students (e.g., extra time for non-native speakers) seemed unfair to the other students, and therefore such accommodations should not be used. The results illustrate that preservice science teachers mostly understand equitable assessment as fairness and rarely highlight the role of equitable assessment in supporting all students' learning, which is more important for improving students' achievement in science. We therefore recommend that more opportunities be provided for preservice teachers to engage with a broader understanding of equitable assessment and to learn how to use equitable assessment practices to support all students' learning through classroom assessment.

Keywords: science teaching, equitable assessment, assessment literacy, preservice science teachers

Procedia PDF Downloads 304
2283 Evaluation of Traumatic Spine by Magnetic Resonance Imaging

Authors: Sarita Magu, Deepak Singh

Abstract:

Study Design: This prospective study was conducted at the Department of Radiodiagnosis, Pt. B.D. Sharma PGIMS, Rohtak, in 57 patients with spine injury on radiographs, or radiographically normal patients with neurological deficits, presenting within 72 hours of injury. Aims: To evaluate the role of Magnetic Resonance Imaging (MRI) in spinal trauma patients, to compare MRI findings with the clinical profile and neurological status of the patient, and to correlate MRI findings with the neurological recovery of the patient and predict the outcome. Material and Methods: The neurological status of all patients was assessed at the time of admission and discharge, and at a long-term interval of six months to one year in 27 patients, as per the American Spinal Injury Association (ASIA) classification. On MRI, cord injury was categorized into cord hemorrhage, cord contusion, cord edema only, and normal cord. Quantitative assessment of injury on MRI was done using mean canal compromise (MCC), mean spinal cord compression (MSCC), and lesion length. Neurological status at admission and neurological recovery at discharge and long-term follow-up were compared with the qualitative cord findings and quantitative parameters on MRI. Results: Cord edema and a normal cord were associated with a favorable neurological outcome. Cord contusion showed less neurological recovery than cord edema. Cord hemorrhage was associated with the worst neurological status at admission and poor neurological recovery. Mean MCC, MSCC, and lesion length values were higher in patients presenting with ASIA grade A injury and showed a decreasing trend towards ASIA grade E injury. Patients showing neurological recovery over the period of hospital stay and long-term follow-up had lower mean MCC, MSCC, and lesion length than patients showing no neurological recovery. The data were statistically significant with a p value < .05. Conclusion: Cord hemorrhage and higher MCC, MSCC, and lesion length have poor prognostic value in spine injury patients.

Keywords: spine injury, cord hemorrhage, cord contusion, MCC, MSCC, lesion length, ASIA grading

Procedia PDF Downloads 355
2282 Nitrogen Doping Effect on Enhancement of Electrochemical Performance of a Carbon Nanotube Based Microsupercapacitor

Authors: Behnoush Dousti, Ye Choi, Gil S. Lee

Abstract:

Microsupercapacitors (MSCs) are regarded as the future of miniaturized energy sources that can be coupled to a battery to deliver stable and constant energy to microelectronics. Among their counterparts, electrochemical microsupercapacitors have drawn the most research attention due to their higher power density and long cycle life. Designing the microstructure and choosing the electroactive materials are two significant factors that greatly affect the performance of the device. Here, we report the successful fabrication and characterization of a microsupercapacitor with an interdigitated structure based on carbon nanotube (CNT) sheets. The novel structure of highly aligned CNT sheets as the electrode material, which offers excellent conductivity and a large surface area, along with nitrogen doping, enabled us to develop a device with several orders of magnitude higher electrochemical performance than pristine CNT in aqueous electrolyte, including high specific capacitance, good rate capability, and an excellent cycle life over 10,000 cycles. Geometric parameters such as finger width and gap size were also studied, and it was shown that the device performance depends strongly on them. The results of this study confirm the potential of CNT sheets for future energy storage devices.

Keywords: carbon nanotube, energy storage systems, microsupercapacitor, nitrogen doping

Procedia PDF Downloads 131
2281 Design and Development of Solar Water Cooler Using Principle of Evaporation

Authors: Vipul Shiralkar, Rohit Khadilkar, Shekhar Kulkarni, Ismail Mullani, Omkar Malvankar

Abstract:

The use of water coolers has increased, and they have become important appliances in a warming world. Most coolers are electrically operated. In this study, an experimental setup of an evaporative water cooler using solar energy was designed and developed. It works on the principle of heat transfer through the evaporation of water. Water flows through copper tubes arranged in a specific array. A cotton plug is wrapped around the copper tubes, and rubber pipes are arranged above them in the same pattern; water percolating from the rubber pipes is absorbed by the cotton plug. The setup has a 40 L water capacity with a forced-cooling arrangement and a variable-speed fan that uses solar energy stored in a 20 Ah battery. Fan speed greatly affects the temperature drop, so tests were performed at different fan speeds. The maximum temperature drop achieved was 9 °C at a fan speed of 1440 rpm, which is very attractive. Because the cooler uses solar energy, it is cost-efficient and affordable for rural communities. It is free from harmful emissions, unlike refrigerant-based systems, and hence environmentally friendly, and it requires very little maintenance compared to a conventional electric water cooler.

Keywords: evaporation, cooler, energy, copper, solar, cost

Procedia PDF Downloads 318
2280 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content measured by the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applicable for constructing an accurate computing procedure for uncertainty measurement.
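The Kalman update at the heart of the proposed parameter estimation can be illustrated on the simplest case, a scalar parameter observed through noisy measurements (a toy sketch with hypothetical noise variances, not the XRF process model):

```python
import random

def kalman_constant(measurements, q=1e-6, r=0.01):
    """Scalar Kalman filter estimating a constant parameter.
    q: process noise variance, r: measurement noise variance (assumed known)."""
    x, p = 0.0, 1.0              # prior estimate and its variance
    for z in measurements:
        p += q                   # predict: a constant state only gains q
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the measurement innovation
        p *= (1 - k)             # posterior variance shrinks
    return x, p

# Recover a hypothetical Mn content of 0.50 wt% from noisy readings
random.seed(1)
readings = [0.50 + random.gauss(0.0, 0.1) for _ in range(200)]
estimate, variance = kalman_constant(readings)
```

In the paper's combined scheme, MCMC would then sample the posterior of such parameters to build the full probability distribution rather than a point estimate with variance.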

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
2279 Development of an Energy Independent DC Building Demonstrator for an Isolated Island Site

Authors: Olivia Bory Devisme, Denis Genon-Catalot, Frederic Alicalapa, Pierre-Olivier Lucas De Peslouan, Jean-Pierre Chabriat

Abstract:

In the context of climate change, it is essential that island territories gain energy autonomy. Currently mostly dependent on fossil fuels, the island of Reunion, located in the Indian Ocean, nevertheless has a high potential for solar energy. As the market for photovoltaic panels has been growing in recent years, the issue of energy losses linked to the multiple conversions between direct and alternating current is emerging. In order to quantify these advantages and disadvantages through a comparative study, this document presents the measurements carried out on a direct current test bench, particularly for lighting, ventilation, air conditioning, and office equipment for the tertiary sector. All equipment is supplied with DC power produced by photovoltaic panels. A weather station, indoor environmental sensors, and drivers are also used to manage energy. Self-consumption is encouraged in order to manage the different priorities between user consumption and energy storage in a lithium iron phosphate battery. The measurements are compared to a conventional electrical architecture (DC-AC-DC) in terms of energy consumption, equipment overheating, cost, and life cycle analysis.

Keywords: DC microgrids, solar energy, smart buildings, storage

Procedia PDF Downloads 162
2278 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass

Authors: Goodness Onwuka, Khaled Abou-El-Hossein

Abstract:

Borosilicate-crown (BK7) glass has found broad application in the optics and automotive industries, and the growing demand for nanometric surface finishes is becoming a necessity in such applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens. The research was carried out on a four-axis Nanoform 250 precision lathe with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed, and depth of cut at three levels in different combinations, using a Box-Behnken design of experiments, and the resulting surface roughness values were measured using a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back-propagation neural network prediction algorithm through careful selection of parameters, and the results show better prediction accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
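The back-propagation step can be sketched with a one-hidden-layer network trained on a single feature (a toy regression standing in for the acoustic-emission features; the architecture and learning rate are illustrative):

```python
import math, random

def train_mlp(data, hidden=4, lr=0.05, epochs=3000, seed=0):
    """One-hidden-layer tanh network, linear output, plain SGD back-propagation.
    data: list of (x, y) pairs with a single input feature."""
    rnd = random.Random(seed)
    w1 = [rnd.uniform(-1, 1) for _ in range(hidden)]   # input -> hidden weights
    b1 = [0.0] * hidden
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden)]   # hidden -> output weights
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y                               # dLoss/dout for squared loss
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # error through tanh derivative
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Toy smooth response the network should learn (normalized input in [-1, 1])
samples = [(i / 5.0, (i / 5.0) ** 2) for i in range(-5, 6)]
roughness = train_mlp(samples)
```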

Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding

Procedia PDF Downloads 305
2277 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic idea behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development) rather than merely determining the permeability and skin parameters. For reservoir characterization, we normally rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of geological and geophysical (G&G) data. The seismic interpretation could not detect one of the boundaries (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from the different well test analysis methods for the Jin Gas Field are also integrated with, and supported by, other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On the basis of this workflow, it was established that the proposed location of the new development well was not justified and that it should be located elsewhere, anywhere except in the south direction.
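The material balance method cited alongside the well test analysis, for a volumetric dry-gas reservoir, reduces to fitting the linear p/z decline against cumulative production; a minimal sketch (the survey values are hypothetical, not the Jin Field data):

```python
def ogip_from_pz(pressures, z_factors, cum_prod):
    """Original gas in place G from the material balance
    p/z = (p/z)_i * (1 - Gp/G): fit p/z vs Gp and extrapolate to p/z = 0."""
    pz = [p / z for p, z in zip(pressures, z_factors)]
    n = len(pz)
    mx = sum(cum_prod) / n
    my = sum(pz) / n
    slope = (sum((g - mx) * (v - my) for g, v in zip(cum_prod, pz))
             / sum((g - mx) ** 2 for g in cum_prod))
    intercept = my - slope * mx
    return -intercept / slope       # Gp at which p/z reaches zero

# Hypothetical surveys: initial p/z = 5000 psia, true G = 100 Bscf
gp = [0.0, 10.0, 20.0, 30.0]                       # Bscf produced
pz = [5000.0 * (1 - g / 100.0) for g in gp]        # ideal linear decline
g_est = ogip_from_pz(pz, [1.0] * 4, gp)            # ≈ 100 Bscf
```

A mismatch between this volumetric estimate and the well-test-derived reservoir limits is exactly the kind of inconsistency that flags an undetected boundary.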

Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis

Procedia PDF Downloads 364