Search results for: estimation algorithms
931 The Relationship between Renewable Energy, Real Income, Tourism and Air Pollution
Authors: Eyup Dogan
Abstract:
One criticism of the energy-growth-environment literature, to the best of our knowledge, is that only a few studies analyze the influence of tourism on CO₂ emissions even though the tourism sector is closely related to the environment. The other criticism is the selection of methodology. Panel estimation techniques that fail to consider both heterogeneity and cross-sectional dependence across countries can cause forecasting errors. To fill these gaps in the literature, this study analyzes the impacts of real GDP, renewable energy and tourism on the levels of carbon dioxide (CO₂) emissions for the top 10 most-visited countries around the world. This study focuses on the top 10 touristic (most-visited) countries because they have received about half of worldwide tourist arrivals in recent years and are among the top countries in the Renewable Energy Country Attractiveness Index (RECAI). By looking at Pesaran's CD test and the average growth rates of the variables for each country, we detect the presence of cross-sectional dependence and heterogeneity. Hence, this study uses second-generation econometric techniques (the cross-sectionally augmented Dickey-Fuller (CADF) and cross-sectionally augmented IPS (CIPS) unit root tests, the LM bootstrap cointegration test, and the DOLS and FMOLS estimators), which are robust to the mentioned issues, making the reported results accurate and reliable. It is found that renewable energy mitigates pollution whereas real GDP and tourism contribute to carbon emissions. Thus, regulatory policies are necessary to increase the awareness of sustainable tourism. In addition, the use of renewable energy and the adoption of clean technologies in the tourism sector, as well as in producing goods and services, play significant roles in reducing the levels of emissions.
Keywords: air pollution, tourism, renewable energy, income, panel data
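Pesaran's CD statistic used here has a standard closed form, CD = sqrt(2T/(N(N-1))) * sum over i<j of the pairwise residual correlations. A minimal sketch, with synthetic data standing in for the study's 10-country panel:

```python
# Minimal sketch of Pesaran's CD test for cross-sectional dependence.
# Computed on raw series for illustration; in practice it is applied to
# regression residuals of a panel of N countries over T years.
import numpy as np

def pesaran_cd(panel):
    """panel: (T, N) array, T time periods for N cross-section units."""
    T, N = panel.shape
    corr = np.corrcoef(panel, rowvar=False)        # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)                   # upper triangle, i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()

rng = np.random.default_rng(0)
common = rng.normal(size=(40, 1))                  # common shock -> dependence
data = 0.8 * common + rng.normal(size=(40, 10))    # T=40 years, N=10 countries
print(f"CD statistic: {pesaran_cd(data):.2f}")     # ~N(0,1) under independence
```

A large absolute CD value, as the paper finds, rejects cross-sectional independence and motivates the second-generation tests.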
Procedia PDF Downloads 184
930 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs
Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet
Abstract:
Mapping parallelized tasks of applications onto NoC-based heterogeneous MPSoCs can be done either at design time (static) or at run-time (dynamic). Static mapping strategies find the best placement of tasks at design time and hence are not suitable for dynamic workloads, being incapable of run-time resource management. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new spiral dynamic task mapping heuristic for mapping applications onto a NoC-based heterogeneous MPSoC. The heuristic is based on a packing strategy and a routing algorithm, both also proposed in this paper. It tries to map the tasks of an application into a clustered region to reduce the communication overhead between communicating tasks: the tasks that are most related to each other are placed in a spiral manner, and the best possible path load is sought to minimize the communication overhead. In this context, we have built a simulation environment for experimental evaluations, mapping applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform. We demonstrate that the new mapping heuristic, together with the proposed modified Dijkstra routing algorithm, is capable of reducing the total execution time and energy consumption of applications when compared to state-of-the-art run-time mapping heuristics reported in the literature.
Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm
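The abstract gives no pseudocode, so the sketch below is only a hypothetical illustration of the clustering idea: starting from the node holding the first task, walk the 8x8 mesh outward in a spiral and place each subsequent task on the next free node, so communicating tasks land in one compact region.

```python
# Hedged illustration (not the authors' exact heuristic): place an
# application's tasks on an 8x8 mesh by spiraling outward from a start node.
def spiral_offsets():
    """Yield (dx, dy) offsets tracing an outward spiral from (0, 0)."""
    x = y = 0
    yield x, y
    step = 1
    while True:
        for dx, dy, n in ((1, 0, step), (0, 1, step),
                          (-1, 0, step + 1), (0, -1, step + 1)):
            for _ in range(n):
                x += dx; y += dy
                yield x, y
        step += 2

def spiral_map(tasks, start, size=8, occupied=frozenset()):
    """Assign each task to the next free node on a spiral around `start`."""
    mapping, used = {}, set(occupied)
    gen = spiral_offsets()
    for task in tasks:
        while True:
            dx, dy = next(gen)
            node = (start[0] + dx, start[1] + dy)
            if 0 <= node[0] < size and 0 <= node[1] < size and node not in used:
                mapping[task] = node
                used.add(node)
                break
    return mapping

print(spiral_map(["t0", "t1", "t2", "t3"], start=(3, 3)))
```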
Procedia PDF Downloads 489
929 Design and Characterization of Ecological Materials Based on Demolition and Concrete Waste, Casablanca (Morocco)
Authors: Mourad Morsli, Mohamed Tahiri, Azzedine Samdi
Abstract:
Cities are the urbanized territories most given to the consumption of resources (materials, energy). In Morocco, the economic capital Casablanca is one of them, with its 4M inhabitants and its 60% share of the economic and industrial activity of the kingdom. In the absence of legal provisions in force, urban development has favored the generation of millions of tons of demolition and construction waste scattered in open spaces, causing a significant nuisance to the environment and citizens. Hence the main objective of our work is to valorize this waste. The representative wastes are mainly concrete, fired clay bricks, ceramic tiles, marble panels, gypsum, and scrap metal. The work carried out includes: geolocation combining artificial intelligence, GIS, and Google Earth, which allowed the estimation of the quantity of waste per site; then the sorting, crushing, grinding, and physicochemical characterization of the collected samples, which allowed the definition of exploitation routes for each extracted fraction for integrated management of the said wastes. In the present work, we exploited the fractions obtained after sieving the representative samples to incorporate them into the manufacture of new ecological construction materials. The prepared formulations were tested and characterized on physical criteria (specific surface, resistance to flexion and compression) and appearance (cracks, deformation). We present in detail the main results of our research work and also describe the specific properties of each material developed.
Keywords: demolition and construction waste, GIS combination software, inert waste recovery, ecological materials, Casablanca, Morocco
Procedia PDF Downloads 135
928 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi, there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As there are many different education curriculums in Abu Dhabi to meet expats' needs, there are different requirements for registration and success, as well as different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not placed in the right year group, because each academic year has different start and end dates and the date-of-birth cutoff for each year group differs between curriculums. As a result, we find students who are either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout their academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with artificial intelligence techniques of machine learning to aid in a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information for students who relocate from one education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.
Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning
Procedia PDF Downloads 142
927 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling
Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo
Abstract:
Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity in reservoirs and consequently help to control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming. The successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues, such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP that need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the general GP community enough research direction and a valuable guide, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.
Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield
Procedia PDF Downloads 446
926 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling
Authors: Florin Leon, Silvia Curteanu
Abstract:
Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, numerical average molecular weight and gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression
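A minimal sketch of the variation-driven sampling idea, assuming a toy one-dimensional "conversion" curve in place of the polymerization simulator: each iteration bisects the interval where the sampled values vary the most, and a support vector regressor is then trained on the resulting design.

```python
# Hedged sketch of adaptive sampling on a synthetic curve with a sharp
# "gel effect"-like region; the study samples a methyl methacrylate
# polymerization model instead of this toy function.
import numpy as np
from sklearn.svm import SVR

def target(t):                        # toy conversion-vs-time curve
    return 1.0 / (1.0 + np.exp(-12 * (t - 0.6)))   # steep rise ~ gel effect

pool = np.linspace(0, 1, 400)
x = list(np.linspace(0, 1, 8))        # coarse initial design
for _ in range(20):                   # add samples where variation is high
    xs = np.sort(np.array(x))
    seg = np.abs(np.diff(target(xs))) # output variation across each gap
    i = int(np.argmax(seg))
    x.append(0.5 * (xs[i] + xs[i + 1]))  # bisect the most-varying gap

model = SVR(C=100.0).fit(np.array(x)[:, None], target(np.array(x)))
err = np.abs(model.predict(pool[:, None]) - target(pool)).max()
print(f"{len(x)} samples, max abs error: {err:.3f}")
```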
Procedia PDF Downloads 305
925 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
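A hedged sketch of the core problem and one of the proposed fixes: on synthetic three-class data with missing-at-random verification, the naive VUS computed from verified subjects only is biased, while the inverse-probability-weighted (IPW) estimator reweights each verified subject by the inverse of its estimated verification probability.

```python
# Hedged sketch: naive (verified-only) vs IPW-corrected nonparametric VUS,
# VUS = P(Y0 < Y1 < Y2), on synthetic data with MAR verification.
import numpy as np

rng = np.random.default_rng(1)
n = 3000
d = rng.integers(0, 3, n)                        # true class 0 < 1 < 2
y = rng.normal(loc=d.astype(float), scale=1.0)   # continuous test result
pi = 1 / (1 + np.exp(-(y - 1)))                  # P(verified | y): MAR selection
v = rng.uniform(size=n) < pi                     # verification indicator

def vus(y, d, w):
    """Weighted nonparametric VUS over all class-0/1/2 triples."""
    idx = [np.flatnonzero(d == k) for k in range(3)]
    num = den = 0.0
    for i in idx[0]:
        for j in idx[1]:
            for k in idx[2]:
                wt = w[i] * w[j] * w[k]
                den += wt
                num += wt * ((y[i] < y[j]) and (y[j] < y[k]))
    return num / den

sub = np.flatnonzero(v)[:300]                    # keep the O(n^3) demo small
naive = vus(y[sub], d[sub], np.ones(sub.size))
ipw = vus(y[sub], d[sub], 1.0 / pi[sub])         # reweight verified subjects
print(f"naive (biased) VUS: {naive:.3f}  IPW-corrected: {ipw:.3f}")
```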
Procedia PDF Downloads 416
924 A Three-Dimensional TLM Simulation Method for Thermal Effect in PV-Solar Cells
Authors: R. Hocine, A. Boudjemai, A. Amrani, K. Belkacemi
Abstract:
Temperature rise is a negative factor in almost all systems. It can be caused by self-heating or by ambient temperature. In solar photovoltaic cells, this temperature rise affects the behavior of the cells. The ability of a PV module to withstand the effects of periodic hot-spot heating that occurs when cells are operated under reverse-biased conditions is closely related to the properties of the cell semiconductor material. In addition, the thermal effect also influences the estimation of the maximum power point (MPP) and the electrical parameters of PV modules, such as maximum output power, maximum conversion efficiency, internal efficiency, reliability, and lifetime. The cell junction temperature is a critical parameter that significantly affects the electrical characteristics of PV modules. For practical applications of PV modules, it is very important to accurately estimate the junction temperature and analyze the thermal characteristics of the modules. Once the temperature variation is taken into account, a more accurate MPP for the PV modules can be acquired, and the maximum utilization efficiency of the PV modules can be further achieved. In this paper, the three-dimensional Transmission Line Matrix (3D-TLM) method was used to map the surface temperature distribution of solar cells in the reverse bias mode. It was observed that some cells exhibited an inhomogeneity of the surface temperature, resulting in localized heating (hot spots). This hot-spot heating causes irreversible destruction of the solar cell structure. Hot spots can have a deleterious impact on whole solar modules if individual solar cells are heated. The results show clearly that solar cells are capable of self-generating considerable amounts of heat that should be dissipated very quickly to increase the PV module's lifetime.
Keywords: thermal effect, conduction, heat dissipation, thermal conductivity, solar cell, PV module, nodes, 3D-TLM
Procedia PDF Downloads 387
923 Effects of Watershed Erosion on Stream Channel Formation
Authors: Tiao Chang, Ivan Caballero, Hong Zhou
Abstract:
Streams carry water and sediment naturally by maintaining channel dimensions, pattern, and profile over time. Watershed erosion, as a natural process, has contributed sediment to streams over time. The formation of channel dimensions is complex. This study relates quantifiable and consistent channel dimensions at the bankfull stage to the corresponding watershed erosion estimated by the Revised Universal Soil Loss Equation (RUSLE). Twelve sites whose drainage areas range from 7 to 100 square miles in the Hocking River Basin of Ohio were selected for the bankfull geometry determinations, including width, depth, cross-section area, bed slope, and drainage area. The twelve sub-watersheds were chosen to obtain a good overall representation of the Hocking River Basin. It is of interest to determine how these bankfull channel dimensions are related to the soil erosion of the corresponding sub-watersheds. Soil erosion is a natural process that has occurred in a watershed over time. The RUSLE was applied to estimate the erosion of the twelve selected sub-watersheds where the bankfull geometry measurements were conducted. These quantified erosions are used to investigate correlations with bankfull channel dimensions, including discharge, channel width, channel depth, cross-sectional area, and pebble distribution. It is found that drainage area, bankfull discharge and cross-sectional area correlate strongly with watershed erosion. Furthermore, bankfull width and depth are moderately correlated with watershed erosion, while the particle size, D50, of channel bed sediment is not well correlated with watershed erosion.
Keywords: watershed, stream, sediment, channel
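The RUSLE estimate used here is the product of five factors, A = R * K * LS * C * P. A minimal sketch with placeholder factor values (not the study's calibrated inputs for the Hocking River sub-watersheds):

```python
# The RUSLE soil-loss equation used to estimate sub-watershed erosion.
def rusle(R, K, LS, C, P):
    """Average annual soil loss A (e.g., tons/acre/year).

    R: rainfall-runoff erosivity   K: soil erodibility
    LS: slope length-steepness     C: cover-management   P: support practice
    """
    return R * K * LS * C * P

A = rusle(R=125.0, K=0.28, LS=1.6, C=0.12, P=1.0)  # illustrative values only
print(f"estimated soil loss: {A:.2f} tons/acre/year")
```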
Procedia PDF Downloads 288
922 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building
Authors: Aaditya U. Jhamb
Abstract:
Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores 8 factors that help determine the energy efficiency of a building (relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution), with Tsanas and Xifara providing the dataset. The dataset comprises 768 different residential building models, from which heating and cooling loads are to be anticipated with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. The paper therefore studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least definitive model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the 8 input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
Keywords: energy efficient buildings, heating load, cooling load, machine learning models
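A minimal sketch of the modeling setup, assuming synthetic stand-in data with the same shape as the 768-sample, 8-feature Tsanas and Xifara dataset, fed to a scikit-learn decision tree regressor:

```python
# Hedged sketch: decision-tree regression of a building load on the eight
# descriptors; replace the synthetic X, y with the real ENB dataset.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(size=(768, 8))               # compactness, areas, height, ...
y = 20 * X[:, 0] - 8 * X[:, 4] + 5 * X[:, 6] + rng.normal(0, 0.5, 768)  # toy load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_tr, y_tr)
mse = mean_squared_error(y_te, model.predict(X_te))
print(f"decision-tree test MSE: {mse:.3f}")
```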
Procedia PDF Downloads 96
921 Evaluation of Internal Friction Angle in Overconsolidated Granular Soil Deposits Using P- and S-Wave Seismic Velocities
Authors: Ehsan Pegah, Huabei Liu
Abstract:
Determination of the internal friction angle (φ) in natural soil deposits is an important issue in geotechnical engineering. The main objective of this study was to examine the evaluation of this parameter in overconsolidated granular soil deposits by using the P-wave velocity and the anisotropic components of S-wave velocity (i.e., both the vertical component (SV) and the horizontal component (SH) of S-wave). To this end, seventeen pairs of P-wave and S-wave seismic refraction profiles were carried out at three different granular sites in Iran using non-invasive seismic wave methods. The acquired shot gathers were processed, from which the P-wave, SV-wave and SH-wave velocities were derived. The reference values of φ and overconsolidation ratio (OCR) in the soil deposits were measured through laboratory tests. By assuming cross-anisotropy of the soils, the P-wave and S-wave velocities were utilized to develop an equation for calculating the coefficient of lateral earth pressure at-rest (K₀) based on the theory of elasticity for a cross-anisotropic medium. In addition, to develop an equation for OCR estimation in granular geomaterials in terms of SH/SV velocity ratios, a general regression analysis was performed on the resulting information from this research incorporated with the respective data published in the literature. The calculated K₀ values coupled with the estimated OCR values were finally employed in the Mayne and Kulhawy formula to evaluate φ in granular soil deposits. The results showed that reliable values of φ could be estimated based on the seismic wave velocities. The findings of this study may be used as the appropriate approaches for economic and non-invasive determination of in-situ φ in granular soil deposits using the surface seismic surveys.
Keywords: angle of internal friction, overconsolidation ratio, granular soils, P-wave velocity, SV-wave velocity, SH-wave velocity
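The final step can be made concrete: the Mayne and Kulhawy relation K₀ = (1 - sin φ) * OCR^(sin φ) is solved for φ once K₀ (from the wave-velocity-based elastic estimate) and OCR (from the SH/SV regression) are known. A sketch with illustrative inputs:

```python
# Solving the Mayne & Kulhawy relation for the friction angle phi.
import math
from scipy.optimize import brentq

def friction_angle(K0, OCR):
    """Solve K0 = (1 - sin(phi)) * OCR**sin(phi) for phi in degrees."""
    f = lambda phi: (1 - math.sin(phi)) * OCR ** math.sin(phi) - K0
    return math.degrees(brentq(f, math.radians(1), math.radians(60)))

print(f"phi = {friction_angle(K0=0.55, OCR=2.5):.1f} deg")  # illustrative inputs
```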
Procedia PDF Downloads 159
920 Designing and Prototyping Permanent Magnet Generators for Wind Energy
Authors: T. Asefi, J. Faiz, M. A. Khan
Abstract:
This paper introduces dual rotor axial flux machines with surface mounted and spoke type ferrite permanent magnets with concentrated windings; they are introduced as alternatives to a generator with surface mounted Nd-Fe-B magnets. The output power, voltage, speed and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. A finite element analysis (FEA) is applied to predict the performance, emf, developed torque, cogging torque, no load losses, leakage flux and efficiency of both ferrite generators and that of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole arc to pole pitch ratios are investigated by means of 3D FEA. It was found that the surface mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, and has higher torque ripple and is heavier than the spoke type machine. Furthermore, it was shown that the spoke type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified using experimental results.
Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms
Procedia PDF Downloads 152
919 Serum 25-Dihydroxy Vitamin D3 Level Estimation and Insulin Resistance in Women of 18-40 Years Age Group with Polycystic Ovarian Syndrome
Authors: Thakur Pushpawati, Singh Vinita, Agrawal Sarita, Mohapatra Eli
Abstract:
Polycystic ovary syndrome (PCOS) is an endocrine disease frequently encountered in women in their reproductive period, and it is characterized by clinical features of anovulation, clinical and biochemical features of hyperandrogenism, and PCOS morphology on ultrasonographic examination. In the Indian scenario, only a few studies are available on the correlation of serum 25-dihydroxy vitamin D3 level and insulin level. The present study is a prospective case-control study and aims to estimate the concentration of serum 25-dihydroxy vitamin D3 and insulin resistance, and to determine the association of serum 25-dihydroxy vitamin D3 with insulin resistance, in PCOS women of the 18-40 years age group. The primary objective is to estimate the concentration of 25-dihydroxy vitamin D3, insulin, glycaemic status, calcium and phosphorus levels in women of 18-40 years of age with polycystic ovary syndrome and to compare these parameters with age- and BMI-matched healthy controls of the same age group. The secondary objective is to determine the association between 25-dihydroxy vitamin D3 concentration and insulin resistance among PCOS cases in women of the 18-40 years age group. The study was carried out at the outpatient Department of Obstetrics & Gynaecology, AIIMS Raipur, and took one year from the date of approval. In the case group, 32 women of 18-40 years of age were diagnosed with PCOS as per the Rotterdam criteria; the control group comprised 32 women of the same age range. As a result, serum insulin level was elevated among PCOS women, along with 25-dihydroxy vitamin D3 deficiency. To conclude, PCOS is more common in the age group of 20-40 years, and there is a strong correlation between vitamin D deficiency and insulin resistance among PCOS patients.
Keywords: vitamin D, insulin resistance, PCOS, reproductive age group
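The abstract does not name the insulin-resistance index used; the widely used HOMA-IR is shown purely as an illustration (values above roughly 2.5 are commonly read as insulin resistance):

```python
# HOMA-IR, a common insulin-resistance index; shown as an assumption,
# not necessarily the index used in this study.
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(f"HOMA-IR: {homa_ir(glucose_mg_dl=96, insulin_uU_ml=14):.2f}")  # ~3.3
```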
Procedia PDF Downloads 135
918 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care in Bima Regency. The purpose of the research is to find the association patterns formed in the medical records database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the technique used for the analysis. Transaction data were generated from Patient_ID, Check_Date and diagnosis. Sequential Pattern Discovery using Equivalent classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database layout and a sequence join process. The result of the SPADE algorithm is a set of frequent sequences that are then used to form rules, a technique used to find the association patterns between item combinations. Based on sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the sequence of Patient_ID, Check_Date and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
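A hedged, minimal illustration of SPADE's vertical layout: each diagnosis keeps an id-list of (patient, date) occurrences, and joining two id-lists counts how many patients exhibit one diagnosis followed by the other. The records and diagnosis names below are invented for illustration, not Bolo PHC data.

```python
# Toy vertical id-list join in the spirit of SPADE's sequence join.
from collections import defaultdict

visits = [  # (patient_id, check_date, diagnosis) -- invented records
    (1, "2016-01-04", "ISPA"), (1, "2016-02-11", "gastritis"),
    (2, "2016-01-09", "ISPA"), (2, "2016-03-02", "gastritis"),
    (3, "2016-01-20", "hypertension"), (3, "2016-02-18", "ISPA"),
]

idlist = defaultdict(list)                       # vertical database layout
for pid, date, dx in visits:
    idlist[dx].append((pid, date))

def support(a, b, n_patients):
    """Fraction of patients with diagnosis a followed later by diagnosis b."""
    hits = {p for p, ta in idlist[a] for q, tb in idlist[b] if p == q and ta < tb}
    return len(hits) / n_patients

print(f"supp(ISPA -> gastritis) = {support('ISPA', 'gastritis', 3):.2f}")
```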
Procedia PDF Downloads 401
917 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data
Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao
Abstract:
Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, there is a scale effect in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multi-dimensional look-up table (LUT) is generated for the LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter-class and intra-class differences is constructed for the scale effect analysis of LAI inversion over an inhomogeneous surface. The results indicate that (1) the LUT method based on classification and parameter sensitivity analysis is useful for the LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with a correlation coefficient R2 of 0.82 and a root mean square error RMSE of 0.43 m² m⁻². (2) The scale effect of LAI becomes obvious with decreasing image resolution, and the maximum scale bias is more than 45%. (3) The inter-class scale effect is higher than the intra-class one and can be corrected efficiently by the scale transfer model established on the basis of the Taylor expansion and computational geometry; after correction, the maximum scale bias is reduced to 1.2%.
Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing
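A minimal sketch of LUT-based inversion under stated assumptions: a toy forward model stands in for PROSPECT+SAIL, the table maps candidate LAI values to simulated spectra, and retrieval returns the entry with the lowest RMSE against the observed spectrum.

```python
# Hedged sketch of LUT inversion; toy_forward is a placeholder for the
# PROSPECT+SAIL canopy radiative transfer simulation used in the paper.
import numpy as np

def toy_forward(lai, wl):                      # stand-in for PROSPECT+SAIL
    return 0.05 + 0.4 * (1 - np.exp(-0.5 * lai)) * np.exp(-((wl - 800) / 300) ** 2)

wl = np.linspace(400, 1000, 50)                # band centers (nm)
lut_lai = np.linspace(0.1, 6.0, 300)           # candidate LAI values
lut_spec = np.array([toy_forward(l, wl) for l in lut_lai])

def invert(observed):
    rmse = np.sqrt(((lut_spec - observed) ** 2).mean(axis=1))
    return lut_lai[np.argmin(rmse)]            # best-matching table entry

obs = toy_forward(2.7, wl) + np.random.default_rng(0).normal(0, 0.004, wl.size)
print(f"retrieved LAI: {invert(obs):.2f}")     # should be close to 2.7
```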
Procedia PDF Downloads 440
916 Using Cyclic Structure to Improve Inference on Network Community Structure
Authors: Behnaz Moradijamei, Michael Higgins
Abstract:
Identifying community structure is a critical task in analyzing social media data sets often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by adding the renewal non-backtracking random walk (RNBRW) to an existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of the RNBRW edge weights.
Keywords: hypothesis testing, RNBRW, network inference, community structure
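A hedged sketch of RNBRW edge weighting as described: steps never backtrack, and when a walk closes a cycle the closing edge is credited and the walk restarts. Within-community edges, sitting on more short cycles, accumulate more weight. The graph, walk count and seed below are illustrative.

```python
# Minimal RNBRW sketch: count, for each edge, how often it closes a cycle.
import random
import networkx as nx

def rnbrw_weights(G, n_walks=2000, seed=0):
    rng = random.Random(seed)
    w = {frozenset(e): 0 for e in G.edges()}
    nodes = list(G.nodes())
    for _ in range(n_walks):
        prev, cur = None, rng.choice(nodes)
        visited = {cur}
        while True:
            nbrs = [v for v in G[cur] if v != prev]   # non-backtracking step
            if not nbrs:
                break                                  # dead end: restart walk
            nxt = rng.choice(nbrs)
            if nxt in visited:                         # cycle completed
                w[frozenset((cur, nxt))] += 1
                break
            visited.add(nxt)
            prev, cur = cur, nxt
    return w

G = nx.planted_partition_graph(2, 20, 0.5, 0.05, seed=1)  # two communities
w = rnbrw_weights(G)
intra = [c for e, c in w.items() if len({v < 20 for v in e}) == 1]
inter = [c for e, c in w.items() if len({v < 20 for v in e}) == 2]
print(f"mean weight within: {sum(intra)/len(intra):.2f}, "
      f"across: {sum(inter)/len(inter):.2f}")
```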
Procedia PDF Downloads 151
915 Programming without Code: An Approach and Environment to Conditions-On-Data Programming
Authors: Philippe Larvet
Abstract:
This paper presents the concept of an object-based programming language where tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. According to the object paradigm, by using this concept, data are still embedded inside objects, as variable-value couples, but object methods are expressed in the form of logical propositions ('conditions on data' or COD). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. Implementing this approach, a central inference engine cycles through the objects, examining them one after another and collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (before the '=>' sign) is the premise and the right part is the conclusion. Premises are evaluated and conclusions are fired. The conclusions modify the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, the paper also presents several hints for implementing a simple mechanism able to process this 'COD language'. The proposed approach can be used within the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical and long-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation
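A toy rendering of the idea (ours, not the paper's environment): each object carries variable-value couples plus a list of premise => conclusion propositions, and a small engine cycles over the objects, firing conclusions until nothing changes.

```python
# Minimal COD-style rule engine sketch; names and the tank example are
# invented for illustration.
class CodObject:
    def __init__(self, data, cods):
        self.data = data          # variable-value couples
        self.cods = cods          # list of (premise, conclusion) callables

    def step(self):
        changed = False
        for premise, conclusion in self.cods:
            if premise(self.data):
                changed |= conclusion(self.data)
        return changed

def set_value(d, key, value):
    """Fire a conclusion; report whether the data actually changed."""
    if d.get(key) != value:
        d[key] = value
        return True
    return False

tank = CodObject(
    {"level": 95, "valve": "open", "alarm": "off"},
    [   # level > 90 AND valve = open  =>  alarm = on
        (lambda d: d["level"] > 90 and d["valve"] == "open",
         lambda d: set_value(d, "alarm", "on")),
    ],
)

engine = [tank]                   # the inference engine cycles over objects
while any(obj.step() for obj in engine):
    pass
print(tank.data)                  # {'level': 95, 'valve': 'open', 'alarm': 'on'}
```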
Procedia PDF Downloads 222
914 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass
Authors: Goodness Onwuka, Khaled Abou-El-Hossein
Abstract:
Borosilicate-crown (BK7) glass has found broad application in the optic and automotive industries, and nanometric surface finishes are becoming a necessity in such applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens. The research was carried out on a 4-axis Nanoform 250 precision lathe with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed and depth of cut at three levels in different combinations using a Box-Behnken design of experiment, and the resulting surface roughness values were measured using a Taylor Hobson Dimension XL optical profiler. The acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back propagation neural network prediction algorithm through careful selection of parameters, and the results show a better prediction accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding
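A minimal sketch of the prediction stage under stated assumptions: a small backpropagation network (scikit-learn's MLPRegressor, not necessarily the authors' toolchain) mapping normalized process inputs and an acoustic-emission feature to roughness, trained on synthetic stand-in data rather than the measured Ra values.

```python
# Hedged sketch: backpropagation network for surface roughness prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.uniform(size=(120, 4))        # feed rate, wheel speed, depth, AE feature
y = 5 + 3 * X[:, 0] - 2 * X[:, 1] + X[:, 3] + rng.normal(0, 0.1, 120)  # toy Ra (nm)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X[:100], y[:100])
print("predicted Ra:", model.predict(X[100:103]).round(2))
```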
Procedia PDF Downloads 305
913 Reuse of Wastewater After Pretreatment Under Teril and Sand in Bechar City
Authors: Sara Seddiki, Maazouzi Abdelhak
Abstract:
The main objective of this modest work is to follow the physicochemical and bacteriological evolution of the wastewater of the town of Bechar subjected to purification by filtration through various local media, namely sand and Terrill, reducing the nuisances imposed on the receiving environment (Oued Bechar) and therefore making this water source reusable in different areas. The study first characterized the urban wastewater of the Bechar wadi, which presents an environmental threat, allowing an estimation of the pollutant load: the chemical oxygen demand COD (145 mg/l) and the biological oxygen demand BOD5 (72 mg/l) revealed that these waters are of limited biodegradability (COD/BOD5 ratio = 0.62); they have a fairly high conductivity (2.76 mS/cm) and high levels of mineral matter, represented by chlorides and sulphates at 390 and 596.1 mg/l respectively, with a pH of 8.1. The characterization of the dune sand (Beni Abbes) shows that quartz (97%) is the most abundant mineral. The granular analysis allowed us to determine certain parameters such as the uniformity coefficient (CU) and the equivalent diameter, and scanning electron microscope (SEM) observations and X-ray analysis were performed. The study of the filtered wastewater shows satisfactory and very encouraging treatment results, with complete elimination of total coliforms and streptococci and a good reduction of total aerobic germs in the sand and clay-sand filters. A good yield was reported in the sand-Terrill filter for the reduction of turbidity. The recorded reduction rates of organic matter, in terms of biological and chemical oxygen demand, are of the order of 60%. The elimination of sulphates is 40% for the sand filter.
Keywords: urban wastewater, filtration, bacteriological and physicochemical parameters, sand, Terrill, Oued Bechar
Procedia PDF Downloads 95
912 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market
Authors: Cristian Păuna
Abstract:
After the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence system of any modern financial investment company. An important part of trades today is made completely automatically by computers using mathematical algorithms. The trading decisions are taken almost instantly by logical models, and the orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands to predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior. Using these predictions, the model can automatically adapt the trading signals in real-time to maximize the trading results. The paper reveals the methodology to optimize and implement this model in automated trading systems. After tests, it is proved that this methodology can be applied with good efficiency in different timeframes. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. As a conclusion, it was found that the price prediction model using the price cyclicality function is a reliable trading methodology for algorithmic trading in the financial market.
Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction
Procedia PDF Downloads 184
911 A Multi-Layer Based Architecture for the Development of an Open Source CAD/CAM Integration Virtual Platform
Authors: Alvaro Aguinaga, Carlos Avila, Edgar Cando
Abstract:
This article proposes an n-layer architecture, with a web client as a front-end, for the development of a virtual platform for process simulation on CNC machines. This open-source platform includes a CAD-CAM interface for drawing primitives, which is then used to produce a CNC program that drives a touch-screen virtual simulator. The objectives of this project are twofold. The first is an educational component that fosters new alternatives for the CAD-CAM/CNC learning process in undergraduate and graduate schools and in technical and technological institutes, emphasizing the development of critical skills, discussion and collaborative work. The second objective puts together a research and technological component that will take the state of the art in CAD-CAM integration to a new level with the development of optimal algorithms and virtual platforms, available on-line, that will pave the way for the long-term goal of this project, that is, to have a visible and active graduate school in Ecuador and a worldwide open-innovation community in the area of CAD-CAM integration and operation of CNC machinery. The virtual platform developed as a part of this study: (1) delivers an improved training process for students, (2) creates a multidisciplinary team and a collaborative work space that will push the new generation of students to face future technological challenges, (3) implements industry standards for CAD/CAM, (4) presents a platform for the development of industrial applications. A prototype of this system was developed and implemented in a network of universities and technological institutes in Ecuador.
Keywords: CAD-CAM integration, virtual platforms, CNC machines, multi-layer based architecture
Procedia PDF Downloads 428
910 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability
Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu
Abstract:
In this study, the characteristics and flight tests of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are analyzed. The base platform is chosen as a conventional airplane with throttle, aileron, elevator and rudder control surfaces, which inherently allows level flight. This aircraft is then mechanically modified by the integration of vertical propellers, as in multirotors, in order to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed utilizing small perturbation theory for trim conditions. Flight characteristics are analyzed by benefiting from linear control theory's state space approach. Distinctive features of the aircraft are discussed based on the analysis results, with comparison to conventional aircraft platform types. A hybrid control system is proposed in order to exploit these unique flight characteristics. The main approach includes the design of different controllers for different modes of operation and a hand-over logic that makes flight in an enlarged flight envelope viable. Simulation tests performed on mathematical models verify the asserted algorithms. Flight tests conducted in the real world revealed the applicability of the proposed methods in exploiting the fixed-wing and rotary-wing characteristics of the aircraft, which provide agility, survivability and functionality.
Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle
Procedia PDF Downloads 329
909 Social Media Resignation the Only Way to Protect User Data and Restore Cognitive Balance, a Literature Review
Authors: Rajarshi Motilal
Abstract:
The birth of the Internet and the rise of social media marked an important chapter in the history of humankind. Often termed the fourth scientific revolution, the Internet has changed human lives and cognisance. The birth of Web 2.0, followed by the launch of social media and social networking sites, added another milestone to these technological advancements, where connectivity and the influx of information became dominant. With billions of individuals using the internet and social media sites in the 21st century, 'users' became 'consumers', and orthodox marketing reshaped itself into digital marketing. Furthermore, organisations started using sophisticated algorithms to predict consumer purchase behaviour and manipulate it to sustain themselves in such a competitive environment. The rampant storage and analysis of individual data became the new normal, raising many questions about data privacy. The excessive usage of the Internet among individuals brought in other problems: addiction, scavenging for societal approval and instant gratification, subsequently leading to a collective dualism, isolation, and finally, depression. This study aims to determine the relationship between social media usage in the modern age and the rise of psychological and cognitive imbalances in human minds. The literature review is timely positioned as an addition to the existing work at a time when the world is constantly debating whether social media resignation is the only way to protect user data and restore the decaying cognitive balance.
Keywords: social media, digital marketing, consumer behaviour, internet addiction, data privacy
Procedia PDF Downloads 76
908 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. Calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross validation (RMSECV) and the root mean square error of estimation (RMSEE), as well as the number of PLS factors, were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). Spectra of the pasta samples were treated with different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method had a very good correlation with the values determined via traditional methods (R2 = 0.983), which clearly indicated that FT-NIR methods could be used as an effective tool for rapid determination of the moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) value of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
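A hedged sketch of the calibration pipeline: per-spectrum min-max normalization (the MMN pre-processing selected above) followed by PLS regression of moisture on the spectra. The synthetic spectra below stand in for the 4,000-12,000 cm-1 FT-NIR scans.

```python
# Hedged sketch: MMN pre-processing + PLS calibration for moisture.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n, p = 150, 200                                   # samples x spectral points
moisture = rng.uniform(10, 15, n)                 # % moisture (wet basis)
wn = np.arange(p)
ref_band = 0.5 * np.exp(-0.5 * ((wn - 60) / 8) ** 2)   # fixed reference band
water_band = np.exp(-0.5 * ((wn - 140) / 8) ** 2)      # moisture-sensitive band
spectra = (1.0 + ref_band + 0.02 * moisture[:, None] * water_band
           + rng.normal(0, 0.005, (n, p)))

def min_max_normalize(X):                          # MMN pre-processing
    lo = X.min(axis=1, keepdims=True)
    hi = X.max(axis=1, keepdims=True)
    return (X - lo) / (hi - lo)

X = min_max_normalize(spectra)
pls = PLSRegression(n_components=5).fit(X[:120], moisture[:120])
r2 = r2_score(moisture[120:], pls.predict(X[120:]).ravel())
print(f"R2 on held-out samples: {r2:.3f}")
```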
Procedia PDF Downloads 258
907 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution
Authors: Pitigalage Chamath Chandira Peiris
Abstract:
A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies, as well as a variety of other approaches. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have been made using Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive. However, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. A method for single image super-resolution was developed in this study, which utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets to those obtained by other techniques in the domain.
Keywords: single image super resolution, computer vision, vision transformers, image restoration
Procedia PDF Downloads 105
906 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research on the emerging condition-based maintenance concept: prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to a surge in research on standard feature extraction techniques for the optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency-based features from their vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging classification and regression algorithms like locally linear embedding (LLE), we propose a method for the comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE, as a comprehensive feature extraction technique, yields better feature fusion/dimensionality reduction results for the condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
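A hedged sketch of the fusion step: time- and frequency-domain features are stacked per vibration record, and locally linear embedding compresses them into a one-dimensional health indicator. The signals below are synthetic stand-ins for EMFP vibration data.

```python
# Hedged sketch: multi-domain feature extraction + LLE fusion.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(5)

def features(sig, fs=1000):
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return [
        sig.std(),                                   # time: RMS-like level
        np.abs(sig).max() / sig.std(),               # time: crest factor
        (freqs * spec).sum() / spec.sum(),           # freq: spectral centroid
        spec[: spec.size // 2].sum() / spec.sum(),   # freq: low-band ratio
    ]

# two synthetic conditions: healthy vs degraded (extra harmonic + noise)
t = np.arange(2048) / 1000
healthy = [np.sin(2*np.pi*50*t) + 0.1*rng.normal(size=t.size) for _ in range(30)]
faulty = [np.sin(2*np.pi*50*t) + 0.6*np.sin(2*np.pi*180*t)
          + 0.3*rng.normal(size=t.size) for _ in range(30)]
X = np.array([features(s) for s in healthy + faulty])

emb = LocallyLinearEmbedding(n_neighbors=10, n_components=1).fit_transform(X)
print("mean indicator healthy vs faulty:",
      emb[:30].mean().round(3), emb[30:].mean().round(3))
```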
Procedia PDF Downloads 117
905 A Study of Fatigue Life Estimation of a Modular Unmanned Aerial Vehicle by Developing a Structural Health Monitoring System
Authors: Zain Ul Hassan, Muhammad Zain Ul Abadin, Muhammad Zubair Khan
Abstract:
Unmanned aerial vehicles (UAVs) have now become of predominant importance for various operations, and an immense amount of work is going on in this specific category. The structural stability and life of these UAVs are key factors that should be considered while deploying them on different intelligent operations, as their failure leads to loss of sensitive real-time data and cost. This paper presents applied research on the development of a structural health monitoring system for a UAV designed and fabricated using a modular approach. Firstly, a modular UAV has been designed that allows its components to be dismantled and reassembled without affecting the whole assembly of the UAV. This novel approach makes the vehicle very sustainable and decreases its maintenance cost significantly by making it possible to replace only the part leading to failure. The SHM for the designed architecture of the UAV was then specified as a combination of wings integrated with strain gauges, an on-board data logger, bridge circuitry and the ground station. For the purpose of this research, sensors were attached only to the wings, these being the most load-bearing part according to the analysis done in ANSYS. On the basis of the analysis of the load-time spectrum obtained by the data logger during flight, the fatigue life of the respective component has been predicted using the fracture mechanics techniques of the rainflow method and Miner's rule, thus allowing us to monitor the health of a specified component from time to time and helping to avoid failure.
Keywords: fracture mechanics, rain flow method, structural health monitoring system, unmanned aerial vehicle
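The life-estimation step can be sketched: rainflow-count the stress cycles in a recorded load history (here via the third-party rainflow package), then accumulate damage with Miner's rule against a Basquin-type S-N curve. All constants below are illustrative, not the UAV wing's material data.

```python
# Hedged sketch: rainflow counting + Miner's rule damage summation.
import numpy as np
import rainflow                        # pip install rainflow

def cycles_to_failure(stress_amp, C=2.0e12, m=3.0):
    """Basquin S-N curve N = C / S^m (illustrative constants)."""
    return C / stress_amp ** m

rng = np.random.default_rng(11)        # synthetic stand-in for logger data
history = 40 * np.sin(np.linspace(0, 60 * np.pi, 4000)) + rng.normal(0, 8, 4000)

damage = 0.0
for stress_range, count in rainflow.count_cycles(history):
    damage += count / cycles_to_failure(stress_range / 2)  # amp = range/2

print(f"damage per flight: {damage:.2e}, est. flights to failure: {1/damage:.0f}")
```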
Procedia PDF Downloads 294
904 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is an emerging system of representing a number using two bases, namely 2 and 3, which has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary method of representation included only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. By using this algorithm, the number of digits required to represent a large number is smaller when compared to the standard binary method that uses base-2 algorithms; hence, computational speed is increased and time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the binary digit 1 alone to represent any number (canonical form). The greedy algorithm offers two ways to represent the number: one using only positive summands, and the other using both positive and negative summands. In this paper, such arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography, and appears to be considerably harder than the discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key in decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
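The positive-summand greedy decomposition is concrete enough to sketch: repeatedly subtract the largest {2,3}-integer 2^a * 3^b not exceeding the remainder, which yields short all-digit-1 representations such as 127 = 2^2*3^3 + 2*3^2 + 1.

```python
# Greedy DBNS decomposition with positive summands only.
def best_2_3_term(n):
    """Largest value 2**a * 3**b <= n, with its exponents."""
    best = (1, 0, 0)
    p3, b = 1, 0
    while p3 <= n:
        a = (n // p3).bit_length() - 1       # largest a with 2^a * 3^b <= n
        val = (1 << a) * p3
        if val > best[0]:
            best = (val, a, b)
        p3 *= 3
        b += 1
    return best

def greedy_dbns(n):
    terms = []
    while n > 0:
        val, a, b = best_2_3_term(n)
        terms.append((a, b))                 # one canonical digit-1 term
        n -= val
    return terms

print(greedy_dbns(127))   # [(2, 3), (1, 2), (0, 0)] -> 108 + 18 + 1
```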
Procedia PDF Downloads 396
903 The Application of AI in Developing Assistive Technologies for Non-Verbal Individuals with Autism
Authors: Ferah Tesfaye Admasu
Abstract:
Autism Spectrum Disorder (ASD) often presents significant communication challenges, particularly for non-verbal individuals who struggle to express their needs and emotions effectively. Assistive technologies (AT) have emerged as vital tools in enhancing communication abilities for this population. Recent advancements in artificial intelligence (AI) hold the potential to revolutionize the design and functionality of these technologies. This study explores the application of AI in developing intelligent, adaptive, and user-centered assistive technologies for non-verbal individuals with autism. Through a review of current AI-driven tools, including speech-generating devices, predictive text systems, and emotion-recognition software, this research investigates how AI can bridge communication gaps, improve engagement, and support independence. Machine learning algorithms, natural language processing (NLP), and facial recognition technologies are examined as core components in creating more personalized and responsive communication aids. The study also discusses the challenges and ethical considerations involved in deploying AI-based AT, such as data privacy and the risk of over-reliance on technology. Findings suggest that integrating AI into assistive technologies can significantly enhance the quality of life for non-verbal individuals with autism, providing them with greater opportunities for social interaction and participation in daily activities. However, continued research and development are needed to ensure these technologies are accessible, affordable, and culturally sensitive.
Keywords: artificial intelligence, autism spectrum disorder, non-verbal communication, assistive technology, machine learning
Procedia PDF Downloads 19
902 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes
Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi
Abstract:
The purpose of structural control against earthquakes is to dissipate the earthquake input energy to the structure and to reduce the plastic deformation of structural members. There are different methods of structural control against earthquakes for reducing the structural response: active, semi-active, passive and hybrid. In this paper, two different combined control systems are used: the first system comprises a base isolator and multiple tuned mass dampers (BI & MTMD), and the other combination is a hybrid base isolator and multiple tuned mass dampers (HBI & MTMD), for controlling an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by fuzzy logic algorithms. The influences of the combined systems on the responses of the benchmark structure under two near-field earthquakes (Newhall & El Centro) are evaluated by nonlinear dynamic time history analysis. Applications of combined control systems consisting of passive or active systems installed in parallel to base-isolation bearings have the capability of significantly reducing the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.
Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm
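A toy version of the fuzzy force estimation (our illustration, not the paper's rule base): triangular membership functions on isolator drift and velocity, a two-rule Mamdani-style base that opposes the motion, and a weighted-average defuzzification. Universes and gains are illustrative only.

```python
# Hedged toy fuzzy controller for an active hybrid isolator force command.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_force(drift, vel, f_max=1000.0):
    # degrees of membership: negative / positive for each input
    d_neg, d_pos = tri(drift, -0.4, -0.2, 0.0), tri(drift, 0.0, 0.2, 0.4)
    v_neg, v_pos = tri(vel, -2.0, -1.0, 0.0), tri(vel, 0.0, 1.0, 2.0)
    # rule base: push positive when the response is negative, and vice versa
    w_pos = max(d_neg, v_neg)        # fires "force positive"
    w_neg = max(d_pos, v_pos)        # fires "force negative"
    if w_pos + w_neg == 0:
        return 0.0
    return f_max * (w_pos - w_neg) / (w_pos + w_neg)  # centroid of singletons

print(f"command: {fuzzy_force(drift=0.15, vel=-0.3):.1f} N")
```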
Procedia PDF Downloads 304