Search results for: measurement accuracy
3787 Fully Printed Strain Gauges: A Comparison of Aerosoljet-Printing and Micropipette-Dispensing
Authors: Benjamin Panreck, Manfred Hild
Abstract:
Strain sensors based on a change in resistance are well established for the measurement of forces, stresses, or material fatigue. Within the scope of this paper, fully additively manufactured strain sensors were produced using an ink of silver nanoparticles. Their behavior was evaluated by periodic tensile tests. Printed strain sensors exhibit two advantages: their measuring grid is adaptable to the use case, and they do not need a carrier foil, as the measuring structure can be printed directly onto a thin sprayed varnish layer on the aluminum specimen. In order to compare quality characteristics, the sensors were manufactured using two different technologies, namely aerosoljet-printing and micropipette-dispensing. Both processes produce structures which exhibit continuous features (in contrast to what can be achieved with droplets during inkjet printing). Briefly summarized, the results show that aerosoljet-printing is the preferable technology for specimens with non-planar surfaces, whereas both technologies are suitable for flat specimens.
Keywords: aerosoljet-printing, micropipette-dispensing, printed electronics, printed sensors, strain gauge
Procedia PDF Downloads 204
3786 Characteristics of Ozone Generated from Dielectric Barrier Discharge Plasma Actuators
Authors: R. Osada, S. Ogata, T. Segawa
Abstract:
Dielectric barrier discharge plasma actuators (DBD-PAs) have been developed as active flow control devices. However, it is necessary to reduce the ozone produced by DBD for practical applications of DBD-PAs. In this study, variations of ozone concentration, flow velocity, and power consumption were investigated by changing the exposed electrodes of DBD-PAs. Two exposed electrode prototypes were prepared: a span-type with an exposed electrode width of 0.1 mm, and a normal-type with a width of 5 mm. It was found that the span-type shows lower power consumption and higher flow velocity than the normal-type at Vp-p = 4.0-6.0 kV. The ozone concentration of the span-type was higher than that of the normal-type at Vp-p = 4.0-8.0 kV. In addition, it was confirmed that a catalyst located downstream from the exposed electrode can reduce the ozone concentration by 18-42% without affecting the induced flow.
Keywords: dielectric barrier discharge plasma actuators, ozone diffusion, PIV measurement, power consumption
Procedia PDF Downloads 242
3785 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
Study of the effect of high temperatures on polymer coatings represents an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, for example in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes modification of the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring at times of temperature fluctuation. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is typically prolonged, so studies usually involve measuring the variation of one or more physical and mechanical properties of such coatings over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized by two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicon solution (series 6-10). In order to enhance the stability of the film formed on the brick's surface and make it resistant to variable temperature and humidity loading, nano silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano silica phase was obtained by disintegrating the mixture through sonication. The changes in viscosity and surface tension of the polymers were determined, as these are the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at specified numbers of cycles) the changes in surface hardness (using the Vickers method) and the absorption of individual samples. As a result, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.
Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
Procedia PDF Downloads 243
3784 Developing the Methods for the Study of Static and Dynamic Balance
Authors: K. Abuzayan, H. Alabed, J. Ezarrugh, M. Agila
Abstract:
Static and dynamic balance are essential in daily and sports life. Many factors have been identified as influencing static balance control. Therefore, the aim of this study was to apply the extrapolated centre of mass (XCoM) method and other relevant variables (CoP, CoM, Fh, KE, P, Q, and AI) to investigate sport-related activities such as hopping and jumping. Many studies have presented CoP data without mentioning its accuracy, so several experiments were done to establish the agreement between the CoP and the projected CoM in a static condition. Five healthy males (mean ± SD: age 24.6 ± 4.5 years, height 177 ± 6.3 cm, body mass 72.8 ± 6.6 kg) participated in this study. The implementation of the XCoM method was found to be practical for evaluating both static and dynamic balance. The general findings were that the CoP, the CoM, the XCoM, Fh, and Q were more informative than the other variables (e.g. KE, P, and AI) during static and dynamic balance. The XCoM method was found to be applicable to dynamic balance as well as static balance.
Keywords: centre of mass, static balance, dynamic balance, extrapolated centre of mass
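For context, the XCoM definition is not restated in the abstract; in the standard formulation of Hof and colleagues, on which this method is based, the extrapolated centre of mass adds the CoM velocity, scaled by the body's inverted-pendulum eigenfrequency, to the CoM position. A sketch of that usual definition (not necessarily the exact variant used here):

```latex
% Extrapolated centre of mass (standard formulation, Hof et al.)
% x_CoM: horizontal CoM position, v_CoM: horizontal CoM velocity,
% l: equivalent pendulum length (approx. CoM height), g: gravity.
\[
\mathrm{XCoM} = x_{\mathrm{CoM}} + \frac{v_{\mathrm{CoM}}}{\omega_0},
\qquad
\omega_0 = \sqrt{\frac{g}{l}}
\]
% Balance is maintained while the XCoM stays within the base of
% support, whose boundary the CoP can reach.
```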
Procedia PDF Downloads 421
3783 Reliable and Energy-Aware Data Forwarding under Sink-Hole Attack in Wireless Sensor Networks
Authors: Ebrahim Alrashed
Abstract:
Wireless sensor networks are vulnerable to attacks from adversaries attempting to disrupt their operations. Sink-hole attacks are a type of attack in which an adversary node drops the data forwarded through it, thereby affecting the reliability and accuracy of the network. Since sensor nodes have limited battery power, it is essential that any solution to the sink-hole attack problem be very energy-aware. In this paper, we present a reliable and energy-efficient scheme to forward data from source nodes to the base station while under sink-hole attack. The scheme also detects sink-hole attack nodes and avoids paths that include them.
Keywords: energy-aware routing, reliability, sink-hole attack, WSN
Procedia PDF Downloads 398
3782 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of the six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718, for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed PNRI model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
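The abstract describes the PNRI training loop in enough detail to sketch: a sigmoid logistic regression fitted by gradient ascent on the log-likelihood, followed by dynamic thresholding on the ROC. The sketch below follows that description under stated assumptions: the six features are random stand-ins for real transcriptome-derived features, and choosing the cutoff by Youden's J is our assumption, since the exact dynamic-thresholding criterion is not given.

```python
# Sketch of a PNRI-style classifier: sigmoid logistic regression trained
# by maximising the log-likelihood, then dynamic thresholding on the ROC.
# Feature data are synthetic; the epoch count mirrors the abstract's 390.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=390):
    """Gradient ascent on sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return w

def dynamic_threshold(y_true, scores):
    """Assumed rule: maximise sensitivity + specificity - 1 (Youden's J)."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1)); fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0)); fp = np.sum(pred & (y_true == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))   # six hypothetical sequence features
# True weights echo the paper's reported parameter vector, for flavour only.
y = (X @ np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575]) > 0).astype(float)
w = fit_logistic(X, y)
t = dynamic_threshold(y, sigmoid(X @ w))
print("learned weights:", w.round(3), " threshold:", round(t, 3))
```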
Procedia PDF Downloads 69
3781 Experimental Measurement for Vehicular Communication Evaluation Using Obu Arada System
Authors: Aymen Sassi
Abstract:
The equipment of vehicles with wireless communication capabilities is expected to be the key to the evolution of next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of an efficient vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, called V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. Accordingly, this study focuses on the evaluation of the real performance of vehicular communication, with special focus on the effects of the real environment and mobility on V2X communication. It starts by identifying the real maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission system was used to test and evaluate the impact of the transmission range in V2X communication. The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into account.
Keywords: IEEE 802.11p, V2I, V2X, mobility, PLR, Arada LocoMate OBU, maximum range
Procedia PDF Downloads 415
3780 Determinants of Corporate Social Responsibility Adoption: Evidence from China
Authors: Jing (Claire) LI
Abstract:
The two decades of economic reforms from 2000 to 2020 have brought China unprecedented economic growth. There is an urgent call for research on corporate social responsibility (CSR) in the context of China because, while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies, and the author proposes a new framework of factors of CSR adoption. Following common organisational factors and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors: dynamic capability and regional culture. A survey questionnaire on CSR adoption was administered to Chinese companies listed on the Shenzhen and Shanghai indices from December 2019 to March 2020, collecting data on the factors that affect the adoption of CSR. After data collection, this study performed factor analysis to reduce the number of measurement items to several main factors; this procedure confirms the proposed framework and establishes the significant factors. Through this analysis, the study identifies four grouped factors as determinants of CSR adoption. The first factor loading includes dynamic capability and organisational support; both are positively related to the first factor, so it mainly reflects the capabilities of companies, which is one component of internal factors. In the second factor, the measurement items of stakeholder pressures come mainly from regulatory bodies, customers and suppliers, employees and the community, and shareholders; they are positively related to the second factor, which reflects stakeholder pressures, one component of external factors. The third factor reflects organisational characteristics; its variables include company size and cultural score, with company size negatively related to the factor. The resulting factor loading of the third factor implies that the organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to that factor. It represents the difference between managers' perceptions and the actual culture of the organisations in terms of cultural dimensions, which is one component of internal factors, implying that regional culture is an important factor in CSR adoption. Overall, the results are consistent with previous literature. This study is significant from both theoretical and empirical perspectives. First, from a theoretical perspective, this research combines stakeholder theory, the dynamic capability view of the firm, and neo-institutional theory in CSR research. Based on the association of these three theories, this study introduces two new factors (dynamic capability and regional culture) to build a better framework for CSR adoption. Second, this study contributes to the empirical literature on CSR in the context of China. Existing Chinese companies lack recognition of the importance of adopting CSR practices. This study built a framework that may help companies to design resource allocation strategies and evaluate future CSR and management practices at an early stage.
Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture
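A minimal sketch of the factor-reduction step described above, assuming a standard factor analysis on Likert-type survey responses; the item count, response data, and four-factor solution are illustrative placeholders, not the study's instrument.

```python
# Sketch of condensing survey items into a few latent factors.
# Responses are simulated stand-ins for the questionnaire data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items = 200, 12
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # Likert 1-5

fa = FactorAnalysis(n_components=4, random_state=0)  # four grouped factors
scores = fa.fit_transform(responses)

# Loadings show which measurement items belong to which latent factor.
print("loadings shape:", fa.components_.T.shape)   # (items, factors)
print("factor scores shape:", scores.shape)        # (respondents, factors)
```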
Procedia PDF Downloads 136
3779 Effects of Reversible Watermarking on Iris Recognition Performance
Authors: Andrew Lock, Alastair Allen
Abstract:
Fragile watermarking has been proposed as a means of adding security or functionality to biometric systems, particularly for authentication and tamper detection. In this paper we describe an experimental study on the effect that watermarking iris images with a particular class of fragile algorithms, reversible algorithms, has on the ability to correctly perform iris recognition. We investigate two scenarios: matching watermarked images to unmodified images, and matching watermarked images to watermarked images. We show that different watermarking schemes give very different results for a given capacity, highlighting the importance of this investigation. At high embedding rates most algorithms cause a significant reduction in recognition performance. However, in many cases, at low embedding rates, recognition accuracy is improved by the watermarking process.
Keywords: biometrics, iris recognition, reversible watermarking, vision engineering
Procedia PDF Downloads 459
3778 Research of Intrinsic Emittance of Thermal Cathode with Emission Nonuniformity
Authors: Yufei Peng, Zhen Qin, Jianbe Li, Jidong Long
Abstract:
The thermal cathode is widely used in accelerators, free-electron lasers (FELs), and various kinds of vacuum electronics. However, emission nonuniformity exists due to surface profile, material distribution, temperature variation, crystal orientation, etc., which causes intrinsic emittance growth, brightness decline, envelope size augmentation, and device performance deterioration or even failure. To understand how emittance is affected by emission nonuniformity, an intrinsic emittance model consisting of contributions from macro and micro surface nonuniformity was developed analytically, based on the general thermal emission model in the temperature-limited regime, for a real 3 mm cathode. The model shows that relative emittance increased by about 50% due to temperature variation, and by less than 5% from several kinds of micro surface nonuniformity, which is much smaller than reported in other research. In addition, emittance growth was calculated by combining the Monte Carlo method with PIC simulation; experiments on emission uniformity and emittance measurement are going to be carried out separately.
Keywords: thermal cathode, electron emission fluctuation, intrinsic emittance, surface nonuniformity, cathode lifetime
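For reference, the nonuniformity model builds on the standard intrinsic (thermal) emittance of a uniform cathode in the temperature-limited regime; a sketch of that baseline expression is given below (the paper's macro/micro correction terms are not reproduced here).

```latex
% Intrinsic rms emittance of a uniform circular thermionic cathode of
% radius r_c at temperature T (standard baseline result):
\[
\varepsilon_{n,\mathrm{rms}} = \frac{r_c}{2}\sqrt{\frac{k_B T}{m_e c^2}}
\]
% k_B: Boltzmann constant, m_e: electron mass, c: speed of light.
% Temperature variation across the cathode enters through T(r),
% which the paper's model treats as a nonuniformity contribution.
```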
Procedia PDF Downloads 299
3777 Unified Structured Process for Health Analytics
Authors: Supunmali Ahangama, Danny Chiang Choon Poo
Abstract:
Health analytics (HA) is used in healthcare systems for effective decision-making, management, and planning of healthcare and related activities. However, user resistance, the unique nature of medical data content and structure (including heterogeneous and unstructured data), and impromptu HA projects have held up progress in HA applications. Notably, the accuracy of outcomes depends on the skills and domain knowledge of the data analyst working on the healthcare data. The success of HA depends on having a sound process model, effective project management, and the availability of supporting tools. Thus, to overcome these challenges through an effective process model, we propose an HA process model with features from the rational unified process (RUP) model and agile methodology.
Keywords: agile methodology, health analytics, unified process model, UML
Procedia PDF Downloads 507
3776 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
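As an illustration of the frontal alpha asymmetry measure highlighted above, a minimal sketch using Welch power spectra is shown below; the F3/F4 electrode pair, the 8-13 Hz alpha band, and the synthetic signals are assumptions standing in for the reviewed studies' varied pipelines.

```python
# Sketch of a frontal alpha asymmetry (FAA) index: log alpha-band power
# on a right frontal site (F4) minus a left one (F3). Because alpha power
# is inversely related to cortical activation, FAA > 0 is conventionally
# read as relatively greater left-frontal activation (approach/positive).
import numpy as np
from scipy.signal import welch

fs = 256                                         # sampling rate in Hz (assumed)
rng = np.random.default_rng(2)
eeg = {"F3": rng.normal(size=fs * 60), "F4": rng.normal(size=fs * 60)}

def alpha_power(signal, fs, band=(8.0, 13.0)):
    f, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[mask], f[mask])          # integrated alpha-band power

faa = np.log(alpha_power(eeg["F4"], fs)) - np.log(alpha_power(eeg["F3"], fs))
print("FAA index:", round(faa, 4))
```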
Procedia PDF Downloads 114
3775 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (men to women with a 2:1 ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, obesity index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, including logistic regression, stepwise logistic regression, neural network, and decision tree, were incorporated to generate prediction results. We used area under the curve (AUC) and accuracy (Acc.) to compare the four models; the best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had a sensitivity of 18.2% and specificity of 92.3%; the decision tree had a sensitivity of 13.6% and specificity of 100%; logistic regression had a sensitivity of 27.3% and specificity of 89.2%. From the results of this study, we hope to improve the accuracy in the future by tuning the model parameters or other methods, and to solve the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
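A minimal sketch of the four-model AUC/accuracy comparison, assuming scikit-learn implementations; the data are synthetic stand-ins for the 421-patient dataset, and approximating stepwise selection with an L1-penalised logistic regression is our assumption.

```python
# Sketch of comparing the four classifiers on AUC and accuracy with the
# same 80/20 split as the study. Class imbalance is simulated to echo
# the reported low-sensitivity / high-specificity pattern.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=421, n_features=11, weights=[0.8],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "stepwise": LogisticRegression(penalty="l1", solver="liblinear"),  # stand-in
    "tree":     DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural":   MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    prob = m.predict_proba(X_te)[:, 1]
    print(f"{name:8s} AUC={roc_auc_score(y_te, prob):.2f} "
          f"Acc={accuracy_score(y_te, m.predict(X_te)):.2f}")
```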
Procedia PDF Downloads 229
3774 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement
Authors: Maatoug Hassine, Mourad Hrizi
Abstract:
In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, the size, and the shape of the source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization problem, and a topological sensitivity analysis is derived for the source function. We then present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support, using a level curve of the topological gradient. Finally, we give several examples to show the viability of our method.
Keywords: geometric inverse source problem, heat equation, topological optimization, topological sensitivity, Kohn-Vogelius formulation
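For orientation, a sketch of the Kohn-Vogelius idea is given below, written in stationary form for brevity (the paper treats the time-dependent heat equation, so this is a simplification, not the paper's exact functional).

```latex
% Two auxiliary problems use the over-determined boundary data: u_D
% satisfies the Dirichlet datum f, u_N the Neumann datum g. The unknown
% source F is sought by minimising the energy gap between the solutions:
\[
\mathcal{J}(F) = \int_{\Omega} \left| \nabla u_D(F) - \nabla u_N(F) \right|^2 \, dx,
\]
\[
-\Delta u_D = F \ \text{in } \Omega,\quad u_D = f \ \text{on } \partial\Omega;
\qquad
-\Delta u_N = F \ \text{in } \Omega,\quad \partial_n u_N = g \ \text{on } \partial\Omega.
\]
% J vanishes exactly when the reconstructed source reproduces both data;
% the topological gradient of J indicates where to place the support.
```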
Procedia PDF Downloads 301
3773 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system: it is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers, in which convolutional and pooling operations are performed. The output of this system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; the several that gave the best results were selected and then compared with models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
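A minimal sketch of such a CNN forecaster is given below, assuming PyTorch; the 48-hour input window, channel count, and layer sizes are illustrative choices, not one of the 1000+ configurations actually tested.

```python
# Sketch of a 1-D CNN mapping the last 48 hours of sensor inputs
# (PM2.5, PM10, temperature, wind, external forecasts) to a 24-element
# vector of hourly PM10 predictions for the next day.
import torch
import torch.nn as nn

class PM10Net(nn.Module):
    def __init__(self, n_channels=6, window=48, horizon=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(64 * (window // 4), horizon)

    def forward(self, x):               # x: (batch, channels, window)
        z = self.features(x)
        return self.head(z.flatten(1))  # (batch, 24) hourly PM10 forecast

model = PM10Net()
x = torch.randn(8, 6, 48)               # a batch of tensorised sensor histories
print(model(x).shape)                    # torch.Size([8, 24])
# Training would minimise nn.MSELoss() against measured PM10, matching
# the mean-square-error criterion the paper reports using.
```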
Procedia PDF Downloads 150
3772 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
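A minimal sketch of the certificate-based modelling pipeline, assuming XGBoost with SHAP explanations; the feature names follow the abstract, but the data frame, encodings, and target construction are synthetic placeholders rather than the English EPC records.

```python
# Sketch: XGBoost regression of an efficiency score (1-100, classes G-A)
# from certificate-style features, explained with SHAP values.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "year_of_construction": rng.integers(1880, 2020, n),
    "living_space_m2":      rng.uniform(30, 300, n),
    "insulation_level":     rng.integers(0, 4, n),
    "wall_type":            rng.integers(0, 5, n),   # label-encoded category
    "building_type":        rng.integers(0, 4, n),
})
# Hypothetical target: newer, better-insulated buildings score higher.
y = (0.05 * (df["year_of_construction"] - 1880)
     + 8 * df["insulation_level"] + rng.normal(0, 5, n))

model = xgb.XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(df, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df.iloc[:100])
# Mean |SHAP| per feature quantifies its effect on predictions, the kind
# of ranking the paper used to single out features such as wall type.
print(pd.Series(np.abs(shap_values).mean(axis=0),
                index=df.columns).sort_values())
```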
Procedia PDF Downloads 59
3771 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring
Authors: Hyun-Woo Cho
Abstract:
Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is quite essential in making decisions on when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. This approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.
Keywords: calibration model, monitoring, quality improvement, feature selection
Procedia PDF Downloads 357
3770 Wear Particle Analysis from Used Gear Lubricants for Maintenance Diagnostics
Authors: Surapol Raadnui
Abstract:
This work describes an experimental investigation of gear wear in which wear and pitting were intentionally allowed to occur, namely moisture corrosion pitting, acid-induced corrosion pitting, hard contaminant-related pitting, and mechanically induced wear. A back-to-back spur gear test rig and a grease-lubricated worm gear rig were used. Samples of the wear debris generated in the tests were collected and assessed using an optical microscope in order to correlate and compare the debris morphology with the pitting and wear degradation of the worn gears. In addition, weight loss from all test gear pairs was assessed using a statistical design of experiments. It can be deduced that the wear debris characteristics from both cases exhibited a direct relationship with the different pitting and wear modes. Thus, it should be possible to detect and diagnose gear pitting and wear through the examination of worn surfaces, generated wear debris, and quantitative measurements such as weight loss.
Keywords: predictive maintenance, worm gear, spur gear, wear debris analysis, problem diagnostic
Procedia PDF Downloads 156
3769 A Novel PSO Based Decision Tree Classification
Authors: Ali Farzan
Abstract:
Classification of data objects or patterns is a major part of most decision making systems. One of the popular and commonly used classification methods is the decision tree (DT). It is a hierarchical decision making system in which a binary tree is constructed and, starting from the root, some of the classes are rejected at each node until a leaf node is reached. Each leaf node is a representative of one specific class. Finding the splitting criterion at each node for constructing or training the tree is a major problem. Particle Swarm Optimization (PSO) has been adopted as a metaheuristic search method for finding the best splitting criterion. Results of evaluating the proposed method over benchmark datasets indicate the higher accuracy of the new PSO-based decision tree.
Keywords: decision tree, particle swarm optimization, splitting criteria, metaheuristic
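A minimal sketch of the core idea, assuming a numpy implementation: a one-dimensional PSO swarm searches a node's split threshold to minimise weighted Gini impurity. The swarm size, inertia, and acceleration coefficients are conventional defaults, not the paper's settings.

```python
# PSO, rather than exhaustive search, finds a node's splitting threshold.
import numpy as np

def gini(labels):
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / len(labels)
    return 1.0 - np.sum(p ** 2)

def split_cost(threshold, x, y):
    """Weighted Gini impurity of the two children produced by the split."""
    left, right = y[x <= threshold], y[x > threshold]
    n = len(y)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

def pso_threshold(x, y, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(x.min(), x.max(), n_particles)  # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_cost = np.array([split_cost(p, x, y) for p in pos])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, x.min(), x.max())
        cost = np.array([split_cost(p, x, y) for p in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        gbest = pbest[pbest_cost.argmin()]
    return gbest

# Two-class toy data: PSO should place the threshold between the modes.
x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(3, 1, 100)])
y = np.array([0] * 100 + [1] * 100)
t = pso_threshold(x, y)
print("PSO split threshold:", round(t, 3),
      " cost:", round(split_cost(t, x, y), 3))
```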
Procedia PDF Downloads 407
3768 Selection of Variogram Model for Environmental Variables
Authors: Sheikh Samsuzzhan Alam
Abstract:
The present study investigates the selection of a variogram model when analyzing spatial variations of environmental variables with a trend. Sometimes the auto-fitted theoretical variogram does not really capture the true nature of the empirical semivariogram, so proper exploration and analysis are needed to select the best variogram model. For this study, open source data collected from the California Soil Resource Lab are used to illustrate the problems that arise when fitting a theoretical variogram. The five most commonly used variogram models - linear, Gaussian, exponential, Matern, and spherical - were fitted to the experimental semivariogram. Ordinary kriging was used to evaluate the accuracy of the selected variograms through cross-validation. This study is beneficial for selecting an appropriate theoretical variogram model for environmental variables.
Keywords: anisotropy, cross-validation, environmental variables, kriging, variogram models
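A minimal sketch of comparing candidate variogram models against an empirical semivariogram, assuming scipy's curve_fit and showing three of the five models; the lag distances and semivariance values are invented placeholders for the soil data.

```python
# Fit spherical, exponential, and Gaussian variogram models to empirical
# (lag, semivariance) pairs and rank them by residual sum of squares.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng_):
    return np.where(h <= rng_,
                    nugget + (sill - nugget)
                    * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3),
                    sill)

def exponential(h, nugget, sill, rng_):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))

def gaussian(h, nugget, sill, rng_):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h ** 2 / rng_ ** 2))

lags = np.linspace(50, 1000, 15)                   # hypothetical lag distances
semiv = 0.1 + 0.9 * (1 - np.exp(-3 * lags / 600))  # hypothetical empirical values

best = None
for name, model in [("spherical", spherical), ("exponential", exponential),
                    ("gaussian", gaussian)]:
    params, _ = curve_fit(model, lags, semiv, p0=[0.1, 1.0, 500], maxfev=10000)
    rss = np.sum((model(lags, *params) - semiv) ** 2)  # goodness of fit
    print(f"{name:11s} nugget/sill/range = {np.round(params, 3)}  RSS = {rss:.4f}")
    if best is None or rss < best[1]:
        best = (name, rss)
print("best-fitting model:", best[0], "(confirm by kriging cross-validation)")
```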
Procedia PDF Downloads 335
3767 An Investigation of Water Atomizer in Ejected Gas of a Vehicle Engine
Authors: Chun-Wei Liu, Feng-Tsai Weng
Abstract:
People face pollution threats in the modern age, even though standards for vehicle exhaust gas have been established. The goal of this study is to investigate the effect of a water atomizer in a vehicle emission system. Diluted 20% ammonia water was used in the spraying system. Micro particles produced in the exhaust gas from the vehicle engine were accumulated through the atomized spray in a self-developed collector. In the experiments, a self-designed atomization model plate and a gas tank controlled by a microprocessor using Pulse Width Modulation (PWM) logic were prepared for the exhaust test. The gas from the gasoline engine of the vehicle was purified with the model panel collector. Software named ANSYS was utilized for analyzing the distribution of the ejected gas. Micro substances and the percentages of CO, HC, CO2, and NOx in the exhaust gas were investigated at different engine speeds and atomizer vibration frequencies. Exceptional results in the vehicle engine emissions measurement were obtained. The temperature of the exhaust gas can be decreased by 3 °C. Micro substances (PM10) can be decreased, and the percentage of CO can be decreased by more than 55% at 2500 RPM with the proposed system. The values of CO, HC, CO2, and NOx all decreased when atomizers were used with water.
Keywords: atomizer, CO, HC, NOx, PM2.5
Procedia PDF Downloads 458
3766 Feedforward Neural Network with Backpropagation for Epilepsy Seizure Detection
Authors: Natalia Espinosa, Arthur Amorim, Rudolf Huebner
Abstract:
Epilepsy is a chronic neurological disease, and around 50 million people in the world suffer from it. In many cases, however, the individual acquires resistance to the medication, which is known as drug-resistant epilepsy, and a detection system becomes necessary. This paper presents the development of an automatic system for seizure detection based on artificial neural networks (ANNs), a common machine learning technique. The Discrete Wavelet Transform (DWT) is used to decompose the electroencephalogram (EEG) signal into the main brain waves, and features are extracted from these frequency bands to train a feedforward neural network with backpropagation; finally, a pattern classification is made: seizure or non-seizure. The system obtained 95% accuracy on epileptic EEG and 100% on normal EEG.
Keywords: Artificial Neural Network (ANN), Discrete Wavelet Transform (DWT), epilepsy detection, seizure
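A minimal sketch of the described pipeline, assuming PyWavelets and scikit-learn: DWT sub-band statistics feed a backpropagation-trained feedforward network. The db4 wavelet, decomposition level, feature set, and synthetic epochs are illustrative assumptions.

```python
# DWT splits each EEG epoch into sub-bands; simple statistics of the
# coefficients form the feature vector for a feedforward classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(epoch, wavelet="db4", level=4):
    coeffs = pywt.wavedec(epoch, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:                                    # per-band statistics
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(4)
normal  = rng.normal(0, 1, size=(100, 512))             # stand-in EEG epochs
seizure = rng.normal(0, 3, size=(100, 512))             # higher-energy stand-ins
X = np.array([dwt_features(e) for e in np.vstack([normal, seizure])])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(20,), solver="adam", max_iter=2000,
                    random_state=0)                     # backprop-trained FFNN
clf.fit(X[::2], y[::2])                                 # even rows: train
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))  # odd rows: test
```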
Procedia PDF Downloads 227
3765 Evaluation Synthesis of Private Sector Engagement in International Development
Authors: Valerie Habbel, Magdalena Orth, Johanna Richter, Steffen Schimko
Abstract:
Cooperation between development actors and the private sector is becoming increasingly important, as it is expected to mobilize additional resources to achieve the Sustainable Development Goals (SDGs), among other things. However, whether the goals of cooperation are achieved has so far only been explored in evaluations and studies of individual projects and instruments. The evaluation synthesis attempts to close this gap by systematically analyzing existing evidence (evaluations and academic studies) from national and international development cooperation on private sector engagement. Overall, the evaluations and studies considered report mainly positive effects on investors and donors, intermediaries, partner countries, and target groups. However, various analyses, including on the quality of the evaluations, point to a positive bias in the results. The evaluation synthesis makes recommendations on the definition of indicators, the measurement and evaluation of impacts and additionality, knowledge management, and the consideration of transaction costs in cooperation with private actors.
Keywords: evaluation synthesis, private sector engagement, international development, sustainable development
Procedia PDF Downloads 213
3764 Transient Heat Conduction in Nonuniform Hollow Cylinders with Time Dependent Boundary Condition at One Surface
Authors: Sen Yung Lee, Chih Cheng Huang, Te Wen Tu
Abstract:
A solution methodology that does not use integral transformation is proposed to develop analytical solutions for transient heat conduction in nonuniform hollow cylinders with a time-dependent boundary condition at the outer surface. It is shown that if the thermal conductivity and the specific heat of the medium are arbitrary polynomial functions, closed-form solutions of the system can be developed. The influence of the physical properties on the temperature distribution of the system is studied. A numerical example is given to illustrate the efficiency and accuracy of the solution methodology.
Keywords: analytical solution, nonuniform hollow cylinder, time-dependent boundary condition, transient heat conduction
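For orientation, the governing equation implied by the abstract, for 1-D radial transient conduction in a nonuniform hollow cylinder (a < r < b), can be sketched as:

```latex
% k(r) and rho c_p(r) are the position-dependent conductivity and
% volumetric heat capacity, taken in polynomial form in the paper:
\[
\rho c_p(r)\,\frac{\partial T}{\partial t}
  = \frac{1}{r}\frac{\partial}{\partial r}
    \left( r\,k(r)\,\frac{\partial T}{\partial r} \right),
\qquad a < r < b,\ t > 0,
\]
% with a time-dependent condition T(b, t) = f(t) at the outer surface
% and a prescribed condition at the inner surface r = a.
```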
Procedia PDF Downloads 507
3763 A Novel Spectral Index for Automatic Shadow Detection in Urban Mapping Based on WorldView-2 Satellite Imagery
Authors: Kaveh Shahi, Helmi Z. M. Shafri, Ebrahim Taherzadeh
Abstract:
In remote sensing, shadow causes problems in many applications, such as change detection and classification. Shadows are cast by elevated objects and can directly affect the accuracy of extracted information. For these reasons, it is very important to detect shadows, particularly in urban high spatial resolution imagery, where they create a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery, known as the Shadow Detection Index (SDI). The new spectral index was tested on different areas of WorldView-2 images, and the results demonstrated that it has considerable potential to extract shadows effectively and automatically.
Keywords: spectral index, shadow detection, remote sensing images, WorldView-2
Procedia PDF Downloads 540
3762 Evaluating of Turkish Earthquake Code (2007) for FRP Wrapped Circular Concrete Cylinders
Authors: Guler S., Guzel E., Gulen M.
Abstract:
Fiber Reinforced Polymer (FRP) materials are commonly used in the construction sector to enhance the strength and ductility capacities of structural elements. The equations for the confined compressive strength of FRP wrapped concrete cylinders are described in the seventh chapter of the Turkish Earthquake Code (TEC-07), which entered into force in 2007. This study aims to evaluate the applicability of TEC-07 to the confined compressive strengths of circular FRP wrapped concrete cylinders. To this end, a large body of data on circular FRP wrapped concrete cylinders was collected from the literature. It is clearly seen that the predictions of TEC-07 for circular FRP wrapped concrete columns are not equally accurate across different ranges of concrete strength.
Keywords: Fiber Reinforced Polymer (FRP), concrete cylinders, Turkish Earthquake Code, earthquake
Procedia PDF Downloads 519
3761 Simplified Linearized Layering Method for Stress Intensity Factor Determination
Authors: Jeries J. Abou-Hanna, Bradley Storm
Abstract:
This paper seeks to reduce the complexity of determining stress intensity factors while maintaining high levels of accuracy through the use of a linearized layering approach. Many techniques for stress intensity factor determination exist, but they can be limited by conservative results, by requiring too many user parameters, or by being too computationally intensive. Multiple notch geometries with various crack lengths were investigated in this study to better understand the effectiveness of the proposed method. By linearizing the average stresses in radial layers around the crack tip, stress intensity factors were found to have errors ranging from -10.03% to 8.94% when compared to analytically exact solutions. This approach proved to be a robust and efficient method of accurately determining stress intensity factors.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress linearization
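For orientation, the extraction identity underlying the layering approach is the standard mode-I near-tip field; the paper's specific linearisation of layer-averaged stresses builds on it:

```latex
% Asymptotic crack-tip stress normal to the crack plane under mode I:
\[
\sigma_{yy}(r) \approx \frac{K_I}{\sqrt{2\pi r}},
\]
% so averaging sigma_yy over thin radial layers around the tip and
% linearising those averages lets K_I be estimated by extrapolation:
\[
K_I = \lim_{r \to 0} \sigma_{yy}(r)\,\sqrt{2\pi r}.
\]
```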
Procedia PDF Downloads 143
3760 Winter Wheat Yield Forecasting Using Sentinel-2 Imagery at the Early Stages
Authors: Chunhua Liao, Jinfei Wang, Bo Shan, Yang Song, Yongjun He, Taifeng Dong
Abstract:
Winter wheat is one of the main crops in Canada, and forecasting within-field variability of winter wheat yield at the early stages is essential for precision farming. However, crop yield modelling based on high spatial resolution satellite data is generally affected by the lack of continuous satellite observations, which reduces the generalization ability of the models and increases the difficulty of crop yield forecasting at the early stages. In this study, the correlations between Sentinel-2 data (vegetation indices and reflectance) and yield data collected by combine harvester were investigated, and a generalized multivariate linear regression (MLR) model was built and tested with data acquired in different years. It was found that the four-band reflectance (blue, green, red, near-infrared) performed better in wheat yield prediction than the corresponding vegetation indices (NDVI, EVI, WDRVI, and OSAVI). The optimum phenological stage for wheat yield prediction with the highest accuracy was from the end of flowering to the beginning of the grain-filling stage. The best MLR model was therefore built to predict wheat yield before harvest using Sentinel-2 data acquired at the end of the flowering stage. Further, to improve yield prediction at the early stages, three simple unsupervised domain adaptation (DA) methods were adopted to transform the reflectance data from the early stages to the optimum phenological stage. Winter wheat yield prediction using multiple vegetation indices showed higher accuracy than using a single vegetation index. The optimum stage for winter wheat yield forecasting varied between fields when using vegetation indices, while it was consistent when using multispectral reflectance, with the optimum stage at the end of flowering. The average testing RMSE of the MLR model at the end of the flowering stage was 604.48 kg/ha. Near the booting stage, the average testing RMSE of yield prediction using the best MLR was reduced to 799.18 kg/ha when applying the mean matching domain adaptation approach to transform the data to the target domain (the end of flowering), compared to 1140.64 kg/ha when using the original data with models developed directly at the booting stage ('MLR at the early stage'). This study demonstrated that the simple mean matching (MM) approach performed better than the other DA methods and that 'DA then MLR at the optimum stage' performed better than 'MLR directly at the early stages' for winter wheat yield forecasting. The results indicate that simple domain adaptation methods have great potential for near real-time crop yield forecasting at the early stages using remote sensing data.
Keywords: wheat yield prediction, domain adaptation, Sentinel-2, within-field scale
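A minimal sketch of the mean matching (MM) step, assuming it is the usual first-moment alignment: early-stage reflectance is shifted so its per-band means match the flowering-stage (target-domain) means before applying the optimum-stage MLR. The band values here are synthetic placeholders for the four Sentinel-2 reflectance bands.

```python
# Mean matching domain adaptation: shift booting-stage reflectance so its
# column means equal those of the end-of-flowering (target) domain.
import numpy as np

def mean_matching(X_source, X_target):
    """Shift source features so their column means equal the target means."""
    return X_source - X_source.mean(axis=0) + X_target.mean(axis=0)

rng = np.random.default_rng(5)
X_booting   = rng.normal(loc=[0.05, 0.08, 0.06, 0.35], scale=0.02, size=(500, 4))
X_flowering = rng.normal(loc=[0.04, 0.07, 0.05, 0.45], scale=0.02, size=(500, 4))

X_adapted = mean_matching(X_booting, X_flowering)
print("adapted means:", X_adapted.mean(axis=0).round(3))
print("target means: ", X_flowering.mean(axis=0).round(3))
# yield_pred = mlr_flowering.predict(X_adapted)  # apply the optimum-stage MLR
```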
Procedia PDF Downloads 66
3759 Presenting a Model Based on Artificial Neural Networks to Predict the Execution Time of Design Projects
Authors: Hamed Zolfaghari, Mojtaba Kord
Abstract:
After the feasibility study, the design phase starts, and all subsequent phases are highly dependent on it. Forecasting the duration of the design phase could therefore save a great deal of time. This study provides a fast and accurate machine learning (ML) and optimization framework that allows a quick duration estimation of the project design phase, hence improving the operational efficiency and competitiveness of a design and construction company. Three data sets spanning three years, composed of the daily time spent on different design projects, were used to train and validate the ML models across multiple projects. Our study concluded that the Artificial Neural Network (ANN) achieved an accuracy of 0.94.
Keywords: time estimation, machine learning, artificial neural network, project design phase
Procedia PDF Downloads 98
3758 Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms
Authors: Tian Xia, Yuan Yan Tang
Abstract:
In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect bright and dark targets simultaneously and is not sensitive to the sea-sky line in infrared images, so it is fit for small infrared target detection.
Keywords: small target detection, local contrast, human vision system, Laplacian of Gaussian
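A minimal sketch of the two-stage pipeline, assuming scipy and substituting a plain Laplacian of Gaussian for the paper's Weighted LoG (whose weighting is not given in the abstract), followed by an adaptive mean-plus-k-sigma threshold; the constant k and the synthetic frame are assumptions.

```python
# Stage 1: LoG enhancement. Stage 2: adaptive threshold from image stats.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(6)
frame = rng.normal(0.2, 0.05, size=(128, 128))   # synthetic IR background
frame[60:63, 80:83] += 0.6                       # small bright target

# The LoG responds strongly to small blobs; taking the absolute value
# keeps both bright and dark targets, as the paper's method does.
enhanced = np.abs(gaussian_laplace(frame, sigma=1.5))

k = 5.0                                          # sensitivity constant (assumed)
threshold = enhanced.mean() + k * enhanced.std()
mask = enhanced > threshold

ys, xs = np.nonzero(mask)
print("detected pixels near:", (ys.mean(), xs.mean()) if len(ys) else "none")
```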
Procedia PDF Downloads 469