Search results for: prediction model accuracy
14878 3D Numerical Studies on External Aerodynamics of a Flying Car
Authors: Sasitharan Ambicapathy, J. Vignesh, P. Sivaraj, Godfrey Derek Sams, K. Sabarinath, V. R. Sanal Kumar
Abstract:
The external flow simulation of a flying car at the take-off phase is a daunting task owing to the fact that the prediction of the transient unsteady flow features during its deployment phase is very complex. In this paper, 3D numerical simulations of the external flow of a proposed Ferrari F430 flying car with different NACA 9618 rectangular wings have been carried out. Additionally, the aerodynamic characteristics have been generated for optimizing its geometry to achieve the minimum take-off velocity with better overall performance on both road and air. The three-dimensional standard k-omega turbulence model has been used for capturing the intrinsic flow physics during the take-off phase. In the numerical study, a fully implicit finite volume scheme of the compressible, Reynolds-averaged Navier-Stokes equations is employed. Through the detailed parametric analytical studies, we have conjectured that the Ferrari F430 flying car fitted with high wings having three different deployment histories during the take-off phase is the best choice for better performance in commercial applications.
Keywords: aerodynamics of flying car, air taxi, negative lift, roadable airplane
Procedia PDF Downloads 425
14877 Media Richness Perspective on Web 2.0 Usage for Knowledge Creation: The Case of the Cocoa Industry in Ghana
Authors: Albert Gyamfi
Abstract:
Cocoa plays a critical role in the socio-economic development of Ghana. Meanwhile, smallholder farmers, most of whom are illiterate, dominate the industry. According to the cocoa-based agricultural knowledge and information system (AKIS) model, knowledge is created and transferred within the industry between three key actors: cocoa researchers, extension experts, and cocoa farmers. Drawing on the SECI model, the media richness theory (MRT), and the AKIS model, a conceptual model of a web 2.0-based AKIS (AKIS 2.0) is developed and used to assess the possible effects of social media usage on knowledge creation in the Ghanaian cocoa industry. A mixed method approach with a survey questionnaire was employed, and a second-order multi-group structural equation model (SEM) was used to analyze the data. The study concludes that the use of web 2.0 applications for knowledge creation would lead to sustainable interactions among the key knowledge actors for effective knowledge creation in the cocoa industry in Ghana.
Keywords: agriculture, cocoa, knowledge, media, web 2.0
Procedia PDF Downloads 339
14876 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that using high-level handpicked features improves the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. We can thus accurately predict the level of engagement of students with learning disabilities in real time, without reliance on inter-rater agreement, human observation, or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
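As a rough illustration of the evaluation protocol above, the following sketch runs leave-one-out cross-validation of a random forest classifier; the feature files and their contents are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical inputs: one row per labeled window, nine extracted features
# (eye gaze, EEG, body pose, interaction) with CPT-derived engagement labels.
X = np.load("session_features.npy")
y = np.load("engagement_labels.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out sample per fold
print(f"LOO accuracy: {scores.mean():.3f}")
```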
Procedia PDF Downloads 97
14875 Application of a Hybrid Modified Blade Element Momentum Theory/Computational Fluid Dynamics Approach for Wind Turbine Aerodynamic Performance Prediction
Authors: Samah Laalej, Abdelfattah Bouatem
Abstract:
In the field of wind turbine blades, it is complicated to evaluate the aerodynamic performance through experimental measurements, as this requires a lot of time and resources. Therefore, in this paper, a hybrid BEM-CFD numerical technique is developed to predict the power and aerodynamic forces acting on the blades. A computational fluid dynamics (CFD) simulation was conducted in Ansys using the k-ω model to calculate the drag and lift forces. Then an enhanced BEM code was created to predict the power output generated by the wind turbine using the aerodynamic properties extracted from the CFD approach. The numerical approach was compared with and validated against experimental data. The power curves calculated from this hybrid method were in good agreement with experimental measurements over all velocity ranges.
Keywords: blade element momentum, aerodynamic forces, wind turbine blades, computational fluid dynamics approach
Procedia PDF Downloads 72
14874 Effect of Fiber Orientation on Dynamic Properties of Carbon-Epoxy Composite Laminate under Flexural Vibration
Authors: Bahlouli Ahmed, Bentalab Nourdin, Nigrou Mourad
Abstract:
This study was aimed at investigating the effect of fiber orientation on the dynamic properties of fiber-reinforced (FRP) laminate composites. An experimental investigation is implemented using an impulse technique. The various specimens are excited in free vibration by the use of a bi-channel analyzer. The experimental results are compared with a finite element model built in ANSYS. The results (natural frequency measurements, vibration modes, dynamic modulus, and damping ratio) show the effects of significant parameters such as lay-up and stacking sequence, boundary conditions, and placement of the accelerometer. These results are critically examined and discussed. The accuracy of these results is demonstrated by comparison with those available in the literature.
Keywords: natural frequency, damping ratio, laminate composite, dynamic modulus
Procedia PDF Downloads 363
14873 Cigarette Smoke Detection Based on YOLOv3
Abstract:
In order to satisfy the real-time and accuracy requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technique based on the combination of deep learning and color features is proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke areas in the image are extracted. Secondly, a network model for cigarette smoke detection was designed according to the YOLOv3 algorithm, balancing detection efficiency against the problem of network overfitting, in order to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection reaches 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes.
Keywords: deep learning, computer vision, cigarette smoke detection, YOLOv3, color feature extraction
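A minimal sketch of the first, color-based stage, assuming OpenCV and illustrative HSV thresholds (smoke tends to be low-saturation and bright); the paper's exact thresholds and network are not reproduced here.

```python
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Assumed thresholds: low saturation, mid-to-high value picks out grey-white smoke.
mask = cv2.inRange(hsv, np.array([0, 0, 120]), np.array([180, 60, 255]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
suspects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
# Each suspect region would then be scored by the YOLOv3-based detector.
print(len(suspects), "candidate smoke regions")
```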
Procedia PDF Downloads 93
14872 Self-Organizing Maps for Credit Card Fraud Detection and Visualization
Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang
Abstract:
This study focuses on the application of self-organizing map (SOM) technology to analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. The SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with a SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies
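A hedged sketch of the core idea, using the third-party MiniSom library on a hypothetical scaled transaction matrix; the authors' actual implementation and feature set are not shown here.

```python
import numpy as np
from minisom import MiniSom

X = np.load("transactions_scaled.npy")  # assumed: rows = transactions, scaled features
som = MiniSom(15, 15, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=10000)

# Transactions that map poorly onto any learned prototype are candidate anomalies.
errors = np.array([np.linalg.norm(x - som.quantization(x[None, :])[0]) for x in X])
suspects = np.argsort(errors)[-20:]  # flag the 20 worst fits for manual review
print(suspects)
```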
Procedia PDF Downloads 65
14871 Comparative Study of Vertical and Horizontal Triplex Tube Latent Heat Storage Units
Authors: Hamid El Qarnia
Abstract:
This study investigates the impact of the eccentricity of the central tube on the thermal and fluid characteristics of a triplex tube used in latent heat energy storage technologies. Two triplex tube orientations are considered: vertical and horizontal. The energy storage material, which is a phase change material (PCM), is placed in the space between the inside and outside tubes. During the thermal energy storage period, a heat transfer fluid (HTF) flows inside the two tubes, transmitting heat to the PCM through two heat exchange surfaces instead of one, as is the case for double tube heat storage systems. A CFD model is developed and validated against experimental data available in the literature. A mesh independency study is carried out to select the appropriate mesh. In addition, different time steps are examined to determine a time step ensuring accuracy of the numerical results while reducing computational time. The numerical model is then used to conduct numerical investigations of the thermal behavior and thermal performance of the storage unit. The effects of the eccentricity of the central tube and the HTF mass flow rate on thermal characteristics and performance indicators are examined for two flow arrangements: co-current and counter-current flows. The results are given in terms of isotherm plots, streamlines, melting time and thermal energy storage efficiency.
Keywords: energy storage, heat transfer, melting, solidification
Procedia PDF Downloads 61
14870 Predicting Relative Performance of Sector Exchange Traded Funds Using Machine Learning
Abstract:
Machine learning has been used in many areas today. It thrives at reviewing large volumes of data and identifying patterns and trends that might not be apparent to a human. Given the huge potential benefit and the amount of data available in the financial market, it is not surprising to see machine learning applied to various financial products. While future prices of financial securities are extremely difficult to forecast, we study them from a different angle. Instead of trying to forecast future prices, we apply machine learning algorithms to predict the direction of future price movement, in particular, whether a sector Exchange Traded Fund (ETF) would outperform or underperform the market in the next week or in the next month. We apply several machine learning algorithms for this prediction: Linear Discriminant Analysis (LDA), k-Nearest Neighbors (KNN), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Neural Networks (NN). We show that these machine learning algorithms, most notably GNB and NN, have some predictive power in forecasting out-performance and under-performance out of sample. We also explore whether it is possible to utilize the predictions from these algorithms to outperform the buy-and-hold strategy of the S&P 500 index. The trading strategy exploiting out-performance predictions does not perform very well, but the strategy exploiting under-performance predictions can earn higher returns than simply holding the S&P 500 index out of sample.
Keywords: machine learning, ETF prediction, dynamic trading, asset allocation
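The comparison can be sketched with scikit-learn as below; the synthetic features stand in for the study's (unspecified) predictors, and the time-ordered split mimics out-of-sample testing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # placeholder return features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # 1 = ETF beats the market
split = 400                                            # keep time order: train on past
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {"LDA": LinearDiscriminantAnalysis(), "KNN": KNeighborsClassifier(),
          "DT": DecisionTreeClassifier(), "GNB": GaussianNB(),
          "NN": MLPClassifier(max_iter=1000)}
for name, m in models.items():
    print(name, m.fit(X_tr, y_tr).score(X_te, y_te))   # out-of-sample hit rate
```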
Procedia PDF Downloads 106
14869 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Diabetes in India is growing at an alarming rate, and the complications caused by it need to be controlled. Coronary heart disease (CHD) is the complication whose prediction is discussed in this study. India has the second-largest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD has been taken as the event of interest. A sample of 750 patients was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post-prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of CHD. Predictive risk scores for CHD events are designed by Cox proportional hazards regression. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. Youden's index is used to choose the optimal cut-off point from the scores. The five-year probability of CHD is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores for CHD developed here can be calculated by doctors and patients for self-control of diabetes. Furthermore, the five-year probabilities can be implemented as well to forecast and maintain the condition of patients.
Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus
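The central modelling step might look like the sketch below, assuming the lifelines library and an illustrative patient dataframe; the file and column names are invented for the example.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("diabetes_cohort.csv")  # hypothetical file: one row per patient

# Fit Cox proportional hazards on a subset of the collected risk factors.
cols = ["followup_years", "chd_event", "age", "hba1c", "sbp", "ldl", "smoking"]
cph = CoxPHFitter(penalizer=0.1)  # ridge-style penalty, cf. the regularization step
cph.fit(df[cols], duration_col="followup_years", event_col="chd_event")
cph.print_summary()

# Five-year CHD probability for one patient: 1 - S(5 | covariates).
surv = cph.predict_survival_function(df[cols].iloc[[0]], times=[5])
print("5-year CHD risk:", 1 - surv.iloc[0, 0])
```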
Procedia PDF Downloads 223
14868 Efficient Recommendation System for Frequent and High Utility Itemsets over Incremental Datasets
Authors: J. K. Kavitha, D. Manjula, U. Kanimozhi
Abstract:
Mining frequent and high utility item sets has gained much significance in recent years. When data arrives sporadically, incremental and interactive rule mining and utility mining approaches can be adopted to handle users' dynamic environmental needs and avoid redundancies by reusing previous data structures and mining results. The dependence on recommendation systems has risen exponentially since the advent of search engines. This paper proposes a model for building a recommendation system that suggests frequent and high utility item sets over dynamic datasets for a cluster-based location prediction strategy to predict users' trajectories, using the Efficient Incremental Rule Mining (EIRM) algorithm and the Fast Update Utility Pattern Tree (FUUP) algorithm. Through comprehensive experimental evaluations, this scheme has been shown to deliver excellent performance.
Keywords: data sets, recommendation system, utility item sets, frequent item sets mining
Procedia PDF Downloads 297
14867 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which is causing energy generation to become less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response: i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted away from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy shifting game. The energy shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, the degree of accuracy to which load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides the framework for further research on transfer learning for RL and, more broadly, transactive control.
Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
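A toy sketch of such an adapted linear dynamical system: latent behavioral states evolve under exogenous inputs (points signal, baseline trend) and map to observed demand. All matrices and inputs here are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

A = np.array([[0.90, 0.10],   # latent-state transition (e.g., motivation, habit)
              [0.00, 0.95]])
B = np.array([[0.05, 0.02],   # input effects: [points signal, baseline trend]
              [0.01, 0.03]])
C = np.array([[1.0, 0.5]])    # observation: latent state -> energy demand

x = np.zeros(2)
demand = []
for t in range(24):                      # one simulated day, hourly steps
    u = np.array([np.sin(t / 4), 1.0])   # hypothetical exogenous inputs
    x = A @ x + B @ u                    # state update
    demand.append((C @ x).item())
print(demand[-1])
```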
Procedia PDF Downloads 113
14866 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region-Ethiopia
Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew
Abstract:
Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of deaths in Sub-Saharan African countries and take the majority of under-five deaths yearly, which indicates the need for consistent and on-time information to support evidence-based decisions and save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in remote areas such as the Ararso district, which was selected based on the hypothesis that there is a difference in consistency between reported and recounted immunization data. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio, availability and timeliness of reports. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) and the availability and timeliness of reports were assessed. The accuracy ratios formulated were: the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+ and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed when computing the accuracy ratio of registrations against health post reports held at health centers for almost all antigens verified. The quality index (QI) of all facilities also showed poor results. Most of the verified immunization data accuracy ratios were found to be relatively better than the quality index and the timeliness of reports. Attention should therefore be given to improving staff capacity, timeliness of reports and the quality of monitoring system components, namely recording, reporting, archiving, data analysis and using information for decisions at all levels, especially in remote areas.
Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia
Procedia PDF Downloads 77
14865 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in liquid detergent manufacturing using machine learning algorithms. Performance testing of various machine learning models was carried out: Support Vector Machines (SVM), Decision Trees, Random Forests, and Convolutional Neural Networks (CNN), on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. Built on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, the study further includes information from real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. The CNNs, for instance, reach 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively; fine-tuning the model with real-time imaging data reduced false positives by about 30%. The optimized SVM model gave 94% accuracy in detecting formulation defects such as viscosity variation and color deviation. These performance metrics represent a giant leap in defect detection accuracy compared to the roughly 80% level achieved up to now by rule-based systems. Moreover, the optimized models hasten defect characterization, with real-time data processing bringing detection time below 15 seconds from an average of 3 minutes using manual inspection. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring also drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. Optimizing machine learning algorithms for defect characterization therefore gives liquid detergent companies scalability, efficiency, and higher levels of product quality. In general, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
Procedia PDF Downloads 24
14864 Optimizing Quantum Machine Learning with Amplitude and Phase Encoding Techniques
Authors: Om Viroje
Abstract:
Quantum machine learning represents a frontier in computational technology, promising significant advancements in data processing capabilities. This study explores the significance of data encoding techniques, specifically amplitude and phase encoding, in this emerging field. By employing a comparative analysis methodology, the research evaluates how these encoding techniques affect the accuracy, efficiency, and noise resilience of quantum algorithms. Our findings reveal that amplitude encoding enhances algorithmic accuracy and noise tolerance, whereas phase encoding significantly boosts computational efficiency. These insights are crucial for developing robust quantum frameworks that can be effectively applied in real-world scenarios. In conclusion, optimizing encoding strategies is essential for advancing quantum machine learning, potentially transforming various industries through improved data processing and analysis.
Keywords: quantum machine learning, data encoding, amplitude encoding, phase encoding, noise resilience
Procedia PDF Downloads 30
14863 Design Channel Non-Persistent CSMA MAC Protocol Model for Complex Wireless Systems Based on SoC
Authors: Ibrahim A. Aref, Tarek El-Mihoub, Khadiga Ben Musa
Abstract:
This paper presents a Carrier Sense Multiple Access (CSMA) communication model based on SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; therefore, the use of such a communication model is an important technique in the construction of high-performance communication. SystemC has been chosen because it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication modelling early in the design process. The wireless communication is created through the modeling of the CSMA protocol, which can be used to achieve communication between all the agents and to coordinate access to the shared medium (channel).
Keywords: SystemC, modelling, simulation, CSMA
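Independent of the SystemC implementation, the non-persistent CSMA rule itself can be sketched in a few lines: a node sensing a busy channel does not wait for it to clear but re-senses after a random backoff. A minimal illustration, with a hypothetical channel-occupancy function:

```python
import random

def transmit_slot(channel_busy, backoff_max=10):
    """Return the slot at which a non-persistent CSMA node transmits."""
    t = 0
    while channel_busy(t):
        t += random.randint(1, backoff_max)  # back off, then sense again
    return t

# Hypothetical channel: occupied for the first 5 slots, then free.
print("transmitted at slot", transmit_slot(lambda t: t < 5))
```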
Procedia PDF Downloads 431
14862 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca
Abstract:
This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation as a combined load allocation model in the manner of a classic transshipment model; the second is the specific routing of that operation through the Clarke and Wright savings heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimal assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
Keywords: transshipment model, mixed integer programming, savings algorithm, dry freight transportation
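A compact sketch of the Clarke and Wright savings computation used in the routing stage, with a hypothetical distance matrix; load-feasibility checks and the transshipment assignments from the first stage are omitted.

```python
import itertools

def savings_order(dist, depot=0):
    """Rank customer pairs by s(i, j) = d(0, i) + d(0, j) - d(i, j)."""
    pairs = itertools.combinations(range(1, len(dist)), 2)
    s = {(i, j): dist[depot][i] + dist[depot][j] - dist[i][j] for i, j in pairs}
    return sorted(s, key=s.get, reverse=True)

dist = [[0, 4, 6, 5],   # toy symmetric distances, node 0 = transshipment center
        [4, 0, 3, 7],
        [6, 3, 0, 2],
        [5, 7, 2, 0]]
print(savings_order(dist))  # merge routes in this order while capacity allows
```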
Procedia PDF Downloads 236
14861 A Model for Predicting Organic Compounds Concentration Change in Water Associated with Horizontal Hydraulic Fracturing
Authors: Ma Lanting, S. Eguilior, A. Hurtado, Juan F. Llamas Borrajo
Abstract:
Horizontal hydraulic fracturing is a technology to increase natural gas flow and improve productivity in low-permeability formations. During this drilling operation, tons of flowback and produced water, which contain many organic compounds, return to the surface, with a potential risk of affecting the surrounding environment and human health. A mathematical model is urgently needed to represent the transport behavior of organic compounds in water and their concentration change with time throughout the hydraulic fracturing operation life cycle. A comprehensive model combining an Organic Matter Transport Dynamic Model with a Two-compartment First-order Rate Constant (TFRC) model has been established to quantify organic compound concentrations. This algorithmic model is composed of two transport parts separated by time scale. For the fast part, the curve fitting technique was applied to flowback water data from Marcellus shale gas site fracturing, and the coefficients of determination (R²) for all analyzed compounds demonstrate the high experimental feasibility of this numerical model. Furthermore, along a decade of drilling, the concentration ratio curves have been estimated by the slow part of this model. The results show that the larger the Koc value of a chemical, the later its maximum concentration in water is reached, and that the maximum concentrations would reach up to 90% of the initial concentration from the shale formation within a sufficiently long period.
Keywords: model, shale gas, concentration, organic compounds
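A common closed form for a two-compartment first-order rate constant model, given here as an assumed reading of the TFRC component above: a fast-releasing fraction f with rate k₁ and a slow fraction with rate k₂.

```latex
C(t) = C_0 \left[ f\, e^{-k_1 t} + (1 - f)\, e^{-k_2 t} \right],
\qquad 0 \le f \le 1, \quad k_1 > k_2 > 0
```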
Procedia PDF Downloads 229
14860 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Hence, document analysis requires the extraction of diagrams from such images so that the text and diagrams can be processed separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with high accuracy, as required in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures and suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep learning, educational data mining, Faster R-CNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
Procedia PDF Downloads 146
14859 Terrain Classification for Ground Robots Based on Acoustic Features
Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow
Abstract:
The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and Gaussian mixture models and feed-forward neural networks for classification. We analyze the system's performance by comparing our proposed techniques with features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
Keywords: acoustic features, autonomous robots, feature extraction, terrain classification
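One of the feature/classifier pairings above (MFCCs with a per-terrain Gaussian mixture model) might be sketched as follows; the file names, chunk handling, and hyperparameters are illustrative assumptions.

```python
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # frames x coefficients

models = {}
for terrain in ["grass", "gravel", "asphalt"]:  # hypothetical training recordings
    models[terrain] = GaussianMixture(n_components=8).fit(mfcc_features(f"{terrain}.wav"))

chunk = mfcc_features("unknown_chunk.wav")
scores = {t: m.score(chunk) for t, m in models.items()}  # mean log-likelihood per class
print("predicted terrain:", max(scores, key=scores.get))
```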
Procedia PDF Downloads 374
14858 An Elaboration Likelihood Model to Evaluate Consumer Behavior on Facebook Marketplace: Trust on Seller as a Moderator
Authors: Sharmistha Chowdhury, Shuva Chowdhury
Abstract:
Buying and selling new as well as second-hand goods like tools, furniture, household items, electronics, clothing, baby stuff, vehicles, and hobby items through the Facebook Marketplace has become a new paradigm for C2C sellers. This phenomenon encourages and empowers decentralised, home-oriented sellers. This study adopts the Elaboration Likelihood Model (ELM) to explain consumer behaviour on Facebook Marketplace (FM). ELM suggests that consumers process information through the central and peripheral routes, which eventually shape their attitudes towards posts. The central route focuses on information quality, and the peripheral route focuses on cues. Sellers' FM posts usually include product features, price, condition, pictures, and pick-up location. This study uses information relevance and accuracy as central route factors. A post's attractiveness represents cues and creates positive or negative associations with the product; a post with remarkable pictures increases its attractiveness, so post aesthetics is used as a peripheral route factor. People influenced via the central or peripheral route form an attitude that involves multiple processes: response and purchase intention. People respond to FM posts through save, share and chat actions. Purchase intention reflects a positive image of the product and higher purchase intention. This study proposes trust on the seller as a moderator, to test the strength of its influence on consumer attitudes and behaviour. Trust on sellers is assessed by whether sellers have badges or not. A questionnaire will be developed and distributed among a group of randomly selected FM sellers who sell vehicles on the platform. The chosen product of this study is the vehicle, a high-value purchase item. A high-value purchase requires consumers to seriously consider forming their attitude without any sign of impulsiveness. Hence, vehicles are the perfect choice to test the strength of consumer attitudes and behaviour. The findings of the study add to the elaboration likelihood model and online second-hand marketplace literature.
Keywords: consumer behaviour, elaboration likelihood model, Facebook Marketplace, C2C marketing
Procedia PDF Downloads 146
14857 Special Case of Trip Distribution Model and Its Use for Estimation of Detailed Transport Demand in the Czech Republic
Authors: Jiri Dufek
Abstract:
The national transport model of the Czech Republic has been refined to yield detailed travel demand at the municipality level (cities and villages of over 300 inhabitants). As a technique for this detailed modelling, a three-dimensional procedure for calibrating gravity models was used. Besides zone production and attraction, which are usual in gravity models, an additional parameter for trip distribution was introduced, usually called the "third dimension". In the model, this parameter is the demand between regions. The distribution procedure involved the calculation of appropriate skim matrices and their multiplication by three coefficients obtained by iterative balancing of production, attraction and the third dimension. This type of trip distribution was processed in R, and the results were used in the Czech Republic transport model created in PTV Vision. This process generated more precise results at the local level of the model (towns, villages).
Keywords: trip distribution, three dimension, transport model, municipalities
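A toy version of the three-dimensional balancing: a seed trip matrix is scaled in turn to match zone productions, zone attractions, and region-to-region demand totals (the "third dimension"). The zone-to-region mapping and all numbers are illustrative.

```python
import numpy as np

def balance(seed, prod, attr, region_of, region_demand, iters=100):
    T = seed.astype(float).copy()
    for _ in range(iters):
        T *= (prod / T.sum(axis=1))[:, None]          # match row productions
        T *= (attr / T.sum(axis=0))[None, :]          # match column attractions
        for (r, s), target in region_demand.items():  # match the third dimension
            block = np.ix_(region_of == r, region_of == s)
            total = T[block].sum()
            if total > 0:
                T[block] *= target / total
    return T

seed = np.ones((4, 4))
prod = np.array([100., 50., 80., 70.])
attr = np.array([90., 60., 75., 75.])
region_of = np.array([0, 0, 1, 1])   # zones 0-1 in region 0, zones 2-3 in region 1
demand = {(0, 1): 60., (1, 0): 55.}  # inter-regional trip targets
print(balance(seed, prod, attr, region_of, demand).round(1))
```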
Procedia PDF Downloads 135
14856 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary condition at the inner tube wall is dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
Procedia PDF Downloads 273
14855 Trading off Accuracy for Speed in PowerDrill
Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica
Abstract:
In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th latency percentile from 30 to 4 seconds.
Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries
Procedia PDF Downloads 263
14854 Photoplethysmography-Based Device Designing for Cardiovascular System Diagnostics
Authors: S. Botman, D. Borchevkin, V. Petrov, E. Bogdanov, M. Patrushev, N. Shusharina
Abstract:
In this paper, we report the development of a device for diagnostics of the cardiovascular system state and an associated automated workstation for large-scale medical measurement data collection and analysis. It was shown that the optimal design for the monitoring device is a wristband, as it represents an engineering trade-off between accuracy and usability. The monitoring device is based on an infrared reflective photoplethysmographic sensor, which allows collecting multiple physiological parameters, such as heart rate and pulse wave characteristics. The device uses a BLE interface for transmitting medical and supplementary data to a coupled mobile phone, which processes the data and sends them to the doctor's automated workstation. Results of testing this experimental model confirmed the applicability of the proposed approach.
Keywords: cardiovascular diseases, health monitoring systems, photoplethysmography, pulse wave, remote diagnostics
Procedia PDF Downloads 499
14853 Mobile Platform's Attitude Determination Based on Smoothed GPS Code Data and Carrier-Phase Measurements
Authors: Mohamed Ramdani, Hassen Abdellaoui, Abdenour Boudrassen
Abstract:
Mobile platform attitude estimation approaches are mainly based on combined positioning techniques and purpose-developed algorithms, which aim to reach a fast and accurate solution. In this work, we describe the design and implementation of an attitude determination (AD) process using only measurements from GPS sensors. The approach rests on GPS code data smoothed with a Hatch filter and raw carrier-phase measurements, integrated into an attitude algorithm based on vector measurements using the least-squares (LSQ) estimation method. A GPS dataset from a static experiment is used to investigate the effectiveness of the presented approach and, consequently, to check the accuracy of the attitude estimation algorithm. Attitude results from a GPS multi-antenna setup over short baselines are introduced and analyzed. The 3D accuracy of the attitude parameters estimated using smoothed measurements is about 0.27°.
Keywords: attitude determination, GPS code data smoothing, Hatch filter, carrier-phase measurements, least-squares attitude estimation
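The code-smoothing step can be illustrated with a standard Hatch filter: the noisy pseudorange is blended with a prediction propagated by the much less noisy carrier-phase increment. This is a generic textbook form, given as an assumed reading of the smoothing used above.

```python
def hatch_smooth(code, phase, window=100):
    """Carrier-smoothed pseudorange; 'code' and 'phase' in meters, one value per epoch."""
    smoothed = [code[0]]
    for k in range(1, len(code)):
        n = min(k + 1, window)                                # effective averaging length
        predicted = smoothed[-1] + (phase[k] - phase[k - 1])  # propagate with phase delta
        smoothed.append(code[k] / n + predicted * (n - 1) / n)
    return smoothed
```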
Procedia PDF Downloads 161
14852 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the fields of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals, and t-tests confirmed statistically significant differences between the features extracted from them. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
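An illustrative extraction of the kind of temporal and frequency features discussed above from a bearing vibration signal; the study's exact feature set is not specified, so the choices here (RMS, kurtosis, dominant frequency) and the synthetic signal are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def bearing_features(signal, fs):
    rms = np.sqrt(np.mean(signal ** 2))    # temporal: overall vibration energy
    kurt = kurtosis(signal)                # temporal: impulsiveness of defect hits
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    peak = freqs[np.argmax(spectrum)]      # frequency: dominant spectral component
    return np.array([rms, kurt, peak])

fs = 12_000                                # assumed sampling rate
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 157 * t) + 0.1 * np.random.randn(len(t))  # synthetic signal
print(bearing_features(sig, fs))           # such vectors would feed the ANN/SVM models
```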
Procedia PDF Downloads 56
14851 Designing Equivalent Model of Floating Gate Transistor
Authors: Birinderjit Singh Kalyan, Inderpreet Kaur, Balwinder Singh Sohi
Abstract:
In this paper, an equivalent model for the floating gate transistor has been proposed. Using the floating gate voltage value, capacitive coupling coefficients have been found at different bias conditions. The amount of charge present on the gate has then been calculated using the transient models of hot-electron programming and Fowler-Nordheim tunnelling. The proposed model can be extended to transient conditions as well. The SPICE equivalent model is designed, and its current-voltage and transfer characteristics are comparatively analysed. The DC current-voltage characteristics, as well as the DC transfer characteristics, have been plotted for an FGMOS with W/L = 0.25 μm/0.375 μm and an inter-poly capacitance of 0.8 fF, for both programmed and erased states. A comparative analysis has been made between the present model and previously available capacitive coupling coefficient methods.
Keywords: FGMOS, floating gate transistor, capacitive coupling coefficient, SPICE model
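The capacitive coupling relation underlying such models is commonly written as below, where C_CG, C_S, C_D and C_B are the control-gate, source, drain and bulk coupling capacitances and Q_FG is the stored charge; the symbols follow common usage rather than this paper's specific notation.

```latex
V_{FG} = \frac{Q_{FG} + C_{CG} V_{CG} + C_S V_S + C_D V_D + C_B V_B}
              {C_{CG} + C_S + C_D + C_B}
```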
Procedia PDF Downloads 548
14850 Numerical Investigation of Indoor Environmental Quality in a Room Heated with Impinging Jet Ventilation
Authors: Mathias Cehlin, Arman Ameen, Ulf Larsson, Taghi Karimipanah
Abstract:
Indoor environmental quality (IEQ) is increasingly recognized as a significant factor influencing the overall level of building occupants' health, comfort and productivity. An air-conditioning and ventilation system is normally used to create and maintain good thermal comfort and indoor air quality. Providing occupant thermal comfort and well-being with minimized use of energy is the main purpose of a heating, ventilating and air conditioning system. Among the different types of ventilation systems, the most widely known and used are mixing ventilation (MV) and displacement ventilation (DV). Impinging jet ventilation (IJV) is a promising ventilation strategy developed in the early 2000s. IJV has the advantage of supplying air downwards close to the floor with high momentum, thereby delivering fresh air further out into the room compared to DV. Operating in cooling mode, IJV systems can have higher ventilation effectiveness and heat removal effectiveness than MV, and therefore higher energy efficiency. But how does IJV perform when operating in heating mode? This paper presents the function of IJV in a typical office room under winter conditions (heating mode). A validated CFD model, which uses the v2-f turbulence model, is used for the prediction of air flow pattern, thermal comfort and air change effectiveness. The office room under consideration has the dimensions 4.2×3.6×2.5 m and can be arranged as a single-person or two-person office. A number of important factors influencing the room environment with IJV are studied. The considered parameters are: heating demand, number of occupants and supply air conditions. A total of 6 simulation cases are carried out to investigate the effects of the considered parameters. The heat load in the room is contributed by occupants, computers and lighting. The model includes one external wall with a window. The interaction of heat sources, supply air flow and downdraught from the window results in a complex flow phenomenon. Preliminary results indicate that IJV can be used for heating a typical office room, and the IEQ appears suitable in the occupied region for the studied cases.
Keywords: computation fluid dynamics, impinging jet ventilation, indoor environmental quality, ventilation strategy
Procedia PDF Downloads 182
14849 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation; the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above the solar panel and two inclined at a 45° angle in front of and behind it. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The sensor setup includes various devices to monitor environmental and operational parameters: an irradiance sensor measures solar irradiance on the photovoltaic (PV) panel; temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature; wave gauges measure wave height, while load cells capture mooring force; inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions; and an electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield while minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
Procedia PDF Downloads 21