Search results for: metrics of engineering
3274 Use of Treated and Untreated Sunflower Seed Hulls in Fattening Lamb Feeding
Authors: Mohammad Saleh Fasihi Ramandi
Abstract:
This study investigates the nutritional value of both enriched and non-enriched sunflower seed hulls in lamb-fattening diets. Sunflower seed processing for oil production produces a considerable by-product, with 18–25% of the total seed weight comprised of hulls. These hulls are typically regarded as nutritionally limited due to their high fiber and low protein content, but the application of urea enrichment appears to increase their potential as feed. In this experiment, fifty male lambs, aged 7–8 months, were divided into five groups of ten, each receiving one of five diets: 1) a control diet with cereal straw and no hulls; 2) a diet with 10% non-enriched hulls; 3) a diet with 20% non-enriched hulls; 4) a diet with 10% urea-enriched hulls; and 5) a diet with 20% urea-enriched hulls. The feeding trial lasted 90 days, during which metrics such as daily weight gain, dry matter intake, and feed conversion efficiency were recorded. At the end of the trial, three lambs from each group were randomly selected for slaughter, and their carcass characteristics were documented. The results suggest that diets including enriched sunflower hulls led to significantly greater final weights, weight gain, and improved feed conversion efficiency. Economically, using enriched sunflower hulls in fattening diets for lambs reduced the cost per kilogram of live and carcass weight gain compared to diets with non-enriched hulls and cereal straw.Keywords: sunflower seed hulls, lamb fattening, urea enrichment, feed efficiency
Procedia PDF Downloads 21
3273 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images
Authors: Qiang Wang, Hongyang Yu
Abstract:
Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically only use color (RGB) images as input, our approach utilizes both color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metrics of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations
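For reference, the MPJPE metric cited above is the mean Euclidean distance between predicted and ground-truth joints. The NumPy sketch below illustrates it on toy data; the array shapes, the per-person matching and the numbers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error (mm) between predicted and ground-truth
    3D joints, assumed already matched per person and expressed in the same
    (e.g. root-relative) coordinate frame. Shapes: (people, joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# toy usage with random numbers standing in for real annotations
rng = np.random.default_rng(0)
gt = rng.normal(size=(2, 15, 3)) * 100.0              # two people, 15 joints, mm
pred = gt + rng.normal(scale=25.0, size=gt.shape)     # noisy stand-in "predictions"
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```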
Procedia PDF Downloads 81
3272 Kinematic Analysis of Heel Height Effect on Knee Direction Correction in a Patient with Genu Recurvatum: A Case Study
Authors: Parya Salimitari, Farhad Tabatabai Ghomsheh, Siyamak Khorramymehr, Hossein Taghadosi, Mohammad Hossein Dashti
Abstract:
The aim of this study was to evaluate the effect of heel height on knee joint direction in genu recurvatum patients compared to the normal state. The test was performed on a patient with genu recurvatum and a healthy person with similar, matched biomechanical conditions. Subjects were tested under six shoe conditions with heel heights of 0, 1, 2, 3, 4 and 5 cm, after marker placement, during gait. Spatio-temporal data obtained from a Vicon motion system (six-camera T10 model, Oxford Metrics Ltd., Oxford, UK) were used to compute and analyze the kinematic results. In this study, we sought to determine the effect of the shoe-heel intervention on knee joint direction correction. The results indicate that the 1 cm heel was optimal, significantly improving knee flexion and the flexion-extension angle, so that the difference in knee flexion-extension angle between the patient and the healthy person reached zero (good posture) at some stages of walking. Compared with the 0 cm heel, the 3 cm heel reduced the knee recurvatum index (KRI) in the patient by up to 21.74% (from 219.233 mm to 47.6714 mm). According to the findings of this study, it can be concluded that increasing heel height is effective in correcting knee joint alignment in genu recurvatum, and the optimum heel height is 1 cm. Keywords: joint alignment of knee, gait analysis, genu recurvatum, heel lift, kinematics, motion-analysis
Procedia PDF Downloads 204
3271 Taking Learning beyond Kirkpatrick’s Levels: Applying Return on Investment Measurement in Training
Authors: Charles L. Sigmund, M. A. Aed, Lissa Graciela Rivera Picado
Abstract:
One critical component of the training development process is the evaluation of the impact and value of the program. Oftentimes, however, learning organizations bypass this phase either because they are unfamiliar with effective methods for measuring the success or effect of the training or because they believe the effort to be too time-consuming or cumbersome. As a result, most organizations that do conduct evaluation limit their scope to Kirkpatrick L1 (reaction) and L2 (learning), or at most carry through to L4 (results). In 2021 Microsoft made a strategic decision to assess the measurable and monetized impact for all training launches and designed a scalable and program-agnostic tool for providing full-scale L5 return on investment (ROI) estimates for each. In producing this measurement tool, the learning and development organization built a framework for making business prioritizations and resource allocations that is based on the projected ROI of a course. The analysis and measurement posed by this process use a combination of training data and operational metrics to calculate the effective net benefit derived from a given training effort. Business experts in the learning field generally consider a 10% ROI to be an outstanding demonstration of the value of a project. Initial findings from this work applied to a critical customer-facing program yielded an estimated ROI of more than 49%. This information directed the organization to make a more concerted and concentrated effort in this specific line of business and resulted in additional investment in the training methods and technologies being used.Keywords: evaluation, measurement, return on investment, value
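The L5 figure quoted above follows the usual learning-ROI arithmetic of net monetized benefit divided by programme cost. The sketch below illustrates that calculation with made-up figures; it is not Microsoft's tool or data.

```python
def training_roi(monetized_benefits, total_costs):
    """ROI (%) = (net benefit / cost) * 100, the usual L5 definition."""
    return 100.0 * (monetized_benefits - total_costs) / total_costs

# hypothetical programme: 200k cost yielding 298k in monetized operational benefit
print(f"ROI: {training_roi(298_000, 200_000):.0f}%")   # -> ROI: 49%
```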
Procedia PDF Downloads 185
3270 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomalies on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can lead to IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic, so beyond IoT development itself, and especially for smart and IoT applications, there is a need for intelligent processing and analysis of data to keep these systems secure. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection so that fewer features are passed to the prediction models that evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, decision tree (DT), and random forest (RF), and they are compared on test accuracy and detection speed. The evaluation metrics include precision, recall, F1 score, FPR, NPV, geometric mean (GM), MCC, and AUC-ROC. The random forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%. Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
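A minimal scikit-learn sketch of the kind of pipeline described above is shown below, with synthetic data standing in for the IoT network-traffic dataset; the feature count, class balance and hyperparameters are illustrative assumptions.

```python
# ANOVA F-test feature selection followed by a Random Forest classifier,
# evaluated with precision/recall/F1 and AUC, on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=5000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=42)  # stand-in traffic features
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),            # ANOVA keeps the 10 strongest features
    RandomForestClassifier(n_estimators=200, random_state=42),
)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))             # precision, recall, F1
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```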
Procedia PDF Downloads 126
3269 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision
Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell
Abstract:
Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.Keywords: CNC manufacturing, machine-tool, precision machining, thermal error
Procedia PDF Downloads 89
3268 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues
Authors: Amirhossein Chambari
Abstract:
This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for internet mirror sites, communication networks, one-server systems, and so on. The model considers situations in which immobile (fixed) service facilities are congested by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives for this problem: (1) customers, who wish to limit access and waiting times for service, and (2) the service provider, who wishes to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and the class of NP-hard problems. To solve the model, a controlled elitist non-dominated sorting genetic algorithm (controlled NSGA-II) and a controlled elitist non-dominated ranking genetic algorithm (NRGA-I) are proposed. The two proposed metaheuristic algorithms are evaluated using standard multi-objective metrics. Finally, the results are analyzed and some conclusions are given. Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I
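The two objectives above rest on standard M/M/1/K results. The sketch below computes the idle probability and the mean time in system for a single congested facility with hypothetical rates; it is a general queueing illustration, not the paper's full location model.

```python
def mm1k_metrics(lam, mu, K):
    """Standard M/M/1/K results: idle probability P0, blocking probability PK,
    and mean time in system W (Little's law on the effective arrival rate)."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p0 = 1.0 / (K + 1)
        L = K / 2.0
    else:
        p0 = (1 - rho) / (1 - rho ** (K + 1))
        L = rho * (1 - (K + 1) * rho ** K + K * rho ** (K + 1)) / ((1 - rho) * (1 - rho ** (K + 1)))
    pK = p0 * rho ** K                 # probability the facility is full (arrivals lost)
    lam_eff = lam * (1 - pK)           # admitted arrival rate
    return p0, pK, L / lam_eff

p0, pK, W = mm1k_metrics(lam=4.0, mu=5.0, K=10)   # hypothetical demand and service rates
print(f"idle probability {p0:.3f}, blocking probability {pK:.3f}, mean time in system {W:.3f}")
```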
Procedia PDF Downloads 584
3267 Identification of Social Responsibility Factors within Mega Construction Projects
Authors: Ali Alotaibi, Francis Edum-Fotwe, Andrew Price
Abstract:
Mega construction projects create buildings and major infrastructure to respond to work and life requirements while playing a vital role in promoting any nation’s economy. However, the industry is often criticised for not balancing economic, environmental and social dimensions of their projects, with emphasis typically on one aspect to the detriment of the others. This has resulted in many negative impacts including environmental pollution, waste throughout the project lifecycle, low productivity, and avoidable accidents. The identification of comprehensive Social Responsibility (SR) indicators, which combine social, environmental and economic aspects, is urgently needed. This is particularly the case in the context of the Kingdom of Saudi Arabia (KSA), which often has mega public construction projects. The aim of this paper is to develop a set of wide-ranging SR indicators which encompass social, economic and environmental aspects unique to the KSA. A qualitative approach was applied to explore relevant indicators through a review of the existing literature, international standards and reports. A list of appropriate indicators was developed, and its comprehensiveness was corroborated by interviews with experts on mega construction projects working with SR concepts in the KSA. The findings present 39 indicators and their metrics, covering 10 economic, 12 environmental and 17 social aspects of SR mapped against their references. These indicators are a valuable reference for decision-makers and academics in the KSA to understand factors related to SR in mega construction projects. The indicators are related to mega construction projects within the KSA and require validation in a real case scenario or within a different industry to demonstrate their generalisability.Keywords: social responsibility, construction projects, economic, social, environmental, indicators
Procedia PDF Downloads 170
3266 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid
Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni
Abstract:
In Zambia, recent initiatives by various power operators, such as ZESCO and CEC, and by consumers such as the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies to enable the integration of renewable energy sources, local and bulk generation, and demand response. Thus, for the reliable operation of smart grids, the information infrastructure must be secure and reliable in the face of both failures and cyberattacks. Due to the nature of these systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks exist internationally, such as the NIST framework; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption; therefore, there is concern about securing the ICS controlling key infrastructure critical to the Zambian economy, as few facts are known about the true security posture. In this paper, we introduce a Stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics like system availability, maintainability, and reliability. Keywords: anomaly, availability, detection, edge, maintainability, reliability, stochastic
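As an illustration of how steady-state probabilities feed an availability metric of the kind mentioned above, the sketch below solves a small, hypothetical Markov model of an edge device; the states and transition rates are assumptions for illustration, not the SEADS model.

```python
import numpy as np

# Hypothetical 3-state continuous-time Markov model of an edge device:
# 0 = healthy, 1 = under attack (degraded), 2 = failed. Rates (per hour) are illustrative.
Q = np.array([
    [-0.30,  0.20,  0.10],   # leave healthy: attack 0.2/h, failure 0.1/h
    [ 0.50, -0.70,  0.20],   # recover from attack 0.5/h, fail while attacked 0.2/h
    [ 1.00,  0.00, -1.00],   # repair 1.0/h
])

# Steady-state vector pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]          # any non-failed state counts as "available"
print("steady-state probabilities:", np.round(pi, 3))
print(f"availability: {availability:.3f}")
```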
Procedia PDF Downloads 111
3265 Commuters Trip Purpose Decision Tree Based Model of Makurdi Metropolis, Nigeria and Strategic Digital City Project
Authors: Emmanuel Okechukwu Nwafor, Folake Olubunmi Akintayo, Denis Alcides Rezende
Abstract:
Decision tree models are versatile and interpretable machine learning algorithms widely used for both classification and regression tasks, which can be related to cities, whether physical or digital. The aim of this research is to assess how well decision tree algorithms can predict trip purposes in Makurdi, Nigeria, while also exploring their connection to the strategic digital city initiative. The research methodology involves formalizing household demographic and trip information datasets obtained from an extensive survey process. Modelling and prediction were carried out using the Python programming language, and evaluation metrics such as R-squared and mean absolute error (MAE) were used to assess the decision tree algorithm's performance. The results indicate that the model performed well, with accuracies of 84% and 68% and low MAE values of 0.188 and 0.314 on the training and validation data, respectively. This suggests the model can be relied upon for future prediction. The conclusion reiterates that this model will assist decision-makers, including urban planners, transportation engineers, government officials, and commuters, in making informed decisions on transportation planning and management within the framework of a strategic digital city. Its application will enhance the efficiency, sustainability, and overall quality of transportation services in Makurdi, Nigeria. Keywords: decision tree algorithm, trip purpose, intelligent transport, strategic digital city, travel pattern, sustainable transport
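A minimal sketch of the modelling-and-evaluation step described above is given below. It assumes a numerically encoded trip-purpose target (since R-squared and MAE are reported) and uses synthetic features in place of the household survey data.

```python
# Decision tree fitted on stand-in household/trip features, scored with MAE and R-squared
# on training and validation splits, mirroring the reporting above.
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1200, n_features=6, noise=5.0, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=1)

tree = DecisionTreeRegressor(max_depth=6, random_state=1).fit(X_train, y_train)

for name, Xs, ys in [("train", X_train, y_train), ("validation", X_val, y_val)]:
    pred = tree.predict(Xs)
    print(f"{name}: R2={r2_score(ys, pred):.3f}, MAE={mean_absolute_error(ys, pred):.3f}")
```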
Procedia PDF Downloads 23
3264 Characterising the Performance Benefits of a 1/7-Scale Morphing Rotor Blade
Authors: Mars Burke, Alvin Gatto
Abstract:
Rotary-wing aircraft serve as indispensable components in the advancement of aviation, valued for their ability to operate in diverse and challenging environments without the need for conventional runways. This versatility makes them ideal for applications like environmental conservation, precision agriculture, emergency medical support, and rapid-response operations in rugged terrains. However, although highly maneuverable, rotary-wing platforms generally have lower aerodynamic efficiency than fixed-wing aircraft. This study aims to improve aerodynamic performance by examining a 1/7th-scale rotor blade model with a NACA0012 airfoil using CROTOR software. The analysis focuses on optimal spanwise locations for separating the morphing and fixed blade sections at 85%, 90%, and 95% of the blade radius (r/R), with up to +20 degrees of twist incorporated into the design. Key performance metrics assessed include lift coefficient (CL), drag coefficient (CD), lift-to-drag ratio (CL/CD), Mach number, power, thrust coefficient, and Figure of Merit (FOM). Results indicate that the 0.90 r/R position is optimal for dividing the morphing and fixed sections, achieving a significant improvement of over 7% in both lift-to-drag ratio and FOM. These findings underscore the substantial impact that geometric modifications, through the inclusion of a morphing capability, can ultimately have on the overall performance of the rotor system and its rotational aerodynamics. Keywords: rotary morphing, rotational aerodynamics, rotorcraft morphing, rotor blade, twist morphing
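For reference, the hover Figure of Merit reported above is conventionally the ratio of ideal induced power to actual rotor power, which in coefficient form reads:

```latex
\mathrm{FOM} \;=\; \frac{P_{\mathrm{ideal}}}{P_{\mathrm{actual}}} \;=\; \frac{C_T^{3/2}}{\sqrt{2}\,C_P}
```

so, at a fixed thrust coefficient, a higher FOM implies a lower power coefficient.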
Procedia PDF Downloads 18
3263 The Impact of Enhanced Recovery after Surgery (ERAS) Protocols on Anesthesia Management in High-Risk Surgical Patients
Authors: Rebar Mohammed Hussein
Abstract:
Enhanced Recovery After Surgery (ERAS) protocols have transformed perioperative care, aiming to reduce surgical stress, optimize pain management, and accelerate recovery. This study evaluates the impact of ERAS on anesthesia management in high-risk surgical patients, focusing on opioid-sparing techniques and multimodal analgesia. A retrospective analysis was conducted on patients undergoing major surgeries within an ERAS program, comparing outcomes with a historical cohort receiving standard care. Key metrics included postoperative pain scores, opioid consumption, length of hospital stay, and complication rates. Results indicated that the implementation of ERAS protocols significantly reduced postoperative opioid use by 40% and improved pain management outcomes, with 70% of patients reporting satisfactory pain control on postoperative day one. Additionally, patients in the ERAS group experienced a 30% reduction in length of stay and a 20% decrease in complication rates. These findings underscore the importance of integrating ERAS principles into anesthesia practice, particularly for high-risk patients, to enhance recovery, improve patient satisfaction, and reduce healthcare costs. Future directions include prospective studies to further refine anesthesia techniques within ERAS frameworks and explore their applicability across various surgical specialties.Keywords: ERAS protocols, high-risk surgical patients, anesthesia management, recovery
Procedia PDF Downloads 28
3262 Life Stage Customer Segmentation by Fine-Tuning Large Language Models
Authors: Nikita Katyal, Shaurya Uppal
Abstract:
This paper tackles the significant challenge of accurately classifying customers within a retailer’s customer base. Accurate classification is essential for developing targeted marketing strategies that effectively engage this important demographic. To address this issue, we propose a method that utilizes Large Language Models (LLMs). By employing LLMs, we analyze the metadata associated with product purchases derived from historical data to identify key product categories that act as distinguishing factors. These categories, such as baby food, eldercare products, or family-sized packages, offer valuable insights into the likely household composition of customers, including families with babies, families with kids/teenagers, families with pets, households caring for elders, or mixed households. We segment high-confidence customers into distinct categories by integrating historical purchase behavior with LLM-powered product classification. This paper asserts that life stage segmentation can significantly enhance e-commerce businesses’ ability to target the appropriate customers with tailored products and campaigns, thereby augmenting sales and improving customer retention. Additionally, the paper details the data sources, model architecture, and evaluation metrics employed for the segmentation task.Keywords: LLMs, segmentation, product tags, fine-tuning, target segments, marketing communication
Procedia PDF Downloads 27
3261 General Architecture for Automation of Machine Learning Practices
Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain
Abstract:
Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised. This often entails collecting a sizable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. This pre-processed data is then used to train a model on some machine learning algorithm. After the model has been trained, it needs to be assessed by determining metrics like accuracy, precision, and recall, utilising a test dataset. Every time a new model is built, both data pre-processing and model training—two crucial processes in the Machine learning (ML) workflow—must be carried out. Thus, there are various Machine Learning algorithms that can be employed for every single approach to data pre-processing, generating a large set of combinations to choose from. Example: for every method to handle missing values (dropping records, replacing with mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to get the optimum outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this largely produced “combination set of pre-processing steps and algorithms” into an automated workflow which simplifies the task of carrying out all possibilities.Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler
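The combination set described above maps naturally onto a parameter grid over interchangeable pre-processing operators and estimators. The scikit-learn sketch below is an analogy to that operator-pool idea, not the proposed architecture itself; the dataset and candidate operators are illustrative.

```python
# Enumerating combinations of pre-processing operators and algorithms with a single
# grid search; each pipeline slot can be swapped for a different operator instance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # toy dataset (no missing values; imputer shows the slot)

pipe = Pipeline([
    ("impute", SimpleImputer()),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=5000)),
])

param_grid = [
    {"impute__strategy": ["mean", "median"],
     "scale": [StandardScaler(), MinMaxScaler()],
     "model": [LogisticRegression(max_iter=5000)],
     "model__C": [0.1, 1.0, 10.0]},
    {"impute__strategy": ["mean", "median"],
     "scale": [StandardScaler(), MinMaxScaler()],
     "model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print("best combination:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 4))
```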
Procedia PDF Downloads 58
3260 Importance of Human Factors on Cybersecurity within Organizations: A Study of Attitudes and Behaviours
Authors: Elham Rajabian
Abstract:
The rise in cybersecurity incidents is a growing threat to most organisations in general, while the impact of each incident is unique to the organization affected. There is a need for the behavioural sciences to concentrate on employees' behaviour in order to prepare key security mitigation options against cybersecurity incidents. There are noticeable differences among users of a computer system in terms of complying with security behaviours. These differences can be discussed under several headings, such as procrastination on tasks that must be done, the tendency to act without thinking, thinking ahead about unexpected implications of present-day issues, and risk-taking behaviours in security policy compliance. In this article, we introduce high-profile cyber-attacks and their impacts on weakening cyber resiliency in organizations. We also give attention to the human errors that influence network security, discussing them as part of the psychological factors that affect compliance with security policies. The organizational challenges are studied in the related work section in order to shape a sustainable cyber risk management approach. Insiders' behaviours are examined as a cyber security gap in order to frame proper cyber resiliency in section 3. We examine best cybersecurity practices by discussing four CIS challenges in section 4. In this regard, we provide a guideline and metrics to measure cyber resilience in organizations in section 5. In the end, we give some recommendations for building a cybersecurity culture based on individual behaviours. Keywords: cyber resilience, human factors, cybersecurity behavior, attitude, usability, security culture
Procedia PDF Downloads 97
3259 Improving Patient Journey in the Obstetrics and Gynecology Emergency Department: A Comprehensive Analysis of Patient Experience
Authors: Lolwa Alansari, Abdelhamid Azhaghdani, Sufia Athar, Hanen Mrabet, Annaliza Cruz, Tamara Alshadafat, Almunzer Zakaria
Abstract:
Introduction: Improving the patient experience is a fundamental pillar of healthcare's quadruple aims. Recognizing the importance of patient experiences and perceptions in healthcare interactions is pivotal for driving quality improvement. This abstract centers around the Patient Experience Program, an endeavor crafted with the purpose of comprehending and elevating the experiences of patients in the Obstetrics & Gynecology Emergency Department (OB/GYN ED). Methodology: This comprehensive endeavor unfolded through a structured sequence of phases following Plan-Do-Study-Act (PDSA) model, spanning over 12 months, focused on enhancing patient experiences in the Obstetrics & Gynecology Emergency Department (OB/GYN ED). The study meticulously examined the journeys of patients with acute obstetrics and gynecological conditions, collecting data from over 100 participants monthly. The inclusive approach covered patients of different priority levels (1-5) admitted for acute conditions, with no exclusions. Historical data from March and April 2022 serves as a benchmark for comparison, strengthening causality claims by providing a baseline understanding of OB/GYN ED performance before interventions. Additionally, the methodology includes the incorporation of staff engagement surveys to comprehensively understand the experiences of healthcare professionals with the implemented improvements. Data extraction involved administering open-ended questions and comment sections to gather rich qualitative insights. The survey covered various aspects of the patient journey, including communication, emotional support, timely access to care, care coordination, and patient-centered decision-making. The project's data analysis utilized a mixed-methods approach, combining qualitative techniques to identify recurring themes and extract actionable insights and quantitative methods to assess patient satisfaction scores and relevant metrics over time, facilitating the measurement of intervention impact and longitudinal tracking of changes. From the themes we discovered in both the online and in-person patient experience surveys, several key findings emerged that guided us in initiating improvements, including effective communication and information sharing, providing emotional support and empathy, ensuring timely access to care, fostering care coordination and continuity, and promoting patient-centered decision-making. Results: The project yielded substantial positive outcomes, significantly improving patient experiences in the OB/GYN ED. Patient satisfaction levels rose from 62% to a consistent 98%, with notable improvements in satisfaction with care plan information and physician care. Waiting time satisfaction increased from 68% to a steady 97%. The project positively impacted nurses' and midwives' job satisfaction, increasing from 64% to an impressive 94%. Operational metrics displayed positive trends, including a decrease in the "left without being seen" rate from 3% to 1%, the discharge against medical advice rate dropping from 8% to 1%, and the absconded rate reducing from 3% to 0%. These outcomes underscore the project's effectiveness in enhancing both patient and staff experiences in the healthcare setting. Conclusion: The use of a patient experience questionnaire has been substantiated by evidence-based research as an effective tool for improving the patient experience, guiding interventions, and enhancing overall healthcare quality in the OB/GYN ED. 
The project's interventions have resulted in a more efficient allocation of resources, reduced hospital stays, and minimized unnecessary resource utilization. This, in turn, contributes to cost savings for the healthcare facility.Keywords: patient experience, patient survey, person centered care, quality initiatives
Procedia PDF Downloads 58
3258 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings
Authors: A. W. J. Wong, I. H. Ibrahim
Abstract:
Increasing urbanisation has driven building designers to incorporate natural ventilation in the designs of sustainable buildings. This project utilises Computational Fluid Dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian level of first and fourth levels of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model. The validated simulation model was then used for coupled simulations on SIT@SP and neighbouring geometries, under two wind speeds. Both steady and transient simulations were used to identify differences in results. Steady and transient results are agreeable with the transient simulation identifying peak velocities during flow development. Under a lower wind speed, the first level was sufficiently ventilated while the fourth level was not. The first level has excessive wind velocities in the higher wind speed and the fourth level was adequately ventilated. Fourth level flow velocity was consistently lower than those of the first level. This is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and insufficiently ventilated fourth level. Future works for this project extend to modifying the urban geometry, simulation model improvements, evaluation using other assessment metrics and extending the area of interest to the entire building.Keywords: buildings, CFD Simulations, natural ventilation, urban airflow
Procedia PDF Downloads 221
3257 Performance Evaluation of Soft RoCE over 1 Gigabit Ethernet
Authors: Gurkirat Kaur, Manoj Kumar, Manju Bala
Abstract:
Ethernet is the most influential and widely used networking technology in the world. With the growing demand for low-latency and high-throughput technologies like InfiniBand and RoCE, unique features such as RDMA (Remote Direct Memory Access) have evolved. RDMA is an effective technology used for reducing system load and improving performance. InfiniBand is a well-known technology that provides high bandwidth and low latency and makes optimal use of built-in features like RDMA. With the rapid evolution of InfiniBand technology, and with Ethernet lacking RDMA and a zero-copy protocol, the Ethernet community has come out with new enhancements that bridge the gap between InfiniBand and Ethernet. By adding RDMA and a zero-copy protocol to Ethernet, a new networking technology has evolved, called RDMA over Converged Ethernet (RoCE). RoCE is a standard released by the IBTA standardization body to define the RDMA protocol over Ethernet. With the emergence of lossless Ethernet, RoCE uses InfiniBand’s efficient transport to provide the platform for deploying RDMA technology in mainstream data centres over 10GigE, 40GigE and beyond. RoCE provides all of InfiniBand's transport benefits and its well-established RDMA ecosystem, combined with converged Ethernet. In this paper, we evaluate a heterogeneous Linux cluster with multiple nodes and fast interconnects, i.e., gigabit Ethernet and Soft RoCE. This paper presents the heterogeneous Linux cluster configuration and evaluates its performance using Intel’s MPI Benchmarks. Our results show that Soft RoCE performs better than plain Ethernet in various performance metrics like bandwidth, latency and throughput. Keywords: ethernet, InfiniBand, RoCE, RDMA, MPI, Soft RoCE
Procedia PDF Downloads 464
3256 Progress in Combining Image Captioning and Visual Question Answering Tasks
Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima
Abstract:
Combining Image Captioning and Visual Question Answering (VQA) tasks have emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image. VQA aims to answer a natural language question about the image. Both these tasks include computer vision and natural language processing (NLP) and require a deep understanding of the content of the image and semantic relationship within the image and the ability to generate a response in natural language. There has been remarkable growth in both these tasks with rapid advancement in deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question-answering (VQA) tasks. We first discuss both image captioning and VQA tasks individually and then the various ways in which both these tasks can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. This paper concludes with the need for generating captions based on the context and captions that are able to answer the most likely asked questions about the image so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.Keywords: image captioning, visual question answering, deep learning, natural language processing
Procedia PDF Downloads 74
3255 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, like trees and buildings, as well as various backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved the detection of edges compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Also, the edges detected by the existing approaches suffer from non-refined results, and the output image contains many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed methods are evaluated using the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of the output images. Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
Procedia PDF Downloads 103
3254 Personalized Social Resource Recommender Systems on Interest-Based Social Networks
Authors: C. L. Huang, J. J. Sia
Abstract:
Interest-based social networks, also known as social bookmark sharing systems, are useful platforms for people to conveniently read and collect internet resources. These platforms also provide social networking functions, so users can share and explore internet resources through the social network. Providing personalized internet resources to users is an important issue on these platforms. This study uses two types of relationships on the social networks, following and follower, and proposes a collaborative recommender system consisting of two main steps. First, this study calculates the relationship strength between the target user and the target user's followings and followers to find the top-N similar neighbors. Second, from the top-N similar neighbors, the articles (internet resources) that may interest the target user are recommended to the target user. In this system, users can efficiently obtain recent, related and diverse internet resources (knowledge) from the interest-based social network. This study collected the experimental dataset from Diigo, which is a famous bookmark sharing system. The experimental results show that the proposed recommendation model is more accurate than two traditional baseline recommendation models but slightly lower than the cosine model in accuracy. However, in the metrics of diversity and execution time, our proposed model outperforms the cosine model. Keywords: recommender systems, social networks, tagging, bookmark sharing systems, collaborative recommender systems, knowledge management
Procedia PDF Downloads 175
3253 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts regarding the variation of five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large Corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing before-hand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although notably impacts the execution time.Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
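The screening-and-prediction step described above amounts to fitting a linear model on coded factor levels. The sketch below fits main effects for an eight-run, two-level design; the design matrix is a standard orthogonal fraction, and the response values are made up for illustration, not the paper's measurements.

```python
# Main-effects linear predictor for execution time from a two-level design.
# Factor levels are coded -1/+1; the response column is hypothetical.
import numpy as np

# columns: data size, nodes, cores, memory, disks (coded levels)
X = np.array([
    [-1, -1, -1, -1, -1],
    [ 1, -1, -1,  1,  1],
    [-1,  1,  1, -1,  1],
    [ 1,  1,  1,  1, -1],
    [-1, -1,  1,  1, -1],
    [ 1, -1,  1, -1,  1],
    [-1,  1, -1,  1,  1],
    [ 1,  1, -1, -1, -1],
], dtype=float)
y = np.array([310, 540, 205, 330, 290, 520, 215, 335], dtype=float)  # seconds (hypothetical)

A = np.column_stack([np.ones(len(y)), X])            # intercept + main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

names = ["intercept", "data size", "nodes", "cores", "memory", "disks"]
for n, c in zip(names, coef):
    print(f"{n:>10}: {c:+.1f}")
```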
Procedia PDF Downloads 120
3252 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research studies on the emerging condition-based maintenance concept- Prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to the uproar in research studies on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency based features from these vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging deep classification and regression algorithms like the locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that the LLE as a comprehensive feature extraction technique yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs against the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
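A minimal scikit-learn sketch of using locally linear embedding for feature fusion before classification is shown below. The synthetic feature matrix stands in for the concatenated time-, frequency- and time-frequency-domain features from the pump vibration signals, and the classifier choice and neighbour counts are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 40-dimensional stand-in for concatenated vibration features, three health states
X, y = make_classification(n_samples=900, n_features=40, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3, random_state=7)
X_tr_low = lle.fit_transform(X_tr)          # fused, low-dimensional representation
X_te_low = lle.transform(X_te)              # out-of-sample mapping for test signals

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr_low, y_tr)
print("condition-classification accuracy:",
      round(accuracy_score(y_te, clf.predict(X_te_low)), 3))
```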
Procedia PDF Downloads 117
3251 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics
Authors: Nadir A. Carreon, Christa Sonderer, Aakarsh Rao, Roman Lysecky
Abstract:
With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly with their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on the security and safety of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices it does not consider the unique challenges of impacts to human health and privacy. Thus, the scoring of a medical device on which human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we propose a medical vulnerability scoring system (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact, and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate fifteen different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical device-oriented vulnerability scoring systems and the foundational CVSS.Keywords: common vulnerability system, medical devices, medical device security, vulnerabilities
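The abstract does not give the MVSS formula, so the sketch below only illustrates the general idea of augmenting a CVSS-style base score with health- and sensitivity-impact weights; the weight values, level names and combination rule are hypothetical, not the proposed system.

```python
# Hypothetical illustration of folding health and data-sensitivity impacts into a
# CVSS-style base score; weights and the combination rule are assumptions only.
HEALTH_WEIGHT = {"none": 0.0, "harm": 0.5, "life-threatening": 1.0}
SENSITIVITY_WEIGHT = {"none": 0.0, "personal": 0.4, "therapy-critical": 0.8}

def adjusted_score(cvss_base, health, sensitivity):
    """Scale a CVSS base score (0-10) upward for patient-safety and privacy impact,
    capping at 10 as CVSS does."""
    factor = 1.0 + HEALTH_WEIGHT[health] + SENSITIVITY_WEIGHT[sensitivity]
    return min(10.0, cvss_base * factor)

# e.g. the same mid-severity flaw in an insulin pump vs. an archiving system
print(adjusted_score(5.0, "life-threatening", "therapy-critical"))  # -> 10.0
print(adjusted_score(5.0, "none", "none"))                          # -> 5.0
```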
Procedia PDF Downloads 169
3250 Highway Lighting of the 21st Century is Smart, but is it Cost Efficient?
Authors: Saurabh Gupta, Vanshdeep Parmar, Sri Harsha Reddy Yelly, Michele Baker, Elizabeth Bigler, Kunhee Choi
Abstract:
It is known that the adoption of solar powered LED highway lighting systems or sensory LED highway lighting systems can dramatically reduce energy consumption by 55 percent when compared to conventional on-grid High Pressure Sodium (HPS) lamps that are widely applied to most highways. However, an initial high installation cost for building the infrastructure of solar photovoltaic devices hampers a wider adoption of such technologies. This research aims to examine currently available state-of-the-art solar photovoltaic and sensory technologies, identify major obstacles, and analyze each technology to create a benchmarking metrics from the benefit-cost analysis perspective. The on-grid HPS lighting systems will serve as the baseline for this study to compare it with other lighting alternatives such as solar and sensory LED lighting systems. This research will test the validity of the research hypothesis that alternative LED lighting systems produce more favorable benefit-cost ratios and the added initial investment costs are recouped by the savings in the operation and maintenance cost. The payback period of the excess investment and projected savings over the life-cycle of the selected lighting systems will be analyzed by utilizing the concept of Net Present Value (NPV). Researchers believe that if this study validates the research hypothesis, it can promote a wider adoption of alternative lighting systems that will eventually save millions of taxpayer dollars in the long-run.Keywords: lighting systems, sensory and solar PV, benefit cost analysis, net present value
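The NPV and payback arithmetic behind the benefit-cost comparison described above is sketched below; the discount rate, added investment and annual savings figures are hypothetical placeholders, not the study's data.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 flow (extra investment, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_years(extra_investment, annual_saving):
    """Years until cumulative savings recoup the added initial cost (simple payback)."""
    return extra_investment / annual_saving

extra_investment = 120_000        # added cost of solar/sensory LED over HPS (hypothetical)
annual_saving = 18_000            # energy plus maintenance saving per year (hypothetical)
life_years = 15
flows = [-extra_investment] + [annual_saving] * life_years

print(f"NPV at 5%: {npv(0.05, flows):,.0f}")
print(f"simple payback: {payback_years(extra_investment, annual_saving):.1f} years")
```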
Procedia PDF Downloads 352
3249 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset, which comes from a different domain, resulting in limited accuracy when measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. In order to measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores that are defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluation for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are still likely to be similar products. Therefore, the proposed method can greatly reduce the amount of training data, by an order of magnitude, as well as providing a reliable similarity metric. Keywords: visual search, deep learning, convolutional neural network, machine learning
Procedia PDF Downloads 215
3248 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts
Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti
Abstract:
Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in the industrial manufacturing world. A material model that considers the different printing patterns present in a FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-Code file. An Adaptive Mesh Refinement (AMR) based on the octree strategy is used in order to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed and the influence of key process parameters in the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented due to the orthotropy and heterogeneity constitutive nature of FFF components and because of the differences between the strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization
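For reference, the plane-stress form of the Tsai-Wu criterion used above is the standard quadratic interaction criterion, with the strength values taken as positive magnitudes:

```latex
F_1\sigma_1 + F_2\sigma_2 + F_{11}\sigma_1^2 + F_{22}\sigma_2^2 + F_{66}\tau_{12}^2
+ 2F_{12}\sigma_1\sigma_2 \;\ge\; 1,
\qquad
F_1 = \frac{1}{X_t} - \frac{1}{X_c},\quad
F_{11} = \frac{1}{X_t X_c},\quad
F_2 = \frac{1}{Y_t} - \frac{1}{Y_c},\quad
F_{22} = \frac{1}{Y_t Y_c},\quad
F_{66} = \frac{1}{S^2},\quad
F_{12} \approx -\tfrac{1}{2}\sqrt{F_{11}F_{22}}
```

where Xt, Xc, Yt, Yc are the tensile and compressive strengths along and across the deposition direction, S is the in-plane shear strength, and failure is predicted when the left-hand side reaches 1.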
Procedia PDF Downloads 64
3247 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing
Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger
Abstract:
This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing. Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles
Procedia PDF Downloads 46
3246 Downscaling Seasonal Sea Surface Temperature Forecasts over the Mediterranean Sea Using Deep Learning
Authors: Redouane Larbi Boufeniza, Jing-Jia Luo
Abstract:
This study assesses the suitability of deep learning (DL) for downscaling sea surface temperature (SST) over the Mediterranean Sea in the context of seasonal forecasting. We design a set of experiments that compare different DL configurations and deploy the best-performing architecture to downscale one-month lead forecasts of June–September (JJAS) SST from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for the period of 1982–2020. We have also introduced predictors over a larger area to include information about the main large-scale circulations that drive SST over the Mediterranean Sea region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results showed that the convolutional neural network (CNN)-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme SST spatial patterns. Besides, the CNN-based downscaling yields a much more accurate forecast of extreme SST and spell indicators and reduces the significant relevant biases exhibited by the raw model predictions. Moreover, our results show that the CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of the Mediterranean Sea. The results demonstrate the potential usefulness of CNN in downscaling seasonal SST predictions over the Mediterranean Sea, particularly in providing improved forecast products.Keywords: Mediterranean Sea, sea surface temperature, seasonal forecasting, downscaling, deep learning
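A minimal PyTorch sketch of a CNN downscaling model of the kind described above is given below: coarse one-month-lead predictor fields in, a finer-resolution SST field out. The channel counts, grid sizes and layer choices are illustrative assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

class SSTDownscaler(nn.Module):
    """Toy CNN: coarse SST plus large-scale predictor channels -> finer SST field."""
    def __init__(self, in_channels=4, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # downscaled SST (e.g. anomaly)
        )

    def forward(self, x):
        return self.net(x)

model = SSTDownscaler()
coarse = torch.randn(8, 4, 24, 48)        # batch of coarse predictor fields (stand-in data)
fine = model(coarse)                      # -> (8, 1, 96, 192)
loss = nn.functional.mse_loss(fine, torch.randn_like(fine))  # stand-in target field
loss.backward()
print(fine.shape)
```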
Procedia PDF Downloads 77
3245 Normalized Enterprises Architectures: Portugal's Public Procurement System Application
Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso
Abstract:
The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules, with the purpose of enabling evolution in Information Systems, as well as ensuring that they are ready for change. In order to make that possible, this work’s solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using Archimate. This application is achieved through the adaptation of the elements of this theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory’s encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures, thus fulfilling the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities. The aim is for every economic operator to have access to all public tenders, which are published in any of the 6 existing platforms, independently of where they are registered. In order to make this possible, we applied our solution to the construction of two different architectures, which are able of fulfilling the requirements of the Portuguese government. One of those architectures, TO-BE A, has a Message Broker that performs the communication between the platforms. The other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these 2 architectures, we also represent the AS-IS architecture that demonstrates the current behavior of the Public Procurement Systems. Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures, regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms
Procedia PDF Downloads 359