Search results for: accurate tagging algorithm
1640 Using Complete Soil Particle Size Distributions for More Precise Predictions of Soil Physical and Hydraulic Properties
Authors: Habib Khodaverdiloo, Fatemeh Afrasiabi, Farrokh Asadzadeh, Martinus Th. Van Genuchten
Abstract:
The soil particle-size distribution (PSD) is known to affect a broad range of soil physical, mechanical and hydraulic properties. A complete description of the PSD curve should provide more information about these properties than the soil textural class or the soil sand, silt and clay (SSC) fractions alone. We compared the accuracy of 19 different models of the cumulative PSD in terms of fitting observed data from a large number of Iranian soils. Parameters of the six most promising models were correlated with measured values of the field saturated hydraulic conductivity (Kfs), the mean weight diameter of soil aggregates (MWD), bulk density (ρb), and porosity (∅). These same soil properties were also correlated with conventional PSD parameters (SSC fractions), selected geometric PSD parameters (notably the mean diameter dg and its standard deviation σg), and several other PSD parameters (D50 and D60). The objective was to find the best predictions of several soil physical quality indices and the soil hydraulic properties. Neither SSC nor dg, σg, D50 and D60 were found to have a significant correlation with either Kfs or logKfs. However, the parameters of several cumulative PSD models showed statistically significant correlations with Kfs and/or logKfs (|r| = 0.42 to 0.65; p ≤ 0.05). The correlation between MWD and the model parameters was generally also higher than with the SSC fractions, dg, D50 or D60. Porosity (∅) and bulk density (ρb) also showed significant correlations with several PSD model parameters, with ρb additionally correlating significantly with various geometric (dg), mechanical (D50 and D60), and agronomic (clay and sand) representations of the PSD. The fitted parameters of selected PSD models furthermore showed statistically significant correlations with Kfs, MWD and soil porosity, which may be viewed as soil quality indices. Results of this study are promising for developing more accurate pedotransfer functions.
Keywords: particle size distribution, soil texture, hydraulic conductivity, pedotransfer functions
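As an illustration of the fitting-and-correlation workflow described above, the following minimal Python sketch fits a generic two-parameter sigmoidal cumulative PSD model to synthetic data and correlates a fitted parameter with logKfs. The model form, the data, and the parameter-Kfs link are assumptions for demonstration; the paper's 19 candidate models and measured Iranian soils are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def psd_model(d, a, b):
    """Generic two-parameter sigmoidal cumulative PSD: F(d) = 1 / (1 + (a/d)**b)."""
    return 1.0 / (1.0 + (a / d) ** b)

diam = np.array([0.002, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0])  # mm

# Synthetic stand-in for measured soils: cumulative fractions plus noise,
# and a fabricated logKfs per soil (real Kfs would come from field tests).
params_true = [(0.05, 1.2), (0.10, 0.9), (0.02, 1.5), (0.08, 1.1), (0.03, 1.3)]
fits, log_kfs = [], []
for a_t, b_t in params_true:
    F = psd_model(diam, a_t, b_t) + rng.normal(0, 0.01, diam.size)
    (a_hat, b_hat), _ = curve_fit(psd_model, diam, F, p0=(0.05, 1.0))
    fits.append((a_hat, b_hat))
    log_kfs.append(2.0 * np.log10(a_t) + rng.normal(0, 0.1))  # assumed link, demo only

a_fit = np.array([p[0] for p in fits])
r, p = pearsonr(np.log10(a_fit), log_kfs)
print(f"correlation of fitted PSD parameter with logKfs: r = {r:.2f} (p = {p:.3f})")
```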
Procedia PDF Downloads 277
1639 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model
Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech
Abstract:
Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary exam. Generally, vertigo is caused by an ear problem. Some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of vestibular neuritis (VN). The topographical diagnosis of this disease presents a large diversity of characteristics that complicates usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for videonystagmography (VNG) applications, using an estimation of pupil movements in the case of uncontrolled motion to obtain efficient and reliable diagnostic results. First, an estimation of the pupil displacement vectors using the Hough Transform (HT) is performed to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM
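The Fisher-criterion feature selection followed by SVM classification described above can be sketched as follows; the feature matrix, labels, and number of retained features are hypothetical stand-ins for the VNG-derived temporal and frequency features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical feature matrix: rows = VNG recordings, columns = temporal/frequency
# features of pupil rotation angle; labels: 0 = normal, 1 = vestibular neuritis.
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)
X[y == 1, :5] += 1.5  # make the first five features informative

def fisher_score(X, y):
    """Fisher criterion per feature: between-class separation / within-class spread."""
    c0, c1 = X[y == 0], X[y == 1]
    return (c0.mean(0) - c1.mean(0)) ** 2 / (c0.var(0) + c1.var(0))

scores = fisher_score(X, y)
top = np.argsort(scores)[::-1][:5]          # keep the most discriminative features
acc = cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=5).mean()
print(f"selected features: {top}, CV accuracy: {acc:.2f}")
```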
Procedia PDF Downloads 134
1638 A Policy Strategy for Building Energy Data Management in India
Authors: Shravani Itkelwar, Deepak Tewari, Bhaskar Natarajan
Abstract:
Energy consumption data plays a vital role in energy efficiency policy design, implementation, and impact assessment. Any demand-side energy management intervention's success relies on the availability of accurate, comprehensive, granular, and up-to-date data on energy consumption. The building sector, including residential and commercial, is one of the largest consumers of energy in India after the industrial sector. With economic growth and increasing urbanization, the building sector is projected to grow at an unprecedented rate, resulting in a 5.6-fold escalation in energy consumption by 2047 compared to 2017. Therefore, energy efficiency interventions will play a vital role in decoupling floor area growth from the associated energy demand, thereby increasing the need for robust data. In India, multiple institutions are involved in the collection and dissemination of data. This paper focuses on energy consumption data management in the building sector in India for both residential and commercial segments. It evaluates the robustness of data available through administrative and survey routes to estimate the key performance indicators and identify critical data gaps for making informed decisions. The paper explores several issues in the data, such as lack of comprehensiveness, non-availability of disaggregated data, discrepancies between different data sources, inconsistent building categorization, and others. The identified data gaps are justified with appropriate examples. Moreover, the paper prioritizes required data in order of relevance to policymaking and groups it into "available," "easy to get," and "hard to get" categories. The paper concludes with recommendations to address the data gaps by leveraging digital initiatives, strengthening institutional capacity, institutionalizing exclusive building energy surveys, and standardizing building categorization, among others, to strengthen the management of building sector energy consumption data.
Keywords: energy data, energy policy, energy efficiency, buildings
Procedia PDF Downloads 184
1637 'Get the DNR': Exploring the Impact of an Educational eModule on Internal Medicine Residents' Attitudes and Approaches to Goals of Care Conversations
Authors: Leora Branfield Day, Stephanie Saunders, Leah Steinberg, Shiphra Ginsburg, Christine Soong
Abstract:
Introduction: Discordance between patients' expressed and documented preferences at the end of life is common. Although junior trainees frequently lead goals of care (GOC) conversations, lack of training can result in poor communication. Based on a needs assessment, we developed an interactive electronic learning module (eModule) for conducting patient-centred GOC discussions. The purpose of this study was to evaluate the impact of the eModule on residents' attitudes towards GOC conversations. Methods: First-year internal medicine residents (n=11) from the University of Toronto, selected using purposive sampling, underwent semi-structured interviews before and after completing a GOC eModule. Interviews were anonymized, transcribed and open-coded using NVivo. Using a constructivist grounded theory approach, we developed a framework to understand the attitudes of residents to GOC conversations before and after viewing the module. Results: Before the module, participants described limited training and negative emotions towards GOC conversations. Many focused on code status and procedure choices (e.g., ventilation) instead of eliciting patient-centered values. Pressure to "get the DNR" led to conflicting feelings and distress. After the module, participants approached conversations with a greater focus on patient values and process. They felt more prepared and comfortable, recognizing the complexity of the conversations and the importance of patient-centeredness. Conclusions: A novel GOC eModule allowed residents to develop a patient-centered and standardized approach to GOC conversations while improving confidence and preparedness. This resource could be an effective strategy toward attaining a critical communication competency among learners, with the potential to enhance accurate GOC documentation.
Keywords: goals of care conversations, communication skills, emodule, medical education
Procedia PDF Downloads 135
1636 Assessment of Hepatosteatosis Among Diabetic and Nondiabetic Patients Using Biochemical Parameters and Noninvasive Imaging Techniques
Authors: Tugba Sevinc Gamsiz, Emine Koroglu, Ozcan Keskin
Abstract:
Aim: Nonalcoholic fatty liver disease (NAFLD) is considered the most common chronic liver disease in the general population. The higher mortality and morbidity among NAFLD patients and the lack of symptoms make early detection and management important. In our study, we aimed to evaluate the relationship between noninvasive imaging and biochemical markers in diabetic and nondiabetic patients diagnosed with NAFLD. Materials and Methods: The study was conducted from September 2017 to December 2017 on adults admitted to Internal Medicine and Gastroenterology outpatient clinics with hepatic steatosis reported on ultrasound or transient elastography within the last six months, excluding patients with other liver diseases or alcohol abuse. The data were collected and analyzed retrospectively. The Number Cruncher Statistical System (NCSS) 2007 program was used for statistical analysis. Results: 116 patients were included in this study. Diabetic patients, compared to nondiabetics, had significantly higher Controlled Attenuation Parameter (CAP), Liver Stiffness Measurement (LSM) and fibrosis values. Also, hypertension, hepatomegaly, high BMI, hypertriglyceridemia, hyperglycemia, high A1c, and hyperuricemia were found to be risk factors for NAFLD progression to fibrosis. Advanced fibrosis (F3, F4) was present in 18.6% of all our patients: 35.8% of diabetic and 5.7% of nondiabetic patients diagnosed with hepatic steatosis. Conclusion: Transient elastography is now used in daily clinical practice as an accurate noninvasive tool during follow-up of patients with fatty liver. Early diagnosis of the stage of liver fibrosis improves the monitoring and management of patients, especially those with metabolic syndrome criteria.
Keywords: diabetes, elastography, fatty liver, fibrosis, metabolic syndrome
Procedia PDF Downloads 150
1635 The Co-Simulation Interface SystemC/Matlab Applied in JPEG and SDR Application
Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid
Abstract:
Functional verification is a major part of today's system design task. Several approaches are available for verification on a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, differing approaches are a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as data type conversion, is solved using the proposed co-simulation flow. We divided the JPEG encoder into two parts: the first, the DCT, implemented in SystemC, represents the hardware (HW) part; the second, consisting of quantization and entropy encoding, implemented in MATLAB, is the software (SW) part. For communication and synchronization between these two parts, we use the S-Function and the MATLAB Engine in Simulink. With this research premise, this study introduces a new hardware SystemC implementation of the DCT. We compare the result of our co-simulation against a pure SW/SW implementation and observe a reduction in simulation time of 88.15% in the JPEG application, with a design efficiency of 90% in the SDR application.
Keywords: hardware/software, co-design, co-simulation, SystemC, MATLAB, S-function, communication, synchronization
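For orientation, the hardware-side algorithm (the DCT front end of the JPEG encoder) can be illustrated with a SciPy stand-in; this is only a numerical reference model for the transform, not the SystemC implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

# 8x8 block DCT as used in the JPEG encoder front end (the part the paper
# implements in SystemC); a NumPy/SciPy reference model for illustration.
block = np.arange(64, dtype=float).reshape(8, 8)   # hypothetical pixel block
coeffs = dctn(block, norm="ortho")                 # forward 2-D DCT-II
restored = idctn(coeffs, norm="ortho")             # inverse, for a round-trip check
assert np.allclose(block, restored)
print(coeffs[0, 0])                                # DC coefficient of the block
```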
Procedia PDF Downloads 402
1634 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression
Authors: Wu Peng, Anders Liljerehn, Martin Magnevall
Abstract:
In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effects rake angle variations and tool flank wear have on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured with a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method uses the measured cutting forces together with the insert's flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools, and it shows a significant capability to predict cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
Keywords: cutting force, Kienzle model, predictive model, tool flank wear
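The classic Kienzle law underlying the model is F_c = k_c1.1 · b · h^(1 - m_c). A minimal sketch follows, with a hypothetical linear flank-wear correction added as the wear input; the paper's actual extension and its rake-angle dependence are not specified in the abstract.

```python
import numpy as np

def kienzle_force(b, h, kc11=1500.0, mc=0.25, vb=0.0, k_wear=2.0):
    """Classic Kienzle cutting-force law F_c = kc1.1 * b * h**(1 - mc),
    extended here with a hypothetical linear flank-wear correction:
    F_c(VB) = F_c * (1 + k_wear * VB). This is an assumed form, not the
    paper's fitted extension. b: width of cut (mm), h: uncut chip
    thickness (mm), VB: flank wear land width (mm)."""
    return kc11 * b * h ** (1.0 - mc) * (1.0 + k_wear * vb)

h = np.linspace(0.05, 0.3, 6)
print(kienzle_force(b=2.0, h=h, vb=0.0))   # sharp tool
print(kienzle_force(b=2.0, h=h, vb=0.2))   # worn tool: forces scaled up
```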
Procedia PDF Downloads 107
1633 Automatic Near-Infrared Image Colorization Using Synthetic Images
Authors: Yoganathan Karthik, Guhanathan Poravi
Abstract:
Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.
Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data
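A minimal conditional-GAN training step of the kind such a framework uses might look as follows in PyTorch; the tiny networks, the L1 term, and the random tensors standing in for NIR/RGB pairs are all illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy pix2pix-style setup for NIR -> RGB colorization (a sketch only).
# G maps 1-channel NIR to 3-channel RGB; D judges (NIR, RGB) pairs.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.Linear(16 * 16 * 16, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

nir = torch.randn(8, 1, 32, 32)   # stand-in for NIR inputs
rgb = torch.randn(8, 3, 32, 32)   # stand-in for synthetic color targets

fake = G(nir)
# Discriminator sees (NIR, RGB) pairs, conditional-GAN style
d_loss = bce(D(torch.cat([nir, rgb], 1)), torch.ones(8, 1)) + \
         bce(D(torch.cat([nir, fake.detach()], 1)), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(torch.cat([nir, fake], 1)), torch.ones(8, 1)) + \
         nn.functional.l1_loss(fake, rgb)   # L1 keeps colors close to targets
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss {d_loss.item():.3f}, g_loss {g_loss.item():.3f}")
```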
Procedia PDF Downloads 42
1632 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, the biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution that improves a wastewater treatment plant's (WWTP) ability to react to different situations and meet treatment goals. Delayed BOD5 results from the lab, which take 7 to 8 analysis days, hinder that ability; reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. The DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access to the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical ODO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
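The kinetic core can be sketched by integrating four coupled first-order ODEs from the stated initial conditions; the rate law and constants below are assumptions for illustration, and the least-squares estimation of the initial organic matter and biomass is replaced by fixed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the coupled first-order kinetic system named in the abstract:
# balances for DO, organic substrate S, biomass X, and products P (CO2 + H2O).
# The rate law and constants below are illustrative assumptions, not the paper's.
k1, k2, y = 0.3, 0.05, 0.5        # hypothetical uptake, decay, and yield constants

def kinetics(t, u):
    do, s, x, p = u
    r = k1 * s * x * do / (do + 0.5)          # assumed oxygen-limited uptake rate
    return [-r, -r, y * r - k2 * x, (1 - y) * r + k2 * x]

# Initial conditions per the abstract: DO saturated, products CO2 = H2O = 0;
# S and X would be estimated by least squares (fixed here for the demo).
u0 = [9.0, 6.0, 0.5, 0.0]                     # diluted sample, mg/L-scale values
sol = solve_ivp(kinetics, (0.0, 5.0), u0)
print(f"oxygen consumed after 5 days ≈ {u0[0] - sol.y[0][-1]:.1f} mg/L (exerted BOD)")
```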
Procedia PDF Downloads 72
1631 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks
Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun
Abstract:
Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires the use of efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to underuse of the wavelength spectrum and, in turn, to the rejection of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation follows a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-processing procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic
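A first-fit greedy assignment over per-wavelength layers, in the spirit of the multilayer formulation, can be sketched as follows; the ILP and the post-processing step are not reproduced, and the topology is a toy four-node ring.

```python
import heapq

# Minimal greedy RWA sketch: first-fit wavelength assignment over shortest paths.
# One "layer" of the topology per wavelength; a link is usable on layer w only
# if wavelength w is still free on it.
def shortest_path(adj, free, w, src, dst):
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                              # rebuild edge list back to src
            path, node = [], dst
            while node != src:
                path.append((prev[node], node)); node = prev[node]
            return path[::-1]
        for v in adj[u]:
            edge = tuple(sorted((u, v)))
            if w in free[edge] and d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(pq, (d + 1, v))
    return None                                   # no route on this wavelength layer

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
free = {e: set(range(4)) for e in [(0, 1), (0, 2), (1, 3), (2, 3)]}
for src, dst in [(0, 3), (0, 3), (1, 2)]:         # lightpath requests
    for w in range(4):                            # first-fit: lowest free wavelength
        path = shortest_path(adj, free, w, src, dst)
        if path:
            for e in path:
                free[tuple(sorted(e))].discard(w)
            print(f"lightpath {src}->{dst} on wavelength {w} via {path}")
            break
```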
Procedia PDF Downloads 526
1630 Study on The Pile Height Loss of Tunisian Handmade Carpets Under Dynamic Loading
Authors: Fatma Abidi, Taoufik Harizi, Slah Msahli, Faouzi Sakli
Abstract:
Nine different Tunisian handmade carpets were used for the investigation. The raw material of the carpet pile yarns was wool. The influence of the structure parameters (linear density and pile height) on carpet compression was investigated. Carpets were tested under dynamic loading in order to evaluate and observe the thickness loss and carpet behavior under dynamic loads. To determine the loss of pile height under dynamic loading, the carpets' pile height was measured. The test method followed the Tunisian standard NT 12.165 (corresponding to ISO 2094). Pile height measurements were taken and recorded at intervals up to 1000 impacts (measurements in this study were made after 50, 100, 200, 500, and 1000 impacts). The loss of pile height is calculated from the variation between the initial height and the height measured after the reported numbers of impacts. The experimental results were statistically evaluated using Design Expert Analysis of Variance (ANOVA) software. As regards deformation, results showed that both the structure parameters of the pile yarn and the pile height have an influence. The carpet with the higher pile and the lower linear density of pile yarn showed the worst performance. Results of a polynomial regression analysis are highlighted. There is a good correlation between the loss of pile height and the number of impacts of dynamic loads. These equations are in good agreement with the measured data. Because the prediction is reasonably accurate for all samples, these equations can also be taken into account when calculating the theoretical loss of pile height for the considered carpet samples. Statistical evaluations of the experimental data showed that the pile material and the number of impacts have a significant effect on mean thickness and thickness loss variations.
Keywords: Tunisian handmade carpet, loss of pile height, dynamic loads, performance
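The polynomial regression of pile-height loss against the number of impacts can be sketched as below; the loss values are fabricated stand-ins, as the paper's measurements are not given in the abstract.

```python
import numpy as np

# Polynomial fit of pile-height loss vs. number of impacts; the impact counts
# match the abstract, the loss percentages are hypothetical example data.
impacts = np.array([50, 100, 200, 500, 1000])
loss_pct = np.array([4.0, 6.5, 9.0, 13.0, 16.5])

coeffs = np.polyfit(impacts, loss_pct, 2)          # second-degree polynomial
pred = np.polyval(coeffs, impacts)
r2 = 1 - np.sum((loss_pct - pred) ** 2) / np.sum((loss_pct - loss_pct.mean()) ** 2)
print(f"loss ≈ {coeffs[0]:.2e}·N² + {coeffs[1]:.4f}·N + {coeffs[2]:.2f},  R² = {r2:.3f}")
```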
Procedia PDF Downloads 320
1629 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area
Authors: Fameli Kyriaki-Maria, Assimakopoulos D. Vasiliki, Kotroni Vassiliki
Abstract:
The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to a shift of population to areas previously characterized as rural, an increase of the traffic fleet and the operation of highways. However, no recent modelling studies have been performed due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied in order to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory was used based on official data. Comparison of modeled results with measurements revealed the importance and accuracy of the new Athens emission inventory as compared to previous modeling studies. The model managed to reproduce the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found at suburban and rural areas as well as over the sea at the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.
Keywords: photochemical modelling, urban pollution, greater Athens area, MM5/CAMx
Procedia PDF Downloads 281
1628 Analysis of Three-Dimensional Longitudinal Rolls Induced by Double Diffusive Poiseuille-Rayleigh-Benard Flows in Rectangular Channels
Authors: O. Rahli, N. Mimouni, R. Bennacer, K. Bouhadef
Abstract:
This numerical study investigates the appearance of travelling waves and the behavior of Poiseuille-Rayleigh-Benard (PRB) flow induced in 3D thermosolutal mixed convection (TSMC) in horizontal rectangular channels. The governing equations are discretized by using a control volume method with the third-order QUICK scheme for approximating the advection terms. The SIMPLER algorithm is used to handle the coupling between the momentum and continuity equations. To avoid excessively high computation times, a full approximation storage (FAS) scheme with a full multigrid (FMG) method is used to solve the problem. For a broad range of dimensionless controlling parameters, the contribution of this work is the analysis of the flow regimes of the steady longitudinal thermoconvective rolls (denoted R//) for both heat and mass transfer (TSMC). The transition from opposing volume forces to cooperating ones considerably affects the birth and development of the longitudinal rolls. The heat and mass transfer distributions are also examined.
Keywords: heat and mass transfer, mixed convection, Poiseuille-Rayleigh-Benard flow, rectangular duct
Procedia PDF Downloads 297
1627 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin
Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh
Abstract:
The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and led to some land subsidence. Determining potential groundwater zones is vital for protecting water quality and managing groundwater systems. Groundwater potential zones are mapped with the assistance of Geographic Information System (GIS) techniques. In this study, a standard methodology was proposed to determine groundwater potential using an integration of GIS and AHP techniques. To delineate the prospective groundwater zones, accurate information was generated for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density. However, identifying and mapping potential groundwater zones remains challenging due to the complex and dynamic nature of aquifer systems. ArcGIS was then used with a weighted overlay, and appropriate ranks were assigned to each parameter group. Through data analysis, MCDA was applied to weight and prioritize the different parameters based on their relative impact on groundwater potential. Three groundwater potential zones were delineated: low potential, moderate potential, and high potential. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, i.e., 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified potential zones can be used to guide future groundwater exploration and management strategies in the region.
Keywords: groundwater, geographic information system, analytic hierarchy processes, multi-criteria decision analysis, Bagmati
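The AHP step, deriving criterion weights from a pairwise comparison matrix and checking consistency, can be sketched as follows; the judgment matrix is hypothetical, not the study's.

```python
import numpy as np

# AHP weight derivation: pairwise comparison matrix -> priority vector and
# consistency ratio. The matrix below is a hypothetical judgment set; the
# paper's actual comparisons for its seven parameters are not given.
criteria = ["geology", "slope", "rainfall", "drainage density"]
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # principal eigenvector = weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = 0.9                                      # Saaty's random index for n = 4
print(dict(zip(criteria, w.round(3))), f"CR = {ci / ri:.3f}")  # CR < 0.1 is acceptable
```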
Procedia PDF Downloads 102
1626 Field Evaluation of Pile Behavior in Sandy Soil Underlain by Clay
Authors: R. Bakr, M. Elmeligy, A. Ibrahim
Abstract:
When building loads are relatively small, the foundation design often faces challenges, especially when inappropriate soil conditions exist. These may take the form of soft soil in the upper layers while sandy soil or firm cohesive soil exists in the deeper layers. In such cases, the design becomes infeasible if the piles are extended to the deeper layers, especially when sandy layers exist at shallower depths underlain by stiff clayey soil. In this research, models of piles terminated in sand underlain by clay soils are numerically simulated using different modelling theories. The finite element software Plaxis 3D Foundation was used to evaluate the pile behavior under different loading scenarios. The standard static load test according to ASTM D-1143 was simulated and compared with the real-life loading scenario. The results showed that the pile behavior obtained from the current static load test does not realistically represent that obtained under real-life loading. Attempts were made to capture the numerical loading scenario that properly simulates the pile behavior under real-life loading, including the long-term effect. A modified method based on the findings of this research is proposed for static pile loading tests. Field loading tests were carried out to validate the new method. Results obtained from both numerical and field tests using the modified method prove that it is more accurate than the current standard static load test in predicting the pile behavior in sand underlain by clay.
Keywords: numerical simulation, static load test, pile behavior, sand underlain with clay, creep
Procedia PDF Downloads 321
1625 A Brave New World of Privacy: Empirical Insights into the Metaverse's Personalization Dynamics
Authors: Cheng Xu
Abstract:
As the metaverse emerges as a dynamic virtual simulacrum of reality, its implications for user privacy have become a focal point of interest. While previous discussions have ventured into metaverse privacy dynamics, a glaring empirical gap persists, especially concerning the effects of personalization in the context of news recommendation services. This study stands at the forefront of addressing this void, meticulously examining how users' privacy concerns shift within the metaverse's personalization context. Through a pre-registered randomized controlled experiment, participants engaged in a personalization task across both the metaverse and traditional online platforms. Upon completion of this task, a comprehensive news recommendation service provider offered personalized news recommendations to the users. Our empirical findings reveal that the metaverse inherently amplifies privacy concerns compared to traditional settings. However, these concerns are notably mitigated when users have a say in shaping the algorithms that drive these recommendations. This pioneering research not only fills a significant knowledge gap but also offers crucial insights for metaverse developers and policymakers, emphasizing the nuanced role of user input in shaping algorithm-driven privacy perceptions.
Keywords: metaverse, privacy concerns, personalization, digital interaction, algorithmic recommendations
Procedia PDF Downloads 115
1624 A Particle Swarm Optimal Control Method for DC Motor by Considering Energy Consumption
Authors: Yingjie Zhang, Ming Li, Ying Zhang, Jing Zhang, Zuolei Hu
Abstract:
In the actual start-up process of DC motors, the DC drive system often faces a conflict between energy consumption and acceleration performance. To resolve this conflict, this paper proposes a comprehensive performance index in which an energy consumption index is added to the classical control performance index for the DC motor starting process. Taking the comprehensive performance index as the cost function, a particle swarm optimization algorithm is designed to optimize the comprehensive performance. Simulations of the optimization of the DC motor's comprehensive performance are then conducted, with the weight coefficient of the energy consumption index properly designed. The simulation results show that, as the weight of energy consumption increased, the energy efficiency was significantly improved at the expense of a slight sacrifice in the speed-of-response indicators under the comprehensive performance index method. Compared with a traditional proportional-integral-derivative controller, the energy efficiency increased from 63.18% to 68.48% while the response time was simultaneously reduced from 0.2875 s to 0.1736 s.
Keywords: comprehensive performance index, energy consumption, acceleration performance, particle swarm optimal control
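A compact sketch of PSO minimizing a weighted comprehensive cost on a toy first-order motor model follows; the plant, the cost terms, and all constants are assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(2)

# PSO minimizing J = w_e * energy + w_t * rise_time for a toy motor under
# proportional speed control; gains and dynamics are illustrative only.
def cost(gain, w_e=0.5, w_t=0.5):
    t = np.linspace(0, 1, 200); dt = t[1] - t[0]
    speed, energy, rise = 0.0, 0.0, t[-1]
    for ti in t:
        u = gain * (1.0 - speed)            # simple proportional speed control
        speed += (u - 0.5 * speed) * dt     # toy first-order motor dynamics
        energy += u ** 2 * dt               # consumption proxy
        if speed >= 0.9 and rise == t[-1]:
            rise = ti                       # first time reaching 90% of setpoint
    return w_e * energy + w_t * rise

# Standard PSO over a single controller gain
x = rng.uniform(0.5, 20, 30); v = np.zeros(30)
p_best, p_val = x.copy(), np.array([cost(g) for g in x])
g_best = p_best[p_val.argmin()]
for _ in range(50):
    r1, r2 = rng.random(30), rng.random(30)
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x += v
    vals = np.array([cost(g) for g in x])
    better = vals < p_val
    p_best[better], p_val[better] = x[better], vals[better]
    g_best = p_best[p_val.argmin()]
print(f"optimal gain ≈ {g_best:.2f}, cost = {cost(g_best):.4f}")
```

Raising the energy weight w_e in the cost function reproduces the trade-off the abstract describes: a cheaper but slightly slower start-up.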
Procedia PDF Downloads 162
1623 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)
Authors: Tesfaye Fenta Boka, Niu Zhendong
Abstract:
Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.
Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks
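One round of mean-aggregation message passing on the user-item bipartite graph, with a cold item represented through content-similar neighbours, might be sketched as follows; the aggregation rule and shapes are illustrative, not the paper's GNN.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: propagate item embeddings to users over the interaction graph, then
# score users for a cold-start item via its content-similar neighbours.
n_users, n_items, d = 50, 20, 8
R = (rng.random((n_users, n_items)) < 0.15).astype(float)   # interaction matrix
item_emb = rng.normal(size=(n_items, d))

deg = R.sum(1, keepdims=True).clip(min=1)
user_emb = R @ item_emb / deg                # users = mean of consumed items

# Cold item: no interactions yet, but (hypothetically) it shares content
# features with items 0..2, so it borrows their representation.
cold_emb = item_emb[:3].mean(0)

scores = user_emb @ cold_emb                 # RNPR: who is likely to consume it?
print("top candidate users:", np.argsort(scores)[::-1][:5])
```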
Procedia PDF Downloads 88
1622 'Explainable Artificial Intelligence' and Reasons for Judicial Decisions: Why Justifications and Not Just Explanations May Be Required
Authors: Jacquelyn Burkell, Jane Bailey
Abstract:
Artificial intelligence (AI) solutions deployed within the justice system face the critical task of providing acceptable explanations for decisions or actions. These explanations must satisfy the joint criteria of public and professional accountability, taking into account the perspectives and requirements of multiple stakeholders, including judges, lawyers, parties, witnesses, and the general public. This research project analyzes and integrates existing literatures on explanations in order to propose guidelines for explainable AI in the justice system. Specifically, we review three bodies of literature: (i) explanations of the purpose and function of 'explainable AI'; (ii) the relevant case law, judicial commentary and legal literature focused on the form and function of reasons for judicial decisions; and (iii) the literature focused on the psychological and sociological functions of these reasons for judicial decisions from the perspective of the public. Our research suggests that while judicial 'reasons' (arguably accurate descriptions of the decision-making process and factors) do serve explanatory functions similar to those identified in the literature on 'explainable AI', they also serve an important 'justification' function (post hoc constructions that justify the decision that was reached). Further, members of the public are also looking for both justification and explanation in reasons for judicial decisions, and the absence of either feature is likely to contribute to diminished public confidence in the legal system. Therefore, artificially automated judicial decision-making systems that simply attempt to document the process of decision-making are unlikely in many cases to be useful to and accepted within the justice system. Instead, these systems should focus on the post-hoc articulation of principles and precedents that support the decision or action, especially in cases where legal subjects' fundamental rights and liberties are at stake.
Keywords: explainable AI, judicial reasons, public accountability, explanation, justification
Procedia PDF Downloads 125
1621 Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image
Authors: Nur Nadhirah Rusyda Rosnan, Nursuhaili Najwa Masrol, Nurul Fatiha MD Nor, Mohammad Zafrullah Mohammad Salim, Sim Choon Cheak
Abstract:
Accurate inventories of oil palm planted areas are crucial for plantation management, as they impact the overall economy and production of oil. One of the technological advancements in the oil palm industry is semi-automated palm counting, which is replacing conventional manual palm counting via digitized aerial imagery. Most of the semi-automated palm counting methods developed so far have been limited to mature palms, whose ideal canopy size is well represented in satellite images; immature palms were often left out since their canopies are barely visible from satellite images. In this paper, an approach using a morphological filter and high-resolution satellite images is proposed to detect immature palm trees, making it possible to count them. The method begins with an erosion filter with an appropriate window size of 3 m applied to the high-resolution satellite image. The eroded image is further segmented using watershed segmentation to delineate immature palm tree regions. Local minimum detection is then used because it is hypothesized that immature oil palm trees are located at local minima within an oil palm field setting in a grayscale image. The detection points generated from the local minima are displaced to the center of the immature oil palm region and thinned so that only one detection point remains to represent each tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. The proposed method achieved more than 90% accuracy when compared with the ground truth, with an overall F-measure score of up to 0.91.
Keywords: immature palm count, oil palm, precision agriculture, remote sensing
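The erosion, watershed, and local-minimum steps can be sketched on a synthetic tile as follows; the window size in pixels, the noise model, and the darkness threshold are assumptions.

```python
import numpy as np
from skimage.morphology import erosion, disk
from skimage.segmentation import watershed

rng = np.random.default_rng(4)

# Pipeline sketch: erode, watershed-segment, then keep one local-minimum point
# per dark region as a palm detection. The synthetic tile stands in for a
# grayscale satellite image; the 3 m window is approximated by a 3-pixel disk.
img = rng.normal(0.5, 0.05, (100, 100))
rr, cc = np.ogrid[:100, :100]
for r, c in [(20, 30), (50, 60), (80, 20)]:          # three "immature palms"
    img -= 0.4 * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / 20.0)

eroded = erosion(img, disk(3))                       # morphological erosion filter
labels = watershed(eroded)                           # regions grown from local minima
points = []
for lab in np.unique(labels):
    flat = np.where(labels == lab, eroded, np.inf)
    p = np.unravel_index(np.argmin(flat), flat.shape)
    if eroded[p] < 0.3:                              # keep only clearly dark minima
        points.append(p)
print("candidate palm points:", points)              # duplicates are thinned in the full method
```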
Procedia PDF Downloads 75
1620 Estimation of Greenhouse Gas (GHG) Reductions from Solar Cell Technology Using Bottom-up Approach and Scenario Analysis in South Korea
Authors: Jaehyung Jung, Kiman Kim, Heesang Eum
Abstract:
Solar cells are one of the main technologies for reducing greenhouse gas (GHG) emissions. Therefore, accurate estimation of the greenhouse gas reduction achieved by solar cell technology is crucial when considering its strategic applications. The bottom-up approach, which uses operating data such as operation time and efficiency, is one of the methodologies that improve the accuracy of the estimation. In this study, alternative GHG reductions from solar cell technology were estimated by a bottom-up approach for indirect emission sources (scope 2) in Korea in 2015. In addition, a scenario-based analysis was conducted to assess the effect of technological change with respect to efficiency improvement and rate of operation. In order to estimate GHG reductions from solar cell activities at operating-condition level, methodologies were derived from the 2006 IPCC guidelines for national greenhouse gas inventories and the guidelines for local government greenhouse inventories published in Korea in 2016. Indirect emission factors for electricity were obtained from the Korea Power Exchange (KPX) for 2011. As a result, the annual alternative GHG reductions were estimated as 21,504 tonCO2eq, and the annual average value was 1,536 tonCO2eq per solar cell technology. These results corresponded to 91% of the design capacity. Estimation of the individual greenhouse gases (GHGs) showed that the largest contributor was carbon dioxide (CO2), which accounted for up to 99% of the total. The annual average GHG reduction from solar cells per year and unit installed capacity (MW) was estimated as 556 tonCO2eq/yr•MW. In the scenario analysis, efficiency improvements of 5%, 10%, and 15% increased the reductions by approximately 30%, 61%, and 91%, respectively, and raising the rate of operation to 100% increased the annual GHG reductions by 4%.
Keywords: bottom-up approach, greenhouse gas (GHG), reduction, scenario, solar cell
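The bottom-up arithmetic is essentially avoided generation multiplied by a grid emission factor. With an assumed capacity factor and a hypothetical emission factor, the sketch below lands close to the reported 556 tonCO2eq/yr•MW; both inputs are illustrative, not the study's values.

```python
# Bottom-up sketch: GHG reduction = avoided grid electricity x emission factor.
capacity_mw = 1.0            # installed solar capacity
hours = 8760                 # hours in a year
capacity_factor = 0.14       # assumed average utilization (operation time x efficiency)
ef_tco2_per_mwh = 0.46       # hypothetical scope-2 grid emission factor

generation_mwh = capacity_mw * hours * capacity_factor
reduction = generation_mwh * ef_tco2_per_mwh
print(f"annual reduction ≈ {reduction:.0f} tCO2eq per MW installed")  # ≈ 564 here
```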
Procedia PDF Downloads 219
1619 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study
Authors: Si Mon Kueh, Tom J. Kazmierski
Abstract:
About 1% of the world population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched that use an artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of a bit-serial Neural Processing Element (NPE) is presented which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment that suits the needs of a single epileptic patient in his or her daily activities to predict the occurrence of impending tonic-clonic seizures.
Keywords: Artificial Neural Networks (ANN), bit-serial neural processor, FPGA, Neural Processing Element (NPE)
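The shift-and-add principle behind a bit-serial multiplier can be illustrated in software as follows; this is a behavioural sketch, not the 16-logic-element FPGA design.

```python
def bit_serial_multiply(a: int, b: int, width: int = 8) -> int:
    """Shift-and-add multiplication processing one bit of `b` per cycle,
    mirroring how a bit-serial hardware multiplier accumulates partial
    products over successive clock cycles."""
    acc = 0
    for cycle in range(width):
        if (b >> cycle) & 1:          # serial bit of the multiplier
            acc += a << cycle         # add shifted partial product
    return acc

assert bit_serial_multiply(13, 11) == 143
print(bit_serial_multiply(13, 11))
```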
Procedia PDF Downloads 319
1618 Sensitive Electrochemical Sensor for Simultaneous Detection of Endocrine Disruptors, Bisphenol A and 4-Nitrophenol Using La₂Cu₂O₅ Modified Glassy Carbon Electrode
Authors: S. B. Mayil Vealan, C. Sekar
Abstract:
Bisphenol A (BIS A) and 4-nitrophenol (4N) are among the most prevalent environmental endocrine-disrupting chemicals; they mimic hormones and have a direct relationship to the development and growth of animal and human reproductive systems. Moreover, intensive exposure to these compounds is related to prostate and breast cancer, infertility, obesity, and diabetes. Hence, accurate and reliable determination techniques are crucial for preventing human exposure to these harmful chemicals. Lanthanum copper oxide (La₂Cu₂O₅) nanoparticles were synthesized and investigated through various techniques such as scanning electron microscopy, high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, and electrochemical impedance spectroscopy. Cyclic voltammetry and square wave voltammetry techniques were employed to evaluate the electrochemical behavior of the as-synthesized samples toward the electrochemical detection of bisphenol A and 4-nitrophenol. Under the optimal conditions, the oxidation current increased linearly with increasing concentration of BIS A and 4-N in the range of 0.01 to 600 μM, with detection limits of 2.44 nM and 3.8 nM. These are the lowest limits of detection and the widest linear ranges in the literature for this determination. The method was applied to the simultaneous determination of BIS A and 4-N in real samples (food packing materials and river water) with excellent recovery values ranging from 95% to 99%. Better stability, sensitivity, selectivity and reproducibility, fast response, and ease of preparation make the sensor well suited for the simultaneous determination of bisphenol A and 4-nitrophenol. To the best of our knowledge, this is the first report in which La₂Cu₂O₅ nanoparticles were used as efficient electron mediators for the fabrication of an endocrine disruptor (BIS A and 4N) chemical sensor.
Keywords: endocrine disruptors, electrochemical sensor, food contacting materials, lanthanum cuprates, nanomaterials
Procedia PDF Downloads 84
1617 Blood Analysis of Diarrheal Calves Using Portable Blood Analyzer: Analysis of Calves by Age
Authors: Kwangman Park, Jinhee Kang, Suhee Kim, Dohyeon Yu, Kyoungseong Choi, Jinho Park
Abstract:
Statement of the Problem: Diarrhea is a major cause of death in young calves and causes great economic damage to the livestock industry. Diarrhea leads to dehydration, decreased blood flow, lowered pH, and degraded enzyme function. In the past, serum screening was not possible in the field. However, with the spread of portable serum testing devices, it is now possible to conduct tests directly in the field. Thus, accurate serological changes can be identified and used in large-animal practice. Methodology and Theoretical Orientation: The test groups were calves from 1 to 44 days old. The status of the feces was divided into four grades to determine the severity of diarrhea (grades 0, 1, 2, 3). Grades 0 and 1 were considered to indicate no diarrhea. Grades 2 and 3 were considered the diarrhea-positive group, in which one or more viruses were detected. The diarrhea-negative group consisted of 57 calves (Asan=30, Samrye=27); the diarrhea-positive group consisted of 34 calves (Kimje=27, Geochang=7). The feces of all calves were analyzed by PCR testing. Blood samples were measured using an automatic blood analyzer (i-STAT, Abbott Inc., Illinois, US). Calves were divided into 3 groups according to age: Group 1, 1 to 14 days old; Group 2, 15 to 28 days old; Group 3, more than 28 days old. Findings: Diarrhea caused an increase in HCT due to dehydration. The difference from normal was highest at 15 to 28 days of age (p < 0.01). At all ages, bicarbonate decreased compared to normal, and therefore pH decreased. Similar to HCT, the largest difference was observed between 15 and 28 days (p < 0.01). The pCO₂ decreases to compensate for the decrease in pH. Conclusion and Significance: At all ages, HCT increases while bicarbonate, pH, and pCO₂ decrease in diarrheal calves. Calves from 15 to 28 days of age show the greatest difference from normal. Over 28 days of age, weight gain and homeostatic capacity increase, so even when diarrhea is seen in the stool, there are fewer hematologic changes than in the groups below 28 days of age.
Keywords: calves, diarrhea, hematological changes, i-STAT
Procedia PDF Downloads 160
1616 Non-Invasive Imaging of Tissue Using Near Infrared Radiations
Authors: Ashwani Kumar Aggarwal
Abstract:
NIR light is non-ionizing and can pass easily through living tissues such as the breast without any harmful effects. Therefore, the use of NIR light for imaging biological tissue and quantifying its optical properties is a good choice over invasive methods. Optical tomography involves two steps. One is the forward problem and the other is the reconstruction problem. The forward problem consists of finding the measurements of light transmitted through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard reconstruction methods, such as filtered back projection and the algebraic reconstruction methods, but these cannot be applied as such in optical tomography due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function. This blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean-square sense.
Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering
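The final deblurring step, a filter optimal in the mean-square sense, corresponds to Wiener deconvolution; a sketch with an assumed Gaussian PSF and noise-to-signal ratio follows.

```python
import numpy as np

rng = np.random.default_rng(5)

# Wiener-style deconvolution: a mean-square-optimal filter applied to a blurred
# "reconstruction". The PSF shape and noise level are assumptions for the demo.
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0        # toy reconstructed object
x, y = np.meshgrid(np.arange(64), np.arange(64))
psf = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 8.0)
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))                   # blur transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
blurred += rng.normal(0, 0.01, blurred.shape)

nsr = 1e-3                                               # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)                  # Wiener filter
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
print(f"blurred error {np.abs(blurred - img).mean():.4f} -> "
      f"restored error {np.abs(restored - img).mean():.4f}")
```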
Procedia PDF Downloads 314
1615 Periodical System of Isotopes
Authors: Andriy Magula
Abstract:
With the help of a special algorithm based on the principle of multilevel periodicity, a periodic change of properties at the nuclear level of the chemical elements was discovered, and a variant of the periodic system of isotopes is presented. The periodic change in the properties of isotopes, as well as the vertical symmetry of subgroups, was checked for consistency against the following ten types of experimental data: mass ratios of fission fragments; quadrupole moment values; magnetic moments; lifetimes of radioactive isotopes; neutron scattering; thermal neutron radiative capture cross-sections (n, γ); α-particle yield cross-sections (n, α); isotope abundance on Earth, in the Solar system and in other stellar systems; and features of ore formation and stellar evolution. In all ten cases, correspondences with the proposed periodic structure of the nucleus were obtained. The system was arranged in the usual 2D table, similar to the periodic system of elements, and the mass series of isotopes was divided into 8 periods and 4 types of 'nuclear' orbitals: sn, dn, pn, fn. The origin of 'magic' numbers as a set of filled charge shells of the nucleus is explained. The isotope system thus shows periodic structure at a new level of the universe, and prospects for its practical use are opened up.
Keywords: periodic system, isotope, period, subgroup, "nuclear" orbital, nuclear reaction
Procedia PDF Downloads 15
1614 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem
Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze
Abstract:
In the modern world, emergency management decision support systems are actively used by state organizations, which are interested in extreme and abnormal processes and provide optimal and safe management of the supply needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to infrastructure. In such cases, the use of intelligent support technologies is very important for quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system. Scientific research in this field occupies an important place among decision-making problems. Our goal was to create an expert knowledge-based intelligent support system which will serve as an assistant tool providing optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method can require long computing times; thus, we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem
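The epsilon-constraint idea (minimize one objective while bounding the others, sweeping the bound to trace the Pareto front) can be sketched on a toy bi-objective selection problem; the data and the brute-force solver stand in for the FMOELTP and its ILP machinery.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

# Toy bi-objective facility selection: minimize f1 (transport time) subject to
# f2 (unreliability) <= epsilon, sweeping epsilon to trace the Pareto front.
n = 8
time_cost = rng.uniform(1, 10, n)      # f1 contribution per opened centre
unreliab = rng.uniform(0, 1, n)        # f2 contribution per opened centre
demand_cover = rng.uniform(5, 15, n)   # demand covered per opened centre

def objectives(x):
    x = np.asarray(x, dtype=float)
    return time_cost @ x, unreliab @ x

front = []
for eps in np.linspace(unreliab.min(), unreliab.sum(), 12):
    best = None
    for x in product([0, 1], repeat=n):            # brute force; an ILP in practice
        if demand_cover @ np.array(x) < 30:        # must cover a demand threshold
            continue
        f1, f2 = objectives(x)
        if f2 <= eps and (best is None or f1 < best[0]):
            best = (f1, f2)
    if best and best not in front:
        front.append(best)
print("Pareto points (f1, f2):", [(round(a, 2), round(b, 2)) for a, b in front])
```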
Procedia PDF Downloads 320
1613 Automated Method Time Measurement System for Redesigning Dynamic Facility Layout
Authors: Salam Alzubaidi, G. Fantoni, F. Failli, M. Frosolini
Abstract:
The dynamic facility layout problem is a critical issue in the competitive industrial market; solving it requires robust design and effective simulation systems, and sustainable simulation requires reliable and accurate input data. This paper describes an automated system, integrated into the real environment, that measures the duration of material handling operations, collects the data in real time, and determines the variances between the actual and estimated time schedules of the operations in order to update the simulation software and redesign the facility layout periodically. The automated method-time measurement system collects the real data using Radio Frequency Identification (RFID) and Internet of Things (IoT) technologies. Attaching an RFID antenna reader and RFID tags enables the system to identify the location of objects and gather the time data. The gathered real durations are then processed by calculating the moving average duration of the material handling operations, choosing the shortest material handling path, and updating the simulation software to redesign the facility layout in accordance with the shortest (real) operation schedule. Periodic simulation in real time is more sustainable and reliable than a simulation system relying on the analysis of historical data. The case study of this methodology was carried out in cooperation with a workshop team producing mechanical parts. Although there are some technical limitations, this methodology is promising, and it can be significantly useful in the redesign of manufacturing layouts.
Keywords: dynamic facility layout problem, internet of things, method time measurement, radio frequency identification, simulation
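The moving-average update step can be sketched as follows; the window size, tolerance, and RFID timestamps are assumptions for illustration.

```python
from collections import deque

# Sketch: keep a moving average of measured handling durations per operation
# (derived from RFID tag read timestamps) and flag drift from the estimate
# currently used by the simulation model.
class OperationTimer:
    def __init__(self, estimate_s: float, window: int = 10, tol: float = 0.15):
        self.estimate = estimate_s
        self.samples = deque(maxlen=window)
        self.tol = tol

    def record(self, start_ts: float, end_ts: float) -> bool:
        """Store one measured duration; return True when the moving average
        has drifted past tolerance and the simulation should be updated."""
        self.samples.append(end_ts - start_ts)
        avg = sum(self.samples) / len(self.samples)
        return abs(avg - self.estimate) / self.estimate > self.tol

timer = OperationTimer(estimate_s=40.0)
for s, e in [(0, 43), (60, 108), (130, 178)]:   # hypothetical RFID timestamps
    if timer.record(s, e):
        print("re-run simulation with updated duration:",
              sum(timer.samples) / len(timer.samples))
```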
Procedia PDF Downloads 119
1612 Prototype Development of ARM-7 Based Embedded Controller for Packaging Machine
Authors: Jeelka Ray
Abstract:
A survey of the literature revealed that no practical design is available for a packaging machine based on an embedded system, so the need arose for the development of a prototype model. In this paper, the author has worked on the development of an ARM7-based embedded controller for controlling the sequence of a packaging machine. The unit is made user friendly with a TFT touch screen implementing the human-machine interface (HMI). The different system components are briefly discussed, followed by a description of the overall design. The major functions, which involve bag forming, sealing temperature control, fault detection, alarms, and an animated view on the home screen while the machine is working according to the set parameters, make the machine's performance more successful. An LPC2478 ARM7 embedded microcontroller controls the coordination of the individual control function modules. In earlier days, these machines were manufactured with mechanical fittings; later, electronic systems replaced them. With advancing technologies, these mechanical systems were controlled electronically using microprocessors, which became the backbone of the system; subsequent updates handed control over to microcontrollers with servo drives for accurate positioning of the material, which helped to maintain the quality of the products. In addition, RS-485 MODBUS communication is used for synchronizing the AC drive and the servo drive. All of these functions are operated either manually or through a graphical user interface. Automatic tuning of the heaters and sealers and their temperature is controlled using proportional-integral-derivative (PID) loops. In today's technological world, the practical implementation of the above-mentioned concepts in a user-friendly environment is really important. A real-time model was implemented and tested on the actual machine, with fruitful results.
Keywords: packaging machine, embedded system, ARM 7, microcontroller, HMI, TFT, touch screen, PID
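A discrete PID loop of the kind used for sealing temperature control can be sketched in software; the gains, sample time, and first-order heater model are assumptions, and the real controller would run on the ARM7 target.

```python
# Discrete PID loop for sealer temperature control (sketch; hypothetical gains).
def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    err = setpoint - measured
    state["i"] += err * dt                  # integral term accumulates error
    d = (err - state["e"]) / dt             # derivative of the error
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d

state = {"i": 0.0, "e": 0.0}
temp = 25.0                                 # ambient start; target 180 C sealing temperature
for _ in range(500):
    power = max(0.0, min(100.0, pid_step(180.0, temp, state)))  # clamp actuator output
    temp += (0.05 * power - 0.02 * (temp - 25.0)) * 0.1         # toy heater model
print(f"temperature after 50 s: {temp:.1f} C")
```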
Procedia PDF Downloads 274
1611 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. Input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. Patient flows mostly match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. Moreover, it is shown that adopting the shortest processing time (SPT) and shortest remaining processing time (SRPT) service policies, among the other tested policies, results in 11.47% and 13.75% reductions, respectively, in average waiting time relative to the first-come-first-served (FCFS) policy.
Keywords: queueing network, discrete-event simulation, health applications, SPT
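The effect of SPT sequencing can be illustrated with a single-server, static-batch simplification; the service times and the batch assumption are illustrative, not the hospital data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-server sketch comparing FCFS with SPT sequencing: for the same batch
# of patients, serving the shortest jobs first lowers the mean waiting time.
service = rng.exponential(10.0, size=200)     # hypothetical minutes per patient

def mean_wait(times):
    waits, clock = [], 0.0
    for s in times:
        waits.append(clock)                   # simplification: all present at time zero
        clock += s
    return np.mean(waits)

fcfs = mean_wait(service)
spt = mean_wait(np.sort(service))             # shortest processing time first
print(f"FCFS wait {fcfs:.1f} min, SPT wait {spt:.1f} min "
      f"({100 * (fcfs - spt) / fcfs:.0f}% reduction)")
```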
Procedia PDF Downloads 185