Search results for: accurate data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26491

25681 Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz

Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas

Abstract:

This paper evaluates the central role of mangrove forests in the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigating the hydrodynamic conditions of the KS is vital for predicting and estimating sedimentation and erosion throughout the protected areas north of Qeshm Island. The KS (or Tang-e-Khuran) lies between Qeshm Island and the Iranian mainland and has a minimum width of approximately two kilometers. The hydrodynamics of the strait is dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated because 1) strong currents in the area lead to apparent sand dune movements in the middle and southern parts of the strait, and 2) a vast area with mangrove coverage exists next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable of high-accuracy estimation of the current fields in the KS. A comprehensive set of measurements was carried out to investigate the hydrodynamics and morphodynamics of the study area, including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. Additionally, a set of periodic hydrographic surveys was included in the program. The numerical simulation was carried out using the Delft3D Flow Module. An unstructured grid was generated with RGFGRID and QUICKIN, and the flow model was driven with water-level time series at the open boundaries. Model calibration was done by comparing water levels and depth-averaged current velocities against the available observational data. The results clearly indicate that the observations and simulations only agree if a realistic representation of the mangrove area is captured by the model bathymetry. With the available field data, the key role of the mangrove area in the hydrodynamics of the study area can be studied. The results show that including the accurate geometry of the mangrove area and accounting for its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.

Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D

Procedia PDF Downloads 209
25680 A New Correlation between SPT and CPT for Various Soils

Authors: Fauzi Jarushi, Sinan Mohsin AlKaabi

Abstract:

The Standard Penetration Test (SPT) is the most common in situ test for soil investigations. The Cone Penetration Test (CPT), on the other hand, is considered one of the best investigation tools: owing to the fast and accurate results it provides, it complements the SPT in many applications such as field exploration, design parameters, and quality-control assessments. Many soil index and engineering properties have been correlated to both the SPT and the CPT, and various foundation design methods have been developed based on the outcomes of these tests. It is therefore valuable to correlate the two tests to each other so that either one can be used in the absence of the other, especially for preliminary evaluation and design purposes. The primary purpose of this study was to investigate the relationships between the SPT and CPT for different types of soil in Florida. Data for this research were collected from a number of projects sponsored by the Florida Department of Transportation (FDOT); six sites served as the subject of SPT-CPT correlations. Correlations were established between the cone resistance (qc) and the SPT blow count (N) for various soils. A positive linear relationship was also found between the sleeve friction (fs) and N for various soils. In general, qc versus N showed higher correlation coefficients than fs versus N. qc/N ratios were developed for different soil types and compared to literature values; the results of this research revealed higher ratios than the literature values.
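The kind of linear qc versus N correlation and qc/N ratio described above can be sketched in a few lines of Python; the paired values below are illustrative stand-ins, not the FDOT site data:

```python
import numpy as np

# Hypothetical paired readings at matching depths in a sandy profile:
# SPT blow counts N and CPT cone resistance qc (MPa).
N = np.array([5, 8, 12, 15, 20, 25, 30], dtype=float)
qc = np.array([2.1, 3.0, 4.6, 5.5, 7.8, 9.4, 11.2])

# Least-squares linear fit qc = a*N + b.
a, b = np.polyfit(N, qc, 1)

# Mean qc/N ratio (MPa per blow), the quantity tabulated per soil type.
ratio = float(np.mean(qc / N))

# Coefficient of determination of the linear fit.
pred = a * N + b
r2 = 1.0 - np.sum((qc - pred) ** 2) / np.sum((qc - qc.mean()) ** 2)
print(round(float(a), 3), round(ratio, 3), round(float(r2), 3))
```

In practice one such fit (and its r2) would be produced per soil type and compared against the published qc/N ranges.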

Keywords: in situ tests, correlation, SPT, CPT

Procedia PDF Downloads 396
25679 Comprehensive Study of Data Science

Authors: Asifa Amara, Prachi Singh, Kanishka, Debargho Pathak, Akshat Kumar, Jayakumar Eravelly

Abstract:

Today's generation is heavily dependent on technology that uses data as its fuel. The present study reviews innovations and developments in data science and gives an idea of how to use the available data efficiently; it will help the reader understand the core concepts of data science. The concept of artificial intelligence was introduced by Alan Turing, whose main principle was to create an artificial system that can run independently of human-given programs and can function by analyzing data to understand the requirements of its users. Data science comprises business understanding, data analysis, ethical concerns, programming languages, the various fields and sources of data, skills, and so on. The usage of data science has evolved over the years. This review article covers one part of data science, namely machine learning, which builds on data science for its work: machines learn from experience, which helps them do their work more efficiently. The article includes a comparative illustration of human understanding versus machine understanding, along with the advantages, applications, and real-world examples of machine learning. Data science has been an important game changer in human life. Since its advent, we have seen its benefits and how it leads to a better understanding of people and caters to individual needs. It has improved business strategies and the services built on them, improved forecasting, and supported progress toward sustainable development. This study also aims at a better understanding of data science, which can help us create a better world.

Keywords: data science, machine learning, data analytics, artificial intelligence

Procedia PDF Downloads 80
25678 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly

Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale

Abstract:

Accurate thrust measurement is required for aircraft during takeoff and after a ski-jump launch. In a developmental aircraft, takeoff from a ship is extremely critical: the thrust produced by the engine should be known to the pilot before takeoff so that, if it is insufficient, the takeoff can be aborted and an accident avoided. After a ski-jump, the thrust produced by the engine matters because the horizontal speed of the aircraft is lower than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, where the two side mounts transfer the engine thrust to the airframe; the third mount only carries the weight component and takes no thrust. In the present method of thrust estimation, strain gauging of the two side mounts is carried out, and the strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested on a universal testing machine to determine the equivalent elasticity of the assembly; this elasticity value is used in the analytical approach for estimating engine thrust. The estimated thrust is compared with the test-bed load-cell thrust data, and the experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: strain gauges are mounted on the tapered portion of the engine mount sleeve at two diametrically opposite locations, both in the horizontal plane. In this way, the gauges pick up no strain due to the weight of the engine (except a negligible strain due to the material's Poisson's ratio) or the hoop stress. Only the third mount's strain gauge shows strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts: the strain gauge on the forward side of the sleeve shows a compressive strain, and the strain gauge on the rear side shows a tensile strain. Results and conclusion: the analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower ones: 99.7% at the maximum power setting versus 78% at a lower power setting.
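The strain-to-thrust conversion described above can be sketched as follows; the gauge factor, bridge excitation, equivalent elasticity, and mount cross-section below are illustrative assumptions, not the paper's calibration values:

```python
# Assumed, illustrative parameters (not the paper's calibration data).
GAUGE_FACTOR = 2.0        # typical metallic foil strain gauge
EXCITATION_V = 5.0        # Wheatstone bridge excitation voltage
E_EQUIV = 180e9           # Pa, equivalent elasticity from the UTM test
MOUNT_AREA = 4.0e-4       # m^2, load-bearing section of one side mount

def strain_from_quarter_bridge(v_out: float) -> float:
    """Quarter-bridge small-strain relation: Vout/Vex ~ GF * eps / 4."""
    return 4.0 * v_out / (GAUGE_FACTOR * EXCITATION_V)

def thrust_estimate(v_fwd: float, v_aft: float) -> float:
    """Average the forward (compressive) and aft (tensile) gauge strains,
    convert strain -> stress -> force, and sum the two side mounts."""
    eps = 0.5 * (abs(strain_from_quarter_bridge(v_fwd))
                 + abs(strain_from_quarter_bridge(v_aft)))
    return 2.0 * E_EQUIV * eps * MOUNT_AREA   # newtons

print(thrust_estimate(-2.5e-3, 2.5e-3))   # bridge outputs in volts
```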

Keywords: engine mounts, finite elements analysis, strain gauge, stress

Procedia PDF Downloads 479
25677 Presentation of a Mix Algorithm for Estimating the Battery State of Charge Using Kalman Filter and Neural Networks

Authors: Amin Sedighfar, M. R. Moniri

Abstract:

Determination of the state of charge (SOC) is an increasingly important issue in all applications that include a battery. In fact, SOC estimation is a fundamental need for the battery, which is the most important energy storage element in hybrid electric vehicles (HEVs), smart grid systems, drones, UPSs, and so on. For such applications, the SOC estimation algorithm is expected to be precise and easy to implement. This paper presents an online method for estimating the SOC of valve-regulated lead-acid (VRLA) batteries. The proposed method combines the well-known Kalman filter (KF) with neural networks (NNs); all simulations were done in MATLAB. The NN is trained offline using data collected from the battery discharging process. A generic cell model is used whose underlying dynamic behavior comprises two capacitors (bulk and surface) and three resistors (terminal, surface, and end); the SOC determined from the voltage represents the bulk capacitor. The aim of this work is to compare the performance of conventional integration-based SOC estimation methods with the mixed algorithm. Moreover, by including the effect of temperature, the final result becomes more accurate.
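The predict/correct structure of such a mixed estimator can be sketched with a minimal scalar Kalman filter: the predict step is coulomb counting, and the measurement is a voltage-derived SOC. The paper trains a neural network for the voltage-to-SOC step; a hypothetical linear OCV-SOC map stands in for it here, and all parameter values are illustrative assumptions:

```python
CAPACITY_AH = 7.2          # assumed VRLA battery capacity
DT_H = 1.0 / 3600.0        # 1-second step, expressed in hours

def soc_from_voltage(v_oc):
    # Illustrative linear OCV-SOC map: 11.8 V ~ 0%, 12.8 V ~ 100%.
    return min(1.0, max(0.0, v_oc - 11.8))

def kf_step(soc, p, current_a, v_oc, q=1e-7, r=1e-3):
    """One predict/update cycle (q: process noise, r: measurement noise)."""
    soc_pred = soc - current_a * DT_H / CAPACITY_AH   # coulomb counting
    p_pred = p + q
    k = p_pred / (p_pred + r)                         # Kalman gain
    soc_new = soc_pred + k * (soc_from_voltage(v_oc) - soc_pred)
    return soc_new, (1.0 - k) * p_pred

soc, p = 0.9, 1e-2          # deliberately wrong initial SOC guess
for _ in range(600):        # 10 minutes of a 1 A discharge at 12.6 V OCV
    soc, p = kf_step(soc, p, 1.0, 12.6)
print(round(soc, 3))
```

The filter pulls the wrong initial guess toward the voltage-based estimate while the coulomb-counting term tracks the discharge, which is the essential benefit over pure current integration.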

Keywords: Kalman filter, neural networks, state-of-charge, VRLA battery

Procedia PDF Downloads 191
25676 Individualized Emotion Recognition Through Dual-Representations and Group-Established Ground Truth

Authors: Valentina Zhang

Abstract:

While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that (i) different facial representations can provide different, sometimes complementary, views of emotions, and (ii) when employed collectively in a discussion-group setting, they enable more accurate emotion reading, which is highly desirable in autism care and other error-sensitive application contexts. In this paper, we first study FER using pixel-based versus semantics-based deep learning (DL) in the context of deepfake videos. Our experiments indicate that while the semantics-trained model performs better on articulated facial feature changes, the pixel-trained model outperforms it on subtle or rare facial expressions. Armed with these findings, we construct an adaptive FER system that learns from both types of models for dyadic or small interacting groups, and further leverages the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization benefit from this approach.

Keywords: neurodivergence care, facial emotion recognition, deep learning, ground truth for supervised learning

Procedia PDF Downloads 145
25675 Mobi Navi Tour for Rescue Operations

Authors: V. R. Sadasivam, M. Vipin, P. Vineeth, M. Sajith, G. Sathiskumar, R. Manikandan, N. Vijayarangan

Abstract:

Global positioning system (GPS) technology is what enables navigation systems, GPS tracking devices, GPS surveying, and GPS mapping. What GPS provides is a set of coordinates representing the location of a GPS unit with respect to latitude, longitude, and elevation on planet Earth, along with accurate time. The tracking devices themselves come in different flavors: they contain a GPS receiver and GPS software, along with some means of transmitting the resulting coordinates. GPS units in mobile phones tend to use radio waves to transmit their location to another GPS device. The purpose of this prototype, "Mobi Navi Tour for Rescue Operation", is timely communication and lightning-fast decision-making among a group of people located in different places with a common goal. Timely communication and tracking of people are critical issues in many situations and environments. Rescuers can find a missing person faster when the location and other related information are sent to them through their mobile phones. Information is drawn from the caller, entered into the system by the administrator, and transferred to the group leader. The system locates the closest available person, group of people working in an organization or company, or vehicle to determine availability and position and to track them, since misinformation can lead to wrong decisions in a rapidly paced environment, whether the situation is normal or abnormal. "Mobi Navi Tour for Rescue Operation" uses Google Cloud Messaging for Android (GCM), a service that helps developers send data from servers to their Android applications on Android devices. The service provides a simple, lightweight mechanism that servers can use to tell mobile applications to contact the server directly to fetch updated application or user data.
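The "locate the closest available person" step amounts to a great-circle distance comparison over GPS coordinates; a minimal sketch (names and coordinates are purely illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def closest_rescuer(target, rescuers):
    """rescuers: dict of name -> (lat, lon); returns the nearest name."""
    return min(rescuers, key=lambda n: haversine_km(*target, *rescuers[n]))

team = {"alpha": (12.97, 77.59), "bravo": (13.08, 80.27)}
print(closest_rescuer((12.90, 77.60), team))
```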

Keywords: Android, GPS, tour, communication, service

Procedia PDF Downloads 396
25674 Examining Litter Distributions in Lethbridge, Alberta, Canada, Using Citizen Science and GIS Methods: OpenLitterMap App and Story Maps

Authors: Tali Neta

Abstract:

Humans' impact on the environment has been incredibly brutal, with enormous amounts of plastic and other pollutants (e.g., cigarette butts, paper cups, tires) worldwide. On land, litter costs taxpayers a fortune; and although most litter pollution originates on land, it is one of the greatest hazards to marine environments. Due to spatial and temporal limitations, previous litter data covered very small areas. Today, smartphones can be used to obtain information on various pollutants through citizen science, and they can greatly assist in acknowledging and mitigating the environmental impact of litter. Some litter app data, such as Litterati's, are available only through a global map: the data cannot be downloaded, and it is unclear whether irrelevant hashtags have been eliminated. Instagram and Twitter open-source geospatial data are available for download but are considered inaccurate, computationally challenging, and impossible to quantify, so the resulting data are of poor quality. Other downloadable geospatial data (e.g., Marine Debris Tracker and Clean Swell) focus on marine rather than terrestrial litter. Accurate terrestrial geospatial documentation of litter distribution is therefore needed to improve environmental awareness. The current research employed citizen science to examine litter distribution in Lethbridge, Alberta, Canada, using the OpenLitterMap (OLM) app. The OLM app tracks litter worldwide and can mark litter locations through photo georeferencing, which can be presented through GIS-designed maps. The OLM app provides open-source data that can be downloaded, and it offers information on various litter types and the "hot spots" where litter accumulates. In this study, Lethbridge College students collected litter data with the OLM app. The students produced GIS Story Maps (interactive web GIS illustrations) and presented them to school children to raise awareness of litter's impact on environmental health. Preliminary results indicate that towards the east edges of the Lethbridge coulees (valleys), the amount of litter increased significantly due to the presence of shrubs, which acted as litter catches. As the wind in Lethbridge generally travels from west to east, litter from West Lethbridge often finds its way down into the east part of the coulees. The students documented various litter types, the majority (75%) of which was plastic and paper food packaging; they also found metal wires, broken glass, plastic bottles, golf balls, and tires. Presentations of the Story Maps to school children had a significant impact: the children voluntarily collected litter during school recess and looked into solutions to reduce litter. Further documentation of litter distribution through citizen science is needed to improve public awareness. Future research will focus on drone imagery of highly concentrated litter areas. Finally, a time-series analysis of litter distribution will help determine whether public education through citizen science and Story Maps can assist in reducing litter and reaching a cleaner and healthier environment.

Keywords: citizen science, litter pollution, Open Litter Map, GIS Story Map

Procedia PDF Downloads 79
25673 Photovoltaic System: An Alternative to Energy Efficiency in a Residence

Authors: Arsenio Jose Mindu

Abstract:

The concern to carry out a study related to energy efficiency arose from various debates in international television networks as well as several national debate forums. The concept of energy efficiency is not yet widely disseminated or taken into account in energy consumption in Mozambique, whether at the domestic or the industrial level. In the context of the energy audit, the time during which each appliance is connected to the voltage source and the time during which it is in standby mode were recorded in a spreadsheet, and daily and monthly consumption was calculated from these data. To obtain more accurate information on daily consumption levels, the electricity consumption was read every hour of the day (from 5:00 am to 11:00 pm, since after 11:00 pm the energy consumption remains constant) over a period of ten days. Based on the daily energy consumption and the maximum power demand, the photovoltaic system for the residence was designed. With the implementation of the photovoltaic system, there was a significant reduction in the use of electricity from the public grid, falling from approximately 17 kWh per day to around 11 kWh, thus achieving an energy efficiency of 67.4%. That is to say, there was a reduction not only in the amount of energy consumed but also in the monthly electricity bill, which dropped from around 2,500 Mt (2,500 meticais) to around 800 Mt per month.
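The spreadsheet-style audit step (daily and monthly energy from rated power, hours of use, and standby time) can be sketched as follows; the appliance list and ratings are illustrative, not the residence's actual inventory:

```python
appliances = [
    # (name, power_W, hours_on_per_day, standby_W, standby_hours)
    ("refrigerator", 150, 8.0, 0, 0.0),
    ("television",   100, 5.0, 3, 19.0),
    ("lighting",      60, 6.0, 0, 0.0),
]

def daily_kwh(items):
    """Daily energy in kWh, counting both active and standby draw."""
    return sum((p * h + sp * sh) / 1000.0 for _, p, h, sp, sh in items)

day = daily_kwh(appliances)
month = day * 30
print(round(day, 3), round(month, 2))  # kWh/day, kWh/month
```

The daily figure, together with the peak simultaneous load, is what sizes the photovoltaic array and storage.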

Keywords: energy efficiency, photovoltaic system, residential sector, Mozambique

Procedia PDF Downloads 206
25672 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser

Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua

Abstract:

In the present scenario of energy crises, energy conservation in electrical machines is very important for industry. To conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ, yet the instruments available for this purpose are scarce and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters (input voltage, line current, power factor, frequency, and powers) together with the motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system computes the motor losses with the help of equivalent-circuit parameters: the equivalent-circuit parameters of the motor under test are estimated at any load condition using the developed algorithm and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective. This portable instrument can be used as a handy tool to study the performance of both slip-ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and one can perform various analyses such as load versus efficiency or torque versus speed characteristics. With the help of the developed instrument, one can operate the motor around its best operating point (BOP). Continuous monitoring of motor efficiency can lead to a life cycle assessment (LCA) of motors, which helps in deciding whether to replace, retain, or refurbish a motor.
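The equivalent-circuit efficiency computation mentioned above can be sketched with a per-phase induction motor circuit (stator impedance, magnetizing branch, slip-dependent rotor branch). Every parameter value below is an illustrative assumption, not data from the developed instrument:

```python
def motor_efficiency(v_phase, slip, r1, x1, r2, x2, xm, p_fw):
    """Return (efficiency, input power in kW) for a 3-phase motor."""
    z_rotor = complex(r2 / slip, x2)              # rotor branch (referred)
    z_mag = complex(0.0, xm)                      # magnetizing branch
    z_par = z_mag * z_rotor / (z_mag + z_rotor)   # parallel combination
    i1 = v_phase / (complex(r1, x1) + z_par)      # stator phase current
    p_in = 3.0 * (v_phase * i1.conjugate()).real  # input power, W
    i2 = i1 * z_mag / (z_mag + z_rotor)           # rotor branch current
    p_airgap = 3.0 * abs(i2) ** 2 * r2 / slip     # air-gap power, W
    p_shaft = p_airgap * (1.0 - slip) - p_fw      # minus friction/windage
    return p_shaft / p_in, p_in / 1000.0

# Illustrative 230 V-per-phase machine at 3% slip.
eta, p_in_kw = motor_efficiency(230.0, 0.03, 0.5, 1.2, 0.4, 1.2, 35.0, 150.0)
print(round(eta, 3), round(p_in_kw, 2))
```

With the circuit parameters estimated from measurements, the same computation yields efficiency at any load point without a dynamometer.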

Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis

Procedia PDF Downloads 379
25671 Interpreting Privacy Harms from a Non-Economic Perspective

Authors: Christopher Muhawe, Masooda Bashir

Abstract:

With the increased use of information and communication technology (ICT), the virtual world has become the new normal. At the same time, there is an unprecedented collection of massive amounts of data by both private and public entities, and this increase in data collection has gone hand in hand with an increase in data misuse and data breaches. Regrettably, the majority of data breach and data misuse claims have been unsuccessful in United States courts for failure to prove a direct injury to physical or economic interests. The requirement to express data privacy harms in economic or physical terms negates the fact that not all data harms are physical or economic in nature. The challenge is compounded by the fact that data breach harms and risks do not attach immediately. This research uses a descriptive and normative approach to show that not all data harms can be expressed in economic or physical terms. Expressing privacy harms purely from an economic or physical perspective negates the fact that data insecurity may result in harms which run counter to the functions of privacy in our lives: the promotion of liberty, selfhood, autonomy, and human social relations, and the furtherance of a free society. No economic value can be placed on these functions of privacy. The proposed approach therefore addresses data harms from a psychological and social perspective.

Keywords: data breach and misuse, economic harms, privacy harms, psychological harms

Procedia PDF Downloads 195
25670 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios are: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance condition introduced to a healthy gear. Vibration data were acquired using a Hantek 1008 device and stored in a CSV file. Python code, run in the Spyder environment, was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method and then fed to several machine learning algorithms, including convolutional neural networks (CNN), multilayer perceptrons (MLP), k-nearest neighbors (KNN), and random forests, to evaluate their fault detection and classification performance on both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable 100% accuracy for both the two-class (healthy gear versus faulty gear) and three-class (healthy, faulty, and imbalanced) scenarios on the training and validation datasets. The other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy on both the two-class training and validation datasets; in the three-class scenario, it reached 100% on the training dataset and 95.3% on the validation dataset. The Wiener-KNN method yielded 96.3% accuracy on the two-class training dataset and 94.5% on the validation dataset; in the three-class scenario, it achieved 85.3% on the training dataset and 77.2% on the validation dataset. The Wiener-Random Forest method achieved 100% accuracy on the two-class training dataset and 85% on the validation dataset, while on the three-class training dataset it attained 100% accuracy, with 90.8% on the validation dataset. The exceptional accuracy of the Wiener-CNN method underscores its effectiveness in identifying and classifying fault conditions in rotating machinery. The proposed fault detection system combines vibration data analysis with advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
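The Wiener preprocessing idea can be sketched with a simple scalar Wiener filter (adaptive local-mean smoothing) applied to a vibration trace, followed by a summary feature. The synthetic signals below stand in for the gear data; they are not the study's measurements, and the study's full pipeline feeds such features to a CNN rather than comparing RMS values:

```python
import numpy as np

def wiener_1d(x, size=9, noise=None):
    """Scalar Wiener filter: y = m + max(v - n, 0)/max(v, n) * (x - m),
    with m, v the local mean and variance over a sliding window."""
    kernel = np.ones(size) / size
    m = np.convolve(x, kernel, mode="same")
    v = np.convolve(x ** 2, kernel, mode="same") - m ** 2
    n = np.mean(v) if noise is None else noise
    return m + np.where(v < n, 0.0, (v - n) / np.maximum(v, n)) * (x - m)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
healthy = np.sin(2 * np.pi * 25 * t) + 0.3 * rng.standard_normal(t.size)
faulty = healthy + 1.5 * np.sin(2 * np.pi * 180 * t) ** 9  # impact bursts

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
f_h, f_f = rms(wiener_1d(healthy)), rms(wiener_1d(faulty))
print(f_f > f_h)  # fault energy survives the smoothing
```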

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 79
25669 Dual Electrochemical Immunosensor for IL-13Rα2 and E-Cadherin Determination in Cell, Serum and Tissues from Cancer Patients

Authors: Amira ben Hassine, A. Valverde, V. Serafín, C. Muñoz-San Martín, M. Garranzo-Asensio, M. Gamella, R. Barderas, M. Pedrero, N. Raouafi, S. Campuzano, P. Yáñez-Sedeño, J. M. Pingarrón

Abstract:

This work describes the development of a dual electrochemical immunosensing platform for the accurate determination of two target proteins, IL-13 receptor α2 (IL-13Rα2) and E-cadherin (E-cad). The proposed methodology is based on sandwich immunosensing approaches (involving horseradish peroxidase-labeled detector antibodies) implemented on magnetic microbeads (MBs) with amperometric transduction at screen-printed dual carbon electrodes (SPdCEs). The magnetic bioconjugates were captured onto the SPdCEs, and the amperometric transduction was performed using the H2O2/hydroquinone (HQ) system. Under optimal experimental conditions, the developed bioplatform demonstrates linear concentration ranges of 1.0-25 and 5.0-100 ng mL-1 and detection limits of 0.28 and 1.04 ng mL-1 for E-cad and IL-13Rα2, respectively, with excellent selectivity against other non-target proteins. The immunoplatform also offers good reproducibility among the amperometric responses provided by nine different sensors constructed in the same manner (relative standard deviations of 3.1% for E-cad and 4.3% for IL-13Rα2). Moreover, the obtained results confirm the practical applicability of the bioplatform for accurate determination of the endogenous levels of both extracellular receptors in colon cancer cells (both intact and lysed) with different metastatic potential, as well as in serum and tissues from patients diagnosed with colorectal cancer at different grades. Interesting features in terms of simplicity, speed, portability, and the small sample amount required to provide quantitative results make this immunoplatform more compatible with clinical diagnosis and prognosis at the point of care than conventional methodologies.

Keywords: electrochemistry, immunosensors, biosensors, E-cadherin, IL-13 receptor α2, colorectal cancer

Procedia PDF Downloads 136
25668 Study on The Pile Height Loss of Tunisian Handmade Carpets Under Dynamic Loading

Authors: Fatma Abidi, Taoufik Harizi, Slah Msahli, Faouzi Sakli

Abstract:

Nine different Tunisian handmade carpets were used for the investigation; the raw material of the carpet pile yarns was wool. The influence of the structural parameters (pile-yarn linear density and pile height) on carpet compression was investigated. The carpets were tested under dynamic loading in order to evaluate the thickness loss and observe the carpet behavior under dynamic loads. To determine the loss of pile height under dynamic loading, the carpets' pile height was measured. The test method followed the Tunisian standard NT 12.165 (which corresponds to ISO 2094). Pile height measurements were taken and recorded at intervals up to 1000 impacts (in this study, after 50, 100, 200, 500, and 1000 impacts). The loss of pile height was calculated from the difference between the initial height and the height measured after the reported number of impacts. The experimental results were statistically evaluated using Design-Expert analysis of variance (ANOVA) software. As regards deformation, the results showed that both the structural parameters of the pile yarn and the pile height have an influence: the carpet with the higher pile and the lower pile-yarn linear density showed the worst performance. Results of a polynomial regression analysis are highlighted: there is a good correlation between the loss of pile height and the number of impacts, and the resulting equations are in good agreement with the measured data. Because the prediction is reasonably accurate for all samples, these equations can also be used to calculate the theoretical loss of pile height for the considered carpet samples. Statistical evaluation of the experimental data showed that the pile material and the number of impacts have a significant effect on the mean thickness and the thickness loss.
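The polynomial-regression step (pile-height loss against number of impacts) can be sketched as below; the loss values are illustrative stand-ins for the trend described, not the study's measured data:

```python
import numpy as np

# Impact counts at which pile height was recorded, and hypothetical
# pile-height loss (%) showing the flattening trend typical of such tests.
impacts = np.array([50, 100, 200, 500, 1000], dtype=float)
loss_pct = np.array([4.0, 6.5, 9.8, 15.2, 19.6])

# Second-order polynomial in log10(impacts) captures the flattening.
coeffs = np.polyfit(np.log10(impacts), loss_pct, 2)
predict = lambda n: float(np.polyval(coeffs, np.log10(n)))

resid = loss_pct - np.polyval(coeffs, np.log10(impacts))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((loss_pct - loss_pct.mean()) ** 2)
print(round(predict(750), 2), round(float(r2), 3))
```

A fit of this form, one per carpet sample, is what allows the theoretical loss to be computed at impact counts that were not measured.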

Keywords: Tunisian handmade carpet, loss of pile height, dynamic loads, performance

Procedia PDF Downloads 320
25667 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound

Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo

Abstract:

Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The echocardiographic diagnosis of MR is straightforward thanks to well-known clinical evidence; in determining MR severity, however, quantification of the sonographic findings is useful for clinical decision-making. Clinically, the vena contracta is a standard for MR evaluation: it is the point in a blood stream where the diameter of the stream is smallest and the velocity is maximal. Its quantification, the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough, as the result depends heavily on operator experience. This study therefore proposes an automatic method to quantify the VCW and evaluate MR severity. In color Doppler ultrasound, the VCW can be observed where blood flows toward the probe as a red or yellow area whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, value) space to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the shortest vertical diameter as the VCW. Taking the manual measurement as the standard, the method differed by only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results show that the proposed automatic VCW extraction can be efficient and accurate for clinical use, and the process has the potential to reduce intra- and inter-observer variability in measuring subtle distances.
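The ellipse-fitting step can be sketched without any imaging library: threshold the high-flow region in a frame's value channel, fit an ellipse from the covariance of the pixel coordinates, and take the minor axis as the width. The synthetic "jet" below stands in for real color Doppler data, and the covariance-based fit is a simplification of the paper's method:

```python
import numpy as np

def vcw_pixels(value_channel, threshold=0.7):
    """Minor-axis width (pixels) of an ellipse fitted, via the pixel
    coordinates' covariance, to the above-threshold region."""
    ys, xs = np.nonzero(value_channel > threshold)
    pts = np.column_stack([xs, ys]).astype(float)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(pts, rowvar=False)))
    # Uniform ellipse: variance along an axis is (semi-axis)^2 / 4,
    # so the full minor-axis width is 4 * sqrt(smallest eigenvalue).
    return 4.0 * float(np.sqrt(eigvals[0]))

# Synthetic elongated jet: bright elliptical blob, semi-axes 30 x 8 px.
y, x = np.mgrid[0:200, 0:200]
blob = (((x - 100) / 30.0) ** 2 + ((y - 100) / 8.0) ** 2) < 1.0
width_px = vcw_pixels(blob.astype(float))
print(round(width_px, 1))  # close to the true minor width of 16 px
```

Converting the pixel width to centimetres then only requires the frame's spatial calibration.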

Keywords: mitral regurgitation, vena contracta, color doppler, image processing

Procedia PDF Downloads 369
25666 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area

Authors: Fameli Kyriaki-Maria, Assimakopoulos D. Vasiliki, Kotroni Vassiliki

Abstract:

The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to a shift of population to areas previously characterized as rural, the growth of the traffic fleet and the operation of highways. However, no recent modelling studies have been performed due to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory based on official data was used. Comparison of modelled results with measurements demonstrated the importance and accuracy of the new Athens emission inventory relative to previous modelling studies. The model reproduced the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found in suburban and rural areas as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to ozone levels, with local sources (traffic, port activities) affecting its formation.

Keywords: photochemical modelling, urban pollution, greater Athens area, MM5/CAMx

Procedia PDF Downloads 281
25665 The Purification of Waste Printing Developer with the Fixed Bed Adsorption Column

Authors: Kiurski S. Jelena, Ranogajec G. Jonjaua, Kecić S. Vesna, Oros B. Ivana

Abstract:

The present study investigates the effectiveness of newly designed clayey pellets (fired clay pellets with diameters of 5 and 8 mm, and unfired clay pellets with a diameter of 15 mm) as beds in a column adsorption process. Batch-mode adsorption experiments were performed before the column experiment in order to determine the order in which the adsorbents should be packed in the column. The column experiment was performed using a known mass of the clayey beds and a known volume of the waste printing developer to be purified. The column was filled in the following order: fired clay pellets with a diameter of 5 mm, fired clay pellets with a diameter of 8 mm, and unfired clay pellets with a diameter of 15 mm. The selected order of adsorbents showed high removal efficiencies for zinc (97.8%) and copper (81.5%) ions. These efficiencies were better than those obtained with the existing adsorption mode. The obtained experimental data present a good basis for selecting an appropriate column fill, but further testing is necessary in order to obtain more accurate results.
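The reported removal efficiencies follow the standard formula 100 * (C_in - C_out) / C_in; the inlet and outlet concentrations below are hypothetical values chosen only to reproduce the reported percentages:

```python
def removal_efficiency(c_in, c_out):
    """Percent of metal ions removed by the adsorption column."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical inlet/outlet concentrations (mg/L) matching the reported values.
print(round(removal_efficiency(10.0, 0.22), 1))  # 97.8 (zinc)
print(round(removal_efficiency(10.0, 1.85), 1))  # 81.5 (copper)
```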

Keywords: clay materials, fixed bed adsorption column, metal ions, printing developer

Procedia PDF Downloads 322
25664 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows describing the performance of two batches of students was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with a Gaussian copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models with the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting model, whereas the CTGAN with Gaussian copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy on the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, whereas the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
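A minimal sketch of the per-class ROC-AUC evaluation described above, using scikit-learn's one-vs-rest averaging; the features, labels and classifier here are synthetic stand-ins, not the paper's CTGAN/PyCaret setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for the synthetic training data: 3 features, 4 classes.
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int) + 2 * (X[:, 1] > 0)

clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)

# Macro-averaged one-vs-rest AUC over all classes, as in the grade analysis.
auc = roc_auc_score(y, proba, multi_class="ovr", average="macro")
print(round(auc, 2))
```

Per-class AUC scores (one curve per grade) can be obtained by binarizing each class against the rest and calling `roc_auc_score` separately.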

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN

Procedia PDF Downloads 43
25663 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we investigate whether nicotine concentration can be accurately predicted using a network of smart objects and an e-cigarette. The approach consists of, first, recognizing factors that influence smoking cessation, such as physical activity and participant behavior (using both a smartphone and a smartwatch), and then predicting the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of common connected objects: a smartwatch, a smartphone, and an e-cigarette carried by the participants during an uncontrolled experiment. The data obtained from the sensors in the three devices were used to train a long short-term memory (LSTM) network. Results show that our LSTM-based model predicts the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help better control nicotine consumption and offer an intelligent e-cigarette configuration to users.
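The reported error metric can be computed as the root mean square error expressed as a percentage of the mean observed value; the observed and predicted nicotine concentrations below are hypothetical:

```python
import numpy as np

def rmse_percent(y_true, y_pred):
    """Root-mean-square error as a percentage of the mean observed value."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)

# Hypothetical nicotine concentrations (mg/ml): observed vs. LSTM-predicted.
obs = [6.0, 12.0, 18.0, 6.0, 12.0]
pred = [6.8, 11.1, 19.5, 5.4, 13.2]
print(round(rmse_percent(obs, pred), 1))  # 9.7
```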

Keywords: IoT, activity recognition, automatic classification, unconstrained environment

Procedia PDF Downloads 223
25662 A Mixture Vine Copula Structures Model for Dependence Wind Speed among Wind Farms and Its Application in Reactive Power Optimization

Authors: Yibin Qiu, Yubo Ouyang, Shihan Li, Guorui Zhang, Qi Li, Weirong Chen

Abstract:

This paper explores the impact of high-dimensional dependencies of wind speed among wind farms on probabilistic optimal power flow. To perform reactive power optimization faster and more accurately, a mixture vine copula structure model combining K-means clustering, C-vine copulas and D-vine copulas is proposed, through which a more accurate correlation model can be obtained. Moreover, a modified backtracking search algorithm (MBSA) and the three-point estimate method are applied to probabilistic optimal power flow. The validity of the mixture vine copula structure model and the MBSA is tested on the IEEE 30-node system with measured data from three adjacent wind farms in a certain area, and the results indicate the effectiveness of these methods.
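Copula fitting rests on the probability integral transform listed in the keywords: each wind-speed margin is mapped to a uniform variable before the dependence structure is modeled. Below is a minimal sketch with assumed Weibull margins and a Gaussian dependence, not the paper's vine structure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical correlated wind speeds at two adjacent farms:
# correlated normals pushed through assumed Weibull(c=2, scale=8) margins.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
u = rng.normal(size=(1000, 2)) @ np.linalg.cholesky(cov).T
speeds = stats.weibull_min.ppf(stats.norm.cdf(u), c=2.0, scale=8.0)

# Probability integral transform: map each margin back to uniform [0, 1]
# before fitting a (vine) copula to the remaining dependence structure.
pit = stats.weibull_min.cdf(speeds, c=2.0, scale=8.0)
rho = np.corrcoef(pit, rowvar=False)[0, 1]
print(round(rho, 2))  # strong positive dependence survives the transform
```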

Keywords: mixture vine copula structure model, three-point estimate method, the probability integral transform, modified backtracking search algorithm, reactive power optimization

Procedia PDF Downloads 247
25661 Community Resilience in Response to the Population Growth in Al-Thahabiah Neighborhood

Authors: Layla Mujahed

Abstract:

Amman, the capital of Jordan, is the main political, economic, social and cultural center of Jordan and beyond. The city faces a multitude of demographic challenges related to the unstable political situation in the surrounding countries. It hosts regional and local migrants who left their homes to find a better life in the capital, which has resulted in a random and uneven population distribution: some districts have larger populations and more pressure on infrastructure and services than others. The government works to resolve this challenge in compliance with the 100 Cities Resilience Framework (CRF). Amman joined this framework as a member in December 2014 to work toward its four goals: health and welfare, infrastructure and utilities, economy and education, as well as administration and government. Previous research has not studied Amman's resilience work at the neighborhood scale or population growth as a resilience challenge. This study therefore focuses on the Al-Thahabiah neighborhood in the Shafa Badran district of Amman. This paper studies the reasons and drivers behind the population growth in this area during the selected period and then provides strategies to improve resilience work at the neighborhood scale. The methodology comprises primary and secondary data. The primary data consist of interviews with a chief officer in the executive branch of the Greater Amman Municipality and with the resilience officer. The secondary data consist of papers, journals, newspapers, articles and books. A further part of the data consists of maps and statistics describing the infrastructural and social situation at the neighborhood and district levels during the study period. Based on these data, more detailed information can be derived, e.g., where population is concentrated and what infrastructure is provided for it. This will help provide services and infrastructure to other neighborhoods and improve population distribution.
This study develops an analytical framework to assess urban demographic time series in accordance with the criteria of the CRF, in order to make accurate, detailed projections of the requirements for future development at the neighborhood scale and to organize the human requirements for affordable quality housing, employment, transportation, health and education in this neighborhood, improving the social relations between its inhabitants and the community. The study highlights the localization of resilience work at the neighborhood scale and spreads resilience knowledge, which is under-researched in Jordan. Studying resilience work from the perspective of the population growth challenge helps improve the facilities provided to the inhabitants and their quality of life.

Keywords: city resilience framework, demography, population growth, stakeholders, urban resilience

Procedia PDF Downloads 176
25660 Application of Agile Project Management to Construction Projects: Case Study

Authors: Ran Etgar, Sarit Freund

Abstract:

Agile project management (APM) was originally developed for software development projects, and construction projects have seemed more suited to the traditional waterfall approach than to APM. However, construction projects suffer from problems similar to those that necessitated the invention of APM, mainly the need to break the project structure down into small increments, thus minimizing the required managerial planning and design. Since the classical structure of APM cannot be applied as-is to construction projects, a modified version of APM was devised. This method, nicknamed 'the anchor method', exploits the fundamentals of APM (i.e., iterations, or sprints of short time frames or timeboxes, cross-functional teams, risk reduction and adaptation to change) and adjusts them to the construction world. The projects had to be structured appropriately to adapt to change proactively and quickly. The method aims to encompass human behavior and leans toward adaptivity rather than predictability. To enable smooth application of the method, dedicated project management software was developed to provide solid administrative support and accurate data. The method was tested on a set of construction projects, and key performance indicators (KPIs) were collected. According to preliminary results, the method is very advantageous and, with proper assimilation, can radically change the construction project management paradigm.

Keywords: agile project management, construction, information systems, project management

Procedia PDF Downloads 128
25659 Characteristics of Pore Pressure and Effective Stress Changes in Sandstone Reservoir Due to Hydrocarbon Production

Authors: Kurniawan Adha, Wan Ismail Wan Yusoff, Luluan Almanna Lubis

Abstract:

Accurate pore pressure data make an important contribution to preventing hazardous events during oil and gas operations; their availability also helps reduce operating costs. Existing pore pressure estimation methods are mostly complicated by the many assumptions and hypotheses they employ, while basic properties that may have a significant impact on the estimation model are often neglected. To date, most pore pressure determinations are based on data model analysis and rarely include laboratory analysis, stratigraphic study or core check measurements. This study developed a model that can be applied to investigate the changes in pore pressure and effective stress due to hydrocarbon production. In general, this paper focuses on the effect of the velocity model on pore pressure and effective stress changes due to hydrocarbon production, illustrated by changes in saturation. Core samples from the Miri field in Sarawak, Malaysia, were used in this study, where the formation consists of a sandstone reservoir. The study area is divided into sixteen (16) layers and encompasses six facies (A-F) from the outcrop used for the stratigraphic sequence model. The experimental work first involved data collection through a field study and the development of a stratigraphic sequence model based on the outcrop study. Porosity and permeability measurements were then performed after the samples were cut into 1.5-inch-diameter cores. Next, velocity was analyzed using SONIC OYO and AutoLab 500 equipment. Three (3) saturation scenarios were also tested to represent the production history of the samples. Results show the alteration of velocity for different saturations under different effective stress and pore pressure conditions. The water-saturated sample has the highest velocity, while the dry sample has the lowest value.
Compared with the oil-saturated samples, the water-saturated sample still shows the highest value, since water has a higher fluid density than oil. Furthermore, the water-saturated sample exhibits higher velocity-derived parameters, such as Poisson's ratio and the P-wave to S-wave velocity ratio (Vp/Vs). The results show that pore pressure values were reduced by the decrease in fluid content. The resulting decrease in pore pressure tends to stiffen the elastic mineral frame, which favors higher velocities. The alteration of pore pressure through changes in fluid content or saturation thus results in velocity changes that trend proportionally with the effective stress.
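The velocity-derived parameters mentioned above follow standard elastic relations: the ratio Vp/Vs and the dynamic Poisson's ratio nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)). A small sketch with hypothetical velocities for a water-saturated sandstone sample:

```python
def vp_vs_ratio(vp, vs):
    """P-wave to S-wave velocity ratio."""
    return vp / vs

def poissons_ratio(vp, vs):
    """Dynamic Poisson's ratio from P- and S-wave velocities."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Hypothetical velocities (m/s) for a water-saturated sandstone sample.
vp, vs = 3000.0, 1700.0
print(round(vp_vs_ratio(vp, vs), 2))   # 1.76
print(round(poissons_ratio(vp, vs), 2))  # 0.26
```

Higher water saturation raises Vp more than Vs, so both Vp/Vs and Poisson's ratio increase, which is the qualitative trend described in the abstract.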

Keywords: pore pressure, effective stress, production, miri formation

Procedia PDF Downloads 288
25658 A Robust Optimization Model for Multi-Objective Closed-Loop Supply Chain

Authors: Mohammad Y. Badiee, Saeed Golestani, Mir Saman Pishvaee

Abstract:

In recent years, consumers and governments have increasingly pushed companies to design their activities to reduce negative environmental impacts, for example by producing renewable products or adopting threat-free disposal policies. It is therefore important to focus more accurately on optimizing the various aspects of the total supply chain. Modeling a supply chain can be challenging because a large number of factors need to be considered. Multi-objective optimization can help overcome this problem, since more information is used when designing the model. Uncertainty is inevitable in the real world. Considering uncertainty in the parameters, in addition to using multiple objectives, gives more flexibility to the decision-making process, since the process can then take into account many more constraints and requirements. In this paper, we present a stochastic, scenario-based robust model to cope with uncertainty in a closed-loop multi-objective supply chain. By applying the proposed model to a real-world case, its power in handling data uncertainty is demonstrated.
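A scenario-based robust formulation can be sketched as a min-max linear program: introduce an auxiliary variable t and minimize the worst-case cost over all scenarios. The two-variable, two-scenario instance below is a toy illustration under assumed cost vectors, not the paper's closed-loop model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-product plan: minimize the worst-case cost over two
# demand/cost scenarios (min t s.t. c_s . x <= t for each scenario s).
costs = np.array([[4.0, 3.0],   # scenario 1 cost vector
                  [2.0, 6.0]])  # scenario 2 cost vector

# Decision variables: x1, x2, t ; objective: minimize t.
c = [0.0, 0.0, 1.0]
A_ub = [[costs[0, 0], costs[0, 1], -1.0],   # scenario 1 cost <= t
        [costs[1, 0], costs[1, 1], -1.0],   # scenario 2 cost <= t
        [-1.0, -1.0, 0.0]]                  # x1 + x2 >= 1 (demand)
b_ub = [0.0, 0.0, -1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.success, round(res.fun, 2))  # True 3.6
```

The optimum balances both scenario costs (x1 = 0.6, x2 = 0.4), which is the characteristic behavior of a worst-case robust solution.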

Keywords: supply chain management, closed-loop supply chain, multi-objective optimization, goal programming, uncertainty, robust optimization

Procedia PDF Downloads 413
25657 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries, which inherently have greater data resources, tend to have higher incomes than smaller countries, so that the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 66
25656 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data

Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju

Abstract:

Nowadays, multimedia data are used to store secure information. All previous methods allocate space in the image for data embedding after encryption. In this paper, we propose a novel method that reserves room in the image, within a surrounding boundary, before encryption with a traditional RDH (reversible data hiding) algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted images. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed, which improves efficiency tenfold compared to the other processes discussed.
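The least-significant-bit embedding named in the keywords can be sketched as replacing each pixel's lowest bit with a message bit, which is reversible when the original LSBs are preserved elsewhere; the cover pixels and message below are hypothetical:

```python
def embed_lsb(pixels, bits):
    """Embed a bit string into the least significant bits of pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return "".join(str(p & 1) for p in pixels[:n])

cover = [120, 121, 122, 123, 124, 125, 126, 127]
msg = "10110010"
stego = embed_lsb(cover, msg)
print(extract_lsb(stego, len(msg)) == msg)  # True
```

Each pixel changes by at most 1 gray level, so the embedding is visually imperceptible while the payload is recovered exactly.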

Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding

Procedia PDF Downloads 410
25655 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
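One common building block for extracting planar elements such as walls from a point cloud is RANSAC plane fitting; the sketch below (a synthetic noisy wall plus clutter, with assumed tolerances) illustrates the idea and is not the authors' algorithm:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, seed=0):
    """Fit a plane n.x = d to a point cloud by random sampling (RANSAC)."""
    rng = np.random.default_rng(seed)
    best_inliers, best = 0, None
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = n @ p[0]
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best_inliers:
            best_inliers, best = int(inliers.sum()), (n, d)
    return best, best_inliers

# Hypothetical scan: a noisy vertical wall at x ~ 1 plus random clutter.
rng = np.random.default_rng(1)
wall = np.column_stack([1.0 + rng.normal(0, 0.005, 300),
                        rng.uniform(0, 4, 300),
                        rng.uniform(0, 3, 300)])
clutter = rng.uniform(0, 4, (60, 3))
(normal, d), count = ransac_plane(np.vstack([wall, clutter]))
print(count, round(abs(normal[0]), 2))  # wall recovered: normal ~ x-axis
```

A full pipeline would repeat this (removing inliers each time) to peel off walls, floors and ceilings, then classify openings such as windows and doors within each plane.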

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 61
25654 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

The fast increase and complex composition of waste electrical and electronic equipment (e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. However, obtaining such information is very challenging, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered electrical and electronic equipment, which 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered, invisible and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system, so the e-waste generated from it is often uncounted in current estimates based on statistical market data. This study therefore focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced input-output analysis model (the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of the input data for modeling (i.e., it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan to take changes over time into account), through which the quality of the e-waste estimate can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam was carried out. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue; in 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, the estimate of the non-registered TV inflow shows that it might on average have contributed about 15% of the total TVs sold on the Vietnamese market over the period 2002 to 2013. To address potential uncertainties associated with the estimation models and input data, a sensitivity analysis was applied. The results show that both the waste estimate and the non-registered inflow estimate depend on two parameters: the number of TVs in use per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow over 2002-2013 increases by 0.95%; however, it decreases from 27% to 15% when the constant, unadjusted lifespan is replaced by the dynamic, adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated in each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can be further improved in the future with more knowledge and data.
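The Sale-Stock-Lifespan idea can be sketched as convolving past sales with a lifespan distribution: the waste generated in a year is the sum, over past sale years, of units sold times the probability of discard at that age. The Weibull lifespan parameters and sales figures below are hypothetical, not the Vietnamese TV data:

```python
import math

def waste_generated(sales, year, shape=2.0, scale=9.0):
    """E-waste arising in `year` from past sales with a Weibull lifespan."""
    def discard_prob(age):
        # probability a unit sold `age` years ago is discarded this year
        cdf = lambda t: 1 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0
        return cdf(age + 1) - cdf(age)
    return sum(units * discard_prob(year - y)
               for y, units in sales.items() if y <= year)

# Hypothetical TV sales (thousand units per sale year).
sales = {2000: 500, 2005: 900, 2010: 1400}
print(round(waste_generated(sales, 2013), 1))  # 198.6
```

A dynamic lifespan, as used in the study, would let `shape`/`scale` vary with the sale year instead of staying constant.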

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 245
25653 A Review on Intelligent Systems for Geoscience

Authors: R. Palson Kennedy, P. Kiran Sai

Abstract:

This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems: geoscience data are notoriously difficult to analyze, since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half of the article addresses the essential concepts and theoretical underpinnings of data science, while the second half covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.

Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience

Procedia PDF Downloads 133
25652 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving data analysis technologies play a large role in understanding how modern healthcare systems operate. Nowadays, one of the key tasks in urban healthcare is optimizing resource allocation, and the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between indicators of a medical institution's effectiveness and its resources. Hospital discharges by diagnosis, hospital days of in-patients and in-patient average length of stay were selected as the performance and demand indicators of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomographs, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions in the developed models. The data source for the research was the open database of the statistical service Eurostat. This source was chosen because it contains the complete, open information necessary for research in the field of public health, and because the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries over the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made through a cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables over the time period under consideration, in order to identify groups of similar countries and construct separate regression models for them.
The original time series were therefore used as the objects of clustering, with the k-medoids algorithm. Sampled objects were used as the centers of the resulting clusters, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: in one of the clusters, for example, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital, including medical personnel. The research reveals strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Data analysis currently has huge potential to significantly improve health services, and the medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
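The MAPE reported above is the mean absolute percentage error of the forecast against the observed values; a minimal sketch with hypothetical hospital-discharge figures:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error of a forecast."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical observed vs. forecast hospital discharges for one cluster.
obs = [1200.0, 1250.0, 1310.0, 1290.0]
fit = [1190.0, 1262.0, 1300.0, 1301.0]
print(round(mape(obs, fit), 2))  # 0.85
```

Values below about 1% indicate a highly reliable short-term forecast, which is the interpretation the study gives to its 0.82% cluster.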

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 144